Yes, qualitative and quantitative research can be combined, and doing so is a well-established approach known as mixed methods research. The National Institutes of Health defines it as the strategic integration of rigorous quantitative and qualitative methods to draw on the strengths of each. Far from being a niche technique, mixed methods has become increasingly common in health sciences, education, and social research over the past two decades.
Why Combining Methods Works
Quantitative research tells you what is happening: how many, how much, how often. Qualitative research tells you why and how it happens, capturing experiences, motivations, and context that numbers miss. Each approach has blind spots. Surveys can reveal that 40% of patients are dissatisfied with a service, but they can’t explain the specific frustrations driving that number. Interviews can surface rich personal stories, but they can’t tell you how widespread those experiences are.
Mixed methods research offsets these limitations by combining inductive thinking (building ideas from observations) with deductive thinking (testing ideas with data). The result is a more complete picture than either method produces alone. In health research, for example, one Australian study used a quantitative survey to measure general nursing care quality in public hospitals, then followed up with qualitative interviews to explore what factors influenced whether nurses adhered to care standards. A logistic regression quantified the associations between nursing practices and patient satisfaction, while a thematic analysis of the interviews explained the reasoning behind those patterns.
Three Core Designs for Combining Methods
There are three basic ways to structure a mixed methods study, and the right choice depends on what you need to learn and in what order.
Sequential Exploratory
Qualitative data comes first, followed by quantitative data. This design is useful when you’re entering unfamiliar territory. You might conduct interviews or focus groups to identify themes, then use those themes to build a survey instrument you can distribute to a larger population. The qualitative phase generates hypotheses; the quantitative phase tests them.
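The handoff from the qualitative phase to the quantitative one can be sketched in a few lines of Python. Everything here is invented for illustration: the themes, the item wording, and the rating scale are hypothetical stand-ins for what a real coding process would produce.

```python
# Hypothetical themes coded from exploratory interviews (invented for illustration).
interview_themes = [
    "waiting times",
    "discharge instructions",
    "staff communication",
]

def themes_to_survey_items(themes):
    """Draft one 5-point rating item per qualitative theme."""
    scale = "1 = very poor ... 5 = very good"
    return [
        {"item": f"Rate your experience with {theme}.", "scale": scale}
        for theme in themes
    ]

# The qualitative phase generated the themes; the draft survey
# operationalizes them for quantitative testing at scale.
survey_draft = themes_to_survey_items(interview_themes)
for entry in survey_draft:
    print(entry["item"])
```

In a real study each theme would typically yield several validated items rather than one, but the structure is the same: qualitative output becomes the quantitative instrument's input.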
Sequential Explanatory
Quantitative data comes first, followed by qualitative data. You start with a survey or experiment to identify patterns or surprising results, then use interviews or observations to explain those findings in depth. If your survey data shows an unexpected drop in program participation among a specific group, the qualitative follow-up helps you understand the barriers that group faces.
Convergent (Concurrent)
Both types of data are collected at the same time. The purpose is triangulation: using qualitative and quantitative data together to more accurately define relationships among variables of interest. You might distribute a survey and conduct interviews during the same period, then compare the two datasets to see where they agree, diverge, or add nuance to each other.
Beyond these three basic designs, more advanced frameworks exist for multistage projects, intervention studies, case studies, and participatory research where community members help shape the study.
How the Data Actually Gets Integrated
Combining methods only works if the two types of data genuinely interact. Simply reporting qualitative and quantitative findings in separate chapters of the same paper doesn’t count as integration. There are four recognized approaches to making data speak to each other.
Connecting means linking one dataset to the other through sampling: for instance, survey results might determine which participants you select for interviews. Building means results from one dataset directly shape the design of the next instrument, like using interview themes to write survey questions. Merging means bringing both datasets together for comparison, often through side-by-side analysis. Embedding means nesting one type of data within a larger framework dominated by the other, such as adding open-ended questions to a clinical trial.
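The connecting approach is easy to make concrete. The sketch below uses invented respondent IDs and satisfaction scores, and an arbitrary cutoff of 3; the point is only the mechanism, where the quantitative results select the qualitative sample.

```python
# Hypothetical survey responses: respondent ID and satisfaction score (1-5).
survey_responses = [
    {"id": "P01", "satisfaction": 2},
    {"id": "P02", "satisfaction": 5},
    {"id": "P03", "satisfaction": 1},
    {"id": "P04", "satisfaction": 4},
]

# Connecting: the quantitative results determine who is invited
# to the qualitative follow-up interviews.
CUTOFF = 3  # invite respondents who scored below this threshold
interview_pool = [r["id"] for r in survey_responses if r["satisfaction"] < CUTOFF]

print(interview_pool)  # ['P01', 'P03']
```

The same pattern supports other sampling logics, such as selecting extreme cases at both ends of a score distribution to maximize contrast in the interviews.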
One of the most practical tools for integration is the joint display, a table that places quantitative and qualitative data side by side, organized by shared constructs or themes. The most common format arranges columns by data type and source, with rows representing the topics being studied. This visual alignment makes it easy to see where the numbers and the narratives confirm each other, where they diverge, and where one type of data fills gaps left by the other. Researchers working on a pediatric oncology quality improvement study, for example, went through four iterations of their joint display before arriving at a structure that clearly separated two data sources (a caregiver survey and medical records) while showing how quantitative and qualitative variables from each source related to the same underlying questions.
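Structurally, a joint display is just a table keyed by shared constructs. A minimal sketch in plain Python, with all figures, quotes, and construct names invented for the example, might look like this:

```python
# Hypothetical joint display: rows are shared constructs, columns hold
# the quantitative result, an illustrative qualitative finding, and an
# assessment of how the two relate (confirms / diverges / expands).
joint_display = [
    {
        "construct": "Wait times",
        "quantitative": "62% rated waits 'too long'",
        "qualitative": '"I sat for two hours before anyone spoke to me."',
        "fit": "confirms",
    },
    {
        "construct": "Staff communication",
        "quantitative": "71% rated staff 'helpful'",
        "qualitative": '"Nurses were kind, but no one explained the plan."',
        "fit": "expands",
    },
]

def render(rows):
    """Print the joint display as an aligned plain-text table."""
    cols = ["construct", "quantitative", "qualitative", "fit"]
    widths = {c: max(len(c), *(len(r[c]) for r in rows)) for c in cols}
    header = " | ".join(c.ljust(widths[c]) for c in cols)
    print(header)
    print("-" * len(header))
    for r in rows:
        print(" | ".join(r[c].ljust(widths[c]) for c in cols))

render(joint_display)
```

The "fit" column is where the analytic work happens: it records, construct by construct, whether the numbers and the narratives confirm each other, diverge, or expand on one another.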
The Philosophical Basis
Quantitative and qualitative research have traditionally been rooted in different worldviews. Quantitative work tends toward positivism: the idea that there’s an objective reality you can measure. Qualitative work leans toward constructivism: the idea that meaning is shaped by context and perspective. These aren’t just methodological preferences. They represent genuinely different beliefs about what counts as knowledge.
Pragmatism is the philosophical position that most commonly justifies combining the two. Rather than choosing a side in the objectivity debate, pragmatism focuses on what works to answer the research question. If the question requires both measurement and meaning, you use both. This is sometimes called the “weak” philosophical foundation for mixed methods because it simply permits integration rather than requiring it. Stronger philosophical positions, like dialectical pluralism, go further and argue that mixing methods should be actively encouraged in social science research.
Challenges Worth Knowing About
Combining methods sounds straightforward in theory, but it creates real practical and institutional problems. The most commonly cited barrier is time. Mixed methods research requires designing, collecting, and analyzing two separate types of data, each with its own standards. Researchers consistently report that current academic timelines don’t accommodate this kind of work.
Budget is another significant hurdle. Long-term projects require funding that covers multiple modes of data collection, and funders need to be convinced the added expense is worth it. Finding or training researchers who are genuinely skilled in both quantitative and qualitative methods is also difficult. People who can bridge both traditions and tie the elements together are rare, regardless of whether their institution is oriented toward one approach or the other.
Epistemological clashes can derail projects from the inside. Collaborators trained in one tradition sometimes don’t take the other seriously, viewing their own method as the only rigorous one. The language and jargon of each tradition differ enough that making the parts truly mix, rather than just coexist, requires deliberate effort. Some researchers have described the dynamics bluntly: antagonism, conflicting perspectives, a lack of respect.
Publishing mixed methods work presents its own obstacle. Journal word count restrictions have been getting tighter, often capping articles at 7,000 to 8,000 words. Mixed methods papers need to explain multiple methods, describe multiple data sources, and engage with multiple literatures. Fitting all of that into a standard word limit while also reporting meaningful findings is a persistent struggle.
Reporting Standards for Mixed Methods
To ensure transparency and quality, a widely used set of guidelines called GRAMMS (Good Reporting of a Mixed Methods Study) outlines six criteria that a mixed methods paper should address:

1. Describe the justification for using a mixed methods approach.
2. Specify the design in terms of purpose, priority, and sequence of methods.
3. Detail each method's sampling, data collection, and analysis.
4. Explain where and how integration occurred.
5. Note any limitations one method created for the other.
6. Describe the insights gained specifically from mixing or integrating methods.

That last point is key: simply using two methods isn't enough. You need to show what you learned from their combination that you couldn't have learned from either one alone.

