Research methods fall into two broad camps: quantitative methods that produce numerical data you can measure and compare, and qualitative methods that capture experiences, opinions, and processes in narrative form. Within those two categories sit dozens of specific designs, from tightly controlled experiments to open-ended interviews, each suited to different kinds of questions. Understanding the landscape helps you choose the right approach for a project or evaluate the strength of evidence you encounter.
Quantitative vs. Qualitative Research
Quantitative research is deductive. It starts with a hypothesis and tests it using experiments or surveys, producing numerical results that can be generalized to larger populations. If you want to know whether a drug lowers blood pressure more than a placebo, or what percentage of voters support a policy, quantitative methods give you that answer with statistical confidence.
Qualitative research works in the opposite direction. It collects narrative data, usually through interviews, observations, or open-ended questions, and builds concepts from patterns in what people say and do. The goal is to understand how something happens or what an experience feels like from the participant’s perspective, not to produce a number. Qualitative findings are rich in detail but harder to generalize because they typically involve smaller, more targeted groups.
Both approaches have blind spots. Quantitative research can tell you that something happened but may miss why. Qualitative research captures the why but can’t tell you how common a finding is across a population. That tension is exactly why researchers often combine them.
Experimental Designs
Experiments are the gold standard for establishing cause and effect. The defining feature is that the researcher actively changes something (the intervention) and measures what happens, rather than simply watching events unfold.
A true experiment randomly assigns participants to at least two groups: one that receives the intervention and one that doesn’t. Random assignment is the key ingredient because it makes the groups comparable at the start, so any difference at the end can be attributed to the intervention rather than to pre-existing differences between people.
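Random assignment is simple enough to sketch in a few lines of code. This is an illustrative example, not part of any specific study protocol: shuffling the participant list makes every possible split equally likely, so neither group is systematically different at baseline.

```python
import random

def randomize(participants, seed=None):
    """Randomly assign participants to two comparable groups."""
    rng = random.Random(seed)
    shuffled = participants[:]      # copy so the input list is untouched
    rng.shuffle(shuffled)           # every ordering is equally likely
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]   # treatment, control

# Six participants split into two groups of three
treatment, control = randomize(["P1", "P2", "P3", "P4", "P5", "P6"], seed=42)
```

In practice researchers often use block or stratified randomization to keep group sizes balanced, but the principle is the same: chance, not the researcher, decides who gets the intervention.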
When random assignment isn’t feasible or ethical, researchers use a quasi-experimental design. These studies still compare groups or measure before-and-after changes, but without randomization. For example, a researcher might compare infection rates at a hospital that adopted a new cleaning protocol with rates at a similar hospital that didn’t. The tradeoff is that any observed difference could be driven by factors the researcher didn’t account for, like staffing levels or patient demographics, rather than the intervention itself.
The simplest version, sometimes called a pre-experimental design, measures a single group before and after an intervention with no comparison group at all. It’s quick and easy to run but offers the weakest evidence because there’s no way to rule out other explanations for the change.
Observational Studies
Observational studies don’t intervene. Researchers watch, measure, and record what naturally occurs. Three designs dominate this category.
- Cohort studies follow a group of people over time to see who develops a particular outcome. A classic example: tracking thousands of smokers and nonsmokers for decades to compare lung cancer rates. Because events are recorded in chronological order, cohort studies can help distinguish cause from effect, though not as definitively as experiments.
- Case-control studies work backward. Researchers start with people who already have a condition and compare them to people who don’t, looking for differences in past exposures or behaviors. This design is especially useful for studying rare diseases because you don’t have to wait years for enough cases to appear.
- Cross-sectional studies capture a snapshot of a population at a single point in time. They’re relatively quick and inexpensive, which makes them ideal for measuring how common something is (prevalence). The limitation is that a snapshot can’t tell you which came first, the exposure or the outcome, so cause and effect remain unclear.
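The core comparison in a case-control study is the odds ratio: how much more common the exposure was among cases than among controls. A minimal sketch, with made-up numbers for illustration:

```python
def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Odds ratio from a 2x2 case-control table:
    (odds of exposure among cases) / (odds of exposure among controls)."""
    return (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)

# Hypothetical data: 40 of 100 cases were exposed vs. 20 of 100 controls
or_est = odds_ratio(40, 60, 20, 80)   # (40 * 80) / (60 * 20) ≈ 2.67
```

An odds ratio above 1 suggests the exposure is associated with the condition; a value of 1 means no association. Real analyses also report a confidence interval around the estimate.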
Qualitative Research Approaches
Qualitative research isn’t one method. It encompasses several distinct frameworks, each designed to answer a different kind of question.
- Phenomenology focuses on understanding a specific experience. A researcher using this approach might interview cancer survivors to learn what the first year after diagnosis felt like, aiming to describe the shared essence of that experience.
- Grounded theory goes a step further: it collects data (usually through interviews or observations) and systematically builds a theoretical model to explain what’s happening. If you want to develop a new theory about how medical students learn empathy, grounded theory is a natural fit.
- Ethnography requires the researcher to spend extended time embedded in a community or group, observing routines, language, and culture from the inside. It’s the approach anthropologists are known for, but it’s widely used in education and healthcare research too.
The data collection tools for qualitative work include in-depth interviews, focus groups, oral histories, and direct observation. Surveys with open-ended questions can also serve qualitative purposes when the goal is to capture participants’ own words rather than tally responses.
Mixed Methods Research
Mixed methods research deliberately combines quantitative and qualitative data in a single study. There are three basic architectures.
In an exploratory sequential design, the researcher starts with qualitative data (interviews or observations) and uses those findings to shape a subsequent quantitative phase. You might interview patients about barriers to medication adherence, then build a survey from the themes that emerge and distribute it to a much larger group. An explanatory sequential design reverses the order: quantitative data come first, and qualitative data help explain surprising or unclear results. If a survey reveals that satisfaction dropped at certain clinics, follow-up interviews at those clinics could reveal why.
A convergent design (also called concurrent) collects both types of data at roughly the same time, then merges the results for comparison. This is useful when you want numerical trends and personal narratives to speak to the same question simultaneously.
Systematic Reviews and Meta-Analyses
These are synthesis methods. Rather than collecting new data, they pull together findings from studies that have already been published.
A systematic review uses a clearly defined, reproducible search strategy to find every available study on a specific question, then reviews and analyzes the results. A meta-analysis adds a statistical layer: it pools numerical results from multiple similar studies to calculate a combined estimate of effect. Not every systematic review includes a meta-analysis, because sometimes the studies are too different in design or measurement to combine mathematically.
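The most common pooling approach is inverse-variance weighting: each study's effect estimate is weighted by one over its variance, so larger, more precise studies count more. A minimal fixed-effect sketch with hypothetical numbers (real meta-analyses usually also assess heterogeneity and may use a random-effects model instead):

```python
def pooled_effect(effects, std_errors):
    """Fixed-effect inverse-variance pooling: weight each study's
    effect estimate by 1 / variance, then average."""
    weights = [1 / se**2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5   # precision accumulates across studies
    return pooled, pooled_se

# Three hypothetical trials: effect estimates with their standard errors
est, se = pooled_effect([0.30, 0.10, 0.25], [0.10, 0.05, 0.15])
```

Note that the pooled standard error is smaller than any single study's, which is the statistical payoff of combining evidence.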
In medicine and the health sciences, systematic reviews and meta-analyses sit at the top of the evidence hierarchy, above individual experiments, because they aggregate evidence across many studies and minimize the bias that any single study might carry. Below them, in descending order of reliability: randomized controlled trials, cohort and case-control studies, case series and case reports, and expert opinion.
Clinical Trial Phases
Clinical trials are a specialized form of experimental research used to test new medical treatments. They follow a phased structure, each phase answering a different question.
Phase I trials are small, typically 15 to 50 participants, and focus on safety. Researchers start with low doses and gradually increase them, watching for side effects and figuring out the best way to deliver the treatment. Phase II trials enroll a somewhat larger group, typically fewer than 100 patients, and test whether the treatment actually works against a specific disease, while continuing to monitor safety. Phase III trials scale up dramatically, sometimes enrolling hundreds to thousands of participants across multiple locations, to compare the new treatment against the current standard. Results from Phase III are what regulatory agencies review before approving a treatment for public use. Phase IV trials happen after approval and track long-term side effects in the broader population.
Sampling Methods
How researchers select participants shapes the quality of their findings. Sampling strategies fall into two categories: probability sampling, where every member of the population has a known chance of being selected, and non-probability sampling, where they don’t.
Simple random sampling gives everyone an equal shot at selection, like drawing names from a hat. Stratified random sampling divides the population into subgroups first (by age, gender, income, or another factor), then randomly samples within each subgroup. This is particularly useful when you need adequate representation from minority or underrepresented groups that simple random sampling might miss. Cluster sampling is practical when the population is too large and spread out to list every individual. Researchers randomly select geographic clusters, like schools or hospitals, and then randomly sample people within those clusters.
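Stratified sampling is easy to express in code. This is an illustrative sketch (the function name and data are invented): group the population by a characteristic, then draw the same fraction at random from each subgroup, guaranteeing every stratum appears in the sample.

```python
import random

def stratified_sample(population, strata_key, frac, seed=None):
    """Stratified random sampling: partition the population by a
    characteristic, then sample the same fraction from each stratum."""
    rng = random.Random(seed)
    strata = {}
    for person in population:
        strata.setdefault(strata_key(person), []).append(person)
    sample = []
    for members in strata.values():
        k = max(1, round(frac * len(members)))   # at least one per stratum
        sample.extend(rng.sample(members, k))
    return sample

# 80 adults and 20 seniors; a 10% stratified sample keeps both groups represented
people = [{"id": i, "age_group": "adult" if i < 80 else "senior"}
          for i in range(100)]
sample = stratified_sample(people, lambda p: p["age_group"], 0.10, seed=1)
```

With simple random sampling, a 10-person sample could easily miss the senior group entirely; stratifying first makes that impossible.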
On the non-probability side, convenience sampling is the most common method in clinical research. Researchers enroll whoever is available and accessible. It’s fast and cheap but introduces bias because the sample may not reflect the broader population. Purposive sampling deliberately selects participants who have specific characteristics or experiences relevant to the research question, and it’s a staple of qualitative work.
Choosing the Right Method
The best research method is the one that matches your question. If you need to measure how widespread something is, a cross-sectional survey works. If you need to show that one thing causes another, you need an experiment with random assignment. If you want to understand how people experience a process or make decisions, qualitative methods will get you further than any spreadsheet. And if the question is complex enough to need both numbers and narratives, mixed methods let you combine strengths from each side.
The type of method also determines how much weight a finding carries. A single case report can spark a hypothesis, but it takes a well-designed trial, and ideally a systematic review of multiple trials, to confirm it. Recognizing where a study sits in that hierarchy is one of the most practical skills for evaluating any claim you encounter.