What Research Method Is a Survey? Explained

A survey is a research method built around collecting information from people through structured questions. It can function as a quantitative method, a qualitative method, or a mix of both, depending on how the questions are designed. Surveys using numerically rated items (like rating scales from 1 to 5) produce quantitative data, while surveys with open-ended questions generate qualitative data. Most surveys combine both approaches, making them one of the most flexible tools in research.

Because surveys describe and explore human behavior, they are among the most widely used methods in social science, psychology, public health, marketing, and education. Their core strength is practical: they can gather information from a large number of people relatively quickly and at a lower cost than methods like interviews or direct observation.

How Surveys Fit Into Research Design

Surveys are not a single, fixed method. They are a data collection tool that can be plugged into different research designs. The two most common designs are cross-sectional and longitudinal.

A cross-sectional survey captures a snapshot. It collects data from a group of people at a single point in time, allowing researchers to compare different variables all at once. For example, a cross-sectional survey might ask 2,000 adults about their exercise habits, sleep quality, and stress levels in the same questionnaire. These surveys are faster and cheaper to run, but they cannot tell you whether one thing caused another. They show patterns, not sequences.

A longitudinal survey follows the same group of people over weeks, months, or even years, collecting data at multiple points. This design can detect how attitudes or behaviors change over time and is better suited for suggesting cause-and-effect relationships. The tradeoff is that longitudinal surveys take significantly more time and resources to complete, and participants may drop out along the way.

Common Ways Surveys Are Administered

The method you use to deliver a survey shapes who responds and how they answer. The most common formats today include online questionnaires, phone interviews (sometimes called computer-assisted telephone interviews), face-to-face interviews, and paper-and-pencil forms mailed to respondents. Online surveys dominate modern research because of their speed and low cost, but each mode has strengths. Face-to-face interviews tend to produce more detailed answers and higher completion rates. Phone surveys can reach people who lack internet access. Mailed surveys give respondents time to think through their answers privately.

How Sampling Works in Surveys

A survey rarely reaches every single person in a population. Instead, researchers select a sample, a smaller group meant to represent the larger whole. The method used to select that sample has a major impact on how trustworthy the results are.

Probability sampling (also called random sampling) gives every person in the target population a known chance of being selected. This approach has remained the standard for making claims about large populations because it consistently produces results that are nearly unbiased and have measurable precision. Federal agencies, for example, rarely rely on anything other than probability samples when describing groups like U.S. adults. The simplest version is simple random sampling, where the researcher randomly selects whom to reach out to from the full target population, giving every member an equal chance of being chosen.

Nonprobability sampling, sometimes called purposive or convenience sampling, does not give every person a calculable chance of being selected. Participants might volunteer through an online panel, or a researcher might recruit people who are easy to reach. Because there is no way to calculate a probability of selection for these samples, their use is typically limited to exploratory or developmental research. They are useful for early-stage work like testing whether survey questions make sense before launching a full study, but they carry a higher risk of producing results that do not generalize to the broader population.
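To make the contrast concrete, here is a minimal sketch of simple random sampling using Python's standard library. The population list and sample size are hypothetical, chosen only for illustration:

```python
import random

# Hypothetical sampling frame: a list identifying everyone in the target population.
population = [f"person_{i}" for i in range(10_000)]

# Simple random sampling: every member has the same known chance
# (here 500 / 10,000 = 5%) of ending up in the sample.
sample = random.sample(population, k=500)

print(len(sample))        # 500
print(len(set(sample)))   # 500 — sampled without replacement, so no duplicates
```

A convenience sample, by contrast, would be something like `population[:500]` (whoever is easiest to reach): no selection probability can be assigned to the people further down the list, which is exactly why such samples are hard to generalize from.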

Types of Survey Questions and What They Measure

Survey questions fall along a spectrum of precision, and the type of question you ask determines what kind of analysis you can do with the answers. Researchers describe this using four levels of measurement.

  • Nominal: Categories with no ranking. Examples include gender, ethnicity, or yes/no questions. You can count how often each category appears, but you cannot calculate an average.
  • Ordinal: Categories that have a meaningful order, but the gaps between them are not necessarily equal. A satisfaction scale of “very unhappy, unhappy, neutral, happy, very happy” is ordinal. You can rank the responses and find a middle value, but the distance between “happy” and “very happy” is not precisely defined.
  • Interval: Numeric responses where the spacing between values is consistent, but there is no true zero point. Temperature in Celsius is a classic example. With interval data, you can calculate averages and standard deviations.
  • Ratio: Like interval data, but with a meaningful zero. Age, income, and number of hours worked per week are ratio-level. This is the most flexible level, allowing every type of mathematical operation.
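The practical consequence of these levels is which summary statistics are legitimate. A short sketch with hypothetical survey responses (the data values are invented for illustration):

```python
import statistics

# Hypothetical responses at three levels of measurement.
nominal = ["yes", "no", "yes", "yes", "no"]   # categories only
ordinal = [1, 2, 2, 4, 5]                     # coded 1=very unhappy … 5=very happy
ratio   = [35, 42, 40, 38, 45]                # hours worked per week

# Nominal: counting category frequencies is the only meaningful operation.
counts = {answer: nominal.count(answer) for answer in set(nominal)}

# Ordinal: ranking and the median are meaningful; a mean of the codes is not,
# because the gaps between categories are not guaranteed to be equal.
middle = statistics.median(ordinal)

# Interval and ratio: means and standard deviations are meaningful.
avg = statistics.mean(ratio)
spread = statistics.stdev(ratio)

print(counts["yes"], middle, avg)   # 3 2 40
```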

Getting this right at the question-design stage matters because it locks in or limits the statistical tools available later. A survey built entirely on yes/no questions, for instance, cannot produce average scores.

How Survey Data Gets Analyzed

Once responses are collected, the analysis depends on the research question and the type of data gathered. The goal is usually one of four things: comparing groups, finding associations between variables, predicting outcomes, or checking whether different measurements agree.

For comparing two groups on a categorical variable (such as whether men and women differ in their likelihood of voting), a chi-square test is the standard tool. When comparing two groups on a numeric outcome (like average test scores between two classrooms), an unpaired t-test is typical for independent groups, and a paired t-test works when the same people are measured twice. If you have more than two groups, analysis of variance (ANOVA) extends the comparison.
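The unpaired t-test is simple enough to sketch directly. The classroom scores below are invented, and the function implements the classic pooled-variance (Student's) version, which assumes the two groups have roughly equal variances:

```python
import statistics

# Hypothetical survey data: test scores from two classrooms.
class_a = [72, 85, 78, 90, 66, 81]
class_b = [68, 74, 70, 79, 65, 72]

def unpaired_t(x, y):
    """Student's two-sample t statistic with pooled variance
    (assumes roughly equal spread in the two groups)."""
    nx, ny = len(x), len(y)
    pooled_var = (
        (nx - 1) * statistics.variance(x) + (ny - 1) * statistics.variance(y)
    ) / (nx + ny - 2)
    standard_error = (pooled_var * (1 / nx + 1 / ny)) ** 0.5
    return (statistics.mean(x) - statistics.mean(y)) / standard_error

t = unpaired_t(class_a, class_b)
print(round(t, 2))   # positive t: class A averaged higher than class B
```

In practice researchers would use a statistics package (for example, `scipy.stats.ttest_ind`) to get the p-value as well; the sketch above shows only where the test statistic comes from.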

For exploring relationships, correlation measures how strongly two variables move together, while regression goes a step further and predicts one variable from another. Linear regression handles continuous outcomes like income, while logistic regression handles binary outcomes like whether someone did or did not complete a program. These tools form the backbone of most survey-based research.

Common Sources of Bias

Surveys are only as good as the honesty and representativeness of the responses they collect, and several types of bias can distort results.

Social desirability bias is one of the most persistent problems. Respondents tend to shift their answers toward what they think the researcher (or society) wants to hear. Socially undesirable behaviors are consistently under-reported. In studies on smoking during pregnancy, for instance, mothers tend to answer “no” even when they did smoke. This effect is strongest for sensitive topics like drug use, income, sexual behavior, and parenting practices. Anonymous survey formats can reduce this bias, but they do not eliminate it.

Nonresponse bias occurs when the people who choose not to respond differ systematically from those who do. If healthier people are more likely to complete a health survey, the results will paint an overly optimistic picture of the population’s well-being. Historically, response rates were treated as a proxy for data quality, though research has shown that response rates alone do not reliably predict how biased the results are. The pattern of who responds and who does not matters more than the raw percentage.

Coverage error happens before a single question is asked. It arises when the list of people the survey could potentially reach does not match the actual population of interest. An online-only survey, for example, will miss people without internet access, who may differ in important ways from those with access.

Ethical Requirements for Survey Research

Any survey involving human participants carries ethical obligations. The most fundamental is informed consent: participants need to know what data is being collected, how it will be used, and who will have access to it before they agree to take part. For prospective surveys (those collecting new data going forward), informed consent is considered essential.

Privacy and confidentiality are separate but related requirements. Privacy is the participant’s right to control what personal information is collected and shared. Confidentiality is the researcher’s obligation to protect that information from unauthorized access, disclosure, or theft. In practice, this means storing data in de-identified or anonymized formats whenever possible. Anonymized data has been treated so that the individual it came from can no longer reasonably be re-identified. Researchers and institutions are expected to have clear processes for data sharing, de-identification, and secure storage before a survey launches.
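One common de-identification step is replacing direct identifiers with stable pseudonymous codes. The sketch below is a simplified illustration with a made-up record and field names, not a complete anonymization procedure:

```python
import hashlib

# Hypothetical raw survey record containing a direct identifier.
record = {"email": "respondent@example.com", "age": 34, "satisfaction": 4}

def de_identify(rec, secret_salt):
    """Replace the direct identifier with a salted hash, so analysts can
    link a respondent's records across waves without seeing who they are.
    Note this is de-identification, not full anonymization: whoever holds
    the salt could still re-link the data, so the salt must be stored
    separately from the dataset."""
    token = hashlib.sha256((secret_salt + rec["email"]).encode()).hexdigest()[:12]
    clean = {k: v for k, v in rec.items() if k != "email"}
    clean["respondent_id"] = token
    return clean

safe = de_identify(record, secret_salt="keep-this-out-of-the-dataset")
print(safe)   # no email; a stable pseudonymous id instead
```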