Citizen science is scientific research carried out with the help of ordinary people who aren’t professional scientists. These volunteers might count birds in their backyard, classify galaxies from telescope images, or monitor air quality in their neighborhood. The Oxford English Dictionary, which added the term in 2014, defines it as “scientific work undertaken by members of the general public, often in collaboration with or under the direction of professional scientists and scientific institutions.”
Where the Term Came From
The phrase “citizen science” was coined independently in the mid-1990s by two researchers unaware of each other’s work. Rick Bonney, an American ornithologist, used it to describe projects where amateur birdwatchers voluntarily contributed scientific data. Around the same time, British sociologist Alan Irwin used the same term with a different emphasis: he argued that science and science policy should be opened up to the public, giving everyday people a voice in how research is directed. These two origins still shape the field today. Some projects simply ask volunteers to collect or sort data. Others give participants a role in shaping research questions, designing methods, and interpreting results.
How Participation Works
Not all citizen science projects ask the same thing of volunteers. The simplest model is contributory: scientists design the study, and volunteers gather observations or sort data following clear instructions. Think of someone logging the birds they spot in their yard or tagging photos of animals captured by trail cameras. This is the most common setup and the easiest to join.
At the other end of the spectrum are co-created projects, where community members help define the research question itself, design how data will be collected, and participate in analysis. These are sometimes called “community science” to distinguish them from more top-down efforts. A neighborhood group concerned about water pollution, for example, might partner with university researchers to design a monitoring study, collect samples, and use the results to push for local policy changes. Most projects fall somewhere between these two extremes.
Major Projects and Their Scale
A few platforms have become household names in the citizen science world. iNaturalist, a biodiversity platform, has hosted over 50 million verifiable observations covering 300,000 species from more than 1.3 million users, and those numbers continue to grow. Participants photograph plants, insects, fungi, and animals; the community and automated tools then help confirm identifications.
Zooniverse is the largest platform for online citizen science across multiple disciplines. Its most famous project, Galaxy Zoo, asks volunteers to classify the shapes of galaxies from telescope images. That single project has generated 86 peer-reviewed scientific papers. Across all Zooniverse projects, more than 80 additional “meta” publications have examined how citizen science itself works.
These aren’t niche efforts. The data volunteers produce feeds directly into scientific databases, conservation planning, and published research that other scientists build on.
Beyond Ecology and Astronomy
While nature observation and astronomy get the most attention, citizen science has expanded into health, urban planning, and weather monitoring. In Canada, the PriCARE research program has involved patients in studying how primary care clinics manage people with complex chronic illnesses, giving participants a role in shaping the research tools themselves. Some projects have developed patient-oriented versions of standardized health questionnaires, balancing scientific rigor with language and formats that make sense to the people actually filling them out.
Weather monitoring is another growing area. Volunteer-run weather stations now supplement official networks, and researchers validate the data by comparing citizen measurements with established stations using statistical methods like time-series analysis and spatial interpolation. In urban planning, initiatives like Germany’s “Stadtradeln” collect cycling data from participants, giving local politicians concrete information about how to improve bike infrastructure.
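The validation step described above can be sketched in a few lines. This is a minimal illustration, not any project’s actual pipeline: it assumes readings from a volunteer station and a nearby official station have already been aligned to the same timestamps, and it computes two standard comparison statistics, mean bias and root-mean-square error.

```python
import math

def validate_against_reference(citizen, reference):
    """Compare time-aligned readings from a volunteer weather station
    against a nearby official station.

    Returns (bias, rmse): a large bias suggests a calibration offset,
    while a large RMSE suggests noisy or poorly sited sensors.
    """
    diffs = [c - r for c, r in zip(citizen, reference)]
    bias = sum(diffs) / len(diffs)                          # mean offset
    rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))  # spread of errors
    return bias, rmse

# Hypothetical hourly temperatures (deg C) from the two stations.
citizen_temps  = [14.2, 15.1, 16.0, 17.3, 18.1]
official_temps = [14.0, 15.0, 15.8, 17.0, 17.9]
bias, rmse = validate_against_reference(citizen_temps, official_temps)
# Here the volunteer station reads consistently ~0.2 deg C warm.
```

Real studies use richer methods (time-series analysis, spatial interpolation across many stations), but the core idea is the same: quantify how far citizen measurements drift from a trusted reference.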
How Reliable Is the Data?
A fair question: can data from untrained volunteers actually be trusted? The short answer is yes, when projects are designed well. Quality control typically works on multiple levels. Many platforms use consensus filtering, where several independent volunteers classify the same image or observation, and the majority answer is taken as correct. Expert reviewers spot-check submissions. Automated algorithms flag entries that look unusual or inconsistent.
For environmental monitoring, researchers routinely compare citizen-collected measurements against professional instruments to verify accuracy. The key is that no single volunteer’s observation carries too much weight. With thousands or millions of data points, individual errors get washed out by the volume of correct entries.
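Consensus filtering, the first quality-control layer mentioned above, is straightforward to sketch. The function below is an illustrative toy, not any platform’s real algorithm: it takes independent labels from several volunteers for one item, returns the majority label when agreement is strong enough, and otherwise flags the item for expert review (the thresholds are arbitrary assumptions).

```python
from collections import Counter

def consensus_label(classifications, min_votes=3, agreement=0.6):
    """Majority-vote consensus over independent volunteer labels.

    classifications: labels for one item, e.g. ["spiral", "spiral", "elliptical"].
    Returns the winning label, or None when there are too few votes
    or no clear majority -- those items go to an expert reviewer.
    """
    if len(classifications) < min_votes:
        return None  # not enough independent opinions yet
    label, count = Counter(classifications).most_common(1)[0]
    if count / len(classifications) < agreement:
        return None  # no clear consensus -> route to spot-check
    return label

# Five volunteers classify the same galaxy image; one dissents.
votes = ["spiral", "spiral", "elliptical", "spiral", "spiral"]
result = consensus_label(votes)  # "spiral" wins 4 of 5 votes
```

This is also why a single volunteer’s mistake carries little weight: one wrong vote among five changes nothing, and ambiguous items are escalated rather than guessed at.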
What Participants Get Out of It
The benefits don’t flow in just one direction. Research on students who participated in a citizen science project focused on animal behavior found that their self-assessed comfort and skill with scientific research improved by 30% over the course of the project. Their comfort with explaining scientific results to others verbally rose 21%, and written communication confidence grew 17%. Perhaps most striking, when asked “Do you know what citizen science is?” their self-rated understanding nearly doubled.
These gains weren’t just self-reported impressions. When tested objectively, roughly 79% of participants could correctly identify a scientific hypothesis, and 88% understood that the data they collected could answer a real research question. Citizen science, in other words, doesn’t just produce data for researchers. It builds scientific literacy in the people who participate.
Influence on Policy
Citizen science data increasingly shapes real-world decisions. The CityCLIM campaign, for instance, goes beyond collecting climate data: participants actively co-design city policies based on what they’ve measured and analyzed. Projects like these expand observational networks far beyond what government agencies could achieve alone, covering more geographic area and longer time periods than professional monitoring stations typically manage.
This expanded coverage is especially valuable for environmental issues, where regulators often lack the granular, neighborhood-level data needed to justify action. When communities collect their own evidence of pollution, habitat loss, or climate impacts, they bring something concrete to policy discussions rather than relying on anecdotes.
Who Owns the Data?
One unresolved tension in citizen science involves intellectual property. In the United States, courts have generally been reluctant to grant participants ongoing property rights over data or samples they contribute to research. Copyright law protects original works and vests ownership in the author, while patents belong to whoever conceived the invention. Where citizen scientists fit into these categories isn’t always clear.
Different projects handle this differently. Some require participants to surrender ownership interests as a condition of joining, keeping all outputs openly available. Others invite volunteers to co-author published papers, which gives them copyright interests in the work. The European Citizen Science Association has published principles urging project leaders to give participants access to datasets, inform them of research outcomes, and take intellectual property issues seriously from the start.
The Role of AI
Artificial intelligence is increasingly paired with human observation. At the simpler end, smartphone apps help volunteers identify species in the field, acting as a first pass that the community then confirms or corrects. More advanced machine learning algorithms detect patterns in the massive datasets that citizen science generates, handling tasks like filtering noise from weather station data or flagging unusual animal behavior in camera trap footage.
The combination works because each side compensates for the other’s weaknesses. AI can process millions of images faster than any group of volunteers, but humans are still better at handling ambiguous or unusual observations. Volunteers, meanwhile, provide the labeled training data that machine learning models need to improve. There’s also a growing movement to involve citizen scientists not just in data collection but in the analysis stage, closing the loop so participants see how their contributions lead to results.
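The division of labor described above is often implemented as a confidence-based triage: the model handles what it is sure about, and everything else is queued for volunteers. The sketch below is a hypothetical illustration of that pattern (the item names, labels, and threshold are made up), not any platform’s actual code.

```python
def triage(predictions, confidence_threshold=0.9):
    """Split classifier outputs into auto-accepted and human-review queues.

    predictions: (item_id, label, confidence) tuples from an image model.
    High-confidence predictions are accepted automatically; the rest go
    to volunteers, whose corrected labels can later retrain the model.
    """
    auto, review = [], []
    for item_id, label, confidence in predictions:
        if confidence >= confidence_threshold:
            auto.append((item_id, label))      # machine decides
        else:
            review.append((item_id, label))    # human decides
    return auto, review

# Hypothetical camera-trap classifications.
preds = [("img1", "deer", 0.97), ("img2", "fox", 0.62), ("img3", "empty", 0.99)]
auto, review = triage(preds)
# img1 and img3 are accepted automatically; img2 goes to volunteers.
```

The volunteer-corrected labels from the review queue are exactly the training data mentioned above, which is what closes the loop between human and machine.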