Will Artificial Intelligence Harm or Benefit Humankind?

Artificial intelligence will almost certainly do both. The technology is already delivering measurable gains in medicine, science, and economic productivity, while simultaneously introducing real risks around job displacement, bias, and concentrated power. The outcome depends less on the technology itself and more on how societies choose to deploy, regulate, and distribute its benefits. Here’s what the evidence shows so far.

The Economic Upside Is Enormous

AI is projected to contribute $15.7 trillion to the global economy by 2030. Of that, $6.6 trillion comes from productivity gains and $9.1 trillion from effects on the consumer side: new products, personalized services, and time savings. China stands to see the largest national boost, with AI expected to increase its GDP by 26% by 2030. North America follows at a projected 14.5% increase, driven largely by early adoption and a head start in AI research.

These projections are grounded in changes already underway. Companies are using AI to automate routine tasks, optimize supply chains, and accelerate product development. The gains compound: when businesses become more productive, they can lower prices, expand into new markets, and reinvest the savings. The question is who captures that value.

Jobs Will Shift, Not Simply Vanish

The World Economic Forum projects that 92 million jobs will be displaced by 2030, but 170 million new roles will emerge in the same period. That net gain of roughly 78 million jobs sounds reassuring, but the math hides a painful transition. The people losing jobs are not necessarily the same people filling the new ones. A factory worker displaced by automation doesn’t automatically become an AI systems manager.

The jobs most vulnerable are those built around repetitive, predictable tasks: data entry, basic customer service, routine document review. The roles emerging tend to require skills in data analysis, AI system oversight, and technical maintenance. The gap between those two categories is where the real harm sits. Without large-scale retraining programs, entire communities could be left behind while aggregate employment numbers look fine on paper. Countries and companies that invest early in workforce transition will absorb the shock far better than those that don’t.

Medicine Is Already Changing

One of the clearest benefits of AI is in drug discovery. Developing a new drug traditionally takes 4 to 6 years just to get through preclinical stages, with costs running into hundreds of millions of dollars. AI is compressing that timeline dramatically. In 2021, the company Insilico Medicine used AI to identify a new drug target for a serious lung disease called idiopathic pulmonary fibrosis and advanced a candidate into preclinical trials in just 18 months, at a cost of roughly $150,000 (excluding lab validation). That’s a fraction of the typical time and budget.

The speed comes from AI’s ability to process multiple streams of biological data simultaneously: genetic information, protein structures, chemical interactions. Traditional research handles these largely one at a time. AI models can run them in parallel, potentially compressing years of preclinical work into months. This matters not just for rare diseases but for pandemic preparedness, where shaving even a few months off vaccine or treatment development could save millions of lives.

Smarter Predictions for a Warming Planet

AI is proving especially useful in climate science and disaster preparedness. A hybrid AI framework recently tested against the U.S. National Water Model showed that AI-enhanced flood predictions were 4 to 6 times more accurate than the traditional model alone, across forecast windows of 1 to 10 days. That kind of improvement translates directly into better evacuation planning, smarter infrastructure investment, and fewer deaths.

This matters more every year. Flooding impacts are projected to increase more than 20-fold by the end of this century due to climate change and human activity. Having prediction tools that can give communities days of accurate warning, rather than hours of uncertain guidance, changes what’s possible in terms of response. AI won’t stop climate change, but it can help societies adapt to its consequences with far more precision.

Education Gets More Personal

AI-driven tutoring systems are showing real results in classrooms. In a controlled study of medical students, those using an AI personalized learning platform scored significantly higher on post-tests than those in a traditional learning group (84.5 versus 81.7 on average), with an effect size of 0.72, which researchers consider moderate to large.

The biggest gains appeared among students who were struggling the most. Students who started with scores below 70 improved by an average of 12.3 points when using the AI platform, compared to 8.7 points in the control group. That’s a meaningful difference for learners at risk of falling behind. AI tutors work by adapting in real time to what a student understands and where they’re confused, essentially providing the kind of one-on-one attention that’s impossible in a lecture hall. The technology is still early, but the pattern is consistent: personalized pacing helps, and AI makes it scalable.
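The 0.72 figure cited above is an effect size, most likely Cohen's d: the difference between the two group means divided by their pooled standard deviation. The study's raw scores aren't reproduced here, so the numbers below are purely illustrative, but the calculation itself is simple:

```python
from math import sqrt
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    pooled_sd = sqrt(((na - 1) * stdev(group_a) ** 2 +
                      (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2))
    return (mean(group_a) - mean(group_b)) / pooled_sd

# Hypothetical post-test scores (means match the article's 84.5 vs 81.7,
# but the spread is invented, so the resulting d will not equal 0.72):
ai_group = [86, 83, 88, 84, 81.5]
control = [82, 80, 84, 82, 80.5]
d = cohens_d(ai_group, control)
```

A d of 0.72 means the average AI-platform student scored about three-quarters of a standard deviation above the average control student, which is why researchers label it moderate to large.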

Bias Gets Baked In

AI systems inherit the biases present in their training data, and the consequences can be severe. A major study by the National Institute of Standards and Technology evaluated facial recognition algorithms and found that false positive rates for Asian and African American faces were 10 to 100 times higher than for white faces, depending on the algorithm. American Indian and Alaska Native groups had the highest false positive rates of all. For African American women specifically, false positive rates were elevated in both one-to-one and one-to-many matching scenarios.

These aren’t abstract statistics. Facial recognition is used in law enforcement, airport security, and identity verification for banking. A false positive can mean being wrongly flagged as a suspect or denied access to services. When the error rates are 10 to 100 times higher for certain groups, the technology effectively creates a two-tier system where some people are far more likely to be harassed or misidentified. This is one of the clearest examples of how AI can cause harm even when it’s “working” in a technical sense.

Weapons Without Human Judgment

The development of lethal autonomous weapons, systems that can select and engage targets without human intervention, represents one of the most serious risks of AI. Despite eight rounds of formal international discussions since 2014 under the Convention on Conventional Weapons, there is still no binding treaty restricting these systems. Progress has been blocked by consensus rules that allow individual nations to veto agreements.

Human rights organizations, including Human Rights Watch and the Harvard Law School International Human Rights Clinic, have called for a new standalone treaty, arguing that the current diplomatic framework is insufficient. They’ve pointed to successful precedents: the Mine Ban Treaty and the Convention on Cluster Munitions were both created outside the traditional consensus-based process. But for now, the technology is advancing faster than the diplomacy. Multiple nations are actively developing autonomous weapons systems, and the absence of clear international rules creates a race-to-the-bottom dynamic where restraint is penalized.

Regulation Is Starting, Slowly

The European Union’s AI Act is the most comprehensive attempt at regulation so far. Its prohibitions on certain AI practices, such as social scoring systems and manipulative AI, took effect in February 2025. The majority of the Act’s rules, including enforcement mechanisms, kick in by August 2026. The law categorizes AI systems by risk level and imposes stricter requirements on higher-risk applications like those used in hiring, law enforcement, and critical infrastructure.

Outside Europe, regulation remains patchy. The U.S. has relied more on sector-specific guidance than comprehensive legislation. China has introduced rules targeting specific applications like deepfakes and recommendation algorithms but within a framework that also promotes rapid AI development. The gap between how fast AI is being deployed and how fast governance structures are catching up is one of the defining tensions of this period. Companies operating globally now face a patchwork of rules that differ by jurisdiction, which makes consistent safety standards difficult to enforce.

The Balance Depends on Choices

The evidence points in both directions simultaneously. AI is making drug discovery faster, flood predictions more accurate, and personalized education more accessible. It’s also amplifying racial bias in facial recognition, displacing millions of workers, and enabling weapons systems that operate beyond human control. The technology is not inherently good or bad. It amplifies whatever goals, values, and blind spots its developers and deployers bring to it.

What separates a beneficial outcome from a harmful one is largely a matter of policy, investment, and accountability. Societies that fund workforce retraining, enforce algorithmic audits, and establish clear rules for high-risk applications will capture more of the upside. Those that let deployment outpace governance will absorb more of the damage, and that damage will fall disproportionately on people who are already vulnerable.