Artificial intelligence is becoming a true industry, with all the pluses and minuses that entails, according to a sweeping new report.
Why it matters: AI is now in nearly every area of business, with the pandemic pushing even more investment in drug design and medicine. But as the technology matures, challenges around ethics and diversity grow.
Driving the news: This morning, the Stanford Institute for Human-Centered Artificial Intelligence (HAI) released its annual AI Index, a comprehensive overview of the current state of the field.
- A majority of North American AI Ph.D.s — 65% — now go into industry, up from 44% in 2010, a sign of the growing role that large companies are playing both in AI research and implementation.
- “The striking thing to me is that AI is moving from a research phase to much more of an industrial practice,” says Erik Brynjolfsson, a senior fellow at HAI and director of the Stanford Digital Economy Lab.
By the numbers: Even with the pandemic, private AI investment grew by 9.3% in 2020, a bigger increase than in 2019.
- For the third year in a row, however, the number of newly funded companies decreased, a sign that “we’re moving from pure research and exploratory small startups to industrial-stage companies,” says Brynjolfsson.
- While academia remains the single-biggest source worldwide for peer-reviewed AI papers, corporate-affiliated research now represents nearly a fifth of all papers in the U.S., making it the second-biggest source.
- The drug and medical industries took in by far the biggest share of overall private AI investment in 2020, absorbing more than $13.8 billion — 4.5 times more than in 2019 and nearly three times more than the next-biggest category, autonomous vehicles.
The catch: While the field has experienced sudden busts in the past — the “AI winters” that vaporized funding — there’s little indication such a collapse is on the horizon. But industrialization comes with its own growing pains.
As AI grows, the ethical challenges embedded in the field — and the fact that 45% of new AI Ph.D.s are white, compared to just about 2% who are Black — will mean “there’s a new frontier of potential privacy violations and other abuses,” says Brynjolfsson.
- The AI Index found that while the field of AI ethics is growing, the interest level of big companies is still “disappointingly small,” says Brynjolfsson.
Details: Those growing pains are at play in one of the most exciting applications in AI today: massive text-generating models.
- Systems like OpenAI’s GPT-3, released last year, swallow hundreds of billions of words along the way to producing original text that can be eerily human-like in its execution.
- Text-generating AI models could help job seekers polish their resumes, but could also potentially be used to spam corporate competitors with realistic computer-generated applications, not to mention warp our shared reality.
- “What we increasingly have with these models is a double-edged sword,” says Kristin Tynski, a co-founder and senior VP at Fractl, a data-driven marketing company.
What to watch: The growing geopolitical AI competition between the U.S. and China.
- The National Security Commission on Artificial Intelligence warned in a major report this week that “China possesses the might, talent, and ambition to surpass the United States as the world’s leader in AI in the next decade if current trends do not change.”
- “We don’t have to go to war with China,” former Google CEO Eric Schmidt, who chaired the committee that authored the report, told my Axios colleague Ina Fried. “We do need to be competitive.”
Yes, but: While researchers in China publish the most AI papers, the U.S. still leads on quality, according to the Stanford survey.
- And while a majority of AI Ph.D.s in the U.S. are from abroad, more than 80% remain in the country when they take jobs — a sign of the lasting attraction of the U.S. tech sector.
The bottom line: AI still has a long way to go, but the challenges the field faces are shifting from what it can do to what it should do.