For decades, Hollywood has made millions off of our fears that artificial intelligences such as HAL in 2001: A Space Odyssey and Skynet in The Terminator could one day control us or even wipe out humanity.
But that was then. Today, we have kinder, gentler, real-life AIs like Apple’s Siri and Amazon’s Alexa, and according to a new survey overseen by a team of University of Delaware researchers, many of us are more than happy to include this technology in our daily lives.
The results of the survey, released this month, show that almost half of all Americans say they use a voice-activated personal assistant such as Siri or Alexa. Those who use such assistants are particularly likely to support developing AI (63%) and public funding for research on it (46%), while those who do not utilize these services show less support (51% and 37%, respectively). Furthermore, people who use voice assistants are especially likely to see AI as having positive effects on society and to feel hopeful about the technology.
“More and more Americans are using personal assistants in their everyday lives, and that may be helping to pave the way for broader acceptance of AI,” said UD communication professor Paul Brewer, one of the study’s co-authors. “People who talk to Siri or Alexa are especially supportive of AI in general.”
The study, funded by a grant from the Charles Koch Foundation, was conducted from March 17-27, 2020, by the National Opinion Research Center. A nationally representative sample of 1,900 adult U.S. residents in NORC’s AmeriSpeak Panel was interviewed online. Results were weighted by age, sex, education, race/ethnicity, housing tenure, telephone status and Census Division to reflect U.S. population values.
The survey found that 67% of Americans who frequently watch science fiction programs and movies believe we should do more to develop AI. Meanwhile, 55% of those who don’t watch science fiction favor developing AI technology. Avid science fiction viewers are also more likely to say that AI will benefit rather than harm society.
“We found that science fiction fans are particularly favorable toward developing AI technology,” said Brewer, “which is particularly fascinating when you consider how AI is often negatively portrayed in science fiction films.”
Yet the survey showed that broader use of AI technology did not necessarily translate into familiarity. Most Americans reported only a slight familiarity with AI. Roughly a quarter (27%) said they had heard a lot about AI, while more than half (59%) said they had heard a little about it. The remaining 13% had heard nothing about AI.
The survey results suggest that Americans want non-political experts to manage AI. The public trusts experts at universities, technology companies and the military to handle AI, but they do not trust the federal government. Almost three-fourths of Americans (73%) trust university researchers a great deal or a fair amount when it comes to developing and using AI. Majorities also trust technology companies (61%) and the U.S. military (60%) to develop and use AI. However, only 35% of Americans trust the government in Washington to do so.
“The high distrust in government is notable, particularly in contrast to the support for regulation. My hunch is that this reflects broader distrust in government among both traditional anti-government and anti-Trump folks,” Brewer said.
When it comes to trust in AI, the public probably needs more specific questions about who, how and why, said co-author David Wilson, a professor of political science and international relations.
“These concerns reflect a level of mistrust, but also a desire to see more AI use,” Wilson said. “These tensions are what make AI fundamentally political, and as technology advances, researchers should continue to monitor public attitudes.”
There are some partisan gaps, but the issue doesn’t seem sharply polarized right now – unlike, say, climate change or, more recently, COVID-19. That makes sense given that there’s no sharp, clear divide between Republican and Democratic leaders on the issue, Brewer said.
But while this study mines some of the typical territory – such as partisan divides and gender gaps – it also breaks new ground because of its focus on the various determinants of political messaging, Wilson said.
“Not only did we look at issues related to trust and general support or opposition, but we tested how the framing of AI matters with regard to scary or gentle images,” he said. “These types of innovations help us to better predict how individuals fill in the knowledge gaps about AI, when they do not have strong attitudes.”