People are scared of artificial intelligence – here’s why we should embrace it instead – World Economic Forum

  • Artificial intelligence (AI) needs to be democratized to help more people understand it and embrace its potential;
  • We need to develop regulations for AI that are agile and adapt to this rapidly progressing technology;
  • A focus on “Trustworthy AI” offers a promising model for innovation and the governance of AI.

Artificial intelligence (AI) has gained widespread attention in recent years. AI is viewed as a strategic technology to lead us into the future. Yet, when interacting with academics, industry leaders and policy-makers alike, I have observed some growing concerns around the uncertainty of this technology.

In my observation, these concerns can be categorized into three perspectives:

  • Many people lack a full understanding of AI and therefore are more likely to view it as a nebulous cloud instead of a powerful driving force that can create a lot of value for society;
  • Some companies or individuals worry that they will fall behind as AI becomes more prevalent;
  • As is often the case with new technology, AI is increasingly used while policy and regulation lag behind.

These concerns are understandable at this moment in time, and we need to face them. As long as we do, I believe we need not panic about AI and that society will benefit from embracing it. I propose we address these concerns as follows:

1. We should democratize AI

Instead of writing off AI as too complicated for the average person to understand, we should seek to make AI accessible to everyone in society. It shouldn’t be just the scientists and engineers who understand it; through adequate education, communication and collaboration, people will understand the potential value that AI can create for the community.

2. In AI no one will be “left behind”

Democratizing AI means that the technology should belong to and benefit all of society; it also means being realistic about where we are in AI’s development.

We have made a lot of progress in AI. But if we think of it as a vast ocean, we are still only walking on the beach. Most of the achievements we have made are, in fact, based on having a huge amount of (labelled) data, rather than on AI’s ability to be intelligent on its own. Learning in a more natural way, including unsupervised or transfer learning, is still nascent and we are a long way from reaching AI supremacy.

From this point of view, society has only just started its long journey with AI and we are all pretty much starting from the same page. To achieve the next breakthroughs in AI, we need the global community to participate and engage in open collaboration and dialogue.

Machine learning projects took home the most AI funding in 2019

3. We should take an agile approach to the governance of AI

We can benefit from AI innovation while we are figuring out how to regulate the technology. Let me give you an example: Ford Motor produced the Model T car in 1908, but it took 60 years for the US to issue formal regulations on the use of seatbelts. This delay did not prevent people from benefitting significantly from this form of transportation. At the same time, however, we need regulations so society can reap sustainable benefits from new technologies like AI and we need to work together as a global community to establish and implement them.

The World Economic Forum was the first to draw the world’s attention to the Fourth Industrial Revolution, the current period of unprecedented change driven by rapid technological advances. Policies, norms and regulations have not been able to keep up with the pace of innovation, creating a growing need to fill this gap.

The Forum established the Centre for the Fourth Industrial Revolution Network in 2017 to ensure that new and emerging technologies will help—not harm—humanity in the future. Headquartered in San Francisco, the network launched centres in China, India and Japan in 2018 and is rapidly establishing locally-run Affiliate Centres in many countries around the world.

The global network is working closely with partners from government, business, academia and civil society to co-design and pilot agile frameworks for governing new and emerging technologies, including artificial intelligence (AI), autonomous vehicles, blockchain, data policy, digital trade, drones, internet of things (IoT), precision medicine and environmental innovations.

Learn more about the groundbreaking work that the Centre for the Fourth Industrial Revolution Network is doing to prepare us for the future.

Want to help us shape the Fourth Industrial Revolution? Contact us to find out how you can become a member or partner.

By addressing the aforementioned concerns people may have regarding AI, I believe that “Trustworthy AI” will provide great benefits to society. There is already a consensus in the international community about the six dimensions of “Trustworthy AI”: fairness, accountability, value alignment, robustness, reproducibility and explainability. While fairness, accountability and value alignment embody our social responsibility, robustness, reproducibility and explainability pose massive technical challenges for us.

“Trustworthy AI” innovation is a marathon, not a sprint. If we are willing to stay the course and if we embrace AI innovation and regulation with an open, inclusive, principle-based and collaborative attitude, the value AI can create could far exceed our expectations. I believe that the next generation of the intelligence economy will be forged in trust and differentiated by perspective.

License and Republishing

World Economic Forum articles may be republished in accordance with our Terms of Use.
