Is that artificial intelligence ethical? Sony to review all products

TOKYO — In a future smart city run entirely by artificial intelligence, you might walk into a bar for the first time but “the bartender knows your favorite drink.”

So expounded Bjarke Ingels, founding partner at Danish architecture firm BIG, in an online discussion for the Web Summit tech conference this month.

Information on the weather, eating habits and other details of daily living would be pooled to understand residents’ needs.

But even as it increases convenience, AI could unintentionally employ discriminatory algorithms, leading to embarrassing problems.

Sony will start screening all of its AI-infused products for ethical risks as early as spring, Nikkei has learned. If a product is deemed ethically deficient, the company will improve it or halt development.

Sony uses AI in its latest generation of the Aibo robotic dog, for instance, which can recognize up to 100 faces and continues to learn through the cloud.

A man pets his Aibo at Sony’s fan meeting in Tokyo. The robotic dog can recognize up to 100 faces and continues to learn through the cloud. © Reuters

Sony will incorporate AI ethics into its quality control, using internal guidelines.

The company will review artificially intelligent products from development to post-launch on such criteria as privacy protection. Ethically deficient offerings will be modified or dropped.

An AI Ethics Committee, with its head appointed by the CEO, will have the power to halt development on products with issues.

Even products well into development could still be dropped, and products already on the market could be recalled if problems are found. The company plans to gradually extend the AI ethics rules to its finance and entertainment offerings as well.

As AI finds its way into more devices, the responsibilities of developers are increasing, and companies are strengthening ethical guidelines.

In 2019, regulators in the U.S. state of New York announced an investigation into Goldman Sachs for possible gender discrimination in the algorithm powering the Apple Card. Married cardholders had found that the husband had a much higher credit limit than the wife, even though the wife had a better credit score.
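
The kind of disparity regulators were probing can be illustrated with a simple audit. The sketch below is a minimal, hypothetical example: the dataset, column names (gender, credit_score, credit_limit) and score bands are assumptions for illustration, not Goldman Sachs' or Apple's actual data or methodology.

```python
# Minimal sketch of a disparity audit on hypothetical credit-limit data.
# All column names and thresholds here are illustrative assumptions.
import pandas as pd


def limit_disparity(df: pd.DataFrame) -> pd.DataFrame:
    """Compare average credit limits by gender within matched credit-score bands."""
    df = df.copy()
    # Bucket applicants into score bands so comparisons hold creditworthiness roughly constant.
    df["score_band"] = pd.cut(df["credit_score"], bins=[300, 580, 670, 740, 800, 850])
    summary = (
        df.groupby(["score_band", "gender"], observed=True)["credit_limit"]
          .mean()
          .unstack("gender")
    )
    # A ratio far from 1.0 within the same score band is a red flag worth investigating.
    summary["ratio_m_to_f"] = summary["M"] / summary["F"]
    return summary


# Example usage with toy data:
data = pd.DataFrame({
    "gender": ["M", "F", "M", "F", "M", "F"],
    "credit_score": [720, 745, 690, 700, 810, 805],
    "credit_limit": [20000, 9000, 15000, 7000, 30000, 12000],
})
print(limit_disparity(data))
```

An audit like this only flags a pattern; explaining or correcting it requires looking at the model and the features it was trained on, which is the kind of review Sony's screening process is meant to formalize.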

This followed a 2018 episode at Google, where protests by thousands of employees led the company to pull out of a project using its AI technology for military drone video analysis.

If AI makes the wrong decision or leads to physical harm, “it could pose a risk to the continuity of a business,” said Gakuse Hoshina, AI Center lead at Accenture.

AI ethics was a major topic at the World Economic Forum in January.

“AI is one of the most profound things we are working on as humanity,” Alphabet chief executive Sundar Pichai said. “It’s more profound than fire or electricity or any of the other bigger things we have worked on.”

But, he said, there are negative consequences.

“As democratic countries with a shared set of values, we need to build on those values and make sure when we approach AI, we are doing it in a way that serves society. And that means making sure AI doesn’t have bias, that we build and test it for safety.”

Additional reporting by Kazuyuki Okudaira in Palo Alto, U.S.
