Alphabet, the parent company of Google, is a leading tech company that has decided to invest substantial resources and funding in artificial intelligence. So much so, that the WSJ recently reported that AI is central to Google’s future.
Not surprisingly, Google has been dealing with various challenges concerning its top AI executives and researchers. Activist shareholders are also showing interest in this area. Recently, there has been a rise in shareholder proposals calling on boards to ensure proper AI governance.
We live in a new technological era, one where board members have to be prepared for situations in which artificial intelligence (AI) affects, and maybe even disrupts, their deliberations, with regard to both shareholders and stakeholder groups.
To illustrate, the latest controversy involving Google and ethical AI concerns the departure of one of its leading stars, AI ethics researcher Timnit Gebru, who left (or was let go) after her research raised questions about the company’s approach to AI and its diversity efforts.
To explore how tech companies like Google should incorporate AI into their decision-making processes, I decided to interview my long-time friend Sergio Alberto Gramitto Ricci. Sergio and I have known each other since I was a doctoral student at Cornell Law School; he is a Lecturer at Monash University in Australia and previously held a Visiting Assistant Professor of Law position at Cornell Law School. I reached out to him to discuss his research on the use of artificial intelligence in the boardroom.
Looking to the future, we considered forms of artificial intelligence that can develop their own “views” on given matters, along with three different scenarios: assistance of the directors’ decision-making, hybrid boards, and the replacement of directors.
The following are some questions I asked him about his latest Cornell Law Review article, Artificial Agents in Corporate Boardrooms.
Q: What would you say to those who think that artificial intelligence could improve how corporations are run?
A: With respect to accountability, human directors’ decision-making should not be replaced or influenced by unaccountable artificial intelligence. I warn that using artificial intelligence to make decisions in boardrooms could lead to a void of accountability. The use of artificial intelligence in boardrooms could raise other issues as well. For example, I caution about the risk that directors could be captured by the artificial intelligence’s “views.”
Q: Do you expect directors not to feel comfortable disregarding the “views” of AI or deviating from such views because these views are provided by uber-intelligent machines?
A: I believe there is an omnipresent risk that directors would prefer to avoid disagreeing with uber-intelligent machines.
Q: Can AI machines serve as directors?
A: At least in Delaware, this would not be workable, because Delaware corporate law, arguably the corporate law the rest of the world looks to, requires directors to be natural persons, that is, human beings.
Q: What about jurisdictions that allow corporations to appoint other corporations as directors?
A: Even in jurisdictions that allow corporations to be appointed as directors of another corporation, human beings would ultimately sit in the boardrooms.
Q: For a second, let us envision a scenario in which AI machines are granted legal personality and legal persons can be appointed as directors. Can you provide an example of issues that might arise in a hybrid-board scenario?
A: In my opinion there would be more than one issue. For example, let us imagine that human directors (or some of them) are in favor of a decision, but the AI-directors are against it. There is a considerable risk that human directors would conform their view to that of the AI-directors. Human directors, who have consciousness and a conscience, would be accountable; whereas I do not know how AI-directors could effectively be held accountable. This would be an instance in which the risk that directors lose their independent judgment intertwines with the accountability issues possibly arising from the use of artificial intelligence in corporate boardrooms.
Q: Could you explain what role a conscience and consciousness play?
A: A conscience requires human beings to deal with the value of their actions. Consciousness allows human beings to anticipate and experience the effects of their actions.
Q: It seems that you place emphasis on consciousness and a conscience when you discuss accountability.
A: I think that we can simply say that those who deal with the value of their actions and are able to experience and anticipate the effects of their actions are more accountable.
Q: What about accompanying the use of artificial intelligence with a mandatory insurance policy that would cover damages caused by “artificial intelligence’s” decisions?
A: An insurance policy would not solve the accountability issue. It could have a restorative effect ex post. But unlike accountability, it would not operate ex ante. Accountability is not only about repairing; it is also about preventing. Arguably, accountability is even more than the sum of repairing and preventing. As I said when discussing the role of a conscience, if grounded in a conscience, accountability is also about dealing with the value of actions.
Q: What if artificial intelligence developed a conscience and consciousness then?
A: Philosophers warn us that if artificial intelligence developed a conscience and consciousness, it could also possibly experience suffering. Uber-intelligence could lead to uber-suffering. As I wrote in my article, “no potential benefits resulting from the use of AI in the boardrooms, in corporate governance, or in other settings could be worth the risk that artificial agents could suffer; even more drastically, no potential benefit resulting from the use of AI is worth the risk that relations between natural beings and artificial beings could evolve into exploitative relations.”
One thing is clear: we need more transparency and accountability about how decisions around AI ethics initiatives are made in the boardrooms of large tech companies.
On the policy side, several lawmakers, including Sen. Elizabeth Warren (Mass.), are pushing for the Algorithmic Accountability Act, which would require companies to audit and correct race and gender bias in their algorithms. I am hoping that we will see some positive progress on this front.