Is Artificial Intelligence Artificial Enough? – C-Suite Quarterly

GHOSTS IN THE MACHINE

The casual reader may struggle with the concept of biased technology. After all, isn’t technology the objective, unemotional translation of zeros and ones into something we can use? How can technology carry bias, or any other human failing? The answer lies in what AI is, and what it isn’t.

AI, despite its name, is not some miracle technology that replicates the human brain and its neural processes to approximate human thought and cognition. At its most basic level, AI is complex and advanced code that enables systems to gather immense amounts of information from different sources, analyze it for patterns and relationships, draw multiple conclusions with different probabilities, and act on those conclusions, all at incredible speed. Advanced AI systems learn over time, refining their processes with algorithms that continuously feed previous patterns and outcomes back in as additional inputs to future analyses.

Every stage of designing, implementing and operating these AI systems is driven by humans. It is humans who write the code; humans who define the analytical algorithms; humans who provide sample data and scenarios to test the algorithms; humans who define how analyses are translated into hypotheses and hypotheses translated into conclusions; humans who define what actions correspond to what combination of conclusion and probability.

And every person brings with them their own preconceived notions and biases. These are reflected in what AI algorithms search for, what correlations are made, what conclusions are arrived at, and, most importantly, what actions are initiated. The whites-only automatic soap dispenser, more primitive than an AI system, was designed by humans who calibrated its infrared sensor for the light-reflective properties of pale skin. Facial recognition software does not work well with Black and brown faces because, let us face it, while we obsess socially over the infinite shades of blond hair and blue eyes (why else would we have strawberry blond and cornflower blue?), all people with black hair and brown eyes “look the same” to us. When the Google algorithm matched photos of Black men with its database of gorilla photos, Google “fixed” this by removing the tags monkey, chimp and gorilla from photos of monkeys, chimps and gorillas, so that photos of Black men would no longer be correlated with those images. The question yet to be answered is why Google’s algorithms were matching Black human beings to images tagged as monkey, chimp or gorilla in the first place.

HYPER-ACCELERATED HARM

Don’t believe me? Just google the word “CEO” and look at the images. Only 7% are women. Even worse, the book “Marrying a CEO,” published by BWWM Romance (that is short for Black woman/white man, and I am not making this up), comes up before the current or former CEOs of GM, Anthem, UPS, Oracle, IBM, HP, or Pepsi. The first Muslim person listed links to a story about his gruesome murder, and the first image of a Black man is 17 lines down, in a reference to an internship program that lets underprivileged people experience being “CEO for a day.”

The message is that CEOs are almost uniformly male, mostly white with some space for East and South Asian model minorities, and women and Black and Latinx and Indigenous people need not apply. This is human bias reflected in technology algorithms. 

As AI becomes more pervasive, the potential for harm escalates into an immediate and severe risk to minorities and marginalized communities. AI eliminates people with “Black” and “ethnic” names from recruitment processes five to seven times more often than people with “white” names; suggests similar male names (Stephen vs. Stephanie) when we search for women on professional networking sites; wrongly flags Black defendants as high risk to reoffend at twice the rate of white defendants; misidentifies Latinx people as wanted criminals at immigration checkpoints; misidentifies transgender, non-binary and gender-nonconforming people while dispatching emergency services; defines majority-minority neighborhoods as crime hotspots when actual crime rates are spread far more evenly; and deploys heavily armed police to fight crime that simply did not happen. In the name of technological advancement, we are putting people directly in the firing line of technology’s biases, sometimes literally. This needs to stop.

NOT-SO-INVISIBLE HANDS

AI must reflect an equitable and fair approach to human variability for it to truly serve the common good of all people. The only way to achieve this is to acknowledge the biases that are already integral to AI systems, and then work affirmatively to create systems that eliminate these biases. Having worked in large technology companies that are leading the charge on innovating and commercializing AI at scale, collaborated with multiple startups and academic institutions that are driving much of the research and innovation, and advised both large corporations and governments on how to leverage AI for the common good, I believe there are four steps we all must take to minimize human bias in AI:

  1. Ensure representation: All parties in the AI ecosystem must make growing, hiring, training and promoting diverse talent a priority. This will not by itself eliminate bias, but as diverse teams work together, discuss options and design solutions, the different lived experiences of women and of Black, Latinx, Indigenous and LGBTQ people counterbalance biases that currently reflect a small, privileged, relatively uniform slice of society.
  2. Empower dissent: Technology is often a race to be the fastest or the cheapest, or both. Any voice that gets in the way of that mission is discouraged. It took more than 20 years of failed technology projects for quality assurance to become an independent discipline. Technology ethics must similarly become an independent empowered voice, with the power to identify issues, challenge consensus and direct improvements.
  3. Do no harm: AI algorithms often harness the power of technology to solve pre-defined problems, with limited focus on the real-life implications of their analyses and decisions beyond the immediate problem statement. Every AI system must be tested against known risks, and its builders must invest in identifying and testing for unknown, unintended consequences, to confirm that the design purposefully mitigates and eliminates these issues.
  4. Remember you are human: Technology is a frontier, and the leaders who are charting new paths demonstrate the same combination of confidence, hubris and arrogance as any other group of people exploring new frontiers. The belief in the infallibility of technology is a belief in the infallibility of people pushing the boundaries of progress. All people, even exceptional technology innovators and entrepreneurs, are fallible. Remember that and be humble.

THE CHOICE BEFORE US

Without purposeful redirection of innovation to address the human biases that proliferate in AI, the promised benefits will accrue almost exclusively to the same select few to whom the benefits of all change accrue disproportionately. It will expose our most vulnerable communities to the same biases that have served to systematically disenfranchise, impoverish, criminalize and, let us admit it, kill, except with the heightened efficiency, speed and anonymity that technology affords. We should not need activists to put “Powered by AI™” markers on graves before we act. We as leaders must step up to acknowledge these flaws, accept our role in creating these risks, and take urgent action to make AI less reflective of the worst tendencies of the human condition. We have landed men on the moon, built cars that drive themselves, and can connect people around the globe instantaneously. Surely we can create technology that serves all humankind.
