As detailed in a newly released 116-page strategy, city officials in New York City are looking to proactively build ethics into machine learning and AI usage as the technologies become vital pieces of everyday life.
The plan, dubbed The New York City Artificial Intelligence Strategy, was released yesterday by the NYC Mayor’s Office of the Chief Technology Officer. It is in many ways unprecedented, marking the most extensive and proactive action yet taken by a U.S. city government toward one of the world’s fastest-evolving technologies.
“AI will touch virtually every area of life in the years ahead,” said John Paul Farmer, NYC’s chief technology officer, during a conversation with Government Technology.
While the report most prominently features the city’s planned approach to supporting AI, it also serves in part as a primer on the basics of how AI works. Perhaps most striking for those familiar with AI discourse is the document’s tone. The report largely forgoes the common framing of AI as a looming danger, a threat poised to dominate or even destroy the world (a premise of several popular films, including the franchise starring Arnold Schwarzenegger), in favor of responsible optimism. Running throughout its pages is the guiding idea that, with ethical human involvement, AI and machine learning can be nurtured and grown into massively productive tools for daily life.
Farmer pointed to five main sections of the report. These sections detail how to modernize the city’s data infrastructure; the areas wherein AI can do the most good with the smallest potential for harm; ways the city can use AI internally to better serve residents; the importance of external AI partnerships with research bodies and academia; and the significance of ensuring the digital rights of New Yorkers are protected, with equitable opportunities built into a growing “AI ecosystem.”
While a good deal of the report is about plans for the future, one of its architects also discussed how AI is being used in the city already. Neal Parikh is NYC’s director of artificial intelligence, and he noted that New York City Cyber Command uses AI in its work around cybersecurity, as does the citywide administrative body that manages energy consumption.
Both of these civic organizations use AI to manage gigantic data sets that would otherwise take humans massive amounts of time to sort through, time spent on mundane and repetitive tasks rather than on higher-level work that requires unique problem-solving and creative approaches. Tapping AI for monotonous data management thereby saves the local government hours of work it would otherwise have to pay for with taxpayer dollars.
In the report there’s also a bedrock concern about what might happen if AI is misused, though the concern doesn’t lean toward a sci-fi doomsday scenario as much as one might think.
As with most new technologies, there’s potential for AI to inadvertently reinforce existing problems such as biases and inequities. There’s also a concern about the unknown, meaning that AI without human supervision could create new and unexpected problems, said Eileen M. Hunt, a professor of political science at the University of Notre Dame. Hunt studies the application and potential of AI and machine learning, and she has written extensively on the subject.
Overall though, Hunt described NYC’s new strategic plan around AI as “fascinating and very heartening,” crediting it for the breadth of community stakeholders that helped shape it. Those stakeholders include members of the New York business community, members of the local startup community, academics and human rights activists.
Hunt stressed how valuable it was for NYC to incorporate real people into its AI strategizing, both in forming this plan and in the ecosystem it seeks to create moving forward.
“Humans are not outside of the AI system,” Hunt said. “They are part of the system, and must understand themselves as that.”
In her writing, Hunt has compared humanity’s relationship with AI to Mary Shelley’s classic novel Frankenstein. Humans, she said, have a parental responsibility to manage and shape AI and machine learning, especially in how the technologies are applied to the lives of real people. In the novel, Frankenstein creates an intelligent being, his monster, then abandons it to grow on its own, untended and lacking the human guidance it needs to develop healthily.
The stakes for taking a responsible approach in these early phases of AI’s development are quite high. To quantify how vital AI is becoming, a recent survey of state governments found that 60 percent of respondents had found vital uses for the tech, up from 13 percent three years ago. Investment money is also flowing into government-adjacent AI startups across the country.
Social media — which often uses AI and machine learning algorithms to shape how content appears on its platforms — is perhaps a cautionary tale for letting this evolution occur without forethought. Social media platforms were largely left to operate unchecked until they began to affect everyday life, from the way people feel about their bodies to the outcomes of democratic elections the world over. Simply put, in the early days of social media, major local governments were not creating strategies for how to handle it responsibly.
In this way, the NYC AI document is quite significant. It builds equity, justice and human involvement into a local government plan for nurturing, growing and governing the use of the technology.
As the report opens, Farmer emphasizes NYC’s desire to take responsibility amid the rapid growth of AI.
“As a global epicenter of innovation and home to nearly nine million people, New York City has a key role to play in shaping this future,” Farmer writes. “Through the NYC AI Strategy, we are laying out the next steps needed to make the most of artificial intelligence, to protect people from harm, and to build a better society for all.”