Earlier this week, Mayor Eric Adams announced a new roadmap for the city’s use and regulation of artificial intelligence. Many people fixated on the more bizarre details, particularly the mayor’s mention that the city is using a system to make robocalls in his voice, including in languages he doesn’t speak.
Yet beyond the silly stuff, the document actually presents a relatively robust vision of how the city can shape its approach to a set of potentially transformative technologies in a way that’s responsible and serves New Yorkers first and foremost. While Adams’ personal affinity for technology as a whole has been well established — recall his desire to be paid in Bitcoin, and his unrelenting enthusiasm for expanding the NYPD’s tech arsenal — the plan itself, surprisingly and thankfully, heavily emphasizes regulation. Its first initiative is to design a framework, driven by a steering committee and featuring public engagement and outside counsel, to constrain the use of AI and increase transparency and reporting.
There’s plenty in there about efforts to increase development and adoption of AI tools within the government, to be sure, but woven through the entire plan is the point that these technologies — ranging from large language models like ChatGPT that have quickly become popular, to pattern analysis that finds anomalies in datasets, to neural networks that can more closely approach creative problem-solving — are potent and potentially hazardous, and it’s government’s responsibility to oversee their responsible use.
Just one example is the establishment of a process to continuously review AI tools already in operation and “assess the tool’s effectiveness at fulfilling its stated goals, protecting against drift or shifting objectives, and facilitating project improvement.” Frequent auditing of government programs is a hallmark of effective oversight, and it’s very welcome to see it here, especially in recognition that these uses are far from hypothetical. AI has surged just in the last several years, and there’s no reason to think its increasing dominance in our lives will be reversed. Local governments will be under significant pressure to roll out “innovation” without really understanding it or having rules in place to manage the tools, a combustible combination that can really hurt a lot of people. This plan provides a better path forward.
One other good thing is the active effort to demystify AI itself. So far, efforts at both regulation and positive rollout have been hampered somewhat by the public perception — also held by a concerningly large contingent of public officials — that AI technology is, cumulatively, basically magic, which will either solve all our problems with its inexhaustible potential or destroy the world, depending on who you ask.
I don’t mean to minimize the potency of the technology; in fact, a relatively simple tool can have enormous impact. Generally, though, that’s not because it is magic, but because we treat it like magic. Take one of the earliest examples I can remember reading about of the damage done by the government’s use of an algorithm: the 2016 ProPublica story “Machine Bias,” which detailed how so-called criminal justice risk assessment programs were failing wildly to predict whether a particular defendant would reoffend, and failing in a provably racist way.
I encourage you to read the story in full, but the gist is that these algorithms were spitting out scores used by judges and prosecutors to determine the length of criminal sentences and whether people might be released on bail or parole. Their selling point, in a twist of irony, was that they would help the criminal justice system be fairer, as they would eliminate some of the bias inherent in having these decisions made completely by flawed people. The machine cannot be racist, after all, right?
It turns out, though, that when you take the outputs of a malfunctioning and racist system and use them as the training data for a predictive system, the predictions inherit the same biases. So the algorithms were flagging Black people — or people who were homeless or otherwise already over-policed — as more likely to commit violent crimes, because those groups had historically been policed more heavily, and so showed up more often in the arrest records the models learned from. You could even argue that using this tool, which was sold as a way to eliminate unjust outcomes, was actually worse, not because it was necessarily more biased than any given prosecutor or judge, but because it presented a façade of impartiality. If a particular judge or prosecutor seemed to consistently flag Black people for suspicion, that might draw some scrutiny of their motivations. A computer doing the same thing is likelier to be overlooked.
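To make that mechanism concrete, here is a minimal, hypothetical sketch in Python — with made-up numbers, and emphatically not the actual COMPAS model or its data — of how a risk score trained on arrest records ends up reproducing policing patterns rather than underlying behavior:

```python
# Hypothetical illustration: a model trained on biased historical labels
# reproduces the bias. All groups, rates, and feature names are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Protected attribute: 0 = group A, 1 = group B (the over-policed group).
group = rng.integers(0, 2, size=n)

# Assume the true underlying rate of reoffending behavior is identical (20%).
true_behavior = rng.random(n) < 0.20

# The historical label is *re-arrest*, not behavior. Over-policing means
# group B's behavior is detected and recorded far more often than group A's.
detection_rate = np.where(group == 1, 0.9, 0.3)
rearrested = true_behavior & (rng.random(n) < detection_rate)

# Train a "risk score" on the biased labels, using a proxy feature that
# correlates with group membership (e.g., neighborhood policing intensity).
neighborhood_policing = group + rng.normal(0, 0.3, size=n)
X = np.column_stack([neighborhood_policing])
model = LogisticRegression().fit(X, rearrested)

scores = model.predict_proba(X)[:, 1]
print("Mean risk score, group A:", scores[group == 0].mean().round(3))
print("Mean risk score, group B:", scores[group == 1].mean().round(3))
```

In this toy setup the two groups behave identically, yet the over-policed group gets systematically higher “risk” scores, because the model has learned the policing pattern, not the behavior.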
That’s why the objective to “prepare city personnel to effectively and responsibly work with and on AI, recognizing that AI literacy is critical not just for those in technical roles, but also for the many public servants who use, manage, or make decisions about AI tools,” is so crucial. It recognizes that there is danger in having not just the public at large but particularly city workers and decision-makers think of AI as inscrutable or magic. Hopefully, this effort will train public employees to understand that each AI tool is just that: a tool, like a hammer or a calculator.
Each has its own uses and pitfalls, and none of them is the artificial general intelligence that we know from science fiction, replicating human reasoning and inventiveness (and which has its own risk profile, up to and including some experts’ concern that it would literally try to kill us all). Having city workers understand what these technologies can and can’t do, and actually building out frameworks to constrain how they’re used before they’ve been widely deployed, is more crucial than a lot of people realize. It’s very hard to put the toothpaste back in the tube, and unfortunately the recent history of government response to rampaging tech is very reactive, taking action only after people have already been hurt. Crypto exchanges that were obviously operating as unregulated, scam-ridden financial marketplaces received serious scrutiny only after they crashed and vaporized billions of dollars in customer funds.
AI could do a lot of great things here. It could help identify where buildings are failing to control emissions, flag students who should be screened for attention disorders, or help New Yorkers navigate complicated processes more seamlessly. It could also do a lot of harm. If the city takes this effort seriously, and follows a mandate to cultivate the former and control the latter, it could really be a trend-setter for municipalities nationwide. One of the initiatives in the plan, by the way, is to “foster public engagement,” which will begin as a series of public listening sessions and eventually grow into rolling public engagement on AI use, so now’s a good time to start giving some thought to your own concerns and ideas.