Voters looking for information on next year’s city comptroller race may have been confused by websites named for two of the candidates — Assemblymember Jenifer Rajkumar and Councilmember Justin Brannan — that appeared on the surface to be campaign sites but were actually fakes assailing the two candidates.
As Gothamist reported, the sites appear to have been set up by the same person or group, who wanted to tie the two candidates to Mayor Eric Adams and spouted personal insults about them with a right-wing sensibility. These sorts of dirty tactics by anonymous actors have become commonplace in national elections, but the sites raised concerns that the approach is trickling down to the local level, turbocharged in part by AI tools.
AI and mis- and disinformation aren’t exactly the same thing, but the former certainly makes it possible to do the latter at a scale not otherwise achievable. Some chatbots have gotten good enough that they can be given some simple prompts and instructions and unleashed on social media to pretty convincingly spoof voters with certain political leanings.
The proliferation of political bots has sometimes become amusingly apparent when other users get the bots to ignore prior instructions and reveal themselves to be AI. Earlier this year, The New York Times made its own liberal and conservative chatbots as a demonstration of how plausible they can sound. When the paper asked the conservative one what it thought about liberals, it responded among other things that “I think theyre all insane. Theyve been brainwashed into thinking trump is the devil incarnate when he has done nothing but good for America,” grammar and syntax errors and all.
Online troll armies are nothing new, but they at least used to have to be staffed by actual people, like the now-infamous Russian-backed groups that seized on pressure points in the 2016 U.S. presidential election to push for a Trump victory, which lately have been spreading anti-Ukraine propaganda. With large language models like ChatGPT, a small and ad hoc group of people can conceivably keep thousands of troll accounts running constantly.
Beyond sheer scale, AI tools also permit wholly new ways of producing misleading or false political content. Right here in NYC earlier this year, the voice of former Assemblymember and Manhattan Democratic Party leader Keith Wright was spoofed in a ten-second recording that sounded like Wright was unleashing a vitriolic tirade against Assemblymember Inez Dickens. Like most other such bits of misinformation, the recording was spread anonymously and set off a lot of consternation in local political circles before it was revealed to have been fabricated; even just a few years ago, convincingly faking someone’s voice like that would have been impossible.
What a lot of researchers will tell you is that the problem isn’t just the misinformation campaigns themselves, which, at least at the city and state level, have not been as widespread as they are nationally. The deeper problem is the erosion of trust in political information altogether. Wright’s recording was a fake, but what’s to stop a politician caught on a hot mic saying something racist or inflammatory from claiming that recording is a fake, too? Not everyone has to believe it — just enough people to dampen the electoral impact.
The whole JD Vance couch controversy can pretty much be traced to a false claim by a single online shitposter before blowing up nationally. Part of what made it spread like wildfire (I’m not going to detail it here, click the link if you want) is that the original poster didn’t just make up the rumor but threw in a citation, supposedly to Vance’s breakout book Hillbilly Elegy. I think that little detail is what lit the spark, because it seemed like a bit of evidence that almost no one would bother to check. I’d imagine most politically aware people now realize it wasn’t true, but some people probably think it is, and at some point it’ll get digested into a half-remembered morsel of information whose truth or untruth is beside the point. It’ll just be part of the Vance story.
That’s all about a frankly ludicrous rumor with a made-up source. It’s much worse with something more believable and meticulously faked, and in the lower profile of a local race rather than the blazing-hot glare of a national presidential contest. So if locally there are going to be more “parody” websites that start looking more like traditional campaign websites with misinformation thrown in, or dark money political websites meant to look like news sites, or more deepfakes of political figures, then we’d all best be ready to sift through it ourselves.
Ideally, it wouldn’t just be up to the electorate and news consumers to counteract torrents of misinformation that are getting easier to produce and more sophisticated all the time, and there are some good signs when it comes to regulation. The Federal Communications Commission has moved to ban AI-generated robocalls after one spoofed President Biden’s voice earlier this year. States around the country, including New York, have recently begun to ban undisclosed political deepfakes, in theory forcing all such generated content to be labeled. But there is also a good amount of enthusiasm around AI, and some potent players with significant political and economic clout have very clear interests in pushing the technology, come what may.
Gov. Kathy Hochul herself is now pushing to make the state a hub of AI research, with an eye to the economic benefits the industry promises. And there certainly are some benefits; the use of AI in medical research, certain aspects of fraud detection, or the optimization of public works, for example, can be a real boon. Yet much of the consumer-facing technology right now seems to consist of solutions in search of problems, with significant downsides that go well beyond political misinformation.
Adams infamously deployed an official NYC chatbot meant to orient businesses wanting to operate in the city, which was instead found to be advising business owners to break the law in various ways, including by engaging in wage theft. As City & State reported, the governor’s AI plan alone seems to threaten the state’s compliance with its long-term climate goals, given the sheer volume of electricity the systems hoover up.
In both AI and traditional non-AI misinformation schemes, there are also big pending questions of enforcement. Many of these laws and regulations are new, and it’s not yet clear how, or whether, they’ll be enforced. Look at what happened with cryptocurrencies, which moved so fast that we saw several multi-billion-dollar collapses before regulators could really catch up. One thing is certain: the shadowy groups hoping to influence elections and political movements with false or artificially generated information won’t be giving up, and their tools will only get more potent.
To check out more of our political coverage, visit here.