I thought today I might talk about something that’s a little more esoteric than our usual on-the-ground civic concepts, but which I find just as crucial in shaping political conversations and driving better policy: the question of metrics and outcomes.
What I mean by that is, a modern city, and certainly one the size of New York, is constantly measuring things: how much revenue is coming in and expenses are going out, how many bus and subway trips are being taken, how many people are utilizing social services and assistance, how much agencies are spending on overtime, and on and on.
We as residents tend to think of this as just a process constantly running in the background, practically automatic, but it’s worth keeping in mind that exactly what ends up being counted and how it gets logged shapes a lot about how we understand our city and the efforts of our elected officials. Data is not an absolute truth, and it can be just as manipulable as anything else.
For the students in my journalism classes, I often invoke the specter of twisted data using one of my favorite examples, a story from a few years ago about how the Border Patrol had reported an enormous increase in assaults on agents nationwide, prompting a flurry of indignant headlines about how the agency was under siege and the victim of some sort of organized attack. The reported 73 percent spike in assaults year-over-year certainly looked alarming, but it took a dedicated beat reporter with an intimate knowledge of the border region and the patrol’s operation to smell something fishy and ask some basic questions about how these numbers were being arrived at, and what they really meant.
In the end, as reporter Debbie Nathan wrote in The Intercept, the method for counting the assaults was practically comical. The agency was multiplying the number of assailants by the number of weapons used by the number of agents involved. Hence, six people throwing three different types of things (branches, rocks, and bottles) at seven agents was logged as 126 separate assaults, instead of the more sensical seven. There was nothing mathematically wrong with the calculation, but it was obviously not what a reasonable person would understand the number to mean.
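For readers who like to see the arithmetic laid bare, here is a minimal sketch of the two counting methods described above. This is purely illustrative — the function names are my own, not anything from the agency’s actual record-keeping system — but it shows how the same incident yields 126 under the multiplication scheme and seven under a common-sense tally.

```python
# Illustrative only: hypothetical functions contrasting the Border Patrol's
# reported counting method with a common-sense alternative.

def inflated_count(assailants: int, weapon_types: int, agents: int) -> int:
    """Multiply assailants by weapon types by agents, so every
    combination is logged as a separate assault."""
    return assailants * weapon_types * agents

def sensible_count(assailants: int, weapon_types: int, agents: int) -> int:
    """Count one assault per agent who was targeted in the incident."""
    return agents

# The incident from the story: six people throwing three kinds of
# projectiles (branches, rocks, bottles) at seven agents.
print(inflated_count(6, 3, 7))  # 126
print(sensible_count(6, 3, 7))  # 7
```

Nothing in the multiplication is mathematically wrong, which is exactly the point: the distortion lives in the definition of what gets counted, not in the math itself.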
Yet that doesn’t stop these deeply flawed or often incomplete numbers from being trotted out all the time to influence public opinion and political discourse. The Border Patrol example is an extreme one, but we see versions of it everywhere, including here in the city. When someone talks about apartments designated affordable, you might have a gut sense of what that means, but do you really know what the official affordability income bands are? Did you know, for example, that some of the higher-end units in affordable housing lotteries are designated for families with incomes over $160,000 a year, and that those lotteries are much easier to win than the far more desperately needed lotteries for lower income band units?
Sometimes, the most useful measurements and metrics aren’t just misleading, they’re missing altogether. Infamously, the $850 million ThriveNYC mental health program run by former NYC First Lady Chirlane McCray collapsed in a heap after reporter Amanda Eisenberg found that the program was barely keeping track of the outcomes it was supposed to be improving. While the program’s goals — improving postpartum depression, helping people deal with substance abuse issues, preventing suicide, among others — were laudable, the lack of detailed accounting of whether the initiatives were actually working made the whole thing untenable, and even counterproductive. Tons of money was being sunk into a system whose results were uncertain at best, and that’s ultimately money that could have been directed to effective programs that needed it.
More recently, we on the New York Daily News editorial board and others have asked Mayor Eric Adams to be more transparent about how some of his signature public safety policies, including the sweeps of homeless encampments, are actually panning out. The mayor’s office very occasionally reports absolute numbers like the number of encampments that have been broken up and the number of people who have ostensibly received city services, but leaves out crucial data, including how many of these encampments appear to be just reconstituted versions of ones that were broken up before, and how many of the people who do receive services ultimately find new shelter or permanent housing versus ending up back on the street. How many of their personal belongings end up getting tossed out, given reports that this is common? How many have substance abuse issues that contribute to their homelessness?
The same is true for recent policies around the NYPD’s ability to forcibly institutionalize people with mental illness, for efforts to get guns off the streets with technology, and for whether street redesigns are actually reducing traffic crashes and fatalities. Testing in schools is one of those data points whose effectiveness and meaning we endlessly debate, but for whatever reason even civically engaged New Yorkers often simply accept the data that the government provides — what it measures, on what scale, with what tools.
It’s not just a minor thing. Ultimately, these are lenses through which we understand the world, and consequently how we choose to change it, and what change to demand. What are some ways you think we could all benefit from better government metrics? Which metrics fall short, and which are good examples of what better data can achieve? Feel free to let us know.