Throwing the Poor Crumbs Isn’t “Increasing Access”
From crypto speculation to chatbot "therapists" to "tiny homes," our media routinely accepts the premise of manufactured austerity to justify a cruel caste system.
Neoliberalism, above all, is an ideology defined by endless choices but no options. We are “free to choose” between working the night shift at Taco Bell or destitution. We are “free to choose” between giving our money to this landlord or his buddy, the other landlord down the street. We are “free to choose” this underfunded school or this other underfunded school a bit further away. We are permitted the freedom to spend hours looking at all the options in the ACA marketplace: “free to choose” the $750 monthly plan with high deductibles or the $920 monthly plan that doesn’t cover our insulin pump. We live in the golden age of capital “C” Choice, but little-to-no real options.
A similar ideological regime has informed a great deal of reporting that’s helping codify caste systems in finance, healthcare, and housing. Increasingly, a system that provides the poor substandard, exploitative, and shoddy “choices” for basic life needs is being presented as inclusive, the Best Available Option. In other words, a system that artificially cuts out 20 or 30 percent of the population who are too poor to afford basic needs is said to “increase access” because the poor are permitted a shoddy, substandard version of what the wealthy get.
Nowhere of late has this type of myopic media framing been more apparent than in articles detailing the rise of “AI” chatbots replacing flesh-and-blood human care. One NPR report from January, “Therapy by chatbot? The promise and challenges in using AI for mental health,” uncritically repeated Silicon Valley claims that these chatbot “therapy” startups were making mental health care “accessible.” Quoting one user, the article insists that “at a practical level the chatbot was extremely easy and accessible.” The CEO of one AI “therapy” bot insists the company’s goal is to “broaden access to care.”
One BBC report from earlier this month was oozing with a similarly credulous “access” framing:
Around the world there are almost one billion people with a mental disorder, according to the World Health Organization (WHO). That is more than one person out of every 10. The WHO adds that "just a small fraction of people in need have access to effective, affordable and quality mental health care." And while anyone with a concern for either his or herself, or a relative, should go to a medical professional in the first place, the growth of chatbot mental health therapists may offer a great many people some welcome support.
Articles in trade rags Forbes, VentureBeat, and Fast Company trafficked in the same “expanding access” framing. In its February report on AI therapy, The Washington Post included a throwaway line acknowledging that “existing mental health care is expensive, inaccessible for many people, often of poor quality,” even as its own editorial board repeatedly demagogues against Medicare for All, which would go a long way toward solving this underlying problem.
Nowhere in any of these articles is there a discussion of why millions cannot afford therapy. No mention of why 28 million people in the US are without any health insurance, or why 57 million are underinsured. A story about people who are alienated, seeking human connection, and not being provided for by our society is turned into a story about how this very same system is actually coming to their rescue, because companies found a way of creepily mimicking human interaction at a scalable profit, effectively tricking people into feeling as if they’re receiving a facsimile of human connection.
To be fair, many of these articles are skeptical of the claimed upside. But they largely accept that the problem is one of market quirks, agency-free accidents of nature, rather than a widespread political and moral failure to democratize medical care in the United States. Many do note, for example, that chatbots cannot replace humans. Some even naively suggest they’re not meant to. But a libertarian logic informs the entire narrative, even when skeptical: If it works for you, go for it. But it only “works” for people because they have no other option. This central fact is avoided, replaced instead by a framework of Concern and discussion of how ethicists are handling the “changing landscape.”
Make no mistake: No sufficiently rich person will ever use AI chatbots for therapy. Because they actually have a “choice,” they will instead pay another human being to provide the nuanced and complex human connection adequate therapy requires.
When the choice is nothing or a substandard, exploitative product, this is not a choice at all—it’s extortion.
What should be a human right, what should be guaranteed by the richest country in the history of the world by paying people to go to medical and graduate school so these slots can be filled with human beings, is treated like an unfortunate market failure being solved or mitigated by this fun new tech. False austerity is simply presented as inevitable and natural, and the reporter moves on to how Silicon Valley can solve (read: profit off of) this widespread systemic failure by creating what is, in effect, a subprime market of low-cost robotic therapy cliché machines.
A similar perverse logic fueled the crypto bubble of 2021 and 2022. While US media largely avoided this particular angle, crypto boosters’ in-house comms and marketing strategy leaned heavily into squishy slogans about crypto serving underbanked and unbanked minorities, selling the final, bottom rung of the pyramid scheme to the most vulnerable. That poor people are exploited by banks and shut out of credit because they’ve been screwed over by an unfair, racist credit rating system is repackaged as their simply not having “access” to other, even more exploitative financial systems. Employing the language of “access” and “inclusivity,” Spike Lee and other black celebrities worked to find the last untapped vein of credulous capital remaining.
Running out of new injections of cash to artificially prop up their products, crypto moguls pushed them on those least able to withstand the profound risk of securities speculation. Lo and behold, once the crypto crash inevitably occurred, it disproportionately hurt black people.
Increasingly, as I noted two weeks ago, lowering housing standards for the poor is presented by many in our media as a smart, innovative way of expanding “choice.” Tiny houses, new laws permitting basement and garage living spaces, cargo-container dwellings, and homes without kitchens or private toilets are all fresh new ways of increasing “access” to housing by giving lower-market consumers a “choice” to live in small, dark, less safe apartments. Instead of reporters examining why this false austerity exists and criticizing the lack of investment in public housing, we are mostly presented with new ways of cutting corners, providing incentives for private developers, and gutting safety standards, labor laws, and environmental oversight. These are said to be the Only Available Option for providing homes to the poor.
How convenient this is. When asking larger questions about our system is stigmatized as too “ideological” or “biased,” or as outside the scope of standard reportage, the narrow, austerity-driven, prepackaged framing will be the only one allowed. This isn’t to say, of course, that no reporters are asking these more systemic questions, but resources for doing so are rare and largely segregated from popular trend pieces on these new “innovations.” After all, “tech reporters,” “housing reporters,” and “healthcare reporters” are supposed to simply stay in their lane, own their beat, and not ask why millions don’t have “access” to the basic needs that overlap with that beat. They’re supposed to accept, as axiomatic, that the poor simply don’t, and put a recorder in front of the faces of all the smart, savvy people who will explain how capitalism will come along and solve the problem it created in the first place.