How AI Will Create a "Permanent Underclass"
Hint: it's not how you think.
What is an “underclass”? As Isabel Sawhill wrote in her introduction to a 1989 issue of the Public Interest dedicated to the topic, “violent crime, drug abuse, teen pregnancy, illiteracy, joblessness—these are some of the hallmarks of what has come to be called ‘the underclass.’” In Sawhill’s telling, the underclass were not merely the poor, but the poor who lived on the dole despite being able to work. The group, moreover, often engaged in the “tangle of pathology” Sawhill identified as characteristic of it.
Although it is now largely forgotten, this group for a time consumed a great deal of popular attention and debate. Its members were the subject not only of articles in scholarly quarterlies but also of acts of Congress. The consensus was that the underclass’s position was the result of both social and policy factors, interacting and reinforcing each other to produce a culture of dysfunction.
It is probably a coincidence that Silicon Valley denizens have now repurposed the word “underclass” to describe the result of artificial intelligence causing mass unemployment. As Jasmine Sun put it in her recent New York Times essay on the topic,1 “most people I know in the A.I. industry think the median person is screwed, and they have no idea what to do about it.” As Sun explains it, many of those building the future think that “people have a limited window of time to build wealth before A.I. and robotics are advanced enough to fully replace human labor. At that point, we will get frozen in our current class positions: The rich will be able to deploy superintelligent machines to do their bidding, and everyone else will be rendered useless and unemployable, left to live off welfare scraps.”
Sun notes that most economists and AI experts “do not expect this scenario.” But then she somewhat inexplicably asserts that “a social underclass is a policy choice. Instead of waiting for impact, we need to think seriously—now—about how we plan to support workers through A.I. disruption.”
The economists and AI experts are correct—a permanent underclass is not coming. But the favored solutions of those enamored of the theory—an AI-funded universal basic income (UBI), a jobs guarantee, or the like—do threaten to recreate an underclass similar in viciousness to, and greater in magnitude than, the one Americans concerned themselves with in the 1990s. The real danger is not that AI will produce a permanent underclass, but that the panic over technological change will lead policymakers to build one themselves.
As I have written elsewhere, the bearish case for AI’s effects on employment tends to make elementary economic errors. As AI becomes cost- and skill-competitive with humans on certain tasks, it will inevitably replace them in those roles. But there’s a reason past technological innovation has not resulted in mass unemployment: people’s wants are infinite, so human labor freed from one purpose by automation can be redirected to another. It’s hard for us to imagine the jobs of the future, but it was also hard for people to imagine the jobs enabled by innovations like the airplane or the printing press.
Sun, to her credit, does not go as far as some do in predicting mass unemployment. But she does nod to the idea that the labor-market adjustments associated with AI-enabled automation—the people who will lose jobs and need to retrain—will be substantial and socially disruptive. And she intimates that politicians need to be “gutsy” in proposing big policy ideas to address that disruption.
There are problems with both Sun’s premises and her conclusions. It’s not obvious ex ante how big the adjustment will be—especially because those most at risk of automation (high-skilled white-collar workers) are also those most likely to be able to switch jobs quickly. The examples of adjustment risk that Sun points to—young workers losing their jobs to AI and the effects of offshoring to China on manufacturing unemployment—are substantially contested. AI is a big technology shift, but other such shifts have often had surprisingly small effects on employment even in affected sectors.
But the bigger problem is with Sun’s conclusion that AI presents an opportunity for populist politicians “to push ideas that are usually too radical for moderate voters to swallow,” especially in the upcoming 2028 presidential election. One imagines a redux of the 2020 Democratic primary, in which candidates vied to name the biggest possible handout—a UBI, a job guarantee, Medicare for All, and so on. This time, however, it will all be funded by a punitive tax on AI and the wealth it generates.
The policy goal, presumably, is to insulate those who might otherwise end up in the “permanent underclass” from the consequences of AI-induced unemployment. But shifting toward an AI-funded “post-work” welfare state risks creating exactly the underclass it is meant to abate.
Consider the effects of transfers on employment, which are generally negative. Small programs, exemplified by randomized controlled trials of unconditional cash transfers, tend to reduce work by small amounts. Larger programs have larger impacts. As Sean Speer wrote here recently, the decline in labor force participation since the turn of the millennium is significantly driven by the growth in Medicaid rolls. The incentives cut both ways, Speer notes: “the Earned Income Tax Credit, for instance, has increased workforce participation among single mothers by rewarding work. Disability insurance expansions have been linked, at the margin, to lower participation among some groups.” To that, we might add that welfare reform sent millions of single mothers to work.
Of course, many people would still work under an AI-funded UBI. But the hypothetical AI-funded regime would also underwrite the lives of those who already do not work—a population disproportionately prone to drug use, crime, unwed childrearing, and the other pathologies that Sawhill identified back in 1989.
In the days before welfare reform, there was sometimes discussion of people being “trapped” in the culture of poverty—an arrangement memorably depicted by Charles Murray in 1984’s Losing Ground (a book published with the support of the Manhattan Institute). It is easy to see how an AI-funded welfare state could replicate that dynamic—producing the very “permanent underclass” it is meant to avoid.
Then, as now, welfare is not the sole cause of the underclass lifestyle. But government dollars can and do subsidize it—perpetuating and reinforcing the culture, discouraging people from exiting it, and encouraging others to enter it.
How, then, do we respond to the problem of AI-induced dislocation? As Adam Lehodey has argued here, the competitiveness of automation against labor is partly the result of policy systematically disadvantaging labor, specifically by imposing regulations that make workers less competitive. Removing policies that raise labor costs and cutting barriers to job switching, combined with strategic retraining initiatives, will help keep those at risk of automation nimble and able to provide for themselves.
Sun is right about one thing: progressives will use the specter of AI to push for their preferred economic policies come 2028. But using automation as a pretext for more generous transfers isn’t a way to save people from the underclass—it’s a way to put them there.
1. To be fair to Sun, she does briefly mention that “In the United States, the term ‘underclass’ gained currency in the 1960s to describe the factory workers left behind by the postwar automation boom.” I don’t think that’s really an accurate gloss, but at least she nods to the history.