For Workers, ‘AI’ Means ‘Apocalyptic Insecurity’
With Lynn Parramore and Alissa Quart.
Just when you thought work couldn’t get any worse ...
Lynn Parramore and Alissa Quart wrote an excellent article for The New Republic on (so-called) AI in the workforce. Its headline reads, “For White-Collar Workers, AI Also Stands for ‘Apocalyptic Insecurity.’” The article makes a valuable contribution to the technology discourse by focusing on the subjective human experience of AI on the job.
Conversation topics included the psychological dread of “terra infirma”; software’s inability to provide the “final 1 percent” that defines high-quality work; and the inhumanity of forcing people to participate in their own obsolescence by training the machines that will replace them.
(As an aside, I brought up my reactions as a writer when I read articles on “how to spot AI writing.” Readers are commonly told to “look for em dashes (—) because AI overuses them.” As I told Lynn and Alissa, “I’ve been an em dash abuser in my own writing for years. Now I’m paranoid every time I use one.”)
I hope you find Lynn and Alissa’s insights as enlightening as I did.
About my guests:
Lynn Stuart Parramore is a cultural historian, essayist, and senior research analyst at the Institute for New Economic Thinking. She is the author of Reading the Sphinx and co-editor of How the Occupy Movement Is Changing America.
Alissa Quart is the author of seven books, most recently Bootstrapped: Liberating Ourselves From the American Dream and Squeezed: Why Our Families Can’t Afford America. She is the executive director of the Economic Hardship Reporting Project. (They do great work; check them out here.)
Selected Observations
From Lynn:
“The only wealth is life. And that seems especially important right now, because AI is not alive. We are.”
“For the first time, collectively, white collar workers are thinking: not only may I lose my job, but what is going to happen to the quality of my job?”
“‘What is happening to my brain, what is happening to my capacities?’ That is the thing we’ve been hearing — and it goes very, very deep.”
From Alissa:
“71% of Americans are now scared AI will steal their livelihoods. And work itself has become what I call terra infirma — unstable ground.”
“Poetry kind of breaks AI. I kind of love that.”
“I feel optimistic about people. I feel less optimistic about systems.”
From Richard:
“When people are forced to make themselves obsolete, they’re not only living with terror on a daily basis — they’re being reduced to a set of functions. That to me is the ultimate dehumanization.”
“Back in the ’70s or ’80s there was an advertising campaign called ‘Look for the union label.’ Maybe it’s time we had a ‘Look for the human label’ campaign.”
“It’s like being murdered by smiley face icons.”
Transcript (lightly edited by—yes, I confess—AI)
Richard Eskow: We talk about AI a lot on this program, and we talk about labor issues just as much. There’s an excellent new article in The New Republic on exactly that topic, written by Lynn Parramore and Alissa Quart.
Lynn Parramore is a good friend of the program. She’s a cultural historian, essayist, and senior research analyst at the Institute for New Economic Thinking. She’s the author of Reading the Sphinx.
Alissa Quart is the author of seven books, most recently Bootstrapped: Liberating Ourselves from the American Dream and Squeezed: Why Our Families Can’t Afford America. Alissa is also the executive director of the Economic Hardship Reporting Project, a terrific nonprofit that produces journalism—as you might expect—on economic hardship for working people in this country. Check them out.
Their article is headlined “For White-Collar Workers, AI Also Stands for ‘Apocalyptic Insecurity.’” Lynn and Alissa, welcome to the program.
Lynn Parramore: Great to be here.
Alissa Quart: Thanks, Richard.
Richard Eskow: Alissa, let’s start with you, because I understand you’re the one who came up with this striking phrase, “apocalyptic insecurity.” We all know about insecurity, but what is apocalyptic insecurity?
Alissa Quart: Well, it was a play on words—AI, the other AI. Our dread. Our fear is being manufactured and hyped up by tech oligarchs, by employers, by so many different people. The message is that AI is either going to save us or destroy us. That’s what the pundits say, that’s the word on the street. And one of the things we need to do, as workers facing this, is to take control of our own dread—a dread that is being pushed on us from the outside.
Lynn wrote about this as well, looking back at Frederick Taylor, the late 19th- and early 20th-century thinker whose methods transformed factory production and led to the systematic disempowerment of workers. This is a similar moment. This is a watershed moment where we have to fight back as thinking people and as workers. The dread itself has become part of the problem. Seventy-one percent of Americans are now afraid AI will steal their livelihoods, and work itself has become a source of uncertainty. I call it terra infirma—unstable ground. And we’re living on it—not just because of AI, but because of the larger polycrisis we’re living through: an unstable climate, an unstable economy, tariffs, job loss, unstable health care, unstable information (we’re fed AI slop), and now even unstable knowledge, as the current administration cuts back on scientific research, including cancer research. So, yeah. That’s the world we’re in.
Richard Eskow: That’s a great phrase. And when you mention Frederick Taylor and Taylorism, what came to mind for me was that this goes all the way back to the Industrial Revolution. The art and social critic John Ruskin, writing in that era, said, “You can either make a tool of the creature, or you can make a man of him. You cannot do both.”
In a sense, that’s been the labor struggle ever since: they keep trying to make tools out of us creatures, while our instinct is to become fully human. Does that resonate with you as a cultural historian, Lynn?
Lynn Parramore: Yes, absolutely. And thinking about Ruskin, one of the lines that means the most to me is this: “The only wealth is life.” That seems especially essential right now, because AI is not alive. We are. And beyond all the uncertainties Alissa described, there’s a deeper one: uncertainty about the self, about our value and relevance as human beings.
I think going back to thinkers like Ruskin is so important right now, to reground us in the sense that we are not replaceable by machines—and that we should be their masters, not their tools.
Taylorism had been creeping into white collar work for decades, going back to the 1940s, when typists had their keystrokes counted. But it has really ramped up now. I think that, for the first time, white collar workers are collectively thinking, “Not only might I lose my job, but what is going to happen to the quality of my job? I used to pride myself on the knowledge I’d accumulated through years of study and experience. And now I’m suddenly being told that knowledge is worth very little.”
As Alissa said, we need to step back and recognize that this message is, in many ways, meant to discipline us. The AI hype is also, of course, a story companies tell to attract investment, because they’ve spent enormous amounts of money building these programs. So there’s a lot going on, and it’s genuinely hard for workers to figure out what to do, what to think, or how to chart a path forward. All of that uncertainty can be paralyzing.
We spoke with sociologists and business psychologists who noted that uncertainty can be even more harmful to workers than actually losing a job. When you lose a job, at least you know what to do next; you look for another one, you have some kind of plan. But when you’re sitting there not even knowing whether your job will exist in six months? That’s a different kind of suffering. Uncertainty has always been part of working life, but not quite at this scale or this speed. This is a turning point.
Richard Eskow: Right. It’s one thing to be sinking in water; it’s another to be spinning and unable to find the bottom of the pool. You don’t know which way is up. There are so many dimensions to this, but let’s talk about the labor dimension for a moment.
In the 1960s, there was a lot of talk—overhyped then, as AI perhaps is today—about “automation.” The big concern back then, and you can find all sorts of literature and commentary on this, was: what will working people do with all their free time? The assumption was that they would earn the same or more, but only work a few hours a week. The sociologist David Riesman talked about the need for a “department of leisure,” warning that without one, society would face a leisure crisis. And I love the example of The Jetsons, which captured that perfectly. George Jetson worked three hours a week at Spacely Sprockets and made enough to support a family of four — saucer house in the sky, flying car, robot maid. Obviously a cartoon, but it reflected a genuine cultural expectation that automation would deliver broad prosperity.
Now the programming we’re getting is completely inverted: AI is going to make billionaires even richer, while the rest of you figure out how to get by on a fraction of what you’re already not getting by on.
The psychological precarity you’re describing, this apocalyptic insecurity, feels to me like the product of a sales job that’s been successfully run on us for sixty or seventy years. Part of what I notice in the experiences you encountered is that people lack something that existed in the 1960s, imperfect as that era was: a sense of labor solidarity. Unions were more pervasive. So, Alissa, to what extent do you think what we’re seeing is an internalization of the oppressor’s consciousness—to use that phrase—where it simply doesn’t occur to people that they can act collectively, that they can team up with others in the same position?
Alissa Quart: I definitely think so. It’s been drummed out of us. You were pointing to automation in the 1960s, but in my last book, Bootstrapped, I traced this even further back, to the 19th century, and to writers we both probably love, like Emerson and Thoreau. There’s a long history of a self-made myth being pushed down our throats, initially rooted in something genuinely romantic—religious freedom, individual dignity—but over time it became hypercommercialized. Think of the Ayn Rands of the world, who preached radical self-sufficiency even though, if anyone wants to look it up, Rand herself ultimately depended on Social Security and Medicare.
This is part of why fighting AI determinism and doing some myth-busting around the manufactured elements of this moment matters so much, alongside building a counter-narrative around collective action. We used to be thirty percent unionized; now we’re around ten percent. And we should look at new alternatives. I’m thinking of the AI dividend. It’s currently reaching about three hundred people, but the hope is it will eventually help thousands. It’s being managed by something called the AI Commons Project. And that’s a recognition that one way to fight dread is by creating a more secure income floor. Universal health care would be another. We need a foundation in place before catastrophe hits, rather than simply letting others capitalize on workers’ dread in order to control them better — which is really what’s happening right now.
Richard Eskow: Agreed. And for the record, I think AI should be collectively and publicly owned, because it is mining our own human activity, on and off the job. It belongs to us. It’s our resource. But I hadn’t heard of the AI Commons Project. That’s interesting.
I want to talk about one aspect of your piece that not only contributes to this apocalyptic insecurity, but feels almost sadistic: the fact that workers often feel they’re being forced to participate in their own obsolescence—that by training or building AI systems, they are helping to construct the thing that will replace them.
I think that’s dehumanizing on multiple levels, because we’ve also been trained to believe that our worth comes from our work. I had a friend named Jimmy who was dating a very famous actress. Jimmy was a bartender. When he’d go to her parties and people asked what he did, he would say, “About what?” But that’s not how most of us are wired. So when people are made to participate in their own replacement, they’re not only living with that terror every day, they’re also being reduced to a set of functions. That is, to me, the ultimate dehumanization. Did you find that confirmed in your conversations?
Lynn Parramore: It absolutely was. I’m thinking of a data scientist I spoke with named Claire, who is in her early thirties. She used to be genuinely excited about going to work at her startup —writing Python code, doing the kind of creative problem-solving that gave her meaning and identity. Now she says she’s basically a manager of AI agents. She doesn’t even know what to call herself anymore. Is she an AI engineer? Is she just directing these agents and watching them run? And it’s boring. It’s not exciting. She says she can feel her cognitive skills slipping away, and that is what drives her insecurity: What is happening to my brain? What is happening to my capacities?
We’ve been hearing and reading about this a lot. And it goes very deep. What are we losing when we hand work over to artificial systems? Never mind that a lot of the time they don’t even do the work particularly well, and people are spending large portions of their working hours correcting errors. I spoke to an accountant who said that yes, AI can do wonderful things with data input and sorting, but there’s something like a thirteen percent error rate—so someone has to go back through everything, because you can’t file error-ridden reports.
It’s affecting people in very concerning ways and creating what I call “anticipatory obsolescence”: Am I going to matter as a human being anymore? Never mind my phone becoming obsolete—am I, as a person, becoming obsolete?
Richard Eskow: And not only that; you’re being made obsolete by something with a smiley-face icon. It’s the ultimate indignity. I was struck by the anecdote about Jade, which gets at the “tech weirding” you describe in the piece. Jade gets chirpy emails from management at her insurance tech firm insisting that AI is there to “help, not replace.” And her observation? Those emails are the most AI-sounding writing she encounters. Of course they are. It’s the uncanny valley: almost human, but not quite. Horrifying.
And Alissa, I wonder to what extent the Jades of this world even know how hard they have it, because of all the chirpy, smiley-face messaging. Are they fully aware of what they’re experiencing, and of the compassion and support they deserve for it?
Alissa Quart: That’s such an interesting question. I do think people, especially young people, don’t fully understand how much blame they’re absorbing when they can’t find a job, when they can’t move out of their parents’ house, when they can’t afford health care. There’s an epidemic of self-blame around all of that.
A lot of the people we work with at the Economic Hardship Reporting Project, which I co-founded with the late, great Barbara Ehrenreich (check us out at economichardship.org), are former staffers who’ve been laid off, or freelancers who’ve taken a real financial hit, in part because the market for freelance journalism has collapsed. As Lynn and I can tell you, people get paid $350 for a piece if EHRP isn’t supplementing it. There’s just no way to be a freelance journalist and afford health care.
And yes, I can still tell when something’s been written by what I call “the robot.” The robot gaze, as we call it in the piece. When that gaze has touched information and transformed it, you get this subpar, generic word salad. It’s not only often inaccurate; it’s also somehow vulgar.
Lynn Parramore: And creepy.
Alissa Quart: Creepy and utterly lacking in specificity. That’s the written equivalent of the uncanny valley. Where the visual tell is six fingers, the written tell is the complete absence of any particular feature. It’s like information that has been thinned and homogenized until it has no face. And there’s something deeply unsettling about something that is utterly generic.
We spoke for this piece with professional writers who used to write content for hospital websites. Now, those sites are being written by AI, with one human “AI copilot”—their terrible term for someone who has been demoted to supervising an AI—checking the output. And that is what patients and families will read when they need medical information: something written by a machine that is, say, fifteen to thirty percent inaccurate, in a tone that could have been produced by a box of Kleenex.
Richard Eskow: Right. And one of the things that drives me crazy—I shouldn’t laugh, because it’s genuinely alarming—is the business of spotting AI writing. People always say, “Watch out for em dashes.” Well, I’ve been an em dash abuser in my own writing for years. Now I’m paranoid every time I use one.
Lynn Parramore: I have the exact same problem. The tell that bothers me most is what I’d call contrastive framing: “it’s not this, it’s that.” Apparently that pattern appears constantly because the large language models were fed enormous amounts of advertising copy, where that framing is a standard device. So now it’s seeping into everyone’s writing, even in contexts where it has no business being there.
Alissa Quart: I love that insight. I didn’t know that, but it makes complete sense. Although I do think poetry breaks AI. I’ve asked ChatGPT to generate poems in the style of my favorite poets, and it just doesn’t understand metaphor.
Lynn Parramore: It’s not funny, either. It has no sense of humor.
Alissa Quart: None whatsoever. So poetry may be the thing that breaks it—and I kind of love that.
Lynn Parramore: But there’s another layer to this. Jade, the woman we interviewed—she works from home, as a lot of people do now, so she doesn’t get much human contact during the day. And she says that even the brief, casual emails she gets from colleagues have started to sound AI-generated. Because people reach for ChatGPT or something similar when they want to strike just the right tone in a quick message. So now even her casual interactions with coworkers are filtered through a machine. That adds a whole other dimension of alienation.
Richard Eskow: Right. And these programs always want to suggest phrases I don’t use and don’t want. I never let my editors impose language on me like that. Why would I let a machine? Back in the ’70s or ’80s there was an advertising campaign called “Look for the union label.” I’ve thought about something like “Look for the human label” as a counterpart to that.
Alissa Quart: I’ve thought about that too, like a fair-trade symbol. “No robots were used in the production of this.” Like the little bunny on cruelty-free products. “Robot-free.” And it could be a genuine selling point. It might be like the return of the LP after the MP3, or the return of the physical book after the Kindle. I look around at the art market and painting is big again. Is that partly because people want something haptic — something they can touch, something that is bodily and real? People seem to be hungering for the physical: dancing, live music. I wonder if there’s going to be a kind of mania for the tangible.
Richard Eskow: I really hope so. I think we should promote it, start a movement, get a logo, the whole thing.
As a guitar player, I’ve thought for years that playing guitar feels like performing a Japanese tea ceremony. It’s ritualistic, precise, and deeply satisfying, but somehow it had come to feel arcane. Now it’s coming back. Maybe we can use all that leisure time we were promised.
Lynn Parramore: Right! We’re still waiting on that. And with every technological breakthrough, that’s what they say: work is going to get easier, you’ll have all this extra time for the good stuff. When has that ever actually happened? The Internet is great for some things, and genuinely terrible for others—but did it make people’s jobs better? Did it give them more leisure time? Did it raise their pay? The answer to all of those questions is no. Maybe it made certain jobs more interesting for a few people at the very top, and I think AI will do the same thing. I’ve spoken to scholars who call it an amazing tool for this or that—and for them, great. But we live in a political economy shaped by decades of financialization and the relentless prioritization of shareholder returns. In that environment, technology is simply not going to be used to improve anyone’s job unless workers fight for it.
Richard Eskow: Right. And this connects to that famous graph—I believe it was first produced by Dean Baker’s group, or the Economic Policy Institute, or the Center on Budget and Policy Priorities—showing how after World War II, productivity and worker pay rose together, until they split apart in the early 1970s. I call it the Challenger graph, because it looks like the trajectory of the shuttle when it veered off course. That divergence is the story of the hoarding of productivity gains.
But that’s a good segue to another aspect of your piece. One of the things you highlighted is that white collar workers are now, for the first time, all in the same boat. It used to be that if you worked on a tool-and-die floor and a new machine came along, that was the story. Now it’s the accountant, the attorney, the professional managerial class — all of them facing the same threat. And the added indignity is that the machine may not even do the job as well as they do. It may be mediocre. But the system doesn’t care, because mediocre is cheap.
So with all that awfulness cutting across the old class barriers, is there an opportunity for a new kind of labor solidarity, one that transcends those divisions? Alissa, what do you think?
Alissa Quart: Definitely. In Squeezed, I wrote about something I called the “middle precariat”: the middle class meets the proletariat meets the precarious. These are people like document-review lawyers and adjunct academics and journalists, people who have graduate degrees and who have found their previously stable work turned contingent—people who weren’t always insecure but suddenly are.
And I think we’ve been seeing the political expression of this. What was behind the elections of Zohran Mamdani in New York, the new mayor of Seattle, recent wins in Maryland and Maine? Alliances between these different precarious classes. And the organizing we’ve seen, of journalists, screenwriters, academics, museum workers—these aren’t people who traditionally formed unions. Union membership is lower than it’s been in decades, but union awareness and approval are higher.
There’s another factor that came up when we spoke with people who work as lawyers or advocates for both blue-collar and white-collar workers at the same companies: they’re now beginning to see their common cause. They recognize that both the contractor and the salaried manager are at the mercy of the same fickle tech overlords. And they’re trying to help each other. That’s the kind of thing we need more of: people like us helping contingent journalists; well-established lawyers becoming aware of and advocating for the document-review lawyers being pushed out by AI, and recognizing the fragility of their own profession too. Unions, awareness, self-respect, and a counter-narrative to AI determinism. Some of it is structural, and yes, some of it is psychological.
Lynn Parramore: You know, when I was in graduate school at NYU, I was briefly a member of the United Auto Workers, because NYU graduate students who were teaching banded together and formed a union. It was a brief period of glory. NYU worked assiduously to shut it down, and eventually the union was crushed. But it happened. And for that moment, here were graduate students pursuing doctorates standing in solidarity with autoworkers. Things like that have happened before, and I think we’re going to see them happen again.
And again, to Alissa’s point, union membership is down, but approval is very high. There was a Gallup poll a couple of years ago showing it’s higher than it’s been in decades. Those are genuinely positive signs. And in an otherwise sobering conversation, that’s one of the real bright spots.
If the Democrats had any sense, they would pay attention. They’ve been trying to connect with blue-collar workers for a long time, with limited success. But if they can think about bringing blue-collar and white-collar workers together and actually doing something for them collectively — that could be an exciting moment in politics.
Richard Eskow: Absolutely. And on that cultural point: my brother, who passed away last year, was a screenwriter. He lived in Kingston, New York. He had a strike T-shirt during the writers’ strike, and he said everywhere he went—the guy at the food stand, the newspaper vendor, whoever it was—people would say, “We’re with you.” Not because they had any glamorous association with Hollywood, but because they recognized fellow workers getting screwed. And that solidarity, that recognition, is a genuinely beautiful thing.
Let me end on one more topic from your piece that I found fascinating. You interviewed Janet Vertesi, a sociologist, who said that we are “effacing expertise instead of enabling it.” And then the neuroscientist Giorgio Ascoli expanded on that thought with a remarkable quote. What did Vertesi mean by that?
Alissa Quart: Vertesi is a sociologist who has worked with NASA on spacecraft systems and specializes in the sociology of AI and technology. What she meant is that there are people who have put in — as Ericsson argued — ten thousand hours to achieve expert status. They put in that time, and it taught them something the machine simply cannot know: the final one or two percent of how to finish something.
That’s actually what Ascoli was speaking to. He’s a neuroscientist who draws extraordinarily complex neurological diagrams—elaborate trees of branching structures—and he said that an AI could reproduce his work up to a point, but would miss the last two percent. It would be an incomplete project. And that incompleteness is precisely what decades of expertise protects against. Mastery lets you make critical judgments at the very end of a complex process, when everything else has been done.
There’s something else that’s missing, too: what I’d call the sympathetic imagination. When you’re a journalist, or a scholar, studying human experience, you bring to it a kind of understanding that comes from being human. An LLM can scrape an archive. But can it interview all the people we interviewed, and then apply our final five percent of judgment about that material? No. Reporting may be one of the last places where that matters. And as I said — poetry. Those are the places AI cannot touch.
Richard Eskow: We may never find that AI is capable of genuine excellence, because it is, after all, an average of everything that has already been done. How could averaging produce anything better than mediocre? What I took from what Vertesi and Ascoli described is something about the process of learning itself — that the act of learning to do the work makes you more capable of doing the work. There’s that old line, often attributed to Yeats: “Education is not the filling of a pail, but the lighting of a fire.” There’s something in the process, of learning to learn, that gets you across the finish line in a way that a system trained only on outputs cannot replicate.
Alissa Quart: That’s the sympathetic imagination. That’s creativity. That’s the mastery that lets you invent new things. And yes, that’s exactly what Ascoli was describing — the dozens of years he had spent drawing those diagrams had taught his hand what to do at the very end of the process in a way that simply cannot be transferred. Interestingly, his own children work in AI, and even they agreed: the machine is not going to be able to draw the tree in the way their father draws the tree.
Richard Eskow: And Lynn, maybe to conclude, you’ve both stared into the abyss of this apocalyptically insecure future, and it stared back. Where did you wind up? Pessimistic? Optimistic? Gramsci’s “pessimism of the intellect, optimism of the will”?
Alissa Quart: I always feel optimistic about people and less optimistic about systems. That’s where I always land. And I have to say, doing this work with Lynn made it better. I don’t do much collaborative writing outside of film work and my time with Barbara Ehrenreich. Going into this world together with Lynn—I don’t know if you feel the same way, Lynn—but it really helped. Otherwise it would have been deeply depressing.
Lynn Parramore: Yes, absolutely.
Alissa Quart: And then coming across something like the AI dividend—people thinking about this, trying to organize around it, even if it’s still very small. It’s something.
Richard Eskow: I love that you mentioned working with Lynn as an important part of getting through it, because we are social creatures. That’s the missing element from so much of the AI conversation, and from the last fifty years of drift toward an automated, individualistic, “The Machine Stops” kind of world. We have mirror neurons. We’re more complete with each other. But, Lynn, the last word is yours.
Lynn Parramore: That’s absolutely true. And this is one more thing AI simply cannot do: commune with other human beings. It can’t share insights or supplement our imaginations the way working together does. It can’t have a work date. Alissa and I used to meet during the day just to work side by side, and that matters in ways that are hard to quantify.
More and more people are now being managed by algorithms. Their boss, effectively, is a machine. How do you fight back against that? There are encouraging signs. The European Union, for example, has been genuinely advocating for worker dignity and privacy. We can take pages from their playbook.
But yes, one of the antidotes to everything depressing in this conversation is simply working together. Finding each other. Gig workers and contractors face real barriers to organizing, but they’re finding creative ways to make their voices heard collectively.
So between the two choices you offered, I feel motivated. I don’t feel optimistic, exactly. Like Alissa, I’m optimistic about human beings and very pessimistic about the political economy. Which is precisely what makes me motivated to fight back as hard as possible.
Richard Eskow: “Motivated” is actually a better answer than either of the options I offered you. Let’s leave it there.
