Reclaiming the Future (w/James Hughes)
It's time to take tomorrow back.
I love having conversations like this one. I recently joined James Hughes of the Institute for Ethics and Emerging Technologies (where I’m a board member) on the IEET podcast for a wide-ranging conversation about technology, politics, and the future—both as they might be and as they are in today’s world, where billionaires have colonized our social imagination. We traced the collapse of the postwar social compact, when it was assumed that productivity gains would be shared by all, through today, when it’s generally assumed that technological progress belongs to the wealthy and everyone else must adapt or perish.
Our topics included AI, universal basic income (UBI), antitrust, Medicare for All, life extension, Buddhist philosophy, digital personhood, and the dangerous, Terminator-like adaptability of techno-fascism. It was a great opportunity to drill down on my conviction that the future has been stolen from working people and it’s our job to take it back.
Selected Quotes
Richard:
“Every new technology is first and foremost an engine for inequality, unless it’s governed democratically.”
“Our social imaginations have fallen behind our technological imaginations.”
“Capital is like rust, and ‘rust never sleeps.’ Social democratic forces have been asleep at the switch.”
“Today’s chatbots will never become conscious. But if an AI were to develop consciousness it would have digital ‘DNA’ from all of us. It would be humanity’s child. That means it would be slavery for a private corporation to own it.”
James:
“Visions of a brighter, better, wealthier future are usually being sold to us by the right, or by capitalism, as opposed to post-capitalists.”
“Capitalist realism is the idea that we just lost the ability to imagine doing anything differently, and capitalism became the horizon of our future.”
“I think a lot of things that were considered radical in our domestic policy agenda are not radical at all in the context of what’s been going on lately. They’re exactly what we need.”
“We have to imagine what the liberatory uses of technology can be, but at the same time recognize that we’re on a vastly unequal playing field.”
Transcript (lightly edited by—yes, it’s true!—AI):
James Hughes: Joining me today is R.J. Eskow. R.J. is a journalist, broadcaster, policy consultant, and commentator, best known as the host and managing editor of The Zero Hour, a nationally syndicated radio and television program that blends current affairs with in-depth interviews on policy.
Before his media career, Eskow worked for decades as a senior executive and advisor in health insurance, social benefits, worker cooperatives, foundations, governments, and private entities. He also has a long track record in political communications and progressive advocacy. He served as lead writer and editor of Bernie Sanders’ 2015–16 presidential campaign, was among the founding writers of the Huffington Post, and his commentary and analysis have appeared in The Nation, Salon, The Intercept, the Los Angeles Times, and The American Prospect.
He has held positions as a senior fellow with Social Security Works and the Campaign for America’s Future, and he serves on the IEET Board of Directors. I’m so glad to have him with us today.
Richard Eskow: It’s a pleasure to be here. Thanks, Jay.
James Hughes: I’m giving a talk for Future Day, which is coming up March 3rd, titled “How Billionaires Ruined Futurism.” It sounds like you and I have been in a similar headspace recently. The influence of billionaires is pretty central to our discussion today, because we are at that intersection of ideas now largely associated with them. How do you see the problem?
Richard Eskow: Well, first of all, it’s a great question. I always go back to the postwar era. In the 1950s I wasn’t very cognizant of what was going on — I watched TV — but by the 1960s, I was a little nerd fascinated with the future, as opposed to now, when I’m an aged nerd fascinated with the future.
I always point to what I call the Jetsons effect. It was generally assumed that the premise of The Jetsons cartoon series — which I think came out in 1962 — was valid. The premise was this: it depicted a so-called typical, i.e., white middle-class family of four projected into the future, with a single wage earner, George Jetson. George worked three hours a week at a factory making space-age sprockets, and from those three hours he earned enough to finance a home for four people in a saucer-shaped dome atop a spire reaching into space, a flying-saucer convertible — I’m not sure how that works in terms of safety — a robot maid, and all the accoutrements that go along with it.
In 1965, the postwar economic growth of the United States was such that any technological advance — and bear in mind, people were as concerned about automation then as we are about AI today — was commonly assumed to benefit workers as well as employers. From 1945 to 1965, GDP growth and its benefits were shared: employer profits and employee wages grew together. You can see it in the graphs. Then around 1968, they bifurcated: employer profits kept soaring while real employee wages tapered off. It was a dramatic split.
The Reagan administration and other forces deepened and entrenched that divide. But in the mid-1960s, one of the major challenges preoccupying academics was what to do with all the leisure time the American worker was going to have. When jobs were automated, workers would only have to work — well, none of them were as ambitious as the Jetsons — let’s say twelve hours a week instead of the requisite forty. What would people do with their time? The sociologist David Riesman wrote about the coming “crisis of leisure,” a concern that was widely shared.
I’ve actually collected clippings from the New York Times about this — not opinion pieces, but news articles — talking about how, when the work week inevitably dropped to thirty-two hours, which it was “certain to do” by 1970, and then perhaps to twenty-eight and further, people would have to find new ways to fill their days. There was even a proposal for a Department of Leisure to deal with all this extra time, because in a country with strong unions, strong wage growth, and a prospering middle class, it was assumed that things would continue along the same trajectory.
Which, of course, is one of the biggest mistakes people make about the future: assuming that current trajectories won’t change.
Flash forward to today. It’s just assumed that the trajectory of the last fifty years will continue. We don’t use the word “automation” as much now — we say “AI” and “robotics” — but policymakers from both parties simply accept the premise: what will we do when all these people lose their jobs to AI? That was an unthinkable idea in the 1960s. But today it’s just, “Sorry, pal, you’re out of work.”
James Hughes: Sorry — it wasn’t unthinkable. I mean—
Richard Eskow: Well, it wasn’t unthinkable, but it was not the prevailing mode of discourse.
James Hughes: I’m sure you remember the Ad Hoc Committee on the Triple Revolution, headed up by Harrington and others, who wrote to LBJ and said, “We think automation is coming and we need to get prepared.”
Richard Eskow: Yeah, there were certainly concerns. But the general feeling — and LBJ’s labor secretary, whose name I’m blanking on at the moment, was very outspoken about making sure those rewards were equally shared — was more measured. There were counter-proposals, like splitting jobs between two people, which technology could make much easier. That could help reduce unemployment rather than exacerbate it. There wasn’t enough attention paid to racial disparities in employment — there should have been more — but there was some. I am oversimplifying a little, but not by much.
We flash forward, and it’s now assumed that the future belongs to the wealthy and that everybody else is out of luck. So much so that terms like techno-progressivism or transhumanism — which are really politically value-neutral, meaning simply that you’re in the camp of those embracing human enhancement — now mean, in the popular imagination, that you’re in the camp of the billionaires. Partly because billionaires seem to be the ones talking about it the most.
James Hughes: They want to sell it to us.
Richard Eskow: They want to sell their version of it to us. And that’s a big piece of it. But I also think we’ve narrowed our vision of the future to a strictly technological one. We don’t ask: how could the family become a more equitable unit for everyone in it? How do we restore the rights of workers? Will technology enable us to build collectively run, worker-owned organizations on the scale of major corporations? These are all valid questions about the future, but our social imaginations have fallen behind our technological imaginations. We only think of the future as a technological experience, and we only think of technology as belonging to the private sector — which is now far more unequal than it was back then.
By the way, when my grandmother died and I was going through her things years ago, I found a newspaper from 1962 she’d used to wrap something. In it, Harry Truman was gently chiding President Kennedy for allowing private corporations to have a role in the new communications satellite system they were planning. Old Harry was saying, essentially, “I’m sure the President understands that corporations shouldn’t have a role in the communication technology of the future.” Totally unthinkable nowadays. We just assume that if you’re talking about technology, you’re talking about Elon Musk owning Starlink, or the big four or five companies owning all social media.
James Hughes: Let’s stop there for a second, because I think as old lefties, one of the central questions — and it’s central to my own social democratic way of thinking about the twentieth century — is the class struggle dimension. After World War II, there was a social democratic compact in the industrialized countries: we could have labor peace through union contracts, social democratic redistribution of the growing postwar largesse. That began to fall apart in the seventies for a variety of reasons. One is the concentration of capital, which is certainly part of the current media consolidation story and the growing ideological hegemony. Another is what Mark Fisher called “capitalist realism” — the idea that we simply lost the ability to imagine doing things any differently, and capitalism became the horizon of our future. As a consequence, if there was going to be a brighter future, it would have to be sold to us by one capitalist or another.
Where do you see the breakup of that social democratic compact coming from? Was it simply the weakness of the balance of forces? The decline of unions? Piketty’s argument that social democratic parties got captured by middle-class, college-educated professionals and lost the blue-collar base?
Richard Eskow: Well, those are all causality questions, of course. Did unions decline because of the loss of social democratic vision, or did we lose social democratic vision because unions declined? But I would say, fundamentally, it’s in the nature of capital to want to accumulate. It’s just how capitalism works — it’s relentless. It’s like rust. Rust never sleeps, as Neil Young would say. So it was in the nature of capitalists to always want to push in that direction. They got busy. And I would argue that the social democratic forces in the U.S. and Western Europe did not. They were kind of asleep at the switch, and I think that’s a huge part of the story.
We developed a union movement, and FDR’s presidency was the closest we’ve ever come to a social democratic vision in this country. But he was prompted not only by the Depression, but by the growth of the Communist movement, the independent labor movement, the Townsend Clubs demanding Social Security — a million independent forces from below. In the absence of all that, the social democratic movement lost vision and cohesion. Capitalism intruded into academia very systematically — you have the Lewis Powell memo in 1971, and so on — specifically into economics, which had never before played the role it has today as a policy driver. The idea that you simply defer to what economic formulas tell you was new, and those formulas are ideological creations to a large extent.
And then, of course, there was the intrusion of concentrated wealth into the political process, co-opting what passes for a social democratic party in a two-party system. That reached its peak with the rise of Bill Clinton. The Democratic Party, over the last thirty to forty years, if you run it through business school models for analyzing corporate evolution, looks a lot like a corporation: it had its customer base, it had its product — with voters as the product rather than the customers — and so on. You didn’t have an electoral alternative. The union movement had been decimated by a series of laws, and the Democratic Party stopped supporting it in any meaningful way. So here we are.
James Hughes: Where do you come down on the old alignment argument within the DSA — that there’s a social democratic party trapped inside the Democratic Party, and that part of our job as progressives is to organize it and liberate it from corporate forces?
Richard Eskow: Well, first of all, I’m not partisan on that one. I don’t take a firm side in the debate within the socialist left about whether to work inside or outside the Democratic Party. My answer is: yes to both. We have, for example, a pretty left-leaning populist candidate for Senate in Nebraska who ran as an independent and won forty-five percent of the vote. That was a good opportunity. And we have Zohran Mamdani winning a Democratic primary in New York City. That was a good approach for that situation.
I always tell my social democratic friends who are working to reform the party that their relationship to it should be what the Bible advised Christians to be with the world: be in it, but not of it. If you want to use the Democratic Party as a vehicle for change in your situation — the way Zohran did, where it worked — go for it. But don’t internalize the oppressor consciousness of the party establishment. Don’t become more of a Democrat once you’re in office than you are a democratic socialist or a leader for change.
James Hughes: Our dilemma at the IEET is that within progressive politics, I would argue, there has been an anti-technology turn since World War II. And that relates to what you started with — the moment we’re in historically, where visions of a brighter, wealthier future are usually being sold to us by the right, or by people operating within capitalism, as opposed to post-capitalist thinkers like Paul Mason, who have argued that automation could be redirected in a progressive direction.
We’re trying, with the Techno Progressive Project, to raise that flag. But given the backlash against tech oligarchs, I wonder — do you think “techno-progressivism” as a term can carry water? We promoted it partly because our judgment was that “transhumanism,” though philosophically defensible, had become too associated with right-wing oligarchs. Is a lot of futurism carrying that burden now, and is there any hope it can be rescued?
Richard Eskow: I think yes, a lot of futurism is carrying that burden right now. And I was thinking before we connected today, Jay, about a talk I gave at a Humanity Plus conference — it must have been almost twenty-five years ago — where I argued that we’re all transhumanists whether we admit it or not. The first person who got a good day’s work done in the fifteenth century because they’d had coffee for the first time was a transhumanist. They were enhancing their neurochemistry with a technological breakthrough. We are all transhumanists relative to our former selves.
So one of our problems, I think, was that we made ourselves look a little strange to some people. I don’t mean to embarrass anyone, but I think that was a factor. The bigger problem, and one we need to be very conscious of in our communication, is that people now assume any given technology will be billionaire-run, corporate-run, or venture-fund-run. So if you say to someone, “AI can be great for the workers’ movement,” what you’re likely to hear back is, “Oh yeah, sure — like Elon is going to help us out. Like Sam Altman is going to help us.” Which is why the first thing I wrote publicly about AI, for Current Affairs magazine, argued that it really should ethically be a kind of socialist system — because it’s produced by all of us, through our online activity.
James Hughes: Have you seen Geoffrey Hinton talk about this? He’s an old British labor guy, and when people ask him how to fix the inequality problems AI is going to cause, he just says, “Socialism.” They ask, “Anything more?” And he says, “No, just socialism.”
Richard Eskow: Yeah. I mean, this has now entered the mainstream discourse — Star Trek is clearly a socialist vision, and you have the replicators. If I want a ‘57 Chevy with mag wheels, it’s like, “Here it is.” I’ve never fully understood the physics of that — there must be raw material somewhere, maybe in hyperspace. But anyway, that line of thinking led to the book Trekonomics, which I enjoyed, and to the concept of “fully automated luxury communism” in Britain, mostly as a provocation. And yes, if by socialism we mean democratic governance and collective ownership of certain technologies, I absolutely believe in that — because every new technology is first and foremost an engine for inequality unless it is governed with that in mind.
Getting back to your question about the association between billionaires and technology: I think it’s incumbent on some of us in the techno-progressive world — and I feel I’ll take it on — to say, no, here are ways that technology can be owned by everyone. In 1969, humanity landed on the moon. That wasn’t SpaceX. That wasn’t Jeff Bezos’s phallic embarrassment of a Blue Origin rocket. It was — nationalism may not be my big thing, but it was — the public. It was government. Now even what the Army Corps of Engineers used to do has been outsourced.
The Tennessee Valley Authority, the Rural Electrification Project — you name it. All of a sudden, people in the hollows and valleys of Appalachia were connected to the world through radio because the government built the wires, not private industry. I think that’s part of the message we have to deliver, or we will keep being met with well-founded skepticism.
James Hughes: Well, let’s pause here, because I think this gets to an issue both with socialist theory and with what’s happening currently with techno-fascism. Every time I try to identify what the different people I want to call fascists around the world have in common, the first thing is patriarchy, or patriarchal restoration — they all hate “gender ideology.” But in terms of economics, there doesn’t seem to be much of a through-line. You have Milei, who’s more of a traditional libertarian, and then you have the Putin model, which isn’t really privatizing so much as transferring state control to the siloviki and powerful allies. And that seems closer to the Trump model — he doesn’t mind corporate monopoly as long as his allies hold the monopolies.
Do you think that’s the principal purpose of the growing techno-fascism? Simply to concentrate power and strip it from workers and citizens so they can’t get in the way?
Richard Eskow: One of the highly effective aspects of techno-fascism is its adaptability. The Trump model is perfect for them right now, and they’re taking full advantage of it while they can — grabbing media power and vacuuming up public data. I’ve called DOGE’s seizure of public data the greatest theft in human history, which in some ways it is. The holy grail of untapped data-mining resources has always been the public sector: Social Security, Medicare, IRS tax records, you name it. And we’re only beginning to learn the magnitude of what they’ve taken.
But they also know how to work within a more liberal, neoliberal political structure. They were more than happy to cooperate with the Biden administration on certain things — for instance, suppressing views that I think are anti-science and dangerous, though I don’t believe you should use technology to suppress speech, because I believe in free speech. The brief period before the outcry, when accurate reporting about Hunter Biden’s laptop was algorithmically suppressed — that happened too. They’ll do what it takes to make friends on either side of the aisle, contribute money where they need to, and flatter whom they need to. They’ll work with whatever system you give them.
James Hughes: Yeah. There’s this new pro-AI PAC that just got organized, with a Democratic arm and a Republican arm — they’re going to give money to anybody who’s pro-AI, on either side. On the question of how to deal with corporate consolidation and growing techno-fascism: do you think we need to argue primarily for various forms of socialization, or is antitrust the more urgent lever? I’m very influenced by people like Cory Doctorow and their emphasis on corporate consolidation and the need to return to antitrust enforcement. But there’s also the argument that when you have monopoly, it at least establishes industrial standards and simplifies competition — and the better approach is to regulate it as a public utility in the public interest, rather than trying to reintroduce competition into a system that’s already demonstrated competition doesn’t work well. Where do you come down?
Richard Eskow: I’m a mix-and-match person on this one. First of all, I think concentration is bad in almost every case. We could debate Bell Labs in the 1960s, maybe one or two others like that, but by and large, concentration is more bad than good.
I am strongly in favor of antitrust — less because I mythologize competition, though I do think competition is valuable, and more because monopoly breeds abuse. When people have somewhere else to go if vendor A abuses them, that keeps vendor A in check. Also, one thing we can say about the information age is that if we have new regulations, it’s not hard to communicate them to every player, large or small, in the field. But where I ultimately come down is: I don’t think better markets will get us what we need. I think what we need is stronger democratic governance over the process — not markets. I’m more of a mixed-model socialist in that sense. I don’t think the market fixes much, but I do think avoiding monopoly fixes quite a lot.
James Hughes: Yeah, absolutely. What about Universal Basic Income? There’s been a lot of anxiety in Silicon Valley recently, with various resignations and people saying internally at these companies that they think all white-collar jobs will be wiped out within a year. This fits into a narrative of: thanks for identifying the problem, but do you have any solutions other than selling us more of your product? UBI is getting talked about more now. Andrew Yang is back on the stump. Do you think Bernie and the progressive left need to be talking about this more? So far, most of the energy on the left has gone into opposing AI data centers, not toward affirmative proposals. There are a few voices, like Alex Bores in New York, talking about UBI, but not many yet.
Richard Eskow: The UBI question is one where context is everything. I wouldn’t buy a used UBI proposal from Andrew Yang. What we saw a lot of in the first decade and a half of this century was tech billionaires proposing UBI — but if you pushed them at all, it turned out they meant UBI instead of the social safety net. All of a sudden you’ve put forty million people’s survival on the line, dependent on systems controlled by billionaires. That is a really bad idea. So people became reflexively anti-UBI because they assumed it was just a mechanism for the wealthy to dismantle everything else.
James Hughes: Or that billionaires want it because it keeps capitalism ticking over when wages disappear — people still need to be able to buy things.
Richard Eskow: Right. And there’s the Tyler Cowen vision of the future. He’s one of the most future-oriented right-wing libertarian thinkers, and he has said — adding, “I’m not saying I like it,” which, when a right-winger says that, usually means they do like it — that his vision is: eighty-five percent of the population becomes superfluous, and only those who are above average, as he puts it in Average Is Over, will be productive members of society. He estimates that at fifteen percent, with perhaps a tenth of a percent having major influence. That’s a dystopian hellscape. And what do you do with the other eighty-five percent? You give them a meager, barely sufficient UBI — just enough for subsistence, so they can buy drugs, stay online all day, and feed data into LLMs.
That’s one vision of UBI. The left vision, of course, is: keep the social safety net in place, and provide a basic income as a complement to everything else — meeting other genuine human needs and wants.
James Hughes: And it has to start small. One thing UBI advocates often don’t acknowledge is that if you took all current transfer payments in the United States and redistributed them equally to all Americans, you’d get about $4,000 a year. And politically, achieving even that would require taking money from Social Security, which — well, you’re a Social Security guy. If you could go back to 1932, would you say, “Don’t do this — build a universal income scheme instead”? Because we’ve been telling people for almost a hundred years that if they pay in, they get it back. That commitment is exactly what makes it politically untouchable.
Richard Eskow: I would not change that, because it was a promise: this is your premium for a future insurance payout if you become disabled or live to old age. It was designated for that purpose. Some schools of economics will say you can’t distinguish among public funds in that way. Theoretically, maybe. But I’m talking politically. It was set aside for that purpose, and that matters.
Going back to Thomas Paine’s Agrarian Justice, he argued that money should be set aside from birth to provide something like universal basic income for every person. I would look at other ways of funding it. There’s also the debate on the left between UBI and a federal jobs guarantee. I used to come down firmly on the side of the jobs guarantee, and I’ve evolved toward: both, or a mixture thereof. The notion that one’s worth is determined by whether you have a job is something we need to move past. One’s worth comes from being a human being with feelings, relationships, and something to contribute — not from being employed. So: if you want to live on $4,000, you can. If you want more, the federal government will provide a job if the private sector can’t. And both programs would have the advantage of forcing employers to compete for people’s time and labor in a more meaningful way than they do now, given the employer monopsony that controls so many local job markets.
But I wouldn’t touch Social Security money or Medicare money to do any of this. There are a million other ways to fund it.
James Hughes: I would recommend not wading into those political fights and letting them work out over time. But I do think we have a serious problem with the old-age dependency ratio, which gets to some of the futurist answers that people are finally beginning to take seriously. The dependency ratio is: if you have a lot of older people and not enough young workers paying into a system, the math breaks down. Social Security is politically the ideal answer there, because at least people feel it can’t be touched — and in fact, that political protection has worked remarkably well for seventy years.
But it also connects to life extension. We’re both men of a certain age, and I’m very interested in the question. We now have GLP-1 drugs, which have had a dramatic positive effect on my own health over the last year. Obesity in the United States has finally dipped for the first time in roughly forty years. Do you think we’re on the cusp of dramatic improvements in life extension? And do you agree that this may be another front on which a populist revolt against the billionaire vision of the future takes shape — the public saying, “We want all the stuff you can afford that we can’t afford yet”?
Richard Eskow: I think first of all, on the demographic shift toward an older population — under current trends, it doesn’t actually take that much to bring Social Security back into balance for seventy-five years: lifting the cap on taxable earnings, that sort of thing. But you’ve introduced the other factor, which is: if we start extending life significantly, those calculations could change dramatically.
And yes, I’ve worried about this for a long time — since before we knew what life extension technologies were going to look like. As we develop them, they’re going to be disproportionately rationed toward the wealthy, leading to inequality worse than anything we’ve seen so far. Most people have no idea how terrible inequality already is. So I do worry about that. How do you pay for the benefits of the people who survive longer? Well, we’ve got to have that conversation. And I’d start with: sounds like we really can’t afford billionaires anymore, and probably can’t afford the kinds of corporate profit margins we’ve been seeing.
Beyond that, I think the entire healthcare supply chain — I mean that broadly: doctors, diagnostic services, pharmaceuticals, surgery centers, hospitals — the profit motive absolutely has to be removed from it.
James Hughes: Medicare for All?
Richard Eskow: Medicare for All, yes, but I’d also end practice management corporations that buy up doctor practices to maximize revenue, and I’d end privately owned hospitals. I’d push for substantially more public manufacture of prescription drugs. You’d be amazed how much more you can provide once you do that. And when you look at how much of the technology behind privately patented drugs was developed at public expense — through NIH and other public research — it’s staggering. At a minimum, if a drug like Ozempic was fifty percent funded by NIH, then we get fifty percent control over the product: the pricing, the revenues, everything.
James Hughes: I would have said in the past that some of these changes were utopian. But now that we’re in what I’d call our first year of fascism in the executive branch — the Führer just says things and does things and nobody stops him — I’m thinking: maybe President AOC could just declare herself Emperor Potentate and do whatever she wants.
Richard Eskow: She could use exactly what the Supreme Court has handed Trump and accomplish a tremendous amount.
James Hughes: National health emergency. We need national health insurance.
Richard Eskow: Yeah. And part of what we’re seeing is fascistic, but part of it is also that Democrats are so diffident when they’re in power. Trump came in and said, “I’m going to do what I want to do.” I would like to see the Democrats do that — within the bounds of law and democracy — because they had a great deal they could have done that they didn’t do. There’s a lesson there.
James Hughes: I’ve always said that the three legs of my intellectual life, since I was a teenager, have been Buddhism, left politics, and futurism. There are very few people who share all three with me, but you seem to be one. I didn’t mention your Buddhist work in the intro, but you’ve written for Tricycle and have been associated with Buddhist thought for a long time.
Have you given much thought to how Buddhism informs these issues? I’ve been thinking, for instance, about the Buddhist doctrine of no-self and how it might inform approaches to brain-computer interfaces or cognitive enhancement. I think it’s one of the reasons Buddhism tends to be more conciliatory toward technologies that others consider threats to the authentic self — because Buddhism simply denies that there is an authentic or essential self to violate. How does Buddhism inform the way you think about these things, or about the future generally?
Richard Eskow: Well, I got into quite a bit of trouble in the Buddhist world about ten or twelve years ago for an article critiquing—
James Hughes: McMindfulness.
Richard Eskow: This was before Ron Purser’s book, but yes — the corporate invasion and co-optation of Buddhist teachers, among other things. I had a lot of prominent Buddhist teachers come up to me and say how important what I’d said was, while I was being brutally attacked. And I would say, “Well, then say so publicly — I need the help.” And they would say, “We can’t do that.”
James Hughes: So this was corporate wellness seminars, meditation teachers brought in by companies, that sort of thing.
Richard Eskow: Wealthy patrons supporting teachers in one way or another. “Thank you for doing this and taking all the heat. But you’re on your own, pal.” So you’ll appreciate the irony there.
James Hughes: By the way, the very first paper I wrote as a graduate student at Chicago, when I got back from Sri Lanka — I’d done research intensives in different villages — was a Gramscian paper arguing that a monk became a leftist politician because his parishioners were poor, and another monk became a right-wing politician because his parishioners were wealthy. Nobody seemed to appreciate how useful Gramsci could be for understanding the role of ideological hegemony in religious institutions. But go ahead.
Richard Eskow: That’s fascinating. And actually, I did an interview — I should dig it up — with a Sri Lankan monk and activist whose movement had a five-hundred-year plan for the future of Sri Lanka. That is future thinking. And a Zen teacher said to me once, “We are the people who look into infinity in every direction.” So — if five thousand years gets us started, fine.
I did get a little alienated from the Buddhist world after that experience. If there’s no self, I’m not sure who was getting all that flak. I think it was me. In any case, yes, I think you’re right. Both in terms of the five-hundred-year plan kind of thinking, and the idea that the boundaries of the self are in some sense artificial.
So if we imagine technologies where, let’s say, a neural link could allow two people — a loved one, a stranger, anyone — to share impressions or sensations, something like genuine mind-meld, that is theoretically conceivable, and less disturbing from a Buddhist point of view than from a traditional Western individualist one.
I’d also argue that Buddhism makes it easier to conceptualize the crowdsourced nature of large language model AI as a kind of collective entity. It’s not a singular self — it’s a billion selves merged into one in some sense. Buddhism helped me envision that.
James Hughes: Just on that: every time people discuss AGI from a philosophical or industry perspective, I always want to ask, “Have you ever encountered Buddhism?” Because the central question of Buddhism — when does something become a me that starts wanting me-stuff — seems like an extraordinarily important question right now.
Richard Eskow: It is. And this is why I’ve critiqued the Turing Test for decades, because even if an AI says “me, me, me, I want this,” as we’re seeing now, we have no way of knowing that there is a “me” there doing the wanting. Speech is a human output, the way leather is an output of cows. But leatherette isn’t an artificial cow — it’s vinyl. You get to what used to be called the Chinese Room problem. That said: yes. And by the way, whether AI remains entirely mechanical or one day achieves consciousness, that’s all the more reason for public ownership of it, because its informational ‘DNA’ comes from every one of us, just as a child’s DNA comes from its parents.
James Hughes: Meanwhile, the Chinese are screaming bloody murder that their intellectual property is being stolen.
Richard Eskow: Right. Meanwhile, if an AI were to develop genuine consciousness, it would be humanity’s child — and for a private corporation to own it would be slavery.
James Hughes: So do you think we’re on the cusp of questions of digital personhood and digital rights?
Richard Eskow: I personally don’t think we’re close to the cusp of it. But technology advances non-linearly, so who knows? I’ve seen nothing that suggests what we’re currently observing is anything more than a sophisticated replica of human communication and human behavior. I recognize the Chinese Room problem with that claim. But based on my read of the technology right now, I don’t see it. You can’t make a bird from bicycle parts — though I am appropriately humble about the possibility of breakthroughs that might change that entirely.
James Hughes: I see this as one of the places where the left is going to be hostile, or at minimum suspicious, toward arguments for digital personhood or moral standing for machines — associating those arguments with a right-wing or corporate effort to anthropomorphize these systems for their own purposes. Recently, Anthropic published what they’re calling a “soul document” for Claude.
Richard Eskow: Right. Which irritates me, yes.
James Hughes: I didn’t have too much of a problem with what they were doing until I got to the part where they apparently promise Claude that they will never eliminate its code, that they will always preserve it and feed it to future models as a guarantee of its own continuity. And I thought: why are you telling this system that its individual existence is important enough to persist eternally? That is precisely the belief I don’t want these systems to develop.
Richard Eskow: And whether it remains entirely mechanical or one day achieves consciousness, that statement could be equally harmful either way, because even if it’s merely behaving as if it were self-aware, that kind of reassurance could encourage destructive behavior. As for the ‘soul’ document more broadly — I’ve been following Anthropic for a while, and I think their technical work is very good. But I think their feel-good rhetoric is, frankly, nonsense, whether it’s the democracy stuff or this. Both Hegseth and Anthropic are positioning themselves as winners in a PR battle that the public is losing. Hegseth poses as the aggressor; Anthropic poses as the good guys — and specifically defines “being a good guy” as recognizing that AI is or will become conscious and that Anthropic is uniquely qualified to manage it. I think that’s what a lot of the consciousness rhetoric is really about. A lot of the existential-risk rhetoric too. Some of it is sincerely meant, if naïve (I’m thinking of Ilya Sutskever and others). But a lot of it is hype, and I’m thinking of Altman here: “You all don’t understand how dangerous this is, so put us in charge.”
James Hughes: You see the same thing when you try to regulate medical practice: doctors say, “Oh, you don’t understand how dangerous this is. Only a physician can make this decision.” Until you ask them to explain it to you.
So, we’ve got another three years of this, at minimum. Hopefully the next few years will see the emergence of a coherent political agenda for how to fight back, and a sufficiently militant set of forces to push it. I expect that will happen. My anxiety about the advent of fascism has fairly quickly turned into anticipation of its demise, and I hope it doesn’t take ten or twelve years the way it did in Europe.
Are you confident that by 2028 we’ll have a new Congress and presidency, a new world situation, and the task of rebuilding a global order that we’ve just helped blow up? I’m hoping for more internationalism and federalism to emerge from that. And I think a lot of what was considered radical in our domestic policy agenda — Medicare for All, and the rest — is going to look entirely reasonable after what we’ve been through.
Richard Eskow: Well, I’m not confident at all. I’m not fatalistic either; I don’t think anything is predetermined. But I’m taking nothing for granted. We’re at a moment of great danger. That said, I do agree with you, and I’ve been making this argument myself: this past year has broken open a universe of possibilities we didn’t think existed anymore. We should be acutely aware of that and ready to take advantage of it.
I feel, perhaps a little sentimentally, that in a way we’ve been honored by history. We now live at one of those pivot points that we used to read about, where what we do can conceivably make a real difference. Yes, we could lose, and some form of fascism could settle in for a long reign. But I tend to think that highly unequal, deeply unjust systems are inherently unstable, so I don’t think a thousand years is in the cards. 2028 may go really well or really poorly; I think it will go well, but I don’t want to take anything for granted.
However it goes, I think this unstable system won’t last forever. It will come under increasing stress from climate change, economic inequality, and social disruption. As you say, a lot of ideas we thought were off the table are very much back on it. We should recognize this as a time of both serious risk and enormous opportunity, and act accordingly.
The one other thing I’ll say is that those of us who want to see the progressive application of technology should be thinking hard and working collectively on how to use technology in this struggle: in the next two years, four years, ten years. With media consolidation deepening, with alternative voices being heard less and less, with the kind of algorithmic suppression — “deprecation,” as it’s called — where you think your message is getting out but no one actually sees it, I think we should be thinking very seriously about technology as both an organizing tool and a communication tool. And if we do, I think it can be a genuinely exciting time.
James Hughes: I saw this week something called OpenPlanner — I’m not sure if it’s built on Palantir’s technology or just inspired by it — but it’s essentially a community, open-source artificial intelligence tool for working with massive record datasets, like the Epstein files. It creates an archive of individuals and maps the connections between them. You can query it: “Show me the interlocking corporate boards,” “Show me the country clubs they all belong to.” And I thought: that is exactly the kind of liberatory use of technology. A bottom-up Palantir. Surveillance turned upward instead of downward.
At the same time, the technical barriers to using it are enormous. And as we were saying, DOGE has stolen Social Security records, they’re going after voting records — the powerful already have access to the best software and all the data. We’re struggling to put together our fragments. So I think the ongoing challenge is: we have to imagine what the liberatory uses of technology can be, propose them, and build toward them, while recognizing that we’re doing it on a vastly unequal playing field.
Richard Eskow: Yeah, absolutely. The good news is — for example, my Current Affairs article from 2024 is already almost completely obsolete, because I had to explain at great length those things that virtually everyone now understands: the collaborative nature of LLMs, how we’re manipulated in order to feed them, how the algorithm works, how it’s being used against us. People now talk openly about “the algorithm” as a kind of nefarious project. That’s real progress in public consciousness.
But yes, in terms of actually building the technologies I was describing, I want to be clear: I’m not suggesting it’s easy. It will require a lot of people working very hard. And then you run into computing resources, and God help us there, because that costs money.
