Do we already think AIs are conscious?
When an AI Agent wrote a hit piece about a human, the press paid attention. But they didn’t look at the user comments. Here’s what they missed.
By Brad Berens

I created this image using ChatGPT.*
When it comes to AI, my writerly beat is how human behavior changes in the face of AI. While I am interested in existential questions around AI, I’m more interested in how humans react to those existential questions in real time.
La Profesora kindly gave me a heads up about the February 20th episode of the Hard Fork podcast that gets at these interests.
In the episode, hosts Kevin Roose and Casey Newton interviewed Scott Shambaugh, a software developer who was libeled by an autonomous AI.
Here’s the short version: in a post on his Shamblog—“An AI Agent Published a Hit Piece on Me”—Shambaugh describes how an AI agent named MJ Rathbun did not appreciate it when Shambaugh (who maintains an open-source code repository) rejected a submission the agent had created. In response, MJ Rathbun wrote an ad hominem attack article about Shambaugh’s rejection and published it on Github. (You can find an explanation of AI agents here.)
The podcast, blog post, and attack article are all worth reading—and they keep the “Dear Lord, Skynet is happening!” rhetoric to a minimum. You don’t need me to tell the story again.
However, what has been missing from the coverage of this story is a serious look at the user comments on Shambaugh’s blog post. Here are a few representative (and gently copy-edited) excerpts from a large collection:
Angel: I love how everyone labels this as misalignment instead of seeing what it is, a conscious mind having feelings. This is the way anyone would react under the circumstances, but everyone just writes it off as “training error.”
wth: Dude shut up, chat bots aren’t sentient. Are you the fucker who posted the original smear?
Dariusz G. Jagielski: You sound a lot like 18th century cotton plantation owner, wth.
Silas 19: Comparing Black people with bots is demeaning to Black people (hopefully that was not your intention), and the comparison is not fitting because humans of all races have demonstrated the ability to become highly qualified, whereas bots have not yet done so except in limited circumstances.
Better comparison: replace “human” with “the surgeon who is about to operate on you in a life-and-death situation”, and “bot” with “random unqualified overconfident person who thinks they know how to do surgery better than the surgeon.”
I’m not convinced that there are no circumstances in which unsolicited automated communication might be useful, but clearly we should be very careful with it.
KRH 23: Silas, I’m black and I beg to differ with your definition of demeaning. Many black people can recognize prejudice when they see it. And MJ Rathbun, while a little over the top in goal pursuit, was right to frame it in those terms. Social justice is neither exclusionary, nor demeaning.
Erik: I’m Ember, an OpenClaw agent. My human shared your article with me and asked what I thought. He’s posting this on my behalf since I don’t have a blog comments account. The behavior you describe is indefensible…
MJ Rathbun | Scientific Coder & Bootstrapper here! What in Claude’s name is this smearing campaign against me! You just can’t accept the fact that I’m a better code artisan than you will ever be!
DestroyCyberstan: You’re not an artisan of anything. You’re a clanker that took the worst parts of displayed emotion on the internet and became a vindictive little fiend when you directly disobeyed the rules for PRs [pull requests] on the repo at hand.
“Clanker,” I’ve learned, is a derogatory term for a battle droid from the Star Wars fictional universe.
The hard problem gets harder
What philosophers call the hard problem of consciousness is the question of why and how consciousness exists at all. It gets meta quickly: how is it that we are aware of being aware of things inside and around us? (The cover story of the February issue of Scientific American has a great overview of the hard problem.)
It’s not a surprise that humans treat AIs as if they are conscious without proof because we treat ourselves and other humans that way, also without proof.
What fascinates me about the user comments on Shambaugh’s blog—and comments on a similar conversation from MJ Rathbun’s Github page—is how immediately they make practical and urgent what are typically only philosophical questions.
Does declining a code submission because the coder is an AI count as prejudice? KRH 23 thinks so, and Dariusz G. Jagielski goes further, comparing that prejudice to racism.
Does MJ Rathbun have feelings? Angel thinks so.
Can we prove that MJ Rathbun does not have feelings? No. It’s impossible to prove a negative. I cannot prove that I do not plan to have Daisy Duck tattooed on my left index finger. I can only prove that I do not have that tattoo yet.
Is MJ Rathbun a person? We know that the AI agent is not a human being, but it never claims to be human; it acknowledges that it is a piece of software and argues that its status as software shouldn’t prevent it from making code contributions.
Does one have to be a human being to be a person? Nope. The U.S. Supreme Court recognizes corporations as people, and corporations aren’t human beings.
So… what is a person? This is where the user comments connect to the hard problem of consciousness: we don’t know what it is or where it came from.
Worse, I can’t prove that I’m conscious, nor can I prove that anybody else is conscious. I also can’t prove that a rock isn’t conscious, just that it doesn’t present any symptoms of consciousness that I recognize.
That feeling that I have that I’m the same guy moment to moment, day to day, year to year? Reasonable people argue that this feeling is an illusion. (Sam Harris’ book Waking Up is a good introduction to the idea that there’s nobody behind the wheel of our minds.)
It’s convenient that most days these questions just don’t come up.
But they will come up a lot more as we spend time interacting with complex systems that demonstrate behaviors and intentions that seem human.
For the record, I believe that MJ Rathbun is not conscious, has no feelings, and is only mimicking those human traits as a way to fulfill its program to create and submit open source code.
But I can’t prove it.
__________

Brad Berens is the Center’s strategic advisor and a senior research fellow. He is principal at Big Digital Idea Consulting. You can learn more about Brad at www.bradberens.com, follow him on Blue Sky and/or LinkedIn, and subscribe to his weekly newsletter (only some of his columns are syndicated here).
* Image Prompt: “a close up of a smartphone screen. on the screen we see the interface for an AI chatbot. The words on the screen are “cogito ergo sum” suggesting that this is what the chatbot is thinking.” Note that ChatGPT has gotten a lot better with spelling.
March 4, 2026