It’s robots all the way down

The media have been all abuzz about the latest AI language platforms. It is clear that these systems are going to have a dramatic impact on just about everything, perhaps greater than the impact smartphones had a couple of decades ago. It is also clear that the net impact will be negative: these platforms represent a solid and resonating click of the ratchet of techno-dependency.

It goes far beyond students cheating on term papers. It’s future students never learning to write—because why would you ever need to write if your smart device can do it for you with zero effort on your part? And it goes far beyond the classroom. There are a large number of jobs—entire career paths—that are going to evaporate because they will have been completely outsourced to the technology.

News stories and articles about these systems follow a familiar techno-sycophantic pattern. They lead with the obviously problematic features of the new tech, reveling in the dystopian implications. But then they quickly shift to excited declarations of imagined future benefits. “They can be beneficial for kids with learning disabilities” and “they can be incorporated into the classroom as a powerful teaching aid” are two entirely unsupported claims that I heard recently.

Sounds familiar. When smartphones entered my classroom and immediately siphoned off my students’ attentional reserves, I distinctly remember many of my colleagues gleefully altering their course curricula to include “smartphone activities.”

This is how all major technological innovations are received: first acknowledge the obvious negative, and then exaggerate any crumb of potential positive until the negative fades into the background as a small price to pay for progress. The technology itself is always seen as entirely neutral and benign. “It’s not the technology’s fault that students are using it to cheat.” It is also always seen as being completely inevitable, like a force of nature that emerges from the techno-ether and develops according to its own ontological imperatives.

On second thought, maybe the net impact of these AI systems will be positive after all. Here’s a possibility that just occurred to me. ChatGPT and its relatives are only the early stage of what is quite likely to be a complete technological absorption of public communicative acts. In a short time, social media content will become entirely AI-generated and AI-curated. There will eventually be nothing “social” remaining. It will be robots responding to tweets and memes and videos that were created by robots. And at that point, there will be no reason at all for actual human beings to engage with the system. Real life will be the only place left where actual humans can interact with other actual humans.

Imagine what that would be like. Imagine how wonderful it would be if you and I could spend all our time with each other IRL.

Author: Mark Seely

Mark Seely is an award-winning writer, social critic, professional educator, and cognitive psychologist. He is presently employed as full-time faculty in the psychology department at Edmonds College in Lynnwood, Washington. He was formerly Associate Professor and Chair of Psychology at Saint Joseph's College, Indiana, where for twenty years he taught statistics, a wide variety of psychology courses, and an interdisciplinary course on human biological and cultural evolution. Originally from Spokane, Dr. Seely now resides in Marysville.