I’ve devoted a lot of time and attention to the development of AI over the past few years, and my thoughts about it can be summed up very simply:
Why are we trying so hard to remove people from the world we’re building for ourselves?
I don’t understand why this is the thing we’ve decided we should want.
Years ago, I read some commentary that said this about social media: “More and more of our social lives are being designed and maintained by deeply antisocial people.”
This is especially true of AI. The people building it all seem to be deeply antisocial, even misanthropic. I could also update this criticism for AI in a couple of other ways:
- More and more of our creative lives are being designed and maintained by profoundly uncreative people.
- More and more of our mindful lives are being designed and maintained by people with shockingly little mindfulness.
I see ads for Microsoft Copilot that are desperate to convince me to turn to my phone before I turn to other people:
- A young woman is at the airport and sees her aunt coming; she pulls out her phone and says to Copilot, “It’s my aunt! She’s going to ask me if I’m still single. Let’s game plan this.” She had to know who she was there to pick up, so why didn’t she game plan with her friends before she arrived? And why doesn’t she know how to handle the conversation, if this is something her aunt always asks?
- A young couple is at a museum. It’s a lot bigger than they expected, so he asks Copilot to plan a quick tour. This is a less offensive use, but still: museums have docents and guides. They have maps. Or why not just wander and be surprised by what you discover?
Why would I turn to my phone rather than other people? And why are we treating ourselves like we can’t think without the help of our pocket computers?
I don’t understand that. I don’t understand why I should ever want that. Why are we trying so hard to remove people from our world?
I hear people tout the potential benefits of AI, and I have such vivid flashbacks to the early days of the internet, when it was going to democratize culture in ways never achievable before, or to the dawn of social media, which was supposed to do the same. Every new technology is supposed to save the world, and I’m still waiting. It’s an open question whether the internet has actually made things better on balance; it’s pretty much accepted now that social media has made our world much, much worse.
I suspect AI will be the worst yet.
AI is the apotheosis of the algorithmization of everything, and it’s abundantly clear that this has already worsened bias, reduced nuanced judgment, and polarized us. AI will supercharge these consequences, with the potential to inflict significantly more harm on the environment, too.
More practically, I recognize that AI is a tool, and it’s very good for certain specific uses. What worries me is how this technology is being inserted into all kinds of situations it’s not designed for. Spaces where it’s not remotely the right tool for the job. Spaces where it dehumanizes our world.
Over the past couple of decades, research has vastly expanded our understanding of how human and animal intelligence and consciousness work. We’re embodied intelligences. Our minds aren’t just things that ride around in our bodies. Knowledge lives in our bodies; our bodies are how we become conscious of the world.
Part of the reason I don’t trust AI is that it’s not embodied. All the information and knowledge it has of the world is entirely second-hand; it has no way of experiencing reality directly. I’ve wondered recently whether I would trust it more if we put it into a robot and gave it a body that lets it know the world for itself.
We also better understand now that human beings are social intelligences. We understand our world through our relationships to each other. Our knowledge, and even our memories, are shared experiences. I don’t see any benefit to us in removing other people from our processes of knowing and understanding.
I don’t understand why we’re trying so hard to remove interactions with other people from our world.
Mostly, though: I don’t trust the people who are building AI. I don’t trust their motives.
Sam Altman stated on the record that he wouldn’t be able to train his model if he had to obtain permission to use the training data. In other words: AI cannot be developed ethically. It’s inherently unethical technology. I can’t trust tools developed by such unethical means.
The ends don’t justify the means, AI’s usefulness is limited, and it can’t be worth the price we’re paying.
Sometimes I think about what AI tells us about ourselves. I consider the vast data centers and the energy demands required to get AI to the limited level of intelligence it’s achieved. And I think about how human beings can do so much more with an organ the size of two fists, in bodies that need an average of only 2,000 kcal per day.
We are miraculous!
Let’s celebrate that, rather than sidelining ourselves.