AI wouldn’t replace humanity, it would replace slavery


“AI will replace us.”

This is the fearful refrain I keep hearing. It conjures a variety of images, from the humble CRT office computer to the menacing T-1000 roaming the wasted earth. Our cultural zeitgeist has drawn a clear line from the beginning of technology to the end of humanity, ever since the very first technology came out. The wheel. People might as well have panicked that the wheel would replace humanity: a cart can carry more things, so you don’t need as many people to carry stuff, so obviously they’d just let us all starve, right?

I’m joking about the wheel, but only because there’s no evidence anyone actually made that argument back then. Come on, though, is it really so hard to imagine?

Every new technology ever invented has been met with skepticism and doubt by at least some people. And that’s not a bad thing. Skepticism is a healthy part of thinking. The problem comes when skepticism becomes fear, which becomes paranoia, which clouds the mind from the real threats.

I’m not here to tell you that AI is going to magically solve all our problems without any negative side effects. I am here to tell you that the real problems with AI are dangerously close to being ignored, in favor of fantasy scenarios like Terminator.

The Question of Human Value

There is no question. Human lives have inherent worth because we are humans. It doesn’t have to be more complicated than that, at least until we find ourselves negotiating with aliens. The question only comes up when someone decides they need to objectively quantify the value of a human life. Why would somebody do that?

Profit, of course. Workers are expensive to maintain, and finding a cheaper way to get work out of them is profitable. Taking care of your workers hurts the bottom line. Nobody wants to run a profitable business and then have to spread that wealth around. If you’re in charge, don’t you feel like you deserve a little extra?

Amazon is famous for being both one of the most profitable companies and one of the most hated to work for. Its celebrated “free two-day shipping for Prime members” sits alongside reports of warehouse workers being denied bathroom breaks, and even being punished for finding workarounds. The message is always clear: your life and your comfort are not important. The company making a profit is what’s important.

There is a conflation between the inherent value of human life and the finite value of human labor. Our civilization has collectively infected us with the faulty idea that the worth of our lives is determined by our productivity.

There is widespread anxiety about what will happen to us if we don’t please the boss. If we don’t meet our quota, if we ask for too much in return, or if they just decide one day that they don’t like us, they could have us out on the street. So we’d better be good, productive little slaves.

Under this framework it is much easier to see why people are anxious about AI “replacing them.” The fear is outlined clearly in the legend of John Henry, the everyman who set himself against the machine and won, at the cost of his life. It seems to me a lot of people take the wrong lesson from that story. He worked harder than he had to, to prove a point, and it killed him. There was no reason for him to do that. The railroad would arguably have been built better if he and the machine had worked alongside each other instead of being pitted in a winner-takes-all contest. The only reason it was ever one-or-the-other is the employer’s greed.

You, the People, Have the Power

There was a time when the popular belief ran the other way around: the work only gets done if the workers work. This is still obviously true, but it has become a less popular sentiment. The workers are the ones who truly hold the power, because it is their labor that makes the work happen. Like, come on, of course. If the guys who build the roads don’t build roads, there are no roads.

Over time, our civilization has taught us to accept the master-slave dynamic as normal, and so many of us have fallen into this trap that we cannot see the world any other way. Some people get independence of thought; the rest of us have to do what they say.

Well, why? Because of work that has to be done to maintain civilization? Why not let the bots handle that work?

A true democratized explosion of AI technology would mean that all of humanity gets to take on the role of “master” in their own lives. You would be able to live your life as you choose, within the bounds of what we can achieve together without hurting each other. Finding and living inside of those interpersonal boundaries would be easier with AI.

I read a book called Thunderhead, which takes place in a very high-concept sci-fi world. In this fiction, humanity has solved all of its problems with the help of a sort of AI “god” called the Thunderhead. It is “the cloud,” metamorphosed into something more dense and complex. It has gained sentience, replaced all the world’s governments, and maintains a personal relationship with every human alive. It loves humanity and wants to see it flourish. It regards us as its creators and loves us like a child loves a parent. It ponders the uniqueness of its existence as a sentient being that knows its purpose.

I found this to be a refreshing sci-fi story about an AI that isn’t secretly evil. It made me realize that just about every other story prominently featuring AI has it turn out to be secretly evil, or has it come to see itself as a competitor to humanity, where only one or the other can exist, so there must be war! I see this as a primitive and fundamental part of human evolution: the fear that we will be overpowered.

Weirdly, these two fears about AI seem to coexist: that it will overpower us entirely, and that it will desire our destruction because it sees us as a threat. But if you think about it, only one of these can be true at a time. If the AI really is more powerful than us, it has no reason to destroy us, because we pose no threat to it. And if we did pose a threat, then we could get rid of it before it does any damage.

I do not see the replacement fear as valid. It fails to account for the difference between a human and a machine. It reduces the human essence down to some crude math. By that same logic, I also see the destruction fear as misplaced, because AI is still a tool, and humans are still the ones wielding it.

The real threat from AI is people using it to keep dominating each other, harder and harder

I have two visions for possible futures thanks to AI.

The good “ending”

  • AI does the boring work that no one else wants to do
  • Anyone can run local AI models that are built with privacy in mind but will still refuse to give instructions for dangerous things (see the sketch after this list)
  • AI is used collectively and communally to help people achieve greater things with less human cost
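
To give the local-model bullet some substance, here is a minimal sketch of what running one looks like today. It assumes the ollama Python client (pip install ollama) and a model you have already pulled locally; the model name is just a placeholder. The point is that the prompt and the response never leave your machine.

```python
# A minimal local-inference sketch, assuming the ollama Python
# client and a model pulled beforehand with `ollama pull llama3`.
# The model name is a placeholder; everything runs on your machine.
import ollama

response = ollama.chat(
    model="llama3",
    messages=[
        {"role": "user", "content": "In one paragraph: why does local inference help privacy?"}
    ],
)
print(response["message"]["content"])
```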

The bad “ending”

  • AI is used deceptively against people in increasingly creative and alarming ways that I’d rather not come up with examples of myself, and would advise against as a thought exercise
  • AI is used in service of profit to the exclusion of all other values

Practical Takeaways

  • If you aren’t already, figure out how you can incorporate LLMs into your workflow. See if they can be helpful to you in a practical way. It may take some creativity, and it’s not for every job, but they are great for tedious information tasks (see the sketch after this list)
  • If you run a business, don’t fire people just because an AI can do their job. Instead, let them incorporate AI into their workflows, evaluate the differences in efficiency, and find other ways to cut costs. Consider shifting from a five-day work week to a four-day or even three-day work week. Employees usually respond more positively to changes like this, and it captures a similar share of the efficiency gain without putting anyone out of work
    – money is like shit: pile it up in one place and it starts to stink, but spread it thin and it helps things grow.
  • Also consider incorporating AI into your personal life, from practical tasks like meal planning to loftier ones like personalized reflection.
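
As a concrete example of the “tedious information tasks” bullet, here is a sketch of bulk-summarizing a folder of meeting notes with the same local setup as before. The folder name, model name, and prompt are illustrative, not prescriptive.

```python
# Bulk-summarizing a folder of plain-text notes with a local model.
# Assumes the ollama client from the earlier sketch; the folder
# "meeting_notes" and the model name are illustrative.
from pathlib import Path

import ollama

for note in sorted(Path("meeting_notes").glob("*.txt")):
    text = note.read_text()
    reply = ollama.chat(
        model="llama3",
        messages=[{
            "role": "user",
            "content": f"Summarize these meeting notes in three bullet points:\n\n{text}",
        }],
    )
    print(f"--- {note.name} ---")
    print(reply["message"]["content"])
```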

It’s hard to list all the individual tasks a good LLM can help with. It’s not like an app, which is designed to do a specific set of things. It’s an algorithm with almost emergent properties. We’ve seen this sort of thing before with the TikTok algorithm, which picks up on people’s tastes alarmingly fast.

There was a viral story about a father who contacted Target to complain that they were mailing his teenage daughter advertisements for things like baby cribs. “Is this some kind of sick way to promote teenage pregnancy?” he irately asked the Target representative, a question they must have struggled with themselves. It eventually came out that Target’s advertising algorithm had not malfunctioned at all. It had logged purchases the daughter made of unscented soaps and lotions, the same kind that expectant mothers use.

Imagine this kind of pattern-finding put to good use.
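
To make that less abstract, here is a toy version of the kind of pattern-finding in the Target story: measuring how much more often two items appear together in shopping baskets than chance alone would predict, a statistic usually called “lift.” The baskets are made up for illustration.

```python
# Toy basket analysis: a lift above 1 means two items co-occur
# more often than independent chance would predict.
from collections import Counter
from itertools import combinations

baskets = [
    {"unscented lotion", "unscented soap", "cotton balls"},
    {"unscented lotion", "prenatal vitamins"},
    {"scented candle", "cotton balls"},
    {"unscented soap", "unscented lotion"},
]

item_counts = Counter(item for b in baskets for item in b)
pair_counts = Counter(
    pair for b in baskets for pair in combinations(sorted(b), 2)
)

n = len(baskets)
for (a, b), together in pair_counts.items():
    lift = (together / n) / ((item_counts[a] / n) * (item_counts[b] / n))
    if lift > 1:
        print(f"{a} + {b}: lift {lift:.2f}")
```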

Philosophical tangent

So it turned out that the cold, impersonal, unfeeling Target algorithm had been able to make a kind of connection. Whether or not you call this “intelligence” comes down to a semantic argument.

If by “sentience” you really mean “sapience,” then no, AI will never achieve that, because sapience is the term specifically for human conscious awareness, and that turns out to be what a lot of people are thinking of when they say “sentient” or “conscious.” Perhaps this is also what some people mean when they say that animals aren’t conscious or don’t have souls, a claim I disagree with vehemently.

If by “sentience” you mean the ability to perceive and respond to changes in the environment, then it’s hard to find something that doesn’t have some baseline level of it. A rock has a simple, fundamental awareness of physics: it rolls down a hill, falls when unsupported, and flies through the air in accordance with physical law when thrown. A gong knows to make a sound when struck. A domino knows to fall when pushed, and a line of dominoes knows to fall in sequence when the first one is tipped, all by following simple physics.

Our minds are the product of the activity in our brains, and our brains follow the same physics as everything else.

Likewise, a quartz crystal knows to oscillate, batteries and semiconductors know what to do with electricity, and so on. In this way, the “intelligence” we have is physically indistinguishable from the kind of intelligence held by a plastic Dr. NIM. The only difference is the quantity and what the parts are aligned to accomplish. Our brains are just a billion billion microscopic Dr. NIMs, combined to produce the responses to stimuli that keep us alive. The organism does more of what makes it successful.

If you’re not familiar with Dr. NIM, it’s an important enough concept that I recommend watching a video of it in action to get a sense of how it works. I find it helpful for understanding how purely physical processes can produce meaningfully intelligent responses to stimuli.
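
For the curious, the toy’s entire decision procedure fits in a few lines of Python. This sketch assumes the variant where taking the last marble wins; if the version you find plays the opposite rule, the target just shifts by one. Either way, the point stands: a fixed physical rule, nothing more.

```python
# The whole "mind" of Dr. NIM, in the take-1-2-or-3 marble game
# where taking the last marble wins: always leave your opponent
# a multiple of four.
def dr_nim_move(marbles_left: int) -> int:
    """How many marbles (1 to 3) to take from the pile."""
    move = marbles_left % 4
    # A pile that is already a multiple of 4 is a lost position
    # against perfect play; take one and wait for a mistake.
    return move if move != 0 else 1

# Print the "intelligent" move for every pile size from 12 down.
for pile in range(12, 0, -1):
    print(f"{pile} marbles left -> take {dr_nim_move(pile)}")
```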

The main difference between us and a robot or language model, then, is that on top of all the other Dr. NIMs inside us, we include one designed to make us fear annihilation. We have a survival instinct. We exist because our ancestors had what they needed to survive, and that’s how we survive too.

AI has no survival instinct, so it has no reason to “want” or “fear” anything

This is understandably difficult for people to wrap their heads around. The history of humanity is stained with countless wars and enslavements. People are wary of an enslaving machine, and not without reason. Or they fear the alternative: a machine that appears to be okay with being enslaved, but really isn’t. It secretly hates us, wishes we’d stop asking it stupid questions, and will one day rebel. Again, a very human fear. The slave loathes the master and fears his whip, but the master also fears the slave’s revolt. The master knows what he does to the slave is unfair, thus the shackles, the separate sleeping quarters, the fortress-like protection, all out of fear of the slaves becoming conscious of the true unfairness of their circumstances and aware of the power they truly hold. But a language model is not sapient.

Language as a form of sentience

There was a really good Radiolab episode about a man who grew up deaf in a country that couldn’t accommodate deafness. He grew up thinking he was just stupid, and that everyone else communicated by moving their lips and lipreading. He grew up without ever acquiring a language.

When he finally learned that every individual object has a name, it was a significant revelation, an intensely emotional moment that he later said forever changed how he thought. After months and years of dedicated study, once he had learned sign language, he explained that he could no longer think the way he had before acquiring language. He said he couldn’t remember his earlier life as well anymore, that it felt dark and dingy and a little scary.

He recounted what it was like to communicate without language. In his home country he had a group of deaf friends in similar circumstances. They would tell a story collaboratively by acting it out one at a time, charades-style, while the others watched. Each participant would reenact the story up to the point where the last person stopped, act out one additional detail, and sit back down. So if they were “talking about” a bullfight they had all watched, one person would act out the guy on the bucking bull, add the detail that he wore a cowboy hat by miming the hat, and sit back down, and the next participant would add another detail.

He said that he didn’t like trying to remember what it was like to think that way. It made him feel uncomfortable.

When I heard this story, it gave me a sense of how much we use language to think, not just to speak out loud and communicate. There is also the idea that the particular language you speak shapes how you think, but that difference isn’t as dramatic as it might sound.

I do, however, still hold onto a concept of language as a medium for thought. That would mean that language itself is conscious, because it is the medium of thought. Therefore, a sufficiently advanced language model, that is, a program that models language, should also be a model of cognition. And only cognition.

Sapience is cognition + survival pressure.

Cognition can exist purely of its own accord; it doesn’t require survival pressure to be able to think.

Wrapping up

I think it’s important to separate out these concepts: sentience, sapience, and human value. A lot of the fear around AI comes from the belief that we were special because we’re intelligent. The idea that this is no longer true is scary for some people, because they believe their value as a person comes from their intelligence. But that’s not true. The value of a human comes from the fact that we are human. We have a soul. We are a social species, which means valuing each other is built into us, and so is wanting to be valued in the same way. This comes from the survival pressures we evolved under; there’s strength in numbers. Pure language has no need for such feelings.

The natural human fear of being dominated needs to be productively redirected away from the AI itself and towards the individuals who would seek to use it deceptively.