AI, computers and our own view of life – 2nd Digression

Read this digression in Portuguese.

Humans vs Computers - Brains vs AI

I read this comic last week and I asked myself: “What’s the big deal here?”
Machines overcoming humans: what does that mean, exactly, and why does it matter?
For starters, I would say this is something pretty much unique in Earth's history, because it would most likely be the first time a creation overcomes its creator. (No, I'm not going to start discussing God and humans and who created whom, or whether, supposing God actually created humans, humans are now overcoming him. Not now, at least. And I'll likely transcribe a digression about why I didn't capitalize the "h" in the personal pronoun referring to God.)

Right, but what exactly can machines do? A lot, right? Yes, but what can they do by themselves? Only what humans tell them to do. So what's the problem if we create a machine that is better than humans at playing chess, Go, or even Jeopardy!? Even a simple calculator can do math better than most humans, and I don't see people complaining or worrying about it…

But there seems to be a lot more to computers and AI than meets the eye. What computers do better than humans are completely mechanized, clearly algorithmic tasks (but can't all tasks be completed by following an algorithm?). Just think about what your usual computer program does. A spreadsheet takes the data you feed it and runs the algorithm you set it to. The same goes for a calculator. The same goes for a chess computer. The same goes for a table-tennis-playing computer. But scientists want more! A lot more! And it seems there are some important questions that humanity will bump into as we achieve more and more with AI.
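To make that contrast concrete, here's a minimal sketch (in Python; the function and numbers are invented for illustration) of what a completely mechanized, algorithmic task looks like: the machine just executes a fixed recipe that a human wrote.

```python
# A fixed, human-written algorithm: the machine contributes nothing
# beyond executing the recipe, step by step.
def monthly_interest(balance: float, annual_rate: float) -> float:
    """One month of simple interest -- the kind of task a spreadsheet
    or calculator does faster and more reliably than we do."""
    return balance * (annual_rate / 12)

# The "intelligence" is entirely in the formula we chose; the computer
# only evaluates it, however many million times we ask.
print(monthly_interest(1000.0, 0.05))  # ~4.17
```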

Some AI can learn and create new patterns and algorithms, like the table tennis one, but what is it actually doing there? Can we call it "learning"? Well, most likely we can. But can we call it "knowledge"? Now, that's a bigger problem (one that includes the very meaning of "knowledge").
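Mechanically, "learning" usually amounts to nudging internal numbers until the machine's outputs match examples it was shown. A toy sketch of that idea (purely illustrative; no real table-tennis robot is this simple):

```python
# Toy "learning": fit a single parameter w so that output ~= w * input,
# by nudging w to shrink the error on each example. This is the skeleton
# of gradient-descent-style learning, stripped down to one line of math.
examples = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (input, desired output)

w = 0.0             # the machine's entire "knowledge" is this one number
learning_rate = 0.05
for _ in range(200):
    for x, target in examples:
        error = w * x - target
        w -= learning_rate * error * x   # nudge w against the error

print(round(w, 2))  # ~2.04 -- a pattern extracted from data, not programmed in
```

Whether that updated number deserves to be called "knowledge" is exactly the question above.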

But before we venture into those questions, what exactly is the problem with having a machine do math better than us humans? Or solve the Rubik's Cube faster than the fastest human can? Is it a self-esteem issue? Does creating something that can do a task better than us show our limitations? Or does it prove our abilities? Because even though it's not us completing the task, the first step toward solving it was ours (we created the computers that can make fast calculations). But this still seems different from using wheels to move faster. Computers and autonomous machines aren't mere tools, right? Yeah… well, aren't they?

Is it just social fear? Because we all know that as soon as a machine is introduced to the production line, a lot of human workers lose their jobs and only a handful of qualified positions are created (because we still need technicians to assemble the machines and do periodic maintenance). The unemployment rate goes up while production also goes up, and chaos sets in because production is too high, consumption is too low, deflation kicks in, factories can't keep their profits, workers are fired, the economic apocalypse begins, and it's all the machines' fault! Right? And if even computers without AI can do that, imagine what an AI could do!? So let's destroy the machines! Let's all rage against the machines and reclaim our own space! Come forth, my friends! Let's… stop for a moment and think this through. Or maybe not; let's just destroy the machines, including that smartphone and tablet you have there with you. They're the devil's work too, OK?

Maybe the social fear is not related to work and the economy, but to the concepts of life and being alive? I remember watching a video about AI development where one scientist raised the question of whether it'd be ethical to turn an AI off. I'm pretty sure the video was about ASI (Artificial Superintelligence) or SAI (Strong Artificial Intelligence), but the question struck me (as far as I could understand, an ASI is an AI that is better than humans in every area, and a SAI is an AI that can think and work as well as a human brain). What's the problem with turning a machine off? What's the problem with turning a thinking machine off? What's the problem with turning a SAI or ASI off?

I do understand why this question is important. A SAI or ASI would be a thinking and sentient being: completely autonomous and independent, able to think and reason like a human without human interference. But are they "alive"? And would turning them off be the same as killing a living being? What does it mean to be "alive"? Is being able to think, learn, and gain knowledge what defines "living"? Or does it apply only to what we'd call "biological beings," composed of cells? Are plants "alive"? Well, nobody really argues against that. But do they think, learn, and gain knowledge? That's something more people would argue against. Is consciousness proof of being "alive"? Do you have to do something to be "alive"? Are you as alive when you're kept in a coma at a hospital as when you spend your days completely isolated in your house, doing nothing besides surviving by yourself (assume you have an unlimited supply of non-perishable food)?

Can we really find a definition that is broad enough, and yet precise enough, to pinpoint what it means to be "alive"?

If (or maybe when) AI development reaches the point of being called SAI or ASI, these questions will arise. But leaving them aside, what would it mean to pull the plug on a SAI/ASI? Is it really analogous to killing someone? I would argue it's closer to putting someone into a coma than to actually killing them, because I'm assuming a turned-off SAI/ASI can be turned on again simply by plugging it back in. Memories, abilities, and knowledge would likely remain intact as long as the processing and storage devices were still functional, which is also the case for someone who's been in a coma for a long time. The main difference is probably that someone in a coma still needs to be fed and to breathe, while a turned-off computer doesn't. Also, let's not forget that a SAI/ASI might well be a program, so, in theory, it could be transferred from one computer to another without a problem.
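The coma analogy leans on the fact that a program's state is just data that can be saved and restored, even on different hardware. A minimal sketch of that idea (pure illustration; the state of a real SAI/ASI would be vastly larger and messier):

```python
import pickle

# The agent's "memories" are just data; suspend and resume is serialization.
agent_state = {"name": "HAL-ish", "memories": ["learned chess", "met Dave"]}

# "Pulling the plug": persist the state to disk before power-off.
with open("agent_state.pkl", "wb") as f:
    pickle.dump(agent_state, f)

# "Waking from the coma" -- possibly on a different machine entirely.
with open("agent_state.pkl", "rb") as f:
    revived = pickle.load(f)

assert revived == agent_state  # nothing was lost in the interim
```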

Still, would a SAI or ASI know that? Would either one accept the fact that they could be "revived" at any time in the future just by giving them power again, regardless of how long they've been "dead"? Would that affect how they react upon learning that someone is trying to pull their plug? In other words, could this happen?

If we accept that they'd fear and/or try to prevent their plugs from being pulled, then wouldn't it just mean that we made them too much like us? We are afraid of dying or, at least, we all try to avoid it (and don't try to deny it: your body wants to live and gives its all to keep you alive, regardless of what your conscious self says). It's an instinct. Cut yourself and your body will rush to stop the bleeding. Eat something poisonous and your body will rush to get rid of it. Try to jump off a roof and your body will instantly try to stop you. Jump off the roof and your body will rush to minimize the damage when you hit the ground. That's something beyond our own control.

Maybe they ought to be similar to us? If we're going to build machines that can live with us, help us and, more importantly, interact with us, wouldn't we be much more comfortable if they were similar to us? Not only in looks (that's why we try to build ever more anthropomorphic robots) but in their way of thinking. Would you like to talk about life and death with someone (or something) that does not comprehend the meaning we give to those concepts and, even worse (or better?), is not afraid of death because it doesn't actually suffer from it?

There are lots of issues around artificial intelligence. Knowledge, life, ethics, the economy: these are just a few of the broader topics that surround the development of AI, and each one opens a whole lot of new avenues to discuss. Maybe we should just create a supercomputer to help us answer these questions?

Some interesting texts to read:
Ethical Issues in Advanced Artificial Intelligence
Three Laws of Robotics
The Myth of the Three Laws of Robotics – Why We Can’t Control Intelligence
