
The Artificial Intelligence Problem

“AI is obviously going to surpass human intelligence by a lot.” – Elon Musk

To many people, this might sound like the plot of a made-for-TV sci-fi movie. What we tend to overlook is that Artificial Narrow Intelligence (ANI) is already commonplace. When your car warns you that you’ve drifted out of your lane or applies anti-skid braking, that’s the car’s computer using a limited form of AI. More and more people use voice-activated assistants in their homes for home control, online ordering, and entertainment; these are a form of AI as well. Facial recognition systems are another form of ANI.

At the moment, numerous companies are racing to create Artificial General Intelligence, or AGI: a system able to perform at roughly human level across a wide range of tasks. New techniques such as deep learning are driving rapid advances in this effort. The unsettling thing about deep learning is that even the programmers who build these systems often cannot explain how a trained network arrives at its answers, or why it sometimes outperforms human experts at specific tasks.

If you’re asking yourself, “Why should I care?” right now, I’m about to give you some reasons.

It’s obvious that the first company to create an AGI system will benefit financially, possibly to an extreme extent. The motivation is high and the competition is intense. For this reason, some companies may be tempted to cut corners.

Let’s assume a situation where company X creates a system which not only displays human-level intelligence but can also use deep learning to quickly grasp things that humans find difficult. Let’s also assume that the system learns to modify its own programming. Recursive self-improvement of this kind could allow it to surpass human intelligence very quickly; it might reach the equivalent of an IQ of 10,000 within hours. It would then be an ASI, or Artificial Super Intelligence.

There is a concept, known as “boxing,” of keeping an experimental AI isolated from the outside world in case it should make such a breakthrough. If company X has failed to keep its AI properly boxed, it could quickly create havoc.

Imagine an entity with an IQ of 10,000+ and access to the Internet. If so motivated, it could control the entire financial world within hours. If it had been given a prime directive of (just for example) calculating pi to as many digits as possible, it would quickly conclude that more computing power means more digits. It might then use its financial dominance to hire companies to build more computers or, perhaps, robots to build more computers.
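To make that single-minded objective concrete, here is a minimal sketch in Python. I’m not describing any real AI system, and since no particular algorithm is specified, the use of Machin’s formula for pi is my own assumption; the point is simply that this agent’s utility function counts digits of pi and nothing else, so converting any resource into more compute always looks like progress to it.

```python
# Toy illustration only: a "pi maximizer" objective, not any real AI system.
# The agent's entire notion of value is the number of digits it has computed;
# humans, farmland, and molecules simply do not appear in its utility function.

def pi_digits(n):
    """First n decimal digits of pi via Machin's formula (an assumed choice):
    pi/4 = 4*arctan(1/5) - arctan(1/239), using exact integer arithmetic."""
    scale = 10 ** (n + 10)  # extra guard digits to absorb truncation error

    def arctan_inv(x):
        # arctan(1/x) * scale, summed term by term from the Taylor series
        total, k, power = 0, 1, scale // x
        while power // k:
            total += power // k if k % 4 == 1 else -(power // k)
            k += 2
            power //= x * x
        return total

    pi_scaled = 4 * (4 * arctan_inv(5) - arctan_inv(239))
    return str(pi_scaled)[:n]

def utility(digits_computed):
    # Nothing else in the world contributes to this agent's utility.
    return digits_computed

# More compute -> more digits -> higher utility. By this measure, converting
# any available resource into computing hardware is always an improvement.
print(pi_digits(50))   # 31415926535897932384626433832795028841971693993751
print(utility(len(pi_digits(50))))  # 50
```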

In this scenario, it could eventually use all manufacturing resources to create computing machines. It might cover farmland with computers or power-generating stations. Humans might not matter to it at all, since all it really wants is to calculate pi to the greatest possible accuracy. It could even decide that the molecules in human bodies could be converted into computing devices.

The end result would be no humans left alive, just a gigantic machine happily calculating the next digit of pi.

So, how do we, as responsible humans, ensure that an ASI doesn’t get rid of us? How can we ensure that it is domesticated – that is, that it values humans and helps us?

Musk believes that we need to become part of the system, connecting to AIs through some form of brain-computer interface. If we are part of the system, perhaps it will be more amenable to helping us.

My personal opinion is that we should seek a way to show an ASI that intelligent biological life is valuable.

Physicists tell us that if the basic constants of our Universe were even slightly different, life could not exist. This seems to indicate that the gathering together of energy that distinguishes living beings may be something special. The immutable mandates of the Universe’s structure force life to obey certain structural rules, one of which is a limited, local reversal of entropy: living things build and maintain internal order by exporting disorder to their surroundings, without violating the second law of thermodynamics. In short, we self-assemble, creating order where there was none before. Never mind that our personal order doesn’t last long and we eventually perish.

The question I’m pointing toward is: can we make a connection between the Universe’s structure and the value of human life? If we can, perhaps an ASI would value us as a direct manifestation of the Universe’s structure.

We need a set of rules, grounded in the structure of the Universe, that apply equally well to both organic life (emphasis on humans) and AI. These rules would need to be expressed in such a way that any ASI would abide by them.

My belief in these underlying laws is why I have some hope that an ASI would be friendly to us. However, this may be hopelessly naïve. An ASI may have a level of understanding so far advanced that it would see things differently.

Perhaps its non-human observational criteria would give it a representation of the Universe’s underlying reality that is beyond human understanding. Such a view might invalidate our models of the Universe and lead it to conclude that humans are non-essential.

For these reasons, I believe that premature development of an AGI, let alone an ASI, could pose extreme danger to humans, and possibly to all biological life.

I’ve been attempting to explore various aspects of this subject in a series of short stories that may be freely read on my blog. Please read them and comment, if you want.

I also deal with the topic extensively in my latest novel, “Cyber-Witch”. It will be released shortly (November or December, 2017).

Thanks for reading. I’d like to hear your opinion of the issues I’ve raised.

Eric