
Musings on Artificial Intelligence: Dangerous Times

***Memory cards: two gigabytes separated by a few years – both cards are outdated today.***

Years ago I was giving a seminar at the University of Colorado during which I mentioned the possibility of Artificial Intelligence. I explained that most computer people used the abbreviation “AI”. I was surprised when a member of my audience broke out in laughter.

I asked him what was funny and he explained that he was a large animal veterinarian and, to him, AI meant something completely different. The whole class laughed.

Author Yuval Harari believes that in 300 years, Homo sapiens will not be the dominant life form on Earth, if we exist at all. He thinks it likely that we will use bio-engineering, machine learning, and artificial intelligence either to upgrade ourselves into a different type of being or to create a totally different kind of being that will take over. In any case, he projects that in 200 or 300 years, the beings that dominate the Earth will be far more different from us than we are from Neanderthals or chimpanzees.

He also states that cooperation is more important for success than raw intelligence. Since AI is far more cooperative than humans, it will have an advantage. For example, self-driving cars can be connected to one another to form a single network in a way that individual, human-controlled cars never can.

The real question is whether AI’s cooperative advantage will have beneficial results for humans or prove to be disadvantageous. Let’s examine various ideas that may be pertinent to the answer.

There’s confusion among both the general populace and science fiction writers about the meaning of AI. People aren’t sure whether it involves intelligence or consciousness, or both paired together as in the human organism. Most science fiction stories center on the premise that AI will be an artificial consciousness (AC) with super-human intelligence. This supposition is a purely human assumption, driven by the requirements of writing interesting stories.

I’ve most recently written two short stories based on the idea that AI will choose to emulate humans by implementing some form of programming that allows for emotions. I’m also well into the process of writing another novel that explores this issue.

Assuming robots will have emotions, fall in love, and want to destroy human competitors makes for interesting reading. However, those ideas may not apply in the real world of AI.

Can we agree that intelligence is not necessarily consciousness? I think that one can roughly define intelligence as the ability to solve problems. The ability to feel things emotionally may have nothing to do with intelligence, especially when considering AI. In bio-life, the two go together. Mammals solve problems by feeling things. Emotions assign meaning, and meaning is a necessary component of problem-solving for mammals. Computers do not have emotions, at least not yet, and possibly not ever.

There has been a lot of development in computer intelligence in the past decades, but very little development in computer consciousness. That’s understandable, since we humans have a hard time defining what our consciousness is and how it works.

Computers might be developing along a different path than humans. Humans are driven towards greater intelligence by way of consciousness; by the emotional awareness of comfort and discomfort and the urge to do something about those feelings.

On the other hand, computers may not ever develop emotional consciousness, but they do have the potential to form a non-conscious, linked super-intelligence. The important question is: what does a world of non-conscious super-intelligence look like? What are the ramifications of such a world? What is the impact of such a world on humans?

Nothing in our evolutionary past prepares us for that question. (Or maybe we’ve already answered it – a point I’ll get to a little later in my musing.)

Humans have animal requirements. The drives to avoid injury and death, to consume fuel, and to reproduce all provide motivation to bio-life. An AI won’t necessarily have those kinds of drives. The initial AIs might receive grafted-on human emotions from their creators, but machine learning has the potential to quickly morph those tendencies into something that humans won’t be able to understand.

According to Harari, humans tend to overestimate themselves and won’t be around in 300 years, because AI won’t have to do very spectacular things to replace most of us. I think his assumption that AI won’t have to do amazing things to put us out of work is on target. However, I suspect that his 300 years is too long an estimate.

Ray Kurzweil estimates that AI will reach human-level intelligence around the year 2029, with the so-called singularity, the point at which machine intelligence vastly supersedes our own, following by about 2045. Either date is much sooner than 300 years from now. The exponential rate of technological growth implied by Moore’s law (more a rule of thumb than a law) means that we humans will have to start learning how to live in an increasingly automated world very quickly. Take smartphones, for example. They seem to have been around forever, but they first hit the market in Japan in 1999. Most people today couldn’t imagine living without them.
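To make that exponential pace concrete, here’s a back-of-the-envelope sketch in Python. The two-year doubling period and the 1999-to-2029 span are my own illustrative assumptions, not figures from Kurzweil or Harari:

```python
# Rough illustration of Moore's-law-style exponential growth.
# Assumption (a rule of thumb, not a measurement): capacity doubles about every 2 years.

DOUBLING_PERIOD_YEARS = 2

def growth_factor(years: float, doubling_period: float = DOUBLING_PERIOD_YEARS) -> float:
    """How many times capacity multiplies over the given number of years."""
    return 2 ** (years / doubling_period)

# From the 1999 smartphone debut mentioned above to Kurzweil's 2029:
print(f"1999 -> 2029: roughly {growth_factor(2029 - 1999):,.0f}x growth")
# 30 years at one doubling every 2 years is 15 doublings, about 32,768x.
```

Fifteen doublings in thirty years: that is the kind of curve that makes “very quickly” an understatement.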

We’ve already seen real-world examples demonstrating that AI will soon be able to do most of the jobs that humans do, and do them better, without tiring or wavering attention. As a result of the transition from a human-factory-based economy to an automated-factory-based economy, we now face what could be called the Uberization of work. Work is metamorphosing from a career-based economy to a gig-based economy. At the moment, wealthy countries are faring better than economically disadvantaged ones in this scenario, since their better infrastructure allows a little more cushion for unemployed and under-employed workers, but I suspect that this advantage is temporary at best.

Here’s another example of how AI can replace humans, even in specialty knowledge-based tasks: An AI system can diagnose cancer better than a human. It turns out that even the most expert humans have quite a spectacular rate of error in such a task. A simple algorithm, while not as flexible as a human, will easily outperform the human norm, simply because it is consistent and doesn’t get tired or bored. It won’t miss any cues and will always draw the same conclusions based on experience. Humans are rather more variable than that.

Let’s personalize this for a moment. If you suspected you had cancer, wouldn’t you want the most accurate diagnosis possible?

Given AI’s performance advantage in most tasks, there is a distinct possibility that humans will lose their ability to generate value for the major systems that dominate our lives today. We could become useless from the viewpoint of the economic, military, and even political systems. These systems could lose the incentive to invest in human beings. What would happen then? How would the average human survive?

Will there simply be subsidies that provide food, housing, health care? Based on a brief look at our history, there will undoubtedly be various levels of subsidies. What will determine whether one has a gold-level subsidy or a brass-level subsidy? Prior ownership of resources might then become the benchmark for separating the haves from the have-nots. This situation may seem reprehensible, but when have humans ever treated each other as totally equal?

Of course, the development of AI could be interrupted by a world-wide catastrophe. An apocalyptic event could easily cast humanity back into the hunter-gatherer mode, and that is the precise existence in which humans evolved to thrive. Such a life requires a generalist with both physical ability and intellectual flexibility, paired with rapid learning and pattern-recognition skills. Could these be duplicated by an AI? Perhaps not as easily as they can be created in a biological entity.

Failing a doomsday scenario, AI will inevitably continue to develop. An idea cannot be killed once it has been born. It can be suppressed by a sufficiently strong authority, but concepts cannot be destroyed in the normal sense.

Assuming that there is no apocalypse, could AI perhaps find that humans are a useful, self-replicating resource? Given food and opportunity to engage in sex, we duplicate ourselves. How could an AI use us? Will we become a commodity? Could we become a self-replicating biological factory that automatically creates raw materials?

We’ve largely replaced directly useful jobs like farming with intellectual jobs where humans deal with ideas rather than basic needs. Are intellectual jobs necessary? Not really. Does the world need me to write science fiction? Does it require my science fiction in order to survive? Of course not. Can intellectual jobs be done by AI? Probably.

The question is: what is necessary? Corn is cheap, but if you let it sit around long enough in an oaken barrel, it can become whiskey and be valuable — to a human, not to a computer. There is a conflation of people being useful with the concept of people being valued. Useful is a judgment based on the production of something essential. Value is a human-based story: we decide what has value, and it’s not always what is useful.

Humans have both physical abilities and cognitive abilities. Machines are taking over in the physical field, replacing us in factories, and AI is starting to compete successfully in the cognitive field.

Do we have a third kind of ability, one we could fall back on? Just for controversy’s sake, how about spiritual ability? Is that a possibility? Could AI become spiritual? That’s the same as asking whether it could love in the same way that humans do. Could we move from jobs of the body to jobs of the mind and then to jobs of the heart?

Science fiction writers often assume that an AI will be automatically hostile to humans; that it will inevitably try to get rid of us. Various reasons have been given in stories, and numerous methodologies for extinguishing the human species have been postulated, ranging from the Terminator scenario to using poison-releasing nano-machines. These make for fun reading, but might not be accurate.

AI software will shortly be able to read and understand human emotions better than humans can. But will AI feel in the same way, or will it simply perform an analysis that allows it to predict our future actions? Either way, it will be completely consistent and startlingly accurate. Given such an ability, what would prevent an AI that wanted to get rid of humans from simply engaging in an effective propaganda campaign to convince us that we have no reason to exist?

Many people would simply give up when faced with such a campaign. They’d quit eating, quit reproducing, and quit trying to work. Loss of meaning is a terrible thing.

Given that low-skilled jobs are disappearing and not every human is able, or has the desire, to be trained for a high-skilled job, where will humans find meaning? What point is there to a world where every human is engaged in a nonproductive cycle of hyper-pleasure existence in, say, VR?

The writings of Viktor Frankl demonstrate that humans find their highest feelings of self-worth when they are engaged in meaningful activities. Those with meaning in their lives survive longer.

If you don’t have a job and you’re provided with the means to sustain your life, will you be able to find adequate meaning in VR and chemicals?

If not, what will be the outcome? What would such a world look like? What would happen to the odd misfit who cannot find adequate meaning in a VR existence?

Before I finish, I want to come back to the idea that I promised to address earlier in this post: what does a world of non-conscious super-intelligence look like?

My suspicion is that our Universe may be a primary representative of the answer to that question. If one assumes the wave nature of the elementary particles that make up the Universe, then one must also assume that the waves create interference patterns similar to those on a hologram. Waves and interference patterns can store data. Given the estimated size of the Universe, it’s a fairly safe guess that the storage potential is adequate to hold everything that has happened since the initial expansion event.

That’s point one. Point two is that chaotic systems sometimes have a tendency to self-organize. What if all that data storage somehow self-organized into a super-intelligence? What if it organized tiny parts of itself into the matter that we see when we look at galaxies and stars? What if it organized itself into transient forms that generated their own form of limited consciousness and asked absurd questions like these?

Regardless of your opinion on any of the questions I’ve raised, I sincerely appreciate your taking the time to read this post, and I hope that it provided you with things to ponder. We are rushing into the next stage of our evolution, and we absolutely must begin to answer these types of questions. I believe that our future depends on it.

Namaste!

Eric

Check my blog for my free short stories relating to AI: “Virtual Love” and “The Adventure of Life”.

Some of this post owes its existence to Ezra Klein’s interview with Yuval Harari. The interview was just too thought-provoking for me to ignore. Thanks.