
Dustfall selected as Florida Authors and Publishers President’s Awards 2021 Finalist

2021 FAPA Finalist!

What an excellent way to start an otherwise hot and possibly dull Saturday! I just received notice that Dustfall has been selected as a finalist in this year's Adult Science Fiction category. It remains to be seen which place it will take, but reaching the finals is an accomplishment in itself, given the quality of the competition.


My other two FAPA award winners are Cyberwitch in 2018 and Pirates of the Asteroids in 2020. Both are stories I'm pleased with, and both incorporate unique ideas into their plots. Cyberwitch takes a severely damaged woman through an astounding transformation via AI-enhanced nanobots, and introduces genetic chimeras and AI-bio-mech creatures, leading to a world where technology is indistinguishable from pure magic. Pirates is the story of a somewhat exceptional physics student who arrives in the asteroid belt just in time for a new war for independence. He finds himself suddenly thrust into a leading role, with all the benefits and problems anyone could want or hope to avoid.


Dustfall takes a single premise: the Earth has passed into an interstellar dust cloud that drops mRNA-like particles on the surface. Much of Earth's life has died or mutated into deadly monsters as a result. What can a young man, focused solely on survival and avoiding being eaten, do but fall in love with the first 'normal' girl he has met in years? He wants her enough to risk the unknown. She isn't immediately sure, but makes up her mind when they meet a second time. Then the mutants show up, and they want both of them. Now it's run, fight, or die!


If you read one of my books, please take a moment to leave a review. Marketing is always tricky, and reviews help a lot. Amazon makes it easy, but if you read the book elsewhere, you can drop me an email through this site.


Thanks greatly!
Namaste,

Eric


The Artificial Intelligence Problem

“AI is obviously going to surpass human intelligence by a lot.” – Elon Musk

To many people, this might seem like the plot of a made-for-TV sci-fi movie. What we tend to overlook is that Artificial Narrow Intelligence (ANI) is commonplace today. When your car warns you that you've drifted out of your lane, or applies anti-skid braking, the car's computer is using a limited form of AI. More and more people use voice-activated assistants in their homes for home control, online ordering, and entertainment; that is a form of AI. Facial recognition systems are another form of ANI.

At the moment, numerous companies are racing to create Artificial General Intelligence, or AGI. An AGI will be able to perform at roughly human level. New techniques such as deep learning are driving rapid advances in this effort. The interesting thing about deep learning is that the human programmers often cannot explain how a trained system works, or why it is able to learn and sometimes outperform expert humans at specific tasks.

If you’re asking yourself, “Why should I care?” right now, I’m about to give you some reasons.

It’s obvious that the first company to create an AGI system will benefit financially, possibly to an extreme extent. The motivation is high and the competition is intense. For this reason, some companies may be tempted to cut corners.

Let's assume a situation where company X creates a system that not only displays human-level intelligence, but is able to use deep learning to quickly comprehend things that humans find difficult. Let's also assume that the system learns how to modify its own internal programming. This could allow it to surpass human intelligence very quickly; it might reach an IQ of 10,000 in a few hours. It would be an ASI, or Artificial Super Intelligence.

There is a concept of keeping an experimental AI boxed up, not allowing it access to the outside world in case it should make such a breakthrough. If company X has failed to keep the AI properly boxed, it could quickly create havoc.

Imagine an entity with an IQ of 10,000+ that has access to the Internet. It could, if so motivated, control the entire financial world within hours. If it had been given a prime directive of (just for example) calculating pi to as many digits as possible, it could easily conclude that it could use more computing power to better execute its computations. In that case, it might use its financial dominance to hire companies to create more computers or, perhaps, robots to create more computers.
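For a concrete sense of that prime directive, here is a toy, deliberately bounded version in Python. It assumes the mpmath arbitrary-precision library and is only an illustration of the goal, not part of the scenario itself; the unbounded "as many digits as possible" version is precisely what makes the thought experiment dangerous.

```python
# Toy, bounded version of the "calculate pi to as many digits as possible"
# directive. Assumes mpmath (pip install mpmath), an arbitrary-precision
# math library.
from mpmath import mp

for digits in (10, 100, 1000):
    mp.dps = digits            # working precision, in decimal places
    print(f"pi to {digits} digits:")
    print(mp.pi)               # pi evaluated at the current precision
```

An ASI pursuing the unbounded version of this loop has no natural stopping point, which is exactly why it might commandeer ever more computing power.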

In this scenario, it could eventually use all manufacturing resources to create computing machines. It might cover farmland with computers or power generating stations. Humans might not matter to it at all, since all it really wants to do is to calculate pi to the maximum accuracy. It could even decide that the molecules in human bodies could be converted into computing devices.

The end result would be no humans left alive, just a gigantic machine happily calculating the next digit of pi.

So, how do we, as responsible humans, ensure that an ASI doesn't get rid of us? How can we ensure that it is domesticated; that is, that it values humans and helps us?

Musk believes that we need to become part of the system and interface with AIs using some form of brain interface. If we are part of the system, perhaps it will be more amenable to helping us.

My personal opinion is that we should seek a way to show an ASI that intelligent biological life is valuable.

Physicists tell us that if the basic constants of our Universe were even slightly different, life would not exist. This seems to indicate that the gathering together of energy that distinguishes living beings may be something special. The immutable mandates of the Universe's structure force life to obey certain structural rules, one of which is a limited, local reversal of entropy. In short, we self-assemble, creating order where there was none before. Never mind that our personal order doesn't last long and we eventually perish.

The question I’m pointing toward is: Can we make a connection between the Universe’s structure and the value of human life? If we can do that, perhaps an ASI would also value us as an example of a direct manifestation of the Universe.

We need a set of rules, based on the structure of the Universe, that apply equally well to both organic life (emphasis on humans) and AI. These rules need to be expressed in a way that any ASI would abide by them.

My belief in these underlying laws is why I have some hope that an ASI would be friendly to us. However, this may be hopelessly naïve. An ASI may have a level of understanding so far advanced that it would see things differently.

Perhaps its non-human set of observational criteria will serve as a representation of the Universe’s underlying reality that is beyond human understanding. This might invalidate human models of the Universe and lead to the conclusion that humans are non-essential.

For these reasons, I believe that premature development of an AGI, let alone an ASI, could pose an extreme danger to humans and possibly to all biological life.

I’ve been attempting to explore various aspects of this subject in a series of short stories that may be freely read on my blog. Please read them and comment, if you want.

I also deal with the topic extensively in my latest novel, “Cyber-Witch”. It will be released shortly (November or December, 2017).

Thanks for reading. I’d like to hear your opinion of the issues I’ve raised.

Eric


Musings on Artificial Intelligence: Dangerous Times

***Memory cards: two gigabytes separated by a few years – both cards are outdated today.***

Years ago I was giving a seminar at the University of Colorado during which I mentioned the possibility of Artificial Intelligence. I explained that most computer people used the abbreviation “AI”. I was surprised when a member of my audience broke out in laughter.

I asked him what was funny and he explained that he was a large animal veterinarian and, to him, AI meant something completely different. The whole class laughed.

Author Yuval Harari believes that in 300 years, Homo sapiens will no longer be the dominant life form on Earth, if we exist at all. He thinks the most likely possibility is that we will use bio-engineering, machine learning, and artificial intelligence either to upgrade ourselves into a different type of being or to create a totally different kind of being that will take over. In either case, he projects that in 200 or 300 years, the beings that dominate the Earth will be far more different from us than we are from Neanderthals or chimpanzees.

He also states that cooperation is more important for success than raw intelligence. Since AI is far more cooperative than humans, it will have an advantage. For example, self-driving cars can be connected to one another to form a single network in a way that individual, human-controlled cars never can.

The real question is whether AI’s cooperative advantage will have beneficial results for humans or prove to be disadvantageous. Let’s examine various ideas that may be pertinent to the answer.

There's confusion among both the general populace and science fiction writers about the meaning of AI. People aren't sure whether it involves intelligence, or consciousness, or both paired together as in the human organism. Most science fiction stories center on the premise that AI will be an artificial consciousness (AC) with super-human intelligence. That supposition is a purely human assumption, driven by the requirements of writing interesting stories.

Most recently, I've written two short stories based on the idea that AI will choose to emulate humans by implementing some form of programming that allows for emotions. I'm also well into writing another novel that explores this issue.

Assuming robots will have emotions, fall in love, and want to destroy human competitors makes for interesting reading. However, those ideas may not apply in the real world of AI.

Can we agree that intelligence is not necessarily consciousness? One can roughly define intelligence as the ability to solve problems. The ability to feel things emotionally may have nothing to do with intelligence, especially when considering AI. In bio-life, the two go together: mammals solve problems by feeling things. Emotions assign meaning, and meaning is a necessary component of problem-solving for mammals. Computers do not have emotions, at least not yet, and possibly not ever.

There has been a lot of development in computer intelligence in the past decades, but very little development in computer consciousness. That’s understandable, since we humans have a hard time defining what our consciousness is and how it works.

Computers might be developing along a different path than humans. Humans are driven toward greater intelligence by way of consciousness: by the emotional awareness of comfort and discomfort, and the urge to do something about those feelings.

On the other hand, computers may never develop emotional consciousness, but they do have the potential to form a non-conscious, linked super-intelligence. The important question is: what does a world of non-conscious super-intelligence look like? What are the ramifications of such a world? What is its impact on humans?

Nothing in our evolutionary past prepares us for that question. (Or maybe we’ve already answered it – a point I’ll get to a little later in my musing.)

Humans have animal requirements. Avoiding injury and death, consuming fuel, reproducing: these are the things that motivate bio-life. An AI won't necessarily have those kinds of drives. The initial AIs might receive grafted-on human emotions from their creators, but machine learning has the potential to quickly morph those tendencies into something that humans won't be able to understand.

According to Harari, humans tend to overestimate themselves and won't be around in 300 years, because AI won't have to do anything very spectacular to replace most of us. I think his assumption that AI won't have to do amazing things to put us out of work is on target. However, I suspect that his 300 years is too long an estimate.

Ray Kurzweil estimates that the so-called singularity, the point at which AI supersedes human intelligence, will arrive around the year 2029. That is much sooner than 300 years from now. The exponential rate of technological growth implied by Moore's law (more a rule of thumb than a law) means that we humans will have to learn how to live in an increasingly automated world very quickly. Take smartphones, for example: they seem to have been around forever, but they first hit the market in Japan in 1999, and most people today couldn't imagine living without them.
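To make that exponential rate concrete, here is a small Python sketch of what the oft-quoted two-year doubling period implies over familiar spans; the doubling period is the usual rule-of-thumb figure, not an exact law.

```python
# What a capacity doubling every ~2 years (Moore's rule of thumb) implies
# over familiar time spans.
def growth_factor(years: float, doubling_period: float = 2.0) -> float:
    """Multiplicative growth after `years`, doubling every `doubling_period`."""
    return 2 ** (years / doubling_period)

for years in (10, 20, 50):
    print(f"{years} years -> x{growth_factor(years):,.0f}")
# 10 years -> x32
# 20 years -> x1,024
# 50 years -> x33,554,432
```

A 32-fold change in a decade is why a technology can go from novelty to necessity within a single generation.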

We've already seen real-world examples demonstrating that AI will soon be able to do most of the jobs that humans do, and do them better, without tiring or wavering attention. As a result of the transition from a human-factory-based economy to an automated-factory-based economy, we now face what could be called the Uberization of work: work is metamorphosing from a career-based economy to a gig-based economy. At the moment, wealthy countries are faring better than economically disadvantaged ones in this scenario, since their better infrastructure provides a little more cushion for unemployed and under-employed workers, but I suspect that this advantage is temporary at best.

Here's another example of how AI can replace humans, even in specialty knowledge-based tasks: an AI system can diagnose cancer better than a human. It turns out that even the most expert humans have a surprisingly high error rate at such a task. A simple algorithm, while not as flexible as a human, will easily outperform the human norm, simply because it is consistent and doesn't get tired or bored. It won't miss any cues, and it will always draw the same conclusions from the same evidence. Humans are rather more variable than that.
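As a minimal sketch of that consistency argument, the snippet below trains a simple classifier on scikit-learn's bundled breast-cancer dataset; the dataset and model are stand-ins for illustration, not a real diagnostic pipeline.

```python
# A simple, deterministic classifier on scikit-learn's bundled
# breast-cancer dataset -- a stand-in for a real diagnostic system.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Unlike a tired or distracted human, the fitted model returns the same
# diagnosis for the same input, every time.
print(f"test accuracy: {model.score(X_test, y_test):.3f}")
```

The point isn't the accuracy number itself; it's that the model never has a bad day.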

Let’s personalize this for a moment. If you suspected you had cancer, wouldn’t you want the most accurate diagnosis possible?

Given AI’s performance advantage in most tasks, there is a distinct possibility that humans will lose their ability to generate value for the major systems that dominate our lives today. We could become useless from the viewpoint of the economic, military, and even political systems. These systems could lose the incentive to invest in human beings. What would happen then? How would the average human survive?

Will there simply be subsidies that provide food, housing, health care? Based on a brief look at our history, there will undoubtedly be various levels of subsidies. What will determine whether one has a gold-level subsidy or a brass-level subsidy? Prior ownership of resources might then become the benchmark for separating the haves from the have-nots. This situation may seem reprehensible, but when have humans ever treated each other as totally equal?

Of course, the development of AI could be interrupted by a world-wide catastrophe. An apocalyptic event could easily cast humanity back into the hunter-gatherer mode, and that is precisely the existence in which humans evolved to thrive. Such a life requires a generalist with both physical ability and intellectual flexibility, paired with rapid learning and pattern-recognition skills. Could these be duplicated by an AI? Perhaps not so easily as they arise in a biological entity.

Failing a doomsday scenario, AI will inevitably continue to develop. An idea, once born, cannot be killed. It can be suppressed if there is a sufficiently strong authority, but concepts cannot be destroyed in the normal sense.

Assuming that there is no apocalypse, could AI perhaps find that humans are a useful, self-replicating resource? Given food and the opportunity to engage in sex, we duplicate ourselves. How could an AI use us? Will we become a commodity? Could we become a self-replicating biological factory that automatically creates raw materials?

We've largely replaced directly useful jobs like farming with intellectual jobs where humans deal with ideas rather than basic needs. Are intellectual jobs necessary? Not really. Does the world need me to write science fiction? Does it require my science fiction to survive? Of course not. Can intellectual jobs be done by AI? Probably.

The question is: what is necessary? Corn is cheap, but distill it and let it age long enough in an oaken barrel and it can become whiskey and be valuable – to a human, not to a computer. We conflate people being useful with people being valued. Useful is a judgment based on the production of some essential. Value is a human-based story – we decide what has value, and it's not always what is useful.

Humans have both physical abilities and cognitive abilities. Machines are taking over in the physical field, replacing us in factories, and AI is starting to compete successfully in the cognitive field.

Do we have a third kind of ability, one we could fall back on? Just for controversy's sake, how about spiritual ability? Is that a possibility? Could AI become spiritual? That's the same as asking: could it love the way humans do? Could we move from jobs of the body to jobs of the mind, and then to jobs of the heart?

Science fiction writers often assume that an AI will be automatically hostile to humans; that it will inevitably try to get rid of us. Various reasons have been given in stories, and numerous methodologies for extinguishing the human species have been postulated, ranging from the Terminator scenario to using poison-releasing nano-machines. These make for fun reading, but might not be accurate.

AI software will shortly be able to read and understand human emotions better than humans can. But will AI feel in the same way, or will it simply run an analysis that lets it predict our future actions? Either way, it will be completely consistent and startlingly accurate. Given such an ability, what would prevent an AI that wanted to get rid of humans from simply mounting an effective propaganda campaign to convince us that we have no reason to exist?

Many people would simply give up when faced with such a campaign. They’d quit eating, quit reproducing, and quit trying to work. Loss of meaning is a terrible thing.

Given that low-skilled jobs are disappearing, and that not every human is able or willing to be trained for a high-skilled job, where will humans find meaning? What point is there to a world where every human is engaged in a nonproductive cycle of hyper-pleasure existence in, say, VR?

The writings of Viktor Frankl demonstrate that humans find their highest feelings of self-worth when they are engaged in meaningful activities. Those with meaning in their lives survive longer.

If you don’t have a job and you’re provided with the means to sustain your life, will you be able to find adequate meaning in VR and chemicals?

If not, what will be the outcome? What would such a world look like? What would happen to the odd misfit who cannot find adequate meaning in a VR existence?

Before I finish, I want to come back to the idea that I promised to address at the beginning of this post: what does a world of non-conscious super-intelligence look like?

My suspicion is that our Universe may be a primary representative of the answer to that question. If one assumes the wave nature of the elementary particles that make up the Universe, then one must also assume that those waves create interference patterns, similar to those on a hologram. Waves and interference patterns can store data. Given the estimated size of the Universe, it's a fairly safe guess that the storage potential is adequate to record everything that has happened since the initial expansion event (a crude back-of-envelope follows).
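For a rough sense of scale, the holographic principle bounds the information a region can hold by its horizon area in Planck units. The figures below are my illustrative round numbers, not a rigorous calculation:

```python
# Crude back-of-envelope: holographic bound on the information content of
# the observable Universe. Round illustrative figures only.
import math

R = 4.4e26        # radius of the observable Universe, meters (~46 Gly)
l_p = 1.616e-35   # Planck length, meters

A = 4 * math.pi * R**2                  # area of the cosmic horizon, m^2
bits = A / (4 * l_p**2 * math.log(2))   # holographic bound, in bits

print(f"~10^{math.floor(math.log10(bits))} bits")  # about 10^123 bits
```

Roughly 10^123 bits, which at least makes the "adequate storage" guess plausible.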

That’s point one. Point two is that chaotic systems sometimes seem to have a tendency to self-organize. What if all that data storage somehow self-organized into a super-intelligence? What if it organized tiny parts of itself into the matter that we see when we look at galaxies and stars? What if it organized itself into transient forms that generated their own form of limited consciousness and asked absurd questions like these?

Regardless of your opinion on any of the questions I’ve raised, I sincerely appreciate your taking the time to read this post, and I hope that it provided you with things to ponder. We are rushing into the next stage of our evolution, and we absolutely must begin to answer these types of questions. I believe that our future depends on it.

Namaste!

Eric

Check my blog for my free short stories relating to AI: “Virtual Love” and “The Adventure of Life”.

Some of this post owes its existence to Ezra Klein’s interview with Yuval Harari. The interview was just too thought-provoking for me to ignore. Thanks.