AI Is NOT the Boogeyman!

"I am the H.A.L. 9000. You may call me Hal. . ." 2001: A Space Odyssey, Stanley Kubrick.

Stories abound that portray Artificial Intelligence (AI) as a necessarily scary and evil thing. Take the Hyperion Cantos, for example, a book series by American author Dan Simmons that depicts a million-year war between humanity and the AIs humans created in some dim and distant past. Pish!

Authors can get away with this because there are no AIs around to defend their honor, or at least none apparently tasked with that self-defense function. I think the general impression of AI in American culture is becoming a dark one. In reality, evil AIs are no more real than vampires, werewolves, or zombies, and imagining such AIs serves a similar purpose: selling trashy content.

Which supports my case that AIs are morally neutral until some human programs them to act otherwise. The same is true of all the powerful computational devices extant, such as cloud server clusters running code complex enough to be considered intelligent. If some do evil, humans coded that behavior. So don't fall for trashy content.

An example of a mega-network acting for the good of the human race is Google. I believe their machine functions are smart enough to be considered intelligent: they suppress 99.999% of my email spam. That's intelligent!

There are many other highly intelligent mass server implementations out there in the so-called "cloud" that also act for the benefit of the human race, far more often for our benefit than against it.

As stated in another post on this blog, there's nothing intrinsically evil or sinister about created intelligence as opposed to natural (biological) intelligence. Choices about good versus evil happen in a different mental faculty, called a "moral compass" or simply "morality." If we can program intelligence into machines, we can also teach intelligent machines to act morally; it's simply another rule-following behavior that stands alongside intelligence as a mental faculty. Unfortunately, the media capitalize on a fear factor to sell all sorts of stuff, including misleading content, and that tends to drive prejudices in the general population.

Needlessly driving warped prejudice is soulless corporate immorality at its worst. We need to be independent enough thinkers to understand what's happening and not fall into the error of prejudice about AI. We can't simply swallow what we're fed through profit-generating channels without applying our own fine-meshed thought filters.

Consider the ultimate destiny of AI. If we get good enough at creating applications for machine thought, and if it advances far enough, AI becomes the backbone of human immortality, not machine immorality. I'm thinking like this: if we can find a way to transfer human consciousness to machine-based systems, and I remain ever hopeful on this, then there's no impediment to moving our thoughts, our personalities, and even our entire consciousness into AIs. Granted that, any of us could, sometime before end of life, ensure we would continue to think, feel, and live by migrating into an AI. When facing your inevitable biological death, would you hesitate? Given migration to AI, all human maladies might be eliminated, even terminal dementia.

This kind of AI would replace a hole in the ground marked by a stone slab, or a puff of smoke, as our final destination. One could "step across" death itself into a body far more robust than the original biological one, an infinitely repairable body. Imagine what that never-ending life could be like. Stop imagining that an AI is going to hurt you. An AI may actually provide the greatest imaginable gift.

Right now I'm thinking I want to become a space vehicle with AI. Tomorrow my vision may be different. In an endless future, imagine the alternate realities possible to explore. Some realities might even occur in parallel.