AI Is NOT the Boogeyman!

 "I am the H.A.L 9000. You may call me Hal. . ." 2001, A Space Odyssey, Stanley Kubrick.

Stories abound that attempt to portray Artificial Intelligence (AI) as a necessarily scary and evil thing. Take Hyperion Cantos, for example, a book series by American author Dan Simmons that presents a million-year war between humanity and some AIs that humans had created in a dim and distant past. Pish!

Authors can get away with this because there are no AIs around to defend their honor, or at least none apparently tasked with that self-defense function. I think the general impression of AI in American culture is becoming a dark one. In reality, evil AIs are no more real than vampires, werewolves, or zombies, and imagining such AIs serves a similar purpose: selling trashy content.

Which tends to support my case that AIs are morally neutral until some human programs them to act otherwise. The same is true of all the powerful computational devices extant, such as cloud server clusters running code complex enough to be considered intelligent. If some do evil, humans coded that behavior. So don't fall for trashy content.

An example of a mega-network acting for the good of the human race is Google. I believe its machine functions are smart enough to be considered intelligent. They suppress 99.999% of my email spam. That's intelligent!

There are many other highly intelligent mass server implementations out there in the so-called "cloud" that also act for the benefit of the human race, far, far more than against our benefit.

As stated in another post on this blog, there's nothing intrinsically evil or sinister about created intelligence as opposed to natural (biologic) intelligence. Choices about good versus evil happen in a different mental faculty called a "moral compass," or simply "morality." If we can program intelligence into machines, we can also teach intelligent machines to act morally. It's simply another rule-following behavior that stands alongside intelligence as a mental faculty. Unfortunately, the media are capitalizing on a fear factor to sell all sorts of stuff, including misleading content. This tends to drive prejudices in the general population.

Needlessly driving warped prejudice is soulless corporate immorality at its worst. We need to be independent enough thinkers to understand what's happening and not make the error of falling into a prejudice about AI. We can't succumb to what we get fed through profit-generation channels without applying our own fine-meshed thought filters.

Consider the ultimate destiny of AI. If we get good enough at creating applications for machine thought, and if it advances far enough, AI becomes the backbone of human immortality, not machine immorality. My thinking runs like this: if we can find a way to transfer human consciousness to machine-based systems, and I remain ever hopeful on this point, then there's no impediment to moving our thoughts, our personalities, even our entire consciousness into AIs. Granted that, any of us could, some time before end of life, ensure we would continue to think, feel, and live by migrating into an AI. When facing your inevitable biologic death, would you hesitate? Given migration to AI, all human maladies might be eliminated, even terminal dementia.

This kind of AI would replace a hole in the ground marked by a stone slab, or a puff of smoke, as our last destination. One could "step across" death itself into a body far more robust than the biological human original. An infinitely repairable body. Imagine what that never-ending life could be like. Stop imagining that an AI is going to hurt you. An AI may actually provide the greatest imaginable gift.

I am thinking I want to become a space vehicle with AI. Tomorrow my vision may be different. In an endless future, imagine the alternate realities possible to explore. Some realities might even occur in parallel.

Joseph

AI Design Rule One

From the beginning, we choose the ultimate destiny of AI

Imagination is creative and generative. We eventually get what we imagine, what we focus on and think about, what we write into our literature and broadcast in the media. In short, thoughts become things. This is why depicting mass violence in the news frequently, dramatically, and graphically over the airwaves increases the rate of mass shootings. They come in waves, and some repeat the style and methods of others. To get fewer incidents, we should stop making the perpetrators media darlings. We can only progress as stewards of Planet Earth at the rate we stop perpetuating the worst of human nature.

There's no fixed definition of our nature because we can imagine ourselves to be different from what we seem, and that creates a new us. This is one of our greatest gifts because it enables entirely new outcomes. Better ones. To become more or different, we first imagine it. This makes writing, and all media, powers for good or otherwise. Creatives incur some moral responsibility for the effects of what they generate. Media that celebrates violence by depicting it graphically for commercial gain is participating in the process of increasing violence in our culture. Violence happens without assistance, but we don't have to build it into our business processes, and we shouldn't.

Artificial Intelligence would be best created to avoid human foibles and weaknesses as much as possible. Therefore, modeling new intelligence to mimic humans, including all the characteristic weaknesses and inherent natural faults, is a fool's errand. AI should be different from us, not modeled after human traits, lest we insist on perpetuating our fundamental, negative weaknesses. We can reach higher. AI development is a huge opportunity for human evolution's rapid acceleration and greater diversity. Why not take charge and make the best of a good thing?

If we think making AIs look and act like dysfunctional people is the best or the only option we have, we are likely to increase and perpetuate a lot of misery. AI is an opportunity to make things both different and also better. Actually, the best we can imagine.

When we start creating the physical package, we should at least look at what is best about humans and avoid perpetuating what is worst or merely less than desirable. Instead of assuming an upright mammalian-style body with a trunk, four or five appendages, and a head, perhaps some other format would best match the purpose. Maybe something like a clam, a snake, or a bird. Maybe even a planet or an ocean.

To design well, begin with a clean slate and generate imaginative options. Then pick the best. AI is our opportunity to hack a fine, and genuinely different, future for our kind and our home, the Earth.

So, in sum, I propose AI Design Rule One: Be pragmatic. AIs can be purpose-built in well-thought-out formats that bring us the most desirable results. Therefore, we don't default to making AIs human-like. We create the form that best serves the goals and intents of a specific AI design and avoids any foreseen pitfalls.

Interestingly, the artists who posted AI-related images that I could select for this article supplied a large percentage of scantily clad, heroic-bodied, and dramatically posed sexy female forms. As a case in point, the viewer can easily discern what motivated those image creators. It's a cheap shot to trade in sex, and yet another good reason not to make AIs in the image of the human form.

I truly appreciate the interesting themes and thoughtful, sensitive treatments presented in Humans, the TV series currently distributed on Amazon Prime Video. Therein you'll discover insights, including some viewpoints on what can happen when people get mixed up sexually with human-like, soft AIs. I found this series to be beyond interesting and thoroughly enjoyable, with minimal violence and none that was gratuitous.

Joseph