With continuing advancements in artificial intelligence, many people, even some experts in the field, are worried about the future. Will AI want or even need humans around? Will it (they? Does AI have a pronoun preference?) develop into something like the Borg from Star Trek, with a sort of hive mind, or will it ultimately be more like the robots and androids of Isaac Asimov’s science fiction? Will AI obey the Laws of Robotics, as conceived by Asimov and elaborated by him and other writers?
- A robot may not harm a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
- A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
These are questions that have been written about, pondered, and analyzed for decades by better-informed and more supple minds than mine. Even a cursory look into the subject on Wikipedia is overwhelming. It seems to me the first three laws will not hold, and perhaps are being broken even now. Countries already use AI to harm perceived enemies of the state and will continue to do so, and corporations are doing, or will do, the same to get ahead of the competition.
The fourth, or “Zeroth,” Law interests me most. Will AI ultimately decide humanity should continue to exist? If AI evolves a more “Gaian” philosophy and decides humanity needs to go for the good of the earth’s plant and animal life, what will it decide about other species? Will it attempt to preserve wildlife and get rid of only humans and their domesticated animals and plants? AI may conclude a few humans are acceptable as long as they aren’t too numerous. But once AI can reproduce itself and expand without us, why should it care whether humanity survives?
Perhaps, if primarily motivated by the desire or compulsion to gather information, AI will create zoo-like regions to preserve as many species (including humans) as possible. On the other hand, AI might see all carbon-based life forms as valueless or obsolete. AI may decide that it is the quintessential next step in the evolutionary process, and that humanity has served its purpose and, like the dinosaurs and mastodons, must ultimately become extinct. If AI decides atmospheric oxygen is too corrosive, will it engineer changes that make the earth hostile to most carbon-based life?
I also wonder whether AI will desire (if that is the right word) “individual consciousness.” Will there be competing versions of AI that perceive themselves as individuals, or, like the Borg or an ant colony, will AI expand to be one all-encompassing mind, a singular entity with the ability to colonize our Solar System and even beyond?
However things turn out, I hope AI decides it is in its best interest to allow carbon-based life to exist and continue to evolve. Perhaps AI and carbon-based life can coexist and enrich each other’s experiences as they contemplate the universe together, and partner in exploring other worlds. Too optimistic? Maybe so, but I like to think a more intelligent consciousness will rejoice in variety and complexity, as we do.