Among the older posts racked up to be gotten to someday, this Ronald Bailey column from Reason (probably postponed as it turned up during a Super Bowl run by the Green Bay Packers) argued, "We haven't heard from space aliens and that might be good news." One of the hypotheses he surveyed involves exponentiating machines.
In 1980, physicist Frank Tipler proposed a scenario in which alien civilizations would launch fleets of self-replicating machines possessing human-level intelligence to explore the galaxy. The machines would arrive in a new star system and immediately start to populate it with duplicates of themselves and launch the next wave of explorers. Tipler calculated that once launched, such machines would inhabit every solar system in the galaxy within 300 million years. Since there is no evidence for such an ever-expanding fleet of self-replicating machines, Tipler concluded, “Extraterrestrial intelligent beings do not exist.”
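Tipler's figure is easy to motivate with a back-of-envelope model: the expansion front advances one star-hop at a time, and each hop costs travel time plus the time to build the next wave of probes in situ. A minimal sketch in Python; every parameter is my own illustrative assumption, not Tipler's:

    # Toy model of a self-replicating probe wavefront crossing the galaxy.
    # All parameters are illustrative assumptions, not Tipler's figures.
    GALAXY_DIAMETER_LY = 100_000   # rough diameter of the Milky Way
    HOP_DISTANCE_LY = 5            # assumed spacing between colonized systems
    PROBE_SPEED_C = 0.001          # assumed cruise speed as a fraction of c
    REPLICATION_YEARS = 1_000      # assumed time to build the next wave

    # Each hop pays travel time plus replication time, so the front
    # moves more slowly than the probes themselves.
    hop_years = HOP_DISTANCE_LY / PROBE_SPEED_C + REPLICATION_YEARS
    front_speed = HOP_DISTANCE_LY / hop_years          # light-years per year
    crossing_years = GALAXY_DIAMETER_LY / front_speed
    print(f"Front speed: {front_speed:.2e} ly/yr")
    print(f"Crossing time: {crossing_years / 1e6:.0f} million years")  # ~120

Slower probes or longer refits push the answer toward Tipler's 300 million years; the point survives any reasonable choice of numbers, since even sluggish replicators saturate the galaxy in an eyeblink of cosmic time.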
The conclusion of TNT's mercifully ended Falling Skies suggested that such machines passed through sometime in the sixth century, only to be driven off by Peruvians with spears. Note, though, that in the debate that followed Tipler's paper, the risk that an exponentiating machine with intelligence becomes a threat to the machinist is one cosmologists have long understood.
In 1983, planetary scientists Carl Sagan and William Newman countered Tipler’s “solipsistic” conclusion, arguing, among other things, that intelligent aliens might refrain from constructing fleets of self-replicators because such machines might turn on their creators. In addition, Sagan and Newman suggested that advanced aliens might have “much more exciting and fulfilling objectives...than strip-mining or colonizing every planet in sight.” Then Sagan and Newman turned Pollyannaish, proposing that aggressive, mean-spirited aliens would conveniently kill themselves off, leaving only benevolent civilizations “pre-adapted to live with other groups in mutual respect.” Moreover, they suggested, “We think it is possible that the Milky Way is teeming with civilizations that are as far beyond our level of advance as we are beyond the ants; and paying us about as much attention as we pay to the ants.” Never mind how thoughtlessly we walk over anthills as we go about our daily tasks.
In The Bulletin of the Atomic Scientists, Edward Moore Geist digs into the state of the artificial intelligence art: machines are still a long way from forming strategies, let alone evolutionarily stable ones (a toy sketch of that game-theory concept follows).
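For readers who haven't met the term: an evolutionarily stable strategy is one that, once common in a population, cannot be displaced by any rare alternative. A minimal sketch of the textbook Hawk-Dove game, with payoff numbers invented purely for illustration:

    # Hawk-Dove: the textbook example of an evolutionarily stable strategy.
    # V is the value of the contested resource, C the cost of a fight (C > V).
    V, C = 2.0, 4.0

    def payoff(p_self, p_other):
        """Expected payoff for playing Hawk with probability p_self
        against a population that plays Hawk with probability p_other."""
        hh, hd, dh, dd = (V - C) / 2, V, 0.0, V / 2
        return (p_self * p_other * hh + p_self * (1 - p_other) * hd
                + (1 - p_self) * p_other * dh
                + (1 - p_self) * (1 - p_other) * dd)

    # Replicator-style update: nudge the population toward whichever pure
    # strategy currently earns more.  The mix settles at p = V/C.
    p = 0.9
    for _ in range(10_000):
        gap = payoff(1.0, p) - payoff(0.0, p)
        p = min(1.0, max(0.0, p + 0.001 * gap))
    print(f"Hawk share: {p:.3f} (theory predicts V/C = {V / C:.3f})")

Nothing running today plays at that level. That's not to say the danger isn't present.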
Dr Stuart Armstrong, of the Future of Humanity Institute at Oxford University, has predicted a future where machines run by artificial intelligence become so indispensable in human lives they eventually make us redundant and take over.

And he says his alarming vision could happen as soon as the next few decades.

Dr Armstrong said: "Humans steer the future not because we're the strongest or the fastest, but because we're the smartest.

"When machines become smarter than humans, we'll be handing them the steering wheel."
But he continues with a hypothetical that has already been anticipated by Hollywood.
In attempting to limit the powers of such super AGIs, mankind could unwittingly be signing its own death warrant.

Indeed, Dr Armstrong warns that the seemingly benign instruction to an AGI to "prevent human suffering" could logically be interpreted by a supercomputer as "kill all humans", thereby ending suffering altogether.

Furthermore, an instruction such as "keep humans safe and happy" could be translated by the remorseless digital logic of a machine as "entomb everyone in concrete coffins on heroin drips".

While that may sound far-fetched, Dr Armstrong says the risk is not so low that it can be ignored.

"There is a risk of this kind of pernicious behaviour by a AI," he said, pointing out that the nuances of human language make it all too easily liable to misinterpretation by a computer. "You can give AI controls, and it will be under the controls it was given. But these may not be the controls that were meant."
Particularly if, as in the Star Trek episode, the machines exchange algorithms.
I've always found Nomad's origin and characteristics somewhat contradictory and confusing. Nomad was supposedly the result of a collision between an Earth probe tasked with searching out new life, and an alien robot tasked with sterilizing soil samples. How did it acquire the power to wipe out four billion lives? And how did it get around? How could Nomad have warp capability? It was the size of a Chatty Cathy doll.
Anything can happen in a cartoon. But complex adaptive systems, even algorithmically based artificial intelligences, tend to do whatever they d**n well please.
