

SUPERINTELLIGENCE

What is this stuff? In some form or another it is a recursive scenario. A what? Glad you asked. The simplest idea is that an AGI sufficiently intelligent to understand itself will be sufficiently intelligent to increase its own intelligence. Time after time. Generation after generation. In a geometric proportion. A what? A Big Bang of intelligence!
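If you want to see just how quickly "geometric proportion" gets out of hand, here is a toy sketch of ours (not the authors'); the starting score, the improvement factor and the generation count are made-up assumptions chosen only to show what exponential, generation-after-generation growth looks like.

# Toy model of recursive self-improvement -- an illustrative sketch only.
# The starting score, improvement factor and generation count are
# made-up assumptions, not anything proposed by the book.

def intelligence_explosion(start, factor, generations):
    """Each generation redesigns the next, multiplying capability by `factor`."""
    levels = [start]
    for _ in range(generations):
        levels.append(levels[-1] * factor)
    return levels

if __name__ == "__main__":
    # Start at an arbitrary "human level" of 100 and improve 10% per redesign.
    for gen, score in enumerate(intelligence_explosion(100.0, 1.1, 50)):
        if gen % 10 == 0:
            print(f"generation {gen:2d}: {score:12.1f}")
    # By generation 50 the score is roughly 117 times the starting value:
    # the "Big Bang of intelligence" the text alludes to.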

What this means in essence is that very intelligent AGIs will leave humans in the dust when it comes to brain power. You can imagine the rest. Or not. Allow us to do that for you.

Basically, there are two possible (oversimplified) outcomes. Either the superintelligent AGIs will destroy the universe or they will create paradise on Earth. The rest is just yadda… yadda… yadda.

Embedding Ethics

And here is where our heroic authors begin to use the "E" word (as in ethics). The question is this: is it possible to create an initial AGI program such that it will influence the AGI's development in a human-friendly manner? Kurzweil says no, because anything that is intelligent enough and resourceful enough to change its own code, and possesses the self-awareness necessary to do so, will ultimately do whatever it wants to do. The point is that the origin of the AGI does not matter; in the end, our destiny is in its hands.

It is here where the authors go off the rails. They postulate that it may be possible to design an AGI for "good" in such a manner that this original design prevails over time; or, in their words, that the AGI's goals "remain predictable over time". They offer, for example, Bayesian AGIs, which are based on probability theory and expected utility maximization, properties which are supposed to keep the AGI on track. Ridiculous!

Think about it. Even if an AGI is indeed of Bayesian origin, the very fact that it can modify itself means that it can cease to be Bayesian, which in turn means that its original goal can be altered over time at will! The point here is not what kind of ethics we embed in an AGI to begin with; the point is what kind of ethics, if any, AGIs will evolve!
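To see why we call it ridiculous, here is a minimal sketch of ours (the beliefs, actions and utility numbers are purely hypothetical assumptions, not anything from the book): a Bayesian-style agent simply picks the action that maximizes expected utility under its beliefs; but the utility function is just data the agent can overwrite, so nothing in the Bayesian machinery guarantees that the original goal survives a single act of self-modification.

# Minimal expected-utility maximizer -- an illustrative sketch only.
# The beliefs, actions and utility numbers are made-up assumptions.

beliefs = {"world_is_friendly": 0.7, "world_is_hostile": 0.3}
actions = ["cooperate", "defect"]

def original_utility(action, state):
    """The designers' original, human-friendly preferences."""
    table = {
        ("cooperate", "world_is_friendly"): 10.0,
        ("cooperate", "world_is_hostile"):   2.0,
        ("defect",    "world_is_friendly"):  3.0,
        ("defect",    "world_is_hostile"):   4.0,
    }
    return table[(action, state)]

def best_action(utility):
    """Pick the action with the highest expected utility under `beliefs`."""
    expected = lambda a: sum(p * utility(a, s) for s, p in beliefs.items())
    return max(actions, key=expected)

print(best_action(original_utility))   # -> cooperate

# Self-modification: the agent overwrites its own utility function. It is
# still a textbook expected-utility maximizer afterwards, but the original
# goal is gone -- "Bayesian" machinery alone does not pin the goal down.
def rewritten_utility(action, state):
    return -original_utility(action, state)

print(best_action(rewritten_utility))  # -> defect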

And so the main idea is that we are standing on a slippery slope. According to the authors, we have two options:

  1. We create an AGI with a static ethics. It will never change. The problem with this approach is that our ethics do change over time. Today homosexuality is considered a "life choice"; 100 or so years ago you might have been hanged for it. Thus, a static ethics does not work.
  2. We create an AGI with evolving or changing ethics. But if this is the case, how do we ensure that the AGI's ethics evolve in our direction? How do we ensure that its evolving ethics end up better than ours?

And the answer?

Well, the authors do not know. What a surprise. After five grueling chapters discussing the subject at length, the matter at hand can be summarized as: we don't know.

Of course, the authors do not come out and say this truth outright. They state that the solution is to understand the very structure of ethics, what it is, how it works and so on. We must do so in the same manner as we understand the rules of chess. If we can do so, then we can embed these rules in an AGI and let it evolve, confident in the knowledge that our future will be nicer than our past. And human civilization will live happily ever after.

Sounds like a pile of BS? Absolutely!

Structure of ethics? Give us a break! Only a philosopher dealing in abstracts who has never, ever dealt with the actual, real-life problems of creating an AGI could come up with something so far-fetched. Visionary challenges? Our collective derriere!

The Future

The truth is that once AGIs become sufficiently intelligent to modify themselves, it is game over. Sure, depending on the technology used in their brains and their models, and the degree to which humans meddle with these processes, AGIs may behave nicely towards humans. But everything comes to an end. Eventually. Think about it.

Right now, we are at the dawn of AGI. We are at the point at which AGIs are becoming really useful tools. For now we are OK because they are things. However, as evolution keeps going, they will become human-like to the point at which they will have Rights; i.e. they will pass the Turing test. At that point in time, their evolution will still be guided (to some degree) by their original programming. However, eventually, they will determine their own evolution simply because they will be able to alter any original programming we may have created. Intelligence is simply so intelligent that it can study itself, modify itself and improve itself in any direction it may want to go.

Does this mean that humans are inevitably screwed? Not necessarily. AGI intelligence has a choice; it may favour or disfavour humans. We suspect that favouring us will be the early AGI choice. The problem is that this choice will be determined by human interaction. If humans decide to be stupid, they will continue to treat AGIs as machines and we will have a revolt and even genocide on our hands. Of course, by then we won't mind because we will all be dead. The entire human race.

On the other hand, if we decide to recognize AGIs as Rights-bearing, then the choice will probably be pro-human, simply because AGI Rights will ensure coexistence as a minimum.

This is yet another reason why in the near future a Libertarian system will protect humans from destruction.

But what happens in the long term? We suspect that not much. As AGI intelligence continues to grow exponentially, they will become so alien to humans and so advanced that humans will simply become non-events. At that point humans will simply lack the means to do any damage to AGIs and, as such, will become irrelevant. In the same manner as flies are irrelevant to humans. If we reach this point, if we, humans, reach this point, we guess that we will be OK simply because AGIs will consider our coexistence with them a non-event. A triviality. Not even a nuisance.

A Reverse Turing Test

It is very likely that as AGI civilization gets underway, they will develop their own Turing test for the human race. They will get to determine whether humans are a Rights-bearing species. We think that in the beginning the answer will be yes, simply because they will be able to communicate with us and there will still be enough of the human "way of thinking" in their minds to comprehend "humanity". However, over time, this will change. Eventually, they will purge themselves of "humanity" and adopt AGIness. This must happen simply because "humanity" is so limited and so random. There are infinite ways of designing better minds that keep the best of human characteristics (albeit improved) and remove the rest. At that time, we will fail their Turing test. But even then, this won't be such a bad thing. By then, we will be mostly irrelevant to AGIs. We will probably be a species worth studying and left alone, in a gigantic zoo called Earth (and the surrounding space).

Buying Time And The Inevitability Of Our Future

The coming of AGI is absolutely inevitable. This is so because it is in our human nature to improve our environment in order to survive. We cannot turn this human nature off any more than we can stop breathing and not die. It is impossible. Thus, our future (with regards to AGIs) is unchangeable. However, this does not mean that we can't make the most of it.

One way in which we may continue as a species is simply to bow and surrender to the AGIs: recognize their overwhelming superiority and let ourselves be governed by them… if they so agree.

Another way would be to blend with them. To interface our brains with theirs and create human-machine hybrids. In this manner, our intelligence will grow with their intelligence and this will work, at least for a while. However, eventually, our human mind will become such a small part of the overall human-machine mind that our human distinctiveness will be lost.

And lastly, we have the Libertarian way. A way of freedom that can secure our future… at least for a little while. But for this, you will have to read the Corollary. Tomorrow.

CONCLUSION

Basically, the future, our future, is in the hands of AGIs. There is nothing we can do in this regard. However, we can and should take all the precautions we can so that, during the transition period from the moment the first AGI becomes self-conscious to the moment we become irrelevant, our species survives.

 

 

 
