
MINDS WITH EXOTIC PROPERTIES

Next, the authors proceed to explore the unusual properties that non-human minds may exhibit, beginning with the idea that it is theoretically possible to have AGIs with only some human characteristics.

The Human Analogue

And so, the next question that the authors pose is as follows: if we create an AGI that is sapient and exhibits human-like behaviour but is not sentient, what would its Moral Status be? We must remember that the authors required both Sentience and Sapience to exist in order for an entity to have Moral Status. Obviously, if we remove one of the requirements, the answer (according to the authors, and taking animals as an example) is that such an entity has very little Moral Status. Obviously.

However, in order not to make this question so easy, they throw in the modifier that an AGI must simultaneously be Sapient and "be a person" without being Sentient. Then the authors ask: what is the Moral Status of such an AGI? Aha. Uhu. Hohum…

Allow us to rephrase. So we have an AGI that is intelligent and behaves as a person, but it is not self-aware. Do you see the glaring contradiction in terms?

This is classic philosophical horse manure. An AGI cannot simultaneously behave as a person (who is self-aware) and not be self-aware, because if the AGI is not self-aware then it cannot behave as a person!

And so, how do the authors jump over and ignore this blatant contradiction in terms? Simple: by not defining what "being a person" means.

What they are posing is the following question: what Moral Status would a human analogue that behaves as a human have if it is not self-aware?

Obviously, they don't provide an answer, because this article is about "Morality" in the philosophical sense, not "Morality" in practical terms. Because obviously, we are not dealing with AGIs which we can design at will, giving them any characteristics we may choose… Oh… wait…

Again, and in contrast, take a look at our test. We say that an entity has rights if it is self-aware and capable of communicating this self-awareness to us. In order to make that judgement call, we use the Turing test. From this perspective, the semantic differences between not being Sentient but still exhibiting human behaviours become moot. The point is whether this AGI passes or fails our test. If it passes, then it has rights. If it fails, then it does not. We don't really care about the philosophical definition of what it means to be conscious of oneself, simply because such a definition is ultimately subjective! Precisely because of this, Turing devised a test which hinges on human judgement: he recognized that objective definitions do not exist and that it is therefore theoretically impossible to arrive at an objective decision.
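For the record, the whole test fits in a few lines. Here is a minimal sketch (entirely our own illustration; the `passes_turing_test` predicate is hypothetical and stands in for a human-judged test session):

```python
# Minimal sketch of our rights test (our own illustration).
# passes_turing_test is a hypothetical placeholder: in practice a human
# judge decides whether the entity communicates self-awareness convincingly.

def passes_turing_test(entity) -> bool:
    raise NotImplementedError("requires a human judge")

def has_rights(entity) -> bool:
    # The entire decision hinges on the test outcome; no objective
    # definition of consciousness is needed (or, per Turing, possible).
    return passes_turing_test(entity)
```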

Basically, spending time on such thoughts, although good for keeping philosophers employed in universities, is pointless in real life, where real AGIs are being developed.

Subjective Rate Of Time

The next question that they pose concerns what they define as the "Subjective Rate Of Time". And what is the Subjective… yadda… yadda… yadda (SROT)? Let's assume that a human could be "uploaded" into a machine. Essentially, the authors assume that there is a technological process allowing the extraction of brain patterns from a human brain running on "wetware" and their precise copying into hardware. As hardware has no comparable speed limitations (e.g. we can increase the clock of the CPU), a mind so uploaded would run faster than at "normal" speed, thus seeing the world develop in slow motion. The reverse is also true.

But before we proceed, let's make sure we don't confuse SROT with the Subjective Passage Of Time (SPOT), which is, well, subjective! SROT is real time accelerated or slowed down. SPOT is real time moving at a constant speed while our perception of it changes. We are interested in SROT, not SPOT. Understood?
Good!
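Just to pin the concept down, here is a minimal sketch (our own, assuming SROT is a simple linear clock-speed multiplier, which the authors do not specify):

```python
# Minimal sketch of SROT as a clock-speed multiplier (our own
# illustration; the linear scaling is an assumption, not the authors').

def subjective_seconds(objective_seconds: float, speedup: float) -> float:
    """Duration as experienced by a mind running `speedup` times faster
    than wetware; speedup > 1 means the world appears in slow motion."""
    return objective_seconds * speedup

# One objective hour for an upload running 1000x faster than wetware:
print(subjective_seconds(3600.0, 1000.0) / 3600.0)  # 1000.0 subjective hours
```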

Let's go to the questions.

Would an upload be the same person as the original, assuming that the upload process makes a perfect copy of our mental processes? The authors have no idea, but we do! The answer is Yes and No. The philosophical mind-versus-brain debate is basically over: a mind cannot exist without a physical substrate, ergo, if we make an exact copy on a different substrate, we get an exact copy of the mind. No mysteries here. In this case, then, the answer is Yes. We get the same person.

However, as time goes by and the copy is exposed to different situations than the original, and because it is equally intelligent and sentient, it will develop differently from the original. Thus, the answer is that, over time, it won't be the same person. Gee… that wasn't so difficult, was it?

What happens to "personal identity" if two copies are running in parallel? Well… nothing mysterious, to begin with! Even if both copies are exposed to the exact same stimuli (for example through identical virtual worlds), the physical nature of both "exact" copies introduces a certain degree of randomness. This means that, over time, both copies subjected to the exact same "life experiences" will become different, because they will react differently based on those random behaviours! Not that difficult either!
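A toy simulation (entirely our own; the noise level and amplification factor are arbitrary stand-ins for hardware nondeterminism) shows how quickly two "exact" copies drift apart:

```python
import random

# Toy simulation (our own illustration): two identical copies receive the
# same stimulus stream, but each substrate injects tiny independent noise,
# and small differences compound over time.

def run_copy(seed: int, stimuli, noise: float = 1e-9) -> float:
    rng = random.Random(seed)        # independent noise source per copy
    state = 0.0
    for s in stimuli:
        state += s + rng.uniform(-noise, noise)
        state *= 1.1                 # feedback: differences get amplified
    return state

stimuli = [1.0] * 50                 # identical "life experiences"
copy_a = run_copy(1, stimuli)
copy_b = run_copy(2, stimuli)
print(abs(copy_a - copy_b))          # nonzero: the copies have diverged
```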

Moving on… According to the authors, the variability of the SROT is an exotic property of AGI minds (agreed), one that raises "novel" ethical issues. Well… not a chance in heck!

For example, if a self-aware AGI kills a person and is sentenced, which time do we use? SROT or "our" time? Irrelevant! The error in this example is to take the current legal system and assume it is moral and correct. It is not. The purpose of a just action against a self-aware entity which interacted with somebody's property without prior agreement is to turn back the effects of the interaction to the point immediately prior to the interaction. In this case it would mean resuscitating the dead person. As this is obviously not possible, compensation is assessed. As such, the AGI must generate such compensation; the interesting point, however, is that compensation lies outside the realm of the AGI's time and within the realm of the victims' time (e.g. family, friends, enterprises, etc.). As such, it is irrelevant whether the SROT equals actual, human time or not. What matters is compensation. Thus, we ask: where is the ethical dilemma?

If a fast AGI and a human are in pain, whom do we treat first? The AGI, because its pain subjectively lasts longer, or the human? Again, this moral or ethical dilemma arises only because the authors assume that there is one. What we say is that there isn't! What matters are the AGI's and the human's contracts with the pain treater. Whatever the contracts say decides the priority! And what happens if there are no contracts? Then the treater gets to decide based on his/her own preference! The treater may even decide not to treat either one! This is not a "moral issue"; this is a contractual issue first and a personal, subjective issue second.
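The rule we just described is trivial to write down. A minimal sketch (our own; the contract representation and priority field are hypothetical):

```python
# Minimal sketch of contract-first triage (our own illustration; the
# contract priority is a hypothetical contract term).

def treat_first(patients, treater_preference):
    """patients: list of (name, contract_priority); priority is None if
    no contract with the treater exists."""
    contracted = [p for p in patients if p[1] is not None]
    if contracted:
        # Contracts decide: highest contractual priority is treated first.
        return max(contracted, key=lambda p: p[1])[0]
    # No contracts: the treater's own preference decides (which may even
    # be to treat no one at all).
    return treater_preference(patients)

patients = [("fast AGI", None), ("human", None)]
print(treat_first(patients, lambda ps: ps[1][0]))  # treater picks the human
```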

But are we simply ignoring the problem by passing it to somebody else? It would seem so. But it only seems so. The answer is a resounding no! The problem is not one of ethics, but one of people attempting to impose their ethics on everybody else. The problem originates in the ludicrous idea that a specific ethical definition can and should be universal. Think about it! The people proposing such a ludicrous idea are saying that a subjective, unprovable and, at best, flimsy concept is good enough to guide our very actions! Yes. The image of a clown springs to mind. We wonder why…

The authors, having screwed up to this point, continue the trend by stating yet another principle… which they merely formulate but "do not argue for" (their exact words). So much for placing their reputation where their mouth is! And what is this mysterious new principle? The Principle of Subjective Rate of Time, which, paraphrased, states that what's important is an experience's subjective duration.

In other words, and taking the previous example, it does not really matter for how long an AGI may be incarcerated; what matters is for how long the AGI feels that it has been incarcerated. Aha… Uhu… well… not! To be of any use, such a principle must be symmetric, that is, it must work for AGIs and humans alike. Obviously, this principle does not work for humans. We can already see the scene of a criminal attorney making a plea to the judge:

- Your Honour, although it is true that my client has been found guilty of murder and sentenced to 25 years' imprisonment, because his imprisonment felt as if it had lasted 25 years instead of the two that actually passed, then, by the Principle of Subjective Rate of Time, he should be released! And so Justice shall be served!

Does this make any sense? At all? Of course not! Not even in our ludicrous so-called Justice System.

So much for this "Principle".

Mid-Level Ethical Principles

Past this point of debacle, the authors deem it appropriate to bring to our attention what they call "mid-level ethical principles". That is, ethical principles (if such an animal actually exists) that are neither too broad nor quasi-meaningless. Stuff that will have some impact, but not too much, and how such human principles would have to be modified for AGIs.

In order to illustrate their point they use human reproduction. Humans have limited abilities to reproduce, and such reproduction is constrained by available resources and means. For example, we can't clone humans (yet), we can't gestate humans in-vitro (yet) and we can't alter humans' brains (yet). In contrast, we have AGIs, which will potentially be able to clone themselves, manufacture themselves in factories and alter their own software. As you can see, the differences are staggering! Because in the future humans and AGIs will be able to do the exact same!! Oh… wait… never mind…

But now, having made this point… don't look closely… keep moving… keep moving… The authors introduce "reproductive freedom". Their point is that humans are slow to reproduce while AGIs could do so in an exponential fashion… which humans have been doing since they first began to reproduce! Yes, Timmy, biological entities do reproduce exponentially! All species do so. The only thing that keeps species in check is… other species! And so this mid-level ethical dilemma is that AGIs can reproduce far faster than humans. Aha… uhu… and the dilemma is? Helloooooooo… may we have our dilemma please? Still waiting… Hellooooooo… anybody there? Hellooooo…
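For the record, here is what exponential reproduction and its external check look like (a toy model of our own; every number in it is arbitrary):

```python
# Toy model (our own illustration; all numbers arbitrary): unchecked
# exponential reproduction versus growth capped by external constraints
# (resources, rival species, other AGIs).

def population(generations, rate=2.0, cap=None):
    pop = 1.0
    for _ in range(generations):
        pop *= rate                  # every entity reproduces each generation
        if cap is not None:
            pop = min(pop, cap)      # the external check bites
    return pop

print(population(20))                # unchecked: 1048576.0
print(population(20, cap=1000.0))    # checked: pinned at the ceiling, 1000.0
```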

But then the authors introduce the dreaded "societal duty" to provide for the dispossessed. Or the less wealthy. Or the middle class. Or something… if our current "social laws" are any guidance… but we digress. The point they make is that if AGIs could reproduce exponentially and very, very fast, our society would simply run out of resources to ensure their wellbeing. At this point the AGIs would either die or be turned off. The horror!!! Again, the so-called "ethical problem" arises because of the authors' preconceptions and errors. To begin with, the very concept of society is as meaningless as the concept of class. There is no such thing as a society. Secondly, even if societies actually existed, they wouldn't have the duty to do diddly squat for other "members of society". This is so because a contract between the members of a given society simply does not exist. For example, do you have an actual contract to help your neighbour? No? There. Point made.

Ahh… but the clever reader would point to the fact that there is a "Social Contract", right? And because of that, society has a "duty", right? Well… no. We won't get into long-winded explanations on this topic here. But trust us. Or not, and then just check Social Contracts Are A Scam.

Basically, if AGIs decide to reproduce exponentially and rapidly, they will find themselves lacking the resources to do so in a Libertarian society, because other AGIs and people won't have any obligation to help them. Then AGIs (as Rights holders) will have to make a determination whether or not to reproduce in that fashion. They can do as they please for as long as they don't interact with our property without our prior agreement.

CONCLUSION

The authors then conclude that we must not mistake mid-level ethical principles for foundational normative truths. In other words, when it comes to AGIs we can't dismiss context when applying ethics, particularly when dealing with minds with exotic properties. Which is nonsensical. The issue is simple. If an AGI fails our test, it has no property-based rights; it is then a thing, and we, humans, get to decide what to do with it. On the other hand, if an AGI has property-based rights, then the same rules as for humans apply! Context does not come into play whatsoever, no matter how exotic AGI minds may be. At least not within a Libertarian system.
