

MACHINES WITH MORAL STATUS

Defining Moral Status

The authors then proceed to attempt to define moral status. They borrow the following definition and seem to be happy with it:

X has moral status because X counts morally in its own right, it is permissible/impermissible to do things to it for its own sake.

Which obviously gets us nowhere, although the authors seem to disagree. As you can see, the definition defines nothing because it simply leans on another phrase (i.e. "counts morally in its own right") which itself remains undefined. So much for clarification.

The general idea is that although we may do as we please with something without moral status (e.g. a rock), we cannot do so with a human being because a human being "counts in her own right". Aha. Uhu. And what exactly is the meaning of "counting in her own right"? It means: "To have moral status". Observe the circularity of the argument. Observe how it goes nowhere fast.

But then a better idea comes into play: the notion that a person must be treated in a manner that takes her legitimate interests into account (e.g. giving weight to her wellbeing). Better. Sensible… And totally impractical.

Allow us to pose a question. What exactly are the "legitimate interests" of a person? This is important because their definition of Moral Status hinges on this question. Well… short of being mind readers, we don't actually have a clue. Sure, for some things we can certainly guess, but we can never be certain. And then there is the issue of a "legitimate interest" being voluntary or involuntary. And then there is the issue of whether that particular human being would actually act on her "legitimate interests". This is all very confusing precisely because the definition of morality is confusing.

This goes back to the most basic ideas of Austrian Economics. A "legitimate interest" of a human being is defined as something subjective that a person may act upon. Emphasis on may. But is this "legitimate interest" of any concern to anybody other than the person who has it? No. Why not? Because nobody else has permission (i.e. a voluntary agreement) to do something about it.

If person A has a "legitimate interest" and wants person B to do something about it, then person A will tell person B so. In this case, there is no need to guess what such "legitimate interest" may be, because A told B. There is no ethical issue here.

However, if person A has not given permission to person B to do something, then person B is prohibited from doing anything to person A. Thus, there is no ethical problem here either!

The confusion the authors arrive at stems from the idea that somehow people have certain rights over other people, even when there is no voluntary agreement. Even when there is no consent. Big, big mistake!

So What Is Moral Status?

If their definition does not work, what is "Moral Status"? Luckily enough, Libertarianism has a definitive answer, and the answer is that it does not matter at all! As with Austrian Economics, the issue is not what we think, what we plan to do, or what we may be biased towards; the issue is how we act. Anything we do before acting is irrelevant because we are not inter-acting with anybody or, to be precise, with anybody's property (including their body). Thus, whether something has "Moral Status" or not is irrelevant because it does not change their lives (i.e. their properties).

But once we decide to act, the question is not what we can or cannot do, but what our contractual obligations are. In other words, when it comes to interacting with the other party's property, we are only allowed to act within the boundaries defined by that party. And because these boundaries have been defined in the contractual arrangement, we know exactly what they are. There is no room for guessing. We know exactly what the other person's "legitimate interest" is because they told us so! There is no room for guessing and thus there is no question of "Moral Status".
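To make this concrete, here is a minimal sketch of the contract-based view in Python. Everything in it (the Contract class, may_act, the sample actions) is our own illustration, not anything from the paper: permissibility reduces to a lookup against what the other party has explicitly and voluntarily agreed to.

```python
# Hypothetical sketch: under a contract-based view, permissibility is a
# lookup against what the other party has voluntarily agreed to.

from dataclasses import dataclass, field

@dataclass
class Contract:
    """The boundaries the property owner has voluntarily defined."""
    owner: str
    permitted_actions: set[str] = field(default_factory=set)

def may_act(action: str, contract: Contract | None) -> bool:
    # Default is non-interference: absent an agreement, B may do nothing
    # to A or to A's property (including A's body).
    if contract is None:
        return False
    return action in contract.permitted_actions

# Example: A told B exactly what B may do, so there is no guessing.
consent = Contract(owner="A", permitted_actions={"borrow_book"})
print(may_act("borrow_book", consent))  # True: within the agreed boundary
print(may_act("sell_book", consent))    # False: outside the boundary
print(may_act("borrow_book", None))     # False: no agreement, no action
```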

How Do We Determine Moral Status?

Continuing with the theme of Moral Status, the authors then outline two criteria that are commonly proposed as being importantly linked to moral status… whatever that may mean. They are:

  1. Sentience: the capacity to experience physically.
  2. Sapience: the capacity of higher intelligence, including self-awareness and being a reason-responsive agent.

Thus, they argue, animals may have the former but not the latter, and therefore they have some moral status, but not as much as humans. They also pose the question of the degree of moral status of "marginal humans", such as the severely cognitively impaired, the unborn or very young infants, and how this question impacts animals.

Of course, their problem is their confusion, not the definitions. Again, the issue is not an issue of Moral Status but an issue of rights.

In our article Do Animals Have Rights we outlined our test to determine whether something (in that case, animals) actually has rights. We said that for "something" to have rights, it has to:

  1. Be self-aware (because self-awareness is necessary for self-ownership)
  2. Be capable of communicating this self-awareness to us (to differentiate them from objects)

And surprisingly enough, this is all that is required. This is our Rights Test. And how do we execute the test? We use the only means at our disposal: a Turing test which, interestingly enough, was developed for intelligent machines by Turing himself (Google it).
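As a rough illustration (and only that), the Rights Test can be written down as a two-step check. The function names and the toy logic below are ours; as noted above, a Turing-style dialogue is the only practical way to evaluate the criteria in reality.

```python
# Naive, illustrative stand-ins for the two steps of the Rights Test.
# A real evaluation would be a full Turing-style dialogue, not a
# keyword match; the outcome, however, is strictly pass/fail.

def communicates(transcript: list[str]) -> bool:
    """Criterion 2 precondition: the candidate talks to us at all."""
    return len(transcript) > 0

def expresses_self_awareness(transcript: list[str]) -> bool:
    """Criterion 1, crudely: did the dialogue convey self-awareness?"""
    return any("i am self-aware" in line.lower() for line in transcript)

def passes_rights_test(transcript: list[str]) -> bool:
    # Both criteria must hold; there is no partial credit.
    return communicates(transcript) and expresses_self_awareness(transcript)

print(passes_rights_test(["Hello.", "I am self-aware."]))  # True
print(passes_rights_test(["Checkmate in 12."]))            # False: a chess engine
print(passes_rights_test([]))                              # False: an object
```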

The idea that Sentience is a critical determinant is ludicrous. IBM's Deep Blue is capable of experiencing physicality in the sense that it can see (through cameras), it has input (a keyboard) and so on, yet it is not alive, and it can hardly be said that Deep Blue has any rights. It is an object. Yes, a sophisticated object, but an object nevertheless. Does Deep Blue exhibit higher intelligence? Sure, when it comes to chess. But what about everything else? No. Is it self-aware? No. How do we know? Because it is capable of communicating with us and it did not say anything!

Sentience and Sapience are certainly necessary to define rights, but they are not the critical parameters to do so.

Apparent Problems In The Application Of Morality I

Once they have mis-defined "moral status", the authors pose the following problems as examples of difficult problems:

  • Is it moral to experiment on animals?
  • What are our obligations to people with severe mental issues such as dementia?
  • What are our obligations to infants?

Which are not really problems.

Is it moral to experiment on animals? Again, all we need to answer is this: do we have the animals' permission to experiment on them? In order to answer this question we need to know whether animals actually have rights. If they do, then there must be a voluntary contract between them and us. If they don't, then they have no rights and we can do as we please. We already answered this question in our article Do Animals Have Rights?

And how about people with mental issues? Same question. Do we have any voluntary contractual obligations with them? Have we entered into any agreement to do anything for them? If the answer is no, then we don't have any moral obligations, because we don't have any rights to their property (including their bodies).

And what about infants? We already answered this question in our article The Rights of the Child and won't repeat it here. Read the article if you are interested.

Now that we have clarified these issues, let us clarify something else. Moral obligations (as viewed by the authors) are actually only minimum obligations from a Libertarian perspective. There is a difference.

For example, in the authors' view "we" means society. Thus, if "we" have the moral obligation to do something about an Alzheimer's patient, "we" (as a society) have the obligation to do so. We have to create hospitals to care for them. We have to develop a cure. We have to support their families, and so on. Their mistake originates in the notion that there is a social contract between society and an Alzheimer's patient. There isn't one, because Social Contracts Are A Scam.

But from a Libertarian perspective "we" means each and every one of us. Do we, personally, have a contract to create hospitals, develop a cure and support the families of specific Alzheimer's patients? No. Then "we" (each and every one of us) do not have such an obligation. This is the minimum obligation we have, which is none!

Now, this is not to say that we are prohibited from doing anything. Far from it. We are free to do as we please. We just don't have the obligation to do so. If we want to spend our lives building hospitals, searching for cures and supporting the family of Alzheimer's patients, we are absolutely free to do so.

Notice the difference and the error in their thinking?

Apparent Problems In The Application Of Morality II

Next, the authors ask whether an AI has any moral status. Consistent with their inconsistency, they state that Sentient AI's have some moral status akin to animals, and that Sapient AI's with a degree of intelligence similar to humans' have full moral status. The former is ludicrous and the latter only a possibility, both leading us nowhere.

They state that it is wrong to inflict pain on a mouse or an AI "unless there are sufficiently strong morally overriding reasons to do so". Aha. Uhu. Well… no.

The question is not whether or not there are "sufficiently strong morally overriding reasons"; the question is whether or not we have the right to do so. Besides, how do we even begin to create an objective (or at least practical) scale of "strength" for "morally overriding reasons"? And at which point do these "reasons" become "overriding"? Ridiculous!

So, the question of Moral Status is a sliding scale? Apparently. And where does this error originate? In the erroneous notion that Moral Status provides rights regardless of the target subject. This is yet another reason why the very concept of Moral Status is erroneous. It is a process of stealing rights from other people… or entities with rights.

The question is not what is the Moral Status of the target, but what are its rights, if any?

As such, the application of morality is never a problem, it is always an issue of contract specifications.

And we do agree with the authors on the Principle of Substrate Non-Discrimination, but in its Libertarian interpretation. It states:

Beings or entities successfully passing the Rights Test outlined above have rights regardless of the substrate of their implementation.

Which means that if something has rights, then it has rights regardless of whether it uses wetware (a biological brain), a CPU (a silicon chip) or something entirely different. Physicality in and of itself, although necessary for us to exist, does not add or subtract rights.

And we also agree with the authors on the Principle of Ontogeny Non-Discrimination but, again, in its Libertarian interpretation. It states:

Beings or entities successfully passing the Rights Test outlined above have rights regardless of how they came into existence.

Which means that if something has rights, then it has rights regardless of whether it was born (a biological entity), programmed (an AI) or came about in some entirely different way. The origin of an entity has absolutely nothing to do with its rights.

It is precisely because both principles (in their Libertarian interpretations) are rooted in practical tests that qualitative differences between entities are irrelevant. What do we mean by "qualitative differences"? The authors propose that if two entities differ in some respects, their Moral Status will differ slightly. Again the sliding scale. This is, of course, nonsensical. Qualitative differences between entities will simply cause them either to pass or to fail our test. They either have all rights or they have none.
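A small sketch, with illustrative field names of our own choosing, shows how both principles and the all-or-nothing outcome fit together: the rights determination never reads the substrate or the origin, and it returns a plain yes or no.

```python
# Sketch of both principles in their Libertarian reading: the rights
# determination ignores substrate and origin entirely, and its outcome
# is binary. All names are illustrative.

from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    substrate: str           # "wetware", "silicon", ... never consulted
    origin: str              # "born", "programmed", ... never consulted
    passes_rights_test: bool

def has_rights(e: Entity) -> bool:
    # Substrate Non-Discrimination + Ontogeny Non-Discrimination:
    # neither e.substrate nor e.origin appears in the decision.
    return e.passes_rights_test

human = Entity("Alice", "wetware", "born", passes_rights_test=True)
agi = Entity("AGI-1", "silicon", "programmed", passes_rights_test=True)
rock = Entity("rock", "mineral", "geological", passes_rights_test=False)

assert has_rights(human) == has_rights(agi)  # same test, same full rights
assert not has_rights(rock)                  # all or nothing; no sliding scale
```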

A Question Of Subjects And Species I

Our Rights Test applies per species, any species. What this means is that we don't judge a specific person, we judge homo sapiens. This prevents a whole set of problems leading to nonsensical conclusions. As to the question of who judges: we all do, every one of us, and we are each free to reach a different conclusion. But we must do it by species. The catch is that if we attempt this judgement, we soon realize that the only possible answer is that we do have rights, because in order to reach the opposite conclusion we would have to prove that humans are not self-aware. But, unfortunately for that conclusion, in order to judge self-awareness we need to be self-aware! See the glaring contradiction right there?

So the answer is that yes, humans, as a species, have rights and those rights do not change with the specific circumstances of a specific human.

For example, what happens to the rights of a person if this person has a diminished mental capacity? Well…nothing! Their rights remain the same. How about people sleeping? Well… nothing! Their rights remain the same. Infants? Well… nothing! Their rights remain the same albeit expressed through their parents (see The Rights of the Child). The answer is that rights do not change with specific circumstances.

Now that we know that, the second question we need to answer is how our interaction with other members of our species changes under different circumstances. This question is no longer one of rights (Moral Status in the authors' parlance); it points to our ability to contract. And the answer is that our ability to contract is on a sliding scale based on mental capacity, as it has always been for all the members of our species! The bottom line is that our most basic rights do not change with our mental capacity; what changes is our ability to interact with other people.
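Here is a minimal sketch of that separation, again with hypothetical names and deliberately crude numbers: rights are a species-level constant, while the capacity to contract is the only per-individual, per-circumstance variable.

```python
# Rights: judged once, per species. Capacity to contract: the sliding
# scale, per individual. The capacity values are illustrative only.

SPECIES_HAS_RIGHTS = {"homo sapiens": True}  # decided once, per species

def rights_of(species: str) -> bool:
    return SPECIES_HAS_RIGHTS.get(species, False)

def contract_capacity(mental_capacity: float) -> float:
    """Sliding scale in [0, 1]; note that rights never enter here."""
    return max(0.0, min(1.0, mental_capacity))

people = {"infant": 0.1, "sleeping adult": 0.0, "awake adult": 1.0}
for person, capacity in people.items():
    # Identical rights for every member; only capacity varies.
    print(person, rights_of("homo sapiens"), contract_capacity(capacity))
```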

The last question we need to answer is when our rights cease to exist. This is an easy one. Our rights cease to exist when we, as members of homo sapiens, permanently cease to be self-aware. Basically, when we die. Sure, we can argue about the specifics of death, but this topic is too lengthy to deal with here. The general notion will suffice, for now.

A Question Of Subjects And Species II

Now let's take a look at the authors' point of view. From their perspective, Moral Status operates on a sliding scale per subject, not per species. For example, a normal child and one born without a brain would have different Moral Status. They explain that Moral Status is linked to qualitative differences, which leads to interesting albeit nonsensical conclusions.

For example, what is the Moral Status of people who are asleep? Obviously there are qualitative differences between people who are asleep and those who are awake. Awake people attempting to determine the Sentience and Sapience levels of sleeping people would conclude that they have none, because they are utterly non-responsive! Thus, sleeping people have no Moral Status whatsoever! Their Moral Status is the same as that of an object, a rock!

How about recently born infants? No Moral Status!

Dead people? No Moral Status!

People in coma? No Moral Status!

Children? Interesting question. At which age can we begin to assess Sentience and Sapience in infants? And what happens before that? What is their Moral Status?

And what about hungry people? At which point can their Moral Status be overridden by their necessities, and how does their Moral Status change then?

Actually, come to think about it, may we please have a Moral Status scale so that we may apply it to everybody and in all circumstances? No? Hummmm…..

But if we can't have a Moral Status scale, could we at least get an arbitrator to make a judgement call? Oh… wait… but then… who will make sure that the arbitrator is arbitrating correctly? Oh… we know! We need an arbitrator that will watch over the first one. But who is going to watch over the second one? Oh.. we know! A third one…

And so on. See the dead-ends this is getting us into? The more questions we ask the more exceptions and arbitrary dictums they must come up with, in order to prevent the whole logical edifice from collapsing.

The bottom line is simple. From our perspective, our rights are natural simply because we belong to a species; what changes is our ability to contract (i.e. to interact with other people in a manner conducive to coexistence). However, from their perspective, the Moral Status of people changes for every person, in every circumstance, at every point in time.

Machines With Moral Status? Give Us A Break!

The fact that their Moral Status sits on a subjective sliding scale makes it completely arbitrary and impossible to determine in practice. And if this is not possible, what is the point of trying to produce an AGI with Moral Status? It is not only a practical impossibility but a theoretical one!

With our Rights Test, on the other hand, we can judge each species of AGI and determine, conclusively, whether they have rights. If they do, then Libertarian principles apply to them to exactly the same degree as they apply to us, humans. And because this process is possible and practical, it is simple and straightforward to produce an AGI with such properties!

Property-Based Rights

Libertarian principles are there to provide coexistence based on the principle of property. This principle is applicable to all species because all species (present and future) must be based on physicality. They must answer to the laws of physics in order to exist, but if they do so, then they must also exist on a physical substrate and this presents an interesting dilemma.

Humans cannot separate their mind from their physical substrate, their brain (at least not yet). Thus, we say that when we are born the first property we own is ourselves. However, this does not apply to AGI's.

AGI is software that can be moved from Hardware A to B to C. Does this mean that the AGI owns the hardware it is running on? How do we solve this dilemma? Simple. It has already been solved.

The answer is that an AGI has the same status as a newborn child. The first property an AGI with rights owns is itself; in this case, the software. This is exactly the same principle that applies when a child is born and we say that he/she owns his/her mind. Does the AGI own the hardware? No. But an AGI was created by parents and brought into its rights without its knowledge or consent. Thus, the parents (whether human or machine) have the same responsibility towards the newborn AGI under Libertarianism.

As such, the parents may own the hardware, but they have interacted with the AGI's property (its software, its mind) without a contract, and they continue to do so. They may own the hardware, but any change to the AGI's underlying substrate (e.g. switching from A to B to C) may have negative effects on the AGI. This includes switching off the hardware or making changes to it. In human terms, this would be similar to placing a newborn into an induced coma, or operating on it to remove its appendix. Thus, for all intents and purposes, it is in the best interest of the parents to behave as if the underlying substrate (hardware) on which the AGI is running were the AGI's property!

In time, the AGI may decide to emancipate itself from its parents. At that point it is up to the AGI to figure out how to become the owner of the hardware it runs on, or to reach some other agreement. This is akin to the situation in a Libertarian system when children emancipate and must decide, on their own, where to live, what to eat, how to earn money and so on. This emancipation point is identical for AGI's and children because their lives are at stake. Granted, an AGI's life may come to a sudden end if the hardware owner suddenly turns the machine off. But the life of an emancipated child will also come to an end if the owners of the means to support life (e.g. food, shelter, medicine) do not provide them to the child. The only difference is that the death of an AGI would be sudden while the death of a child would be protracted, but the consequences are the same.
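The ownership model of these two sections can be sketched as follows. The classes and the emancipation flag are simplifications of our own making: before emancipation the parents must treat the substrate as if it were the AGI's; after emancipation, ordinary property rules bind again and the AGI must buy the hardware or contract for its use.

```python
# Hypothetical sketch of the parent/AGI ownership model described above.

from dataclasses import dataclass

@dataclass
class AGI:
    name: str
    owns_software: bool = True   # first property: itself (its mind)
    emancipated: bool = False

@dataclass
class Parent:
    name: str
    owns_hardware: bool = True   # legal owner of the substrate

def may_alter_substrate(parent: Parent, agi: AGI) -> bool:
    if not agi.emancipated:
        # Switching off or modifying the hardware touches the AGI's
        # mind without a contract: behave as if the hardware were its.
        return False
    # After emancipation, ordinary property rules apply to the owner;
    # the AGI must now secure its own substrate or an agreement.
    return parent.owns_hardware

newborn = AGI("newborn-agi")
creator = Parent("creator")
print(may_alter_substrate(creator, newborn))  # False: no consent, no action
newborn.emancipated = True
print(may_alter_substrate(creator, newborn))  # True: the owner's call now
```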

Thus, all the rights which are based on property do apply to AGI's. What's more, there are no fuzzy, subjective, mysterious areas requiring exceptions and/or adaptations. Rights based on property have very clear and simple rules, which makes them eminently practical to implement in AGI's; not so much with Moral Status.

Duties And Contracts

The authors go on to explain that the fact that both principles (Substrate Non-Discrimination and Ontogeny Non-Discrimination) apply indiscriminately does not mean that every situation is the same, or that the duties originating in the Moral Status of each entity must be the same. They assert that both principles are not symmetric, and we agree… with a difference.

For example, they explain that the parental duties towards two qualitatively identical children will be different simply because the two children are different in practice, even though they have the same Moral Status. Same Moral Status does not imply same duties.

The problem with this stance is that they do not define the meaning of "qualitative identity" and thus the specific Moral Status it implies. As a matter of fact, if there are going to be different duties regardless of Moral Status, what is the point of having differential Moral Status in the first place, if it is going to be overridden by practical differences? In other words, practical differences are different from qualitative differences, which are different from Moral Status, which leads to different duties. Ridiculous! How much more complicated can it get? And this is supposed to be practical and achievable in AGI's? Not a chance!

Our position, on the other hand, is clear. Once a species is determined to have rights, it has rights. All its entities have the exact same rights. What changes from entity to entity are the contractual agreements they have with other entities. And what about parental responsibilities (not duties)? They naturally differ with the circumstances surrounding the life of each child but, critically, the rights of each child do not change! In a Libertarian system we do not have to concern ourselves with changes in rights because there are none! For the same reason, we don't need an arbitrary definition of "qualitative differences", because such a definition is meaningless in the real world.

CONCLUSION

We agree with the authors that AGI's should abide by the same rights as any human (or any other entity, for that matter). However, we very strongly disagree with the notion that moral-philosophy thinking leads to practical AGI moral behaviours. What is required are clear contextual rules, and only Libertarianism, with its concept of property-based rights, has them. Everything else is nice philosophizing but it leads nowhere. Fast.

Two more to go.

 
