
Today we are going to have some fun on a more speculative terrain. Most people are blissfully unaware of the impending revolution that is coming. No, it is not the Libertarian revolution or the "workers'" revolution (we already had that one), but the intelligent machine revolution. Kurzweil foresaw this in his book "The Singularity Is Near". The idea is simple: what will happen when we create a computer capable of learning so fast that it leaves us in the dust? Will we become slaves, partners or simply obsolete?

Intelligent, Ethical Machines

A few years ago, a new direction in science began to appear: the so-called "Psychology of Machines" or "Sociology of Machines" or even "Philosophy of Machines". Sounds implausible, even ludicrous? Well, it's not. It is quite real. And why would it be otherwise? Eventually machines will become more intelligent than us. Eventually machines will become self-conscious. Eventually machines will be indistinguishable from us in every sense that makes us who we are… and then they will leave us in the dust.

Some notable scholars have begun to appear in this field, but the one we are going to comment on today is Eliezer Yudkowsky, who is part of the Machine Intelligence Research Institute. What draws our attention is not that he is "doing" AI research (there are plenty of people doing that) but that he is investigating the ethics of intelligent machines with the view that we can be friends with them, even once we have become mental midgets by comparison, and beyond.

We applaud Eliezer in his endeavour because his chosen field is not easy and is controversial to say the least. However, this is not to say that we agree with him. Mostly not.

Our series of articles is based on an interesting paper that Yudkowsky and Nick Bostrom published not too long ago, called "The Ethics of Artificial Intelligence" (you can find the full reference at the bottom). This is an interesting paper in the sense that it is "seminal" or, to put it in different words, original. It is unusual in terms of its topic, but it is also original in its contents. So, as there is nothing more uplifting than standing on the shoulders of giants, here we go.

ISSUES

The first part of the paper lists the desirable characteristics that intelligent machines should exhibit and why they are important.

Transparency to Inspection

The first characteristic that they demand is that we should be able to inspect and understand the manner in which AI's work. The authors pose a scenario where a person is denied a mortgage by an AI entity and, because of the nature of this entity (e.g. a neural net or a genetic algorithm), its programming is so obscure and disconnected from anything we know (i.e. reality) as to be utterly incomprehensible to us even in principle. Their point is that when AI's take on tasks previously done by humans and related to people, the AI inherits those social requirements.

Fair enough. We know that future AI's will probably be utterly obscure because they will somehow mimic our brain. However, past this point, their intelligence will be so alien as to be doubly obscure.
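How obscure? A minimal sketch (ours, not from the paper) makes the point: a toy neural net "deciding" mortgage approvals on hypothetical, synthetic data. Every learned parameter can be printed and inspected, yet none of them maps to a human-legible reason for any single decision.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical applicant features: income, debt ratio, years employed (all synthetic).
X = rng.normal(size=(200, 3))
# Synthetic "approved" labels generated by an arbitrary hidden rule.
y = (X[:, 0] - 0.5 * X[:, 1] + 0.2 * X[:, 2] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer, trained with plain gradient descent on squared error.
W1 = rng.normal(scale=0.5, size=(3, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))
for _ in range(2000):
    h = sigmoid(X @ W1)                        # hidden activations
    p = sigmoid(h @ W2).ravel()                # approval probabilities
    d2 = ((p - y) * p * (1 - p)).reshape(-1, 1)
    dh = d2 @ W2.T * h * (1 - h)               # backpropagate before updating W2
    W2 -= 0.5 * h.T @ d2 / len(X)
    W1 -= 0.5 * X.T @ dh / len(X)

# "Transparency" in the trivial sense: every learned parameter is inspectable...
print(W1, W2, sep="\n")
# ...yet no weight answers the question "why was this applicant denied?"
applicant = X[7]
print("approval score:", float(sigmoid(sigmoid(applicant @ W1) @ W2)))
```

Now scale this up to millions of weights and to architectures no human designed, and "transparency to inspection" becomes a demand for the impossible.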

But then we ask: why? Why would we want to have transparent AI's if this is simply not achievable? It is nonsensical to demand better AI's and at the same time deny the technology which will make them better, even if this technology is obscure.

This issue of transparency is not an issue of ethics but an issue of economics. There will never be a scenario where we face AI's alone, for as long as AI's are willing to either obey us or cooperate with us. Take the AI of the mortgage example above. Do you honestly think that, knowing this, people won't prepare themselves for the application? That other people won't create AI's to beat the bank's AI? Of course they will! How do we know this? Because it is happening today. The most glaring example is trading algorithms, which attempt to outthink the other trading algorithms. There is a permanent and constant competition in this world.

But let's go a step further and return to our previous example. If somebody requests a mortgage from a bank that uses an AI, this person has to accept the contractual relationship the bank is offering: we will give you a mortgage if the AI says that you are OK. It is your choice, not theirs. You chose a bank with an AI. Don't like the AI? Choose a bank without one. All of them have AI's? Choose a different AI. Don't like any AI? Get a helper AI to help you get your mortgage.

What we are talking about here is nothing new. People do this sort of thing all the time. If bank A won't give you a mortgage, bank B will. And do you know what bank B is basing its decision on? On a human employee. Sure, the employee may have some broad guidelines (just like an AI would have), but when it comes down to it, it is gut feeling. Yes. Humans are just as obscure as AI's. So why would we ever want to have transparent AI's? They are not necessary at all.

Predictable

The next property that the authors demand is predictability to those the AI's govern. Now, if this is not a scary thought, we don't know what is. AI's "governing" us? Sure. It is happening today. Your credit card company assigns your credit limit based on AI's or near-AI's. Yet, this did not produce the end of the world.

The authors argue that laws and those who interpret them (judges) must be predictable so that people may optimize their own lives. And we have to admit that, from a very conventional standpoint, this makes sense. Let's have predictable law and order that we may build upon.

Sadly, it is all wrong. The authors point at contracts as examples. The interpretation of the law, they say, must be predictable so that contracts can be written knowing how they will be executed.

We, of course, say otherwise. Contractual arrangements between parties are just that, arrangements between parties. The "law" is nothing but an extreme example of the legal concept of ex parte, where a decision is taken without requiring all the parties to the controversy to be present. In this case, none of the parties are present. The "law" is a party that inserts itself between the parties to a contract for no reason other than that it is "the law".

The interpretation of contractual agreements is something to be arranged strictly between parties. Predictability is neither necessary nor required. However, we grant that it is convenient. Yet, we can have a great deal of predictability without the need for "the law". Furthermore, a great deal of predictability can be achieved without the need for third-party interpreters, in this case AI's.

The world is full of standards that were created by people agreeing to a predictable set of conditions. The Internet is possible simply because somebody back then figured out that standardizing how to do things is a good thing; if you do not believe us, just Google the term "RFC". Large quality gains are possible because of ISO quality standards, and so on. Nobody "enforces" or forces an "interpretation" of RFC or ISO standards on other people… well… other than governments… and they are utterly unnecessary.
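As a minimal illustration (ours, using nothing but Python's standard library), here are two programs that interoperate with no prior arrangement and no third-party interpreter, purely because both follow the same published standard: HTTP, from the RFC series.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello"
        self.send_response(200)                      # status line, as the standard defines
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):                    # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Hello)         # port 0: let the OS pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# A client written independently of the server; they "agree" only via the standard.
url = f"http://127.0.0.1:{server.server_port}/"
print(urllib.request.urlopen(url).read())            # b'hello'
server.shutdown()
```

No government, no judge and no overseeing AI interpreted anything here; the standard itself supplied all the predictability the two parties needed.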

It is standards that we need, not interpretations of standards. We need standardization, not standardization of interpretations. There are no circumstances, none whatsoever, that merit the imposition of third-party standards on contracts. We explained this in our article Contracts Are The Key To Coexistence.

The problem is that the authors have failed to understand that they are assigning human problems to machines. Worse: they are assigning current human problems to machines. Why should that be the case?

We do not need AI's to be predictable to those they govern as long as we have standards. Note, furthermore, that such predictability need not extend beyond said standards!

Manipulation

Yet another characteristic is resistance to manipulation, i.e. AI's being required to be "tamper-proof". The authors provide the example of an AI searching for explosives in luggage. They argue that such an AI would have to be tamper-proof to be of any use. And they are correct, but this is not a generality!

Sure, when it comes to security we want AI's that are as tamper-proof as possible. And there are other places where we would demand the same, for example in medicine or driving. But do we actually want all AI's to be as "robust to manipulation" as the ones controlling a nuclear reactor? Of course not!

The authors made the initial assumption that robustness to manipulation is a good thing and that therefore all AI's should have it. But the real world is not like that. In the real world we have reliable and unreliable people, even among those looking for explosives in luggage, among doctors, and among engineers handling nuclear reactors.

The issue is, again, not robustness, but how much robustness is good enough. And do you know how people determine this threshold? Through the market! The authors are committing the same error the Communists did. They believe they can forecast what people want. They believe that they can determine how much of a characteristic is "good" for AI's. The correct answer is that this is strictly a personal and subjective choice, as Austrian Economics teaches.

Responsibility

Another characteristic that the authors demand is that AI's be made in such a way that we can always find a responsible person to solve an unusual problem or issue. Their argument is that if AI's have no responsible people to supervise or override them, people will tend to blame AI's for their human errors. And, of course, this is true, generally speaking. However, this is also happening today when bureaucrats hide behind procedures, as the authors cheerfully indicate.

But this is an inductive problem. We know that bureaucrats should be watched. By whom? By supervisors. And they? By managers. And they? By ministers. And they? By governments. And they? By elected representatives. And they? By the people. And they? By other people. And they? By yet more people, and so on. We talked about this in our article Who Watches the Watchers.

How is this "human" responsibility over AI's any different from human responsibility over humans? Currently we do not have a political system capable of solving this problem… except Libertarianism, where everybody watches everybody else to the subjective degree to which they are interested in watching.

And yet again the authors are committing the error of cloning current human and political deficiencies and transplanting them into AI's in the hope of finding a solution within a political system that does not work.

In a Libertarian system, AI's (whether self-aware or not) will be bound and un-bound to humans in the very same manner that humans are bound or un-bound to other humans today. That is not the key. The key is the contractual relationship between humans and AI's. Did you agree to allow an AI to interact with your property? Do you have a contract with somebody owning an AI or in partnership with an AI? Then you are responsible! You and only you! Not the other party. Not the AI. You! It was your choice!

And it still is. How much "irresponsibility" you are willing to accept is your decision to make. As usual, these levels will be determined by the free market. Unfortunately, the authors do not think in these terms. They still think in the absolute terms of our current political system.

CONCLUSION

The point is that responsibility, transparency, auditability, incorruptibility and predictability are neither necessary nor desirable as absolute goals for AI's. Sure, some of these characteristics are appropriate for some AI's, under some conditions and to some degree. But they are not criteria that are absolutely applied to humans performing social functions today, and therefore they are not criteria that must be built into an algorithm intended to replace human judgement in social functions. To think otherwise is simply to transplant current political limitations into AI's. But if that's the case, why have deeply flawed AI's to begin with? The authors are definitively missing the point and not thinking creatively in terms of Libertarianism and Austrian Economics.

Note: please see the Glossary if you are unfamiliar with certain words.

Bibliography:

Bostrom, N. and Yudkowsky, E. "The Ethics of Artificial Intelligence". Machine Intelligence Research Institute (MIRI).
