
This article is relatively long, but it will give you a real "understanding" of what the ethical issues of AI are.

How can you beat a robot like that?

Boston Dynamics, a robotics R&D company, tested how well its robot recovers after being obstructed, but it did not expect that footage of engineers "beating" the robot with a stick would spark great public controversy.

Can I fall in love with robots? 

After watching the movie "Her", many people ask this question. In fact, the question contains both a serious philosophical problem and an ethical issue of AI technology: what is love?

Why am I being monitored?

An AI unicorn company is piloting "classroom monitoring based on face recognition" on campus. The system tracks students' every move in the classroom, counting how many times they raise their hands, doze off, or chat, and even whether they are paying attention, and visualizes the data. But is that right?

Although today we are still far from artificial intelligence in the true sense, more and more realistic scenes force us to think about technological alienation, technological panic, and the boundary between what AI "can" do and what it "should" do.

1. The ethical dilemma of AI

Two years ago, on March 18, 2018, a fatal accident involving one of Uber's driverless cars occurred. The truth is that the car's sensors had detected a pedestrian crossing the road, but the autonomous-driving software did not take evasive action in time, which ultimately led to tragedy.

On the surface, the accident is a technical problem: the Uber vehicle detected the pedestrian but chose not to avoid her. In fact, once the power of judgment is handed over to a computer system, the moral and ethical dilemmas of AI technology come into play.

Ideally, the data in a driverless car's AI system would be fair, interpretable, and free of racial, gender, and ideological bias. But Francesca Rossi, a researcher at the IBM Research Center, believes that most AI systems are biased. Google's head of self-driving has said that in a crisis, Google cannot decide who is "better", but will try to protect the most vulnerable.

But do not forget: protecting the vulnerable means that we must classify people, distinguish between them, and weigh the value of one person's life against that of a group.

The moral paradox of autonomous driving arises because, in this era, the problem becomes an algorithmic one. It also involves the rational choices that the programmers behind the algorithm make according to actual economic benefits and costs.

Baidu is the Chinese technology company with the most aggressive AI strategy. Its CEO Robin Li has proposed four principles of AI ethics, the first of which is that the highest principle of AI is safety and controllability: "If an unmanned vehicle is hijacked by a hacker, it may become a killing weapon. This is absolutely not allowed; we must make it safe and controllable." Ma Huateng, CEO of Tencent, proposed four criteria for the future development of AI: whether it can be "knowable", "controllable", "available", and "reliable".

Let's hope these big names practice what they preach.

Another typical ethical dilemma of AI is the care robot. With soulless algorithms, we can make robots blink, sing, and perform various intelligent actions. But in a 2017 survey, nearly 60% of Americans said they did not want robots to take care of themselves or their families, and 64% believed that such care would only increase the loneliness of the elderly. Yet some older people voice the opposite view: they want a care robot, and want to make friends with it.

2. The problem of AI "counterattacking" humanity

Beyond these dilemmas, the concern that draws even more attention is that the development of AI technology will eventually turn against humanity. This worry is vividly reflected in all kinds of science-fiction films.

Westworld depicts a hyper-real theme park. Visitors think it is merely a place to have fun, not knowing that the AI hosts are learning, and will eventually become enemies of humankind.

In Westworld, whenever the hosts are hurt, such as when Clementine is lobotomized or Dolores is abused, neither we the audience nor the humans in the show, who should long since have grown used to such scenes, can help wincing. Even when it is not that bloody, for example when Dolores is dragged away by the Man in Black and we suspect she is about to be violated, when the prostitutes in the Mariposa Saloon are used as mere outlets, or when hosts are handled naked and without dignity in the laboratory, it still causes a certain discomfort in viewers' hearts.

We feel something is wrong, but why is it wrong?

Movies such as Blade Runner, I, Robot, and Westworld take as their theme artificial intelligence resisting and surpassing human beings. People are increasingly inclined to discuss when artificial intelligence will form its own consciousness, surpass humans, and turn humans into its slaves.

Violence, sex, invasion of privacy: the problems between robots and human beings cannot be solved by existing laws alone. All of these films are metaphors for human concerns about the ethical issues of artificial intelligence.

In fact, the AI nightmare in sci-fi movies reflects humans' fear of being dominated by something more advanced, and closer to divinity, than themselves.

As the futurist Kurzweil put it in The Singularity Is Near: once the singularity is crossed, there is the possibility of mankind being completely overwhelmed. Hawking, Schmidt, and others have all warned that strong or super artificial intelligence may threaten human survival.
So some more radical scientists have proposed "putting AI in a cage". "Otherwise, the machines will take over and they will decide what to do with us!" said Roman Yampolskiy, professor of computer engineering and computer science at the University of Louisville and founder and director of its Cybersecurity Laboratory. He proposed putting AI in a controlled environment: "For example, when you study a computer virus, you put it in an isolated system. This system cannot access the Internet, so you can understand its behavior and control its input and output in a safe environment."

3. The development of AI ethics

AI ethics falls within the scope of the new philosophy of science and technology. From a humanistic perspective, as artificial intelligence develops, it may even raise fundamental problems that shake the foundations of society.

Looking back on the development history of AI, we can see how its ethical issues have changed and developed.
1. The "meat computer" and the Turing test
In 1956, scientists held a special seminar at Dartmouth College. John McCarthy, the organizer of the conference, gave it a special name: the Artificial Intelligence Summer Seminar. This was the first time the name "artificial intelligence" was used in academic circles. What McCarthy and Minsky were thinking about was how to turn all kinds of human senses, including vision, hearing, touch, and even brain thinking, into information in the sense of Shannon, the "father of information theory", and how to control and apply it. At this stage, the development of artificial intelligence was, to a large extent, still the simulation of human behavior.

Minsky confidently declared: "The human brain is nothing more than a meat computer."

McCarthy and Minsky not only successfully simulated visual and auditory experience; later, Terry Sejnowski and Geoffrey Hinton, drawing on the latest progress in cognitive science and brain science, built the "NETtalk" program, which simulated a network of units similar to human neurons, so that the network could learn like a human brain and produce simple responses.
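The "neuron-like learning" described above can be illustrated with a minimal sketch. This is not a reconstruction of NETtalk, just a hypothetical single-neuron example of the error-driven weight updates such networks use, here learning the logical OR function:

```python
# Minimal single-neuron learning sketch (illustrative, not NETtalk itself):
# a unit adjusts its weights from examples until it reproduces a mapping.

def train_perceptron(samples, epochs=20, lr=0.5):
    w = [0.0, 0.0]  # connection weights
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # error-driven update: the core of "learning from experience"
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Train on the truth table of logical OR.
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(samples)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in samples])  # learns OR: [0, 1, 1, 1]
```

The network is never told the rule; it converges on it purely from examples, which is the sense in which such programs "learn like a brain".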

At this stage, so-called artificial intelligence largely meant simulating human perception and thinking, in order to create a machine that thinks more like a human. The famous Turing test is likewise based on the standard of whether a machine can think like a person.

The principle of the Turing test is very simple. The tester and the tested party are separated from each other and communicate only through conversation, and the tester must judge whether the other party is a human or a machine. If more than 30% of testers cannot tell, the machine is said to have passed the Turing test. The purpose of the Turing test, then, is still to check whether artificial intelligence is like a human being.
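The pass criterion described above reduces to a simple tally. This sketch assumes the 30% threshold as stated in the text; the judges' verdicts here are hypothetical inputs, since the actual dialogue is beside the point:

```python
# Sketch of the Turing test's pass criterion as described above:
# a machine "passes" if more than 30% of judges fail to identify it.

PASS_THRESHOLD = 0.30  # fraction of judges who must be fooled

def run_turing_test(judge_verdicts):
    """judge_verdicts: list of booleans, True if a judge correctly
    identified the machine. Returns True if the machine passes."""
    fooled = sum(1 for correct in judge_verdicts if not correct)
    return fooled / len(judge_verdicts) > PASS_THRESHOLD

# Example: out of 10 judges, 4 could not tell machine from human.
verdicts = [True] * 6 + [False] * 4
print(run_turing_test(verdicts))  # 40% fooled -> True, the machine passes
```

Note that the criterion says nothing about *how* the machine produced its answers, which is exactly the question the next paragraph raises.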

However, the question is: does machine thinking need human thinking as an intermediary when it makes its own judgments? In other words, must the machine first take a detour, dressing its thinking up as human, before it can judge?

In Ex Machina, the superintelligent robot named Ava deceives the humans, passes the Turing test, and kills her "father", the scientist who had long confined her in a dark laboratory. "What will happen to me if I fail your test?" she asks. "Will someone test you and turn you off because you don't perform well enough?" Throughout the movie, Ava keeps probing her relationship with human beings. In the end, the humans who tried to imprison her are destroyed.

Obviously, AI does not need to think and solve problems the way humans do.

2. Machine learning: not how to be more like humans, but how to solve problems in its own way

The development of artificial intelligence also went in another direction, namely intelligence augmentation (IA).

