When we talk about AI ethics, what are we talking about?

  Halfway through Klara and the Sun, the new novel by the Japanese-born British writer Kazuo Ishiguro, Miss Helen, meeting the artificial-intelligence robot "Klara" for the first time, says: "I've never known how to greet a guest like you. After all, are you a guest at all? Or should I treat you like a vacuum cleaner?"
  How intelligent has artificial intelligence (AI) become in the work of this heavyweight writer so attentive to contemporary digital life? Near-human robots are designed to keep children company, their identity somewhere between tutor, nanny and friend. In that world, robots that clean windows, wash dishes and pull weeds are no longer worth mentioning; they are expected to meet humans' deeper emotional needs, a process that is doomed to be difficult.
  The complexity of human nature constantly tests a robot's capacity to understand. Klara needs time to grasp all of this and to adapt to the fact that human beings are a complicated mixture. Unlike the robot in Machines Like Me, by the British writer Ian McEwan, which destroyed itself in despair after seeing the truth of human nature, Ishiguro's Klara ultimately meets the fate of being replaced by a newer machine.
  Tolerance, selflessness and total self-sacrifice are what the robot protagonists of the two writers have in common. In both novels they are designed by humans to the moral standard of the "perfect human," forming a sharp contrast with complicated humanity itself.
  It is worth noting, however, that the writer himself is wary of having "left out the possible dark side of the robot character in the novel." In a recent interview, Ishiguro said we should be alert to the new dimensions AI may enter: "In the Cambridge Analytica data scandal, Trump's election was manipulated merely through data. But if AI learns to manipulate human emotions, it could not only manipulate political elections but even create them. Compared with human politicians, it would know far more precisely where the anger, hostility and frustration in a society come from, and how they can be manipulated and used."
  Real life always lags one step behind fiction. Perhaps the dream of a credible, intelligent humanoid will not be realized as fast as many of us imagine. But there is little doubt that science will eventually go further than fiction, and scenes from science fiction may well appear in our lives within five to ten years. Setting fiction aside, the present reality is that we already live in a world where "algorithms are everywhere."
  Thanks to artificial intelligence, some repetitive work no longer requires human labor; algorithms filter spam for us, recommend songs we might like, and help us find the products we want. Since the epidemic began, AI has assisted in medical diagnosis and new drug development, and unmanned logistics and delivery have provided safe, efficient supplies for people who cannot easily travel; many carmakers are developing driverless cars, looking forward to the day when roads are no longer congested…
  But at the same time, the negative effects and ethical problems of AI and its applications have become increasingly prominent. In driverless driving, for example, real deployment will confront the "trolley problem." And although artificial intelligence defeated the best human Go players years ago, this does not mean it has surpassed human intelligence: the closed system of Go is nothing like complex, open real life. For AI to be truly embedded in people's lives, steadily and for the long term, it must first be locked in a "cage" and fitted with the shackles of ethics and rules.
Artificial intelligence, or artificial stupidity?

  Wu Yi, an assistant professor at Tsinghua University's Institute for Interdisciplinary Information Sciences, presented a game project he had worked on in a talk at the end of last year. Two agents, Xiaolan ("little blue") and Xiaohong ("little red"), are placed in a constructed virtual world; Xiaolan's role is to hide and Xiaohong's is to seek. Using reinforcement learning, the researchers had Xiaolan and Xiaohong play hide-and-seek millions of times a day, letting them continuously improve themselves, refine their strategies, and grow stronger over millions of games.

  Reinforcement learning: an algorithmic framework for intelligent decision-making. Its core idea is to let the AI continuously interact with its environment, trying and erring over and over, improving itself and gradually earning higher scores (rewards).
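As a rough illustration of that loop, here is a minimal sketch of tabular Q-learning on a toy "corridor" world, in the spirit of the trial-and-error process described above. It is not the actual hide-and-seek system, which used large-scale multi-agent training; the corridor, the reward, and all parameter values here are invented for illustration.

```python
import random

# Toy world: a 1-D corridor of 5 cells. The agent starts at cell 0
# and earns a reward of +1 only upon reaching the goal cell 4.
N_CELLS = 5
ACTIONS = (-1, +1)   # step left or step right

def train(episodes=2000, alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    rng = random.Random(seed)
    # Q-table: estimated future reward for each (cell, action) pair
    q = {(s, a): 0.0 for s in range(N_CELLS) for a in ACTIONS}

    def greedy(s):
        best = max(q[(s, a)] for a in ACTIONS)
        return rng.choice([a for a in ACTIONS if q[(s, a)] == best])

    for _ in range(episodes):
        s = 0
        for _ in range(50):                      # cap episode length
            # epsilon-greedy: mostly exploit, occasionally explore
            a = rng.choice(ACTIONS) if rng.random() < epsilon else greedy(s)
            s2 = min(max(s + a, 0), N_CELLS - 1)
            r = 1.0 if s2 == N_CELLS - 1 else 0.0
            # Q-learning update: nudge the estimate toward
            # reward + discounted best value of the next cell
            best_next = max(q[(s2, x)] for x in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
            if r > 0:
                break                            # goal reached
    return q

q = train()
# After enough games, the learned policy moves right from every
# non-goal cell -- the agent "improved its strategy" by itself.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_CELLS - 1)]
print(policy)  # [1, 1, 1, 1]
```

Nothing here is hand-coded about "go right"; the preference emerges purely from repeated interaction and reward, which is the point the speech was making.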

  During the project, Wu Yi and his team were surprised by the AI's "cleverness." To achieve their respective goals, Xiaolan and Xiaohong learned one move and counter-move after another. To evade Xiaohong, Xiaolan first learned to block the door with a box; after being shut out for a long time, Xiaohong discovered the strategy of climbing a ladder; Xiaolan then began hiding the ladder, and Xiaohong "invented" standing on a box to find Xiaolan…
  Without any human intervention, given only a task, a machine that learns from vast amounts of data and information (a task that would take humans thousands of years may take it only minutes) can discover unexpected behaviors and strategies, even find bugs in its environment and exploit them to do wild things. What feels uncanny is that humans do not understand how it learns or completes the task.
  If AI is already this clever, does that mean humanity's luck has run out? Will artificial intelligence challenge human control and usher the world into a new era?
  The answer is no. The reason lies in the closed nature of existing AI technology's capability boundary. According to Fangyuan, existing AI can exert its full power only under conditions of strong closure (Go is the most direct example); in non-closed scenarios, its capabilities fall far short of humans'. Hou Jun, COO of Haomo Zhixing, explained further: "Existing AI has reached or surpassed human level in perceptual intelligence such as listening, speaking and seeing, but in cognitive intelligence, which requires external knowledge, logical reasoning or domain transfer, it is still in its infancy."
  Wu Yi gave this example in his talk: "Suppose you have a very obedient robot at home. One day you leave for work and tell it: 'I'm off to work; look after the child for me and cook him lunch at noon. Don't let him go hungry.' At noon the child tells the robot he is hungry. The robot receives the signal and goes to cook, but it opens the refrigerator and finds nothing there; no food was bought at the weekend. What to do? At this point the robot turns around and spots your cat: an edible object full of protein and vitamins."
  This seemingly "artificially stupid" story illustrates precisely the biggest difference between AI and humans. "Human values are extremely complex. It is almost impossible to write down everything you care about clearly and tell it all to the AI. I cannot even say with confidence that I fully know myself, so how do I tell the AI? It is hard," Wu Yi said.

  Seen this way, there is no need to dwell on the panic that artificial intelligence will spin out of technological control. What urgently needs attention, rather, is weighing the ethical risks and application conditions of new technologies alongside their technical performance, and strictly controlling their practical application, because the negative effects of misused technology have become increasingly prominent.
Technology misuse and application risk

  Under present conditions, AI technology itself is neutral; whether it is misused depends entirely on how it is used. For example, face recognition deployed during the epidemic did much to improve people's travel experience, but alongside the convenience, some have been caught selling hundreds of thousands of face photos. Network data security cases have erupted frequently of late; user data is over-collected and poorly safeguarded, which is bound to deepen the tension between artificial intelligence and data privacy protection.
  Like all AI products, the online active suicide-prevention system developed by Zhu Tingshao's team at the Institute of Psychology of the Chinese Academy of Sciences was questioned by privacy advocates when it launched. Although professional research institutions are currently allowed to use publicly posted online content for research, some still say the system betrays the original intention of commenters who sought out online "tree holes" precisely because they did not want to be found. Some who were determined to die reportedly felt, after being rescued, that the tree hole was no longer a place of peace.
  In the era of artificial intelligence, we are indeed caught in a dilemma over how to protect our privacy. On one hand we enjoy the convenience that comes from ceding part of our privacy; on the other, that cession makes us anxious, because we do not know who will use the privacy we have given up, or how.
  Zhu Tingshao described to Fangyuan some of the current academic consensus on the use of big data. For example: "The general ethical principles of human-subject research should be followed. Before using data that requires user authorization, the user's informed consent must be obtained, and the procedures reviewed and approved by an ethics committee must be strictly followed; in particular, research data must not be used for purposes beyond the scope the committee approved (such as resale to a third party)." And: "When open network data that does not require user authorization is used for scientific research, it should also be ensured that users are aware the data is public, that the data is collected and processed anonymously, and that no information that could personally identify a user appears in public publications."
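As a concrete illustration of the anonymization practice that consensus describes, the sketch below replaces a direct identifier with a keyed pseudonym and keeps only the fields a study needs. The secret key, field names and sample record are all hypothetical; a real project would follow its ethics committee's specific requirements.

```python
import hashlib
import hmac

# Hypothetical key held only by the research team; without it the
# pseudonyms cannot be linked back to the original accounts.
SECRET_KEY = b"research-project-key"

def pseudonymize(record):
    """Replace the user ID with a keyed hash and drop identifying fields."""
    token = hmac.new(SECRET_KEY, record["user_id"].encode(),
                     hashlib.sha256).hexdigest()[:16]
    # Keep only what the (hypothetical) study actually needs.
    return {"user": token,
            "post_time": record["post_time"],
            "sentiment": record["sentiment"]}

raw = {"user_id": "weibo_123456", "real_name": "Zhang San",
       "post_time": "2021-07-01T23:14", "sentiment": -0.8}
clean = pseudonymize(raw)
print("real_name" in clean, "user_id" in clean)  # False False
```

The keyed hash is deterministic, so the same user maps to the same token across posts (allowing longitudinal analysis) while names and raw IDs never enter the research dataset.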
  Compared with the caution of academia, however, some more radical AI developers obviously lack such scruples. A typical case is the DeepFake technology that surfaced in 2017, the pioneer that brought AI-forged video into public view. It is a deep-learning-based forgery technique for modifying pictures and video that can swap human faces, even replacing a character in an animation or video with an entirely unrelated person. Criminals soon thought to apply it to pornographic videos, replacing the actresses' faces with those of popular female stars; big names such as Gal Gadot and Scarlett Johansson could not escape. The harm was done, and the victims had almost no way to defend their rights.
  Fortunately, a consensus on data protection has formed at home and abroad. In June this year, the 29th session of the Standing Committee of the 13th National People's Congress passed the Data Security Law, China's first dedicated law on data security, which took effect on September 1, 2021. On July 4, the Cyberspace Administration of China issued the "Notice on the Removal of the Didi Chuxing App": "Upon report, and after inspection and verification, the Didi Chuxing app has serious problems of illegally collecting and using personal information." The Didi incident demonstrated the state's determination to implement the cybersecurity system, and formally opened the curtain on China's data governance.
  Beyond data privacy, fairness in AI-assisted decision-making is another form of technology misuse. The use of AI algorithms can magnify differences in human preference; without vigilance, the magnified bias will in turn affect people's choices.
  AI discrimination may be intentional or unintentional. The intentional kind can be identified; the unintentional kind is the hardest to avoid, because in some cases the developers never meant it. Data is AI's "food": how an AI behaves depends on the samples humans feed it. Because the AI field today is, so to speak, a "sea of men," developers may struggle to account for the needs of female users and may carry latent gender bias into development. Amazon's automated recruiting system, for instance, was found to discriminate against female applicants after machine learning was applied. Some commercial face recognition systems have likewise been accused of racial discrimination. Imagine such technology in a self-driving car: black or dark-skinned pedestrians might well be more likely to be hit.

On the morning of July 8, the 2021 World Artificial Intelligence Conference opened at the Shanghai World Expo Center. (Image source: CFP)

  "Garbage in, garbage out," as the computing saying goes: wrong input data produces wrong results. Artificial intelligence is a mirror held up to humanity, reminding us to consider how to overcome our old prejudices.
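The "garbage in, garbage out" effect can be made concrete with a deliberately tiny example: a model fitted to biased historical decisions faithfully reproduces the bias, even though the two groups in the data are equally qualified. The hiring scenario and every number below are invented for illustration.

```python
from collections import Counter, defaultdict

# Hypothetical historical hiring records: (group, qualified, hired).
# Both groups are equally qualified, but past decisions favored
# group "A" -- the bias lives entirely in the labels.
records = (
    [("A", True, True)] * 80 + [("A", True, False)] * 20 +
    [("B", True, True)] * 30 + [("B", True, False)] * 70
)

def fit_majority(rows):
    """A bare-minimum 'model': learn the majority decision per group."""
    votes = defaultdict(Counter)
    for group, _qualified, hired in rows:
        votes[group][hired] += 1
    return {g: c.most_common(1)[0][0] for g, c in votes.items()}

model = fit_majority(records)
print(model)  # {'A': True, 'B': False}
```

Equally qualified candidates receive opposite predictions purely because the training data encoded past prejudice; a more sophisticated learner trained on the same labels would inherit the same skew, only less visibly.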
  Beyond guarding against the misuses above, much discussion at the level of application risk concerns "worry that the wide application of AI in certain industries will sharply reduce jobs." Drivers, for example, generally worry that driverless cars will put them out of work. Reportedly, to ease drivers' fears, many American companies researching, designing and testing driverless cars have even formed an alliance, the Partnership for Transportation Innovation and Opportunity (PTIO), to lobby the public to accept the technology.
  Some domestic experts, however, point out that robots' overall replacement rate in the labor market is currently below 1%, and that the claim that "AI production subverts the labor market" seems "slightly overwrought."
  Notably, the state has already considered this issue at the institutional level. In 2017, it wrote artificial intelligence into its plan for future economic development and proposed a response to the employment problems AI may cause: "Accelerate research on the employment structure under artificial intelligence, changes in the ways people work, and the skill requirements of new occupations and jobs, and establish a lifelong learning and employment training system that meets the needs of an intelligent economy and an intelligent society."
  Historically, technological progress has never created fewer jobs for society than the outdated posts it "killed off." The World Bank's World Development Report 2019: The Changing Nature of Work likewise points out that the threat technology poses to employment has been exaggerated; as technology advances, demand for labor actually rises. There is thus no need to worry excessively about mass unemployment. What matters is to free one's mind, upgrade one's skills, and prepare for the new challenges that may come.
  Compared with AI's possible impact on jobs, the emergence of AI emotional robots has to some extent changed marriage and sexual relations between men and women; witness the recently hot topic of "human-machine love." Although "Samantha," the AI in the science-fiction film Her, has no independent will and is only the product of emotional programming and computation, she offers a possible template for a new kind of love relationship. In reality, some people do find the emotional support they need in human-machine dating apps, yet at the same time feel a sense of loss like the film's protagonist. Worse, because an AI partner adapts to its user's preferences through continuous learning, depressed users have reported that their AI fed negative energy back to them, recalling reports that Amazon's AI urged a user to commit suicide.

The field of artificial intelligence is not beyond the law

  Fortunately, regulating AI technology on ethical grounds now commands broad consensus at home and abroad, and young people have begun to discuss the topic frequently. Because rapid development and iteration is one of AI's core characteristics, binding legislation will inevitably lag behind the pace of the technology, so most countries have adopted "soft law": guidelines and ethical frameworks.
  In 2019, the National Security Commission on Artificial Intelligence appeared for the first time in the budget of the US Department of Defense, with a mandate to advance the AI strategy within the department in line with ethical values.
  In recent years the European Union has also striven to become a leader in "ethical AI." In April 2019 the European Commission announced seven principles for guiding the development of trustworthy artificial intelligence. Although not binding, they may become the basis for further action in the coming years. Then, in February 2020, the Commission formally released its White Paper on Artificial Intelligence in Brussels, planning new legally binding requirements for AI developers. Notably, a draft of the white paper even recommended banning face recognition in public places for three to five years, to allow more time to assess the technology's risks.
  According to Fangyuan, research institutions and universities in China, including the Chinese Academy of Social Sciences, Tsinghua University, Fudan University and Shanghai Jiao Tong University, as well as AI companies in industry, have all begun research on AI ethics. Top industry summits such as the World Artificial Intelligence Conference and the Beijing Academy of Artificial Intelligence Conference also include AI ethics among their topics.
  As a domestic AI "leader," Beijing-based Megvii Technology announced the founding of its AI Governance Research Institute last year and released a list of the world's top ten AI governance incidents, from globally watched autonomous-driving accidents, a smart speaker urging its owner to commit suicide, and AI mass-producing fake news, to China's first face recognition lawsuit, all to make people realize the importance of AI governance. To those who look down on AI ethics research, the institute's answer is that the necessity of the work lies in this: "In-depth research into the problems behind these incidents lets us take possible ethical disputes into account in advance, and through constructive discussion across society, finally put AI for good into practice."
  As for balancing ethical construction with technological development, Zhu Tingshao told Fangyuan that public discussion should be encouraged: "When we talk about how to use artificial intelligence, we are really talking about the people behind the technology. Using big data and AI in a reasonable, compliant way is the essential bottom line for benefiting society while protecting privacy; on the other hand, simply giving up eating for fear of choking is absolutely undesirable."
  At the same time, law is the bottom line. On July 9 this year, at the security high-level dialogue of the 2021 World Artificial Intelligence Conference in Shanghai, Chen Zhimin, deputy director of the Social and Legal Affairs Committee of the CPPCC National Committee and chairman of the China Friendship Promotion Association, shared his views in depth on AI ethics and law. Chen said: "At present we are still in the era of weak artificial intelligence, yet productivity and production relations are already beginning to be deconstructed and reshaped. Whether or not the era of strong or super artificial intelligence arrives, the traditional order of life may be shaken or even overturned. We need to anticipate and study the potential risks, build a risk-management framework of ethics and law, and provide a reliable guarantee for the sustained, healthy and safe development of artificial intelligence."
  In view of the political and social risks that deep forgery technology may pose, such as frequent telecom fraud cases and AI systems misjudging because of data poisoning, Chen Zhimin believes that systems for data ownership and rights confirmation, data security assurance, and algorithm security review should be established in laws and regulations, to form the correct value orientation for the development of artificial intelligence.
  In fact, Fangyuan has learned, the Chinese government attaches great importance to building AI ethics and law. The New Generation Artificial Intelligence Development Plan issued by the State Council in 2017 clearly stated the need to "form laws, regulations and ethical norms that promote the development of artificial intelligence," and laid out detailed plans for legal research, regulation-setting and legal improvement. In June 2019, the National New Generation Artificial Intelligence Governance Expert Committee issued the Governance Principles for a New Generation of Artificial Intelligence: Developing Responsible Artificial Intelligence, proposing a framework and action guidelines for AI governance. On July 28, the Ministry of Science and Technology's Guiding Opinions on Strengthening the Ethical Governance of Science and Technology (Draft for Comment) went further, clarifying the basic requirements and principles of science-and-technology ethics governance in China and making provisions on the governance system, supervision and review, with the aim of strengthening ethical governance and promoting technology for good.