
Musk’s call to stop GPT-5 looks more like performance art

The news that Musk, Yoshua Bengio (one of the “Big Three” of deep learning), and more than 1,000 academics and industry figures had jointly called for a pause on training large AI models swept the Internet and caused an uproar.

Will GPT research really be halted? From the perspectives of academia and industry, does AI genuinely threaten humanity? Is the joint letter a well-grounded warning or a scare stunt? Supporters and opponents have spoken out one after another, and the debate has grown heated.

Amid the confusion, we found that the authenticity and validity of the joint letter are themselves in question: many people listed in the signature section did not even know they had “signed” it.

The content of the letter itself is also debatable. Calling on people to pay attention to AI safety is of course legitimate, but suspending AI research for six months, and even asking governments to intervene while relevant rules and consensus are worked out, is a naive and unrealistic approach rarely seen in the history of technological development.

At the same time, many people noticed that OpenAI had opened GPT-4 API access to a new batch of applicants, and developers were excitedly building this sensational technology into their products. The prelude to the pre-AGI era has quietly begun.
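For context, a minimal sketch of what calling the newly opened GPT-4 API looked like for developers at the time, using OpenAI’s pre-1.0 Python SDK; the API key and prompt below are placeholders, not taken from the article.

```python
# Minimal sketch (assumption: OpenAI's pre-1.0 Python SDK and approved access
# to the "gpt-4" model; the key and prompt are made-up placeholders).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; a real key comes from OpenAI's dashboard

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the open letter calling for a pause on giant AI experiments."},
    ],
)

# Print the model's reply text
print(response["choices"][0]["message"]["content"])
```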

Will AI stop? Can AI stop?

Or, as one netizen put it, would you try to halt mass production of the Ford Model A?

The Model A helped Ford regain the top spot in auto sales after the Model T was withdrawn from the market.

As of press time, Founder Park found that the number of signatures on the joint letter had decreased from 1,125 to 1,123.

01 What did the joint letter say?

The full text of the open letter is posted on the official website of the Future of Life Institute (FLI); it can be summarized as follows:

The open letter calls on all artificial intelligence laboratories to immediately pause, for at least six months, the training of AI systems more powerful than GPT-4, including the GPT-5 currently in training.

The letter argues that AI systems with human-competitive intelligence can pose profound risks to society and humanity, as extensive research has shown and top AI labs have acknowledged.

It also calls on AI labs and independent researchers to use the pause to jointly develop and implement a shared set of safety protocols for advanced AI design and development, rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe, and this confidence must be well founded and grow with the magnitude of a system’s potential impact.

02 Who is the originator?

Future of Life Institute: founded in March 2014 by MIT cosmologist Max Tegmark, Skype co-founder Jaan Tallinn, Harvard University doctoral student Viktoriya Krakovna, Boston University doctoral student (and Tegmark’s wife) Meia Chita-Tegmark, and Anthony Aguirre, a cosmologist at the University of California, Santa Cruz.

The institute’s mission, as described on its official website, is:

Protecting the future of life: the development and use of certain technologies have profound implications for all life on Earth. This is currently the case with artificial intelligence, biotechnology and nuclear technology.

If managed correctly, these technologies can change the world, greatly improving the lives of those alive today and all those yet to be born. They can be used to treat and eradicate disease, strengthen democratic processes, and mitigate—or even prevent—climate change.

If not managed properly, they can have the opposite effect. They could produce catastrophic events that would bring humanity to its knees, and possibly even push us to the brink of extinction.

The mission of the Future of Life Institute is to steer transformative technologies away from extreme, large-scale risks and to make them beneficial to life.

As of 16:00 Beijing time on March 29, the open letter had 1,125 signatories. We compiled a list of the better-known scholars and technology-industry practitioners among them.

Yoshua Bengio: a Canadian computer scientist known for his work on artificial neural networks and deep learning. Bengio, together with Geoffrey Hinton and Yann LeCun, received the 2018 Turing Award for their contributions to deep learning; the three are also known as the “Big Three” of deep learning.

Stuart Russell: professor of computer science at the University of California, Berkeley, and founder of its Center for Human-Compatible Artificial Intelligence, who has long followed developments in the field. He is also co-author of Artificial Intelligence: A Modern Approach, the standard textbook in the field.

Elon Musk: as everyone knows, CEO of SpaceX, Tesla and Twitter, and a co-founder and early backer of OpenAI.

Steve Wozniak: co-founder of Apple.

Yuval Noah Harari: professor of history at the Hebrew University of Jerusalem and author of the best-selling “Brief History” trilogy that begins with Sapiens: A Brief History of Humankind.

Andrew Yang: founder and current co-chair of the Forward Party, founder of Venture for America, and a candidate in the 2020 Democratic presidential primary.

Jaan Tallinn: co-founder of Skype and co-founder of the Future of Life Institute, which published this open letter.

Evan Sharp: co-founder of Pinterest.

Emad Mostaque: founder of Stability AI, which was established in 2019 and is now a unicorn valued at more than US$1 billion. Its products include Stable Diffusion, the recently popular text-to-image generation model.

John J. Hopfield: American scientist who in 1982 invented the associative-memory neural network now commonly known as the Hopfield network.

Rachel Bronson: president and CEO of the Bulletin of the Atomic Scientists. Founded in 1945 by physicists including Albert Einstein and Robert Oppenheimer, the Bulletin maintains the “Doomsday Clock” countdown.

Max Tegmark: cosmologist, professor at the Massachusetts Institute of Technology, scientific director of the Foundational Questions Institute, and co-founder of the Future of Life Institute.

Victoria Krakovna: research scientist at DeepMind and co-founder of the Future of Life Institute. DeepMind is a British artificial intelligence company acquired by Google in 2014; its AlphaGo program defeated Lee Sedol in 2016, drawing worldwide attention.

We tested the signature-submission process ourselves. After submitting a made-up name and an email address, we received a prompt like the one shown; we have not yet verified whether any manual confirmation follows.

03 What are the problems with the joint letter?

The other two of the Big Three

Statements from the Big Three of the deep learning world inevitably attract attention. Yoshua Bengio’s name sits conspicuously at the top of the signature list, so where do the other two stand?

Hinton, the most senior of the three, has not publicly stated his position, though he has consistently praised OpenAI’s research results.

LeCun’s position was even clearer.

Someone reposted the joint letter on Twitter, tagged LeCun, and claimed that he and Bengio, another of the “Big Three” of deep learning, had both signed it.

LeCun quickly retweeted it with a response: “No. I did not sign this letter. I disagree with its premise.”

Industry observers speculate that the premise LeCun referred to may involve two things.

One is the so-called “out-of-control arms race”. LeCun, as the public face of Meta’s AI research, is obviously one of the participants: a month ago he was promoting Meta’s large language model LLaMA on Twitter.

The other is that LeCun has long been skeptical of the current LLM research direction. When ChatGPT first attracted public attention, he said its underlying technology contained no great innovation and was merely a well-engineered combination of existing techniques: “They’re kind of like students who have learned the material by rote but haven’t really built a deep mental model of the underlying reality.”

On Twitter, a screenshot of OpenAI CEO Sam Altman participating in the signing was widely circulated.

But when we searched the signature list for Sam Altman’s name, we could not find it. There is also a typo in the screenshot: a lowercase “i” where a capital “I” should be.

This suggests that in its early stage the joint letter had no strict identity-verification mechanism: whoever filled in the form was not required to confirm their real identity, making it possible to sign under borrowed famous names, including many famous people who have nothing to do with AI.

Sam Altman did talk about AI safety recently. In an interview with the well-known podcaster and YouTuber Lex Fridman, he said that AI does bring problems such as bias and job displacement, but the only solution is to keep iterating, learn as early as possible, and avoid a situation where “you only get one chance and have to get it right”.

The context for this topic was Sam Altman’s view of GPT-4, given at the start of the interview:

“When we look back in the future, we will think of GPT-4 as a very early AI: slow, full of bugs, and not very good at many things. But early computers were like that too. They took decades to evolve, yet even then they pointed the way to something that would become very important to our lives.”

Other noteworthy signatories

There is no doubt that “suspending the training of AI systems more powerful than GPT-4” would first and foremost affect OpenAI’s next technological step. And for some signatories, that is not a bad thing.

Musk is planning to set up his own AI research lab to develop an alternative to ChatGPT. Over the past few months he has repeatedly taken shots at OpenAI: at one moment stressing AI safety, at another complaining that Microsoft’s US$10 billion investment bought it access to OpenAI’s code base while he, as the earliest backer, received nothing.

In fact, Musk withdrew from OpenAI because Tesla’s AI research conflicted with the direction of OpenAI, which at the time was still an open-source non-profit organization.

“I’m still confused how a nonprofit I donated $100 million to became a $30 billion for-profit.”

Another prominent signatory is Emad Mostaque, founder of Stability AI.

In an interview last year, he said that future models would pass information between modalities such as vision and language; by then, people would be able to produce beautiful slide decks just by speaking, something Microsoft has since demonstrated. The speed of OpenAI’s progress also unsettled him: ChatGPT has intensified competition across the AI industry, and he once texted his employees, “You will all die in 2023.”

Gary Marcus, a computer scientist who has long sparred with LeCun, also signed, but this time his stance is somewhat different.

New York University professor Gary Marcus has long been a critic of deep learning and large predictive models. He believes that models fed on ever more massive data will keep improving, but will never reach the capabilities required for general AI; in his view, the current path to general AI is a dead end.

His debates with LeCun on artificial intelligence are well known. In 2022 Marcus publicly declared that “deep learning has hit a wall”, drawing rebuttals from Hinton and LeCun, two of the Big Three.

This time, although he signed, he said on his Substack that he is not worried about LLMs becoming true artificial general intelligence; the risk lies in the still-unreliable but widely deployed “mediocre AI” of today, such as the new Bing and GPT-4.

There is also Yuval Noah Harari, the best-selling author of Sapiens: A Brief History of Humankind and Homo Deus: A Brief History of Tomorrow.

He has long been watching for the tipping point of AI development. After trying GPT-3 in 2021, he was alarmed by the progress of machine intelligence. On the tenth anniversary of the publication of Sapiens, he wrote: “Soon, artificial intelligence will understand us better than we understand ourselves. Will it remain a tool in our hands, or will we become its tools?”

A few days ago he published an op-ed in The New York Times: “You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills.”

The article discusses the threat that artificial intelligence, and ChatGPT in particular, poses to humanity. He believes ChatGPT may be used to create false information, manipulate people’s emotions and behavior, and even supplant human creativity and intelligence. He calls on humans to take steps to protect themselves from AI while making use of it, and suggests upgrading our educational, political and ethical systems to suit a world with artificial intelligence, learning to control AI rather than be controlled by it.

Attachment: full text of the open letter

AI systems with human-competitive intelligence can pose profound risks to society and humanity, as extensive research has shown and top AI labs have acknowledged. As stated in the Asilomar AI Principles, advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control.

As contemporary AI systems become human-competitive at general tasks, we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, make obsolete and replace us? Should we risk losing control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects. OpenAI’s recent statement on artificial general intelligence notes that “at some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.” We agree. That point is now.

Therefore, we call on all AI labs to immediately pause, for at least 6 months, the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a shared set of safety protocols for advanced AI design and development, rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race, so that today’s powerful, state-of-the-art systems can be made more accurate, safer, more interpretable, more transparent, more robust, more aligned, more trustworthy, and more loyal.

At the same time, AI developers must work with policymakers to dramatically accelerate the development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computing power; provenance and watermarking systems to help distinguish real from synthetic content and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; strong public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions, especially to democracy, that AI will cause.

Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now reap the rewards by engineering these systems for the clear benefit of all and giving society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects. We can do so here too. Let’s enjoy a long AI summer, not rush unprepared into a fall.
