Ilya Sutskever Steps Down as OpenAI’s Chief Scientist, Leaves a Legacy of AI Innovation

Today, OpenAI co-founder and chief scientist Ilya Sutskever announced his resignation on Twitter.

After nearly 10 years at OpenAI, I have made the decision to leave. OpenAI’s trajectory is nothing short of miraculous, and I believe OpenAI, under the leadership of Sam Altman, Greg Brockman, and Mira Murati, and the outstanding research leadership of Jakub Pachocki, will build safe and beneficial AGI.

It was a privilege to work together, and I will miss you all dearly. So long, and thanks for everything. I’m excited about what comes next: a project that is very personally meaningful to me, whose details I’ll share in due course.

Ilya Sutskever also shared photos with the likes of Sam Altman, Greg Brockman and Mira Murati.

OpenAI CEO Sam Altman tweeted that the parting of Ilya and OpenAI made him very sad.

Ilya is undoubtedly one of the greatest thinkers of our generation, a guiding light in our field, and a dear friend. His talent and vision are well known; his warmth and compassion are less widely known, but no less important.

Without him, OpenAI would not be what it is today. Although he is leaving to pursue something personally meaningful to him, I am forever grateful for what he did here and committed to finishing the mission we began together. I am happy that for so long I got to be close to such a genuinely remarkable talent, and to someone so focused on getting to the best possible future for humanity.

Jakub Pachocki will be our new Chief Scientist. Jakub is also undoubtedly one of the greatest thinkers of our generation; I’m delighted that he will be here to take up the baton. He has been responsible for many of our most important projects, and I have every confidence that he will lead us quickly and safely toward our mission of ensuring that artificial general intelligence (AGI) benefits everyone.

Jakub Pachocki, the incoming chief scientist of OpenAI, also expressed his gratitude to his predecessor Ilya.

Ilya introduced me to the world of deep learning research and has been a mentor and great collaborator for many years. His incredible vision for deep learning became the foundation for OpenAI and the field of AI today. I’m extremely grateful for the countless conversations he’s had with us, from high-level discussions about future advancements in AI, to in-depth technical whiteboard sessions. Ilya, I will miss working with you.

According to OpenAI’s official website, the new OpenAI chief scientist Jakub Pachocki holds a PhD in theoretical computer science from Carnegie Mellon University and has been leading OpenAI’s transformative research programs since 2017. Previously he served as Director of Research at OpenAI and was a leader in the development of GPT-4 and OpenAI Five, including fundamental research in large-scale RL and deep learning optimization.

“He has also been instrumental in refocusing the company’s vision on scaling deep learning systems,” OpenAI said.

With that, the story of Ilya Sutskever and OpenAI comes to an end. In truth, even in a world without OpenAI, Ilya Sutskever would still have a place in the history of artificial intelligence.

A turbulent childhood, and becoming Hinton’s student as an undergraduate

Ilya Sutskever is an Israeli-Canadian born in the former Soviet Union. He immigrated with his family to Jerusalem at the age of five (hence his fluency in Russian, Hebrew, and English), and moved to Canada in 2002.

During his undergraduate studies at the University of Toronto, Ilya Sutskever began working with Geoffrey Hinton on a project to improve the stochastic neighbor embedding algorithm, and later formally joined Hinton’s group as a PhD student.

What happened next is familiar: in 2012, Hinton, Ilya Sutskever, and fellow graduate student Alex Krizhevsky built a neural network called AlexNet, whose ability to identify objects in photos far exceeded any other system at the time.

Geoffrey Hinton, Yann LeCun, and Yoshua Bengio became known as the three giants of deep learning and shared the 2018 Turing Award, cited for their breakthroughs in neural network research.

But when Ilya Sutskever joined Hinton’s team in the early 2000s, most AI researchers thought neural networks were a dead end.

“Working with Geoffrey gave me the opportunity to study some of the most important scientific problems of our time, and to pursue ideas that most scientists strongly dismissed but that turned out to be absolutely right,” Ilya Sutskever later said in an interview.

AlexNet was deep learning’s breakthrough moment. After years of failures, his team was the first to show that pattern recognition could be cracked; the secret was a deep neural network trained on massive amounts of data and compute.

That idea later extended from computer vision to natural language processing, and it remains a key factor behind ChatGPT’s achievements today and Sora’s success in video generation; both are inseparable from it.

After finishing his PhD in 2012, Ilya Sutskever spent two months as a postdoc with Andrew Ng at Stanford University, then returned to the University of Toronto and joined DNNResearch, a spin-off of Hinton’s research group.

In March 2013, Google acquired DNNResearch and hired Ilya Sutskever as a research scientist at Google Brain.

“Ilya has always been interested in language,” says Jeff Dean, now Google’s chief scientist. “He had a strong intuition about where things were going.”

At Google, Ilya Sutskever showed how deep learning’s pattern recognition abilities could be applied to sequences of data, including words and sentences. He created the sequence-to-sequence (seq2seq) learning algorithm with Oriol Vinyals and Quoc Le, was deeply involved in the development of TensorFlow, and was one of the many authors of the AlphaGo paper.

Joining OpenAI and leading development of the GPT series

A strong interest in language may have driven Ilya Sutskever to join OpenAI.

In July 2015, Ilya Sutskever attended a dinner hosted by Y Combinator president Sam Altman at a restaurant on Sand Hill Road, where he met Elon Musk and Greg Brockman.

OpenAI was born out of that dinner. Those present agreed on one thing: it needed to be a nonprofit, free of competing incentives that could dilute its mission, and it needed the best artificial intelligence researchers in the world.

At the end of 2015, Ilya Sutskever began leading OpenAI’s research and operations under the title of research director. The organization also attracted several world-renowned AI researchers, including “father of GANs” Ian Goodfellow, UC Berkeley’s Pieter Abbeel, and Andrej Karpathy.

The new company, backed by $1 billion in funding pledged by Sam Altman, Elon Musk, Peter Thiel, Microsoft, Y Combinator, and others, had its sights set on AGI from the start, even if few took that prospect seriously at the time.

The early OpenAI struggled, however. Ilya Sutskever recalled: “When we started OpenAI, there was a time when I wasn’t sure how we were going to keep making progress. But I had one very clear belief: you cannot bet against deep learning. Somehow, every time we hit an obstacle, researchers found a way around it within half a year or a year.”

In 2018, OpenAI’s first GPT large language model came out. From GPT-2 to GPT-3, the models grew ever more capable, proving the route correct in practice. With every release, OpenAI pushed the limits of people’s imagination further.

But Ilya Sutskever revealed that when ChatGPT, the product that truly brought OpenAI into the mainstream, was released, internal expectations for it were very low: “When you ask it a factual question, it gives you a wrong answer. I thought it would be so unimpressive that people would say: why are you doing this?”

Wrapping GPT models in an easy-to-use, freely available interface gave billions of people their first glimpse of what OpenAI was building. By that point, the large language model behind ChatGPT had already existed for several months.

The success of ChatGPT has brought unprecedented attention to the founding team.

OpenAI CEO Sam Altman spent much of 2023 on a weeks-long outreach tour, talking to politicians and speaking to packed auditoriums around the world.

As chief scientist, Ilya Sutskever kept a low profile and rarely gave interviews. Unlike the company’s other founding members, he was not a public figure, focusing instead on the work behind GPT-4.

He is not interested in talking about his personal life: “My life is simple. I go to work, then I go home. I don’t do much else. There are many social activities one could take part in, but I don’t go.”

What did ChatGPT’s success bring him?

Earlier this year, Hinton publicly expressed his fear of the technology he helped invent: “I’ve never seen a case where something at a much higher level of intelligence was controlled by something at a much lower level of intelligence.”

Ilya Sutskever, Hinton’s student, did not comment on those remarks, but his focus on the dangers of superintelligence suggests the two are kindred spirits.

As GPT-4 and a series of ever more powerful large language models followed, some OpenAI members, with Ilya Sutskever foremost among them, grew increasingly worried about whether AI could be kept under control.

Before OpenAI’s dramatic boardroom battle, Ilya gave an interview to a reporter from MIT Technology Review.

He said his focus was no longer on building the next generation of GPT or the image-making model DALL-E, but on how to prevent artificial superintelligence, a hypothetical future technology that he sees coming, from running out of control.

“Sutskever told me a lot of other things too. He thinks ChatGPT just might be conscious (if you squint),” the reporter wrote.

Ilya Sutskever believes the world needs to wake up to the true power of the technology OpenAI and other companies are working to create. He also believes that one day humans will choose to merge with machines.

Once artificial intelligence surpasses human-level intelligence, how will humans supervise AI systems far smarter than themselves?

OpenAI established its Superalignment team in July 2023, with the goal of solving the alignment problem for superintelligent AI within four years. Ilya Sutskever was one of the project’s leads, and OpenAI said it would dedicate 20% of its compute to the effort.

In an interview, Ilya Sutskever made a bold prediction: if a model can predict the next word well enough, it means the model has understood the underlying reality that produced that word. If AI keeps developing along its current path, an AI system surpassing humans may be born in the near future. More worrying still, such a superintelligence could bring unexpected negative consequences. That is what “alignment” is about.
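Sutskever’s claim concerns giant neural networks, but the bare mechanics of next-word prediction can be illustrated with a toy sketch. The bigram counter below is purely an illustration of the task itself, not of how GPT models work internally; the corpus and function names are invented for this example:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, which words follow it and how often."""
    words = corpus.split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(model, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

# A made-up miniature corpus, for illustration only.
corpus = "the cat sat on the mat and the cat slept on the sofa"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # -> cat (follows "the" twice, more than any other word)
print(predict_next(model, "sat"))  # -> on
```

A GPT model replaces the frequency table with a deep network that assigns a probability to every possible next token given the entire preceding context, which is where Sutskever argues real understanding must emerge.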

The team’s first result came in December 2023: using a small, GPT-2-level model to supervise a large, GPT-4-level model achieved performance close to the GPT-3.5 level, opening a new direction for empirical research on aligning superhuman models.
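The weak-to-strong paper quantifies this with a metric called performance gap recovered (PGR): how much of the gap between the weak supervisor and the strong model’s ceiling is closed when the strong model learns only from the weak model’s labels. The sketch below shows the arithmetic; the accuracy numbers are invented for illustration, not taken from the paper:

```python
def performance_gap_recovered(weak_acc, w2s_acc, ceiling_acc):
    """PGR = (weak-to-strong acc - weak acc) / (ceiling acc - weak acc).

    1.0 means the strong student fully recovered its ceiling despite
    training only on the weak supervisor's labels; 0.0 means it merely
    imitated the weak supervisor."""
    return (w2s_acc - weak_acc) / (ceiling_acc - weak_acc)

# Hypothetical accuracies: a GPT-2-level supervisor and a GPT-4-level student.
weak = 0.60     # weak supervisor evaluated alone
w2s = 0.75      # strong student trained on the weak supervisor's labels
ceiling = 0.80  # strong student trained directly on ground truth
print(round(performance_gap_recovered(weak, w2s, ceiling), 2))  # -> 0.75
```

A PGR well above zero is the empirical signal the team reported: the strong model generalizes beyond its flawed supervision rather than simply copying the supervisor’s mistakes.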

Meanwhile, OpenAI announced a $10 million grants program in partnership with Eric Schmidt to support technical research toward ensuring the alignment and safety of superhuman AI systems.

Breaking with OpenAI’s founding team

Judging from how things turned out, however, the rift between Ilya Sutskever and the faction represented by Sam Altman was irreparable.

Recounting the course of the boardroom battle, OpenAI chairman and co-founder Greg Brockman said:

At noon on Friday, Sam received a text message from chief scientist Ilya Sutskever asking to talk. Sam joined a Google Meet attended by the entire board except Greg. Ilya told Sam that he was being fired and that the news would go out soon.

At 12:19 p.m. that day, Greg received a text from Ilya Sutskever asking for a call as soon as possible. At 12:23, Ilya sent the Google Meet link. Greg was told that he would be removed from the board (but that he was vital to the company and would keep his role) and that Sam had been fired. Around the same time, OpenAI published its announcement.

In its announcement, OpenAI said Altman had not been consistently candid with the board. Some interpreted this to mean that OpenAI might have achieved AGI internally without promptly sharing the news more widely, and that Ilya and others had pressed the emergency stop button to keep the technology from being deployed at scale without a safety evaluation.

According to The Information, at OpenAI’s all-hands meeting that day, Ilya Sutskever acknowledged what employees were calling a “coup.” “You can call it that, but I think it was just the board doing its duty,” he said.

Of course, this is all just speculation.

At the end of November, Sam Altman officially returned to OpenAI, and the dust settled. Ilya Sutskever did not leave OpenAI immediately, but he never appeared in its San Francisco office again.

Sam Altman expressed his gratitude to Ilya Sutskever and his hope that they would keep working together: “I respect and love Ilya. I think he is a guiding light of the field and a gem of a human being. I have zero ill will toward him.”

Dalton Caldwell, managing director of investments at Y Combinator, once recalled: “I remember Sam Altman saying that Ilya was one of the most respected researchers in the world, and that he believed Ilya could attract many top AI talents. He even mentioned that the world-class AI scholar Yoshua Bengio thought it would be impossible to find a better candidate than Ilya for OpenAI’s chief scientist.”

Jakub Pachocki, who ultimately took over, may also have been a carefully considered choice by OpenAI’s board. Jakub joined OpenAI in 2017, his first job out of school.

What did Ilya see?

Rumors of Ilya’s departure from OpenAI had in fact circulated for a long time. During the boardroom battle, when Sam Altman was ousted from OpenAI, rumor had it that Ilya had “seen something”: something powerful enough to make him worry about the future of AI and rethink its development. What exactly he saw, no one yet knows.

In fact, Ilya’s concerns did not start with ChatGPT. In a video shot between 2016 and 2019, Ilya said that on the day AGI arrives, AI may not necessarily hate humans, but it may treat humans the way humans treat animals. People do not set out to harm animals, but when you want to build an intercity highway, you don’t consult the animals; you just build it. Faced with such situations, AI may naturally make similar choices. That is Ilya’s AI philosophy.

This explains why Ilya has always worried about the pace of AI progress and focused on AI alignment, the work of aligning AI with human values.

Where does Ilya go from here?

Musk, himself a founding backer of OpenAI who left midway, has extended an olive branch, saying Ilya Sutskever should join Tesla or xAI. Could that be his next move?

Since California does not enforce non-compete provisions, Ilya Sutskever could immediately go to work for another company, or start one of his own.

Another well-known OpenAI alumnus, Andrej Karpathy, joined Tesla after leaving OpenAI, later returned to OpenAI from Tesla, and announced his departure again not long ago. When the boardroom battle erupted at the end of 2023, Karpathy was on vacation and entirely an outsider, yet he completed his departure before Ilya did.

What is the truth? We don’t know if we’ll ever find out.

Much of what Ilya Sutskever says may sound a bit wild, but by now it no longer feels as “crazy” as it did a year or two ago. As he put it, ChatGPT has rewritten many people’s expectations of the future, turning “will never happen” into “will happen sooner than you think.”