
Are We Reaching Artificial General Intelligence (AGI) Soon? Experts Debate

Suddenly, AGI has become the latest future technology promised "within 5 years." From Altman to Jensen Huang, leaders have said on different occasions that AI reaching human-level intelligence will soon arrive. The technological path, and possible future energy shortages, may be the biggest variables on the road to AGI.

The arrival of Claude 3, Sora, and Gemini 1.5 Pro, plus a GPT-5 that may be released this year, gives everyone the vague sense that we are getting closer and closer to AGI.

OpenAI CEO Sam Altman firmly believes that AGI will be achieved within 5 years.

However, we still need to wait patiently.

Nvidia CEO Jensen Huang’s view coincides with Altman’s: if our definition of “a computer that thinks like a human” is the ability to pass the tests we give humans, then AGI will arrive within five years.

Alex Irpan, a Google robotics engineer, revised his prediction of when AGI would emerge after the rise of LLMs: four years ago he put a 10% probability on AGI appearing by 2035; now he puts a 10% probability on it appearing as early as 2025.

Even more striking, the prediction account Jimmy Apples claimed last year that AGI had already been achieved internally.

Siqi Chen, CEO of the startup Runway and an AI investor, also said last year that GPT-5 was expected to finish training by the end of 2023, and that OpenAI expected it to reach AGI level.

Seen in that light, Musk’s interference has dealt a heavy blow to the accelerating march toward AGI…

Logan.GPT, a former OpenAI employee, said that the next ten years will be the most important decade in human history.

In this decade, we will definitely have superhuman AI!

However, there is one problem that is terrifying to contemplate: AI consumes enormous amounts of water and electricity!

Recently, ChatGPT’s alarming power consumption became a trending topic on Weibo.

A PDF circulated a few days ago claimed that OpenAI’s new model Q* may have as many as 125 trillion parameters. Let us do the math: if AGI really arrives, how much electricity would it consume in a day?
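
Here is a rough back-of-envelope sketch in Python. Every number below is an assumption made for illustration (the 125-trillion figure is only the rumor; the token volume, hardware efficiency, and datacenter overhead are guesses), not confirmed OpenAI data:

```python
# Back-of-envelope estimate of daily electricity for serving a hypothetical
# 125-trillion-parameter model. Every constant is an assumption.

params = 125e12                    # rumored parameter count (unconfirmed)
flops_per_token = 2 * params       # ~2 FLOPs per parameter per generated token
tokens_per_day = 1e9               # assumed daily serving volume

gpu_peak_flops = 1e15              # ~1 PFLOP/s peak per accelerator (assumed)
utilization = 0.3                  # assumed fraction of peak actually achieved
gpu_power_kw = 0.7                 # ~700 W per accelerator (assumed)
pue = 1.2                          # assumed datacenter overhead factor

total_flops = flops_per_token * tokens_per_day
gpu_seconds = total_flops / (gpu_peak_flops * utilization)
energy_kwh = gpu_seconds / 3600 * gpu_power_kw * pue

print(f"compute: {total_flops:.1e} FLOPs/day")
print(f"energy:  {energy_kwh:,.0f} kWh/day")   # ~190,000 kWh under these guesses
```

Under these made-up assumptions, serving alone lands near 190 MWh per day, on the order of the daily usage of over ten thousand households, and changing any single assumption moves the answer by an order of magnitude. The point is not the exact number but how quickly the energy term comes to dominate.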

In addition, Musk said in a recent public interview that the chip shortage has eased, and that what will limit AI development next is the supply of electricity and step-down transformers.

How we obtain efficient, clean energy will directly affect when AGI arrives.

Altman: AGI within five years

Sam Altman said in “Our AI Journey”, a book about the future direction of AI, that AI will be able to do “95% of the work of marketers, strategic planners and creative professionals.”

(This book is on a subscription model. New chapters will be released as they are completed)

He also said that AGI will become a reality “within five years.”

This book contains in-depth interviews with top AI leaders, including Altman, by two business innovators, Adam Brotman and Andy Sack.

Brotman is the co-founder and co-CEO of Forum3 and previously served as Starbucks’ first chief digital officer. Sack, also a co-founder and co-CEO of Forum3, previously advised Microsoft CEO Satya Nadella.

Their backgrounds suggest that neither is the type for empty talk.

What Altman reveals consistently pushes the ceiling of what readers thought they knew.

Altman believes that “when AI can independently complete innovative scientific breakthroughs, it can be called AGI.”

The two authors wanted to know how AGI would impact their job: marketing.

So they asked Altman, what does AGI mean for marketers who want to create advertising campaigns to build consumer brands?

At this point, Altman dropped his first knowledge bomb:

This means that 95% of the work marketers do today using agencies, strategic planners and creative professionals will be handled easily, nearly instantly and at virtually no cost by AI.

All of this content will be free, instant, and nearly perfect to use. Images, videos, and campaign creative plans will all be no problem.

And AI will likely be able to test creatives against real or synthetic target customers to predict outcomes and optimize them.

Altman says AGI is coming

The two authors continued to ask Altman, when do you think AGI will become a reality?

Altman replied:

Around 5 years, maybe a little longer – no one can say an exact time, and no one knows exactly what the impact on society will be.

Roetzer, founder of the Marketing AI Institute, has observed that even before AGI arrives, large-scale changes in the economy, the labor force, education, and society are already underway.

Not long ago, Klarna, a large payments company, revealed that its AI assistant is now doing the work of 700 employees.

This AI customer service is powered by OpenAI, handles various customer inquiries, supports multiple languages, and can directly handle refund and return requests.

Klarna said that in just one month, the AI assistant completed the work of 700 full-time customer service staff.

To date, it has conducted 2.3 million conversations, accounting for two-thirds of all customer service conversations at the company.

Its customer satisfaction scores are “on par” with human customer service.

Moreover, it is more accurate and faster in resolving customer requests. The average time to resolve a request dropped from 11 minutes to 2 minutes.

Klarna’s CEO hints that society needs to prepare for advanced artificial intelligence:

This highlights the profound impact artificial intelligence will have on society.

We hope that society and politicians will carefully consider the impact of AI, and believe that comprehensive and transparent management is crucial for our society to cope with this change.

Jensen Huang: AI will pass human tests within five years, and computing power will grow another million-fold in the next 10 years

Jensen Huang shares this view and believes that AGI will arrive soon.

Recently, the Nvidia CEO said: AI will pass human tests within five years, and AGI will arrive soon!

At an economic forum at Stanford University, Huang answered the question of when humans will create computers that can think like humans, one of Silicon Valley’s long-standing goals.

Huang’s answer: it depends largely on how we define that goal.

If our definition of “a computer that thinks like a human” is the ability to pass the tests we give humans, then AGI will be here soon.

In five years, AI will pass human tests

Huang believes that if we make a list of every test imaginable, put it in front of the computer science industry, and let AI tackle it, then within five years AI will do well on every one of those tests.

So far, AI can already pass tests such as the bar exam, but it still struggles with specialist medical exams such as gastroenterology.

But in Huang’s opinion, in five years, it should be able to pass any of these tests.

But he acknowledged that AGI by other definitions may still be far away because experts still disagree on how to describe how the human mind works.

Therefore, from an engineer’s perspective, it is difficult to implement AGI because engineers need clear goals.

In addition, Huang answered another important question: how many wafer fabs do we need to support the expansion of the AI industry?

Recently, OpenAI CEO Sam Altman’s reported seven-trillion-dollar chip plan shocked the world; Altman believes we still need more fabs.

In Huang’s view, we do need more chips, but over time each chip will become more powerful, which in turn limits the number of chips we will need.

He said: “We will need more fabs. But, remember, we are also greatly improving the algorithms and processing of AI over time.”

As computing efficiency improves, demand will not be as great as today’s trends suggest.

“We will increase computing power a million times over the next 10 years.”
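
A quick sanity check on that figure (a sketch of the arithmetic, not Nvidia’s actual roadmap): a million-fold gain in ten years implies a steady multiplier of about 4x per year.

```python
# What yearly growth rate yields a 1,000,000x compute increase in 10 years?
target, years = 1e6, 10
annual = target ** (1 / years)                 # required yearly multiplier
print(f"{annual:.2f}x per year")               # ~3.98x per year
print(f"{annual ** 0.5:.2f}x per half-year")   # ~2x every six months
```

In other words, roughly a doubling every six months, far ahead of the classic Moore’s Law cadence of doubling every two years, which is why Huang leans on algorithmic and architectural gains rather than just more fabs.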

Marcus throws cold water: GPT-5 will not appear in 2024

Marcus, who has always been a naysayer, has also made a new set of predictions for the end of 2024.

We may witness:

– About 7 to 10 models comparable to GPT-4 will have been released
– There will be no revolutionary technical breakthrough (GPT-5 is not launched, or GPT-5 falls short of expectations)
– There will be fierce price competition in the market
– Few companies will manage to build a clear moat
– There will still be no effective way to solve AI hallucinations
– Enterprise adoption of these technologies will remain modest
– Profits will be relatively modest and split among these 7 to 10 companies

Netizens remarked that we will see how this prediction fares in 11 months.

To demonstrate the track record of his predictions, Marcus also pointed out that he had predicted the model hallucination problem back in 2001.

And two years ago, on March 10, 2022, he published an opinion piece arguing that “deep learning is hitting a wall.”

One month after the article was published, DALL·E 2 came out, and Sam Altman wrote a mocking post: “Please give me the confidence of a mediocre deep learning skeptic…”

Today, Marcus once again said that “even after 2 years, deep learning still faces the same fundamental challenges”!

In other words, reaching AGI through deep learning alone remains far out of reach.

https://garymarcus.substack.com/p/two-years-later-deep-learning-is?r=8tdk6&utm_campaign=post&utm_medium=web&triedRedirect=true

In the article, he cited multiple examples to illustrate that these views still hold true today:

– Deep learning is fundamentally a technique for recognizing patterns, and it works best when all we need are rough results.

– Current deep learning systems often make silly mistakes.

– On the argument for scaling up parameters: studies that scale up parameters have not really given LLMs what they most urgently need, namely “understanding.” The scaling metrics proposed by Kaplan and the OpenAI team measure next-word prediction, which is not the same as achieving deep understanding (see the formula after this list).

– “Scaling laws” are merely observed phenomena, like Moore’s Law, and may not hold forever.
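
For reference, the parameter scaling law from Kaplan et al. (2020) that Marcus refers to expresses test loss, i.e., next-word prediction error, as a simple power law in the parameter count N, with roughly the constants reported in that paper:

```latex
% Kaplan et al. (2020): test loss as a power law in parameter count N
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N},
\qquad \alpha_N \approx 0.076,\quad N_c \approx 8.8 \times 10^{13}
```

Note what the law actually measures: cross-entropy on next-token prediction. Nothing in it says anything about “understanding,” which is exactly Marcus’s complaint.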

Of course, Marcus concluded that AGI is not impossible to achieve, but that humanity needs a paradigm shift: more and more results show that LLMs by themselves are not the final answer to AGI.

At the same time, Turing Award laureate LeCun said in a recent interview that AGI is still far away from us.

In this interview, LeCun also noted: “Babies acquire language only after they have grasped basic knowledge of how the physical world works. A lot of physical knowledge is internalized and cannot be put into words, so LLMs cannot understand it either.”

Andrew Ng also weighed in on the AGI discussion, saying that AGI will come gradually, not overnight.

A Stanford team won a NeurIPS Outstanding Paper Award for “Are Emergent Abilities of Large Language Models a Mirage?”. The paper argues that the apparent emergent abilities of large models stem from researchers’ choice of metrics, not from fundamental changes in model behavior as parameter scale grows.

When many people suddenly become aware of a technology (perhaps one that has been developing for a long time), public perception can shift dramatically, and people can be taken by surprise.

But the growth of artificial intelligence capabilities is much more sustained than people think. That’s why I expect the path to AGI will include many steps forward, making systems progressively smarter.
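
The mirage paper’s core argument fits in a few lines of code: if per-token accuracy improves smoothly with scale, a discontinuous metric such as exact match over a multi-token answer still looks like a sudden jump. A minimal illustrative sketch (the accuracy curve below is invented purely for demonstration):

```python
# Sketch of the "mirage" argument: smooth per-token gains can look like
# sudden "emergence" under an all-or-nothing metric.
answer_len = 10  # all 10 tokens must be correct for an exact match

for log_params in range(8, 13):  # models from 1e8 to 1e12 parameters
    # Invented smooth curve: per-token accuracy rises gradually with scale.
    per_token_acc = 1 - 0.5 * 10 ** (-(log_params - 8) / 2)
    exact_match = per_token_acc ** answer_len  # the "emergent-looking" metric
    print(f"1e{log_params} params: per-token {per_token_acc:.3f}, "
          f"exact-match {exact_match:.3f}")
```

Per-token accuracy climbs smoothly from 0.50 to 0.995, yet the exact-match score sits near zero and then shoots up; the “emergence” lives in the metric, not the model, which is the paper’s point.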

ChatGPT consumes an astonishing amount of power. Can humans sustain AGI?

Although AI models have been developing rapidly, a big problem recently has made people worried: they consume too much power!

Artificial intelligence is a bottomless energy pit, and in the future AI may well be choked by energy constraints.

More and more AI industry leaders, including Sam Altman, say that the first principle of AI, and its most important element, is the rate at which energy is converted into intelligence.

Because the Transformer is not an inherently energy-efficient algorithm, energy will be a major problem plaguing the development of AI.

Musk recently said in a public interview:

AI is the biggest technological revolution in history, and I have never seen any technological advancement faster than AI is now.

The chip shortage may be behind us, but artificial intelligence and electric vehicles are expanding at such a rapid pace that the world will face a supply crunch for electricity and transformers next year.

AI’s demand for computing power is now increasing by 10 times almost every six months. Obviously, this situation cannot continue forever at such a high speed, otherwise it would exceed the mass of the universe.

The bottleneck of AI computing is foreseeable… A year ago, the shortage was chips.

Then the next shortage will be electricity. When the chip shortage eases, there may not be enough electricity to run those chips next year.

Then, it’s easy to predict that the next shortage will be step-down transformers.

If the grid delivers power at 100-300 kilovolts and it has to be stepped down all the way to 6 volts, that is a reduction of tens of thousands to one, spread across multiple transformer stages.

The not-so-funny joke here is that there will be a shortage of transformers (Transformers) to run Transformers in the future.
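
To see why the 10x-every-six-months pace “cannot continue forever,” a quick extrapolation helps (a sketch; the quoted growth rate is Musk’s, the rest is arithmetic):

```python
# Extrapolating "compute demand grows 10x every six months".
for years in (1, 2, 5, 10):
    multiplier = 10 ** (2 * years)  # two 10x steps per year
    print(f"after {years:2d} years: {multiplier:.0e}x today's demand")
```

After ten years, demand would be 10^20 times today’s. Long before that, the curve has to bend, and in Musk’s telling the thing that bends it is the supply of electricity and transformers rather than chips.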

Hinton: There is a 1/10 chance that AI will kill humans

Once AGI is truly realized, the scenario in The Terminator draws closer.

“Will digital intelligence replace biological intelligence?”

“Almost certainly, it will!”

“We humans should do our best to survive.”

Hinton, the godfather of AI, recently delivered a speech at Oxford University and made a shocking claim: within 5 to 20 years, there is a 1-in-10 chance that AI will wipe out humanity.

Turing Award winner Bengio takes a similar view, except that he puts the probability of our being wiped out at 1 in 5.

Hinton realized that increasingly powerful AI models could act like a “hive mind” and share what they learned with each other, giving them an advantage over humans. They may be a better form of intelligence.

For example, GPT-4 can learn language, reason, handle sarcasm, and display a remarkable degree of empathy.

“I want to make a very strong statement, these models do understand,” he said in his speech.

These models can also “evolve” in dangerous ways, developing an intent to take control. If I were advising governments, I would say there is a 10% chance these AIs will wipe out humanity within the next 20 years. I think that is a reasonable number.

Not only that: when asked “How likely is it that artificial intelligence will kill everyone?”, some AI industry leaders put the probability at 25-49%.

Different people’s and organizations’ predicted probabilities of AI-caused extinction.

Will AI really kill humans? What do you think?
