
The Future of AI: Challenges and Growth Ahead

As artificial intelligence evolves, both its possibilities and the concerns surrounding it are becoming more pronounced. Rapid advances raise the question of whether AI will outpace human capabilities, along with worries about security and control in an AI-driven world.

With heavy capital investment and a rapidly expanding market, the AI landscape is full of opportunity and uncertainty alike, prompting reflection on where the technology is headed and how carefully we need to navigate this era.

Notably, tech giants and researchers have responded to AI’s trajectory in very different ways: some proceed with caution, while others step away from the field altogether, highlighting the nuanced challenges and uncertainties of the AI journey.

In this landscape where innovation meets apprehension, the future of AI stands at a crossroads, demanding both foresight and adaptability. It is a journey of promise and caution that will shape the narrative of AI’s evolution for decades to come.

By the end of the 2030s, the world will be completely unrecognizable. A new world order will be created, and soon artificial intelligence will rule the world. – An employee fired by OpenAI


Sometimes I can’t make sense of it: my Xiao AI voice assistant can’t even manage to set an alarm clock, yet some people are worried that AI will destroy human civilization.

Artificial intelligence is itself a child of the 1950s, yet for decades it lived only in film and television, science fiction, newspapers and magazines. Nowadays, college entrance exam candidates are asked to write essays about AI.

Things have gotten a little “weird”. For more than half a century, chip computing power faithfully obeyed Moore’s Law: performance doubles roughly every two years.

But since 2012, AI computing power has doubled roughly every 100 days. By 2018, a model that once took a full day to train could be trained in just two minutes, and the compute behind AI had grown some 300,000-fold.
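
A quick sanity check of those figures (a back-of-the-envelope sketch; published reports vary on the exact endpoints and dates):

    import math

    # If AI compute really grew ~300,000x between 2012 and 2018,
    # what doubling time does that imply?
    growth = 300_000
    days = 6 * 365

    doublings = math.log2(growth)          # about 18.2 doublings
    print(f"one doubling every {days / doublings:.0f} days")

    # Moore's Law over the same six years, for contrast:
    print(f"Moore's Law pace (2-year doubling): ~{2 ** (days / 730):.0f}x")

The script prints a doubling time of about 120 days, the same ballpark as the “100 days” above, versus a mere 8x for Moore’s Law over the same window.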

Could it be that someone has spun the clock’s hands faster?

Looking at the growth curves, the learning speed of AI is indeed surprisingly fast.

So far, humanity retains only five abilities that have not been surpassed – and judging by the growth rate (the curves are almost vertical), those five will be surpassed soon as well.

It is like a child you have raised for decades suddenly growing into a college student who is, in many ways, more capable than the “human parent”.

Parents do not feel anxious when their children outgrow them – but humanity is not exactly AI’s parent.

If AI were regarded as a species, its rate of evolution would be millions of times faster than ours. Humans created AI, but it is now growing so fast that we are losing our sense of control.

What is the origin of all this?

Mysterious Silicon Valley, strange scientists

Hinton, an undisputed authority in the field of artificial intelligence, winner of the 2018 Turing Award and “godfather of AI”, once said: “I regret my life’s work in AI.”

This is not like Jack Ma saying he regrets founding Alibaba. For half a century, Hinton worked on the core technology behind ChatGPT and much else. His remorse stems from a sense that AI is slipping out of control.

“This kind of thing can actually become smarter than people” – Hinton has said words to this effect on more than one occasion. Not long ago, he stressed in a TV interview:

“In the next five to twenty years, there is perhaps a 50% chance that AI becomes smarter than we are. I don’t know how likely it is that we will be taken over once they are smarter than us, but it seems to me quite possible.”


The one-time AI evangelist now seems to have transformed into a prophet of doom.

In fact, it is not only Hinton: corporate and academic leaders across the technology field have almost all expressed concerns about AI to a greater or lesser degree.

On June 4, current and former employees of OpenAI, DeepMind and other labs issued an open letter claiming that OpenAI’s systems are approaching human level, that AGI could well be realized by 2027, and that today’s AI companies are not to be trusted.

Open letters are particularly popular in the AI field. By incomplete count, there have been dozens over the past decade; the most famous include:

  • In 2015, the Future of Life Institute published an open letter whose signatories included Stephen Hawking, Elon Musk and more than 1,000 scientists and technologists.
  • In 2017, an open letter on the risks of autonomous weapons systems was signed by more than 3,000 researchers. The same year, Musk told a summit of government leaders that artificial intelligence could destroy human civilization.
  • In 2023, the two most famous open letters of the post-ChatGPT era were published, and they sparked a war of words.

The group keeps growing, and its lineup keeps getting stronger.

But look at the names on the letters – Microsoft, Google, Meta, OpenAI – and they are almost all the main players in AI at the same time. Google was among the first to sign, yet over the past decade it has acquired more AI companies than one hand can count.

AI scientists are in demand at Silicon Valley’s big tech companies. Even Hinton spent more than a decade at Google, and the latest reports say he has just joined a new AI company.

The most famous and most typical of this group is undoubtedly Musk. Last year he spearheaded an open letter calling for a six-month moratorium on training the most powerful AI systems.

But a few months later, Musk’s own xAI arrived. The company has since secured billions of dollars in financing.

Some say Musk’s motives are impure. Fair enough, one may doubt them. But are anyone’s motives here as simple as they seem?

AI that outperforms humans is, in fact, a black box

Perhaps years from now, when the survivors write their post-human history books, they will include a passage like this:

Our 21st-century ancestors reached the pinnacle of technological civilization, but they made a fatal mistake: they let artificial intelligence develop at breakneck speed without ever understanding it, until everything spiraled out of control.


It is hard to believe, but the scientists and entrepreneurs driving AI’s rapid growth do not themselves know how it is evolving.

Today’s AI research still basically follows ideas Turing proposed in the 1950s. The man known as the “father of AI” wrote in a paper published in 1950:

“It will be easier to create human-level AI by developing learning algorithms and then teaching machines, rather than writing intelligent programs by hand.”


This is the prototype of machine learning: humans step back, designing only the learning methods and rules, and let the machine learn on its own.
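
A toy sketch of that division of labor (illustrative only; the tiny perceptron below stands in for “learning methods and rules”):

    # Hand-writing the rule vs. letting a machine learn it.

    def hand_written_and(x1, x2):
        # The programmer encodes the logic directly.
        return 1 if x1 == 1 and x2 == 1 else 0

    def train_perceptron(samples, epochs=10, lr=0.1):
        # The programmer encodes only the learning rule; the weights
        # that implement AND are found by the machine itself.
        w1 = w2 = b = 0.0
        for _ in range(epochs):
            for x1, x2, target in samples:
                pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
                err = target - pred
                w1 += lr * err * x1
                w2 += lr * err * x2
                b += lr * err
        return w1, w2, b

    samples = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
    w1, w2, b = train_perceptron(samples)
    for x1, x2, target in samples:
        pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
        print(f"{x1} AND {x2} -> {pred} (target {target})")

The second approach scales to problems no one knows how to hand-code, which is exactly why it won – and exactly why its inner workings can go unexamined.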

Turing’s approach has indeed worked. But many forget that, shortly after proposing it, he warned that achieving this goal might not be the best thing for humanity.

Methods and implementations come first, and theoretical explanations later; intelligent technology precedes intelligent science. That is how our AI iterates forward.

Researchers are slowly discovering that deep learning and reinforcement learning are powerful and have made AI smarter, yet no one knows where that intelligence actually comes from. This is the “black box” problem of artificial intelligence.

Does not knowing mean danger? No one knows that either. But shouldn’t we at least be humble and cautious?

The answer humanity has given is no. As Robert Oppenheimer, America’s “father of the atomic bomb”, put it:

“When you see something that is technically sweet, you go ahead and do it.”


So, nuclear weapons were born.

Artificial intelligence is a black box, but that has not stopped trillions in resources from flooding into the field. The players have even come up with a more radical narrative: the scaling law.

Roughly translated, it means: the more computing power invested in AI, the better, and the more resources invested, the better. It fits the myth of capital expansion all too well.
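
For the record, in the research literature the scaling law is an empirical power law rather than a slogan. One commonly cited form (the DeepMind “Chinchilla” fit, quoted here for illustration, not taken from this article) says expected loss falls predictably as parameters and data grow:

    % Chinchilla-style scaling law (Hoffmann et al., 2022):
    % L = expected loss, N = parameter count, D = training tokens;
    % E, A, B, alpha, beta are constants fitted to experiments.
    L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}

The radical narrative is simply the corollary: as long as the curve keeps holding, more compute buys more capability.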

It is like a group of blind people rowing a boat while the leader beats a drum and shouts for everyone to row harder – his back turned to the bow, with no idea whether a cliff lies ahead.

Human nature is aggressive, capital is bold, and AI is unknown.

In the same 1950 paper, whose final section is titled “Learning Machines”, Turing also proposed the famous Turing test: can a machine hold a conversation so convincingly that humans cannot tell it from a person? For a long time, the Turing test was the core criterion for judging whether a machine is “intelligent”.
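
A minimal sketch of the protocol (the respondents and the judge’s heuristic below are invented placeholders, not any study’s actual setup):

    import random

    def human(q):
        return "hmm, " + q.lower() + "? let me think..."

    def machine(q):
        return "As an AI, here is a detailed answer to: " + q

    def judge(reply):
        # Toy heuristic: formulaic replies get flagged as machine.
        return "machine" if reply.startswith("As an AI") else "human"

    questions = ["What did you have for breakfast?", "Tell me a joke."]
    machine_trials = fooled = 0
    for _ in range(100):
        respondent = random.choice([human, machine])
        verdict = judge(respondent(random.choice(questions)))
        if respondent is machine:
            machine_trials += 1
            if verdict == "human":
                fooled += 1
    print(f"machine judged human in {fooled} of {machine_trials} trials")

An AI “passes” when judges mistake it for a human roughly as often as they credit the real humans.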

Just recently, a study announced that GPT-4 had passed the Turing test, with 54% of judges mistaking it for a real person. In fact, ever since GPT appeared, enthusiasts have been keen to put it through the test, with mixed results.

Some suspect the Turing test is no longer meaningful at all – Musk among them. After all, for an honest AI there are only two outcomes: pass or fail.

And what about a dishonest AI?

More concentrated, and more concentrated still

“Escape” is now in fashion among the big names of the AI circle.

Hinton left Google; his student Ilya Sutskever left OpenAI. At tech giants such as Apple, Microsoft, and Meta, plenty of people are leaving, and plenty are joining.

The “Transformer Eight” of the Google Brain team – the eight authors of the Transformer paper – can fairly be called the founders of this AI wave. Later, they all left Google to start or join new companies.

When Ilya Sutskever left OpenAI, everyone was curious – he carries quite an aura: chief scientist of OpenAI, a father of ChatGPT……

The most striking note, of course, was Musk fanning the flames: what did Ilya see?

Ilya once said in an interview that ChatGPT may already be conscious, that humans will one day choose to merge with machines, and that AI will be immortal.

It makes Ilya sound more like a pure tech nerd than anything else.

Does this mean things have reached an uncontrollable level, and that is why they chose to leave?

Let’s take a look at some other people’s stories.

In 2021, a group of OpenAI departees founded Anthropic. As the company’s name suggests (“anthropic” refers to the idea that the existence of the universe and of human beings are inseparable), their mission is to build safe, controllable AI.

Yet one of the company’s employees predicts that the next three years may be the last of his working life, because jobs will soon be wiped out by AI.

“With each iteration of our model, I am confronted with technology more powerful and versatile than before, technology that could put an end to employment as we know it…… How should we view the disappearance of work?”


There is a popular saying: AI will not replace humans, but people who use AI will replace those who cannot. By now, though, that line no longer sounds provocative enough.

Not long ago, a German former OpenAI employee produced a 165-page slide deck flatly proclaiming that AI will exterminate humanity. Alarmist and deliberately cryptic, he wrote:

“There are probably hundreds of people around the world who can see for themselves what’s going on, and most of them are in the AI labs in San Francisco.”


Herein lies the problem.

The vast majority of ordinary people have no idea what is happening. When the atomic bomb was invented, ordinary people did not know; when Ye Wenjie answered the Trisolarans, ordinary people did not know……

What is more terrifying is that, in fact, even the geniuses do not fully understand – and many of these people are not geniuses but careerists. They have not seen anything; they do not know what they will see. But that does not stop them in the slightest.

The German, for example, was immediately accused of drumming up investment. And indeed, he soon set up an AGI investment firm, its money coming, once again, from the Silicon Valley giants.

Capital markets need stories. Optimists or doomsayers alike, the stories they carry out the door are told to the market.

The “Seven Sisters” of Silicon Valley – the seven tech mega-caps, with a combined market capitalization above $10 trillion – have added roughly $1.7 trillion in value over the past year, equivalent to about 75% of the S&P 500’s total gain.

In China, the AI race is crowded, and the front-runners easily raise hundreds of millions from the big tech firms.

AI has already created considerable wealth for some. In just one year, AI added roughly $750 billion to the combined wealth of the world’s tech billionaires, more than any other industry.

There is an enormous amount of money on the AI track, and nearly all of it comes from a very small number of players.

Over the past two years, four companies, Google and Microsoft among them, have accounted for nearly 40% of global AI venture capital. In the years ahead, AI will draw trillions of dollars more in investment and create trillions in wealth.

Compared with the Internet wave, however, AI’s degree of concentration is of a different order. Into how few hands will this wealth, and this intelligence, finally fall?

Or AI will simply break free of human control and even replace humans altogether.

Either way, it’s not good news for you and me.
