The AI craze is sweeping Silicon Valley, but lawmakers don't understand the technology

  In recent weeks, two members of the U.S. Congress have sounded the alarm about the dangers of artificial intelligence (AI) technology.
  Rep. Ted Lieu, D-Calif., wrote in a January op-ed for The New York Times that he was “horrified” by the chatbot ChatGPT’s ability to mimic human writers. Rep. Jake Auchincloss, D-Mass., delivered a one-minute speech (written by a chatbot) calling for the regulation of artificial intelligence.
  But even as these lawmakers put AI in the spotlight, few have acted. No bill has yet been proposed to protect individuals or to curb the development of potentially dangerous AI, and legislation aimed at limiting specific uses of AI, such as facial recognition, has failed in Congress in recent years.
  Rep. Jay Obernolte, R-Calif., the only member of Congress with a master’s degree in AI, said the core problem is that most lawmakers do not even understand what AI is. “There needs to be agreement on what the dangers of AI are before regulation can be imposed, and that requires a solid understanding of the technology,” he said. “You’d be surprised how much time I spend explaining to my colleagues that the main dangers of AI won’t come from evil robots with red lasers in their eyes.”
  Inaction on AI is nothing new: technological development has once again outpaced U.S. government rule-making and oversight. Lawmakers have long struggled to understand new technologies; one senator famously described the Internet as a “series of tubes.” At the same time, companies have pushed to slow the pace of regulation, citing technological competition between China and the United States.
  This means that as the AI boom sweeps Silicon Valley and technology giants such as Microsoft, Google, Amazon, and Meta race to develop the technology, the US government has taken a hands-off stance. Advances in AI that have given rise to chatbots that can write poetry and to self-driving cars have sparked debate over the limits of these applications, with some fearing that such technologies could eventually replace humans in jobs or even become self-aware.
  Carly Kind, director of the Ada Lovelace Institute in London, says a lack of regulation encourages companies to prioritize financial and commercial interests at the expense of safety. “The failure of policymakers to create regulatory mechanisms is creating the conditions for irresponsible AI competition,” she said.
  In this regulatory vacuum, the European Union has taken the leading role. In 2021, EU policymakers proposed a law focused on AI technologies that could cause serious harm, such as those used in facial recognition and in critical public infrastructure such as water supply. The law, expected to pass as soon as this year, would require AI developers to conduct risk assessments of how their technology might affect health, safety, and individual rights, including freedom of expression.
  Violators could be fined as much as 6 percent of their revenue, and for some of the world’s largest technology companies, fines could total billions of dollars. EU policymakers noted that the law was enacted to ensure the benefits of AI while minimizing its social risks.
  In 2021, as warnings about the dangers of AI intensified, the Vatican’s Pontifical Academy for Life, IBM, and Microsoft pledged to develop “ethical AI.” Under the commitment, participating organizations would provide transparent explanations of how their AI technology works, respect public privacy, and minimize bias. The pledge also called for regulation of facial recognition software, which uses vast amounts of photo data to identify people. In Washington, some lawmakers have tried to create rules for facial recognition technology and to require corporate audits to prevent discriminatory algorithms. But those bills went nowhere.
  More recently, some U.S. government officials have tried to bridge the knowledge gap on AI. In January, about 150 lawmakers and their staff attended a session hosted by the Congressional AI Caucus, featuring Jack Clark, co-founder of the AI firm Anthropic.
  Under existing laws and regulations, federal agencies have taken some action on AI. The U.S. Federal Trade Commission (FTC) has issued enforcement orders against companies whose use of AI violated consumer protection rules. The Consumer Financial Protection Bureau has warned that opaque AI systems used by credit institutions could violate anti-discrimination laws. The FTC has also proposed commercial surveillance rules to curb data collection by AI technologies, and the U.S. Food and Drug Administration has released a list of approved AI-powered medical devices.
  In October last year, the White House released a blueprint for AI rules emphasizing individual privacy, the safety of automated systems, protection against algorithmic discrimination, and the option of human alternatives to automated systems. But none of these efforts has yet become law.
  According to Rep. Ted Lieu and other members of Congress, OpenAI CEO Sam Altman visited several lawmakers in January and personally demonstrated GPT-4, a new AI model that can complete tasks such as writing essays and solving complex programming problems. Altman expressed support for regulation, showing that GPT-4 would have stronger safety controls than past AI models, the lawmakers said.
  Lieu, who met with Altman, said the government cannot rely on individual businesses to protect users. He plans to introduce a bill this year proposing a committee dedicated to studying AI and a new agency to regulate it. “OpenAI has decided to build controls into its technology, but how can we ensure that other companies will do the same?” he asked.