OpenAI and Google employees warn that AI development is ‘reckless and secretive’

Former and current employees of OpenAI and Google DeepMind warn of the recklessness with which AI is currently being developed at breakneck speed, and the risks this creates for users and employees alike.

In an open letter, AI insiders warn that companies like OpenAI are trying to stifle scrutiny and criticism because it is unwelcome to them. "We are current and former employees of leading AI companies, and we believe in the potential of AI to bring unprecedented benefits to humanity," the letter begins, as if to clear up any misunderstanding.

But the signatories believe that not everything should give way to these benefits to humanity. The letter carries 13 signatures, from current and former employees of OpenAI as well as of Google DeepMind and Anthropic. Professor Geoffrey Hinton, also known as the "Godfather of Deep Learning," endorses the message as well.

“The risks are known but nothing happens.”

The dangers of reckless development of artificial intelligence are well known, the letter's authors warn. But while the United States and the European Union have moved quickly to pass laws curbing that development, such measures face opposition from the major AI companies, the open letter says. "Until there is effective government oversight of these companies, current and former employees will be among the few people who can hold them accountable to the public," the signatories write.

That turns employees into whistleblowers, a dangerous and often thankless role. Anyone who raises concerns publicly risks dismissal and may struggle to find work in the industry afterwards. The concerns in question range from the safety of AI features to the heavy workload inside the companies.

“AI companies have substantial non-public information about the capabilities and limitations of their systems, the adequacy of their safeguards, and the risk levels of different types of harm. However, they currently have only weak obligations to share some of this information with governments, and none with civil society.”

Four principles

The authors of the letter put forward four principles. First, there should be no penalty for openly expressing criticism or concerns about AI companies. Second, there should be an anonymous channel for raising concerns with a company's board of directors. Third, a culture in which frank criticism is welcomed. Fourth, no retaliation against current or former employees who have voiced their concerns about the risks.

The criticism appears to be justified

The speed at which companies develop and release AI features has drawn criticism before. Google recently made embarrassing headlines with all kinds of incorrect AI answers. OpenAI, meanwhile, saw the departure of co-founder and chief scientist Ilya Sutskever. His reasons are unknown, but Sutskever was known as a champion of safety within the company.

Key OpenAI researcher Jan Leike also left last month. "In recent years, safety culture and processes have taken a backseat to shiny products," Leike said upon his departure. In other words: as long as new features keep getting pumped out, AI companies give little thought to the consequences.
