Former OpenAI Leader Raises Concerns Over Safety Neglect
Jan Leike, a former OpenAI leader who departed the firm alongside a co-founder, says safety is being sidelined in favor of product development at the influential AI company. He emphasized the need to prioritize safety in AI research to mitigate risks and analyze societal impacts, calling the shift crucial for the responsible advancement of intelligent machines.
By The Associated Press May 17, 2024 at 11:30am
A former OpenAI leader who resigned from the company this week said Friday that safety has “taken a backseat to shiny products” at the influential artificial intelligence company.
Jan Leike, who ran OpenAI’s “Super Alignment” team alongside a company co-founder who also resigned this week, wrote in a series of posts on the social media platform X that he joined the San Francisco-based company because he thought it would be the best place to do AI research.
“However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point,” wrote Leike, whose last day was Thursday.
An AI researcher by training, Leike said he believes there should be more focus on preparing for the next generation of AI models, including on things like safety and analyzing the societal impacts of such technologies.
He said building “smarter-than-human machines is an inherently dangerous endeavor” and that the company “is shouldering an enormous responsibility on behalf of all of humanity.”
“OpenAI must become a safety-first AGI company,” Leike wrote using the abbreviated version of artificial general intelligence, a futuristic vision of machines that are as broadly smart as humans or at least can do many things as well as people can.
Leike’s resignation came after OpenAI co-founder and chief scientist Ilya Sutskever said Tuesday that he was leaving the company after nearly a decade.
Sutskever was one of four board members last fall who voted to push out CEO Sam Altman — only to quickly reinstate him.
It was Sutskever who told Altman last November that he was being fired, but he later said he regretted doing so.
Sutskever said he is working on a new project that’s meaningful to him without offering additional details. He will be replaced by Jakub Pachocki as chief scientist.
Altman called Pachocki “also easily one of the greatest minds of our generation” and said he is “very confident he will lead us to make rapid and safe progress towards our mission of ensuring that AGI benefits everyone.”
On Monday, OpenAI showed off the latest update to its artificial intelligence model, which can mimic human cadences in its verbal responses and can even try to detect people’s moods.
Note: The Associated Press and OpenAI have a licensing and technology agreement that allows OpenAI access to part of AP’s text archives.
The Western Journal has reviewed this Associated Press story and may have altered it prior to publication to ensure that it meets our editorial standards.