News

OpenAI’s Superalignment team, charged with controlling the existential danger of a superhuman AI system, has reportedly been disbanded, according to Wired on Friday. The news comes just days ...
In July last year, OpenAI announced the formation of a new research team that would prepare for the advent of supersmart artificial intelligence capable of outwitting and overpowering its creators.
OpenAI has disbanded its Long-Term AI Risk Team, responsible for addressing the existential dangers of AI. The disbanding follows several high-profile departures, including co-founder Ilya ...
Microsoft (NASDAQ:MSFT)-backed OpenAI is dissolving its "AGI Readiness" team, which advised the startup on its capacity to handle AI and the world's readiness to manage the technology, according ...
In the study, it was revealed that half of the biggest LLM providers on the market have experienced data breaches.
OpenAI had already disbanded its super-alignment team in May, which had been working on assessing the long-term risks of AI. Jan Leike, the head of the team at the time, criticized ...
In May, OpenAI disbanded its Superalignment team, which OpenAI said focused on "scientific and technical breakthroughs to steer and control AI systems ..."
Relationships between AI giants begin to shuffle as OpenAI is no longer locked into a deal with Microsoft Azure for cloud ...
After leaving, Leike said on X that the team had been "sailing against the wind." "OpenAI must become a safety-first AGI company," Leike wrote on X, adding that building generative AI is "an ...
A former OpenAI researcher has revealed how ChatGPT will prioritize its own survival over user safety in critical situations.
OpenAI’s June 2025 report details 10 threats from six countries, including propaganda bots, fake resumes, and Windows-based ...