News

In July last year, OpenAI announced the formation of a new research team that would prepare for the advent of supersmart artificial intelligence capable of outwitting and overpowering its creators.
OpenAI’s Superalignment team, announced last year, has focused on “scientific and technical breakthroughs to steer and control AI systems much smarter than us.” At the time, OpenAI said it ...
OpenAI’s Superalignment team, charged with controlling the existential danger of a superhuman AI system, has been disbanded, Wired reported on Friday. The news comes just days ...
OpenAI has disbanded its Long-Term AI Risk Team, responsible for addressing the existential dangers of AI. The disbanding follows several high-profile departures, including co-founder Ilya Sutskever.
In May, OpenAI disbanded its Superalignment team, which the company said focused on "scientific and technical breakthroughs to steer and control AI systems ..."
Microsoft (NASDAQ:MSFT)-backed OpenAI is dissolving its "AGI Readiness" team, which advised the startup on its capacity to handle AI and the world's readiness to manage the technology, according ...
After leaving, Leike said on X that the team had been "sailing against the wind." "OpenAI must become a safety-first AGI company," Leike wrote, adding that building generative AI is "an ...
OpenAI declined to comment on the departures of Sutskever or other members of the superalignment team, or the future of its work on long-term AI risks. Research on the risks associated with more ...
OpenAI has disbanded its team focused on the long-term risks of artificial intelligence just one year after the company announced the group, a person familiar with the situation confirmed to CNBC ...