Lilian Weng, another of OpenAI’s leading safety researchers and its current VP of research and safety, announced on Friday that she is departing the startup. Weng has served as VP of research and safety since August and, before that, was the head of OpenAI’s safety systems team.
In a post on X, Weng wrote, “After working at OpenAI for almost 7 years, I decide to leave. I learned so much and now I’m ready for a reset and something new.” Weng said her last day will be November 15th, but did not specify where she will go next.
“I made the extremely difficult decision to leave OpenAI,” said Weng in the post. “Looking at what we have achieved, I’m so proud of everyone on the Safety Systems team and I have extremely high confidence that the team will continue thriving.”
Weng’s departure marks the latest in a long string of exits by AI safety researchers, policy researchers, and other executives over the last year, several of whom have accused OpenAI of prioritizing commercial products over AI safety. Weng joins Ilya Sutskever and Jan Leike – the leaders of OpenAI’s now-dissolved Superalignment team, which tried to develop methods to steer superintelligent AI systems – who also left the startup this year to work on AI safety elsewhere.
Weng first joined OpenAI in 2018, according to her LinkedIn profile, working on the startup’s robotics team, which built a robot hand that could solve a Rubik’s Cube – a task that took two years to achieve, she said in her post.
As OpenAI started focusing more on the GPT paradigm, so did Weng. In 2021, she transitioned to help build the startup’s applied AI research team. Following the launch of GPT-4, Weng was tasked in 2023 with creating a dedicated team to build safety systems for the startup. Today, OpenAI’s safety systems unit has more than 80 scientists, researchers, and policy experts, according to Weng’s post.
That’s a lot of AI safety folks, but many have raised concerns about OpenAI’s focus on safety as it tries to build increasingly powerful AI systems. Miles Brundage, a longtime policy researcher, left the startup in October and announced that OpenAI was dissolving the AGI readiness team he had advised. On the same day, The New York Times profiled a former OpenAI researcher, Suchir Balaji, who said he left because he believed the startup’s technology would bring more harm than benefit to society.
OpenAI told TechCrunch that executives and safety researchers are working on a transition to replace Weng.
“We deeply appreciate Lilian’s contributions to breakthrough safety research and building rigorous technical safeguards,” said an OpenAI spokesperson in an emailed statement. “We are confident the Safety Systems team will continue playing a key role in ensuring our systems are safe and reliable, serving hundreds of millions of people globally.”
Other executives who have left OpenAI in recent months include CTO Mira Murati, chief research officer Bob McGrew, and research VP Barret Zoph. Co-founder John Schulman announced his departure in August, and prominent researcher Andrej Karpathy also left the startup earlier this year. Some of these folks, including Leike and Schulman, left to join OpenAI competitor Anthropic, while others have gone on to start their own ventures.