Yoshua Bengio testifies on the importance of AI regulation at a U.S. Senate Judiciary Committee hearing.
Computer scientist Yoshua Bengio, often referred to as a “godfather” of AI, has launched a nonprofit aimed at creating AI systems that prioritize safety over commercial interests.
The organization, called LawZero, “was founded in response to evidence that today’s frontier AI models are developing dangerous capabilities and behaviors, including deception, self-preservation and goal misalignment,” reads a statement posted to its website on Tuesday. “LawZero’s work will help to unlock the immense potential of AI in ways that reduce the likelihood of a range of known dangers associated with today’s systems, including algorithmic bias, intentional misuse and loss of human control.”
LawZero is structured as a nonprofit “to ensure it is insulated from market and government pressures, which risk compromising AI safety,” the statement says.
Bengio is a worldwide leader in AI and a co-recipient of the 2018 A.M. Turing Award, the Association for Computing Machinery’s prestigious annual prize that’s sometimes called the Nobel Prize of Computing. He won the award alongside two other deep-learning pioneers — Geoffrey Hinton, another “godfather of AI” who worked at Google, and Yann LeCun — for conceptual and engineering breakthroughs, made over decades, that have positioned deep neural networks as a critical component of computing.
What Are The Main AI Safety Concerns?
While artificial intelligence has sparked considerable excitement across industries — and Bengio recognizes its potential as a driver of significant innovation — it’s also led to mounting concerns about possible pitfalls. Generative AI tools are capable of producing text, images and video that spread almost instantly over social media and can be difficult to distinguish from the real thing. Bengio has called for slowing the development of AI systems to better understand and regulate them.
“There is no guarantee that someone in the foreseeable future won’t develop dangerous autonomous AI systems with behaviors that deviate from human goals and values,” the University of Montreal professor wrote in a blog post announcing why he’d signed a 2023 open letter calling for a slowdown in the development of some AI tools. He has signed other such statements, and appeared in front of the U.S. Senate Judiciary Subcommittee on Privacy, Technology and the Law to outline the risks of AI misuse.
Bengio also serves as scientific director at Mila (Montreal Institute for Learning Algorithms), an artificial-intelligence research institute. Now, he’ll add LawZero president and scientific director to his resume.
What Could A Safer AI System Look Like?
LawZero says it’s assembling a team of world-class AI researchers, though it did not immediately respond to a request for comment on who is included in that group. Together, the scientists are working on a system called Scientist AI, which LawZero describes as a safer, more secure alternative to many of the commercial AI systems being developed and released today. Scientist AI is non-agentic, meaning it lacks agency and does not act autonomously; instead, it responds to human input and goals.
“Such AI systems could be used to provide oversight for agentic AI systems, accelerate scientific discovery and advance the understanding of AI risks and how to avoid them,” LawZero says. “LawZero believes that AI should be cultivated as a global public good—developed and used safely towards human flourishing.”