    ‘Godfather Of AI’ Launches Nonprofit Focused On Safer Systems

By AI Logic News | June 4, 2025

Yoshua Bengio testifies on the importance of AI regulation at a U.S. Senate Judiciary Committee hearing in July 2023.

    Getty Images

Computer scientist Yoshua Bengio, often referred to as the “godfather” of AI, has launched a nonprofit aimed at creating AI systems that prioritize safety over commercial interests.

    The organization, called LawZero, “was founded in response to evidence that today’s frontier AI models are developing dangerous capabilities and behaviors, including deception, self-preservation and goal misalignment,” reads a statement posted to its website on Tuesday. “LawZero’s work will help to unlock the immense potential of AI in ways that reduce the likelihood of a range of known dangers associated with today’s systems, including algorithmic bias, intentional misuse and loss of human control.”

    LawZero is structured as a nonprofit “to ensure it is insulated from market and government pressures, which risk compromising AI safety,” the statement says.

    Bengio is a worldwide leader in AI and a co-recipient of the 2018 A.M. Turing Award, the Association for Computing Machinery’s prestigious annual prize that’s sometimes called the Nobel Prize of Computing. He won the award alongside two other deep-learning pioneers — Geoffrey Hinton, another “godfather of AI” who worked at Google, and Yann LeCun — for conceptual and engineering breakthroughs, made over decades, that have positioned deep neural networks as a critical component of computing.

    What Are The Main AI Safety Concerns?

While artificial intelligence has sparked considerable excitement across industries — and Bengio recognizes its potential as a driver of significant innovation — it has also led to mounting concerns about possible pitfalls. Generative AI tools can produce text, images and video that spread almost instantly over social media and can be difficult to distinguish from the real thing. Bengio has called for slowing the development of AI systems to better understand and regulate them.

    “There is no guarantee that someone in the foreseeable future won’t develop dangerous autonomous AI systems with behaviors that deviate from human goals and values,” the University of Montreal professor wrote in a blog post announcing why he’d signed a 2023 open letter calling for a slowdown in the development of some AI tools. He has signed other such statements, and appeared in front of the U.S. Senate Judiciary Subcommittee on Privacy, Technology and the Law to outline the risks of AI misuse.

Bengio also serves as scientific director at Mila (Montreal Institute for Learning Algorithms), an artificial-intelligence research institute. He will now add LawZero president and scientific director to his résumé.

    What Could A Safer AI System Look Like?

LawZero says it is assembling a team of world-class AI researchers, though it did not immediately respond to a request for comment on who is included in that group. Together, the scientists are working on a system called Scientist AI, which LawZero calls a safer, more secure alternative to many of the commercial AI systems being developed and released today. Scientist AI is non-agentic, meaning it does not have agency or act autonomously; instead, it behaves in response to human input and goals.

    “Such AI systems could be used to provide oversight for agentic AI systems, accelerate scientific discovery and advance the understanding of AI risks and how to avoid them,” LawZero says. “LawZero believes that AI should be cultivated as a global public good—developed and used safely towards human flourishing.”
