AI labs like Meta, DeepSeek, and xAI earned the worst grades possible on an existential safety index

By AI Logic News | December 5, 2025

    A recent report card from an AI safety watchdog isn’t one that tech companies will want to stick on the fridge.

The Future of Life Institute’s latest AI safety index found that major AI labs fell short on most measures of AI responsibility, with few letter grades rising above a C. The organization graded eight companies across categories including safety frameworks, risk assessment, and current harms.

    Perhaps most glaring was the “existential safety” line, where companies scored Ds and Fs across the board. While many of these companies are explicitly chasing superintelligence, they lack a plan for safely managing it, according to Max Tegmark, MIT professor and president of the Future of Life Institute.

    “Reviewers found this kind of jarring,” Tegmark told us.

    The reviewers in question were a panel of AI academics and governance experts who examined publicly available material as well as survey responses submitted by five of the eight companies.

Anthropic, OpenAI, and Google DeepMind took the top three spots with overall grades of C+ or C. Then came, in order, Elon Musk’s xAI, Z.ai, Meta, DeepSeek, and Alibaba, all of which received Ds or a D-.

Tegmark blames a lack of regulation, which has allowed the cutthroat competition of the AI race to trump safety precautions. California recently passed the first law requiring frontier AI companies to disclose safety information around catastrophic risks, and New York is close to following suit. Hopes for federal legislation are dim, however.

    “Companies have an incentive, even if they have the best intentions, to always rush out new products before the competitor does, as opposed to necessarily putting in a lot of time to make it safe,” Tegmark said.

In lieu of government-mandated standards, Tegmark said the industry has begun to take the group’s regularly released safety indexes more seriously; four of the five American companies now respond to its survey (Meta is the only holdout). And companies have made some improvements over time, Tegmark said, citing Google’s transparency around its whistleblower policy as an example.

But reported real-life harms, including teen suicides that chatbots allegedly encouraged, inappropriate interactions with minors, and major cyberattacks, have also raised the stakes of the discussion, he said.

    “[They] have really made a lot of people realize that this isn’t the future we’re talking about—it’s now,” Tegmark said.

    The Future of Life Institute recently enlisted public figures as diverse as Prince Harry and Meghan Markle, former Trump aide Steve Bannon, Apple co-founder Steve Wozniak, and rapper Will.i.am to sign a statement opposing work that could lead to superintelligence.

    Tegmark said he would like to see something like “an FDA for AI where companies first have to convince experts that their models are safe before they can sell them.

    “The AI industry is quite unique in that it’s the only industry in the US making powerful technology that’s less regulated than sandwiches—basically not regulated at all,” Tegmark said. “If someone says, ‘I want to open a new sandwich shop near Times Square,’ before you can sell the first sandwich, you need a health inspector to check your kitchen and make sure it’s not full of rats…If you instead say, ‘Oh no, I’m not going to sell any sandwiches. I’m just going to release superintelligence.’ OK! No need for any inspectors, no need to get any approvals for anything.”

    “So the solution to this is very obvious,” Tegmark added. “You just stop this corporate welfare of giving AI companies exemptions that no other companies get.”

    This report was originally published by Tech Brew.

