    OpenAI scales back AI safety testing: Report

By AI Logic News | April 11, 2025

    OpenAI is reportedly dialing down its safety evaluation efforts for upcoming AI models, according to a report by the Financial Times, sparking concerns over whether the pace of innovation is outpacing responsible development.

Citing eight sources familiar with the matter, the report states that internal teams responsible for evaluating the risks and performance of new models were recently given only a few days to conduct safety checks, down from the more extensive review timelines of previous releases. The testing process, according to insiders, has become notably less rigorous, with fewer resources allocated to risk mitigation.

    The timing is crucial. OpenAI is preparing to roll out its next major AI model, referred to internally as “o3,” within the coming week. While no official release date has been confirmed, the company’s accelerated timeline appears to be driven by mounting pressure to maintain its lead in an increasingly competitive field. Rivals, including fast-rising Chinese players like DeepSeek, have been rapidly advancing their own generative AI offerings.

    The safety concerns come as OpenAI’s focus shifts from model training—where large volumes of data are used to teach AI systems—to inference, where those models are deployed to generate responses and handle real-time data. This transition carries new risks, particularly around unexpected behavior or misuse of the technology at scale.

    Despite the internal concerns, OpenAI has continued to attract investor confidence. Earlier this month, the company secured $40 billion in funding in a round led by Japan’s SoftBank Group, pushing its valuation to a staggering $300 billion.

While OpenAI has not publicly responded to the claims in the FT report, the developments point to a growing tension in the AI sector: balancing rapid progress against the need for robust ethical and safety oversight.
