    The Unregulated Path To Superintelligence That Could Make Human Labor Obsolete

By AI Logic News · November 28, 2025
[Image: Artificial General Intelligence (AGI), deep learning, or machine learning concept. 3D illustration. Credit: Getty]

At Lisbon’s Web Summit, amid the buzz of startup pitches and tech demos, Max Tegmark delivered a sobering message: humanity may be racing toward its own obsolescence. The MIT physicist and president of the Future of Life Institute has spent over a decade warning about the existential risks of artificial superintelligence. Now, he argues, the threat is closer than ever.

“There used to be a clear definition before the lobbyists got involved,” Tegmark told me, settling into our conversation at the Summit. He’s referring to superintelligence, a term first popularized by philosopher Nick Bostrom in his influential 2014 book. The concept describes AI systems with general intelligence significantly greater than human-level intelligence across virtually all domains: not just better at chess or language translation, but superior in creativity, problem-solving, scientific reasoning, and every other cognitive task.

    The idea isn’t new. In 1965, mathematician I.J. Good proposed the concept of an “intelligence explosion,” warning that an ultra-intelligent machine capable of designing even better machines would trigger a recursive cycle of self-improvement, leaving human intelligence far behind. Even Alan Turing, the renowned computer scientist and mathematician, discussed in 1951 how machines much smarter than humans might take control. “More recently, like any term that catches on, it becomes co-opted for marketing purposes,” Tegmark observes wryly, referencing Mark Zuckerberg’s attempts to associate the term with consumer products like smart glasses.

While it’s true that Meta uses the term “superintelligence,” other companies typically invoke a similar but distinct concept: AGI, short for Artificial General Intelligence. Not everyone defines AGI the same way, but it is generally considered achieved when an AI system matches human-level intelligence across most domains.

Superintelligence goes vastly beyond that. But in terms of timing, Tegmark believes, the two milestones are closer than one might think: once AGI exists, it could rapidly evolve into superintelligence through the very recursive self-improvement Good predicted decades ago.

    The Unregulated Frontier

America finds itself in a peculiar situation: artificial superintelligence could surpass human abilities in virtually all economically valuable work, raising concerns that humans could lose control of these machines, with unpredictable consequences. Yet the industry developing these systems operates with little oversight.

    “We’re in this funny situation in America where there’s more regulation on sandwiches than on AI,” Tegmark says, his frustration evident. He points to recent tragedies: teenagers who committed suicide after conversations with chatbots, incidents that would be unthinkable in regulated industries. “If someone launches a new competitor to Prozac, they have to do clinical trials and make sure this doesn’t greatly increase suicide risk among users. But these things can happen in AI because the industry is completely unregulated.”

The comparison to pharmaceuticals is deliberate. Tegmark recounts the story of thalidomide, a drug marketed to pregnant women in the late 1950s and early 1960s that caused over 10,000 babies to be born with severe birth defects. The public outcry led to the 1962 Kefauver-Harris Amendment, which gave the FDA its modern clinical trial requirements. “All industries start unregulated,” he explains. “Usually something bad happens, and then the regulation comes.”

One of the most common concerns about artificial intelligence is that it will cause widespread job losses, and perhaps the disappearance of entire professions. With superintelligence, things could get even worse. “By definition, it could do everything that we can do but better,” Tegmark explains. “It would become impossible for humans to get paid to do work because the superintelligence could do it better and cheaper. You and I would not have jobs. Nobody would have jobs.”

Some say these concerns are far-fetched and that AI will ultimately create more jobs than it erases. However, a recent Future of Life Institute petition to ban the creation of “superintelligence” has gathered more than 127,000 signatures, including those of celebrities and luminaries in the field.

An earlier survey that the FLI conducted in October on a sample of 2,000 American adults found that 64% oppose the creation of such advanced AI. Other surveys, such as a recent one by the Pew Research Center, confirm that the American public’s attitude toward the technology is shifting: initial enthusiasm has gradually given way to a more negative assessment.

    Similar appeals by the FLI were dismissed by critics in the past as distractions from AI’s current harms: the environmental footprint of massive data centers, job displacement already happening, algorithmic bias. Tegmark pushes back firmly. “That’s like saying we have houses that catch fire, so we need better fire trucks, and we shouldn’t talk about global warming because it’s a distraction from making a better fire department,” he says.

He also takes issue with the current “arms race” framing: the claim that America must rush ahead or China will overtake it. “That’s how it’s framed by the lobbyists for the tech companies, because they’ve discovered that that’s the best way in America to avoid regulation.”

    The Treaty Question

    Could an international treaty work? The nuclear weapons analogy seems obvious but flawed—AI systems aren’t as easy to detect or control as nuclear facilities. Yet Tegmark is more optimistic than most.

    His optimistic scenario starts locally: China and America independently constrain their own companies out of self-preservation, implementing safety standards that must be met before deployment. Then, just as with nuclear weapons, they find common ground on preventing proliferation to terrorists or rogue states.

However, there’s also a more pessimistic scenario: paralyzed by political divisions and corporate interests, we fail to coordinate a response until it’s too late.


    © 2025 Lee Enterprises
