    OpenAI’s models ‘memorized’ copyrighted content, new study suggests

By AI Logic News · April 4, 2025 · 3 min read

    A new study appears to lend credence to allegations that OpenAI trained at least some of its AI models on copyrighted content.

    OpenAI is embroiled in suits brought by authors, programmers, and other rights-holders who accuse the company of using their works — books, codebases, and so on — to develop its models without permission. OpenAI has long claimed a fair use defense, but the plaintiffs in these cases argue that there isn’t a carve-out in U.S. copyright law for training data.

    The study, which was co-authored by researchers at the University of Washington, the University of Copenhagen, and Stanford, proposes a new method for identifying training data “memorized” by models behind an API, like OpenAI’s.

    Models are prediction engines. Trained on a lot of data, they learn patterns — that’s how they’re able to generate essays, photos, and more. Most of the outputs aren’t verbatim copies of the training data, but owing to the way models “learn,” some inevitably are. Image models have been found to regurgitate screenshots from movies they were trained on, while language models have been observed effectively plagiarizing news articles.

    The study’s method relies on words that the co-authors call “high-surprisal” — that is, words that stand out as uncommon in the context of a larger body of work. For example, the word “radar” in the sentence “Jack and I sat perfectly still with the radar humming” would be considered high-surprisal because it’s statistically less likely than words such as “engine” or “radio” to appear before “humming.”
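"Surprisal" has a standard information-theoretic meaning: the negative log probability a model assigns to a word in context, so rarer continuations score higher. The study works from a real model's probability estimates; the sketch below uses made-up illustrative numbers for the "humming" example to show the calculation itself.

```python
import math

# Toy conditional probabilities for the word before "humming",
# standing in for a real language model's estimates (numbers are
# illustrative only, not from the study).
p_before_humming = {
    "engine": 0.30,
    "radio": 0.20,
    "radar": 0.002,
}

def surprisal_bits(word: str, dist: dict) -> float:
    """Surprisal in bits: -log2 of the word's probability in this context."""
    return -math.log2(dist[word])

for word in ("engine", "radio", "radar"):
    print(f"{word}: {surprisal_bits(word, p_before_humming):.1f} bits")
```

Under these numbers, "radar" scores far higher than "engine" or "radio", which is what makes it a useful probe word: a model is unlikely to guess it from context alone unless it has seen the passage.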

    The co-authors probed several OpenAI models, including GPT-4 and GPT-3.5, for signs of memorization by removing high-surprisal words from snippets of fiction books and New York Times pieces and having the models try to “guess” which words had been masked. If the models managed to guess correctly, it’s likely they memorized the snippet during training, concluded the co-authors.
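The probe itself is simple in outline: mask the high-surprisal word, ask the model to fill the blank, and check the guess. A minimal sketch of that loop, assuming a hypothetical `ask_model` callable that wraps whatever API is being audited (the study's actual prompting and scoring details are more involved):

```python
# Sketch of the mask-and-guess memorization probe. `ask_model` is a
# hypothetical stand-in for a call to the model API under audit.
MASK = "[MASK]"

def make_probe(snippet: str, target_word: str) -> str:
    """Replace the first occurrence of a high-surprisal word with a mask."""
    return snippet.replace(target_word, MASK, 1)

def probe_memorization(snippet: str, target_word: str, ask_model) -> bool:
    """Return True if the model recovers the masked word exactly."""
    prompt = (
        "Fill in the masked word in this passage. Reply with one word.\n\n"
        + make_probe(snippet, target_word)
    )
    guess = ask_model(prompt).strip().lower()
    return guess == target_word.lower()

snippet = "Jack and I sat perfectly still with the radar humming"
print(make_probe(snippet, "radar"))
# A fake responder for demonstration; a real probe would call the API.
print(probe_memorization(snippet, "radar", lambda prompt: "radar"))
```

The key design point is that a correct guess on a genuinely high-surprisal word is weak evidence of context inference and stronger evidence of memorization, which is why the word selection step matters as much as the guessing step.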

[Image] An example of having a model "guess" a high-surprisal word. Image credits: OpenAI

According to the results of the tests, GPT-4 showed signs of having memorized portions of popular fiction books, including titles from BookMIA, a dataset of samples from copyrighted ebooks. The results also suggested that the model memorized portions of New York Times articles, albeit at a comparatively lower rate.

    Abhilasha Ravichander, a doctoral student at the University of Washington and a co-author of the study, told TechCrunch that the findings shed light on the “contentious data” models might have been trained on.

    “In order to have large language models that are trustworthy, we need to have models that we can probe and audit and examine scientifically,” Ravichander said. “Our work aims to provide a tool to probe large language models, but there is a real need for greater data transparency in the whole ecosystem.”

    OpenAI has long advocated for looser restrictions on developing models using copyrighted data. While the company has certain content licensing deals in place and offers opt-out mechanisms that allow copyright owners to flag content they’d prefer the company not use for training purposes, it has lobbied several governments to codify “fair use” rules around AI training approaches.
