    Quantum, Moore’s Law, And AI’s Future

By AI Logic News · June 29, 2025 · 7 min read
[Image: green microchip integrated on a motherboard, set in a blue printed circuit board. Credit: Getty]

In the game of AI acceleration, there are several key moving parts. One is hardware: what will the chips look like? Another is quantum computing: what role will it play? A third is scaling.

    Everyone from CEOs and investors to engineers is scrambling to figure out what the future looks like, but we got a few ideas from a recent panel at Imagination in Action that assembled some of the best minds on the matter.

    WSE and the Dinner Plate of Reasoning

Not too long ago, I wrote about the Cerebras WSE chip, a mammoth piece of silicon about the size of a dinner plate that is helping centralize large language model efforts. It is an impressive piece of hardware by any standard, and it has a role in coalescing the vanguard of what we are doing with AI hardware.

    In the aforementioned panel discussion, Julie Choi from Cerebras started by showing off the company’s WSE superchip, noting that some call it the “caviar of inference.” (I thought that was funny.)

    “I think that as we evolve, we’re just going to see even more innovative, novel approaches at the hardware architecture level,” she said.

    “The optimization space is extremely large,” said Dinesh Maheshwari, discussing architecture and compute units. “So I encourage everyone to look at it.”

Panelist Caleb Sirak, of MIT, talked about ownership of hardware.

    “As the models themselves start to change, how can businesses themselves integrate them directly and get them for a fair price, but also convert that AI, and the energy involved, into a productive utility?”

    “What is a computer, and what can a computer do?” asked Alexander Keesling, explaining his company’s work on hardware. “We took the fundamental unit of matter, a single atom, and turned it into the fundamental unit of information, which is a quantum bit … a quantum computer is the first time in human history where we can take advantage of the fundamental properties of nature to do something that is different and more powerful.”

    Jeremy Kepner of MIT’s Lincoln Lab had some thoughts on the singularity of computing – not the race toward AGI, but a myopic centralization of an overarching “operation.”

    “Every single computer in the high end that we built for the last many decades has only done one operation,” he said. “So there’s a lot to unpack there, but it’s for very deep mathematical and physics reasons: that’s the only operation we’ve ever been able to figure out how to accelerate over many decades. And so what I often tell the users is, the computer picks the application. AI happens to be acceleratable by that operation.”

    He urged the audience to move forward in a particular way.

    “Think about whatever you want to do, and if you can accelerate it with that kind of mathematical operation, you know the sky is the limit on what you can do,” he said. “And someone in your field will figure it out, and they will move ahead dramatically.”

    Engineering Challenges and AI Opportunities

The panel also mentioned some of the headwinds that innovators must contend with.

Jeff Grover, for his part, noted the near-term ability of systems to evolve.

    “We’re actually quite excited about this,” he said.

    The Software End

    Panelists discussed the relevance of software and the directions that coding is going in.

    “Programming languages are built for people,” Sirak said. “How do you actually change that to build languages and tools that AI can use?”

Choi mentioned benchmarks like inference rates of 2,900 tokens per second for Llama 4.

    “Open source models are rich for developers,” she said. “What that’s doing is building a bridge between the bravest developers. I would say the early adopters tend to be very courageous, and they’re willing to code on things that they’ve never seen before.”

    The Fast Car

Several panelists reached for the same metaphor, a Ferrari, with Choi referencing “Ferrari-level” speeds for the Cerebras chip.

Maheshwari talked about “exotic” chips and design from an architectural standpoint, comparing certain builds to “picking up groceries in a Ferrari.”

    He also mentioned the imperative of keeping the technology “street legal.”

    Moore’s Law and Progress

    Kepner talked about being surprised by what computers can do, and the size of investment in the industry. Moore’s law, he said, implied an upper limit for spending. He predicted another decade of efficiencies, and cited the Ozaki scheme, a matrix method for preserving precision in calculations.

    What About Quantum?

    “I think that the first area where we’re going to see quantum computing impact is going to be in research,” Keesling said. “These problems, at their core, are (about) trying to answer what happens when atoms and electrons interact with one another and develop these emergent behaviors … how we think about chemistry, how we think about drug interactions, how we think about material properties, all comes from electrons and atoms moving.”

    There was a lot to unpack in this panel discussion, including details on how we’re going to achieve progress in the next few years.

    The Ozaki Scheme

    Going back to this matrix idea, I was not familiar with this term, so I looked it up and asked ChatGPT to describe it in basic English.

    “It’s named after Makoto Ozaki, the person who came up with the idea,” the model told me. “He found a smart way to do very accurate math (like multiplying big grids of numbers) using fast but less accurate tools (like low-precision numbers). His method splits the work into small, simple steps and then carefully puts the pieces back together to get the exact right answer.”
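To make that splitting idea concrete, here is a toy sketch of my own (not Ozaki’s actual algorithm, which targets floating-point matrices): it multiplies non-negative integer matrices exactly while doing all of the multiplication in float32, by splitting each entry into small pieces whose partial products fit exactly in float32’s 24-bit significand, then recombining the pieces.

```python
import numpy as np

def exact_matmul_via_float32(a, b, bits=8):
    # Toy illustration of the Ozaki-style split: break each entry into a
    # high piece and a low piece of `bits` bits, so every partial product
    # of pieces is an exact integer well below 2**24 and float32 can
    # represent it without rounding. Assumes non-negative integer
    # matrices with entries < 2**(2 * bits).
    mask = (1 << bits) - 1
    ah, al = a >> bits, a & mask
    bh, bl = b >> bits, b & mask
    f32 = np.float32
    # Each of these four products is computed entirely in float32,
    # yet is exact because every intermediate value is a small integer.
    hh = (ah.astype(f32) @ bh.astype(f32)).astype(np.int64)
    hl = (ah.astype(f32) @ bl.astype(f32)).astype(np.int64)
    lh = (al.astype(f32) @ bh.astype(f32)).astype(np.int64)
    ll = (al.astype(f32) @ bl.astype(f32)).astype(np.int64)
    # Recombine the pieces at the right bit offsets in int64.
    s = 1 << bits
    return hh * s * s + (hl + lh) * s + ll
```

With `bits=8` each piece is at most 255, so a dot product of pieces over a few hundred terms stays below 2**24 and is represented exactly in float32; the recombination then reproduces the full-precision product, which is the “put the pieces back together” step the quote describes.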

    Going further, ChatGPT, just to be nice, even gave me a medieval storyline to show how the Ozaki scheme works, and to contrast it to other alternatives.

    I’m just going to print that here, because it’s interesting.

    The Tale of the Kingdom of Matrixland

    In the kingdom of Matrixland, the royal court has a big job: multiplying giant tables of numbers (called matrices). But the royal calculator is slow when it uses fancy, high-precision numbers.

    So the King holds a contest:
    “Who can multiply big matrices both quickly and accurately?”

    Sir Ozaki’s Clever Trick

    Sir Ozaki, a wise mathematician, enters the contest.
    He says:

    “I’ll break each matrix into small, easy pieces that the royal calculator can handle quickly. Then I’ll multiply those simple parts and put them back together perfectly.”

    The crowd gasps!
    His method is fast and still gives the exact right answer.

    The King declares it the Ozaki Scheme.

    The Other Contestants

    But other knights have tricks too:

    Lady Refina (Iterative Refinement)
    She does the quick math first, then checks her work.
    If it’s off, she fixes it — again and again — until it’s just right.
    She’s very accurate, but takes more time.

    Sir Compenso (Compensated Summation)
    He notices small errors that get dropped during math and catches them before they vanish.
    He’s good at adding accurately, but can’t handle full matrix multiplication like Ozaki.

    Lady Mixie (Mixed Precision)
    She charges in with super speed, using tiny fast numbers (like FP8 or FP16).
    Her answers aren’t perfect, but they’re “good enough” for training the kingdom’s magical beasts (AI models).

    Baron TensorFloat (TF32)
    He uses a special number format invented by the kingdom’s engineers.
    Faster than full precision, but not as sharp as Ozaki.
    A favorite of the castle’s GPU-powered wizard lab.

    The Ending

    Sir Ozaki’s method is the most exact while still using fast tools.
    Others are faster or simpler, but not always perfect.

    The King declares:

    “All of these knights are useful, depending on the task. But if you want both speed and the exact answer, follow Sir Ozaki’s path!”
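Lady Refina’s trick also exists outside the fairy tale: mixed-precision iterative refinement solves a system quickly in low precision, then repeatedly corrects the answer using residuals computed in high precision. A minimal sketch, assuming a well-conditioned system and using NumPy for illustration (a real implementation would factorize the matrix once and reuse the factors):

```python
import numpy as np

def refine_solve(a, b, iters=5):
    # Solve A x = b cheaply in float32, then refine in float64:
    # each pass computes the residual r = b - A x in high precision
    # and solves A d = r in low precision to correct x.
    a32 = a.astype(np.float32)
    x = np.linalg.solve(a32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - a @ x                              # residual in float64
        d = np.linalg.solve(a32, r.astype(np.float32))
        x += d.astype(np.float64)                  # apply the correction
    return x
```

Each pass shrinks the error by roughly a factor of cond(A) times float32’s unit roundoff, so for well-conditioned systems a handful of cheap low-precision solves recovers near-float64 accuracy, which is exactly the “check and fix, again and again” behavior the tale assigns to her.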

    Anyway, you have a range of ideas here about quantum computing, information precision, and acceleration in the years to come. Let me know what you think about what all of these experts have said about the future of AI.
