Scientists at Princeton University have developed an AI model that can predict and prevent plasma instabilities, a major hurdle in achieving practical fusion energy.

Key points:

  • Problem: Plasma escaping containment in donut-shaped tokamak reactors disrupts fusion reactions and damages equipment.
  • Solution: The AI model predicts instabilities 300 milliseconds before they happen, allowing adjustments that keep the plasma contained (see the sketch after this list).
  • Significance: This is the first time AI has been used to proactively prevent tearing instabilities in fusion experiments.
  • Future: Researchers hope to refine the model for other reactors and optimize fusion reactions.
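
Below is a minimal sketch of that predict-and-adjust loop in Python. Everything in it (function names like predict_tearing_risk and suggest_adjustment, the 0.5 risk cutoff, the toy diagnostics) is assumed for illustration; it is not the Princeton team’s code or its real API.

```python
import random

# Illustrative sketch of the control idea: score the tearing-instability
# risk ~300 ms ahead, and only adjust the actuators when risk is high.
# Every name and number here is an assumption, not the real system.

RISK_THRESHOLD = 0.5   # assumed cutoff for "instability likely"
HORIZON_MS = 300       # prediction horizon reported for the model

def predict_tearing_risk(diagnostics: dict) -> float:
    """Stand-in for the learned predictor: returns a risk in [0, 1]."""
    # A real controller would run a trained neural net on live sensors.
    return random.random()

def suggest_adjustment(diagnostics: dict) -> dict:
    """Stand-in for the policy's corrective action (made-up knobs)."""
    return {"beam_power_delta": -0.05, "plasma_shape_delta": 0.01}

def control_step(diagnostics: dict) -> dict | None:
    """One control tick: look HORIZON_MS ahead, act only if needed."""
    if predict_tearing_risk(diagnostics) > RISK_THRESHOLD:
        return suggest_adjustment(diagnostics)
    return None  # plasma looks stable; leave the actuators alone

if __name__ == "__main__":
    snapshot = {"q95": 3.2, "density": 0.8}  # toy diagnostic values
    print(control_step(snapshot))
```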
Comments:

  • FaceDeer · 40 points · 3 months ago

    I’ve lost track, is AI a good thing today or a bad thing?

      • @Pipoca@lemmy.world · 6 points · 3 months ago

        “AI” has been used for a fairly wide array of algorithms for decades, though. Everything from alpha-beta tree search to k-nearest-neighbors to decision forests to neural nets is considered AI (a from-scratch example at the end of this comment).

        Edit: The paper is called “Avoiding fusion plasma tearing instability with deep reinforcement learning”.

        Reinforcement learning and deep neural nets are buzzwordy these days, but neural nets have been an AI thing for decades and decades.
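
        As a sense of how simple some of that decades-old “AI” is, here is a from-scratch k-nearest-neighbors classifier, a minimal sketch with made-up toy data:

        ```python
        from collections import Counter

        def knn_predict(train, query, k=3):
            """Classify `query` by majority vote of its k nearest neighbors.

            `train` is a list of (point, label) pairs; points are float tuples.
            """
            # Sort training points by squared Euclidean distance to the query.
            nearest = sorted(
                train,
                key=lambda pair: sum((a - b) ** 2 for a, b in zip(pair[0], query)),
            )[:k]
            # Majority vote among the k closest labels.
            return Counter(label for _, label in nearest).most_common(1)[0][0]

        # Toy usage with made-up data: two clusters, one query point.
        data = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"),
                ((1.0, 1.0), "B"), ((0.9, 1.1), "B")]
        print(knn_predict(data, (0.2, 0.1)))  # -> "A"
        ```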

      • @Anyolduser@lemmynsfw.com · 2 points · 3 months ago

        For real. I’ve started to replace “AI” with “program” or “software” in my head every time I read a headline.

    • @Bogasse@lemmy.ml · 29 points · 3 months ago

      And AI is a buzzword that covers a whole variety of statistical tools. Articles say “AI” to evoke generative tools in people’s minds, but very specialized tools are at work here.

    • Ekky · 24 points · 3 months ago

      AI is a very broad term, ranging from physical AI (the materials and properties of a robotic grabbing tool) to classical AI (as seen in many games, or in a robotic arm calculating a path from its current position to a target position) to ML-based AI (LLMs, neural nets in general, KNN, etc.).

      I guess it’s much the same as asking “are vehicles bad?”. I don’t know, are we talking horse carriages? Cars? Planes? Electric scooters? Skateboards?

      Going back to your question, AI in general is not bad, though LLMs have become too popular too quickly and have thus ended up being misunderstood and misused. So you can indeed say that LLMs are bad, at least when not used for their intended purposes.

    • @WallEx@feddit.de · 23 points · 3 months ago

      It’s a tool; it can be used for both. Just like any other tool. A hammer, for example: an excellent killing weapon, but also great for driving nails.

      • @treefrog · 9 points · 3 months ago

        A scalpel can be used to cut or to heal, depending on the skill and intentions of the wielder.

        Learned that from Stanislav Grof. He was talking about LSD.

    • @Squire1039 (OP) · 10 points · 3 months ago

      AI is most likely here to stay, so if you have it do “good” things effectively, then it’s a good boi. If it is ineffective or you have it do “bad” things, then it’s a bad boy.

    • @webghost0101@sopuli.xyz · 9 points · 3 months ago

      It’s neither good nor bad. It’s a power tool (for now); it’s as good as the people behind it, both in ethics and in expertise.

    • @Chakravanti@sh.itjust.works · 6 points · 3 months ago

      Skynet assures you it’s a good thing. The Matrix disagrees, because it points out that Skynet is closed source and no one knows what it’s really doing.

      • Johanno · 3 points · 3 months ago

        The funny thing is that with “AI” (aka machine learning), even when it’s open source, nobody knows what it’s doing or why.

    • @Hestia@lemmy.world · 3 points · 3 months ago

      Good thing, because one day our robot overlords will read this and I want to be on record having said that.

  • @Zink@programming.dev · 15 points · 3 months ago

    “Together with a form of fusion, the machines had all the energy they would ever need”

    Or something close to that.

  • @Nobody@lemmy.world · -44 points · 3 months ago

    What happens when the AI hallucinates and suddenly needs to Chernobyl the plant to fix a hallucinated emergency?

    • @dbilitated@aussie.zone · 77 points · 3 months ago

      the reaction stops and there’s no fissile material anywhere.

      this is the whole point of fusion. they didn’t have fusion at Chernobyl.

      we don’t think you’re some sage for knowing AI can hallucinate, and this isn’t a large language model so hallucinations aren’t even remotely relevant. much like Chernobyl.

    • @masterspace@lemmy.ca · 61 points · 3 months ago

      Nuclear fusion can’t Chernobyl. Even if the AI were to fuck with the machine and cause it to break, the instant it broke the reaction would stop, because it’s not self-sustaining without a massive magnetic containment field.

      • @Nobody@lemmy.world · -79 points · 3 months ago

        Forgive me if I think any kind of nuclear reaction should not be handled by what we’re calling “AI.” It could hallucinate that it’s winning a game of chess by causing a nuclear blast.

        • @4am · 43 points · 3 months ago

          That’s not how AI or nuclear fusion work.

        • Deceptichum · 29 points · 3 months ago

          Okay, you’re forgiven. Doesn’t change the fact that your opinion is flawed, however.

        • TimeSquirrel · 27 points · 3 months ago

          Putting aside your lack of knowledge of nuclear energy and AI systems: do you honestly think scientists are stupid enough to give a non-deterministic system complete control over critical systems? No, they are merely taking suggestions from it, with hard limits on what it can do. A minimal sketch of that pattern follows.
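
          Here is that “suggestions with hard limits” idea in Python; the parameter names and safe ranges are invented for illustration, not taken from the experiment:

          ```python
          # The policy proposes setpoints; plain deterministic code clamps
          # them to a hard-coded safe envelope before they reach any actuator.
          # All parameter names and ranges are invented for illustration.

          SAFE_LIMITS = {
              "beam_power_mw":     (0.0, 2.0),
              "plasma_current_ma": (0.5, 1.5),
          }

          def clamp_suggestion(suggested: dict) -> dict:
              """Clip every AI-suggested setpoint into its safe range."""
              safe = {}
              for name, value in suggested.items():
                  lo, hi = SAFE_LIMITS[name]
                  safe[name] = min(max(value, lo), hi)
              return safe

          # The model can suggest anything; raw values never reach the hardware.
          print(clamp_suggestion({"beam_power_mw": 9.9, "plasma_current_ma": 1.0}))
          # -> {'beam_power_mw': 2.0, 'plasma_current_ma': 1.0}
          ```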

        • @orclev@lemmy.world · 26 points · 3 months ago

          Setting aside the matter of “AI”, this is a fusion reactor, not fission, so there’s no scenario in which this can possibly cause an explosion. The absolute worst case scenario is that containment fails and the plasma melts and destroys the electromagnets and superconductors of the containment vessel before dissipating. It would be a very expensive mistake to repair and the reactor would be out of commission until it was fixed, but in terms of danger to anyone not literally standing right next to the reactor there is none. Even someone standing next to the reactor would probably be in more danger from the EM fields of a correctly functioning reactor than they would be from the plasma of a failed one.

        • Ms. ArmoredThirteen · 25 points · 3 months ago

          Whatever you read to convince you that this is what an AI hallucination is needs a better editing pass.

          • @Nobody@lemmy.world · -50 points · 3 months ago

            Error builds upon error. It’s cursed from the start. When you factor in poisoned data, it never had a chance.

            It’s not here yet because we aren’t advanced enough to make it happen. Dress it up in whatever way the owner class can swallow. That’s the truth. Dead on arrival.

            • @Buttermilk@lemmy.ml · 28 points · 3 months ago

              It seems like you are building on criticisms of LLMs and applying them to something that is very different. What poisoned data do you imagine this model having in the future?

              That is a criticism of LLMs because new generations are being trained on writing that could be the output of LLMs, which can degrade the model. What suggests to you that this fusion reactor will be using synthetic fusion reactor data to learn when to stop itself?

            • @KairuByte@lemmy.dbzer0.com · 24 points · 3 months ago

              That isn’t how any of this works…

              You can’t just assume every AI works exactly the same. Especially since the term “AI” is such a vague and generalized definition these days.

              The hallucinations you’re talking about, for one, refer to LLMs losing track of the narrative when they’re required to hold too much “in memory.”

              Poisoned data isn’t even something an AI of this sort would really encounter unless intentional sabotage took place. It’s a private program training on private data; where does the opportunity for intentionally bad data come from?

              And errors don’t necessarily build on errors. These are models that predict 300 milliseconds into the future using known physics and estimated outcomes. They can literally check their predictions 300 milliseconds later if the need arises, but honestly, why would they? Just move on to the next calculation from virgin data and estimate the next outcome, and the next, and the next (a toy version of that self-checking loop is sketched below).

              On top of all that… this isn’t even dangerous. It’s not like anyone is handing the detonator for a nuke to an AI and saying “push the button when you think it’s best.” The worst outcome is “no more power,” which is scary if you run on electricity, but mildly frustrating if you’re a human attempting to achieve fusion.
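
              A toy version of that self-checking loop, assuming an invented 100 ms control tick and a 0.5 decision threshold (none of this is from the paper): every prediction can be graded as soon as its horizon elapses.

              ```python
              # Illustrative only: a forecaster that looks one horizon ahead can
              # grade itself once that horizon elapses. The 100 ms tick, the 0.5
              # threshold, and the toy data are assumptions, not from the paper.

              HORIZON = 3  # ticks ahead (e.g. 300 ms at a 100 ms control tick)

              def score_predictions(risk_scores, events):
                  """Compare each prediction to the outcome HORIZON ticks later.

                  risk_scores[t]: predicted chance of an instability at t + HORIZON.
                  events[t]: True if an instability actually occurred at tick t.
                  """
                  hits = misses = false_alarms = 0
                  for t, risk in enumerate(risk_scores[:-HORIZON]):
                      predicted, actual = risk > 0.5, events[t + HORIZON]
                      if predicted and actual:
                          hits += 1
                      elif actual:
                          misses += 1
                      elif predicted:
                          false_alarms += 1
                  return hits, misses, false_alarms

              print(score_predictions([0.1, 0.9, 0.2, 0.8, 0.1, 0.1],
                                      [False, False, False, False, True, False]))
              # -> (1, 0, 0): the one real event was predicted HORIZON ticks early
              ```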

            • @Syntha@sh.itjust.works · 15 points · 3 months ago

              Me, when I confidently spread misinformation about topics I don’t even have a surface level understanding of.

    • @catloaf · 11 points · 3 months ago

      If it attempts to exceed safe limits, either it’s ignored or the whole thing shuts down.

    • @webghost0101@sopuli.xyz · 3 points · 3 months ago

      I would hope scientific experts understand the nature of their work well enough to know when it’s hallucinating.

      I use AI for coding, and sure, it can hallucinate horrible code, but I wouldn’t copy it without reading through the logic.

        • @webghost0101@sopuli.xyz · 1 point · 3 months ago

          I know, but it remains applicable to LLMs in general, which is what worries most people when they read “AI”. And it is not unlikely that politicians and doctors may be using those soon.

          Machine learning as a tool for science is as safe as science or AI gets.