👍👍👍

The tax breaks in the Inflation Reduction Act are crucial to making the deal economically feasible, according to Constellation. They provide a credit for every megawatt-hour of nuclear energy produced.

lmao so instead of this funding the energy transition it’s just subsidizing the AI grift

  • someone [comrade/them, they/them]@hexbear.net

    And of course the second tragedy is that the AI is absolute dogshit. They’re not powering an artificial general intelligence that could do useful things like help in running a modern global-scale Project Cybersyn. All this staggering amount of electricity wasted so that Github users don’t need to search Stackoverflow, so that people can say “hey google set a 4 minute timer” in their kitchens instead of hitting a half-dozen buttons on their microwave, so that people can tell Alexa to play Despacito.

    • iridaniotter [she/her]@hexbear.net

      They’re not powering an artificial general intelligence that could do useful things like help in running a modern global-scale Project Cybersyn.

      You don’t need that for planning and in fact the People’s Commissariat for Energetics’ secret police would send you to super gulag for suggesting such a preposterous thing.

      • PorkrollPosadist [he/him, they/them]@hexbear.net

        People also put a lot of emphasis on the “I” in Artificial General Intelligence. It gives us the impression that we will have some kind of contraption with a button on it, and every time you push the button it conjures up a new, distinct digital agent of Albert Einstein. For a long, long time, at best, these things will conjure your average Redditor. People think if we create AGI we can tell computers to compose Mozart, but we’ll be lucky if we get anywhere farther than “I glued my balls to my butthole again.”

  • Infamousblt [any]@hexbear.net

    Oh sure NOW we want nuclear power. Not because of global warming or the immense pollution that burning fossil fuels produces, no no, those aren’t good reasons to move to nuclear. But powering AI servers? That’s what we need nuclear for! That’s more important than the health of a population or the entire biosphere!

    • hypercracker@hexbear.netOP

      On one hand, the idea of some AI embodying a huge computer complex powered by its own reactor is straight out of sci-fi (yes, I did just finish playing Rain World, how did you know?). On the other hand, this vision is significantly undermined by the mundane reality of a radiation hazard powering a million confidently incorrect redditor chatbots.

        • iridaniotter [she/her]@hexbear.net

          It’s a tossup between Three Mile Island and Centralia for which gets to be called Pennsylvania’s Chernobyl. (Personally I vote Centralia, since it’s still a hazard… I should visit.)

          • chickentendrils [any, comrade/them]@hexbear.net

            Centralia is fun.

            Three Mile is the undisputed PA Chernobyl for me. My family were friends with another family, the mother and daughter of which were from Philly but just happened to be a few miles from Three Mile Island the day things went down. Both of them developed breast cancer, decades apart, with no prior family history thinky-felix

        • HumongousChungus [she/her]@hexbear.net

          Current status: definitely not an ongoing hazard. At the time, though, a husband-and-wife team, the Thompsons, who had joined up as a radiation monitoring technician and a senior surveillance technician, spoke out about a health/dosimeter badge coverup and had to flee town after a stranger warned them their lives were in danger. When they settled in NM and began working on a book about it with the wife’s brother, the brother and the husband were run off the road; the brother was killed, and a manuscript of the book that was in the trunk went ‘missing’. Epidemiology links increased rates of health issues that stem from ionizing radiation to both the locations surrounding the incident and the areas downwind. Jean Trimmer, who lived in the area, reported a flash of heat and rain, followed by bad sunburns, her hair turning white and falling out, and an idiopathic atrophy of the kidney strange enough to warrant presentation to a nearby symposium of doctors. None of these are consistent with the official estimates of exposure, but they do match the symptoms of acute exposure to a much higher dose.

          Of course, this was also a time when the Soviets presented an information warfare challenge. By the same token, disasters of any size and sort are often covered up when there’s a cold war justification. See: the pandemic (ongoing, unabated)

          Potentially, the only difference between this and foreign radioactive disasters is the competency of US intelligence. I would not be surprised to learn much later that a coverup was instituted, which would have been entirely possible in the information environment of the time. I recommend nuclear energy advocates stop condescendingly using Three Mile Island as an example of nuclear panic and instead make an effort to compassionately address people’s concerns about potential health hazards and the lack of government support, if only to avoid embarrassment and backlash should a “full story” ever come out about the incident.

  • InevitableSwing [none/use name]@hexbear.net

    I didn’t have the motivation to read the whole thing so I scanned it for funny stuff. But it looked dreary so the only thing I read was the final paragraph. The article ends on a funny note. Tech companies don’t even bother to make an effort to lie anymore. Look at this shit.

    Microsoft has signed a contract to purchase fusion energy from a start-up that claims it can deliver it by 2028.

    ---

    Edit

    Related - [“tech bro bullshit” news] Nuclear fusion startup Helion claims it will have a working power plant by 2028. Microsoft is already a customer. More in body. - Hexbear

    • QuillcrestFalconer [he/him]@hexbear.net

      That startup (Helion, or whatever it’s called) claimed in 2013 that it would be producing power by 2018, then in 2018 claimed it would be producing power by 2023, and then in 2023 claimed it would be producing power by 2028. I’m starting to see a pattern.

    • ComradeKingfisher [he/him]@hexbear.net

      Yes

      spoiler

      Each time you use AI to generate an image, write an email, or ask a chatbot a question, it comes at a cost to the planet.

      In fact, generating an image using a powerful AI model takes as much energy as fully charging your smartphone, according to a new study by researchers at the AI startup Hugging Face and Carnegie Mellon University. However, they found that using an AI model to generate text is significantly less energy-intensive. Creating text 1,000 times only uses as much energy as 16% of a full smartphone charge.

      Their work, which is yet to be peer reviewed, shows that while training massive AI models is incredibly energy intensive, it’s only one part of the puzzle. Most of their carbon footprint comes from their actual use.

      The study is the first time researchers have calculated the carbon emissions caused by using an AI model for different tasks, says Sasha Luccioni, an AI researcher at Hugging Face who led the work. She hopes understanding these emissions could help us make informed decisions about how to use AI in a more planet-friendly way.

      Luccioni and her team looked at the emissions associated with 10 popular AI tasks on the Hugging Face platform, such as question answering, text generation, image classification, captioning, and image generation. They ran the experiments on 88 different models. For each of the tasks, such as text generation, Luccioni ran 1,000 prompts, and measured the energy used with a tool she developed called Code Carbon. Code Carbon makes these calculations by looking at the energy the computer consumes while running the model. The team also calculated the emissions generated by doing these tasks using eight generative models, which were trained to do different tasks.

      Generating images was by far the most energy- and carbon-intensive AI-based task. Generating 1,000 images with a powerful AI model, such as Stable Diffusion XL, is responsible for roughly as much carbon dioxide as driving the equivalent of 4.1 miles in an average gasoline-powered car. In contrast, the least carbon-intensive text generation model they examined was responsible for as much CO2 as driving 0.0006 miles in a similar vehicle. Stability AI, the company behind Stable Diffusion XL, did not respond to a request for comment.

      AI startup Hugging Face has undertaken the tech sector’s first attempt to estimate the broader carbon footprint of a large language model.

      The study provides useful insights into AI’s carbon footprint by offering concrete numbers and reveals some worrying upward trends, says Lynn Kaack, an assistant professor of computer science and public policy at the Hertie School in Germany, where she leads work on AI and climate change. She was not involved in the research.

      These emissions add up quickly. The generative-AI boom has led big tech companies to integrate powerful AI models into many different products, from email to word processing. These generative AI models are now used millions if not billions of times every single day.

      The team found that using large generative models to create outputs was far more energy intensive than using smaller AI models tailored for specific tasks. For example, using a generative model to classify movie reviews according to whether they are positive or negative consumes around 30 times more energy than using a fine-tuned model created specifically for that task, Luccioni says. The reason generative AI models use much more energy is that they are trying to do many things at once, such as generate, classify, and summarize text, instead of just one task, such as classification.

      Luccioni says she hopes the research will encourage people to be choosier about when they use generative AI and opt for more specialized, less carbon-intensive models where possible.

      “If you’re doing a specific application, like searching through email … do you really need these big models that are capable of anything? I would say no,” Luccioni says.

      The energy consumption associated with using AI tools has been a missing piece in understanding their true carbon footprint, says Jesse Dodge, a research scientist at the Allen Institute for AI, who was not part of the study.

      Comparing the carbon emissions from newer, larger generative models and older AI models is also important, Dodge adds. “It highlights this idea that the new wave of AI systems are much more carbon intensive than what we had even two or five years ago,” he says.

      Google once estimated that an average online search used 0.3 watt-hours of electricity, equivalent to driving 0.0003 miles in a car. Today, that number is likely much higher, because Google has integrated generative AI models into its search, says Vijay Gadepally, a research scientist at the MIT Lincoln lab, who did not participate in the research.

      Not only did the researchers find emissions for each task to be much higher than they expected, but they discovered that the day-to-day emissions associated with using AI far exceeded the emissions from training large models. Luccioni tested different versions of Hugging Face’s multilingual AI model BLOOM to see how many uses would be needed to overtake training costs. It took over 590 million uses to reach the carbon cost of training its biggest model. For very popular models, such as ChatGPT, it could take just a couple of weeks for such a model’s usage emissions to exceed its training emissions, Luccioni says.

      This is because large AI models get trained just once, but then they can be used billions of times. According to some estimates, popular models such as ChatGPT have up to 10 million users a day, many of whom prompt the model more than once.

      Studies like these make the energy consumption and emissions related to AI more tangible and help raise awareness that there is a carbon footprint associated with using AI, says Gadepally, adding, “I would love it if this became something that consumers started to ask about.”

      Dodge says he hopes studies like this will help us to hold companies more accountable about their energy usage and emissions.

      “The responsibility here lies with a company that is creating the models and is earning a profit off of them,” he says.
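
      The quoted figures lend themselves to a quick sanity check. A back-of-envelope sketch in Python, where the ~12.7 Wh full smartphone charge is an assumption (a typical modern phone) and the per-task ratios come from the study as quoted:

      ```python
      # Back-of-envelope energy comparison based on the study's quoted claims:
      #   - generating one image ~ one full smartphone charge
      #   - generating text 1,000 times ~ 16% of a full smartphone charge
      # The battery capacity below (~12.7 Wh) is an assumed typical phone.

      PHONE_CHARGE_KWH = 0.0127  # assumption: energy in one full smartphone charge

      energy_per_image_kwh = PHONE_CHARGE_KWH               # one image ~ one charge
      energy_per_text_kwh = 0.16 * PHONE_CHARGE_KWH / 1000  # one text generation

      # How many text generations consume the same energy as one image?
      texts_per_image = energy_per_image_kwh / energy_per_text_kwh

      print(f"per image: {energy_per_image_kwh * 1000:.2f} Wh")
      print(f"per text generation: {energy_per_text_kwh * 1e6:.2f} mWh")
      print(f"one image ~ {texts_per_image:,.0f} text generations")
      ```

      On those assumptions, one image generation costs about as much as 6,250 text generations, consistent with the orders-of-magnitude gap the article describes.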

  • dom [he/him]@hexbear.net

    Reactivating a notorious nuclear power plant solely to run AI sounds like a story beat that was cut from a Kojima game.

    • Des [she/her, they/them]@hexbear.net

      be fun if this was an elaborate trick to get states to build up nuclear and renewable power and then the bubble pops and it’s “lol that shit was fake the whole time”