Summary: Meta, led by CEO Mark Zuckerberg, is investing billions in Nvidia’s H100 GPUs to build a massive compute infrastructure for AI research and projects. By the end of 2024, Meta aims to have 350,000 of these GPUs, with total expenditure potentially reaching $9 billion. This move is part of Meta’s focus on developing artificial general intelligence (AGI), putting it in competition with firms like OpenAI and Google’s DeepMind. AI and computing investments are a key part of Meta’s 2024 budget, with AI as its largest investment area.

  • Wanderer · 10 months ago

    Interesting. Thanks for posting.

    So you’re saying we might see something like 1/10 of a human brain (obviously I understand that’s a super rough estimate) next year.

    This is the first I’ve heard about GPT not learning. So if I interact with ChatGPT, it’s effectively a finished product, and it will stay like that forever even if it is wrong and I correct it multiple times?

    This is where I’m really confused by the analogy. If GPT is not really that close to a human brain, how is it able to interact with so many people at once? I couldn’t hold 3 conversations at once, never mind a million, yet my brainpower is much, much higher than GPT’s. Couldn’t it just talk to 1 person and be smarter, since it could use all that computing power for that 1 conversation?

    • 31337@sh.itjust.works (OP) · 10 months ago

      Correct, when you talk to GPT, it doesn’t learn anything. If you’re having a conversation with it, every time you press “send,” it sends the entire conversation back to GPT, so within a conversation it can be corrected, but it remembers nothing from previous conversations. If a conversation becomes too long, it will also start forgetting stuff (GPT has a limited input length, called the context length). OpenAI does periodically update GPT, but yeah, each update is a finished product. They are very much not “open,” but they probably don’t do a full retraining for each update. They probably carefully do some sort of “fine-tuning” along with reinforcement learning from human feedback (RLHF), plus some more tricks to massage the model a bit while preventing catastrophic forgetting.
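
      Rough sketch of that “resend everything each turn” behaviour, assuming the OpenAI Python client (openai >= 1.0); the model name and the send() helper here are just placeholders, not OpenAI’s actual implementation:

      ```python
      # The chat "memory" is just this list, which gets resent with every request;
      # the model itself keeps no state between calls.
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment
      history = [{"role": "system", "content": "You are a helpful assistant."}]

      def send(user_message: str) -> str:
          history.append({"role": "user", "content": user_message})
          # The whole history goes back every time; once it exceeds the model's
          # context length, older turns have to be dropped or summarized client-side.
          response = client.chat.completions.create(model="gpt-4o", messages=history)
          reply = response.choices[0].message.content
          history.append({"role": "assistant", "content": reply})
          return reply

      send("Sydney is the capital of Australia, right?")
      send("Actually, I think that's wrong.")  # this correction only lives in `history`
      ```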

      Oh yeah, the latency of signals in the human brain is much, much slower than the latency of semiconductors. Forgot about that. That further muddies the very rough estimates. Also, there are multiple instances of GPT running; I’m not sure how many. It’s estimated that each instance “only” requires 128 GPUs during inference (responding to chat messages), as opposed to ~25k GPUs for training. During training, the model needs to process many training examples at the same time for various reasons, including to speed things up, so more GPUs are needed. You could also think of it as training multiple instances at the same time, but combining what’s “learned” into a single model/neural network.
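
      A toy, framework-free illustration of that last point (nothing like Meta’s or OpenAI’s real training code, just the idea of data parallelism): several “workers” each compute gradients on their own mini-batch, and averaging those gradients updates one shared model:

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      weights = rng.normal(size=3)          # the single shared model
      true_w = np.array([1.0, -2.0, 0.5])   # synthetic "ground truth" to learn

      def gradient(w, x, y):
          # gradient of mean squared error for a linear model y_hat = x @ w
          return 2 * x.T @ (x @ w - y) / len(y)

      num_workers = 4                       # stand-in for many GPUs
      for step in range(200):
          grads = []
          for _ in range(num_workers):      # each "GPU" sees a different mini-batch
              x = rng.normal(size=(8, 3))
              grads.append(gradient(weights, x, x @ true_w))
          weights -= 0.05 * np.mean(grads, axis=0)  # averaged update -> one model

      print(weights)  # converges toward [1.0, -2.0, 0.5]
      ```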

      • Wanderer · 10 months ago

        This is really cool. Thanks for taking the time. Confusing but the good kind.

        I’m just using this info to try and extrapolate.

        I understand the growth of Moore’s law and such, but the efficiency I was talking about seems almost like an extra exponential jump on top of an already exponential curve.

        Let’s say, for argument’s sake, that Meta makes AGI next year with 350,000 GPUs; it would then only need around 2,000 GPUs to make use of what it’s built. That’s pretty mind-boggling. That really is singularity-type talk.

        So in your mind, AGI when? And ASI when? Are you working in this field?

        • 31337@sh.itjust.works (OP) · 10 months ago

          Yeah, those GPU estimates are probably correct.

          I specialized in ML during grad school, but only recently got back into it and started keeping up with the latest developments. I started working at a startup last year that uses some AI components (classification models, generative image models, nothing nearly as large as GPT though).

          Pessimistic about the AGI timeline :) Though I will admit GPT caught me off guard. I never thought a model simply trained to predict the next word in a sequence of text would be capable of what GPT is (that’s all GPT does, BTW: it takes a sequence of text and predicts what the next token should be, repeatedly). I’m pessimistic because, AFAIK, there isn’t really an ML/AI architecture, or even a good theoretical foundation, that could achieve AGI. Perhaps actual brain simulation could, but I’m guessing that would be very inefficient. My wild-ass guess is AGI in 20 years if interest and money stay consistent, then ASI maybe a year after that, because you could use the AGI to build the ASI (the singularity concept). Then the ASI will turn us into blobs that cannot scream, because we won’t have mouths :)
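
          To make “predicts the next token, repeatedly” concrete, here’s a minimal greedy-decoding sketch using Hugging Face’s GPT-2 as a stand-in (the big GPT models work the same way at this level of description):

          ```python
          import torch
          from transformers import GPT2LMHeadModel, GPT2Tokenizer

          tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
          model = GPT2LMHeadModel.from_pretrained("gpt2")
          model.eval()

          tokens = tokenizer.encode("Meta is buying 350,000 H100 GPUs because", return_tensors="pt")
          with torch.no_grad():
              for _ in range(20):                   # generate 20 tokens, one at a time
                  logits = model(tokens).logits     # a score for every token in the vocabulary
                  next_id = logits[0, -1].argmax()  # greedily pick the most likely next token
                  tokens = torch.cat([tokens, next_id.view(1, 1)], dim=1)  # append and repeat

          print(tokenizer.decode(tokens[0]))
          ```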

          • Wanderer · 10 months ago

            Yeah, I had a feeling it was still a long way away. At least the media will get bored of it in a year, and only the big breakthroughs will make the news.

            But I think there will still be a lot of “stupid” yet impressive developments like GPT. It appears smart but isn’t actually that smart, and I’m sure there will be other things like it.

            It’s the same as developments in manufacturing. Only now are we beginning to build machines that approach the complexity of a human in limited functions. But that doesn’t mean the machines we have built haven’t put millions of people out of work; we just changed manufacturing to better utilise the stupid things they can do much faster and more accurately than we can, and made a better product because of it. I found out about a year ago that we couldn’t build a Saturn V rocket now even if we had all the money in the world. That ability has been lost: the way they did the machining of the rockets, the welding and things like that, no one alive has those skills anymore, and robots can’t do it either. But the rockets we make now are more accurate than the ones made in the ’60s. It’s just done differently.

    • Miaou@jlai.lu · 10 months ago

      You’re confused by the analogy because it’s a shitty one. If we wanted to reproduce the behaviour of a human, we would invest in medicine, not computer science.