• Primarily0617@kbin.social · 1 year ago

      multibillion dollar company discovers the memory hierarchy

      could the next big leap be integrating instructions and data in the same memory store?

    • pelya@lemmy.world · 1 year ago

      The article doesn’t say whether they’ve innovated enough to produce capacitor-based DRAM with the CPU on the same die. I guess it would come in a 1 GB variant if they managed that.

      • proctonaut@lemmy.world · 1 year ago

        I haven’t really kept up with it, but doesn’t the Zen architecture have separate L1 and L2 caches for each core?

        • MostlyHarmless@programming.dev · 1 year ago

          Even if it does, it isn’t the same thing. The article explains everything:

          NorthPole is made of 256 computing units, or cores, each of which contains its own memory… The cores are wired together in a network inspired by the white-matter connections between parts of the human cerebral cortex, Modha says. This and other design principles — most of which existed before but had never been combined in one chip — enable NorthPole to beat existing AI machines

  • SturgiesYrFase@lemmy.ml · 1 year ago

    While this will speed up loads of things, it also feels like it will end up being another way to remove upgradability from devices. Want more RAM in your desktop? Buy a new CPU.

    • Patapon Enjoyer@lemmy.world · 1 year ago

      The article says the whole CPU has something like 200 MB of memory, so it’s not really replacing the RAM already in PCs. Plus this seems focused on AI applications, not general computing.

      • SturgiesYrFase@lemmy.ml · 1 year ago

        *hits blunt* That’s just your opinion, man.

        And that’s fair. At the same time, it’s still quite new; once it’s matured a bit I could definitely see this being how things go until… idk, hardlight computing or w.e

      • DaPorkchop_@lemmy.ml · 1 year ago

        So… they’ll probably add some slower, larger-capacity memory chips on the side, and then they’ll need to copy data back and forth between the slow off-chip memory and the fast on-chip memory… I’m pretty sure they’ve just reinvented the cache.
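The copy-in/copy-out pattern this comment describes is exactly what a cache is; a minimal sketch in Python, assuming a direct-mapped design with one word per line (the class, sizes, and addresses here are illustrative, not anything from the article):

```python
# Sketch of the "copy between slow and fast memory" pattern: a tiny
# direct-mapped cache in front of a larger, slower backing store.
class DirectMappedCache:
    def __init__(self, backing, num_lines=4):
        self.backing = backing          # slow, large "off-chip" memory (a list)
        self.num_lines = num_lines      # small, fast "on-chip" capacity
        self.tags = [None] * num_lines  # which address each line currently holds
        self.data = [None] * num_lines
        self.hits = 0
        self.misses = 0

    def read(self, addr):
        line = addr % self.num_lines    # fixed mapping: address -> cache line
        if self.tags[line] == addr:
            self.hits += 1              # fast path: data already on-chip
        else:
            self.misses += 1            # slow path: copy in from backing store
            self.tags[line] = addr
            self.data[line] = self.backing[addr]
        return self.data[line]

ram = list(range(100))                  # pretend off-chip DRAM
cache = DirectMappedCache(ram)
for addr in [0, 1, 0, 1, 5, 1]:
    cache.read(addr)
print(cache.hits, cache.misses)         # -> 2 4
```

Note how address 5 evicts address 1 (both map to line 1), so the final read of 1 misses again: the classic conflict-miss behavior that cache hierarchies trade off against capacity.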

    • glimse@lemmy.world · 1 year ago

      I mean, the physical distance between the RAM and the CPU will eventually be the limiting factor, right? It’s inevitable for more reasons than profit.
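A rough back-of-envelope check of that point, assuming a ~10 cm CPU-to-DIMM trace and a 3 GHz clock (both figures are illustrative assumptions, not from the article): even at the vacuum speed of light, the round trip alone costs about two clock cycles.

```python
# Illustrative numbers: how long does a signal take to reach a DIMM
# ~10 cm from the CPU and come back, measured in core clock cycles?
C = 299_792_458             # speed of light in vacuum, m/s
trace_length_m = 0.10       # assumed CPU-to-RAM distance
clock_hz = 3e9              # assumed 3 GHz core clock

round_trip_s = 2 * trace_length_m / C
cycles = round_trip_s * clock_hz
print(f"round trip: {round_trip_s * 1e9:.2f} ns ~= {cycles:.1f} cycles")
# -> round trip: 0.67 ns ~= 2.0 cycles
```

Signals in copper PCB traces propagate at roughly half the vacuum speed, so the real flight-time penalty is larger still, before DRAM access latency is even counted; that physics is a big part of why moving memory on-chip helps.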

  • iHUNTcriminals · 1 year ago

    Right, like I’m going to believe Baylor Swift.

    …/s