James Cameron on AI: “I warned you guys in 1984 and you didn’t listen”

    • LetMeEatCake@lemmy.world
      1 year ago

      That’s not what they said.

      What people are calling “AI” today is not AI in the sense that laypeople understand it. Personally I hate the use of the term in this context and think it would have been much better to stick with Machine Learning (often just ML). Regardless, the point is that you cannot get from these systems to what you think of as AI. To get there would require new, different systems, or changing these systems so thoroughly as to make them unrecognizable from their origins.

      If you put e.g. ChatGPT into a robotic body with sensors… you’d get nothing. It has no concept of a body. No concept of controlling the body. No concept of operating outside of the constraints within which it already operates. You could debate if it has some inhuman concept of language, but that debate is about as far as you can go.

      Actual AI, in the sense of how we conceive of it at a societal level, is something else. It may very well be that, many years down the line, historians will look back at the ML advancements happening today as a major building block for the creation of that “true” AI of the future, but as-is they are not the same thing.

      To put it another way: what happens if you connect the algorithms controlling a video game NPC to a robotic body? Absolutely nothing. Same deal here.
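
      As a rough illustration (a toy sketch of my own, not any real engine’s code), a typical NPC controller boils down to something like this:

      ```python
      # Minimal finite-state NPC "AI": it maps in-game state to in-game actions.
      from dataclasses import dataclass

      @dataclass
      class GameState:
          player_distance: float  # distance to the player, in game units
          npc_health: int         # NPC hit points

      def npc_decide(state: GameState) -> str:
          """Return one of the actions the game engine knows how to execute."""
          if state.npc_health < 20:
              return "flee"    # scripted retreat
          if state.player_distance < 5.0:
              return "attack"  # scripted melee attack
          if state.player_distance < 20.0:
              return "chase"   # pathfind toward the player
          return "patrol"      # walk a pre-authored route
      ```

      The “intelligence” ends at those strings. Joint torques, balance, sensor fusion: none of those concepts exist anywhere in this controller, so wiring it to motors gets you exactly nothing, no matter how elaborate the decision logic is.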

    • Orphie Baby@lemmy.world
      1 year ago

      It’s not about improvement; it’s that actual AI would be a completely different technology, one that works in a completely different way.

    • eee
      1 year ago

      Not the guy you were referring to, but it’s not so much “improve” as “another paradigm shift is still needed”.

      A “robotic body with sensors” has been around since 1999. But no matter how many sensors, no matter how lifelike, and no matter how many machine learning algorithms/LLMs are thrown in, it is still not capable of independent thought. Anything that goes wrong is still due to human error in setting parameters.

      To get to Terminator-level intelligence, we need the machine to be capable of independent thought. Comparing independent thought to our current generative AI technology is like comparing a jet plane to a horse-drawn carriage: you can call it “advancement”, yes, but there are many intermediate steps that need to happen. Just as the internal combustion engine was the link between horse-drawn carriages and planes, some form of independent thought is the link between generative AI and actually intelligent machines.