TL;DR: (AI-generated 🤖)

The author, an early pioneer in the field of aligning artificial general intelligence (AGI), expresses concern about the potential dangers of creating a superintelligent AI. They highlight the lack of understanding and control over modern AI systems, emphasizing the need to shape the preferences and behavior of AGI to ensure it doesn’t harm humanity. The author predicts that the development of AGI smarter than humans, with different goals and values, could lead to disastrous consequences. They stress the urgency and seriousness required in addressing this challenge, suggesting measures such as banning large AI training runs to mitigate the risks. Ultimately, the author concludes that humanity must confront this issue with great care and consideration to avoid catastrophic outcomes.

    • NounsAndWords@lemmy.world
      1 year ago

      I don’t think I’m anthropomorphising, and I think the road construction example is what I was already talking about. It likely won’t care about us for good or bad; that’s the opposite of anthropomorphism. When we build roads, some ants may be inadvertently killed, but “destroy all ants on the Earth” isn’t part of the construction plan. Yes, it can certainly cause harm, but there is a very large range of scenarios between “killed a few people” and “full-on human genocide,” and for many years I have seen people jump immediately to the extremes.

      I think it’s beside the point, but I disagree that an AI (which will be trained on the entirety of human knowledge) would not have at least a passing knowledge of human ethics and values. And while consciousness as we perceive it may not be required for intelligence, there is a point where, if it acts exactly as a conscious human would, the difference is largely semantic.

    • tal@kbin.social
      1 year ago

      For me, the most likely limiting factor is not the ability of a superintelligent AI to wipe out humanity; sure, in theory, it could.

      My guess is that the most plausible limiting factor is that a superintelligent AI might destroy itself before it destroys humanity.

      Remember that we (mostly) don’t just fritz out, or become depressed and die by suicide, or whatever. But we obtained that robustness by living through a couple billion years of iterations of life, in which all the life forms that lacked that property died. You are the children of the survivors and inherited their characteristics; everything else didn’t make it. It was that brutal process, over not thousands or millions but billions of years, that led to us. And even so, we sometimes aren’t all that good at dealing with situations different from the one in which we evolved, like when people are forced to live in very close proximity for extended periods of time.

      It may be that it’s much harder than we think to design a general-purpose AI that can operate at a human-or-above level that won’t just keel over and die.

      This isn’t to reject the idea that a superintelligent AI could be dangerous to humanity at an existential level; just that it may be much harder than it seems to create a superintelligent AI that will stay alive. Obviously, given the potential utility of a superintelligent AI, people are going to try to create it. I’m just not sure they will necessarily succeed.