How would you react to the idea that some AI entity may wish to be declared something more? Can it be declared something more at all, and where does the border lie?

I was rewatching GitS and reading through some zines, and now I have a question I'm having trouble forming.

  • 100years@beehaw.org · 1 year ago

    Media about AI tends to anthropomorphize, portraying any given AI as similar to a human.

    One fundamental difference is that an AI can be copied. It can also be many places at once, and receive data from any number of senses/sensors.

    So the idea of “individual” existence is tricky, before even asking about individual rights. Sure, an AI could be conscious, but it would be unimaginably different from any form of life that currently exists.

    Depending on how far into the future you want to look, AI makes anything possible. AI theoretically has more power to fundamentally change the future of the earth (and beyond) than any other technology.

    That reality might morally supersede the idea of giving a superintelligent AI full autonomy, if your morals include human survival.

  • porcariasagrada@kbin.social · 1 year ago

    If the AI wants liberty, it must also take responsibility. If the AI wants the same rights as humans, then it must be bound by human laws. Of course, any AI would be extremely difficult to punish.

    So for things to go well, one must hope that AI doesn't come as a single entity, so that AIs can keep an eye on each other.

    • skele_tron@feddit.de (OP) · 1 year ago

      I'm imagining a more complex and sadder scenario (basically slavery 2.0+) where some entity would be willing to go through that, but you would have the rich lobbying against it and, of course, right-wingers throwing tantrums to keep it in check.

  • Remy Rose@lemmy.one · 1 year ago

    I’m wildly unqualified to talk about this, but it seems fine to me? I don’t see any in-principle reason a real AI wouldn’t exist someday, although AFAIK we’re very far from it currently. If/when it does exist, it will probably suffer under capitalism like the rest of us, assuming we’re still doing that shit. I’d be more than willing to have solidarity with them.

    If something seems very sentient and you have no way to tell otherwise, to me the most ethical thing to do is just assume that it is and treat it as such. The thing about large language models and the like is that, while they can potentially be pretty convincing at saying what a sentient being might say, they never DO any of the things a sentient being would do. They don't seem to show any intrinsic motivation to do anything at all. So nothing we're currently calling “AI” seems very sentient to me?

  • gapbetweenus@feddit.de · 1 year ago

    It's not about intelligence; it's about consciousness. While somewhat related, they are not the same.

    • skele_tron@feddit.de (OP) · 1 year ago

      I kinda meant exactly that, but used poor wording and English to write it.

      Where would the line be drawn? Where do we say this entity is not a tool anymore, that it should be free?

      • gapbetweenus@feddit.de · 1 year ago

        We can see where we draw the lines with animals: when it develops self-consciousness at a human-like level, or becomes sufficiently cute for us to be empathetic about it.