For me it’s detailed descriptions of people’s dreams.

Not only does your story make no sense, but you’re also telling me about something that never actually happened. It’s a bit like recounting an event and then ending the story by saying you made it all up, except with dreams you begin by telling me it’s all made up. I’ve already lost interest before you’ve even started.

  • Thorny_InsightOP · 1 year ago

    As a person who loves talking about AI, I’d like to note that by AI “we” usually mean AGI (artificial general intelligence), not generative AI like ChatGPT or Midjourney.

    • Phanatik@kbin.social · 1 year ago

      I love Sci-Fi. One of my favourite authors is Philip K. Dick, and I’ve written stories about AI.

      I can’t grant that the likes of ChatGPT and Midjourney qualify for the moniker of AI. It would require lowering my standard for what I consider intelligence, such as having a basic degree of awareness. ChatGPT, for example, will contradict itself and hallucinate information (not new information, just irrelevant and incorrect information), sometimes within the same response. This is not intelligence; this is the mere imitation of intelligence, and that is not sufficient.

      • Thorny_InsightOP · 1 year ago (edited)

        The AI that fascinates me is just an idea. No such thing, nor anything really close to it, exists. At least not yet. It’s more of a thought experiment and a philosophical dilemma. That, however, doesn’t make me any less worried about it. This technology keeps making big leaps forward.

        > ChatGPT for example will contradict itself and hallucinate information

        So do humans. I agree, though. At best GPT-4 only shows *signs of* intelligence, still nothing close to AGI. Can’t deny it’s impressive though.

        • Phanatik@kbin.social · 1 year ago

          When it comes to contradictions and hallucinations, people will sometimes put out contradictory information or act hypocritically; the difference is intention. ChatGPT can’t help but produce contradictions and hallucinations because it has no awareness of the information it’s putting forth. It will very convincingly present incorrect information and not know that’s what it’s done.

          It is impressive, except it’s being touted as more impressive than it actually is, and that’s what annoys me: the complete lack of critical evaluation of these models, followed by giving in to survivorship bias.

          That’s all aside from the privacy and copyright concerns.