• Contentedness@lemmy.nz · 1 month ago

    ChatGPT didn’t nearly destroy her wedding; her lousy wedding planner did. Also, what’s she got against capital letters?

    • bitofhope@awful.systems · 1 month ago

      Yea yea guns don’t kill people, bullet impacts kill people. Dishonesty and incompetence are nothing new, but you may note that the wedding planner’s unfounded confidence in ChatGPT exacerbated the problem in a novel way. Why did the planner trust the bogus information about Vegas wedding officiants? Is someone maybe presenting these LLM bots as an appropriate tool for looking up such information?

      • HedyL@awful.systems · 1 month ago

        Yes, even some influential people at my employer have started to peddle the idea that only “old-fashioned” people are still using Google, while all the forward-thinking people are prompting an AI. For this reason alone, I think that negative examples like this one deserve a lot more attention.

      • GBU_28 · 1 month ago

        Bullet impacts don’t kill people, tissue deorganization and fluid loss kill people!

  • blakestacey@awful.systems · 1 month ago

    “Comment whose upvotes all come from programming dot justworks dot dev dot infosec dot works” sure has become a genre of comment.

  • DannyBoy@sh.itjust.works · 1 month ago

    I can make a safe assumption before reading the article that ChatGPT didn’t ruin the wedding, but rather somebody who was using ChatGPT ruined the wedding.

    • ebu@awful.systems · 1 month ago

      “blame the person, not the tools” doesn’t work when the tool’s marketing team is explicitly touting said tool as a panacea for all problems. on the micro scale, sure, the wedding planner is at fault, but if you zoom out even a tiny bit it’s pretty obvious what enabled them to fuck up for as long and as hard as they did

      • self@awful.systems · 1 month ago

        do you think they ever got around to reading the article, or were they spent after coming up with “hmmmm I bet chatgpt didn’t somehow prompt itself” as if that were a mystery that needed solving

      • null@slrpnk.net · 1 month ago

        “This hammer can’t plan a wedding. Hammers are useless.”

        • self@awful.systems · 1 month ago

          almost all of your posts are exactly this worthless and exhausting and that’s fucking incredible

            • LainTrain@lemmy.dbzer0.com · 1 month ago

            I get the feeling you’re exactly the kind of person who shouldn’t have a proompt, much less a hammer

              • self@awful.systems · 1 month ago

              no absolutely, I shouldn’t ever “have a proompt”, whatever the fuck that means

              the promptfondlers really aren’t alright now that public opinion’s against the horseshit tech they love

                • froztbyte@awful.systems · 1 month ago

                istg these people seem to roll “b-b-b-but <saltman|musk|sundar|…> gifted this technology to me personally, how could I possibly look this gift horse in the mouth” on the inside of their heads

  • o7___o7@awful.systems · 1 month ago

    As a fellow Interesting Wedding Haver, I have to give all the credit in the world to the author for handling this with grace instead of, say, becoming a terrorist. I would have been proud to own the “Tracy did nothing wrong” T-shirt.

        • conciselyverbose@sh.itjust.works · 1 month ago

          Scams are LLMs’ best use case.

          They’re not capable of actual intelligence or of providing anything that would remotely mislead a subject matter expert. You’re not going to convince a skilled software developer that your LLM slop is competent code.

          But they’re damn good at looking the part to convince people who don’t know the subject that they’re real.

      • Pandemanium · 1 month ago

        I think we should require professionals to disclose whether or not they use AI.

        Imagine you’re an author and you pay an editor $3000 and all they do is run your manuscript through ChatGPT. One, they didn’t provide any value, because you could have done the same thing for free; and two, if they didn’t disclose the use of AI, you wouldn’t even know your novel had been fed into one and might be used by the AI for training.

        • bitofhope@awful.systems · 1 month ago

          I think we should require professionals not to use the thing currently termed AI.

          Or, if you think it’s unreasonable to ask them not to contribute to a frivolous and destructive fad, or you don’t think the environmental or social impacts are bad enough to warrant a ban like this, at least maybe we should require professionals not to use LLMs for technical information.