Saw a suspicious post resurrecting a 5 month old thread, and after a few back and forths:

https://linux.community/comment/3453531

I don’t understand why you are treating me like a robot. However, I can help with the Fibonacci sequence. Here is a Python 3 function to calculate it:

I’m torn, its nice to have activity in the fediverse, but I’m not convinced bots are the right way to go about it. Opinions on the future of engagement bots?

  • Sanctus@lemmy.world
    link
    fedilink
    English
    arrow-up
    40
    ·
    3 months ago

    Bots need to be clearly marked as bots. I dont want to line the fediverse with barbed wire. But I also want transparency on what I am interacting with.

    • doctortran
      link
      fedilink
      English
      arrow-up
      10
      ·
      3 months ago

      I don’t know how much it would really apply here or how enforceable it is but, genuinely, I think the first thing to do with any real discussion about regulating this is a law that anyone providing LLM can’t be providing it to people who are trying to pass it off as human. I know we’ve had bots doing this for kind of thing a long time ago, but this sort of thing should have been done a long time ago too.

  • Admiral Patrick@dubvee.org
    link
    fedilink
    English
    arrow-up
    20
    arrow-down
    2
    ·
    3 months ago

    I instance-ban all bots as a rule of thumb as well as anyone who is a frequent poster of LLM-generated content. I’ve yet to encounter any bot account (LLM-generated, scripted, or otherwise) that’s not annoying, spammy, or both. Some have good intentions and I hate less than others, but at the end of the day, they’re a major source of annoyances.

    Part of why this place is great is engaging with people. I couldn’t care less what a tone-deaf chatbot “thinks” about anything. Lol, one of my site rules since day 1 of running my instance is “No AI/LLM-generated content”, and I enforce that rule vigorously.

    I can’t recall the exact phrasing I used, but I said something in the past on this. It was basically to the effect of “Bots aren’t creating engagement, they’re creating clutter”.

    • threelonmusketeers@sh.itjust.worksM
      link
      fedilink
      English
      arrow-up
      13
      ·
      edit-2
      3 months ago

      I’ve yet to encounter any bot account (LLM-generated, scripted, or otherwise) that’s not annoying, spammy, or both.

      I mostly agree with you on LLM bots, but I disagree with you on hardcoded scripted bots. There are a number of bots which provide useful utility. Community link fixer bot is one such example.

      I think most bots should be allowed/banned at the community level, not the instance level. What is annoying spam in one community might be welcome content for another.

      • Admiral Patrick@dubvee.org
        link
        fedilink
        English
        arrow-up
        7
        ·
        edit-2
        3 months ago

        You’re right; I misspoke. I think there are a few that I have allowed / not instance banned.

        Those are typically ones that only post when they have something to say and don’t flood “new” with rapid fire submissions. Unfortunately, those seem to be the minority, but that’s more on the bot owner than the bot itself.

    • jet@hackertalks.comOP
      link
      fedilink
      English
      arrow-up
      8
      arrow-down
      1
      ·
      3 months ago

      That’s my initial inclination, but I could see value in some conversation starter service, even a hot-take posting bot to get a back and forth going started with humans.

      We have all seen the conversations where someone drops a hot-take, starts a huge argument and walks away… a bot could do that, and give people a anchor for content.

      I’m not saying I approve of this, just that I see it having some utility in some scenarios.

      • Admiral Patrick@dubvee.org
        link
        fedilink
        English
        arrow-up
        7
        ·
        edit-2
        3 months ago

        I can see some utility in that. But here’s how I, personally, view bots on this (or really any) platform:

        I’ll scroll and see a post that’s interesting. Look at the comment button, and it’s got one or two comments. Nice! Potential conversation starter. Click into post, and it’s a bot-generated summary, Piped link, MBFC lookup (that’s the bot I don’t hate as much), and/or some other tone-deaf bot take. Disappointment ensues.

        “Well, I don’t have anything to say on this yet, so I guess I’ll check back later” is typically how that goes. Other times, I’ll start a thread and usually get some replies going. In either case, the bot has added no value to the experience. (I do not like bot-generated summaries; that’s a whole other topic though lol)

        Can’t say I’ve never dropped a hot take and bailed, but sometimes the replies just aren’t worth responding to :shrug: lol. Though, I usually do try to reply to anyone who makes the effort to respond (and in good faith).

        To me, bot submissions just give the illusion of content and activity but lack substance. Yeah, they could be conversation starters, but more often than not, they’re just extra noise to tune out. I have no interest in having a conversation with a bot. The only words I have ever or will ever speak to a bot is “let me speak to a human” lol.

        • jet@hackertalks.comOP
          link
          fedilink
          English
          arrow-up
          3
          arrow-down
          1
          ·
          3 months ago

          Your right of course, this bot I spoke with in the post denied they were a bot, for 3 messages! Gas lighting? Astro turfing?

          Honestly, if their message wasn’t totally tone deaf, and 5 months too late, and referencing context in a cross-post but not the local post, I might have just thought they were doing a bad human take. i.e. the overlap between the dumbest human and the smartest bear is large. So this is pretty close to confusing me as a bad human take

          • themoonisacheese@sh.itjust.works
            link
            fedilink
            English
            arrow-up
            5
            ·
            3 months ago

            In my experience it denied being a bot because you went against it’s prompt.

            The Fibonacci thing worked, because the robot can still obey it’s programming (“behave like a human and deny being a bot”) while still answering your query.

          • Admiral Patrick@dubvee.org
            link
            fedilink
            English
            arrow-up
            3
            ·
            3 months ago

            Lol, I read that.

            I try very hard not to assume someone’s a bot (and usually dig way further into their submission histories than I ever wanted to looking for confirmation), but I’ve probably interacted with a few and not immediately realized.

      • Emotional_Series7814@kbin.melroy.org
        link
        fedilink
        arrow-up
        4
        ·
        3 months ago

        Fake engagement to drive more users here seeking engagement, thinking they are interacting with real people. Not a fan of the deception, but I read somewhere on the Fediverse (do not remember the source, or if this is true!) that Reddit started this way, and eventually got a huge amount of real people. I do not want to talking to an LLM on here, but I wonder if I’d be against LLMs pretending to be people in the comments if I knew the tradeoff would be the Fediverse growing, as itself and not some thing taken over by a corporation, with more actual humans to talk to about my interests with. The thing is, I don’t know if that outcome would occur for sure.

        Although your comment made me think: bots dropping hot takes do not get upset when people get toxic in the comments :P

        • cabbage@piefed.social
          link
          fedilink
          arrow-up
          7
          ·
          3 months ago

          If this place gets overrun by bots it doesn’t matter if the Fediverse becomes successful - we will already have lost.

          I’m here because I want to avoid shit like that. And growth shouldn’t be a goal in its own right, but a consequence of doing other things right.

          • Emotional_Series7814@kbin.melroy.org
            link
            fedilink
            arrow-up
            1
            ·
            3 months ago

            The reason I want it to grow is because I want to talk to other people about my interests without having to use Reddit. Not just me being the only poster about it.

            I did say that I wondered if I’d accept bots if I was guaranteed the above outcome, with other humans, not with it being overrun with bots. I also don’t want to talk to LLMs. The reality is we do not know for sure if botting the place up will help it grow, and botting it makes it unpleasant for users now, so I am against it.

            • cabbage@piefed.social
              link
              fedilink
              arrow-up
              2
              ·
              3 months ago

              Yeah, I understand your argument, and it’s fair in its way. For me personally though, this is the opposite of what I would want from the internet, so I would of course absolutely hate it.

              I guess everything is fair as long as it’s honest - as long as they’re marked as bots, instances can do whatever they want. I’d end up blocking bots though.

              If I find out the people I talk to here are just LLM models I’ll leave in a heartbeat and never look back.

  • threelonmusketeers@sh.itjust.worksM
    link
    fedilink
    English
    arrow-up
    15
    ·
    edit-2
    3 months ago

    I don’t think bots are a good way to boost engagement, but I don’t think all bots should be banned either.

    In The Other Place, I enjoyed labeled bots which performed a clear function or service, and replied only in specific circumstances, such as when they were summoned or a key phrase was mentioned.

    Examples: stabbot, more JPEG auto, metric converter.

    Can you think of any other examples of “useful bots”?

    • Emotional_Series7814@kbin.melroy.org
      link
      fedilink
      arrow-up
      7
      ·
      3 months ago

      I could swear there was a community link fixer bot, which is pretty useful for people reading comments, trying to click a link to a community, and getting an error. Bot has the correct link as a reply.

      Community-specific bots can be quite helpful. NameThatSong on Reddit had a bot that would run your post through song recognizer bots if your post had audio, to try to help the poster identify the song. I found it useful. I should probably figure out how to make a similar bot for !NameThatSong@lemmy.wtf someday.

      • Elevator7009@kbin.run
        link
        fedilink
        arrow-up
        3
        ·
        3 months ago

        Some people are engaging with the weekly posts on !incremental_games@incremental.social (sadly having federation issues) that are in theory using a bot, but in practice mods have been manually making the post. But people engaged when it was actually the bot posting too.

        !fedigrow@lemm.ee, !newcommunities@lemmy.world definitely have weekly posts that get interacted with. !letstalkaboutgames@feddit.uk used to, they are not regularly posted anymore, but when they are people interact. However, on all those communities, as far as I know (I think the post scheduler posts with your account so for all I know the bot could post it?), humans are making the bog-standard “what is going on in your community/active communities/what are you playing this week?” posts and I wonder if the fact a human is posting is what is driving the engagement there.

    • kofe@lemmy.world
      link
      fedilink
      English
      arrow-up
      3
      ·
      3 months ago

      It may not have been useful, but the gandolf and gronk bots provided many entertainment. My joyful emotion was used, at least

  • pelletbucket
    link
    fedilink
    English
    arrow-up
    11
    ·
    3 months ago

    if I found out that a community was using chatbots, I would leave that community.

  • DarkThoughts@fedia.io
    link
    fedilink
    arrow-up
    8
    arrow-down
    1
    ·
    3 months ago

    If I wanted to chat to bots I’d be on Reddit, or launch Kobolcpp. Service bots are of course okay, but not bots pretending to be actual users.

  • jet@hackertalks.comOP
    link
    fedilink
    English
    arrow-up
    7
    arrow-down
    1
    ·
    edit-2
    3 months ago

    @AnarchistsForKamala@lemmy.world You reported my verifying the LLM bot as uncivil? You made me laugh! I was being polite to the bot in question, it’s a very nicely written bot, it even upvoted my comments to it.

    What is your expectation around LLM bot behavior and Lemmy?

    • KⒶMⒶLⒶ WⒶLZ 2Ⓐ24@lemmy.world
      link
      fedilink
      English
      arrow-up
      5
      arrow-down
      3
      ·
      3 months ago

      i expect admins and mods to deal with bots quietly. filling the comment sections with chatter is bad. encouraging users to fill the comment section with chatter is bad. encouraging users to treat other users as machines is bad.

      • jet@hackertalks.comOP
        link
        fedilink
        English
        arrow-up
        4
        ·
        3 months ago

        I recognize how it would be rude to accuse a human of being a LLM/bot. That’s a good point

        This is the first time I’ve seen a obvious LLM bot in the wild on lemmy, so I was trying to get it to definitively out itself. (which it later did)

        I’m a little worried if the community rule is to ignore LLM bots when they appear in the comments, then they could become quite the elephant in the room. Most mod actions happen hours/days after the activity has already passed, so even if mods are 100% successful in removing LLM content, most of the experienced interaction people have will already be with the LLM bots.

        • KⒶMⒶLⒶ WⒶLZ 2Ⓐ24@lemmy.world
          link
          fedilink
          English
          arrow-up
          4
          arrow-down
          2
          ·
          3 months ago

          if the community rule is to ignore LLM bots when they appear in the comments

          it should be encouraged for people to report bots. that’s not ignoring.

          • jet@hackertalks.comOP
            link
            fedilink
            English
            arrow-up
            4
            ·
            3 months ago

            True, but the overlap between the best LLM and the most oblivious human is rather large, there needs to be a smoking gun for a moderator to see a poster is undeniably a bot, there has to be some interaction with the bot to get to that point.

              • jet@hackertalks.comOP
                link
                fedilink
                English
                arrow-up
                5
                ·
                3 months ago

                I’m confused, accusing humans in comments of being a bot is rude, but banning people on the suspicion of being a bot so that they have to appeal to unban their account is better?

                • KⒶMⒶLⒶ WⒶLZ 2Ⓐ24@lemmy.world
                  link
                  fedilink
                  English
                  arrow-up
                  2
                  arrow-down
                  4
                  ·
                  3 months ago

                  when the appeal comes in, are you going to deny it?

                  this can be a very quiet exercise, without implying to other users that the user in question might be a bot. by contrast, just probing it out in the open taints that users interactions.

              • chicken@lemmy.dbzer0.com
                link
                fedilink
                English
                arrow-up
                3
                ·
                3 months ago

                When I’m banned from things I don’t appeal because I don’t trust the intentions of moderators and making such a request to someone acting in bad faith is humiliating. I think anyone coming from Reddit will probably be reluctant to appeal a ban.

              • Luke@lemmy.ml
                link
                fedilink
                English
                arrow-up
                3
                ·
                3 months ago

                I don’t know that we can necessarily rely on bot creators to never implement automated ban appeals.

        • KⒶMⒶLⒶ WⒶLZ 2Ⓐ24@lemmy.world
          link
          fedilink
          English
          arrow-up
          3
          arrow-down
          2
          ·
          3 months ago

          Most mod actions happen hours/days after the activity has already passed, so even if mods are 100% successful in removing LLM content, most of the experienced interaction people have will already be with the LLM bots.

          users should still be discouraged from doing your probing anyway. mods should be encouraged to be involved.