I’ve started to realize that every social media platform, including Facebook, Telegram, and Twitter, has issues with bot spam and fake follower accounts. These platforms typically combat the problem with measures such as ban waves and behavior detection.

What strategies/tools does Lemmy employ to address bots, and what additional measures could further improve these efforts?

  • Otter
    36 points · 16 days ago

    Currently, it’s mostly manual removals, which isn’t sustainable if the platform grows. Various instances are experimenting with their own moderation tools outside of Lemmy, and I don’t think Lemmy itself has any features to combat this. Moderation improvements are something that’s been talked about with Sublinks.

    What additional measures could further improve these efforts?

    Having an ‘automod’ similar to, but more advanced than, Reddit’s would help a lot as a first step. No one likes excessive use of automod, but not having one at all would be much worse. An improved automod system, with guides and tips on how to use it effectively, would go a long way towards making moderation easier (a rough sketch of the kind of rule-based filtering I mean is below).
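
    A minimal sketch of what such a rule-based automod could look like, in Python. Everything here is hypothetical and illustrative: the rules, thresholds, and the Post structure are assumptions for the sake of the example, not an existing Lemmy or Sublinks feature.

    ```python
    # Hypothetical rule-based automod sketch (not an existing Lemmy feature).
    # Each rule is a simple check over a post; matching posts get flagged
    # for manual moderator review rather than removed automatically.

    import re
    from dataclasses import dataclass


    @dataclass
    class Post:
        author_age_days: int       # how old the author's account is
        author_comment_count: int  # rough measure of prior activity
        body: str


    # Illustrative rules a moderator might configure; all values are made up.
    SPAM_PATTERNS = [
        re.compile(r"buy\s+cheap", re.I),
        re.compile(r"\b(viagra|oxycontin)\b", re.I),
    ]
    MIN_ACCOUNT_AGE_DAYS = 2


    def flag_reasons(post: Post) -> list[str]:
        """Return the reasons (if any) this post should be held for review."""
        reasons = []
        for pattern in SPAM_PATTERNS:
            if pattern.search(post.body):
                reasons.append(f"matched spam pattern: {pattern.pattern}")
        if post.author_age_days < MIN_ACCOUNT_AGE_DAYS and "http" in post.body:
            reasons.append("link posted by a very new account")
        if post.author_comment_count == 0 and post.body.count("http") > 2:
            reasons.append("many links from an account with no history")
        return reasons


    if __name__ == "__main__":
        example = Post(author_age_days=0, author_comment_count=0,
                       body="Buy cheap Viagra now! http://spam.example/")
        print(flag_reasons(example))
    ```

    In practice something like this would only flag posts for manual review rather than auto-remove them, so false positives stay cheap.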

    • I think the right strategy is providing all the tools; after that, the instances themselves have to stay attractive. That’s not on the developers, that’s on the instances.

  • @JohnDClay@sh.itjust.works
    24 points · 16 days ago

    We’re not mainstream enough to have many bots yet. I think some instances have needed to deal with bot spam, but I haven’t seen any in the community I moderate.

    • HubertManne
      5 points · 16 days ago

      I don’t know if it’s Lemmy or other parts of the federation, but I see plenty of drug spam and other stuff that I suppose could be posted manually, though my guess is it’s bots.

      • @Cheradenine@sh.itjust.works
        6 points · 16 days ago

        That’s a kbin thing. I have never seen ‘buy cheap Viagra, Oxycontin, etc.’ on Lemmy. It probably exists, but whenever I block and report a user, they’re from kbin.

  • Emily
    11 points · 16 days ago

    As a moderator of a couple of communities, some basic/copypasta misbehaviour is caught by automated bots that I largely had to bootstrap or heavily modify myself (a rough sketch of the kind of check I mean is below). Nearly everything else has to be manually reviewed, which obviously isn’t particularly sustainable in the long term.
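
    A rough sketch of the kind of copypasta check described above, again in Python. The normalisation, the in-memory counter, and the threshold are all assumptions for illustration, not the actual bots mentioned here.

    ```python
    # Hypothetical copypasta detector: flags comments whose normalised body
    # has already been seen several times recently (illustrative thresholds).

    import hashlib
    import re
    from collections import Counter

    seen = Counter()      # normalised-body hash -> how often it has appeared
    REPEAT_THRESHOLD = 3  # flag once the same body shows up this many times


    def normalise(body: str) -> str:
        """Lower-case and strip URLs/punctuation so trivial edits still match."""
        body = re.sub(r"https?://\S+", "", body.lower())
        return re.sub(r"[^a-z0-9 ]+", "", body).strip()


    def is_copypasta(body: str) -> bool:
        digest = hashlib.sha256(normalise(body).encode()).hexdigest()
        seen[digest] += 1
        return seen[digest] >= REPEAT_THRESHOLD


    if __name__ == "__main__":
        spam = "Buy CHEAP meds here!!! https://spam.example/"
        for i in range(4):
            print(i, is_copypasta(spam))  # flips to True on the third sighting
    ```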

    Improving the situation is a complex issue, since these kinds of tools often require a level of secrecy that is incompatible with FOSS principles if they are to work effectively. If you publicly publish your model or algorithm for detecting spam, spammers will simply test their content against it and craft it to avoid detection (a toy example is below). This problem extends to accessing third-party tools, such as the specialised services Microsoft and Google provide for identifying and reporting CSAM to authorities. They are generally unwilling to provision those services to small actors, IMO in an attempt to stop producers from testing and manipulating their content to subvert the tool.
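
    A toy illustration of that testing loop, assuming a hypothetical published regex filter; no real spammer tooling or real filter is being described here.

    ```python
    # Toy illustration: if the filter is public, a spammer can mutate their
    # message offline until it no longer matches (purely illustrative).

    import re

    PUBLIC_FILTER = re.compile(r"\b(viagra|oxycontin)\b", re.I)  # published rule


    def passes_filter(text: str) -> bool:
        return PUBLIC_FILTER.search(text) is None


    def evade(text: str) -> str:
        """Apply trivial obfuscations until the public filter stops matching."""
        candidates = [
            text,
            text.replace("a", "@"),
            text.replace("i", "1"),
            " ".join(text),  # space out every character
        ]
        for candidate in candidates:
            if passes_filter(candidate):
                return candidate
        return text  # give up; a real attacker would keep generating variants


    if __name__ == "__main__":
        msg = "buy cheap viagra now"
        print(evade(msg))  # an obfuscated variant the public rule misses
    ```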