Welcome to the RD thread!

This is a place for casual random chat and discussion.

A reminder for everyone to always follow the community rules and observe the Code of Conduct.

Footnotes:

  • Daily pixel art by Paul Sabado
  • Report inappropriate comments and violators
  • Message the moderation team for any issues
      • s08nlql9 · 1 year ago

        yep, also so that those suspicious accounts don’t cause any issues

    • megane-kun · 1 year ago

      From one of the comments in the linked thread:

      Obviously we can make inferences based on active users and speed of growth, but a smart person could minimize those signs to the point of being unnoticeable.

      Off the top of my head, correlating the growth of new accounts to the growth of active users (plus some lag) could be a useful heuristic here. Preemptively defederating from suspicious instances (based on the heuristic I’ve just given, or some other, more refined one) could also be useful until better tools against this are available.

      I’ve noticed from looking at pretty graphs that there’s usually a correlation between new registrations and user activity across the board. However, since bot accounts don’t necessarily generate activity (for good reason: a lot of those bots are either made by well-meaning people demonstrating the danger, or are kept as sleeper agents until they’re needed), one can reasonably guess that an instance with an abnormally high rate of new user registrations, without a corresponding rise in user activity, is a bot-infested instance.
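To make the heuristic concrete, here’s a minimal sketch of what flagging could look like. All instance names, field names, and the threshold are made up for illustration; real data would come from an instance’s stats endpoint:

```python
# Hedged sketch: flag instances whose new-account growth far outpaces
# their growth in active users. Thresholds and data are invented.

def flag_suspicious(instances, ratio_threshold=10.0):
    """Return names of instances where weekly registrations far exceed
    weekly active-user growth (a possible sign of sleeper bot accounts)."""
    flagged = []
    for name, stats in instances.items():
        regs = stats["new_registrations"]
        activity = stats["active_user_growth"]
        # Guard against division by zero when activity growth is flat.
        ratio = regs / max(activity, 1)
        if ratio > ratio_threshold:
            flagged.append(name)
    return flagged

example = {
    "friendly.example": {"new_registrations": 120, "active_user_growth": 90},
    "botfarm.example": {"new_registrations": 5000, "active_user_growth": 12},
}
print(flag_suspicious(example))  # ['botfarm.example']
```

The lag mentioned above could be handled by comparing this week’s registrations against next week’s activity instead of the same week’s.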

      Of course, this could push bots into artificially inflating activity through spamming, and that needs action as well.

      Everything I’ve said here has already been discussed in the comment thread I lifted the quote from. Of particular note to me is this:

      A simple way to do so is to just start with a handful of popular, well run instances and consider them trustworthy. Then they can vouch for any other instances being trustworthy and if people agree, the instance is considered trustworthy. It would eventually build up a network of trusted instances. It’s still decentralized. Sure, it’s not as open as before, but what good is being open if bots and trolls can ruin things for good as soon as someone wants to badly enough?
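The vouching scheme in that quote amounts to a transitive web of trust: start from a seed set and expand along vouch edges. A rough sketch (seed instances and vouch edges are illustrative only, not real endorsements):

```python
# Hedged sketch of the quoted vouching idea: breadth-first expansion of
# trust from a seed set through vouches. All edges here are invented.

from collections import deque

def trusted_set(seeds, vouches):
    """seeds: iterable of initially trusted instance names.
    vouches: dict mapping an instance to instances it vouches for.
    Returns the transitive closure of trust."""
    trusted = set(seeds)
    queue = deque(seeds)
    while queue:
        inst = queue.popleft()
        for candidate in vouches.get(inst, []):
            if candidate not in trusted:
                trusted.add(candidate)
                queue.append(candidate)
    return trusted

vouches = {
    "seed-a.example": ["mid.example"],
    "mid.example": ["small.example"],
}
print(sorted(trusted_set(["seed-a.example"], vouches)))
# ['mid.example', 'seed-a.example', 'small.example']
```

The “if people agree” step in the quote would sit between a vouch and its acceptance; this sketch skips that moderation step.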

      I think the opposite approach, like how it’s done in ad-blockers, would also work: a list of instances that are considered bot-infested and untrustworthy. Basically, a block list. Such lists would be made publicly available, and instances could be more strict, and hence more liberal in adding instances to the list (Beehaw.org is one example I immediately think of that would do things this way), or less strict.
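In ad-blocker terms, an admin would subscribe to one or more published lists and union them, with a local allow-list to override entries they disagree with. A minimal sketch, with invented list contents:

```python
# Hedged sketch of the block-list approach: union subscribed lists,
# minus local allow-list overrides. All entries are invented examples.

def effective_blocklist(subscribed_lists, local_allow=frozenset()):
    """Combine all subscribed blocklists into one set, then remove any
    instances the local admin has explicitly allowed."""
    blocked = set()
    for entries in subscribed_lists:
        blocked.update(entries)
    return blocked - set(local_allow)

strict_list = {"botfarm.example", "spamhub.example"}
lenient_list = {"botfarm.example"}
print(sorted(effective_blocklist([strict_list, lenient_list],
                                 local_allow={"spamhub.example"})))
# ['botfarm.example']
```

A stricter instance would subscribe to more (or more aggressive) lists; a less strict one would subscribe to fewer, which mirrors the Beehaw-style spectrum described above.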

      Having said all that, I have a suspicion that this is already how things work currently, just a bit more ad hoc and “manual”. Also, this is probably way over my head.

      Anyways, for whatever it’s worth, that’s my two cents.