• 22 Posts
  • 64 Comments
Joined 5 months ago
Cake day: January 26th, 2024


  • For what it’s worth, I completely agree that threatening historical artifacts to get people’s attention is counterproductive. I looked over Just Stop Oil and I don’t agree with all of their tactics. Promoting some other type of action sounds better, to me.

    But on the other hand at least they are doing something. If 10% of the world cared as much as they do, we’d have a much better chance of taking effective action against the apocalypse that’s coming. As it stands right now, billions will die. We probably can’t avoid that anymore, but we can reduce the number of billions, and the quality of the wreckage we’ll get to inhabit in 100 years.

    You can:

    • Join Just Stop Oil, participate in the good actions, and object to (and sit out) the bad ones.
    • Or, join some other group whose actions are aligned better with what you think is a good way to accomplish the goal.
    • Or, pick someone who is actively harming the climate on a global scale every single day, on purpose, and direct your constructive internet criticism at them.
    • Or, out of all the universe of actions you could take in reference to the coming global hellstorm, pick a 10% segment that’s doing something not quite right, out of the 0.1% segment that even cares at all, and point all your “here’s what you should do better” feedback directly at them.

    To me, doing one of the first three makes more sense than the fourth. Again, I won’t say you’re wrong, but disengaging from action altogether isn’t the solution either.



  • I don’t want to go into any detail on how it works. Your message did inspire me, though, to offer to explain and demonstrate it for one of the admins so there isn’t this air of secrecy. The point is that I don’t want the details to be public and make it easier to develop ways around it, not that I’m the only one who is allowed to know what it is doing.

    I’ll say that it draws all its data from the live database of a normal instance, so it’s not fetching or storing any data other than what every other Lemmy instance does anyway. It doesn’t even keep its own data aside from a little stored scratch pad of its judgements, and it doesn’t feed comment data to any public APIs in a way that would give users’ comments over to be used as training data by God knows who.
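    Without revealing the actual judgement logic, the data flow described above can be sketched roughly like this. Everything here is illustrative: the table and column names are made up (real Lemmy uses Postgres, not SQLite), and `judge()` is a dummy placeholder for the undisclosed model:

```python
import sqlite3

# Stand-in for the instance's live database (Postgres in real Lemmy).
# The bot only READS data that every Lemmy instance already stores anyway.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE comment (id INTEGER, creator TEXT, content TEXT)")
db.execute("INSERT INTO comment VALUES (1, 'alice', 'Here is a reasoned argument...')")

# The scratch pad of judgements: the only data the bot keeps for itself.
db.execute("CREATE TABLE judgement (creator TEXT PRIMARY KEY, score REAL)")

def judge(text: str) -> float:
    """Dummy placeholder for the actual (undisclosed) judgement logic.
    Crucially, it runs locally and calls no external API."""
    return 1.0 if len(text) > 20 else -1.0

# Read comments from the normal database, write judgements to the scratch pad.
for creator, content in db.execute("SELECT creator, content FROM comment").fetchall():
    db.execute("INSERT OR REPLACE INTO judgement VALUES (?, ?)", (creator, judge(content)))
```

    The point of the sketch is the shape of the pipeline, not the logic: comments never leave the machine, and the only new state is the small judgement table.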


  • Other things that have occurred to me in the meantime:

    1. I’m fine with explaining how it works to one of the slrpnk admins in confidence. We can get in Matrix, I can show the code and some explanation, and depending on how it goes I might even be fine giving access to the same introspection tools I use, to examine in detail going forward why it made some particular decision and if it’s on the right track. The point is not that I’m the only one who’s allowed to understand it, just that I don’t want it to become common knowledge.
    2. I’m not excited to be a “full time” moderator, for reasons of time investment and responsibility level. Just like with !inperson@slrpnk.net, I want to be able to create this community because I think it is important, not necessarily to “run it” so to speak. My preferred perfect trajectory in the long run is that it becomes a tool that people can use to automate moderation for their own communities, if it can prove useful, instead of just being used by me to run my own little empire. I just happen to think that this type of bad-actor-resistant political community would be a great thing on its own, as well as a good test of this automated approach to moderation of communities political and otherwise.

  • Perfectly reasonable. It’s not feeding any users’ comments into any public LLM API like OpenAI that might use them to train models in the future. As a matter of fact, it’s not communicating with any API or web service at all; it’s fully self-contained on the machine that runs it.

    As far as transparency, I completely get it. I would hope that the offer to point to specific reasons for any user who wants to ask why they can’t post will help to alleviate that, but it won’t make it completely go away. Especially because, as I said, I’m expecting that it will get its decisions wrong some small percentage of the time. I just know there’s an arms race between moderation tooling and the people trying to get around it, and I don’t want to give bad actors a leg up in that competition, even though there are very valid reasons for openness in terms of giving people cause to trust that the system is honest.


  • Yes, this is an attempt at something similar. I think the reality is that when things grow beyond a certain size you have to do some automated moderation things or else it gets overwhelming for the mods. This is an attempt at a new model for that, since I think human moderation of everything has a couple of different flaws, and some of the automated things reddit did had glaring flaws.



  • Thanks. Let’s see what happens.

    I don’t anticipate it being a “working against” thing for an overwhelming majority of people. Most people’s experience should be simply that they get to talk about politics without a bunch of disruptive comments all over the place.

    You’re right that anything I can do to show transparency will help create that, because it would be easy to interpret the place as a “working against” thing where everyone has to be obedient to my way or else I’ll ban them, even if the bot works perfectly and there’s no reality to that at all. More likely, everything won’t work perfectly, and some small number of people who are fine will legitimately wind up tangling with the bot.

    I do anticipate there will be a certain population that will get very upset that they’re not allowed to come in and make whatever type of hostile or disruptive comments they want, and make a big stink about how it’s grossly unfair that I am running the community like my own little echo chamber and kicking out any unpopular opinion, even though 99% of the time nothing like that is happening. I plan to ignore those people.


  • I like the general thinking of these. I was aiming with this bot to achieve very similar things. Meaning, certain types of discussions are impossible on the internet right now because there’s no penalty for being a jerk or hard to talk to, as long as you’re within the bounds of the community rules. The types of discussions that I want to make possible are very similar to the conversations you’re talking about in these communities.



  • Yes. That probing on the part of bad actors is part of why I don’t want to explain anything about how it works even though that raises massive transparency questions. I’m happy to point out a message to any particular person who has a question and say “Here is the kind of thing you did, that you can’t do anymore if you want to post here,” but I definitely don’t want to draw out a little roadmap for how to trick the bot.

    Mostly the process is for the 95% of people that it is fine with to just talk as they want to, and for anyone that’s in the 5% to have an avenue to ask reasonable questions, and then run the experiment, and see what happens.

    And yes, I’ll certainly abide by whatever your decision is about whether this is the place to try it out. Making it about news in general (bringing that to slrpnk without the bickering that comes with it whenever anything political comes in) sounds like it might be a real positive for the instance. Making it about politics (as I did in my original pitch), now that I think about it, sounds a little bit wrong. But let me know what you and everyone thinks.


  • You are not banned. The number of users from slrpnk that are banned is very small.

    “Ban” is not quite the right word, since the decision always flexes with current behavior. Maybe that’s me whitewashing my own propaganda about how good an idea it is, but I pictured it more as this model: the user in question hasn’t yet met the bar of productive discussion to be let in, at the present time.

    Maybe the bot should be called elitistbot.

    And yes, if you are being racist or something, the bot is not needed and the mods and admins would give you an actual ban of the permanent kind. This is about detecting misbehavior at a more subtle and forgivable level than that, and reacting to it with a more temporary action.


  • My vision is that if some person is unable to post, and wants to post asking why, I can give them some sort of answer (similar to what I said to Alice in another message here). The ban decision is never permanent, either, it’s just based on the user’s recent and overall posting history. If you want to be on the whitelist, there’s specific guidance on what you “did wrong” so to speak, and if you decide the whole thing is some mod overreach one viewpoint whitewash and you want no part of it, that’s okay too. My hope is that it winds up being a pleasant place to discuss politics without being oppressive to anyone’s freedom of speech or coming across as arbitrary or bad, but that is why I want to try the experiment. Maybe the bot in practice turns out to be a capricious asshole and people decide that it (and me) are not worth dealing with.

    The whole model is more of a private club model (we’ll let you in but you have to be nice), different from the current moderation model. The current implementation would want to exclude about 200 users altogether. Most are from lemmy.world or lemmy.ml (And 3 from slrpnk. I haven’t investigated what those people did that it didn’t like.)

    Specific answers to your questions:

    1. Only after. At this scale it would be unworkable to talk to every single person beforehand. But talking to people afterward, if they wanted to post and found out they couldn’t, is an important part of the transparency, I think.
    2. I think necessarily yes. I envision a community which is specifically for ban complaints and explanations for people who want them, although maybe that would develop as a big time sink and anger magnet. I would hope that after a while people have trust that it’s not just me secretly making a list of people I don’t like, or something, and then that type of thing would quiet down, but in the beginning it has to be that way for there to be any level of trust, if I’m trying to keep the algorithm a secret.
    3. It’s a fair question. Explaining how the current model works exposes some ways to game the system and post some obnoxious content without the bot keeping you out. But, I like the current model’s performance at this difficult task. So I want to keep the way it works now and keep it secret. I realize that’s unsatisfying of course. I’m not categorically opposed to the idea of publishing the whole details, even making it open source, so people can have transparency, and then if people are putting in the effort to dodge around it then we deal with that as it comes.
    4. None.
    5. Not at all.

    I thought about calling the bot “unfairbot”, just to prime people for the idea that it’s going to make unfair decisions sometimes. Part of the idea is that because it’s not a person making personal decisions, it can be much more heavy handed at tone policing than any human moderator could be without being a total raging oppressive jerk.
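    The “never permanent, always based on recent behavior” part can be sketched as a decision that is recomputed from a rolling window of judgements every time someone tries to post, so a bad stretch ages out on its own. The window size and threshold here are invented for illustration and are not the real parameters:

```python
from collections import deque

RECENT_WINDOW = 20  # hypothetical: only the last N judged comments count
THRESHOLD = 0.0     # hypothetical bar for "productive enough on average"

def may_post(scores: deque) -> bool:
    """Recomputed on every posting attempt: there is no permanent ban list,
    only the user's recent judged behavior."""
    window = list(scores)[-RECENT_WINDOW:]
    if not window:
        return True  # no history yet, let them in
    return sum(window) / len(window) >= THRESHOLD

history = deque([-1.0, -1.0, -1.0])  # a hostile stretch: currently gated
history.extend([1.0] * 10)           # productive comments age the gate out
```

    Because nothing is ever written to a ban list, “getting back in” requires no appeal process at the data level; behaving productively is the appeal.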


  • Right. Or, even if people aren’t getting openly hostile, if they’re just not being productive with how they approach the discussion. It’s for an exchange of ideas and not for shouting opinions in short hostile bursts and nothing more.

    Compare it to a big party at someone’s house, where you can come in if you’re a communist or a jock or a DJ or whoever you are, but if you’re openly being annoying to people, you may have to go, and there’s a rough understanding of who is and isn’t supposed to be there and the social contract. In contrast to a bar, where there are some baseline legalistic rules but nothing to prevent any random person from “having a right to be there” even if they’re kind of being a jerk.

    Maybe it can come with a guidebook about what to do so the bot won’t activate Judge Dredd mode on you.

    Maybe this is just my imagination at work, but I think it’s a good idea.


  • I know, I was just trying to give a frame of reference for what the level of ban-worthiness would be.

    So you’re okay if I try this experiment? Looking now at how it might play out, I admit I’m having second thoughts about whether it’s even a good fit for this instance. Maybe something like “pleasant news” would be better, where people can post news stories, even about political or geopolitical topics, but the actors who like to turn the comments into a war zone are removed at a much lower threshold. Tell me what you think, though; I also want to think about it a little bit more.


  • Sure. I’ll take some time for a detailed answer:

    Question One: I already said; it’s nothing to do with the user’s politics. What you’re saying about the flaws in the normal moderation model, I agree with. In practice I have seen political moderation boil down to “you better be leftist or we’ll ban you,” or else “anything goes unless you’re crossing certain way-too-loose boundaries, but if you just make the conversation unpleasant for everyone, that’s fine.” That’s exactly why I would like to try a third way that works by a different model.

    I just checked, and you would be banned. Not for anything political, but for things like this and this. Maybe I shouldn’t have brought the word “asshole” into it, because neither of those comments is any kind of asshole thing. But the point is, there’s a high bar. If you came in saying “Karl Marx is wrong and here is why” or the same for Biden or Trump or Bernie Sanders, or Swedish politicians, I think you’d be fine.

    Your comment about how Sweden should ban rape first is really a perfect example of what this is specifically intended to pick up on. If it decides that means you’re not allowed to post, it’s working as I intended. Whether that’s a feature or a bug depends on your viewpoint, of course.

    Question Two: Yes, it should be very clear what’s going on. The whole point of the community is to offer this moderation model for people who want to be in that community. But like I say, think of it more like a whitelist. You don’t really have to do anything “wrong” for it to not let you in.

    If there’s a way to set the CSS so people have a warning about what’s going on, also, that sounds good to me.

    Question Three: The Lemmy political communities are maybe one third people talking about politics, and two thirds people yelling opinions at each other with no interest in hearing what the other person is saying, no interest in explaining why they hold their viewpoints, just barking “this way this way” in discord with each other, and it makes it unpleasant.

    Look at this thread for a good example.

    • Top level comment with a little bit of explanation, fine
    • Reply with a pretty inflammatory response with 0 explanation and no follow-up when the person asks questions. What a bunch of crap.
    • Top level comment with detailed argument for, basically, the same thing Linkerbaan’s reply said, but with a lot of argument in favor. Fine.

    To me, the first and last ones would be influential to a “don’t ban” decision, and the middle one would be influential to a “ban” decision.

    I’m not saying that’s how the technology would see it, but you asked me how I would like the conversation to look. If it was the first guy and the third guy disagreeing with each other but explaining why and going into some detailed back and forth about it, and little inflammatory opinion-bombs like the second one weren’t allowed, I think that would help things be less painful.
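    To make the distinction concrete without describing the real model (which stays undisclosed), here is a deliberately crude toy heuristic for the “explained disagreement vs. opinion-bomb” split above. The keyword list and length cutoff are invented for illustration only:

```python
def influence(comment: str) -> str:
    """Toy stand-in for the real (undisclosed) model: a comment that offers
    some reasoning leans toward "don't ban"; a short bare assertion with no
    explanation leans toward "ban"."""
    explains = any(w in comment.lower() for w in ("because", "since", "for example"))
    short_and_bare = len(comment.split()) < 8 and not explains
    return "ban" if short_and_bare else "don't ban"
```

    Under this toy rule, “You’re all wrong, wake up.” leans “ban,” while a disagreement that bothers to give a reason leans “don’t ban” — which matches how I’d weigh the three comments in the thread above, even though the real judgement is nothing this simplistic.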

    Hope this explanation and answer is helpful.