cross-posted from: https://lemmy.dbzer0.com/post/2896209

I've noticed a bit of panic around here lately, and since I've had to continuously fight against pedos for the past year, I've developed tools to help me detect and prevent this content.

As luck would have it, we recently published one of our anti-CSAM checker tools as a python library that anyone can use. So I thought I could use this to help lemmy admins feel a bit safer.

The tool can either go through all the images in your object storage and delete any detected CSAM, or it can run continuously and scan new images as they are uploaded. The suggested option is to run it once with --all, and then leave it running as a daemon.
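
To give a rough idea of what the full scan amounts to, here's a minimal sketch (not the library's actual code): it walks an S3-compatible bucket with boto3 and deletes whatever a classifier flags. The `check_image` callable is a made-up stand-in for the real checker, and daemon mode is essentially this same loop repeated against new uploads.

```python
# Illustrative sketch only -- not the published tool's actual API.
# Assumes an S3-compatible object store (boto3) and a hypothetical
# check_image() classifier standing in for the anti-CSAM library.
import boto3


def scan_bucket(bucket: str, endpoint: str, check_image) -> None:
    s3 = boto3.client("s3", endpoint_url=endpoint)
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            if check_image(body):  # classifier flags suspected CSAM
                s3.delete_object(Bucket=bucket, Key=key)
```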

A better option would be to retrieve the exact images uploaded via the lemmy/pict-rs API, but we're not quite there yet.

Let me know if you have any issues or improvements.

  • Meldrik@lemmy.wtf

    Aren't admins also obligated to report such content on our servers to the authorities?

    Is there an IP attached to the uploader or something?

    • db0@lemmy.dbzer0.com (OP)

      I don’t know; it depends on your jurisdiction. However, this is an automated tool and most detected images will be false positives. Reporting requirements would presumably only apply to validated CSAM, but IANAL.

      The IP is not visible from the object storage. I do store the OS path, so one would need to trace that to the pict-rs ID, from that to the lemmy post ID, and from there to the user.
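
      If someone really needed to do that trace by hand, it would look roughly like the sketch below. The table and column names are my guesses at the lemmy schema (searching post URLs/thumbnails for the pict-rs alias taken from the object-store key), so double check them against your version.

      ```python
      # Rough sketch of the manual trace: object-store key -> pict-rs alias -> post -> user.
      # Table/column names are assumptions about the lemmy schema; adjust for your version.
      import psycopg2


      def trace_upload(pictrs_alias: str, dsn: str):
          with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
              cur.execute(
                  """
                  SELECT p.id, p.creator_id, pe.name
                  FROM post p
                  JOIN person pe ON pe.id = p.creator_id
                  WHERE p.url LIKE %s OR p.thumbnail_url LIKE %s
                  """,
                  (f"%{pictrs_alias}%", f"%{pictrs_alias}%"),
              )
              return cur.fetchall()  # candidate (post_id, creator_id, username) rows
      ```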

  • skymtf@pricefield.org

    Damn, this would be great for mastodon too! Imagine a plugin that auto-flagged images locally and gave you a heads up. Kinda like what tech companies have, but it won't freak out if it finds “extremist” content.