Hey everyone, so for the past few months I have been working on this project, and I’d love to have your feedback on it.
As we all know, any time we publish something publicly online (on Reddit, Twitter, or even this forum), our posts, comments, and messages are scraped and read by thousands of bots for various legitimate or illegitimate reasons.
With the rise of LLMs like ChatGPT, the automated “understanding” of textual content at scale is more efficient than ever.
So I created Redakt, an open-source tool with zero-click decryption that encrypts any text you publish online, making it understandable only to other users who have the browser extension installed.
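In practice the idea is simple: the text is encrypted with a key shared through the extension before it’s posted, and the extension decrypts it in place when the page loads. Here’s a stripped-down sketch of the concept in Python (illustration only, not the extension’s actual code; the key handling is just a stand-in for how the extension distributes the shared key):

```python
# Stripped-down illustration of the shared-key idea (not Redakt's actual code).
from cryptography.fernet import Fernet

# Stand-in for the key users get via the extension; generated here for the demo.
shared_key = Fernet.generate_key()
cipher = Fernet(shared_key)

post = "Meet me at the usual place at 6."

# What actually gets published: an opaque token instead of readable text.
published = cipher.encrypt(post.encode()).decode()
print(published)  # "gAAAAAB..." -- meaningless to scrapers without the key

# What the extension does transparently on page load for users who have it.
assert cipher.decrypt(published.encode()).decode() == post
```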
Try it! Feel free to install the extension (Chrome/Brave/Firefox): https://redakt.org/browser/
EDIT: For example, here’s a Medium article with encrypted content: https://redakt.org/demo/
Before you ask: What if the bots adapt and also use Redakt’s extension or encryption key?
Well, first of all, they don’t at the moment (they’re too busy gathering billions of data points in the clear). If they ever do adopt the extension, then any change we make to it (a captcha, a different encryption method) will force them to readapt and prevent them from scaling their data collection.
Let me know what you guys think!
Not once did I claim that LLMs are sapient or sentient, or even that they have any kind of personality. I didn’t even use the overused term “AI”.
LLMs, for example, are something like… a calculator. But for text.
A calculator for pure numbers is a pretty simple device, all of whose logic can be designed by a human directly.
When we want to create a solver for systems that aren’t as easily defined, we have to resort to other methods, e.g. “machine learning”.
Basically, instead of designing all the logic entirely by hand, we create a system that can end up in a finite, yet still near-infinite, number of states, each of which defines behavior different from the others. By slowly tuning the model using existing data and checking its performance, we (ideally) end up with a solver for something a human mind can’t even break down into building blocks, due to the sheer complexity of the system in question (such as a natural language).
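To make that concrete, here’s a toy version of that tune-and-check loop in Python (an illustrative sketch only; real models have billions of parameters, but the loop has the same shape):

```python
# Toy version of the tuning loop: nudge parameters in whatever direction
# reduces the error on known examples (gradient descent on a 2-parameter "model").
import random

# "Existing data": inputs paired with the outputs we want the solver to produce.
data = [((1, 2), 3), ((4, 5), 9), ((8, 3), 11), ((0, 6), 6)]

# Two tunable parameters; we hope tuning makes the model behave like addition.
w1, w2 = random.random(), random.random()

lr = 0.01  # step size for each tuning nudge
for _ in range(2000):
    (x1, x2), y = random.choice(data)
    pred = w1 * x1 + w2 * x2   # current behavior of the model
    err = pred - y             # how wrong it is on this example
    w1 -= lr * err * x1        # nudge each parameter to shrink the error
    w2 -= lr * err * x2

print(w1, w2)  # both converge near 1.0: the tuned model behaves like addition
```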
And like a calculator that can derive that 2 + 3 is 5, despite the fact that the number 5 is never mentioned in the input, or that that particular formula was not part of the suite of tests used to verify that the calculator works correctly, a machine learning model can figure out that “apple slices + batter = apple pie”, assuming it has been tuned (i.e. trained) right.
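And here’s that “2 + 3 is 5 without ever seeing it” effect in code, using scikit-learn for brevity (again just an illustrative sketch, with made-up training pairs):

```python
# The pair (2, 3) is deliberately absent from the training data...
from sklearn.linear_model import LinearRegression

X = [[1, 2], [4, 5], [8, 3], [0, 6], [7, 7]]
y = [3, 9, 11, 6, 14]
model = LinearRegression().fit(X, y)

# ...yet the tuned model still derives that 2 + 3 is 5.
print(model.predict([[2, 3]]))  # ~[5.]
```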