It’s a bit of a weird shower thought, but I was wondering, hypothetically, if it would be possible to take data from a social media site like Reddit, map the most commonly used words to numbers starting at 1, and use a separate application to translate the text back and forth.

So if the word “because” was number 100, it would be stored as three characters instead of seven.

There could also be additions for suffixes, so “gardening” could be 5000+1, or a word like “hoped” could be 2000-2 because the “e” is already present.

Would this result in any kind of space savings if you were using larger amounts of text like a book series?
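
To make this concrete, here’s a rough Python sketch of the idea (the tiny corpus, the exact ranks, and the function names are all just made up for illustration): rank words by how often they appear, then swap each word for its rank number.

```python
from collections import Counter
import re

def build_dictionary(corpus: str) -> dict[str, int]:
    """Rank words by frequency; the most common word gets the smallest number."""
    words = re.findall(r"[a-z']+", corpus.lower())
    ranked = [w for w, _ in Counter(words).most_common()]
    return {word: rank for rank, word in enumerate(ranked, start=1)}

def encode(text: str, dictionary: dict[str, int]) -> str:
    """Replace each known word with its number; unknown words pass through as-is."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return " ".join(str(dictionary[t]) if t in dictionary else t for t in tokens)

def decode(encoded: str, dictionary: dict[str, int]) -> str:
    reverse = {rank: word for word, rank in dictionary.items()}
    return " ".join(reverse[int(t)] if t.isdigit() else t for t in encoded.split())

corpus = "the cat sat on the mat because the cat was tired"
d = build_dictionary(corpus)
enc = encode("the cat was tired because the mat was warm", d)
print(enc)             # something like: 1 2 7 8 6 1 5 7 warm
print(decode(enc, d))  # round-trips, minus capitalisation and punctuation
```

Whether this actually saves space comes down to whether the rank numbers end up shorter than the words they replace, which is basically the question above.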

  • SpaceNoodle@lemmy.world

    That’s kind of how file compression works - by replacing patterns with shorter symbolic representations!

    • RvTV95XBeo@sh.itjust.works

      If I remember correctly, you can even pre-build dictionaries for some compression algorithms to speed up the process if you’re compressing lots of similar data.
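
      Something like this, if I remember right — Python’s built-in zlib supports a preset dictionary via its zdict parameter (the preset string and sample text below are just made-up placeholders):

      ```python
      import zlib

      # Hypothetical preset dictionary: byte strings we expect to show up often in the data.
      preset = b"the and that because would could gardening compression"
      sample = b"the gardening posts would compress well because the words repeat"

      def compress_with_dict(data: bytes, zdict: bytes) -> bytes:
          c = zlib.compressobj(9, zdict=zdict)
          return c.compress(data) + c.flush()

      def decompress_with_dict(blob: bytes, zdict: bytes) -> bytes:
          d = zlib.decompressobj(zdict=zdict)
          return d.decompress(blob) + d.flush()

      plain = zlib.compress(sample, 9)
      primed = compress_with_dict(sample, preset)
      print(len(sample), len(plain), len(primed))  # primed is usually smallest for short, similar inputs
      assert decompress_with_dict(primed, preset) == sample
      ```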

  • Rexios

    That’s literally how compression works

    • Corroded@leminal.spaceOP

      Huh, that was an interesting read. I’m not sure I understood the entirety of it, but it sounds like it would be a lot more efficient than what I was thinking.

      • morhp@lemmy.wtf

        Yes, it’s very efficient, and it’s the core of what compression formats like .zip do.

        The main difference from your idea is that computers count in binary (0, 1, 10, 11, 100, 101 and so on), and that you don’t want to assign these very low codes to words directly. Say you’d assigned 1 to the most common word: that word would be encoded very short, but you’d sort of take the 1 away from all the other codes, because you wouldn’t know whether 11 means the most common word twice or the 11th (3rd in decimal) most common word once.

        Huffman coding essentially computes the optimal code assignments mathematically.
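
        As a rough toy illustration (not how zip actually lays out its data), something like this builds those prefix-free codes, so the most frequent word ends up with the shortest one:

        ```python
        import heapq
        from collections import Counter

        def huffman_codes(text: str) -> dict[str, str]:
            """Build prefix-free bit codes: frequent words get shorter codes."""
            freq = Counter(text.split())
            # Each heap entry: (total frequency, tie-breaker, {word: code so far})
            heap = [(n, i, {w: ""}) for i, (w, n) in enumerate(freq.items())]
            heapq.heapify(heap)
            tie = len(heap)
            while len(heap) > 1:
                n1, _, c1 = heapq.heappop(heap)
                n2, _, c2 = heapq.heappop(heap)
                merged = {w: "0" + code for w, code in c1.items()}
                merged.update({w: "1" + code for w, code in c2.items()})
                heapq.heappush(heap, (n1 + n2, tie, merged))
                tie += 1
            return heap[0][2]

        text = "the cat sat on the mat because the cat was tired"
        for word, code in sorted(huffman_codes(text).items(), key=lambda kv: len(kv[1])):
            print(word, code)  # "the" gets the shortest code, rare words get longer ones
        ```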

        The other main difference between your suggestion and most compression algorithms is that you wouldn’t want to use a huge dictionary in practice: loading it and looking words up in it would be very slow. Most compression algorithms have a rather small dictionary built in, and/or they build one on the fly from the data they’re compressing.

  • youngalfred

    That’s pretty much what a tokenizer does for Large Language Models like ChatGPT. You can see how it works here: https://platform.openai.com/tokenizer

    Type in the word ‘Antidisestablishmentarianism’ and you can see it becomes 5 tokens instead of 28 characters.
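
    If you’d rather poke at it from code, the (third-party) tiktoken library exposes the same kind of tokenizers; the exact token count depends on which encoding you pick, so treat this as a sketch:

    ```python
    # pip install tiktoken
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # one of the encodings used by OpenAI models
    word = "Antidisestablishmentarianism"
    tokens = enc.encode(word)
    print(tokens)                               # a short list of integer token ids
    print(len(tokens), "tokens for", len(word), "characters")
    assert enc.decode(tokens) == word
    ```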

    • Sethayy@sh.itjust.works

      Computers are quick enough nowadays that it’s better to just count how many times each word appears in a block and use those counts to build the indices. That saves you from needing a dictionary the size of the English language for decompression.
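
      Roughly like this, I think — each block ships its own little word list, so the decoder never needs a full English dictionary (toy sketch, names made up):

      ```python
      from collections import Counter

      def pack_block(text: str):
          """Build a per-block dictionary from word counts and encode the block as indices."""
          words = text.split()
          ranked = [w for w, _ in Counter(words).most_common()]
          index = {w: i for i, w in enumerate(ranked)}
          return ranked, [index[w] for w in words]

      def unpack_block(ranked: list, codes: list) -> str:
          return " ".join(ranked[c] for c in codes)

      block = "to be or not to be that is the question"
      ranked, codes = pack_block(block)
      print(ranked)  # ['to', 'be', 'or', 'not', 'that', 'is', 'the', 'question']
      print(codes)   # [0, 1, 2, 3, 0, 1, 4, 5, 6, 7]
      assert unpack_block(ranked, codes) == block
      ```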

  • Bizarroland@kbin.social

    There’s a website, I can’t remember the link, but any text you search for can be found to have already been written on one of its pages.

    It used an algorithm to generate every possible page of text in the English language, including gobbledygook and random keyboard smashing.

    You can then share a link to a full page of text that is far smaller than the page itself. Given infinite computational resources, it would be possible to parse any program or any bit of software into its text equivalent and then generate the URL that attaches to this algorithm for that entire page, reducing a thousand characters to 16.

    It would then be possible to make a page consisting of these 16-character links and then compress that page to another page, turning 70,000 words of text or so into 16 digits.

    Once you had a page full of those, you could compress them again by finding a page that contains the links to all of those pages.

    Given that this is perfect compression, you could literally reduce any file to the URL of the page you need to use (plus some information about how many pages deep you have to go, and whatever is needed about the final file structure and format at the end of the reconstruction) to regenerate the page that you need to regenerate the next page, and so on, until you have fully reconstructed the file.

    And, given that these pages can be summoned by a URL and do not actually exist until they are generated by the algorithm, it is entirely feasible that the algorithm itself could be stored on your local computer, meaning that with a somewhat complicated but still reasonable system, you could send a file of any size to any person in a text.

    The problem is the computational resources needed to utilize this algorithm. It would have to crunch the numbers to rebuild every single file sent its way, and even on a fairly fast system that’s going to take some time.

    However, if anyone wants to take this idea and run with it, I’m pretty sure the world would appreciate your efforts. Since we have the ability to create a page of text using an algorithm, we probably should use it, and once it becomes commonplace I’m sure people will build chiplet accelerators for the process.

    • number6@feddit.nl

      There’s a website, I can’t remember the link, but any text you search for can be found to have already been written on one of its pages.

      It’s The Library of Babel

      You can type up to 3200 characters in lower case. With a short sentence though, the “title” and page number may be longer than your original text!

      Edit: Fixed url title

    • Kerfuffle@sh.itjust.works

      it would be possible to parse any program or any bit of software into its text equivalent and then generate the URL that attaches to this algorithm for that entire page reducing a thousand characters to 16.

      This can’t work. Let’s use a simpler example: instead of 16 characters for the link, say it’s a single digit, and say the content of the “page” is 4 digits. One digit has 10 possible values: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9. Four digits have 10,000 possible combinations. With only one digit to index into those 10,000 possible combinations, you can point to only 10 of them.

      It’s the same thing for pages of text. If you have a 16-character link and the content you’re trying to index with it is longer than 16 characters, then you can only point to some of the possibilities in the larger set.

      • Bizarroland@kbin.social

        Then how is it that I was able to link to 800 words with 5 characters (stripping aside the static portion of the links)?

        • Kerfuffle@sh.itjust.works

          First, just think about the logic of what I said before: if there is a finite number of combinations in the link, how can you possibly link to a larger number of items? It’s just logically impossible.

          Then how is it that I was able to link to 800 words with 5 characters (stripping aside the static portion of the links)?

          The fact that you were able to link to 800 words doesn’t really mean anything. somesite.com/a could point to a file that’s gigabytes in size. That doesn’t mean the file got compressed to “a”. Right?

          There also might be fewer combinations on that site than it appears. For an 800-word chunk of grammatical English text, there are far fewer combinations than for the equivalent length of arbitrary characters. Instead of representing each character in a word, it could just use an id like dog=1, antidisestablishmentarianism=2 and so on. Even using tricks like that, though, it’s pretty likely you’re only able to link to a subset of all the possible combinations.

          Regarding compression in general, it’s a rule that you can’t compress something independently of its content. If you could, then even if the compression only reduced the file by the tiniest fraction, you could just apply the algorithm repeatedly until you end up where the problem I described becomes obvious. If you could compress any large file down to a single byte, that single byte could only represent 256 distinct values, yet far more than 256 distinct files exist, so clearly something went wrong. This rule is kind of like breaking the speed of light or perpetual motion: if you get an answer that says you have perpetual motion or FTL travel, you automatically know you did something wrong. Same thing with being able to compress without regard to the content.
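
          You can see this for yourself with Python’s zlib: the exact sizes will vary, but random bytes never get meaningfully smaller, and compressing already-compressed output a second time doesn’t help either.

          ```python
          import os
          import zlib

          random_data = os.urandom(100_000)                        # incompressible noise
          text_like = b"the cat sat on the mat because " * 3_000   # highly repetitive

          for label, data in [("random", random_data), ("repetitive", text_like)]:
              once = zlib.compress(data, 9)
              twice = zlib.compress(once, 9)
              print(label, len(data), "->", len(once), "->", len(twice))
          # The repetitive text shrinks dramatically; the random bytes (and the
          # already-compressed outputs) stay the same size or grow slightly.
          ```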

    • xordos@lonestarlemmy.mooo.com

      There is simply no way to pre-store every arbitrarily long text, because the number of possible combinations is just too huge. It would be like drawing the bullseye around wherever your arrow landed. You could imagine it working like this: you search for a paragraph, the site searches its existing index, and if the text isn’t found it saves it and assigns it a new index. Of course it might use some algorithms to optimize and decrease the space needed.

      From its wiki page (https://en.m.wikipedia.org/wiki/The_Library_of_Babel_(website)): “The website can generate all possible pages of 3200 characters and allows users to choose among about 10^4677 potential pages of books.”

      PS: thinking about it again, this actually doesn’t need any storage at all. It can be a pure URL-to-text encoding/decoding, where the URL itself determines/generates the actual text, and the same works in the opposite direction.
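
      Something like this toy sketch (the 27-character alphabet is just an assumption to keep it simple): the “address” is really the text re-encoded as a number, so it carries exactly the same amount of information and ends up about as long as the text itself.

      ```python
      ALPHABET = "abcdefghijklmnopqrstuvwxyz "  # hypothetical page alphabet: 26 letters plus space

      def text_to_address(text: str) -> int:
          """Interpret the text as one big base-27 number."""
          n = 0
          for ch in text:
              n = n * len(ALPHABET) + ALPHABET.index(ch)
          return n

      def address_to_text(n: int, length: int) -> str:
          """Invert it: peel off base-27 digits, least significant first."""
          chars = []
          for _ in range(length):
              n, r = divmod(n, len(ALPHABET))
              chars.append(ALPHABET[r])
          return "".join(reversed(chars))

      page = "hello from the library of babel"
      addr = text_to_address(page)
      print(addr)  # a number with roughly 44 digits for this 31-character page
      assert address_to_text(addr, len(page)) == page
      ```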

    • nychtelios@rlyeh.icu

      You cannot represent everything using English text, and text in known languages can only be compressed so heavily because we know details about its structure. (And even then it cannot be compressed that extremely; information theory explains this very well, and computational power isn’t the only limit here.)

      If you cannot represent everything using valid English text, you cannot compress at high rates without losing information. A big part of digital data is actually noise, and noise is incompressible by definition.