• QuizzaciousOtter · 1 month ago

    Is 600 MB a lot for pandas? Of course, CSV isn’t really optimal but I would’ve sworn pandas happily works with gigabytes of data.

    • MoonHawk@lemmy.world · 1 month ago

      What do you mean, not optimal? This is quite literally the most popular format for any serious data handling and exchange. One byte per separator and newline is all you need. It is not compressed, so it allows you to stream as well. If you don’t need a tree structure it is massively better than the alternatives.

      • QuizzaciousOtter · 1 month ago

        I think portability and easy parsing are the only advantages of CSV. It’s definitely good enough (maybe even the best) for small datasets, but if you have a lot of data you need a compressed binary format, something like Parquet.

      • elmicha@feddit.org · 1 month ago

        But which separator is it, and which line ending? ASCII, UTF-8, UTF-16 or something else? What about quoting separators and line endings? Yes, there is an RFC, but a million programs were made before the RFC and won’t change their ways now.

        Also, you can gzip CSV files and still stream them.
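
        A minimal sketch of the gzip-plus-streaming point, assuming a hypothetical data.csv.gz with a header row and a numeric value column:

        ```python
        import csv
        import gzip

        # Stream a gzipped CSV row by row; the file is decompressed on the
        # fly and never has to fit in memory all at once.
        total = 0.0
        with gzip.open("data.csv.gz", "rt", encoding="utf-8", newline="") as f:
            reader = csv.DictReader(f)  # assumes the first row is a header
            for row in reader:
                total += float(row["value"])  # replace with real per-row work
        print("sum of value column:", total)
        ```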

      • merari42@lemmy.world · 1 month ago

        Have you heard that there are great serialised file formats like .parquet from Apache Arrow that can easily be used in typical data science packages like duckdb or polars? It probably even works with pandas (although I don’t know pandas that well; coming from the R tidyverse, I avoid it as much as possible and try to use polars when I work in Python, because polars often feels more intuitive to me).
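
        For a sense of how that looks in practice, here is a minimal sketch assuming a hypothetical data.parquet with a numeric value column (polars and duckdb as mentioned above):

        ```python
        import polars as pl

        # Lazily scan the Parquet file: only the referenced columns and row
        # groups are read, so the whole file never has to sit in memory.
        result = (
            pl.scan_parquet("data.parquet")
            .filter(pl.col("value") > 0)
            .select(["value"])
            .collect()
        )
        print(result)

        # The same file can also be queried directly with DuckDB:
        # import duckdb
        # duckdb.sql("SELECT count(*) FROM 'data.parquet'").show()
        ```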

        • driving_crooner@lemmy.eco.br · 1 month ago

          I used to export my pandas DataFrames as pickles, but decided to test Parquet and it was great. It was like 10x smaller and allowed me to keep the databases on a server directory instead of having to copy everything to the local machine.
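
          A rough sketch of that kind of comparison (the DataFrame here is made up, to_parquet needs pyarrow or fastparquet installed, and the actual size ratio depends entirely on the data):

          ```python
          import os

          import numpy as np
          import pandas as pd

          # Hypothetical DataFrame standing in for the real data.
          n = 1_000_000
          df = pd.DataFrame({
              "id": np.arange(n),
              "value": np.random.rand(n),
              "label": np.random.choice(["a", "b", "c"], size=n),
          })

          df.to_pickle("frame.pkl")
          df.to_parquet("frame.parquet")  # requires pyarrow or fastparquet

          for path in ("frame.pkl", "frame.parquet"):
              print(path, round(os.path.getsize(path) / 1e6, 1), "MB")
          ```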

    • tequinhu@lemmy.world · 1 month ago

      It really depends on the machine that is running the code. Pandas will always have the entire thing loaded in memory, and while 600 MB is not a concern for our modern laptops running a single analysis at a time, it can get really messy if the person is not thinking about hardware limitations.

      • naught@sh.itjust.works · 1 month ago

        Pandas supports lazy loading and can read files in chunks. Hell, even regular ole Python doesn’t need to read the whole file at once with the csv module.
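
        A minimal sketch of both approaches, assuming a hypothetical data.csv with a numeric value column (pandas does this through chunked reads via chunksize):

        ```python
        import csv

        import pandas as pd

        # pandas: read the CSV in 100k-row chunks instead of all at once;
        # each chunk is an ordinary DataFrame.
        total = 0.0
        for chunk in pd.read_csv("data.csv", chunksize=100_000):
            total += chunk["value"].sum()
        print("pandas total:", total)

        # Plain Python: the csv module streams the file line by line.
        with open("data.csv", newline="") as f:
            reader = csv.DictReader(f)
            total = sum(float(row["value"]) for row in reader)
        print("csv module total:", total)
        ```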

        • tequinhu@lemmy.world · 1 month ago

          I didn’t know about lazy loading, that’s cool!

          Then I guess the meme doesn’t apply anymore. Though I will state that (from my anecdotal experience) people who can use pandas’ most advanced features* are also comfortable with other data processing frameworks (usually more suitable for large datasets**).

          *Anything beyond the standard groupby-apply can be considered advanced, from the places I’ve been.

          **I feel the urge to note that 600 MB isn’t a large dataset by any means, but I believe that’s beside the point.

    • marcos@lemmy.world · 1 month ago

      Is 600 MB a lot for pandas?

      No, but it’s easy to make a program in Python that doesn’t like it.

      • QuizzaciousOtter · 1 month ago

        Oh, I know, believe me. I have some painful first-hand experience with such code.

    • gigachad@sh.itjust.works · 1 month ago

      I guess it’s more of a critique of how bad CSV is for storing large data than of pandas being inefficient.

    • mvirts@lemmy.world · 1 month ago

      It’s more likely you’ll eat up storage when you read a 600 MB Parquet file and try to write it out as CSV.

      • QuizzaciousOtter · 1 month ago

        I mean, yeah, that’s the point of compression. I don’t quite get what you mean by that comment.

        • mvirts@lemmy.world · 1 month ago

          Ah, I was trying to point out that CSV is the inefficient format. Reading a large amount of data from a more efficient format like Parquet is more likely to cause trouble, because the memory required can be more than the file size. CSV is the opposite: it will almost always use more disk space than is required to represent the data in memory.
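
          One way to see that gap for yourself, assuming a hypothetical data.parquet that fits in memory (Parquet on disk is typically much smaller than the in-memory frame, CSV typically larger):

          ```python
          import os

          import pandas as pd

          df = pd.read_parquet("data.parquet")  # hypothetical input file
          df.to_csv("data.csv", index=False)    # write the same data back out as CSV

          in_memory = df.memory_usage(deep=True).sum()
          print("in memory:", round(in_memory / 1e6, 1), "MB")
          print("parquet:  ", round(os.path.getsize("data.parquet") / 1e6, 1), "MB")
          print("csv:      ", round(os.path.getsize("data.csv") / 1e6, 1), "MB")
          ```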