Cross-posting this here as I saw some misconceptions about the Rust language.

I think the blog post describes well the pros of using a strongly typed language like Rust. You may fight the compiler and get slower build times, but you get fewer bugs because of the restrictions the language imposes on you.

The biggest con of Rust is that it requires learning before you can use it, even for someone who has programmed before. It’s not like Python or Ruby, where you can just dive into a code base and learn on the go. You really need to read the Rust book (or at least skim it) to grasp the core concepts. So it has a higher barrier to entry, with all the misunderstandings that come with it.

  • Opafi@feddit.de

    I really don’t get the article. It’s not the compiler’s purpose to prevent logic errors, nor does it do that well. Overcomplicating your types to the point where they prevent a few of those errors, at the cost of making your code less flexible for future changes, doesn’t sound like a good idea either.

    What’s wrong with tests? Just write tests that check your code does what it’s expected to do, and leave the compiler to do what it’s made for.

    • potterman28wxcv@beehaw.org (OP)

      Why would you have to choose between tests and compiler checks? You can have both. The more checks you have, the smaller the chance that bugs slip through.

      I would also add that tests cannot possibly be exhaustive. I am thinking in particular of concurrency problems: even with fuzzing you can still hit corner cases where things go wrong because you forgot a mutex somewhere. Extra static checks are complementary to tests.
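
      For example (just a sketch, not from the article), sharing a counter across threads with the usual Arc<Mutex<_>> pattern looks like this, and the compiler turns the “forgot a mutex” mistake into a compile error rather than a data race:

      ```rust
      use std::sync::{Arc, Mutex};
      use std::thread;

      fn main() {
          // Shared counter behind Arc<Mutex<_>>.
          let counter = Arc::new(Mutex::new(0));

          let handles: Vec<_> = (0..4)
              .map(|_| {
                  let counter = Arc::clone(&counter);
                  thread::spawn(move || {
                      *counter.lock().unwrap() += 1;
                  })
              })
              .collect();

          for h in handles {
              h.join().unwrap();
          }

          // Replacing Arc with Rc, or dropping the Mutex and sharing a plain
          // `&mut` counter across the threads, is rejected at compile time
          // (`Rc` is not `Send`, and shared mutable borrows cannot cross
          // thread boundaries) - so the bug never makes it to a test run.
          println!("{}", *counter.lock().unwrap());
      }
      ```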

      I think you can also write “unsafe” code in Rust that bypasses most of the extra checks, so you do have the flexibility if you really need it.
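
      Something like this (again just a sketch) shows the escape hatch: inside an `unsafe` block you can dereference raw pointers, and the compiler stops proving the access is valid - that responsibility moves to you.

      ```rust
      fn main() {
          let x = 42u32;
          let p = &x as *const u32;

          // Dereferencing a raw pointer is only allowed inside `unsafe`;
          // here the programmer, not the compiler, guarantees validity.
          let y = unsafe { *p };
          println!("{y}");
      }
      ```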

      • TehPers@beehaw.org

        My favorite tests are the ones I don’t need to remember to write. Thanks to TypeScript, mypy, C#'s nullable reference types, Rust’s Option, etc., I haven’t needed to write a test for what happens when a function receives null in a while. Similarly, I generally don’t need to write tests for functions receiving the wrong type of value (strings vs numbers, for example), and with Rust I generally don’t even need to write tests for things like thread safety, or sometimes even invalid states, since valid states can usually be represented as enum variants, which makes many invalid states impossible to construct.
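
        As a rough illustration (the type and function names here are made up), an enum like this makes invalid combinations unrepresentable, and Option forces callers to handle absence:

        ```rust
        // Hypothetical connection state machine: invalid combinations of
        // fields simply cannot be constructed.
        enum Connection {
            Disconnected,
            Connecting { attempt: u32 },
            Connected { session_id: u64 },
        }

        // Option<T> instead of null: the caller must handle the None case.
        fn session_id(conn: &Connection) -> Option<u64> {
            match conn {
                Connection::Connected { session_id } => Some(*session_id),
                _ => None,
            }
        }

        fn main() {
            let conn = Connection::Connecting { attempt: 1 };
            // The match is checked at compile time; forgetting None is an error.
            match session_id(&conn) {
                Some(id) => println!("session {id}"),
                None => println!("no session yet"),
            }
        }
        ```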

        There is a point where it becomes too much, though. While I’d like it if the compiler ensured arbitrary preconditions like “x will always be between 2 and 4”, I can’t imagine what kinds of constraints that’d impose on actually writing the code in order to enforce that. Rust does have NonZero* types, but those are checked at runtime, not compile time.
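
        For instance, the check in `NonZeroU32::new` happens at runtime and simply returns None for zero:

        ```rust
        use std::num::NonZeroU32;

        fn main() {
            // Runtime check: zero is rejected with None, anything else wraps.
            assert!(NonZeroU32::new(0).is_none());
            let n = NonZeroU32::new(5).expect("non-zero");
            println!("{}", n.get());
        }
        ```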

        • potterman28wxcv@beehaw.org (OP)

          There are techniques like abstract interpretation that can deduce the lower and upper bounds a value can take. I know there is an analysis in LLVM called ValueAnalysis that does that too - the compiler can use it to help with dead code elimination (deducing that a given branch will never be taken because the value can never satisfy the condition, so the branch can be removed).
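
          This is roughly the kind of thing such an analysis can prove (illustrative Rust; the optimizer may or may not drop this exact branch):

          ```rust
          fn describe(x: u32) -> &'static str {
              let bucket = x % 4; // a range analysis knows bucket is in 0..=3
              if bucket > 10 {
                  // Provably unreachable, so dead code elimination can drop it.
                  "impossible"
              } else {
                  "ok"
              }
          }

          fn main() {
              println!("{}", describe(42));
          }
          ```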

          But I think these techniques do not work in all use cases. Although you could theoretically invent some syntax to say “I would like this value to be in that range”, the compiler would not be able to tell in all cases whether the constraint is satisfied.

          If you are interested in a language that has subrange checks at runtime, Ada can do that. But it does come at a performance cost: if your program is compute-bound, it can be a problem.
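
          A rough Rust analogue of a runtime-checked subrange (not Ada syntax, and the `Percent` type is made up) would look like this:

          ```rust
          // Invariant: 0..=100, enforced at runtime like an Ada subrange check.
          #[derive(Debug, Clone, Copy)]
          struct Percent(u8);

          impl Percent {
              fn new(value: u8) -> Option<Percent> {
                  if value <= 100 { Some(Percent(value)) } else { None }
              }
          }

          fn main() {
              assert!(Percent::new(100).is_some());
              assert!(Percent::new(150).is_none());
          }
          ```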