esafak 2 hours ago

> "Modern" languages try to avoid exceptions by using sum types and pattern matching plus lots of sugar to make this bearable. I personally dislike both exceptions and its emulation via sum types. ... I personally prefer to make the error state part of the objects: Streams can be in an error state, floats can be NaN and integers should be low(int) if they are invalid.

Special values like NaN are half-assed sum types; proper sum types give you compiler guarantees.
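
A minimal Nim sketch of the difference (`meanSentinel` and `meanChecked` are made-up names, with `Option` standing in for a hand-rolled sum type):

    import std/[math, options]

    # Sentinel style: an empty input silently becomes NaN and keeps flowing.
    proc meanSentinel(xs: seq[float]): float =
      if xs.len == 0: NaN else: sum(xs) / xs.len.float

    # Sum-type style: the compiler forces callers to deal with the empty case.
    proc meanChecked(xs: seq[float]): Option[float] =
      if xs.len == 0: none(float) else: some(sum(xs) / xs.len.float)

A caller of `meanChecked` cannot touch the value without going through `isSome`/`get`, whereas the NaN from `meanSentinel` quietly poisons whatever consumes it.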

  • kace91 2 hours ago

    I’d like to see their argument for it. I see no benefit in pushing NaN through a code path as if it were a number, corrupting every operation it touches, and the same goes for the other sentinels.
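
    For a concrete picture of that corruption (a throwaway Nim snippet; `bad` is just an illustrative name):

        import std/math

        let bad = NaN       # stands in for any computation that failed
        echo bad + 1.0      # nan: every downstream result is poisoned
        echo bad == bad     # false: it even breaks ordinary equality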

    • cb321 an hour ago

      There is no direct argument/guidance that I saw for "when to use them", but masked arrays { https://numpy.org/doc/stable/reference/maskedarray.html } (an alternative to sentinels in array-processing sub-languages) have been in NumPy (following its antecedents) from its start. I'm guessing you could do a code search for its imports and find arguments pro & con in various places surrounding that.

      From memory, I have heard "infecting all downstream" as both "a feature" and "a problem". Experience with numpy programs did lead to sentinels in the https://github.com/c-blake/nio Nim package, though.

      Another way to try to investigate popularity here is to see how much code uses signaling NaN vs. quiet NaN and/or arguments pro/con those things / floating point exceptions in general.

      I imagine all of it comes down to how locally code can or should be forced to confront problems, much like arguments about try/except/catch styles of exception handling vs. other alternatives. In the age of SIMD there can be performance angles to these questions, and essentially "batching factors" for error handling that relate to all the other batching factors going on.

      Today's version of this wiki page also includes a discussion of integer NaN: https://en.wikipedia.org/wiki/NaN . It notes that the R language uses the minimal signed value of integers (i.e. 0x80000000) for NA.

      There is also the whole database NULL question: https://en.wikipedia.org/wiki/Null_(SQL)

      To be clear, I am not taking some specific position, but I think all these topics inform answers to your question. I think it's something with trade-offs that people have a tendency to over-simplify based on a limited view.

      • kace91 2 minutes ago

        >To be clear, I am not taking some specific position, but I think all these topics inform answers to your question. I think it's something with trade-offs that people have a tendency to over-simplify based on a limited view.

        That's fair, I wasn't dismissing the practice, just commenting that it's a shame the author didn't clarify their preference.

        I don't think the popularity angle is a good proxy for the usefulness or correctness of the practice. Many factors can influence popularity.

        Performance is a very fair point. I don't know enough to understand the details, but I could see it being a strong argument. It is counterintuitive to move forward with calculations known to be useless, but maybe the cost of checking every calculation for validity is larger than the savings from skipping the invalid ones early.

        There is a catch though. NumPy and R are heavily oriented toward calculation pipelines, which is a very different use case from general programming, where the side effects of undetected 'corrupt' values can be more serious.

  • elcritch 2 hours ago

    The compiler can still enforce checks, such as with nil checks for pointers.

    In my opinion it’s overall cleaner if the compiler handles enforcing it when it can. Something like “ensure variable is initialized” can just be another compiler check.

    Combine that with an effects system that lets you control which errors to enforce checking on. Nim has a nice `forbids: IOException` pragma that lets users do that.
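
    For the exception side, Nim's existing `raises` effect annotation already works roughly like that; a small sketch (the `parsePositive` proc is made up for illustration):

        import std/strutils

        proc parsePositive(s: string): int {.raises: [ValueError].} =
          # The compiler verifies that nothing other than ValueError can escape.
          result = parseInt(s)        # parseInt may raise ValueError
          if result < 0:
            raise newException(ValueError, "expected a non-negative number")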

    • ux266478 an hour ago

      Both of these things respectively are just pattern matches and monads, just not user-definable ones.

    • umanwizard 2 hours ago

      > The compiler can still enforce checks, such as with nil checks for pointers.

      Only sometimes, when the compiler happens to be able to understand the code fully enough. With sum types it can be enforced all the time, and bypassed when the programmer explicitly wants it to be.

      • wavemode 33 minutes ago

        There's nothing preventing this for floats and ints in principle. E.g. the machine representation could be float, but the type in the eyes of the compiler could be `float | nan` until you check it for NaN (at which point it narrows to `float`). Then any operation that can return NaN would return `float | nan` instead.

        Tbh, this system (assuming it works that way) would be stricter at compile time than the vast majority of languages.
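
        A rough approximation is possible in today's Nim with a distinct type that can only be obtained after the check (the names `Checked` and `checkNan` are made up):

            import std/options

            type Checked = distinct float   # a float known not to be NaN

            proc checkNan(x: float): Option[Checked] =
              # NaN is the only float that compares unequal to itself.
              if x != x: none(Checked) else: some(Checked(x))

        Anything that only accepts `Checked` then cannot be handed an unchecked float by accident.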

kbd 3 hours ago

The biggest thing I still don’t like about Nim is its imports:

    import std/errorcodes

    proc p(x: int) {.raises.} =
      if x < 0:
        raise ErrorCode.RangeError
      use x

I can’t stand that there’s no direct connection between the thing you import and the names that wind up in your namespace.

  • PMunch 3 hours ago

    There is a direct connection, you just don't have to bother with typing it. It's the same as type inference: the types are still there, you just don't have to specify them. If there is a name collision then the compiler requires you to specify which version you want. And with language inspection tools (like LSP or other editor integration) you can easily figure out where something comes from if you need to. Most of the time, though, I find it fairly obvious when programming in Nim where something comes from; in your example it's trivial to see that the error code comes from the errorcodes module.

    Oh, and as someone else pointed out you can also just `from std/errorcodes import nil` and then you _have_ to specify where things come from.

  • xigoi 3 hours ago

    It needs to be this way so that UFCS works properly. Imagine if instead of "a,b".split(','), you had to write "a,b".(strutils.split)(',').
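
    For instance, with a plain import the dot call simply resolves to the free `split` proc (a tiny sketch):

        import std/strutils

        # "a,b".split(',') is sugar for split("a,b", ','), so the bare name
        # `split` has to be visible for the dot call to resolve.
        echo "a,b".split(',')   # @["a", "b"]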

    • polotics 3 hours ago

      OK, I do not understand.

      What is preventing this: `import std/errorcodes`

      from allowing me to use `raise errorcodes.RangeError` instead of what Nim has?

      Or even: why not `import std/ErrorCodes`, keeping the plural in `ErrorCodes.RangeError`? I wouldn't mind that.

      • PMunch 3 hours ago

        Nothing, and in fact this works. To move to an example which actually compiles:

            import math
            
            echo fcNormal
            echo FloatClass.fcNormal
            echo math.fcNormal
            echo math.FloatClass.fcNormal
        
        All of these ways of identifying the `fcNormal` enum value work, with varying levels of specificity.

        If instead you do `from math import nil` only the latter two work.
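
        That is, the fully qualified variant would look something like:

            from math import nil

            echo math.fcNormal              # still works, fully qualified
            echo math.FloatClass.fcNormal   # still works
            # echo fcNormal                 # no longer compiles: undeclared identifier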

  • treeform 14 minutes ago

    Nim imports are great. I would hate to qualify everything; it feels so bureaucratic when going back to other languages. They never cause me issues and are largely transparent. Best feature.

  • summarity 3 hours ago

    You are free to `import nil` and type the fully qualified name.

    • Symmetry 2 hours ago

      There are many things to like about Nim, but it does benefit from adherence to a style guide more than most languages.

esafak an hour ago

From my interaction with the Nim community, I came to the conclusion that Nim could be more popular if its founder devolved decision-making to scale up the community. I think he likes it the way it is: small, but his. He is Torvaldsesque in his social interactions.

  • nallerooth 27 minutes ago

    I feel the same way - as I suspect a lot of people here do. Nim posts are always upvoted and usually people say nice things about the language in the comments... but few claim to actually -use- the language for more than a small private project, if even that.

  • oscillonoscope an hour ago

    I worked in Nim for a little bit, and it truly has a lot of potential, but I ultimately abandoned it for the same reason. It's never going to grow beyond the founder's playground.

  • xigoi an hour ago

    Please no. Design by committee would lead to another C++.

    • almostgotcaught an hour ago

      The second or third most popular language of all time? God forbid lol

      • xigoi an hour ago

        Popular does not mean good. Tobacco smoking is also popular.

        • almostgotcaught an hour ago

          Do you think this is clever? For a metaphor to be relevant to a discussion it has to be fitting, not just a dunk.

          • xigoi an hour ago

            It’s not a metaphor. I was giving a counterexample to your implied claim that popularity is an indicator of quality.

            • kanaffa12345 44 minutes ago

              That wasn't an implied claim because we're not discussing metrics for judging quality.

mwkaufma 4 days ago

Big "college freshman" energy in this take:

  I personally prefer to make the error state part of the objects: Streams can be in an error state, floats can be NaN and integers should be low(int) if they are invalid (low(int) is a pointless value anyway as it has no positive equivalent).

It's fine to pick sentinel values for errors in context, but describing 0x80000000 as "pointless" in general with such a weak justification doesn't inspire confidence.
  • ratmice 3 hours ago

    Without the low int, the even/odd theorem falls apart for wraparound. I've definitely seen algorithms that rely upon that.

    I would agree that whether error values are in or out of band is pretty context-dependent, like the difference between answering a homework question wrong and your dog eating it. One is not a condition that can be graded.

    • umanwizard 2 hours ago

      What is the "even/odd theorem" ?

      • ratmice an hour ago

        That all integers are either even or odd, and that for an even integer both that integer + 1 and that integer - 1 are odd, and vice versa for odd numbers. Because two's complement has one more negative value than it has positive values, low(integer) and high(integer) have different parity. So when you wrap around with overflow or underflow you still transition from even to odd, or odd to even (e.g. for 8-bit integers, high = 127 is odd and wrapping to low = -128 is even).

        • xigoi an hour ago

          If you need wraparound, you should not use signed integers anyway, as that leads to undefined behavior.

          • ratmice an hour ago

            Presumably, since this language isn't C, they can define it however they want to; for instance, in Rust, `std::i32::MIN.wrapping_sub(1)` is a perfectly valid number.

            • xigoi an hour ago

              Nim (the original one, not Nimony) compiles to C, so making basic types work differently from C would involve major performance costs.

          • umanwizard an hour ago

            Signed overflow being UB (while unsigned is defined to wrap) is a quirk of C and C++ specifically, not some fundamental property of computing.

            • Symmetry 24 minutes ago

              Specifically, C comes from a world where allowing for machines that didn't use two's complement (or 8-bit bytes) was an active concern.

            • xigoi an hour ago

              Nim (the original one, not Nimony) compiles to C, so making basic types work differently from C would involve major performance costs.

              • ratmice 37 minutes ago

                Presumably unsigned ints want to return errors too?

                Edit: I guess they could get rid of a few numbers... Anyhow, it isn't a philosophy that is going to get me to consider Nimony for anything.

              • umanwizard 43 minutes ago

                > making basic types work differently from C would involve major performance costs.

                Not if you compile with optimizations on. This C code:

                  int wrapping_add_ints(int x, int y) {
                      return (int)((unsigned)x + (unsigned)y);
                  }
                
                Compiles to this x86-64 assembly (with clang -O2):

                  wrapping_add_ints:
                          lea     eax, [rdi + rsi]
                          ret
                
                Which, for those who aren't familiar with x86 assembly, is just the normal instruction for adding two numbers with wrapping semantics.

  • sevensor 3 hours ago

    I have been burned by sentinel values every time. Give me sum types instead. And while I’m piling on, this example makes no sense to me:

        proc fib[T: Fibable](a: T): T =
          if a <= 2:
            result = 1
          else:
            result = fib(a-1) + fib(a-2)
    
    Integer is the only possible type for T in this implementation, so what was the point of defining Fibable?

    • Hendrikto 3 hours ago

      I agree about sentinel values. Just return an error value.

      I think the fib example is actually cool though. Integers are not the only possible domain; everything that supports <=, +, and - is. It could be int, float, a vector/matrix, or even some weird custom type (provided that Nim has operator overloading, which it seems to).

      May not make much sense to use anything other than int in this case, but it is just a toy example. I like the idea in general.

      • sevensor 2 hours ago

        Well, I agree about Fibable, it’s fine. It’s the actual fib function that doesn’t work for me. T can only be integer, because the base case returns 1 and the function returns T. Therefore it doesn’t work for all Fibables, just for integers.

        • cb321 an hour ago

          In this case, it compiles & runs fine with floats (if you just delete the type constraint "Fibable") because the literal "1" can be implicitly converted into float(1) { or 1.0 or 1f64 or float64(1) or 1'f64 or ..? }. You can think of the "1" and "2" as having an implicit "T(1)", "T(2)" -- which would also resolve your "doesn't work for me" if you prefer the explicitness. You don't have to trust me, either. You can try it with `echo fib(7.0)`.

          Nim is about choice in many dimensions that other PLangs are insistently prescriptive about: GC or not (and what kind), many kinds of spelling, a new operator vs. an overloaded old one, etc. Some people actually dislike choice because it allows others to choose differently, and the ensuing entropy creates cognitive dissonance. Code formatters are maybe a good example of this? They may not phrase their opposition as being "against choice" as explicitly as I am framing it, but I think the "My choices only, please!" sentiment is in there if they are self-aware.

        • Hendrikto an hour ago

          I see, I misunderstood your complaint then.

          However, the base case being 1 does not preclude other types than integers, as cb321 pointed out.

    • treeform 12 minutes ago

      There can be a lot of different integer types: int16, int32, ..., their unsigned variants, even huge BigNum integers of any length.

andyferris 3 hours ago

> floats can be NaN and integers should be low(int) if they are invalid (low(int) is a pointless value anyway as it has no positive equivalent).

I have long thought that we need a NaI (not an integer) value for our signed ints. Ideally, the CPU would have overflow-aware instructions similar to floats that return this value on overflow and cost the same as wrapping addition/multiplication/etc.
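
A software sketch of those semantics (the name `NaI`, the proc, and the 32-bit width are all just for illustration):

    const NaI = low(int32)   # the reserved "not an integer" value

    proc addNaI(a, b: int32): int32 =
      # NaN-like rules: NaI is sticky, and overflow produces NaI.
      if a == NaI or b == NaI:
        return NaI
      let wide = a.int64 + b.int64
      if wide > int64(high(int32)) or wide < int64(low(int32)) + 1:
        result = NaI
      else:
        result = int32(wide)

In software this obviously costs branches; the point above is that hardware support could make it as cheap as a wrapping add.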

  • mikepurvis an hour ago

    From an implementation point of view, it would be similar to NaN; a designated sentinel value that all the arithmetic operations are made aware of and have special rules around producing and consuming.

jcmfernandes 22 minutes ago

>WCET ("worst case execution time") is an important consideration: Operations should take a fixed amount of time and the produced machine code should be predictable.

Good luck. Give the avionics guys a call if you solve this at the language level.