> I won’t fall into the trap of trying to define Monads in this post. Instead, let’s talk about monadic-style APIs – that is, APIs that allow you to do a bunch of things one after another, with the ability to use the result of a previous computation in the next computation, and also allows some logic to happen between steps.
Am I crazy, or did he just give a really good definition of monads in programming? I think that it benefits by not letting itself get bogged down in Category Theory nomenclature which doesn't actually matter when programming.
He described a problem people use monads to solve, not monads themselves.
Haskell people do talk about monadic vs. applicative combinators, which differ in whether you can use the results of a previous step in the next ones. But that doesn't have a direct relation to the actual definition of those.
But yes, if you are teaching a programming language that uses monads to someone, you will probably want to explain the problem they solve, not the actual structures. As with most things in math, the structures become obvious once you understand the problem.
It's a good description of one application of monads, which is often helpful to beginners if they have been thrown into real code without yet understanding the "why" of monads. If you look up "railway-oriented programming," you'll find more presentations of it.
I think it is a very practical place to start, especially for programmers who have been thrown into a codebase while still new to monads, because it helps them avoid a common mistake that plagues beginners: accidentally dropping error values on the non-success track. Often you simply want to drop values on the non-success track, and there are convenient idioms for doing so, but just as often you need to examine those values so you can report failures, whether by returning metrics on validation failures, providing the right status code in an HTTP response, etc. Railway-oriented programming is a vivid metaphor that reminds programmers that they need to make a decision about how to handle values on the other track.
No, this isn’t a good description of monads. It merely describes a case that shows up sometimes.
Dang, when I made this silly, little comment about FP, I didn't expect to get corrected by a legend in the field!
Thanks for taking the time to respond.
A monad is just a monoid in the category of endofunctors, what's the problem?
In the OOP world I’ve seen this pattern called chaining: usually either method or object chaining.
Smalltalk (and Dart) also have "cascading" which is method chaining with special supporting syntax e.g. in ST you'd send four different messages to the same object with something like
scene add: sprite;
add: otherSprite;
setBackGround: stage;
start
I'm not sure if it matches the "reuse values from previous computation" but it should since messages will affect the object, you just don't have local variables.
Visual Basic has the `with` statement for that: https://learn.microsoft.com/en-us/dotnet/visual-basic/langua...
Nim has a similar `with` for the same use case. It can be handy!
It is using ';' instead of parenthesizing the messages to the objects, correct?
In Smalltalk, `;` does two things: terminates the current message (EDIT: while ignoring its return value) and propagates the target object of the current message as a target for the following message.
So this:
scene add: sprite;
add: otherSprite;
setBackGround: stage;
start
is equivalent to:
scene add: sprite.
scene add: otherSprite.
scene setBackGround: stage.
scene start.
In Dart, they use a `..` prefix instead of a `;` postfix: https://dart.dev/language/operators#cascade-notation
You can model this with monads easily, but it's just one, very limited application of them - monads are much more general.
It's a style I really enjoy, and it's definitely not exclusive to one language or paradigm, exactly. I see it as more or less of a kind with pipelines in Unix shells, too.
In Scala, a language with OOP heritage and support, plus lots of functional programming features, some of the most common methods you use in such chains are monads.
Not really. The big important part of monads is flattening/unnesting the output.
Basically, if you can convert a `Foo<T>` into a `Foo<U>` by applying a function `T -> U`, it's a monoid. Think `map` or `fold`.
But if you can convert a `Foo<T>` into a `Foo<U>` by applying a function `T -> Foo<U>`, it's a monad. Flattening is "some logic", but not any logic, it's inherent to `Foo<>` itself.
Your point on unnesting is apt but don't you mean functor instead of monoid?
Yeah, you're right, I do. Thank you.
It's a good spit, some people used to describe them as "programmable semicolon" but while it's simple, it may be too short for most people to grasp.
I think you just fell into the trap.
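To make that distinction concrete, here is a minimal Gleam sketch using the stdlib's gleam/result (the demo function and the values are made up for illustration):

import gleam/result

pub fn demo() -> #(Result(Int, Nil), Result(Int, Nil)) {
  // functor-style: the callback returns a plain value and map rewraps it
  let mapped = result.map(Ok(1), fn(x) { x + 1 })
  // monad-style: the callback itself returns a Result and try flattens it,
  // so you get Ok(2) rather than Ok(Ok(2))
  let flattened = result.try(Ok(1), fn(x) { Ok(x + 1) })
  #(mapped, flattened)
}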
The greatest power of BEAM-based languages is the fully preemptive actor model. Nobody else supports it. This is a superpower, the solution of most problems with concurrent programming.
In Erlang and Elixir, actors and actor-based concurrency hold the central place in the corresponding ecosystems, well supported by extensive documentation.
In Gleam, actors and OTP are an afterthought. They are there somewhere, but underdocumented and abandoned.
This is exactly what I want from Gleam. It does seem to be underdocumented and abandoned. Is there any understanding of why? Like you say, this seems like a super power. I see so much potential. A language that’s ergonomic, pragmatic as the author says, great performance, low-ish barrier to entry, etc. It seems like it could be an awesome tool for building highly reliable software that’s not so difficult to maintain.
It is not abandoned, I am the maintainer. The documentation covers the APIs of the package but not the “zen” of the wider OTP framework, for that the official OTP documentation and existing books are recommended.
That’s great information, thanks. I seem to recall checking the commit history and it didn’t seem dead, but I’m also accustomed to experimental packages being dropped early and often. Do you want help with maintenance or are you doing this independently?
Contributions are very much welcome! Gleam is entirely a community project.
I'm not at that level yet, but I'd love to if I get there. I look at projects like these and wonder what the hell I've been doing with my career. Thanks for the invite!
OK, thanks! I will try to write something with it and perhaps come help with the documentation.
It is a very young language; that may explain the why.
Are there any articles that do a deeper dive into this? I ask because straight up I've been curious about Gleam, but not enough to do a really deep dive because Elixir is too good and, like Erlang, is a very special kind of dynamic language that doesn't leave me feeling too lacking.
As I understand it, there have been a few "high profile" attempts to bring static typing to Erlang, all of which gave up when it came to typing messages. Your comment essentially confirms my bias, but is Gleam making real strides in solving this, or is it poised to merely cater to those who demand static typing with curly braces--everything-else-be-damned?
It is referenced in multiple places on the main site. The home page has a code snippet from it, though it does not go into any detail about any specific library.
I understand things best by comparing across different languages so don’t take this the wrong way but I wonder if you can help me understand: If say I start a goroutine in Go and give it a channel to use as a mailbox, concurrency in Go is cooperative but it’ll automatically use OS threads and yield whenever it reads from the channel. Does Erlang/OTP do something different? If so what does it do and what are the advantages? Or is it more that the library and ecosystem are built around this model?
I believe Go yields after every function exit. Erlang does the same, but there are no loops (you must use tail calls) so you can't lock up the CPU with a while(true).
Erlang gives a reductions budget to processes. After a certain number of reductions, or if a process hits a yield point (like waiting to receive a message), the process will yield allowing another process to run.
Go uses preemption now (since 1.14), but it didn't always. It used to be that you could use a busy loop and that goroutine would never yield. Yield points include things like function entries, syscalls, and a few other points.
> its actor implementation is not built upon Erlang/OTP
This seems to be the opposite of pragmatic.
The most pragmatic approach to actors when you're building a BEAM language would be to write bindings for OTP and be done with it. This sounds kind of like building a JVM language with no intention of providing interop with the JVM ecosystem—yeah, the VM is good, but the ecosystem is what we're actually there for.
If you're building a BEAM language, why would you attempt to reimplement OTP?
Because of type safety. The OTP lib is already great, but there are still some things missing, most requested being named processes. But there is work being done to figure out how to best make it work for gleam.
The question of type safety has come up so often here that I guess it's worth replying:
That's exactly what I mean by this not seeming pragmatic. Pragmatic would be making do with partial type safety in order to be fully compatible with OTP. That's the much-maligned TypeScript approach, and it worked for TypeScript because it was pragmatic.
Now, maybe Gleam feels the need to take this approach because Elixir is already planning on filling the pragmatic gradually-typed BEAM language niche. That's fine if so!
Type safety is one of the goals of the language I don't see a reason to throw it out of the window now. I see what you mean, but the type system is one of the things that makes gleam pragmatic. If you really need some missing OTP feature you can super easily step into Erlang using FFI and get it. That's one of the reasons the article doesn't call gleam pure.
And what has this approach gotten them? A language as complex as C++ and Haskell combined, but that still has runtime type errors. A TypeScript backlash is coming.
It uses the same primitives as Erlang, the difference is that it exposes type safe APIs instead of untyped ones which you would get from using the Erlang abstractions.
It implements the same protocols and does not have any interop shortcomings.
I agree with the part about reusing OTP but some of the server syntax of Erlang and Elixir is not good IMHO. I never liked using those handle_* functions. Give them proper names and you cover nearly all the normal usage, which is mutating the internal state of a process (an object in other families of languages.) That would be the pragmatic choice, to lure Java, C++ programmers.
Elixir gives you Agent, which is what you want, but for reasons, Agent is a bad choice.
What you're not seeing with the handle_* functions is all the extra stuff in there that deals with, for example, "what if the thing you want to access is unavailable?". That's not really something that for example go is able to handle so easily.
defmodule Robot do
  use GenServer

  def start_link(), do: GenServer.start_link(__MODULE__, nil, name: __MODULE__)

  def init(state), do: {:ok, state}

  def handle_call(:get_state, _from, state) do
    {:reply, something(), state}
  end

  def get_state() do
    GenServer.call(__MODULE__, :get_state)
  end
end

{:ok, _pid} = Robot.start_link()
Robot.get_state()
just let me write (note the new flavor of def)
defstatefulmodule Robot do
def get_state() do
something()
end
end
robot = Robot.new()
robot.get_state()
Possibly add a defsync / defasync flavor of function definition to declare when the caller has to wait for the result of the function.
The idea is that I don't have to do the job of the compiler. It should add the boilerplate during the compilation to BEAM bytecode.
I know that there are a number of other possible cases that the handle_* functions can accommodate and this code does not, but this object-oriented-style state management is the purpose of almost all the occurrences of GenServers in the code bases I saw. Unfortunately it's littered by handle_* boilerplate that hides the purpose of the code and as all code, adds bugs by itself.
So: add handle_* to BEAM languages for maximum control but also add a dumbed down version that's all we need almost anytime.
Ok, I kind of see what you're saying, but IMHO, you're trying to hide the central, enabling abstraction of BEAM environments, which is sending messages to other processes.
If you really don't like the get_state above, I think it'd make more sense to just ditch it, and use GenServer.call(robot, :get_state) in places where you'd call robot.get_state(). Those three lines of definition don't seem to be doing you much good, and calling GenServer directly isn't too hard; I probably wouldn't write the underlying make_ref / monitor / send / receive / demonitor myself in the general case, but it can be useful sometimes.
In my experience with distributed Erlang, we'd have the server in one file, and the client in another; the exports for the client were the public API, and the handle_calls were the implementation. We'd often have a smidge of logic in the client, to pick the right pg to send messages to or whatever, so it was useful to have that instead of just a gen_server:call in the calling code.
In the early days of Elixir what you are proposing here was popular[1], but over time the community largely decided it wasn't beneficial and I rarely see it any more.
It is production ready and has been used for numerous non-trivial projects. Experimental in this context means there is expected to be API changes and feature additions in future.
Gleam's 1.0 release was in May and it's still adding major features.
JavaScript support looks interesting. Browsing the package repo, I don't see how to tell which packages are supported on Erlang's VM, when compiling to JavaScript, or both. JavaScript-specific documentation seems pretty thin so far?
You're right about the lack of FFI-specific docs, but Gleam is such a simple language that it's very workable.
I wrote Vleam[0], which allows writing Gleam inside Vue SFCs, and the experience was pretty good even without the docs.
You do have to sometimes read the source of other Gleam packages to understand how things work, but again -- Gleam is so simple it's not too bad of an experience.
This is a very concise overview! I have made a small example chat app [1] to explore two interesting aspects of gleam: BEAM OTP and compilation to javascript (typescript actually). If anyone is interested...
The `use` syntax is interesting - don't recall seeing anything similar before. But I'm struggling to understand how exactly it is executed and a glance at the Gleam docs didn't help.
Is the `use` statement blocking (in which case it doesn't seem that useful)? Or does it return immediately and then await at the point of use of the value it binds?
Hmm, it definitely looks more interesting in combination with effect handlers. Still not sure I find it super compelling in Gleam vs just not using callbacks.
It’s a generalization of async/await syntax in languages like JavaScript or Swift. I like that it provides a generalized syntax that could be used for coroutines, generators, or async/await without adding any of those specifically to the language syntactically.
One level of callback nesting in a function is totally fine, two is a bit confusing, but if you have many async things going on do you really want 10, 15, 20 levels of nesting? What to do about loops?
I certainly greatly prefer async programming in async/await languages that keep the appearance of linear function execution over stacking my callbacks and having a ton of nesting everywhere.
Everything after the line containing '<-' happens in a callback.
Since it's a callback, I assume it's up to the function whether to call it, when to call it, and how many times to call it, so this can implement control statements.
I would guess that it also allows it to be async (when the callback isn't called until after an I/O operation).
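As a concrete sketch of that desugaring, assuming the stdlib's gleam/int and gleam/result (the parse_both functions are made up for illustration):

import gleam/int
import gleam/result

pub fn parse_both(a: String, b: String) -> Result(Int, Nil) {
  use x <- result.try(int.parse(a))
  use y <- result.try(int.parse(b))
  Ok(x + y)
}

// ...which the compiler rewrites into nested callbacks:

pub fn parse_both_desugared(a: String, b: String) -> Result(Int, Nil) {
  result.try(int.parse(a), fn(x) {
    result.try(int.parse(b), fn(y) { Ok(x + y) })
  })
}

So the code after `<-` is just an anonymous function handed to `result.try`; whether and when it gets called is entirely up to that function.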
That was way more than a solution for callback hell. With some plumbing, you could get Continuation monad working! With no further support from the language, too. I really miss LiveScript, it's a shame its development stopped. If only it could emit TypeScript, it would still have a chance to fight back, I think.
Ha, ok so I gotta give one of these "that's a really strange thing to get hung up on" responses.
Erlang and Elixir don't overload the `+` operator. In fact, they don't overload ANY operators. If you can forgive the syntactic choice of the operator itself (which I think is pretty fair considering Erlang predates Postgres by a decade and F# by two decades), this allows them to be dynamic while maintaining a pretty high level of runtime type safety. For example, one of the "subtle bugs" people refer to when criticizing dynamic languages (even strongly typed dynamic languages) is that the following would work when both args are given strings or numbers:
function add(a, b) { return a + b }
Erlang/Elixir eliminate this particular subtle bug (and it goes beyond strings and numbers) since:
def add(a, b), do: a + b
will only work on numbers and raise if given strings.
ML (which is the precursor to OCaml/F#), Pascal, BASIC, and SQL use <>.
If you consider that <, <=, etc are used as comparison operators it makes sense for <> to be in that camp. I actually never thought of it that way.
>It doesn’t predate sql and certainly not it’s use in mathematics.
What do you mean by "it's use in mathematics"? To my knowledge, <> was invented by the Algol language creators to use it for inequality. There was no previous use in mathematics. And in my opinion, that was an unfortunate error.
When looking at new languages, getting the basics right is the first thing I look at. Clumsy string concatenation is a blocker in my business, which is like 75% of the code.
Actually in Elixir when doing string building you want to use "improper" lists which lets you very efficiently build up a string without doing any copying.
Ha, I was going to mention this but there is none. `+` is for both ints and floats. OCaml, which is statically typed, has a separate operators for ints and floats, though.
I don't want to get into it but Erlang is dynamic by design. There have been several attempts to bring static typing to it over the years which have failed. People are still trying, though!
> One thing I dislike with erlang based languages (both gleam and elixir) is that they use “<>” for string concatenation.
Erlang doesn't use <> for concatenation so it's odd to name it in this comment, like that language and its developers have anything to do with your complaint. If it upsets you so much, lay it at the feet of the actual groups that chose <> for concatenation instead.
- in Elixir <> is Binary concatenation operator. Concatenates two binaries. This seems like it might be kind of a joke, actually, purposefully confusing "binary operator" with "an operator that takes two binaries" for humorous effect?
- in Gleam <> is string concatenation operator
As far as I can see it, they are taking inspiration from Haskell, where <> denotes the monoid binary operation, one concrete example being in the monoid of Lists binary operator being list concatenation, of which String is one example.
But really, <> for inequality is also a kind of dumb and nonstandard idea (from a mathematical notation perspective), originating from Algol. !=, which C popularized, is clearer and corresponds to the mathematical symbol; of course =/= would be even closer, but that is one more character.
ML originally used <> for inequality, following the standard (in CS) set by Algol, and it was Haskell which deviated from that tradition. So F# still uses the Algol tradition, but Haskell uses /= and C and others use !=, for more mathematical and logical notation.
F# inherits <> from ML, which inherits it from Algol, which invented it. But that was actually a bad idea, since it deviates from mathematical practice. To follow math, it would be better to use != as in C and those inspired by it, or /= as in Haskell. Or maybe even =/= if you really want to go for the mathy looking notation.
Elixir uses <> as an operator for concatenation of binaries, (which does form a monoid of course), not to be confused with how Haskell uses <> as a binary operator of a Monoid, but for sure inspired by it. And Gleam picked it up from them, probably, to use for a special case of a list monoid, String. And Haskell created <> for Monoid, because it would be too confusing to use multiplication sign for the binary operation like mathematicians do. It would not be ok in programming context.
Then Gleam (and others) use “|>” when piping with “|” would make more sense, except that’s a bitwise OR, not to be confused with “||” which is… string concatenation (in Postgres).
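For reference, here is a minimal sketch of how those operators look in Gleam (assuming the stdlib's gleam/int and gleam/io; the values are made up):

import gleam/int
import gleam/io

pub fn main() {
  // <> concatenates Strings; + is for Ints, +. for Floats; != is inequality
  io.println("Hello, " <> "world")
  let sum = 1 + 2
  // |> pipes the value on the left into the call on the right
  sum
  |> int.to_string
  |> io.println
}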
I converted the example on the Gleam home page [0] to F#:
let spawn_task i =
    async {
        let n = string i
        printfn $"Hello from {n}"
    }

// Run loads of threads, no problem
seq { 0..200_000 }
|> Seq.map spawn_task
|> Async.Parallel
|> Async.RunSynchronously
|> ignore
The two are pretty similar, but I would give F# the nod on this one example because it doesn't actually have to create a list of 200,000 elements, doesn't require an explicit "main" function, and requires fewer brackets/parens.
Or maybe your "no one else is smart/brave enough to say it" wrapper detracted from your message.
Additionally, throwing yet more syntax/features at a language to make it pseudo-functional doesn't appeal to everyone. A grab bag of features doesn't an appealing language make (for some).
I wouldn’t call C# pseudo-functional. The ability to move from one approach to another approach as the problem spaces change is exactly what pragmatism is about. To be flexible and bending and to yield to what the developer needs to do. It is most definitely not for purists if that’s what you’re getting at, but then we are not talking about pragmatism to begin with.
Unless we mean to use the word “pragmatic” in the sense of “I like it, therefore it’s pragmatic” in which case we’ve entered circlejerk territory and the article should be flagged as such. But again, let’s be charitable and talk about what really is a pragmatic programming language.
As it stands, any competitors of F# are in more serious competition with C#. They just don’t know it. It’s worth discussing if we want to discuss pragmatic languages.
People forget their history lessons. This is exactly why and how C++ gained the massive adoption over C that it did. It allowed itself to be used as the Swiss knife of systems programming, and developers did what they wanted/needed to to get the job done. For every C89 (or Heaven forbid pre-standard C) purist, plentiful C++ programmers arose. And C# is entering that territory by stippling things on and incorporating them into its syntax. F# is excellent as a barometer in this regard. Good for testing the waters with a certain crowd and taking what works.
I don’t need to claim people aren’t “smart/brave” enough to point this out. That’s not what it’s about at all. It’s more about looking one or two steps ahead rather than looking at the here and now.
So, is Gleam pragmatic? Well, is it more pragmatic than C#?
C# is indeed adopting some functional techniques from F#, but C# is still so bogged down with imperative cruft that the resulting combination of styles is a mess.
Wow, this is a great overview. I’ve been playing with Gleam a bit and this was really helpful. I’ll definitely refer to this later.
I’d like to dig into the OTP library (I’m curious if anyone has worked with it much?) and create a state chart library with it, but I’m still firmly in the “I don’t totally get it” camp with a few parts of Gleam. I don’t deny that it’s pragmatic. Maybe it’s more so that I’m not up to speed on functional patterns in general. I was for years, but took a hiatus to write code for a game engine and supporting infrastructure. It was so Wild West, but I kind of liked it in the end. Lots of impure, imperative code, haha.
I've tried to get my head around functional programming and also OTP but I also just never got my head around it.
Functional programming seems too limiting and OTP seems more complicated than I would have hoped for a supposedly distributed concurrency system.
I'm sure it's just a skill issue on my part. Right now I'm way too rust-brained. I've heard lots of things about Gleam being good for productivity, but I don't feel unproductive writing web apps in Rust, yet I felt very unproductive trying to write a non-trivial web app in Gleam.
I agree. I've been trying to learn functional programming for years. My brain just doesn't get it. And I've actually built a non-trivial web app in Elm, and started trying to write one in Gleam and I was very very slow and unproductive. Eventually I gave up and wrote the whole thing in Go + TS for the frontend.
For Gleam I was trying to write the whole FE + BE in the same language - I really like that it can be compiled to JS, and I'm honestly sick of the whole React + seven thousand dependencies game, so I was using Lustre (an Elm-like library for Gleam). And again, I've programmed an app in Elm, after a lot of hair pulling, and in the end I didn't enjoy it that much.
I've gone through tutorials and I don't understand things like types having different wildly unrelated constructors, currying (I didn't notice much currying in Gleam but really disliked it in Elm, I cannot follow past the first or second arrow). For writing the front end of the app, I would make _zero_ progress unless referring to other Github projects (and it was hard to find any since Gleam was so new). Anyway, if someone has a book or something that can teach me this stuff it would be great. I want to use the OTP and a single language for FE/BE that's not JS. I'm not dumb, I've been programming since I was a little kid, but maybe I'm too stuck in imperative models.
Yeah FP can for sure take some getting used to before it clicks! I think a great resource for that is Gleam's Exercism track (https://exercism.org/tracks/gleam), not only will it teach you the language but by starting with small-ish exercises it can definitely help grokking FP concepts
And if you feel like you're stuck and need help Gleam's Discord is a great place to ask questions :)
Isn’t manual ser/de pretty common? I like it personally. Being explicit at program boundaries usually means far fewer bugs inside the program. In JS I can pile whatever JSON I want into an object, but eventually I need to throw Zod or something at it to tame the crazy.
Maybe a generic “pile this data into this value and pretend it’s safe” tool might be nice for prototyping.
I don't think manual ser/de is common at all, and in languages like Dart where it was used it is a massive pain point, so much so that they are adding macros to the language and the first macro they add is for serialization. What's not explicit about saying: hey, I have a struct, this is the data I expect, serialize/deserialize in this shape? Validation is another, but separate, concern. In JavaScript you are not doing anything manually, so I'm not sure why that's an example.
I'm a bit confused. How can you control how your data is serialized if not manually? Are there languages that use some kind of magically-figures-it-out layer that negotiates the appropriate serialization on the fly?
Or with build-time source generation (because this specific pattern of reflection is AOT-unfriendly). It's not as convenient if you are using default serializer options, but if you don't - it ties together JsonTypeInfo<T> and JsonSerializerOptions, so it ends up being a slightly terser way to write it. I do prefer the Rust-style serde annotations however.
record User(string Name, DateOnly DoB);

[JsonSerializable(typeof(User))]
partial class SerializerContext : JsonSerializerContext;

...

var user = new User("John", new(1984, 1, 1));
var response = await http.PostAsJsonAsync(
    url, user, SerializerContext.Default.User);
Sorry I wasn’t clear; I meant to use JavaScript as an example where it isn’t manual.
Despite it being easy to use, I find I inevitably wind up requiring a lot of ceremony and effort to ensure it’s safe. I’m not a huge fan of automatic serialization in that it appears to work fine when sometimes it shouldn’t/won’t. I agree that it’s a lot of effort though. I guess the question is if you want the effort up front or later on. I prefer up front, I guess.
You either trust the input or you don't. If you don't trust your input you need validation like Zod anyway. Parsing untrusted data without validation in Rust or Go is not much better than in JS. You get the basic types checked, but that's all. You need to validate at the boundaries with Rust or Go just the same as with JS. It seems to me that many bloggers of new trendy languages are not aware of validation. A value for name is a string, but how about the length?
That’s a good distinction. I almost always include validation in the process, but you’re right, it’s not inherent to serialization.
In the JavaScript space, Effect offers an awesome package for ser/de which integrates validation. I think it’s my favourite tool in the ecosystem, but I prefer it over options in many other languages as well.
I agree that the stdlib decoder functions aren't the most ergonomic, but I think people are aware it's a pain point and there is development in that area; these two packages for example:
This is the biggest reason I cooled a bit on Gleam and whenever I want to do some backend stuff I'd much rather use Rust (using serde to convert to structs) or Elixir (put it in dynamic maps).
I wish Gleam would implement some kind of macro system, making a serde-like package possible.
I understand why the `use` syntax is preferable for its generalizability to many different "callback style" things, but the whole construct of `use foo <- result.try(bar())` is so much worse than defining let* in ocaml and being able to write `let* foo = bar() in`...
> Running on the battle-tested Erlang virtual machine that powers planet-scale systems such as WhatsApp and Ericsson, Gleam is ready for workloads of any size.
Does a Gleam programmer in practice need to deal with Erlang? Do Erlang error messages leak through?
Pure Gleam will get you really far without having to touch any Erlang. I've done Gleam for almost a year now and there were very few cases where I needed to write Erlang code myself; usually there's already a library that deals with it for most common needs :)
> Could you say something about the cases where you did need to write Erlang code?
Sure! For one of my most used packages (https://github.com/giacomocavalieri/birdie) I needed to get the terminal width to display a nice output, that has to be implemented using FFI based on the specific runtime (erlang or js) so I had to write it in Erlang, that was just a couple of lines of code.
But now there's a Gleam package to do it, so if I were to rewrite it today I wouldn't even need to write Erlang for that and could just use that!
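For anyone curious what that FFI looks like, here is a minimal sketch of a target-specific external function in Gleam (the module, file, and function names are made up for illustration):

// declare the function once in Gleam and point each target at native code
@external(erlang, "my_ffi", "terminal_width")
@external(javascript, "./my_ffi.mjs", "terminalWidth")
pub fn terminal_width() -> Int

The Erlang side is then a couple of lines in an ordinary .erl module, which matches the "couple of lines of code" described above.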
> What kind of cases?
Usually it is when you need some functionality that has to rely on specific things from the runtime (like IO operations, actors on the BEAM, async on the JS target, ...) and there's no package to do it already.
Most of the common things (like file system operations and such) are already covered
> Were you already proficient in Erlang and its ecosystem?
Not at all :) I knew very little about Erlang (basically nothing behind the syntax), Gleam was my introduction to the BEAM ecosystem and it has worked out great so far!
Hope this is helpful, happy to share my experience here
If you error from Erlang or Elixir you will get the error as those languages construct them, even if you call that code from Gleam. The Gleam build tool attempts to print them more nicely than Erlang does by default, but it cannot add additional information to them. Gleam runtime errors have more information attached to them.
In practice runtime errors in Gleam are rare. The one place you'll likely have to deal with poor Erlang errors is if you are writing Erlang code to create Gleam bindings to an existing Erlang library.
Is there a way to implement matrix arithmetic with nice syntax (for instance, "A + B" to add two matrices A and B) in Gleam? The lack of ad-hoc polymorphism might paradoxically be a blessing.
It has some syntax similarities to Rust, but it has GC so there's no borrow checker (or any of the associated syntax). It is also fully immutable, unlike Rust. It leans heavily on sum types, just like Rust. Also expression-based syntax and some other things resemble Rust. However, it lacks Traits. Overall it looks Rust-ish but it's much simpler and has a functional focus.
With Go it shares a laser focus on simplicity and preemptive channel-based concurrency. But of course, for all the reasons listed above, it looks very different from Go in most other ways.
In many ways its language choices are the opposite of Python (static types, immutability, massive concurrency is the norm).
Does the C# runtime or language or so have any of the abilities that a BeamVM language can make use of? Afaik Gleam can do the same things you can do with Erlang, easily making a cluster of machines and easily running code on any of the machines in the cluster, load balanced, pattern matching on binary ...
Otherwise I don't understand what C# has to do with Gleam.
I believe the comparison to C# is incorrect. Given what the article highlights, the direct competitor to Gleam is going to be F#. By virtue of using .NET, it offers an order of magnitude better performance, significantly cheaper per-process/task cost and equally capable overall concurrency primitives. A much bigger ecosystem with many polished libraries too.
.NET's Task<T> + the state machine box for it emitted by Roslyn start at about 100B of heap-allocated memory, as of .NET 8/9.
The state machines generated for those by F# don't seem to be far behind either (I tested this before replying, F#'s asynchronous computations aka async { } appear to be much less efficient however so the guidance is to avoid them in favor of task { } and .NET's regular tasks).
Notably, BEAMs processes come with their own per-process GC each, which is going to add a lot of additional cost every time a new process is spawned. In a similar vein, Go's goroutines pre-allocate quite a bit of memory for their virtual stacks (60 KiB?).
.NET's tasks, as sibling comment mentions, are stackless coroutines[0] so their memory usage is going to be much lower. They come with a different set of tradeoffs but overall their cost is going to be significantly cheaper because bytecode is JIT/AOT compiled to native, the GC has precise tracking of object liveness and because .NET does not perform BEAM-style preemptive userspace scheduling.
Instead, .NET employs work-stealing threadpool with hill-climbing thread count scaling to achieve optimal throughput. This way, when the workers cannot advance all submitted work items in time, additional threads are injected, which are then preempted by kernel thread scheduler. This means that even if other workers are busy, the work items will not wait in the queues indefinitely. This is a pathological case and usually the thread count varies between 1-2x physical core count.
This has a downside of achieving potentially worse scheduling fairness, and independent tasks allocating can and do affect each other w.r.t. GC pause impact. I believe this to be non-issue because this is more than compensated for by spending <10x CPU time vs BEAM on computing the same result, and significantly less memory (I don't have hard numbers but .NET is quite well behaved in terms of allocation traffic) too. At the end of the day, Task<T> is designed for much higher granularity of concurrency and parallelism so it would be quite unusable if it had greater cost.
If you're curious, I made an un-scientific and likely incorrect but maybe interesting comparison some time ago (it's in Ukrainian but the table is readable enough I hope):
This calculates the CPU time and max MEM RSS usage required to spawn 1M tasks/coroutines/processes/futures that sleep for 5s and await their completion.
[0]: This might stop being true in a pure sense of the word in .NET 10 because the task handling is going to be completely changed by replacing state machines generated by a language that targets .NET with specially annotated methods, for which the runtime itself is going to implicitly emit state machines instead, allowing to pay the cost only at "true" suspend points. Reference: https://github.com/dotnet/runtimelab/blob/feature/async2-exp...
This is interesting! I learned quite a bit, thank you :)
One comment, though: it's not an apples-to-apples comparison! (I'm talking about your gist.) Specifically, in Elixir, you should create the state machine yourself (very easy in Elixir, thanks to pattern-matching definitions of functions) and schedule it across a few processes manually. You'd need ~100loc for this, and you'd get results much closer to C# and Rust.
What your comparison highlights is that the primitives behind the same async/await API can vary widely in their specifics. Goroutines and BEAM processes are much more lightweight than OS-level threads, but they are much more complex than just a compiled state machine, so they are heavier than coroutines. On the other hand, BEAM processes do the "preemptive userspace scheduling," which means they can be used for scheduling (an implementation of) coroutines. An interesting note about F# `async { }` - I thought this one also builds coroutines through CPS transform; it should be on par with Task regarding overhead.
It would be nice to expand the comparison: Java got virtual threads lately, I'm curious how that would fare - closer to C# or BEAM? Throw in Kotlin's coroutines (probably closer to C#) and Python's asyncio (or Lua native coroutines, but then you need to write the scheduler) for a good measure... We could see how concurrency primitives differ in overhead across more languages.
> One comment, though: it's not an apples-to-apples comparison! (I'm talking about your gist.) Specifically, in Elixir, you should create the state machine yourself (very easy in Elixir, thanks to pattern-matching definitions of functions) and schedule it across a few processes manually. You'd need ~100loc for this, and you'd get results much closer to C# and Rust.
Hmm, I can't see myself agreeing to this. Each state machine would have to be hand-rolled manually, or another abstraction for that would need to be introduced. It becomes a lot of manual work very quickly. Or it becomes a callback hell. Whichever you prefer, both are among reasons behind .NET pioneering async/await and its later adoption across other languages. I am unsure about scheduling cost too - significant amounts of application logic implemented in pure Erlang or Elixir will be subject to interpreter-tier performance which is unlikely to match the performance of JIT/AOT-compiled languages. BEAM has excellent allocation throughput thanks to per-process GCs, but not the raw speed of code execution.
The comparison was linked as an addendum to the discussion, didn't intend to make it the main focus :) But if you're interested in the context - the initial idea behind the comparison was that I got tired of hearing a colleague giving unjustified praise to Go. Especially when it comes to highly-granular interleaving of concurrent operations (for the lack of better term).
And while yes, BEAM processes and Goroutines belong to the category of stackful coroutines with preemptive userspace scheduling, which has a very different cost model when compared to how task continuations are handled by .NET's threadpool and task scheduler implementations, one of the most common ways of using `go func(...` in Golang is as if it was a task/future, where one or multiple goroutines are fired once to yield one or multiple results collected via channel/slice/some other collection + WaitGroup. In Go, the users do not have an alternative to this for highly granular ad-hoc concurrency save for re-implementing an async/await or fork/join-like API themselves that leverages a custom pool of goroutines and yield/join primitives.
So for all intents and purposes the numbers indicate the user experience when someone wants to spawn a lot of independently ran operations and then wait for them all to complete. This overhead is real, albeit pushing the task/goroutines/process count to 1 million is going to be very uncommon.
It's probably on the opposite end to what would idiomatic Erlang or Elixir code look like, but below is a popular pattern that relies on this:
using var http = new HttpClient {
    BaseAddress = new("https://news.ycombinator.com")
};

// Tasks are hot-started, making the requests parallel
var page1 = http.GetStringAsync("?p=1");
var page2 = http.GetStringAsync("?p=2");
Console.WriteLine(await page1 + await page2);
This way, you can also map e.g. an array of elements into a sequence of tasks, which is then fed into Task.WhenAll which awaits all of them and returns an array of results. This is what the linked comparison measures. You can easily spawn thousands of requests awaited concurrently and the runtime will scale with this very well (also because Socket has efficient epoll/kqueue integration underneath).
> On the other hand, BEAM processes do the "preemptive userspace scheduling,"
Perhaps you meant "processes are scheduled preemptively"? My understanding is that processes are the main scheduling unit of BEAM, much like goroutines are in Go (for which Go has an explicit mechanism for suspension mid-execution).
> It would be nice to expand the comparison: Java got virtual threads lately, I'm curious how that would fare - closer to C# or BEAM? Throw in Kotlin's coroutines (probably closer to C#) and Python's asyncio (or Lua native coroutines, but then you need to write the scheduler) for a good measure... We could see how concurrency primitives differ in overhead across more languages.
The Java way of solving this will be using its Structured Concurrency which is currently in an incubator. I did not add Python because Python example was attempted by a colleague and it was impossibly slow, and there are multiple ways of going about it. If you care about performance, especially in multi-tasking, then using Python at all is a terrible idea. I mostly focused on the languages that put concurrency and parallelism at their forefront, and without making it a proper automated comparison with controlled environment I don't think the numbers will be very useful, it's also more tedious than it looks unfortunately.
> An interesting note about F# `async { }` - I thought this one also builds coroutines through CPS transform; it should be on par with Task regarding overhead.
F#'s Async (and async { } blocks that express it) is the original source of where async/await comes from. It is implemented via F#'s "Computation Expressions"[0] as a form of do-notation. It is only afterwards that C# got its own async/await implementation. For all intents and purposes both achieve the same goal and both implement a stackless coroutine. F#, however, keeps its core netstandard2.0-compatible and overall runtime-agnostic making it unable to use the APIs that are available for newer targets.
As measured, it appears to be much preferable to use task { } blocks instead, which have significantly lower heap size impact, and are similar in their performance to C#'s async methods. It's important to note that async { } and task { } blocks have different semantics w.r.t. hot vs cold start, multiple awaits/result caching and explicit vs implicit cancellation propagation.
> One issue I’ve heard from F# devs was the lack of ability to comfortably call C# asynchrony methods
This is no longer the case starting with F# 6: https://learn.microsoft.com/en-us/dotnet/fsharp/whats-new/fs... C# and F# can now transparently interoperate using the same main Task<T> type (in the past, this was done via community libraries for C#->F#, and F#->C# was always possible with |> Async.AwaitTask).
> Would the async experiment also imply better interop between .NET languages?
This is a good question. If a language already uses Task<T> and ValueTask<T>, the interop is already a solved problem (like with F#). However, this could allow to completely remove or at least significantly simplify the implementation for emitting state machines / resumable code / coroutines for asynchronous methods for any language that targets .NET 10+ (it will most likely get into 10 but is not guaranteed yet from what I've heard).
For example, a guest language could simply choose to emit calls to async methods with special modreq(?) attribute and the runtime would handle suspension, creating the closure for the state captured by asynchronous continuation and then subsequent resumption without any additional work of that guest language to be async-compatible. You could likely even write that by hand in IL with ILAsm. This is a significant improvement for a .NET as a hosting platform, and arguably makes it better than JVM at some high-level scenarios (because for low-level it already supports almost the entirety of what is expressible with C plus struct generics, and for FP it supports tail. prefix for call* opcodes for mandatory tailcalls required by recursive functions).
And for those in the camp of "function coloring considered harmful" (which I'm not a fan of but to each their own), you could even have a guest language that completely hides the fact that it performs such async calls underneath.
I'd rather use an external job scheduler and message queue, e.g. in Kubernetes, rather than build it into the language runtime. With a message queue and horizontal pod autoscaling you can very quickly build an easy distributed workload with (at least to me) little effort in any language.
Or you could do it with almost zero effort in a BEAM language. And sometimes setting up a Kubernetes cluster is not easy. Suppose you want to have one part that is hosted on a cloud service and another that is on-prem.
It's certainly possible with Kubernetes, but you'll be fighting the Kubernetes paradigm. Plus you're checking in with this Kubernetes middleman for part A to understand the availability of part B, for example.
Kubernetes is highly devex-optimized for stateless services that are kinda cattle. Not all use cases fit so neatly into that rubric.
I would not say it's zero effort in BEAM. I wasn't able to figure out how to get it to work at all (I spent a weekend trying to build a distributed auth system, failed). But I would be able to spin up kubernetes, rabbitmq and a non BEAM language in not very long.
> Suppose you want to have one part that is hosted on a cloud service and another that is on-prem.
Is this hard with kubernetes? Seems pretty simple to add node taints and set up the appropriate VPN to me.
BEAM is not magic. It will be doing a lot of the same stuff that any other scheduler or message queue will do, so I would personally rather choose the more composable solution rather than have to manage BEAM and be forced to use one family of languages only.
I'm thinking if your company is already running kubernetes, adding BEAM on top is probably more effort than adding a message queue (given my limited understanding of BEAM)
Never said it was zero effort. It's almost zero effort. But you have to have deep knowledge of BEAM primitives if you intend on doing something crazy (still almost zero effort). If you don't know what you're doing or don't have experience in the BEAM you will probably get it wrong. Sometimes libraries haven't gotten it right so composability might not be there and you might have to write a sidecar genserver and supervisor (~40 LOC)
The good news is that in Elixir, though it is not easy to write (still low effort), most of it is pretty straightforward to read, so a junior could probably successfully work through and understand a CR.
The other very nice thing about doing it all in the BEAM is that you can make distribution and availability guarantees part of your testing suite very easily, because it's even part of your localdev flow. That also means your devs will write that test, instead of assuming they understand how that Kubernetes operator works (they don't).
I can understand having to import the "dirty" parts of the stdlib, like I/O, or the "heavy" parts, like Unicode or timezones. But why force someone to import every single type? Most functional languages have a prelude that covers the types every non-trivial program uses: booleans, numbers, strings, collections.
> But why force someone to import every single type?
That's not importing the types, it's importing a suite of functions related to the types.
https://hexdocs.pm/gleam_stdlib/gleam/int.html - gleam/int for example. The int type is already in the language and usable, this import brings in some specific functions that are related to operations on int.
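A small sketch of what that looks like in practice, assuming the stdlib's gleam/int and gleam/io:

import gleam/int
import gleam/io

pub fn main() {
  let x = 2 + 2                   // the Int type and + are built into the language
  io.println(int.to_string(x))    // to_string comes from the imported gleam/int module
}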
Because they selected to make a functional, not OO, language based largely on BEAM which was designed for Erlang, a functional, not OO, language. Why would you make an OO language if your goal is to make a functional language?
Dogma not appreciated. Personally don't care if a lang is functional or OO, they aren't exclusive categories. For example, several OO langs have added functional features. Don't see why this one couldn't use static methods for its immutable types. Wouldn't hurt anything, would it?
As mentioned in this thread, having to import libraries to operate on basic types is suboptimal to say the least.
One of Gleam's design goals is to not have multiple ways to do the same thing, so having to pick between using method chains or pipelines would work against that.
> having to import libraries to operate on basic types is suboptimal to say the least.
The language server will do this for you in a Gleam project.
I could imagine a method call done in a pipeline, but would have to work out the details. Maybe self/this or omit the variable name? Not sure how doable.
Folks recommended tools to alleviate Java verbosity back in the day as well. But you still have to read it—which unfortunately happens 100x more than writing.
It's not that it's wrong—at least I don't think so. It's that it's an example of a choice that is not pragmatic.
I suppose we should agree on what "pragmatic" even means, since it has become something of a cliché term in software engineering. To me, it roughly means "reflective of common and realistic use as opposed to possible or theoretical considerations".
So is having to import basic functionality a pragmatic design? I would argue no. Having to import basic functionality for integers, strings, and IO is not pragmatic in the sense that most realistic programs will use these things. As such, the vast majority of ordinary programs are burdened by extra steps that don't appear to net some other benefit.
Importing these very basic functionalities appeals to a more abstract or theoretical need for fine-grained control or minimalism. Maybe we don't want to use integers or strings in a certain code module. Maybe we want to compile Gleam to a microcontroller where the code needs to be spartan and freestanding.
These aren't pragmatic concerns in the context of the types of problems Gleam is designed to address.
To give a point of comparison, the Haskell prelude might be considered a pragmatic design choice, as can be seen from the article. It is a bundle of common or useful functionality that one expects to use in a majority of ordinary Haskell programs. One doesn't need to "import" the prelude; it's just there.
I don't personally find Gleam's design choice a bad one, and while GP was a bit flippant, I do agree that it is not an example of a pragmatic design choice.
Pragmatism is more than just giving people the quickest way to complete their task. There are other axes to consider, such as the simplicity of the compiler and the uniformity of the language experience. These contribute to the maintainability of the language itself and your own code also.
When the rule is "if you need a module, you must import it", and that applies equally to standard library modules, hex packages or your own internal modules, there are fewer mental overheads. The procedure is always the same. Incidentally, this also means that the Gleam language server can automatically add or remove import statements, which it now does [0].
Personally, I also find it pleasing that I can look at the top of a file and say "oh, this module appears to be doing some stuff with floating point math and strings". It often gives me an overview of what the module might be doing before I begin reading the detail.
I guess I don't entirely agree, but I do wonder why each import has to include 'gleam' in the path. Why can't it assume that the default path is 'gleam' and import libraries relative to that path, like `import string` instead of having to do `import gleam/string`?
EDIT: quick note, this is a tangent; Gleam does support partial application with `_` and it works with pipelines as well.
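A quick sketch of that, using gleam/string from the stdlib (the greet function is made up for illustration):

import gleam/string

pub fn greet() -> String {
  // by default the pipe inserts the piped value as the first argument
  let a = "Hello, " |> string.append("World!")
  // a `_` capture lets you pipe the value into any other position
  let b = "World!" |> string.append("Hello, ", _)
  a <> b
}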
> This is not how it works in curried languages, however. In curried languages with a |> operator, the first expression still returns "Hello, World!" but the second one returns "World!Hello, " instead. This can be an unpleasant surprise for beginners, but even experienced users commonly find that this behavior is less useful than having both of these expressions evaluate to the same thing.
The upside (on the curried side) is that you can define `|>` as a normal function (even without lazy semantics, as in OCaml.) How much of an "upside" this is will vary, but note that this generalizes to many other operators that can be added. The rest is a matter of API design, i.e., the order of arguments and the use of named arguments (and/or other syntax sugar.) For example, in the case of the post's example:
"Hello, "
|> Str.concat "World!"
You can get the "beginner friendly" semantics with just a little change to the `Str.concat` (assuming named args support, using OCaml syntax):
"Hello, "
|> Str.concat ~rest:["World!"]
In non-curried languages, this has to be a macro (or it needs to be built-in). If you already have macros in your language - that's good: you can easily make `|>` do things like `x |> func(arg1, _, arg2)` and more. However, if you make this a special case in the language, it will be hard to extend and impossible to generalize. So personally, I'd grade the options in order of power and convenience:
- infix macros
- infix functions in curried languages
- special, hardcoded |> operator
- beating your language black and blue until you get a work-alike (like with implicits in Scala)
There's also a special category of languages where pipelines are primitives (shells, jq), but that is outside the scope of this comment, since they are more than syntactic sugar for function application :)
I don't even think curried functions by default is a good idea, but that article really made me support them. Richard Feldman is usually so reasonable, what happened? That's the worst argument I've seen in a while.
> I won’t fall into the trap of trying to define Monads in this post. Instead, let’s talk about monadic-style APIs – that is, APIs that allow you to do a bunch of things one after another, with the ability to use the result of a previous computation in the next computation, and also allows some logic to happen between steps.
Am I crazy, or did he just give a really good definition of monads in programming? I think that it benefits by not letting itself get bogged down in Category Theory nomenclature which doesn't actually matter when programming.
He described a problem people use monads to solve, not monads themselves.
Haskell people do talk about monadic vs. applicative combinators that are different by whether you can use the results of a previous step on the next ones. But that doesn't have a direct relation with the actual definition of those.
But yes, if you are teaching a programming language that uses monads to someone, you will probably want to explain the problem they solve, not the actual structures. As most things in math, the structures become obvious once you understand the problem.
It's a good description of one application of monads, which is often helpful to beginners if they have been thrown into real code without yet understanding the "why" of monads. If you look up "railway-oriented programming," you'll find more presentations of it.
I think it is a very practical place to start, especially for programmers who have been thrown into a codebase while still new with monads, because it helps them avoid a common mistake that plagues beginners: accidentally dropping error values on the non-success track. Often you simply want to drop values on the non-success track, and there are convenient idioms for doing so, but just as often, you need to examine those values so you can report failures, by returning metrics on validation failures, by providing the right status code in an HTTP response, etc. Railway-oriented programming is a vivid metaphor that reminds programmers that they need to make a decision about how to handle values on the other track.
No, this isn’t a good description of monads. It merely describes a case that shows up sometimes.
Dang, when I made this silly, little comment about FP, I didn't expect to get corrected by a legend in the field!
Thanks for taking the time to respond.
A monad is just a monoid in the category of endofunctors, what's the problem?
Yin the OOP world I’ve seen this pattern called chaining : usually either method or object chaining.
Smalltalk (and Dart) also have "cascading" which is method chaining with special supporting syntax e.g. in ST you'd send four different messages to the same object with something like
I'm not sure if it matches the "reuse values from previous computation" but it should since messages will affect the object, you just don't have local variables.visual basic has the `with` statement for that https://learn.microsoft.com/en-us/dotnet/visual-basic/langua...
Nim has a similar `with` for the same use case. It can be handy!
It is using ';' instead of parenthesizing the messages to the objects, correct?
In Smalltalk, `;` does two things: terminates the current message (EDIT: while ignoring its return value) and propagates the target object of the current message as a target for the following message.
So this:
is equivalent to:
In Dart, they use a `..` prefix instead of a `;` postfix: https://dart.dev/language/operators#cascade-notation
You can model this with monads easily, but it's just one, very limited application of them - monads are much more general.
It's a style I really enjoy, and it's definitely not exclusive to one language or paradigm, exactly. I see it as more or less of a kind with pipelines in Unix shells, too.
In Scala, a language with OOP heritage and support, plus lots of functional programming features, some of the most common methods you use in such chains are monads.
Not really. The big important part of monads is flattening/unnesting the output.
Basically, if you can convert a `Foo<T>` into a `Foo<U>` by applying a function `T -> U`, it's a monoid. Think `map` or `fold`.
But if you can convert a `Foo<T>` into a `Foo<U>` by applying a function `T -> Foo<U>`, it's a monad. Flattening is "some logic", but not any logic, it's inherent to `Foo<>` itself.
Your point on unnesting is apt but don't you mean functor instead of monoid?
Yeah, you're right, I do. Thank you.
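To make that distinction concrete, here is a minimal Gleam sketch (assuming only the standard gleam/list and gleam/io modules): `list.map` takes a plain `fn(t) -> u` callback, which is the functor operation, while `list.flat_map` takes a `fn(t) -> List(u)` callback and flattens the nested lists, which is the monad operation.
```
import gleam/io
import gleam/list

pub fn main() {
  // Functor-style: the callback returns a plain value, so the list shape is kept.
  let doubled = list.map([1, 2, 3], fn(n) { n * 2 })
  // doubled == [2, 4, 6]

  // Monad-style: the callback itself returns a list, and flat_map flattens it.
  let expanded = list.flat_map([1, 2, 3], fn(n) { [n, n * 10] })
  // expanded == [1, 10, 2, 20, 3, 30]

  io.debug(#(doubled, expanded))
}
```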
It's a good take; some people used to describe them as a "programmable semicolon", but while that's simple, it may be too short for most people to grasp.
I think you just fell into the trap.
The greatest power of BEAM-based languages is the fully preemptive actor model. Nobody else supports it. This is a superpower, the solution of most problems with concurrent programming.
In Erlang and Elixir, actors and actor-based concurrency hold the central place in the corresponding ecosystems, well supported by extensive documentation.
In Gleam, actors and OTP are an afterthought. They are there somewhere, but underdocumented and abandoned.
This is exactly what I want from Gleam. It does seem to be underdocumented and abandoned. Is there any understanding of why? Like you say, this seems like a super power. I see so much potential. A language that’s ergonomic, pragmatic as the author says, great performance, low-ish barrier to entry, etc. It seems like it could be an awesome tool for building highly reliable software that’s not so difficult to maintain.
It is not abandoned, I am the maintainer. The documentation covers the APIs of the package but not the “zen” of the wider OTP framework, for that the official OTP documentation and existing books are recommended.
That’s great information, thanks. I seem to recall checking the commit history and it didn’t seem dead, but I’m also accustomed to experimental packages being dropped early and often. Do you want help with maintenance or are you doing this independently?
Contributions are very much welcome! Gleam is entirely a community project.
I'm not at that level yet, but I'd love to if I get there. I look at projects like these and wonder what the hell I've been doing with my career. Thanks for the invite!
OK, thanks! I will try to write something with it and perhaps come help with the documentation.
It is a very young language; that may explain the why.
Are there any articles that do a deeper dive into this? I ask because straight up I've been curious about Gleam, but not enough to do a really deep dive because Elixir is too good and, like Erlang, is a very special kind of dynamic language that doesn't leave me feel too lacking.
As I understand it, there have been a few "high profile" attempts to bring static typing to Erlang, all of which gave up when it came to typing messages. Your comment essentially confirms my bias, but is Gleam making real strides in solving this, or is it poised to merely cater to those who demand static-typing with curly braces--everything-else-be-damned?
Sorry, the end of my comment is quite reductive. Compiling to JS is pretty nice.
This is the Gleam OTP package: https://github.com/gleam-lang/otp
I agree it's underdocumented but doesn't seem abandoned (has commits in last week)
Hello! I’m the maintainer of the Gleam OTP library. It is not abandoned or an afterthought.
Don't be so modest!! You are the creator of the Gleam language as well.
Hi! Good to hear. Why is it not mentioned anywhere on the main site?
It is referenced in multiple places on the main site. The home page has a code snippet from it, though it does not go into any detail about any specific library.
I understand things best by comparing across different languages so don’t take this the wrong way but I wonder if you can help me understand: If say I start a goroutine in Go and give it a channel to use as a mailbox, concurrency in Go is cooperative but it’ll automatically use OS threads and yield whenever it reads from the channel. Does Erlang/OTP do something different? If so what does it do and what are the advantages? Or is it more that the library and ecosystem are built around this model?
I believe go yields after every function exit. Erlang does the same, but there are no loops (you must use tailcall) so you can't lock up the CPU with a while(true).
Erlang gives a reductions budget to processes. After a certain number of reductions, or if a process hits a yield point (like waiting to receive a message), the process will yield allowing another process to run.
Go uses preemption now (since 1.14), but it didn't always. It used to be that you could use a busy loop and that goroutine would never yield. Yield points include things like function entries, syscalls, and a few other points.
That used to be true, but no longer, goroutines are truly preëmptive, in 10ms time slices.
Thanks!
Gleam runs on the BEAM
It does. However, its actor implementation is not built upon Erlang/OTP, and currently is “experimental” and not even mentioned on the main site.
> its actor implementation is not built upon Erlang/OTP
This seems to be the opposite of pragmatic.
The most pragmatic approach to actors when you're building a BEAM language would be to write bindings for OTP and be done with it. This sounds kind of like building a JVM language with no intention of providing interop with the JVM ecosystem—yeah, the VM is good, but the ecosystem is what we're actually there for.
If you're building a BEAM language, why would you attempt to reimplement OTP?
Because of type safety. The OTP lib is already great, but there are still some things missing, most requested being named processes. But there is work being done to figure out how to best make it work for gleam.
The question of type safety has come up so often here that I guess it's worth replying:
That's exactly what I mean by this not seeming pragmatic. Pragmatic would be making do with partial type safety in order to be fully compatible with OTP. That's the much-maligned TypeScript approach, and it worked for TypeScript because it was pragmatic.
Now, maybe Gleam feels the need to take this approach because Elixir is already planning on filling the pragmatic gradually-typed BEAM language niche. That's fine if so!
Type safety is one of the goals of the language I don't see a reason to throw it out of the window now. I see what you mean, but the type system is one of the things that makes gleam pragmatic. If you really need some missing OTP feature you can super easily step into Erlang using FFI and get it. That's one of the reasons the article doesn't call gleam pure.
Gleam does not sacrifice OTP compatibility for type safety. It picks both.
And what has this approach gotten them? A language as complex as c++ and haskell combined, but that still has runtime type errors. A typescript backlash is coming.
It uses the same primitives as Erlang, the difference is that it exposes type safe APIs instead of untyped ones which you would get from using the Erlang abstractions.
It implements the same protocols and does not have any interop shortcomings.
I believe their implementation was written to support static typing (since Gleam is a statically-typed language).
I agree with the part about reusing OTP but some of the server syntax of Erlang and Elixir is not good IMHO. I never liked using those handle_* functions. Give them proper names and you cover nearly all the normal usage, which is mutating the internal state of a process (an object in other families of languages.) That would be the pragmatic choice, to lure Java, C++ programmers.
Elixir gives you Agent, which is what you want, but for reasons, Agent is a bad choice.
What you're not seeing with the handle_* functions is all the extra stuff in there that deals with, for example, "what if the thing you want to access is unavailable?". That's not really something that for example go is able to handle so easily.
What would be the proper name to handle a call other than handle_call?
This is Elixir syntax, not Gleam:
Instead of
just let me write (note the new flavor of def):
Possibly add a defsync / defasync flavor of function definition to declare when the caller has to wait for the result of the function. The idea is that I don't have to do the job of the compiler. It should add the boilerplate during the compilation to BEAM bytecode.
I know that there are a number of other possible cases that the handle_* functions can accommodate and this code does not, but this object-oriented-style state management is the purpose of almost all the occurrences of GenServers in the code bases I saw. Unfortunately it's littered by handle_* boilerplate that hides the purpose of the code and as all code, adds bugs by itself.
So: add handle_* to BEAM languages for maximum control but also add a dumbed down version that's all we need almost anytime.
Ok, I kind of see what you're saying, but IMHO, you're trying to hide the central, enabling abstraction of BEAM environments, which is sending messages to other processes.
If you really don't like the get_state above, I think it'd make more sense to just ditch it, and use GenServer.call(robot, :get_state) in places where you'd call robot.get_state(). Those three lines of definition don't seem to be doing you much good, and calling GenServer directly isn't too hard; I probably wouldn't write the underlying make_ref / monitor / send / receive / demonitor myself in the general case, but it can be useful sometimes.
In my experience with distributed Erlang, we'd have the server in one file, and the client in another; the exports for the client were the public API, and the handle_calls were the implementation. We'd often have a smidge of logic in the client, to pick the right pg to send messages to or whatever, so it was useful to have that instead of just a gen_server:call in the calling code.
In the early days of Elixir what you are proposing here was popular[1], but over time the community largely decided it wasn't beneficial and I rarely see it any more.
[1]: https://github.com/sasa1977/exactor
IIRC the re-implementation was necessary for type-safety.
It is production ready and has been used for numerous non-trivial projects. Experimental in this context means there are expected to be API changes and feature additions in the future.
Gleam's 1.0 release was in May and it's still adding major features.
JavaScript support looks interesting. Browsing the package repo, I don't see how to tell which packages are supported on Erlang's VM, when compiling to JavaScript, or both. JavaScript-specific documentation seems pretty thin so far?
You're right about the lack of FFI-specific docs, but Gleam is such a simple language that it's very workable.
I wrote Vleam[0], which allows writing Gleam inside Vue SFCs, and the experience was pretty good even without the docs.
You do have to sometime read the source of other Gleam packages to understand how things work, but again -- Gleam is so simple it's not too bad of an experience.
[0]: https://github.com/vleam/vleam
Most of the work for this has been done, the main missing piece is surfacing it in the UI, which someone will hopefully pick up soon.
This is a very concise overview! I have made a small example chat app [1] to explore two interesting aspects of gleam: BEAM OTP and compilation to javascript (typescript actually). If anyone is interested...
[1]: https://github.com/patte/gleam-playground
The `use` syntax is interesting - don't recall seeing anything similar before. But I'm struggling to understand how exactly it is executed and a glance at the Gleam docs didn't help.
Is the `use` statement blocking (in which case it doesn't seem that useful)? Or does it return immediately and then await at the point of use of the value it binds?
It is syntax sugar for CPS [1].
[1]: https://en.wikipedia.org/wiki/Continuation-passing_style
EDIT: I believe prior art is Koka's with statement: https://koka-lang.github.io/koka/doc/book.html#sec-with
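As a rough sketch of what that sugar does (assuming gleam/result's `try` and gleam/int's `parse` from the standard library), the two functions below should be equivalent; everything after the `<-` line becomes the body of the callback passed as the final argument:
```
import gleam/int
import gleam/result

// With `use`: the happy path reads top to bottom.
pub fn parse_and_double(input: String) -> Result(Int, Nil) {
  use n <- result.try(int.parse(input))
  Ok(n * 2)
}

// Roughly what it desugars to: an explicit callback passed to result.try.
pub fn parse_and_double_desugared(input: String) -> Result(Int, Nil) {
  result.try(int.parse(input), fn(n) { Ok(n * 2) })
}
```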
Hmm, it definitely looks more interesting in combination with effect handlers. Still not sure I find it super compelling in Gleam vs just not using callbacks.
It’s a generalization of async/await syntax in languages like JavaScript or Swift. I like that it provides a generalized syntax that could be used for coroutines, generators, or async/await without adding any of those specifically to the language syntactically.
One level of callback nesting in a function is totally fine, two is a bit confusing, but if you have many async things going on do you really want 10, 15, 20 levels of nesting? What to do about loops?
I certainly greatly prefer async programming with async/await languages that keep the appearance of linear function execution to stacking my callbacks and having a ton of nesting everywhere
Sounds like the new “capabilities” stuff in Scala.
The equivalent in F# is let! (F# computation expressions are quite powerful); in rust the ? operator. Other languages have similar features.
It's syntactic sugar, but the readability is worth it
You can do something similar in OCaml (as an operator defined at the library level, not a specialized new syntax): https://github.com/yawaramin/letops/blob/6954adb65f115659740...
There's a great article by Erika on use, definitely recommended :) https://erikarow.land/notes/using-use-gleam
I think it's similar to koka's 'with'.
https://koka-lang.github.io/koka/doc/book.html#sec-with
Everything after the line containing '<-' happens in a callback.
Since it's a callback, I assume it's up to the function whether to call it, when to call it, and how many times to call it, so this can implement control statements.
I would guess that it also allows it to be async (when the callback isn't called until after an I/O operation).
It really reminds me of LiveScript's "back-calls" [1], which were a solution for callback hell in JS.
1: https://livescript.net/#:~:text=Backcalls%20are%20very%20use...
That was way more than a solution for callback hell. With some plumbing, you could get Continuation monad working! With no further support from the language, too. I really miss LiveScript, it's a shame its development stopped. If only it could emit TypeScript, it would still have a chance to fight back, I think.
Gleam looks nice, but if an F# comparison were added, I think F# would come out ahead based on the author's priorities.
One thing I dislike with erlang based languages (both gleam and elixir) is that they use “<>” for string concatenation.
In F#, “<>” is the equivalent of “!=“. Postgres also uses <> for inequality so my queries and f# code have that consistency.
Ha, ok so I gotta give one of these "that's a really strange thing to get hung up on" responses.
Erlang and Elixir don't overload the `+` operator. In fact, they don't overload ANY operators. If you can forgive the syntactic choice of the operator itself (which I think is pretty fair considering Erlang predates Postgres by a decade and F# by two decades), this allows them to be dynamic while maintaining a pretty high level of runtime type safety. For example, one of the "subtle bugs" people refer to when criticizing dynamic languages (even strongly typed dynamic languages) is that the following would work when both args are given strings or numbers:
Erlang/Elixir eliminate this particular subtle bug (and it goes beyond strings and numbers), since the `+` version will only work on numbers and raise if given strings.
ML (which is the precursor to OCaml/F#), Pascal, BASIC, and SQL use <>. If you consider that <, <=, etc. are used as comparison operators, it makes sense for <> to be in that camp. I actually never thought of it that way.
Interesting table here highlighting old programming languages https://en.wikipedia.org/wiki/Relational_operator#Standard_r...
It doesn’t predate sql and certainly not it’s use in mathematics. There are other options for concatenation so this is an unfortunate error.
Shouldn’t copy Erlang, otherwise might as well use it.
>It doesn’t predate sql and certainly not it’s use in mathematics.
What do you mean by "it's use in mathematics"? To my knowledge <> was invented by Algol language creators to use it for inequality. There was no previous use in mathematics. And to my opinion, that was an unfortunate error.
Interesting, must have learned it so long ago… Pascal? that I conflated it with math class. Still ~1958 is rather venerable.
The plot thickens: apparently ++ is used for concatenation in Erlang. So I still find it a poor choice.
++ is for concatenating lists, it's not the only functional language that uses this.
Really though who cares? `=` is already misused in most programming languages.
When looking at new languages, getting the basics right is the first thing I look at. Clumsy string concatenation is a blocker in my business, which is like 75% of the code.
Actually in Elixir when doing string building you want to use "improper" lists which lets you very efficiently build up a string without doing any copying.
Oh ha, duh me, I did not consider it wasn't invented by Postgres.
Oh really? What's the operator for adding two floating point numbers then?
The solution to type confusion is not separate operators for every type, it's static types!
Ha, I was going to mention this but there is none. `+` is for both ints and floats. OCaml, which is statically typed, has separate operators for ints and floats, though.
I don't want to get into it but Erlang is dynamic by design. There have been several attempts to bring static typing to it over the years which have failed. People are still trying, though!
One thing I hate about F# and SQL is that they use <> as a "not equals" operator. In Haskell, <> is the binary operator of any Semigroup instance.
> One thing I dislike with erlang based languages (both gleam and elixir) is that they use “<>” for string concatenation.
Erlang doesn't use <> for concatenation so it's odd to name it in this comment, like that language and its developers have anything to do with your complaint. If it upsets you so much, lay it at the feet of the actual groups that chose <> for concatenation instead.
I just assumed it was an erlang thing since elixir and gleam both do it. Now it seems even more odd that erlang doesn’t do it but they both chose it.
- in Haskell <> is binary operator of a Monoid
- in Elixir <> is the binary concatenation operator. Concatenates two binaries.
This seems like it might be kind of a joke, actually, purposefully confusing "binary operator" with "an operator that takes two binaries" for humorous effect?
- in Gleam <> is string concatenation operator
As far as I can see it, they are taking inspiration from Haskell, where <> denotes the monoid binary operation, one concrete example being in the monoid of Lists binary operator being list concatenation, of which String is one example.
But really, <> for inequality is also kind of dumb and nonstandard idea (from mathematical notation perspective), originating from Algol. != which C popularized is more clear, and corresponds to the mathematical symbol, of course =/= would be even more close, but that is one more character.
ML originally used <> for inequality, following the standard (in CS) of Algol, and it was Haskell which deviated from that tradition. So F# uses still Algol tradition, but Haskell uses /= and C and others use !=, for more mathematical and logical notation.
Well binaries are <<>> so that's consistent at least. And <<>> is quotation marks in several languages, including French.
Guillemets are not the same and have their own symbols.
Yeah, ok. Go back to 1986 and tell the Erlang team to go use Unicode guillemets
Gleam is from the past few years.
« and » are also the hyperoperators in perl6/raku
https://docs.perl6.org/language/operators#Hyper_operators
https://docs.raku.org/language/operators#Hyper_operators
I don't like languages that use > a lot simply because if I accidentally paste a code snippet in my Bash shell it is likely to pipe to some file.
Also, <> was != in BASIC, I believe.
PS: Don't paste this comment in your shell.
F# inherits <> from ML, which inherits it from Algol, which invented it. But that was actually a bad idea, since it deviates from mathematical practice. To follow math, it would be better to use != as in C and those inspired by it, or /= as in Haskell. Or maybe even =/= if you really want to go for the mathy looking notation.
Elixir uses <> as an operator for concatenation of binaries, (which does form a monoid of course), not to be confused with how Haskell uses <> as a binary operator of a Monoid, but for sure inspired by it. And Gleam picked it up from them, probably, to use for a special case of a list monoid, String. And Haskell created <> for Monoid, because it would be too confusing to use multiplication sign for the binary operation like mathematicians do. It would not be ok in programming context.
Then Gleam (and others) use “|>” when piping with “|” would make more sense, except that’s a bitwise OR, not to be confused with “||” which is… string concatenation (in Postgres).
The author links to a blog post talking about railway oriented programming in f#.. it might be fair to assume they are aware of f#
All the more reason to include it in the comparison.
I converted the example on the Gleam home page [0] to F#:
The two are pretty similar, but I would give F# the nod on this one example because it doesn't actually have to create a list of 200,000 elements, doesn't require an explicit "main" function, and requires fewer brackets/parens.
[0]: https://gleam.run/
The creation of a list in the Gleam example is a choice, you could replace 'list' with 'iterator' and it would be lazy.
[flagged]
Or maybe your "no one else is smart/brave enough to say it" wrapper detracted from your message.
Additionally, throwing yet more syntax/features at a language to make it pseudo-functional doesn't appeal to everyone. A grab bag of features doesn't an appealing language make (for some).
I wouldn’t call C# pseudo-functional. The ability to move from one approach to another approach as the problem spaces change is exactly what pragmatism is about. To be flexible and bending and to yield to what the developer needs to do. It is most definitely not for purists if that’s what you’re getting at, but then we are not talking about pragmatism to begin with.
Unless we mean to use the word “pragmatic” in the sense of “I like it, therefore it’s pragmatic” in which case we’ve entered circlejerk territory and the article should be flagged as such. But again, let’s be charitable and talk about what really is a pragmatic programming language.
As it stands, any competitors of F# are in more serious competition with C#. They just don’t know it. It’s worth discussing if we want to discuss pragmatic languages.
People forget their history lessons. This is exactly why and how C++ gained the massive adoption over C that it did. It allowed itself to be used as the Swiss knife of systems programming, and developers did what they wanted/needed to in order to get the job done. For every C89 (or Heaven forbid pre-standard C) purist, plentiful C++ programmers arose. And C# is entering that territory by stapling things on and incorporating them into its syntax. F# is excellent as a barometer in this regard. Good for testing the waters with a certain crowd and taking what works.
I don’t need to claim people aren’t “smart/brave” enough to point this out. That’s not what it’s about at all. It’s more about looking one or two steps ahead rather than looking at the here and now.
So, is Gleam pragmatic? Well, is it more pragmatic than C#?
C# is indeed adopting some functional techniques from F#, but C# is still so bogged down with imperative cruft that the resulting combination of styles is a mess.
Wow, this is a great overview. I’ve been playing with Gleam a bit and this was really helpful. I’ll definitely refer to this later.
I’d like to dig into the OTP library (I’m curious if anyone has worked with it much?) and create a state chart library with it, but I’m still firmly in the “I don’t totally get it” camp with a few parts of Gleam. I don’t deny that it’s pragmatic. Maybe it’s more so that I’m not up to speed on functional patterns in general. I was for years, but took a hiatus to write code for a game engine and supporting infrastructure. It was so Wild West, but I kind of liked it in the end. Lots of impure, imperative code, haha.
Most people use the OTP lib! There's this super useful intro repo: https://github.com/bcpeinhardt/learn_otp_with_gleam
Incredible, thank you so much! This is exactly what I need.
I've tried to get my head around functional programming and also OTP but I also just never got my head around it.
Functional programming seems too limiting and OTP seems more complicated than I would have hoped for a supposedly distributed concurrency system.
I'm sure it's just a skill issue on my part. Right now I'm way too rust-brained. I've heard lots of things about Gleam being good for productivity, but I don't feel unproductive writing web apps in Rust, and I felt very unproductive trying to write a non-trivial web app in Gleam.
I agree. I've been trying to learn functional programming for years. My brain just doesn't get it. And I've actually built a non-trivial web app in Elm, and started trying to write one in Gleam and I was very very slow and unproductive. Eventually I gave up and wrote the whole thing in Go + TS for the frontend.
For Gleam I was trying to write the whole FE + BE in the same language - I really like that it can be compiled to JS, and I'm honestly sick of the whole React + seven thousand dependencies game, so I was using Lustre (an Elm-like library for Gleam). And again, I've programmed an app in Elm, after a lot of hair pulling, and in the end I didn't enjoy it that much.
I've gone through tutorials and I don't understand things like types having different wildly unrelated constructors, currying (I didn't notice much currying in Gleam but really disliked it in Elm, I cannot follow past the first or second arrow). For writing the front end of the app, I would make _zero_ progress unless referring to other Github projects (and it was hard to find any since Gleam was so new). Anyway, if someone has a book or something that can teach me this stuff it would be great. I want to use the OTP and a single language for FE/BE that's not JS. I'm not dumb, I've been programming since I was a little kid, but maybe I'm too stuck in imperative models.
Yeah FP can for sure take some getting used to before it clicks! I think a great resource for that is Gleam's Exercism track (https://exercism.org/tracks/gleam), not only will it teach you the language but by starting with small-ish exercises it can definitely help grokking FP concepts
And if you feel like you're stuck and need help Gleam's Discord is a great place to ask questions :)
Tried Gleam, but the fact that I have to manually serialize/deserialize things is pretty annoying; that doesn't seem very pragmatic.
Isn’t manual ser/de pretty common? I like it personally. Being explicit at program boundaries usually means far fewer bugs inside the program. In JS I can pile whatever JSON I want into an object, but eventually I need to throw Zod or something at it to tame the crazy.
Maybe a generic “pile this data into this value and pretend it’s safe” tool might be nice for prototyping.
I don't think manual ser/de is common at all, and in languages like Dart where it was required it has been such a massive pain point that they are adding macros to the language, and the first macro they added is for serialization. What's not explicit about saying "hey, I have a struct, this is the data I expect, serialize/deserialize in this shape"? Validation is another, but separate, concern. In JavaScript you are not doing anything manually, so I'm not sure why that's an example.
I'm a bit confused. How can you control how your data is serialized if not manually? Are there languages that use some kind of magically-figures-it-out layer that negotiates the appropriate serialization on the fly?
Many languages have some kind of macro or codegen system that allows serializing or deserializing based on type definitions. Eg (pseudocode):
Would give you something like:
I see, thanks. I thought maybe we were talking about the choice of json vs something else being automatic and chosen at runtime.
C# (or more precisely .NET libraries) does it using reflection. Attributes let you adjust the behaviour.
Or with build-time source generation (because this specific pattern of reflection is AOT-unfriendly). It's not as convenient if you are using default serializer options, but if you don't - it ties together JsonTypeInfo<T> and JsonSerializerOptions, so it ends up being a slightly terser way to write it. I do prefer the Rust-style serde annotations however.
Sorry I wasn’t clear; I meant to use JavaScript as an example where it isn’t manual.
Despite it being easy to use, I find I inevitably wind up requiring a lot of ceremony and effort to ensure it’s safe. I’m not a huge fan of automatic serialization in that it appears to work fine when sometimes it shouldn’t/won’t. I agree that it’s a lot of effort though. I guess the question is if you want the effort up front or later on. I prefer up front, I guess.
You either trust the input or you don't. If you don't trust your input you need validation like Zod anyway. Parsing untrusted data without validation in Rust or Go is not much better than in JS. You get the basic types checked, but that's all. You need to validate at the boundaries with Rust or Go just the same as with JS. It seems to me that many bloggers of new trendy languages are not aware of validation. A value for name is a string, but how about the length?
That’s a good distinction. I almost always include validation in the process, but you’re right, it’s not inherent to serialization.
In the JavaScript space, Effect offers an awesome package for ser/de which integrates validation. I think it’s my favourite tool in the ecosystem, but I prefer it over options in many other languages as well.
I agree that the stdlib decoder functions aren't the most ergonomic, but I think people are aware it's a pain point and there is development in that are, these two packages for example:
https://hexdocs.pm/decode
https://hexdocs.pm/toy/
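For context, this is roughly what the manual approach being discussed looks like: a sketch assuming the gleam_json package and the gleam/dynamic decoders as they existed around the 1.x stdlib releases (exact function names have shifted between versions):
```
import gleam/dynamic
import gleam/json

pub type Person {
  Person(name: String, age: Int)
}

// Every field is decoded explicitly; nothing is derived from the type.
pub fn person_from_json(input: String) -> Result(Person, json.DecodeError) {
  let decoder =
    dynamic.decode2(
      Person,
      dynamic.field("name", dynamic.string),
      dynamic.field("age", dynamic.int),
    )
  json.decode(from: input, using: decoder)
}
```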
This is the biggest reason I cooled a bit on Gleam and whenever I want to do some backend stuff I'd much rather use Rust (using serde to convert to structs) or Elixir (put it in dynamic maps).
I wish Gleam would implement some kind of macro system, making a serde-like package possible.
This is one of the complaints people have with Elm too. Json.Encode/Decode is a pain
I understand why the `use` syntax is preferable for its generalizability to many different "callback style" things, but the whole construct of `use foo <- result.try(bar())` is so much worse than defining let* in ocaml and being able to write `let* foo = bar() in`...
What would you say makes it much worse?
> Running on the battle-tested Erlang virtual machine that powers planet-scale systems such as WhatsApp and Ericsson, Gleam is ready for workloads of any size.
Does a Gleam programmer in practice need to deal with Erlang? Do Erlang error messages leak through?
Pure Gleam will get you really far without having to touch any Erlang. I've done Gleam for almost a year now and there were very few cases where I needed to write Erlang code myself; usually there's already a library that deals with it for most common needs :)
Could you say something about the cases where you did need to write Erlang code?
What kind of cases? Were you already proficient in Erlang and its ecosystem?
> Could you say something about the cases where you did need to write Erlang code?
Sure! For one of my most used packages (https://github.com/giacomocavalieri/birdie) I needed to get the terminal width to display a nice output, that has to be implemented using FFI based on the specific runtime (erlang or js) so I had to write it in Erlang, that was just a couple of lines of code.
But now there's a Gleam package to do it, so if I were to rewrite it today I wouldn't even need to write Erlang for that and could just use that!
> What kind of cases?
Usually it is when you need some functionality that has to rely on specific things from the runtime (like IO operations, actors on the BEAM, async on the JS target, ...) and there's no package to do it already. Most of the common things (like file system operations and such) are already covered
> Were you already proficient in Erlang and its ecosystem?
Not at all :) I knew very little about Erlang (basically nothing behind the syntax), Gleam was my introduction to the BEAM ecosystem and it has worked out great so far!
Hope this is helpful, happy to share my experience here
Thanks!
That, ah, doesn't quite answer the question about error messages.
If you error from Erlang or Elixir you will get the error as those languages construct them, even if you call that code from Gleam. The Gleam build tool attempts to print them more nicely than Erlang does by default, but it cannot add additional information to them. Gleam runtime errors have more information attached to them.
In practice runtime errors in Gleam are rare. The one place you'll likely have to deal with poor Erlang errors is if you are writing Erlang code to create Gleam bindings to an existing Erlang library.
Doesn’t it compare mostly to F#, rather than Haskell or OCaml? The examples in the post really look like F# to me
Is there a way to implement matrix arithmetic with nice syntax (for instance, "A + B" to add two matrices A and B) in Gleam? The lack of ad-hoc polymorphism might paradoxically be a blessing.
newbie here, how does gleam compare to golang, rust and python?
It has some syntax similarities to Rust, but it has GC so there's no borrow checker (or any of the associated syntax). It is also fully immutable, unlike Rust. It leans heavily on sum types, just like Rust. Also expression-based syntax and some other things resemble Rust. However, it lacks Traits. Overall it looks Rust-ish but it's much simpler and has a functional focus.
With Go it shares a laser focus on simplicity and preemptive channel-based concurrency. But of course, for all the reasons listed above, it looks very different from Go in most other ways.
In many ways its language choices are the opposite of Python (static types, immutability, massive concurrency is the norm).
[dead]
[flagged]
Does the C# runtime or language or so have any of the abilities that a BeamVM language can make use of? Afaik Gleam can do the same things you can do with Erlang, easily making a cluster of machines and easily running code on any of the machines in the cluster, load balanced, pattern matching on binary ...
Otherwise I don't understand what C# has to do with Gleam.
I believe the comparison to C# is incorrect. Given what the article highlights, the direct competitor to Gleam is going to be F#. By virtue of using .NET, it offers an order of magnitude better performance, significantly cheaper per-process/task cost and equally capable overall concurrency primitives. A much bigger ecosystem with many polished libraries too.
> significantly cheaper per-process/task cost
Can you elaborate? BEAM processes are very lightweight - what does CLR/.NET do to make them even cheaper?
I think a big difference is how C# uses stackless coroutines while anything on the BEAM uses stackful. How much cheaper is a good question however
.NET's Task<T> + the state machine box for it emitted by Roslyn start at about 100B of heap-allocated memory, as of .NET 8/9.
The state machines generated for those by F# don't seem to be far behind either (I tested this before replying, F#'s asynchronous computations aka async { } appear to be much less efficient however so the guidance is to avoid them in favor of task { } and .NET's regular tasks).
Notably, BEAMs processes come with their own per-process GC each, which is going to add a lot of additional cost every time a new process is spawned. In a similar vein, Go's goroutines pre-allocate quite a bit of memory for their virtual stacks (60 KiB?).
.NET's tasks, as sibling comment mentions, are stackless coroutines[0] so their memory usage is going to be much lower. They come with a different set of tradeoffs but overall their cost is going to be significantly cheaper because bytecode is JIT/AOT compiled to native, the GC has precise tracking of object liveness and because .NET does not perform BEAM-style preemptive userspace scheduling.
Instead, .NET employs work-stealing threadpool with hill-climbing thread count scaling to achieve optimal throughput. This way, when the workers cannot advance all submitted work items in time, additional threads are injected, which are then preempted by kernel thread scheduler. This means that even if other workers are busy, the work items will not wait in the queues indefinitely. This is a pathological case and usually the thread count varies between 1-2x physical core count.
This has a downside of achieving potentially worse scheduling fairness, and independent tasks allocating can and do affect each other w.r.t. GC pause impact. I believe this to be non-issue because this is more than compensated for by spending <10x CPU time vs BEAM on computing the same result, and significantly less memory (I don't have hard numbers but .NET is quite well behaved in terms of allocation traffic) too. At the end of the day, Task<T> is designed for much higher granularity of concurrency and parallelism so it would be quite unusable if it had greater cost.
If you're curious, I made an un-scientific and likely incorrect but maybe interesting comparison some time ago (it's in Ukrainian but the table is readable enough I hope):
https://gist.github.com/neon-sunset/8fcc31d6853ebcde3b45dc7a...
This calculates the CPU time and max MEM RSS usage required to spawn 1M tasks/coroutines/processes/futures that sleep for 5s and await their completion.
[0]: This might stop being true in a pure sense of the word in .NET 10 because the task handling is going to be completely changed by replacing state machines generated by a language that targets .NET with specially annotated methods, for which the runtime itself is going to implicitly emit state machines instead, allowing to pay the cost only at "true" suspend points. Reference: https://github.com/dotnet/runtimelab/blob/feature/async2-exp...
This is interesting! I learned quite a bit, thank you :)
One comment, though: it's not an apples-to-apples comparison! (I'm talking about your gist.) Specifically, in Elixir, you should create the state machine yourself (very easy in Elixir, thanks to pattern-matching definitions of functions) and schedule it across a few processes manually. You'd need ~100loc for this, and you'd get results much closer to C# and Rust.
What your comparison highlights is that the primitives behind the same async/await API can vary widely in their specifics. Goroutines and BEAM processes are much more lightweight than OS-level threads, but they are much more complex than just a compiled state machine, so they are heavier than coroutines. On the other hand, BEAM processes do the "preemptive userspace scheduling," which means they can be used for scheduling (an implementation of) coroutines. An interesting note about F# `async { }` - I thought this one also builds coroutines through CPS transform; it should be on par with Task regarding overhead.
It would be nice to expand the comparison: Java got virtual threads lately, I'm curious how that would fare - closer to C# or BEAM? Throw in Kotlin's coroutines (probably closer to C#) and Python's asyncio (or Lua native coroutines, but then you need to write the scheduler) for a good measure... We could see how concurrency primitives differ in overhead across more languages.
> One comment, though: it's not an apples-to-apples comparison! (I'm talking about your gist.) Specifically, in Elixir, you should create the state machine yourself (very easy in Elixir, thanks to pattern-matching definitions of functions) and schedule it across a few processes manually. You'd need ~100loc for this, and you'd get results much closer to C# and Rust.
Hmm, I can't see myself agreeing to this. Each state machine would have to be hand-rolled manually, or another abstraction for that would need to be introduced. It becomes a lot of manual work very quickly. Or it becomes a callback hell. Whichever you prefer, both are among reasons behind .NET pioneering async/await and its later adoption across other languages. I am unsure about scheduling cost too - significant amounts of application logic implemented in pure Erlang or Elixir will be subject to interpreter-tier performance which is unlikely to match the performance of JIT/AOT-compiled languages. BEAM has excellent allocation throughput thanks to per-process GCs, but not the raw speed of code execution.
The comparison was linked as an addendum to the discussion, didn't intend to make it the main focus :) But if you're interested in the context - the initial idea behind the comparison was that I got tired of hearing a colleague giving unjustified praise to Go. Especially when it comes to highly-granular interleaving of concurrent operations (for the lack of better term).
And while yes, BEAM processes and Goroutines belong to category of stackful coroutines with preemptive userspace scheduling, which has very different cost model when compared to how task continuations are handled by .NET's threadpool and task scheduler implementations, one of the most common ways of using `go func(...` in Golang is as if it was a task/future, where one or multiple goroutines are fired once to yield one or multiple results collected via channel/slice/some other collection + WaitGroup. In Go, the users do not have an alternative to this for highly granular ad-hoc concurrency save for re-implementing an async/await or fork/join-like API themselves that leverages a custom pool of goroutines and yield/join primitives.
So for all intents and purposes the numbers indicate the user experience when someone wants to spawn a lot of independently ran operations and then wait for them all to complete. This overhead is real, albeit pushing the task/goroutines/process count to 1 million is going to be very uncommon.
It's probably on the opposite end to what would idiomatic Erlang or Elixir code look like, but below is a popular pattern that relies on this:
This way, you can also map e.g. an array of elements into a sequence of tasks, which is then fed into Task.WhenAll which awaits all of them and returns an array of results. This is what the linked comparison measures. You can easily spawn thousands of requests awaited concurrently and the runtime will scale with this very well (also because Socket has efficient epoll/kqueue integration underneath).
> On the other hand, BEAM processes do the "preemptive userspace scheduling,"
Perhaps you meant "processes are scheduled preemptively"? My understanding is that processes are the main scheduling unit of BEAM, much like goroutines are in Go (and Go has an explicit mechanism for suspending them mid-execution).
> It would be nice to expand the comparison: Java got virtual threads lately, I'm curious how that would fare - closer to C# or BEAM? Throw in Kotlin's coroutines (probably closer to C#) and Python's asyncio (or Lua native coroutines, but then you need to write the scheduler) for a good measure... We could see how concurrency primitives differ in overhead across more languages.
The Java way of solving this will be using its Structured Concurrency which is currently in an incubator. I did not add Python because Python example was attempted by a colleague and it was impossibly slow, and there are multiple ways of going about it. If you care about performance, especially in multi-tasking, then using Python at all is a terrible idea. I mostly focused on the languages that put concurrency and parallelism at their forefront, and without making it a proper automated comparison with controlled environment I don't think the numbers will be very useful, it's also more tedious than it looks unfortunately.
> An interesting note about F# `async { }` - I thought this one also builds coroutines through CPS transform; it should be on par with Task regarding overhead.
F#'s Async (and async { } blocks that express it) is the original source of where async/await comes from. It is implemented via F#'s "Computation Expressions"[0] as a form of do-notation. It is only afterwards that C# got its own async/await implementation. For all intents and purposes both achieve the same goal and both implement a stackless coroutine. F#, however, keeps its core netstandard2.0-compatible and overall runtime-agnostic making it unable to use the APIs that are available for newer targets.
As measured, it appears to be much preferable to use task { } blocks instead, which have significantly lower heap size impact, and are similar in their performance to C#'s async methods. It's important to note that async { } and task { } blocks have different semantics w.r.t. hot vs cold start, multiple awaits/result caching and explicit vs implicit cancellation propagation.
[0]: https://learn.microsoft.com/en-us/dotnet/fsharp/language-ref...
Would the async experiment also imply better interop between .NET languages?
One issue I’ve heard from F# devs was the lack of ability to comfortably call C# asynchrony methods
> One issue I’ve heard from F# devs was the lack of ability to comfortably call C# asynchrony methods
This is no longer the case starting with F# 6: https://learn.microsoft.com/en-us/dotnet/fsharp/whats-new/fs... C# and F# can now transparently interoperate using the same main Task<T> type (in the past, this was done via community libraries for C#->F#, and F#->C# was always possible with |> Async.AwaitTask).
For advanced usage scenarios, there also exist https://github.com/fsprojects/FSharp.Control.TaskSeq and https://github.com/TheAngryByrd/IcedTasks
> Would the async experiment also imply better interop between .NET languages?
This is a good question. If a language already uses Task<T> and ValueTask<T>, the interop is already a solved problem (like with F#). However, this could allow to completely remove or at least significantly simplify the implementation for emitting state machines / resumable code / coroutines for asynchronous methods for any language that targets .NET 10+ (it will most likely get into 10 but is not guaranteed yet from what I've heard).
For example, a guest language could simply choose to emit calls to async methods with special modreq(?) attribute and the runtime would handle suspension, creating the closure for the state captured by asynchronous continuation and then subsequent resumption without any additional work of that guest language to be async-compatible. You could likely even write that by hand in IL with ILAsm. This is a significant improvement for a .NET as a hosting platform, and arguably makes it better than JVM at some high-level scenarios (because for low-level it already supports almost the entirety of what is expressible with C plus struct generics, and for FP it supports tail. prefix for call* opcodes for mandatory tailcalls required by recursive functions).
And for those in the camp of "function coloring considered harmful" (which I'm not a fan of but to each their own), you could even have a guest language that completely hides the fact that it performs such async calls underneath.
I'd rather use an external job scheduler and message queue, eg in kubernetes, rather than build it into the language runtime. With a message queue and horizontal pod autoscaling you can very quickly build an easy distributed workload with (at least to me) little effort in any language
Or you could do it with almost zero effort in a BEAM language. And sometimes setting up a Kubernetes cluster is not easy. Suppose you want to have one part that is hosted on a cloud service and another that is on-prem.
It's certainly possible with Kubernetes, but you'll be fighting the Kubernetes paradigm. Plus you're checking in to this Kubernetes middleman for part A to understand the availability of part B, for example.
Kubernetes is highly devex-optimized for stateless services that are kinda cattle. Not all use cases fit so neatly into that rubric.
I would not say it's zero effort in BEAM. I wasn't able to figure out how to get it to work at all (I spent a weekend trying to build a distributed auth system, failed). But I would be able to spin up kubernetes, rabbitmq and a non BEAM language in not very long.
> Suppose you want to have one part that is hosted on a cloud service and another that is on-prem.
Is this hard with kubernetes? Seems pretty simple to add node taints and set up the appropriate VPN to me.
BEAM is not magic. It will be doing a lot of the same stuff that any other scheduler or message queue will do, so I would personally rather choose the more composable solution rather than have to manage BEAM and be forced to use one family of languages only.
I'm thinking if your company is already running kubernetes, adding BEAM on top is probably more effort than adding a message queue (given my limited understanding of BEAM)
Never said it was zero effort. It's almost zero effort. But you have to have deep knowledge of BEAM primitives if you intend on doing something crazy (still almost zero effort). If you don't know what you're doing or don't have experience in the BEAM you will probably get it wrong. Sometimes libraries haven't gotten it right so composability might not be there and you might have to write a sidecar genserver and supervisor (~40 LOC)
The good news is that in Elixir, though it is not easy to write (still low effort), most of it is pretty straightforward to read, so a junior could probably successfully work through and understand a CR.
The other very nice thing about doing it all in the BEAM is that you can make distribution and availability guarantees part of your testing suite very easily, because it's even part of your localdev flow. That also means your devs will write that test, instead of assuming they understand how that Kubernetes operator works (they don't).
To add onto this, on the BEAM you actually write code rather than YAML, and it feels like it takes fewer lines to accomplish the same things.
What has C# got to do with... anything in this article?
It's not pragmatic if you have to import these basic libs:
```
import gleam/dict.{type Dict}
import gleam/int
import gleam/io
import gleam/result
import gleam/string
```
Why not?
What's wrong with a standard library the bits of which you want you choose to import?
I can understand having to import the "dirty" parts of the stdlib, like I/O, or the "heavy" parts, like Unicode or timezones. But why force someone to import every single type? Most functional languages have a prelude that covers the types every non-trivial program uses: booleans, numbers, strings, collections.
> But why force someone to import every single type?
That's not importing the types, it's importing a suite of functions related to the types.
https://hexdocs.pm/gleam_stdlib/gleam/int.html - gleam/int for example. The int type is already in the language and usable, this import brings in some specific functions that are related to operations on int.
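A small sketch of what that means in practice (assuming only gleam/int from the standard library): the Int type and its operators need no import, but the helper functions live in the module.
```
import gleam/int

pub fn describe(n: Int) -> String {
  // Literals, arithmetic and comparisons on Int work without any import.
  let doubled = n * 2
  // Converting an Int to a String is a function in gleam/int,
  // so that module has to be imported to call it.
  int.to_string(doubled)
}
```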
Why not methods of the type?
The answer to "why not methods" would be because it doesn't have methods.
And why not?
Because they selected to make a functional, not OO, language based largely on BEAM which was designed for Erlang, a functional, not OO, language. Why would you make an OO language if your goal is to make a functional language?
Dogma not appreciated. Personally don't care if a lang is functional or OO, they aren't exclusive categories. For example, several OO langs have added functional features. Don't see why this one couldn't use static methods for its immutable types. Wouldn't hurt anything, would it?
As mentioned in this thread, having to import libraries to operate on basic types is suboptimal to say the least.
> Wouldn't hurt anything, would it?
One of Gleam's design goals is to not have multiple ways to do the same thing, so having to pick between using method chains or pipelines would work against that.
> having to import libraries to operate on basic types is suboptimal to say the least.
The language server will do this for you in a Gleam project.
I could imagine a method call done in a pipeline, but would have to work out the details. Maybe self/this or omit the variable name? Not sure how doable.
Folks recommended tools to alleviate Java verbosity back in the day as well. But you still have to read it—which unfortunately happens 100x more than writing.
Gleam dodges the problem by not having methods at all.
Totally. Gleam prioritizes reading over all else, and generally it is praised for how unusually easy it is to understand code written in it.
It's not that it's wrong—at least I don't think so. It's that it's an example of a choice that is not pragmatic.
I suppose we should agree on what "pragmatic" even means, since it has become something of a cliché term in software engineering. To me, it roughly means "reflective of common and realistic use as opposed to possible or theoretical considerations".
So is having to import basic functionality a pragmatic design? I would argue no. Having to import basic functionality for integers, strings, and IO is not pragmatic in the sense that most realistic programs will use these things. As such, the vast majority of ordinary programs are burdened by extra steps that don't appear to net some other benefit.
Importing these very basic functionalities appeals to a more abstract or theoretical need for fine-grained control or minimalism. Maybe we don't want to use integers or strings in a certain code module. Maybe we want to compile Gleam to a microcontroller where the code needs to be spartan and freestanding.
These aren't pragmatic concerns in the context of the types of problems Gleam is designed to address.
To give a point of comparison, the Haskell prelude might be considered a pragmatic design choice, as can be seen from the article. It is a bundle of common or useful functionality that one expects to use in a majority of ordinary Haskell programs. One doesn't need to "import" the prelude; it's just there.
I don't personally find Gleam's design choice a bad one, and while GP was a bit flippant, I do agree that it is not an example of a pragmatic design choice.
Pragmatism is more than just giving people the quickest way to complete their task. There are other axes to consider, such as the simplicity of the compiler and the uniformity of the language experience. These contribute to the maintainability of the language itself and your own code also.
When the rule is "if you need a module, you must import it", and that applies equally to standard library modules, hex packages or your own internal modules, there are fewer mental overheads. The procedure is always the same. Incidentally, this also means that the Gleam language server can automatically add or remove import statements, which it now does [0].
Personally, I also find it pleasing that I can look at the top of a file and say "oh, this module appears to be doing some stuff with floating point math and strings". It often gives me an overview of what the module might be doing before I begin reading the detail.
[0] https://gleam.run/news/convenient-code-actions/#missing-impo...
I guess I don't entirely agree, but I do wonder why each import has to include 'gleam' in the path. Why can't it assume that the default path is 'gleam' and import libraries relative to that path. Like `import string` instead of having to do `import gleam/string`?
It's the namespace that belongs to the core team. It couldn't be just `string` etc as that would collide with existing Erlang modules.
Other Gleam libraries will use other namespaces.
The syntax doesn't look like it supports partial application? Big no-no. Also, no compilation to native code. Another big no-no.
https://tour.gleam.run/functions/function-captures/ https://tour.gleam.run/functions/pipelines/
These took me basically no time at all to find. Are you looking for something else for partial application?
Roc is a similar functional language that doesn't automatically curry functions and doesn't have partial application, the Roc FAQ has a few reasons: https://www.roc-lang.org/faq.html#curried-functions
EDIT: quick note, this is a tangent; Gleam does support partial application with `_` and it works with pipelines as well.
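A minimal sketch of those captures in a pipeline (assuming gleam/int and gleam/list from the standard library): `int.add(1, _)` is shorthand for an anonymous function with the `_` as its single argument.
```
import gleam/int
import gleam/io
import gleam/list

pub fn main() {
  // A capture: equivalent to fn(x) { int.add(1, x) }.
  let increment = int.add(1, _)

  [1, 2, 3]
  |> list.map(increment)
  // Captures can also be used inline in a pipeline.
  |> list.map(int.multiply(10, _))
  |> io.debug
  // -> [20, 30, 40]
}
```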
> This is not how it works in curried languages, however. In curried languages with a |> operator, the first expression still returns "Hello, World!" but the second one returns "World!Hello, " instead. This can be an unpleasant surprise for beginners, but even experienced users commonly find that this behavior is less useful than having both of these expressions evaluate to the same thing.
The upside (on the curried side) is that you can define `|>` as a normal function (even without lazy semantics, as in OCaml.) How much of an "upside" this is will vary, but note that this generalizes to many other operators that can be added. The rest is a matter of API design, i.e., the order of arguments and the use of named arguments (and/or other syntax sugar.) For example, in the case of the post's example:
You can get the "beginner friendly" semantics with just a little change to `Str.concat` (assuming named args support, using OCaml syntax):
In non-curried languages, this has to be a macro (or it needs to be built-in). If you already have macros in your language - that's good: you can easily make `|>` do things like `x |> func(arg1, _, arg2)` and more. However, if you make this a special case in the language, it will be hard to extend and impossible to generalize. So personally, I'd grade the options in order of power and convenience:
There's also a special category of languages where pipelines are primitives (shells, jq), but that is outside the scope of this comment, since they are more than syntactic sugar for function application :)
I don't even think curried functions by default is a good idea, but that article really made me support them. Richard Feldman is usually so reasonable, what happened? That's the worst argument I've seen in a while.
You can get the "beginner friendly" semantics with just a little change to the `Str.concat` (assuming named args support, using OCaml syntax): In non-curried languages, this has to be a macro (or it needs to be built-in). If you already have macros in your language - that's good: you can easily make `|>` do things like `x |> func(arg1, _, arg2)` and more. However, if you make this a special case in the language, it will be hard to extend and impossible to generalize. So personally, I'd grade the options in order of power and convenience: There's also a special category of languages where pipelines are primitives (shells, jq), but that is outside the scope of this comment, since they are more than syntactic sugar for function application :)I don't even think curried functions by default is a good idea, but that article really made me support them. Richard Feldman is usually so reasonable, what happened? That's the worst argument I've seen in a while.