Haskell has significantly more powerful abstraction capabilities than your average async/await language though, making coloring less of a problem.
Also, in Haskell you can only perform IO (aside from unsafe IO, I guess) inside the IO monad, potentially making the abstraction worthwhile; this is not the case in many other languages.
Async programming is great. Coroutines are a powerful tool, both for expressing your ideas more clearly and for improving performance in IO-heavy systems.
async/await syntax may not be the best design for async programming, though. Consider this example in Julia:
function foo(x)
    @async print(x)  # some IO function
end

function bar(x)
    @sync foo(x)
end
`foo()` returns an asynchronous `Task`, `bar()` awaits this task, and you can invoke `bar()` from whatever context you want. Now look at the Python version with async/await keywords:
async def foo(x):
    print(x)  # some IO function

def bar(x):
    await foo(x)
    # SyntaxError: 'await' outside async function
Oops, we can't make `bar()` synchronous; it MUST be `async` now, as well as all functions that invoke `bar()`. This is what is meant by "infectious" behavior.
Maybe we can wrap it into `asyncio.run()` then and stop the async avalanche?
def bar(x):
    asyncio.run(foo(x))

bar(5)
Yes, it works in a synchronous context. But the path to an asynchronous context is now closed for us:
async def baz(x):
    bar(x)

await baz(5)
# RuntimeError: asyncio.run() cannot be called from a running event loop
So in practice, whenever you change one of your functions to `async`, you have to change all its callers up the stack to also be `async`. And it hurts a lot.
Can we have asynchronous programming in Python without async/await? Well, prior to Python 3.5 we used generators, so it looks like at least technically it's possible.
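For the curious, here's a toy sketch of that generator style (made-up names; a deliberately dumb round-robin scheduler stands in for a real event loop):

from collections import deque

def fetch(url):
    yield                              # pretend we're waiting for IO here
    return f"response from {url}"      # PEP 380: generators may return values

def handler(url):
    body = yield from fetch(url)       # "await" spelled as `yield from`
    print(body)

def run(*gens):
    ready = deque(gens)
    while ready:
        gen = ready.popleft()
        try:
            next(gen)                  # resume until the next suspension point
            ready.append(gen)          # still running: reschedule it
        except StopIteration:
            pass                       # this coroutine finished

run(handler("a.example"), handler("b.example"))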
The tradeoff in Go as I understand it is that you can't know whether the runtime will opt for single threaded concurrency or parallelism? That seems like it could be a headache, but there will always be a headache somewhere in concurrent programming. Perhaps the Go headache is smaller than the C# one in this case.
Doing blocking IO together with UI code is pretty bad in general though. Disks are certainly not quick enough to have File.Delete(...) be blocking unless you know the disk isn't on a network server which is aboard a satellite leaving the solar system or whatever edge case you'll invariably run into.
> Doing blocking IO together with UI code is pretty bad in general though.
Non-blocking IO without multithreading doesn't require async/await though, all operating systems have had non-blocking IO functions for decades, they just never made it into language stdlibs.
Yes, and blocking is usually faster. It's entirely correct for code to be blocking by default, because it shouldn't be assuming that there is any interactivity going on. If I want to make a console utility that scans some files, then I don't need to worry about whether a UI is being repainted. So I completely agree with treating asynchronous as the odd case and blocking as the default, which is not what JS does but is what e.g. .NET IO does. If you want to have a responsive UI together with IO, you can often just combine a processing thread that blocks with a UI thread that doesn't. The case where you'd want proper async IO is when you want to wait for 100 IO tasks, each with unknown duration, where each is really async at the OS level anyway but your IO API doesn't expose that. Doing 100 threads isn't really a good option.
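For reference, that last case is about as short as it sounds in asyncio (asyncio.sleep stands in for the real IO, and the names are made up):

import asyncio, random

async def io_task(i):
    await asyncio.sleep(random.random())  # unknown duration
    return i

async def main():
    tasks = [asyncio.create_task(io_task(i)) for i in range(100)]
    for fut in asyncio.as_completed(tasks):  # completion order, not submission order
        print("done:", await fut)

asyncio.run(main())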
> What will the state of your UI be when that happens?
If you can't do anything else until you know whether it was a success or failure, then you ensure that. E.g. you disable every single button that allows the user to do something else before the previous operation completes.
Basically the theory is usually that you can allow the UI to "read" the program state while a "write" operation is still in flight. Typically this results in the user being able to for example scroll a document so it re-renders correctly etc. After the in-flight operation succeeds/fails, you can show the user the message if required, then enable new operations to happen. But the UI never stopped pumping messages so it was always responsive at least.
> you disable every single button that allows the user to do something else before the previous operation completes.
Wow you mean the whole program becomes unresponsive? Crazy!
To address your main point, yes, scrolling, hover, etc can continue to work. But now you genuinely have two things your program is doing at once, and these must be coordinated.
A gui framework typically handles this, with a separate thread (or separate OS process). So your thread that responds to events can block while the render/refresh continues doing its thing.
With this design the problem goes away. Instead of writing code that disables the ui, issues a callback, waits to respond, you just literally write:
if (!file.delete()) {
    showError()
}
This is the kind of code you can read, and put breakpoints on.
> Wow you mean the whole program becomes unresponsive? Crazy!
Yes a normal single threaded GUI normally becomes unresponsive if the user invokes blocking IO on the main (UI) thread. By "unresponsive" in this context I mean "does not process messages the message queue".
That the user can't e.g. perform a certain operation is in this context not the same kind of "unresponsive". It responds (it could even tell him that he can't do it, or why he can't). It would be unresponsive if it gave the appearance that he could do something, but when he tries to, the UI doesn't respond and start the operation he requests.
> Instead of writing code that disables the ui, issues a callback, waits to respond, you just literally write: if (!file.delete()) { showError() }
That's typically how I'd write code regardless of whether it's explicitly async. "Disabling everything" usually isn't necessary; what you disable is of course the operations that are logically forbidden to perform until the first operation completes. In a perfect world you don't have those. But often, you do.
I had lots of pending requests; the goal was to have as many as possible in flight (since the whole job took about 30 min) without freezing the UI.
When the callback happens there is work to do. The pattern is to do this work immediately.
Then there were as many as [not] possible bits of work to do simultaneously. Since the amount of work per job is unpredictable, deliberately making the number of simultaneous jobs unpredictable is insanity.
Synchronously I can do [say] 50 requests per second, parse 55, and have a buffer.
The solution to the riddle is not to limit the number of requests by 90% and extend the task to take 5 hours while not using 90% of the resources. Then UI freezes only become less frequent; they don't go away.
Instead I store all data from all callbacks in an array along with a description, and use a setInterval to parse a configurable number of responses per second while adjusting the new requests to the size of the backlog.
The only original insight of this blog post is that it's nice to be able to use `sleep()` without async/await in Go, which is also true in Rust, and Rust has async.
It does not stem from async/await that JavaScript doesn't have sleep().
If I want my code to actually halt, I can either make myself red and use await on your red function, or I resign myself to put everything after the sleep in a .then()
I just found out you can sync sleep in JavaScript. I’m using it to implement multi-process sync file locks that need to interoperate with a large non-async framework (eslint).
I misread your comment. It wouldn't make sense to have a non-async sleep in a browser, as it is an event-loop based primarily single-threaded JavaScript runtime.
To me, the title seems a bit extreme, but I think of it as really just synchronous programming.
True threaded programming is difficult. I find modern closure syntax, where closures can access parent contexts, to be the most effective way to write concurrent stuff.
In either case, you still need to worry about things like thread contention/locks and whatnot.
Those of us, of a certain age, can remember “refCon” (reference context) parameters. I haven’t had to use one of those, in ages.
I think it’s nice for server stuff but on a desktop app it’s a pain to deal with. MS definitely went overboard with a lot of APIs being async only. A lot of people don’t seem to understand that async/await is still multithreading so in a desktop app they tend to mess up. Not sure how it’s in mobile.
Parent didn't say that "async/await" is based on multithreading, they said "it is multithreading", which is definitely correct. It is a form of cooperative multithreading with the statically enforced restriction that you can only yield when the thread stack contains a single stack frame.
It is true that an async/await task doesn't map 1:1 to an OS thread, but that's neither here nor there.
Async is orthogonal to multithreading. The async runtime's threading model is an implementation detail. e.g. Node is single threaded. In Rust the Tokio async runtime has a configurable threading model.
The article is mostly focused on Dart and C# - maybe you're referring to one of those specific implementations
Again, thread does not imply OS thread. That's only one possible instance of threading. A general thread of execution is just a sequence of continuations executed one after another. This is exactly what an async/await task is.
The fact that multiple lightweight threads map onto a heavier-weight OS thread, like in Node or in Tokio or whatever, is neither important nor novel. M:N threading has been a thing for a very long time.
A specialized async/await runtime is a bit different from the typical M:N runtime (which usually tries to transparently mimic the preemptive POSIX thread model), but conceptually there isn't much difference.
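To make the "sequence of continuations" point concrete: a Python coroutine can be driven entirely by hand, with no event loop or OS thread machinery involved (Suspend is a made-up awaitable standing in for a real IO wait):

class Suspend:
    # A minimal awaitable: yields once, handing control back to whoever
    # is driving the coroutine (normally that's the event loop).
    def __await__(self):
        yield "suspended here"

async def task():
    print("step 1")
    await Suspend()
    print("step 2")
    await Suspend()
    print("step 3")

coro = task()
try:
    while True:
        coro.send(None)   # run the next continuation, up to the next suspension
except StopIteration:
    pass                  # the "thread" ran out of continuations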
> A general thread of execution is just a sequence of continuations executed one after another.
you're using a definition of "thread" that is quite abstract and not at all what is generally meant by most people when discussing these things
> multiple lightweight threads map onto a heavier-weight thread like in Node
describing Node as implementing M:N threading, while correct in an abstract sense, is not really useful or again, how most people would describe it.
> A specialized async/await runtime is a bit different from the typical M:N runtime... but conceptually there isn't much difference.
sure, conceptually, but again, you're using definitions in a very idiosyncratic and abstract manner. Which is your right, but it's not very persuasive and it's out of touch with how most people talk about these things.
Well, technically Node is N:1. But concretely, what significant difference do you see between an async/await runtime and a threaded runtime, other than the former requiring yield points to be syntactically marked in code?
I find the developer experience is quite different. Which is actually quite important. In the async runtimes I am familiar with I find managing shared resources and locking much easier.
But maybe I'm missing something here. Do you know of an async runtime and a threaded runtime that do not have significant differences?
Take boost.asio: it is a generic event loop (that can run on one or more OS threads): on top of asio you can run old school manual continuation passing code, promise/future based code, async code using C++20 coroutines (or a macro hack), or more classically multithreaded code using boost.context. You can write the same logical code in any style and the transformation is fairly mechanical.
async/await allows multiple stacks to be active at once within a single thread. It's not a form of multi-threading, which implies the presence of a thread scheduler.
Worse than null-terminated strings in C? Worse than null being returned on failure of dynamic memory allocation? Worse than nullability of columns in SQL, which Tony Hoare (the author) called "my billion-dollar mistake"? Worse than the Knight Capital update bug that caused a $440 million loss in 45 minutes, meaning Knight went out of business and was taken over? Worse than the innovative design of Therac-25 that caused deaths and serious injuries by giving patients 100x the intended doses of radiation? Worse than the Fujitsu Horizon system that led to deaths due to stress and suicide, innocent people being put in jail, etc.?
I could go on but you get my point.
What a staggeringly stupid headline in service of clickbait.
The billion dollar mistake refers to pointers being nullable. Old school type systems not supporting the equivalent of Maybe<T>, basically. It's not specific to columns in sql.
Async/await is a relatively recent development and the author is free to argue it's a bad paradigm that makes simple things way harder than they need to be. It wouldn't be the first time our industry jumped on a very silly bandwagon! Calling something "the worst" is fine. It's not a literal claim and it doesn't deserve your scorn.
> Worse than the Fujitsu Horizon system that lead to deaths due to stress and suicide, innocent people being put in jail etc...?
Horizon had really bad bugs, yes - but it wasn't the software that caused the cover-up or the miscarriages of justice: it was the management of (then-)privatized Post Office Ltd that decided they could not afford the risk of losing big government contracts if any word got out that the system was making fundamental ledger errors. Most of everything else can be blamed on the sheer separation between the devs and the actual end-users: these problems could have been caught and fixed if the Horizon tech-support people weren't 100% subordinate to upper management (if I learned that our support people were using phone scripts as bad as the PO's, I'd threaten to resign).
(though I'd argue the real fundamental problem here wasn't technical, nor managerial, but a simple consequence of the UK's entrenched class-system: subpostmasters generally don't read Classics at Oxbridge, which means in the eyes of the establishment that they're probably the ones at fault)
I'm not mad at the ICL/Fujitsu devs - they were severely understaffed (I gather it was literally just 4 people?), but I am disappointed (and in a state of disbelief) that they evidently didn't have any devs competent enough to know how to design a transactionally-safe retail POS system in the first place (and P.O.S. is the word...): hiring and retaining good people would have avoided this episode entirely (though no doubt something else would have led to a similar management scandal eventually - it's in the nature of almost all large UK businesses at this point).
The net effect of these "syntactic sugar" diatribes has been many lifetimes of wasted developer effort in migrating between programming languages and frameworks, and countless products/investment dollars fizzling into oblivion because the developers couldn't stop worrying about how pretty the code looks.
If your program starts spanning multiple machines or awaits user input CPS becomes a topic. At first it's scary, after a while you just assume everything is async.
I think the idea behind C# async/await was that you could use multiple threads without having to worry about the details. It seemed redundant to me to have awaits in a web server. That runs multiple threads already, so why does my code have to be async now? I hated it too.
It's not about syntax. There is a huge difference in implementation and semantics between stackful coroutines (which go uses) and stackless coroutines (which most languages with async/await use).
For all practical purposes goroutines behave as separate threads with blocking calls. The fact that they are multiplexed on a few system threads is an implementation detail.
Otherwise you could say that using system threads directly is also asynchronous programming. After all, your thread gets suspended on system calls (including synchronization primitives) and is resumed upon their completion.
I don't think there is a huge difference: you can implement stackful coroutines via heap allocated frames a-la scheme that look a lot like separately suspended stackless coroutines. Conversely you can combine chains of stackless coroutines waiting on each other in a single object (I think rust is for example capable of this in principle).
Semantically the biggest difference is that stackless coroutines typically require yield points to be marked syntactically in code.
Each thread is tied to an OS thread, which is tied to a CPU core/hyper-thread. You get like ~6000 threads on a modern OS and CPU.
Your program needs one million threads that sleep for 2 seconds, read some data and then finish. Guess what? Your execution is going to take hours, or you'll get some kind of exception saying you've run out of threads, because after the first ~6000 threads are taken, your OS can no longer give threads to anything else.
With green threads, the threads are fake, aka virtual, and controlled by the language's runtime, be it the JVM, the CLR, Go's runtime, etc. The runtime is usually smart enough to recognize sleeps and, while waiting for something, schedule another thread in its place.[1] So now all one million threads start near instantly and work almost all in parallel.
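The same experiment sketched on Python's event loop (same idea as green threads; expect a few GB of RAM for a million task objects, but it finishes in roughly the sleep time rather than hours):

import asyncio

async def job(i):
    await asyncio.sleep(2)   # parked by the runtime; no OS thread is held
    return i                 # "read some data and then finish"

async def main():
    results = await asyncio.gather(*(job(i) for i in range(1_000_000)))
    print(len(results))

asyncio.run(main())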
Languages with async/await do usually give you a way to bridge between async and sync code, but generally through a function (like this one [1] in Python) rather than allowing you to await from sync code. The problem is that sync code isn't being scheduled by the event loop. You need to use a function that's aware of the event loop's internals and can schedule your coroutine, and use a rendezvous or another synchronization primitive to wake up your sync code when the coroutine is done.
One could imagine a language where an await from sync code was syntactic sugar for such a function call, but generally the async/await syntax serves as a way to deliberately segregate your sync and async code. So that would defeat the purpose. At that point it might make more sense to design the language like Go and make everything async. (Preemptive runtimes like Java's virtual threads are another option.)
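In asyncio's case the bridge is presumably asyncio.run_coroutine_threadsafe (the footnote above is elided); a sketch of the rendezvous being described:

import asyncio, threading

async def coro():
    await asyncio.sleep(0.1)
    return 42

def sync_caller(loop):
    # Sync code can't await, but it can hand the coroutine to the loop's
    # thread and block on a concurrent.futures.Future for the result.
    fut = asyncio.run_coroutine_threadsafe(coro(), loop)
    print(fut.result())   # blocks this thread only; the loop keeps running

loop = asyncio.new_event_loop()
threading.Thread(target=loop.run_forever, daemon=True).start()
sync_caller(loop)
loop.call_soon_threadsafe(loop.stop)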
There was a good blog post I want to link here where the author argued that async/await is making explicit the fundamental property of some functions being expensive, as a counterpoint to TFA, but I haven't been able to find it again. But they made the case that expensive operations are fundamentally viral, and async/await was only making this explicit and wasn't unjustified overhead as some argue.
No, it can't. These two are semantically not equivalent, because in the async version the caller of f1() is resumed only once fetch has completed. In your callback version it will be resumed immediately.
Also think how this should be translated:
async function f1() {
    const x = await fetch('something');
    return x + 1;
}
Only in poorly designed code do you have problems like this where async/await goes viral. It can be avoided by splitting libraries into an IO part that uses async/await and a protocol part that is sans-io. Using async code to access data that is already in the application's memory is inefficient; it should be implemented synchronously with regular functions instead.
This seems a no true Scotsman argument. In actual real applications it is not always possible to split IO from non I/O parts and the virality of async prevents composition and encapsulation without significant refactoring.
> In actual real applications it is not always possible
This depends on what your expectations are. IO operations must suspend to wait for data reads and writes, therefore it is not possible to avoid async/await. In other cases you might have multiple tasks depending on a specific IO operation, for example one connection to a database that is used by multiple HTTP sessions; here it is also not possible to avoid async/await, because those sessions are bound to an IO operation.
The real problem, however, comes when tasks that can complete synchronously are implemented with an asynchronous interface, for example an async-only socket-read API.
This is a poor design, because a socket read can pull multiple messages from the kernel buffers into user space, and there should be a way to consume them without await. Many libraries however don't watch for this problem, and that results in the everything-is-async madness.
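A sketch of that split with a made-up line-based protocol: the only await sits at the socket edge, and draining already-buffered messages is plain synchronous code:

import asyncio

class LineProtocol:
    # Sans-io core: pure buffering and parsing, no awaits anywhere.
    def __init__(self):
        self._buf = b""

    def feed(self, data: bytes) -> list:
        self._buf += data
        *lines, self._buf = self._buf.split(b"\n")
        return lines          # every complete message, consumed synchronously

async def read_messages(reader):   # reader: e.g. an asyncio.StreamReader
    proto = LineProtocol()
    while data := await reader.read(4096):   # the one await, at the IO edge
        for msg in proto.feed(data):         # no await per buffered message
            yield msg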
"the syntax for kicking off dozens of IO requests and collecting all the results is trivial."
Please show me this trivial code, assuming I want to process 12 requests in parallel at most (and always processing 12 at the same time until there are none left to process).
I'd argue that it is a lot easier than doing it with threads!
Promises are a primitive, and the async and await keywords in JavaScript are just syntactic sugar around promises (as they are around similar constructs in other languages). A promise is just a long-running task that will return a result eventually. Being able to grab a promise as an object and pass it around is super useful at times, and it is something I end up using a lot in my JS/TS code.
Because it is a language primitive that is also expressed in the type system, more complex systems can safely be built up around it, in the same way that it is easier to build safe(r) complex systems up around threads in languages that have threads as a primitive. (Rust being a great example here of bringing threads into the language, Java being another early example, though their early attempts were not perfect since we've learned a lot since 1995!)
Async/await and promises are a great example of a technology that makes doing easy stuff easy, and makes hard stuff possible.
tl;dr: people need to stop complaining that other multitasking/threading paradigms look different than their preferred one; each has plusses and minuses, and one isn't "better" than the others, they just serve different purposes.
In C# you'd use a Channel for this, which is pretty easy to use. But of course that is built on top of async/await, with those alone it is far from trivial to implement your specific case.
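For comparison, the same 12-in-flight pattern is also short in Python's asyncio (fetch is a made-up stand-in for the real request):

import asyncio

async def fetch(url):
    await asyncio.sleep(1)        # stand-in for the real request
    return url

async def main(urls):
    queue = asyncio.Queue()
    for u in urls:
        queue.put_nowait(u)
    results = []

    async def worker():
        while not queue.empty():
            results.append(await fetch(queue.get_nowait()))

    # exactly 12 requests in flight until the queue runs dry
    await asyncio.gather(*(worker() for _ in range(12)))
    return results

print(len(asyncio.run(main([f"url-{i}" for i in range(100)]))))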
Programmers who are not aware of the pitfalls will use the sync version and build apps that have terrible ux, so it’s better to not give them the option unless there’s a really good reason to, so what’s the use case here?
Yeah, that's why in nodejs I make functions async unless I know really well that they won't need await. Though that seems bad, it's not the worst, given how good async/await actually is.
Unpopular opinion: it makes me want to have a setting where all functions are async by default, and all function calls are awaited by default, with `nowait` and `sync` as the opposite.
I used to write a lot of asynchronous servers in C up until a couple of decades ago. I found it easy. Most people didn't. We have better ways of doing things today.
If it's the same CSP I'm thinking of, then yes, but it's only simpler because it relies on enough people on the team having a good grasp of these parts of CS. Based on my own experiences in uni I can tell that courses on formal-methods and the like are probably the least-popular: being taken by a tiny minority of students - it follows then that only a tiny minority of software-writing professionals will have the requisite level of understanding to apply these approaches to their day-job - and those that do are likely already employed within an organization that relies on these formal-methods (e.g. safety-critical avionics, Wall St. quants, etc) which in-turn will help attracts other highly-capable people.
...now contrast those imagined employers with the rest of the software industry: unsexy line-of-business application developers, SaaS dev contract shops, the places where people who couldn't get jobs at Google or OpenAI might end-up working; and also consider the larger-still community of non-professional software writers (people doing VBA in Excel to anyone who simply wants to learn how to make an interactive website for themselves).
CSP is not going to help this latter group. And, in fact, it's this very latter group which drives programming-language design because that market is 100x the size of the formal-methods-ivory-tower people (who are probably using gratis open-source tooling anyway).
Compared to CSP, async/await is something that you can demonstrate to someone with almost zero experience writing software, who probably can't even afford the time to try to understand how it works, but the mechanics of putting the `await` keyword in the right place can be taught in a few hours and suit 95% of their needs.
-----
If languages like C# or JavaScript were designed only to suit people like you or me then the ecosystems around those languages wouldn't be anywhere near as big, nor those languages anywhere near as decently supported and maintained. If the "price" for that is putting-up with a handful of language-warts then I'm happy to make that trade. I've still got Z3 for everything else :)
Coroutines via user-mode stack-switching worked just fine for decades. Async/await is really only needed on limited runtime environments like Javascript or WASM where stack switching isn't an option (and at the cost that the compiler needs to turn sequential code into a switch-case state machine under the hood, which then introduces all sorts of problems, from 'function coloring' to requiring special support in debuggers).
Everything above machine code is about hiding complexity. That per se is not an issue. Programming in CPS form is madness, async/await is only slightly better than that.
What is your program going to work on while it waits for the task? Usually nothing. You need to read some data before analyzing it, etc.
While you wait the OS can switch to other threads.
The only question here is whether you want to return that time to the operating system or to your language runtime.
> they’re just hiding the complexity
async/await feels helpful because you can write normal code again! If else, for loops, etc.
Oh wait that’s also what the UNIX operating system does. It abstracts away all this blocking and context switching so you can write normal code.
> If adding async to a function is too much
The author's point is a good one. You essentially have two languages and two classes of functions: the regular version and the async version. Lots of duplication, and a tendency for everything to become async.
while we’re waiting for the OS to ‘save us’ from async/await, let’s not ignore the fact that writing code that doesn’t hang, crash, or block the main thread is a skill for a reason.
Hang implies there is something you are not responding to.
Let me ask again. What are you imagining your main thread should be doing while it is waiting for essential data?
Responding to new inputs means changing state. But your program is already in another state. Two separate program execution states are best described by two separate threads.
> crash
Crash early, crash often. If invariants can’t be maintained, don’t pretend otherwise.
"Ah, the ‘crash early, crash often’ mantra — truly inspiring. I guess when your program explodes because it can’t handle concurrency, we should just sit back, crack open a cold one, and toast to ‘maintaining invariants.’
And sure, let’s talk about state. If handling multiple states at once is ‘extremely difficult,’ then yes, async/await might not be the best for anyone who panics when their program has to juggle more than one thing. But that's kind of the point.
Async/await is like giving you a leash for your concurrency so you don’t need to wrangle state machines on a pogo stick. But hey, if you’re happier living on the edge of crashville because ‘the OS scheduler will save me,’ who am I to interrupt your Zen?”
https://archive.is/bDczv
The article complains that async/await 'infects' all the code that touches it and forces the callers to use async/await too.
But isn't the same true with Go channels? If you want to asynchronously interact with a channel (that is, without blocking the main thread), you have to do it in a goroutine, and the caller has to do the same, and so on?
Promises behave similarly - you must wrap your code in promises all the way.
These constructs are alternatives to the good old callbacks, which force you to write your code inside callbacks, thus 'infecting' everything and leading to callback hell.
This 'cascade infection' effect is due to the inherent nature of things happening asynchronously, which contradicts the synchronous program flow inside a thread, so when the async event terminates, the program has to jump to a handler in order to process the results.
In the end it's a matter of taste, imo.
It's the same for many things. Java throws, const-correctness in C++, etc, etc.
As some other comment said, it's like the Haskell IO monad and that's OK, because it lets you isolate and be aware of the implications of that code.
Haskell has <$> and the infrastructure of HKTs to stop this infectious propagation of IO; other languages do not, and their async/await colors do not isolate side-effectful actions from the rest of the pure parts of your codebase.
https://hackage.haskell.org/package/base-4.20.0.1/docs/Prelu...
> other languages do not
Which ones? I think there's always some way to isolate, even if ugly.
> Which ones? I think there's always some way to isolate, even if ugly.
Almost all of them? You need referential transparency (via laziness) too, otherwise your attempt at isolation will break at the first binding expression in a local scope for future processing elsewhere:
...
let arg = processData <$> ioAction
in ...
Do you want to wrap-and-call-later all of these cases into lambdas by hand? :)
> the article complains that async/await 'infects' all the code that touches it and forces the callers to use async/await too. But isn't the same true with go channels ?
I'm not familiar with go, but I don't think so: stackful coroutines abstract better than the stackless kind.
Yep in some way the programmer needs to express that the routine will have to _continue_ when IO completes (which will necessarily go up to the top of the stack in some way, unless you don't care about the result of the IO operation), or the runtime needs to block until IO completes.
In a big Go application, most of the time you're not writing code that runs in the "main" thread. If you're writing a UI, you put the UI code in its own goroutine, and pass messages to it. If you're writing a server, most of your code will be in your endpoint handlers, which run as their own goroutines.
In >95% of your code, it's fine to just write `foo_val <- foo_chan`, without spawning a goroutine. From a pragmatic standpoint, it's not really different from `foo_val = expensive_foo_calculation()`. This block of code is waiting for something else to finish, and the Go runtime is smart enough to decide whether or not this thread should be parked until that result is ready.
And, as a bonus, `foo_val = expensive_foo_calculation()` looks the same, even if the implementation launches 10 cpu-bound goroutines and reads from 30 files to do the work.
That was the most useless async/await post I've seen. The only useful bit is that the Go implementation isn't similarly painful.
First of all, it gets the function color problem backwards. Async/await forces 'coloring' of execution to be async. But the desired number of colors is one, which is what you have without async/await.
The way Go solves this is by making all 'threads'/goroutines async context without saying so and there's no way to make them not that. Effectively they all started gray-purple or whatever color that was to begin with. It would be as if all the Rust developers went all in and said "Everyone let's only do async."
The problem isn't promises/futures etc; they work fine, as can be seen in Java with their CompletionStage. You can even use Executors with thread pools without function-coloring. I never understood the need or desire for async/await keywords (and the corresponding 'rest-of-program' transformation that happens under the hood). That Rust adopted it is the main reason I won't consider entering the ecosystem unless it somehow gets sorted out, e.g. with two library ecosystems, basically bifurcating the language.
You need or want async/await in any program that has UI on the main thread, because you do not have the luxury of blocking the main thread if it’s running UI. It’s fine to have blocking code on background threads, but blocking the main thread will cause the dreaded hour-glass or beachball cursor and render your app totally unresponsive. For GUI programming you at least want async/await or an equivalent to model UI event handlers that dispatch work to the background, and then can resume work on the main thread to update UI when the background task is complete.
Swift for example is transitioning from using a lot of callbacks and manual thread dispatch everywhere to using async/await and while the infection aspect is annoying from time to time, needing to deal with continuations/callbacks manually tends to be just as infectious, which was the old way. Even worse is manually wiring up message passing infrastructure inside the app. I wonder how Go UI libraries deal with this? I wrote some X11 apps in Go back in the day and had a bad time whenever I blocked the main thread waiting for a response from a background worker but maybe today there’s better abstractions in the native UI libraries.
The other big downside to threading is the mental overhead of needing to consider the memory model, worry about parallel memory access of objects causing problems, and needing to review code with a microscope in case someone is introducing memory access violations or the even worse deadlock/contention cases. Some programs really benefit from large shared data structures and those are fraught to share across threads and I think that’s where multi-threading gets its somewhat deserved reputation for being annoying.
Go is wonderful for the things its built-in tools and semantics are well-suited to handle: request/response (where the UI lives in some other process that talks to Go; Go is always the “background” thread pool) or run-till-completed jobs that just print logs as their UI. It is kind of horrible for other stuff. I personally found the channel management and concurrency situation inside the Kubernetes source code really hard to follow, since that’s the kind of program that’s all about long-lived shared data structures & systems communicating with each other; it would probably be more understandable in Erlang or something.
It is quite annoying for older codebases in C# that have a lot of existing sync code, but for new code it doesn't matter all that much. You use async methods for IO and DB access and in many applications this means that most of the methods will be async.
I only played around with it a bit a long time ago, but I didn't find Go concurrency as simple and easy as it is often sold. It felt very low level, which is fine for the design goals of Go, but also meant that I was still left with doing some harder parts myself.
Concurrency in C# with async/await is pretty easy for the straightforward cases that make up most of a typical application. You do have to keep to a few rules, and it certainly has very dangerous footguns, but those are minimized if you consistently use async methods instead of sync.
The only real landmine for C# is the default thread synchronization context. If it were inverted, the language would be much better off. I think they also made a mistake in removing the OOTB method for throwing away the sync context, but I guess there are many third-party libs that provide it.
.NET Core web apps have no synchronization context anymore, which is exactly what you're asking for if I understand you correctly. There is no need to call ConfigureAwait there.
I think this is different for GUI apps, but I have no experience with that.
I think Medium is worse
Ha, I have to agree with you. Fluff all the way down.
Here are more arguments against async/await: https://www.youtube.com/watch?v=449j7oKQVkc
Ron Pressler always was an advocate for blocking code and even joined Oracle to add virtual threads to the JVM, thus invalidating the performance argument of the async/await/non-blocking crowd.
I really wish young developers would be taught about the actor model and communicating sequential processes before falling for the false promises of async/await-land.
And I wish JavaScript runtimes had a way of expressing continuations /blocking threads on their eventloop.
That performance argument never was a real issue for most applications anyway. You rarely have context switching as a bottleneck in a run-of-the-mill web app; usually it's suboptimal queries or accidentally-quadratic naive algos. I think even memory access patterns are more of an issue if you are compute-heavy. And that's assuming "performance" matters at all for the application's purpose. Not everything is a high-throughput load balancer.
And somehow all these programmers who never care about performance because “computers are fast” become micro-optimizers, willing to restructure every line of code to save a few KB of RAM and rare handful of ms for a context switch.
Of all things, python went with async instead of gevent!
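For anyone who hasn't seen the gevent road: after monkey-patching, blocking-looking code becomes cooperative under the hood (needs the third-party gevent package; a sketch using its documented spawn/joinall API):

from gevent import monkey; monkey.patch_all()   # make stdlib IO cooperative
import gevent
import urllib.request

def fetch(url):
    # Looks blocking, but the patched socket yields to gevent's hub,
    # so many of these run concurrently on one OS thread.
    return len(urllib.request.urlopen(url).read())

jobs = [gevent.spawn(fetch, "http://example.com") for _ in range(5)]
gevent.joinall(jobs)
print([job.value for job in jobs])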
Async is equivalent to the io monad. It is in itself a monad, I know, but I’m saying it’s just like the io monad. It pollutes everything it touches. It’s literally only effective when paired with IO. So it’s actually in many ways identical.
And that’s a good thing.
In attempting to avoid the pollution you end up implementing the imperative shell/functional core pattern.
Most programmers don’t know that pattern. But for those in the know, the pollution is a good thing.
If you programmed in Haskell you know what’s up. The way you avoid the io monad from polluting everything is part of what makes the program so modular. Async does the same thing. Literally.
await b(await a())
The above is roughly equivalent to this in haskell:
a >>= b
How do you avoid pollution? The answer to this question makes your program better.
"X is just a monad" isn't a useful statement, because lots of types are monads (e.g. lists, hash maps, and nullable pointers).
An important difference between async/await and Haskell's `IO a` is that it's possible for asynchronous code to invoke sync code, and in some languages (such as Rust) vice-versa. So it acts more like a monad transformer, providing operations `IO a -> AsyncIO a` and `AsyncIO a -> IO a`.
The main challenge of async/await is that unskilled people who don't understand threads try to use async/await as a substitute, which leads to bizarre articles like "what color are your functions".
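In Python terms the two directions look roughly like asyncio.to_thread for `IO a -> AsyncIO a` and asyncio.run for `AsyncIO a -> IO a`; a sketch:

import asyncio, time

def blocking_read():        # "IO a": a plain sync action
    time.sleep(0.1)
    return "data"

async def async_read():     # "AsyncIO a": a coroutine
    await asyncio.sleep(0.1)
    return "data"

async def main():
    # IO a -> AsyncIO a: lift sync work into async without stalling the loop
    print(await asyncio.to_thread(blocking_read))

print(asyncio.run(async_read()))   # AsyncIO a -> IO a: run a coroutine from sync code
asyncio.run(main())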
I said async functions can be treated as the IO monad. And I also said that I realize that promises are themselves monads but that wasn’t my point. The point was to use async functions as coloring in the same way Haskell does it with the IO monad.
> The above is roughly equivalent to this in haskell:
It's not equivalent.
> How do you avoid pollution?
Haskell has <$> and the infrastructure of HKTs to stop this infectious propagation of IO; other languages do not, and their async/await colors do not isolate side-effectful actions from the rest of the pure parts of your codebase.
https://hackage.haskell.org/package/base-4.20.0.1/docs/Prelu...
I said roughly equivalent. Async functions pollute and represent io in the same way the io monad does.
The io monad does not isolate io from your pure code. It’s infectious just like an async function.
It’s the abstractions and ways to stop the infection that makes the code pure. You don’t even need hkts to do this. Most languages don’t have a type representing this infection. The infection propagates everywhere without anyone realizing it. The IO monad explicitly tells the developer that the infection is occurring.
I’m saying that async functions do the same thing as the io monad.
The <$> operator in Haskell is just sugar for patterns to stop the pollution from occurring. You can implement it in typescript too; it just won't be as general, since that operator is defined across all functors. In typescript you would define a function over only promises.
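A sketch of that promise-only fmap, written here with Python awaitables rather than TypeScript promises (the shape is the same either way):

import asyncio

async def fmap(f, awaitable):
    # <$> specialized to awaitables: apply a pure function to the eventual
    # result, without the caller writing its own await chain.
    return f(await awaitable)

async def io_action():
    await asyncio.sleep(0.1)
    return 20

def process_data(x):    # pure; never touches the event loop
    return x + 1

print(asyncio.run(fmap(process_data, io_action())))   # prints 21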
"Roughly equivalent" isn't equivalent at all.
> I’m saying that async functions do the same thing as the io monad.
> The <$> operator in Haskell is just sugar for patterns to stop the pollution from occurring.
No they don't. Async functions aren't IO actions in Haskell terms, and for the latter argument of <$>, you need referential transparency (via laziness) too, otherwise your attempt at "sugaring" your async functions will break at the first binding expression in a local scope for future processing elsewhere:
...
let arg = processData <$> ioAction
in ...
Do you want to wrap-and-call-later all of these cases into lambdas by hand? :) Show me an example of that being done in a type-safe way in typescript, and I'll point you at the layers that will break composition at the next binding.
If you want to redefine the meaning of roughly equivalent, then that's your prerogative. There's an isomorphism I'm referring to here, and if you fail to see it, that's not my problem.
As for the rest of your argument, the point is to not use async functions locally in the context of pure logic. The pattern is imperative shell, functional core.
Adding a property-changing prefix to "equivalent" makes it non-equivalent; I thought you would understand that, given that you were using the word "isomorphism".
> the point is to not use async functions locally in the context of pure logic. The pattern is imperative shell, functional core.
The point is that IO actions aren't `async defs`, because async defs lack two important properties needed to hold equivalence to IO actions in Haskell. I'm not sure why you're cherry-picking arguments to fit coloring keywords into the one slot that doesn't accept them: seamless composition.
You're just playing pedantic games. By roughly equivalent I mean isomorphic. Do you not get it? Isomorphism isn't equivalence. Sure, thanks for pointing out the obvious. Why don't we get with the program rather than state pedantic details?
IO actions aren’t equivalent to async defs. I never said that. I said roughly equivalent which means isomorphic.
I’m not sure why you’re trying to say I’m cherry picking my argument when I am the one dictating the point here. I made the first statement and you responded to it and you started out your previous response by trying to turn the conversation to your point.
Bro I made the point. I’m not changing the point. You need to not change the topic. In the very beginning I said functional core imperative shell. That’s the point.
I guess the io monad doesn’t prevent people from writing shit code in Haskell. You’re weaving in and out of io constantly with almost everything polluted with IO. Pure functions are scattered randomly in a patchwork of compositions without delineation between purity and IO. You don’t see that there needs to be a layer between the two.
> You’re just playing pedantic games.
I see you've been cultured by typescript and js.
> I said roughly equivalent which means isomorphic.
"roughly equivalent" isn't the definition of isomorphic, and I hinted which properties a type system and the runtime have to support for that isomorphism to be manifested in a language implementation, which isn't there for all of the mainstream languages, unless you're willing to provide that conversion by hand.
> when I am the one dictating the point here. I made the first statement and you responded to it and you started out your previous response by trying to turn the conversation to your point.
You're simply wrong, that happens.
> In the very beginning I said functional core imperative shell. That’s the point.
That terminology only exists as a coping mechanism for those using mainstream languages. In Haskell everything is functional composition, and `IO a` is neither exempt from it nor made into a special case. When you realise this I'll congratulate you on becoming less ignorant.
I’m not continuing this further. The thread has turned from discussion to conflict and we are both at fault. I’m ending it here and pray that dang doesn’t come along and flag the whole thing. Good day to you sir.
This thread is a good reminder not to try to become a Haskell Programmer
> This thread is a good reminder not to try to become a Haskell programmer
Many people say the same when they see pro players in their game at the NFL's Super Bowl. Others get excited and pursue the career.
It’s eye opening if you get it. I realize this thread is childish and arrogant but that’s largely orthogonal to the epiphany you gain from grokking Haskell.
Haskell has significantly more powerful abstraction capabilities than your average async/await language though, making coloring less of a problem.
Also, in Haskell you can only perform IO (aside from unsafe IO, I guess) inside the IO monad, potentially making the abstraction worthwhile; this is not the case in many other languages.
It's the same in typescript if your IO is always called exclusively from async functions.
That's not Haskell AFAIK. Do you mean
also known as monadic bind of a and b?

Yeah, typo. Corrected.
There seem to be two distinct topics:
1) async programming vs. threading
2) infectious async/await syntax
Async programming is great. Coroutines are a powerful tool, both for expressing your ideas more clearly and for improving performance in IO-heavy systems.
async/await syntax may not be the best design for async programming though. Consider this example in Julia:
`foo()` returns an asynchronous `Task`, `bar()` awaits this task, and you can invoke `bar()` from whatever context you want. Now look at the Python version with async/await keywords: Oops, we can't make `bar()` synchronous, it MUST be `async` now, as well as all functions that invoke `bar()`. This is what is meant by "infectious" behavior.

Maybe we can wrap it into `asyncio.run()` then and stop the async avalanche?
Yes, it works in synchronous context. But the path to asynchronous context is now closed for us:

So in practice, whenever you change one of your functions to `async`, you have to change all its callers up the stack to also be `async`. And it hurts a lot.

Can we have asynchronous programming in Python without async/await? Well, prior to Python 3.5 we used generators, so it looks like at least technically it's possible.
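The same avalanche sketched in TypeScript rather than the comment's Python, with illustrative function names:

```typescript
// Once the leaf function goes async, every caller is forced to follow.
async function foo(): Promise<number> {
  return 42; // imagine real async work here
}

async function bar(): Promise<number> {
  return (await foo()) + 1; // bar() MUST be async to await foo()
}

async function baz(): Promise<number> {
  return (await bar()) * 2; // ...and so must baz(), all the way up the stack
}

// A plain synchronous function has no way to extract the number from
// bar() without itself returning a Promise to its own callers.
```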
> Can we have asynchronous programming in Python without async/await?
Gevent exists: https://sdiehl.github.io/gevent-tutorial/
The tradeoff in Go as I understand it is that you can't know whether the runtime will opt for single threaded concurrency or parallelism? That seems like it could be a headache, but there will always be a headache somewhere in concurrent programming. Perhaps the Go headache is smaller than the C# one in this case.
Doing blocking IO together with UI code is pretty bad in general though. Disks are certainly not quick enough to have File.Delete(...) be blocking unless you know the disk isn't on a network server which is aboard a satellite leaving the solar system or whatever edge case you'll invariably run into.
> Doing blocking IO together with UI code is pretty bad in general though.
Non-blocking IO without multithreading doesn't require async/await though, all operating systems have had non-blocking IO functions for decades, they just never made it into language stdlibs.
Yes, and blocking is usually faster. It's entirely correct for code to be blocking by default, because it shouldn't assume there is any interactivity going on. If I want to make a console utility that scans some files, I don't need to worry about whether a UI is being repainted. So I completely agree with treating asynchronous as the odd case and blocking as the default, which is not what JS does but is what e.g. .NET IO does. If you want a responsive UI together with IO, you can often just combine a processing thread that blocks with a UI thread that doesn't. The case where you'd want proper async IO is when you want to wait for 100 IO tasks, each with unknown duration, where each is really async at the OS level anyway but your IO API doesn't expose that. Doing 100 threads isn't really a good option.
But most Windows 98-era UI programs were 1 or 2 threads... You just handle events in order in a loop.
It works and is far more responsive than what we have today.
> Disks are certainly not quick enough to have File.Delete(...) be blocking
What if you invoke a delete and then it fails and you want the user to respond? What will the state of your UI be when that happens?
> What will the state of your UI be when that happens?
If you can't do anything else until you know whether it was a success or failure, then you ensure that. E.g. you disable every single button that allows the user to do something else before the previous operation completes. Basically the theory is usually that you can allow the UI to "read" the program state while a "write" operation is still in flight. Typically this results in the user being able to for example scroll a document so it re-renders correctly etc. After the in-flight operation succeeds/fails, you can show the user the message if required, then enable new operations to happen. But the UI never stopped pumping messages so it was always responsive at least.
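A minimal sketch of that pattern, assuming hypothetical `deleteFile` and `showError` helpers:

```typescript
// Keep the message loop pumping: disable only the conflicting actions,
// never block the UI thread itself.
async function onDeleteClicked(button: HTMLButtonElement, path: string): Promise<void> {
  button.disabled = true; // forbid a second operation until this one settles
  try {
    await deleteFile(path); // hypothetical async delete; UI keeps repainting
  } catch (err) {
    showError(String(err)); // hypothetical error dialog
  } finally {
    button.disabled = false; // re-enable once the outcome is known
  }
}

// Hypothetical stubs so the sketch stands alone.
declare function deleteFile(path: string): Promise<void>;
declare function showError(message: string): void;
```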
> you disable every single button that allows the user to do something else before the previous operation completes.
Wow you mean the whole program becomes unresponsive? Crazy!
To address your main point, yes, scrolling, hover, etc can continue to work. But now you genuinely have two things your program is doing at once, and these must be coordinated.
A GUI framework typically handles this with a separate thread (or separate OS process), so your thread that responds to events can block while the render/refresh continues doing its thing.

With this design the problem goes away. Instead of writing code that disables the UI, issues a callback, and waits to respond, you just literally write:

if (!file.delete()) { showError() }
This is the kind of code you can read, and put breakpoints on.
> Wow you mean the whole program becomes unresponsive? Crazy!
Yes, a normal single-threaded GUI normally becomes unresponsive if the user invokes blocking IO on the main (UI) thread. By "unresponsive" in this context I mean "does not process messages in the message queue". That the user can't e.g. perform a certain operation is, in this context, not the same kind of "unresponsive". It responds (it could even tell him that he can't do it, or why he can't). It would be unresponsive if it gave the appearance that he could do something, but when he tries to, the UI doesn't respond and start the operation he requests.
> Instead of writing code that disables the UI, issues a callback, and waits to respond, you just literally write: if (!file.delete()) { showError() }
That's typically how I'd write code regardless of whether it's explicitly async. "Disabling everything" usually isn't necessary; what you disable is of course the operations that are logically forbidden to perform until the first operation completes. In a perfect world you don't have those. But often, you do.
> Wow you mean the whole program becomes unresponsive? Crazy!
It is using all resources to do what it was told to.
Results depend on the magnitude of the task and the hardware available with a very large overlap where the difference doesn't matter at all.
Whether blowing up complexity everywhere to solve a problem you probably won't have is a good thing is left as an exercise for the reader.
I agree. The right thing to do is to wait for the task to finish. I wrote that first line in jest; I'm making fun of the notion that blocking = slow = unresponsive.
Precisely!
I had lots of pending requests; the goal was to have as many in flight as possible (since the whole job took about 30 min) without freezing the UI.
When the callback happens there is work to do. The pattern is to do this work immediately.
Then there were as many as [not] possible bits of work to do simultaneously. Since the amount of work per job is unpredictable, deliberately making the number of simultaneous jobs unpredictable too is insanity.
Synchronously I can do [say] 50 requests per second, parse 55 and have a buffer.
The solution to the riddle is not to limit the number of requests by 90% and extend the task to take 5 hours while not using 90% of the resources. Then UI freezes only become less frequent, they don't go away.
Instead, I store the data from every callback in an array along with a description, and use a setInterval to parse a configurable number of responses per second while adjusting new requests to the size of the backlog.
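A hedged sketch of that backlog pattern (the names and the per-tick budget are illustrative, not from the original code):

```typescript
// Buffer raw responses as they arrive; parse only a bounded number per
// tick so the UI thread is never monopolized.
type Pending = { description: string; data: string };

const backlog: Pending[] = [];
const PARSE_BUDGET_PER_TICK = 50; // configurable: what the UI can tolerate

function onResponse(description: string, data: string): void {
  backlog.push({ description, data }); // store; do NOT parse in the callback
}

setInterval(() => {
  // Drain at most PARSE_BUDGET_PER_TICK items each second.
  for (const item of backlog.splice(0, PARSE_BUDGET_PER_TICK)) {
    parse(item);
  }
  // A fuller version would also throttle new requests by backlog.length.
}, 1000);

declare function parse(item: Pending): void; // hypothetical parser
```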
But then it isn't really async anymore.
The only original insight of this blog post is that it's nice to be able to use `sleep()` without async/await in Go, which is also true for Rust, and Rust has async.
It does not stem from async/await that JavaScript doesn't have sleep().
You're returning a promise; that's a red function. It's async.
https://journal.stuffwithstuff.com/2015/02/01/what-color-is-...
Sleep() should halt; this returns immediately.

If I want my code to actually halt, I can either make myself red and use await on your red function, or resign myself to putting everything after the sleep in a .then().
I just found out you can sync sleep in JavaScript. I’m using it to implement multi-process sync file locks that need to interoperate with a large non-async framework (eslint).
What the fuck?
>Thrown in one of the following cases:
>If the current thread cannot be blocked (for example, because it's the main thread).
I guess that's why it's not so widely used
It's not permitted on the browser main thread, but works fine on the Node main thread.
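For reference, a minimal sketch of that synchronous sleep using the standard `Atomics.wait`:

```typescript
// Atomics.wait blocks the calling thread until the timeout elapses
// (nothing ever calls Atomics.notify on this buffer, so it always times
// out). Allowed on the Node main thread and in workers; a browser's main
// thread throws instead.
function sleepSync(ms: number): void {
  const shared = new Int32Array(new SharedArrayBuffer(4));
  Atomics.wait(shared, 0, 0, ms); // returns "timed-out" after ms milliseconds
}

sleepSync(1000); // blocks the thread for one second
```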
I misread your comment. It wouldn't make sense to have a non-async sleep in a browser, as it is an event-loop-based, primarily single-threaded JavaScript runtime.
I still don't know why that isn't just provided for you.
The function coloring post[1] covered this very nicely some time ago. It's definitely still an issue in any "monadic" approach to IO.
People who use progressive languages[2] will be using effect systems in a year or two, and this problem will go away.
[1]: https://journal.stuffwithstuff.com/2015/02/01/what-color-is-...
[2]: Unison, OCaml, Scala, and maybe more I don't know about.
I'd argue that the pyramid of doom callback hell is easily a lot worse than async/await.
To me, the title seems a bit extreme, but I think of it as really just synchronous programming.
True threaded programming is difficult. I find modern closure syntax, where closures can access parent contexts, to be the most effective way to write concurrent stuff.
In either case, you still need to worry about things like thread contention/locks and whatnot.
Those of us, of a certain age, can remember “refCon” (reference context) parameters. I haven’t had to use one of those, in ages.
I think it’s nice for server stuff but on a desktop app it’s a pain to deal with. MS definitely went overboard with a lot of APIs being async only. A lot of people don’t seem to understand that async/await is still multithreading so in a desktop app they tend to mess up. Not sure how it’s in mobile.
Although it seems async/await is based on multithreading, this is not the case. To learn about this, read this blog post: https://blog.stephencleary.com/2013/11/there-is-no-thread.ht... Please let me know what you think.
Parent didn't say that "async/await" is based on multithreading, they said "it is multithreading", which is definitely correct. It is a form of cooperative multithreading with the statically enforced restriction that you can only yield when the thread stack contains a single stack frame.
It is true that an async/await task doesn't map 1:1 to an OS thread, but that's neither here nor there.
> It is a form of cooperative multithreading
This is not correct.
Async is orthogonal to multithreading. The async runtime's threading model is an implementation detail. e.g. Node is single threaded. In Rust the Tokio async runtime has a configurable threading model.
The article is mostly focused on Dart and C# - maybe you're referring to one of those specific implementations
Again, thread does not imply OS thread. That's only one possible instance of threading. A general thread of execution is just a sequence of continuations executed one after another. This is exactly what an async/await task is.
The fact that multiple lightweight threads map onto a heavier-weight OS thread, like in Node or in Tokio or whatever, is neither important nor novel. M:N threading has been a thing for a very long time.

A specialized async/await runtime is a bit different from the typical M:N runtime (which usually tries to transparently mimic the preemptive POSIX thread model), but conceptually there isn't much difference.
> A general thread of execution is just a sequence of continuations executed one after another.
you're using a definition of "thread" that is quite abstract and not at all what is generally meant by most people when discussing these things
> multiple lightweight threads map onto a heavier-weight OS thread, like in Node
describing Node as implementing M:N threading, while correct in an abstract sense, is not really useful or again, how most people would describe it.
> A specialized async/await runtime is a bit different from the typical M:N runtime... but conceptually there isn't much difference.
sure, conceptually, but again, you're using definitions in a very idiosyncratic and abstract manner. Which is your right, but it's not very persuasive and it's out of touch with how most people talk about these things.
Well, technically Node is N:1. But concretely, what significant difference do you see between an async/await runtime and a threaded runtime, other than the former requiring yield points to be syntactically marked in code?
I find the developer experience is quite different. Which is actually quite important. In the async runtimes I am familiar with I find managing shared resources and locking much easier.
But maybe I'm missing something here. Do you know of an async runtime and a threaded runtime that do not have significant differences?
Take boost.asio: it is a generic event loop (that can run on one or more OS threads): on top of asio you can run old school manual continuation passing code, promise/future based code, async code using C++20 coroutines (or a macro hack), or more classically multithreaded code using boost.context. You can write the same logical code in any style and the transformation is fairly mechanical.
async/await allows multiple stacks to be active at once within a single thread. It's not a form of multi-threading, which implies the presence of a thread scheduler.
The event loop behind async/await is completely equivalent to a thread scheduler.
An event loop is one possible way to implement an async/await executor, but by far not the only way, or even necessarily the most widely used.
Either we have vastly different definitions for event loop or my imagination is very limited.
The code after an await often gets executed in a different thread based on my logging.
The worst thing to happen to programming you say?
Worse than null-terminated strings in C? Worse than null being returned on failure of dynamic memory allocation? Worse than nullability of columns in SQL, which Tony Hoare (the author) called "my billion-dollar mistake"? Worse than the Knight Capital update bug that caused a $440 million loss in 45 minutes, meaning Knight went out of business and was taken over? Worse than the innovative design of Therac-25 that caused deaths and serious injuries by giving patients 100x the intended doses of radiation? Worse than the Fujitsu Horizon system that led to deaths due to stress and suicide, innocent people being put in jail, etc.?
I could go on but you get my point.
What a staggeringly stupid headline in service of clickbait.
The billion dollar mistake refers to pointers being nullable. Old school type systems not supporting the equivalent of Maybe<T>, basically. It's not specific to columns in sql.
Async/Await is a relatively recent development and the author is free to argue it's a bad paradigm that makes simple things way harder than they need to be. It wouldn't be the first time our industry jumped on a very silly bandwagon! Calling something "the worst" is fine. It's not a literal claim and it doesn't deserve your scorn.
> Worse than the Fujitsu Horizon system that led to deaths due to stress and suicide, innocent people being put in jail, etc.?
Horizon had really bad bugs, yes - but it wasn't the software that caused the cover-up or the miscarriages of justice: it was the management of (then-)privatized Post Office Ltd that decided they could not afford the risk of losing big government contracts if any word got out that the system was making fundamental ledger errors. Most of everything else can be blamed on the sheer separation between the devs and the actual end-users: these problems could have been caught and fixed if the Horizon tech-support people weren't 100% subordinate to upper-management (if I learned that our support people were using phone-scripts as bad as the PO's I'd threaten to resign).
(though I'd argue the real fundamental problem here wasn't technical, nor managerial, but a simple consequence of the UK's entrenched class-system: subpostmasters generally don't read Classics at Oxbridge, which means in the eyes of the establishment that they're probably the ones at fault)
I'm not mad at the ICL/Fujitsu devs - they were severely understaffed (I gather it was literally just 4 people?), but I am disappointed (and in a state of disbelief) that they evidently didn't have any devs competent enough to know how to design a transactionally-safe retail POS system in the first place (and P.O.S. is the word...): hiring and retaining good people would have avoided this episode entirely (...though no doubt something else would have led to a similar management scandal eventually - it's in the nature of almost all large UK businesses at this point).
The net effect of these "syntactic sugar" diatribes has been many lifetimes of wasted developer effort in migrating between programming languages and frameworks, and countless products/investment dollars fizzling into oblivion because the developers couldn't stop worrying about how pretty the code looks.
If your program starts spanning multiple machines or awaiting user input, CPS becomes a topic. At first it's scary; after a while you just assume everything is async.
CPS: https://en.wikipedia.org/wiki/Continuation-passing_style
I think the article confuses "spinlock" with "busy-wait", which made me confused too.
I think the idea behind C# async/await was that you could use multiple threads without having to worry about the details. It seemed redundant to me to have awaits in a web server. That runs multiple threads already; why does my code have to be async now? I hated it too.
Because of I/O.
Without any kind of async (this includes green threads) you run out of (OS) threads very fast.
This is not a black/white decision whether Async makes sense for every API all the time or never.
It's a solution to a specific problem that occurred (and still occurs) a lot.
The article was just about the syntax though, because they are still using asynchronous programming via coroutines (or goroutines ;-))
It's not about syntax. There is a huge difference in implementation and semantics between stackful coroutines (which go uses) and stackless coroutines (which most languages with async/await use).
For all practical purposes goroutines behave as separate threads with blocking calls. The fact that they are multiplexed on a few system threads is an implementation detail.
Otherwise you could say that using system threads directly is also asynchronous programming. After all, your thread gets suspended on system calls (including synchronization primitives) and is resumed upon their completion.
I don't think there is a huge difference: you can implement stackful coroutines via heap-allocated frames à la Scheme that look a lot like separately suspended stackless coroutines. Conversely, you can combine chains of stackless coroutines waiting on each other into a single object (I think Rust is, for example, capable of this in principle).
Semantically the biggest difference is that stackless coroutines typically require yield points to be marked syntactically in code.
> you run out of (OS) threads very fast.
What does run out mean?
Each thread is tied to an OS thread, which is tied to a CPU core/hyper-thread. You get like ~6000 threads on a modern OS and CPU.
Your program needs one million threads that sleep for 2 seconds, read some data and then finish. Guess what? Your execution is going to take hours, or you'll get some kind of exception that you've run out of threads, because after the first ~6000 threads are taken your OS can no longer give threads to anything else.
With green threads, the threads are fake, aka virtual, and controlled by the language's runtime, be it the JVM, CLR, Go's runtime, etc. The runtime is usually smart enough to recognize sleeps and, while waiting for something, schedule another thread in its place.[1] So now all one million threads start near-instantly and work almost all in parallel.
[1]https://www.youtube.com/watch?v=bOnIYy3Y5OA
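The same workload shape in JavaScript terms, where each task is a runtime-scheduled continuation rather than an OS thread (a hedged illustration; memory permitting):

```typescript
// A million "sleep 2s, then produce data" tasks on a single OS thread.
// Each await parks its task, freeing the thread for all the others.
async function task(i: number): Promise<number> {
  await new Promise((resolve) => setTimeout(resolve, 2000));
  return i; // stand-in for "read some data"
}

async function main(): Promise<void> {
  const tasks = Array.from({ length: 1_000_000 }, (_, i) => task(i));
  const results = await Promise.all(tasks);
  console.log(results.length); // 1000000, after roughly 2 seconds total
}

main();
```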
A desktop OS may struggle at thousands of threads. Linux can handle many more just fine.
> Your program needs one million threads that sleep for 2 seconds, read some data and then finish.
I have yet to see this problem, but yeah I agree that millions is about when there will be problems.
Although it seems async/await is based on multiple threads, this is not the case. To learn about this, read this blog post: https://blog.stephencleary.com/2013/11/there-is-no-thread.ht... Please let me know what you think.
A quick fix to the problem would be to allow synchronous functions to use await.
But using `await` is fundamentally what makes a function async
What problem would arise if synchronous functions could use await?
Languages with async/await do usually give you a way to bridge between async and sync code, but generally through a function (like this one [1] in Python) rather than allowing you to await from sync code. The problem is that sync code isn't being scheduled by the event loop. You need to use a function that's aware of the event loop's internals and can schedule your coroutine, and use a rendezvous or another synchronization primitive to wake up your sync code when the coroutine is done.
One could imagine a language where an await from sync code was syntactic sugar for such a function call, but generally the async/await syntax serves as a way to deliberately segregate your sync and async code. So that would defeat the purpose. At that point it might make more sense to design the language like Go and make everything async. (Preemptive runtimes like Java's virtual threads are another option.)
There was a good blog post I want to link here where the author argued that async/await is making explicit the fundamental property of some functions being expensive, as a counterpoint to TFA, but I haven't been able to find it again. But they made the case that expensive operations are fundamentally viral, and async/await was only making this explicit and wasn't unjustified overhead as some argue.
[1] https://docs.python.org/3/library/asyncio-task.html#asyncio....
Couldn't the browser simply internally turn this:
Into this:

No, it can't. These two are semantically not equivalent, because in the async version the caller of f1() is resumed only once the fetch has completed. In your callback version it will be resumed immediately.
Also think about how this should be translated: function f1() { x = await fetch('something'); return x + 1; }
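A hedged sketch of why that mechanical rewrite fails, adapting the `f1` shape above (the body is illustrative):

```typescript
// With await, f1's caller resumes with the computed value only after the
// fetch settles.
async function f1(): Promise<number> {
  const x = await fetch('something');
  return (await x.text()).length + 1; // illustrative use of the response
}

// A naive callback translation returns before the fetch completes: there
// is nothing meaningful to hand back synchronously.
function f1Naive(): number {
  fetch('something').then((x) => {
    // the "+ 1" work can only happen here, long after f1Naive returned
  });
  return NaN; // ??? -- no synchronous value exists yet
}
```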
If they await, they’re no longer synchronous functions
The difference would be that you still can call the function without await.
So this would be possible:
Instead of having to async each and every function and await each and every function call: Which is what TFA complains about and what indeed is a pain in the ass.

Only in poorly designed code do you have problems like this, where async/await goes viral. It can be avoided by splitting libraries into an IO part that uses async/await and a protocol part that is sans-io. Using async code to access data that is already in application memory is inefficient; it should be implemented synchronously with regular functions instead.
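A minimal sketch of that sans-io split, assuming an illustrative socket shape (none of these names come from a real library):

```typescript
// Protocol part: pure and synchronous, operating on bytes already in
// memory. (A real parser would also buffer partial messages across chunks.)
function parseMessages(buffer: Uint8Array): string[] {
  return new TextDecoder().decode(buffer).split('\n').filter(Boolean);
}

// IO part: the only place that awaits.
async function readLoop(socket: { read(): Promise<Uint8Array> }): Promise<void> {
  for (;;) {
    const chunk = await socket.read(); // suspend only at the IO edge
    for (const msg of parseMessages(chunk)) {
      handle(msg); // synchronous: the data is already in application memory
    }
  }
}

declare function handle(msg: string): void; // hypothetical message handler
```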
This seems a no true Scotsman argument. In actual real applications it is not always possible to split IO from non I/O parts and the virality of async prevents composition and encapsulation without significant refactoring.
> In actual real applications it is not always possible
This depends on what your expectations are. IO operations must suspend to wait for data reads and writes, therefore it is not possible to avoid async/await there. In other cases you might have multiple tasks depending on a specific IO operation, for example one connection to a database that is used by multiple HTTP sessions; here it is also not possible to avoid async/await, because those sessions are bound to an IO operation.
The real problem however comes when tasks that can complete synchronously are implemented with an asynchronous interface, for example
This is a poor design because a socket read can pull multiple messages from the kernel buffers into user space, and there should be a way to consume them without await. Many libraries however don't watch for this problem, and that results in the everything-is-async madness.

Tl;dr the author doesn't understand why having "async operation running in the background" is a useful primitive in a language.
Well, for one example, it means the syntax for kicking off dozens of IO requests and collecting all the results is trivial.
Also I'm tired of people saying "async is infectious!" as if it is something clever.
Having concurrency be part of the type system is a good thing!
"the syntax for kicking off dozens of IO requests and collecting all the results is trivial."
Please show me this trivial code, assuming I want to process 12 requests in parallel at most (and always processing 12 at the same time until there are none left to process).
Something like this, depending on how you want your input and output to be supplied.
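The original snippet didn't survive the formatting here, so this is a hedged sketch of one shape it could take: 12 workers draining a shared queue, each starting a new request as soon as its current one finishes (names are illustrative):

```typescript
async function processAll(urls: string[], limit = 12): Promise<string[]> {
  const results: string[] = new Array(urls.length);
  let next = 0;

  // Each worker repeatedly claims the next unprocessed index. The claim
  // (check + increment) is synchronous, so workers never collide.
  async function worker(): Promise<void> {
    while (next < urls.length) {
      const i = next++;
      const res = await fetch(urls[i]);
      results[i] = await res.text();
    }
  }

  // Spawn `limit` workers and wait for all of them to drain the queue.
  const workers = Array.from({ length: Math.min(limit, urls.length) }, worker);
  await Promise.all(workers);
  return results;
}
```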
I would not call this trivial. :)
I'd argue that it is a lot easier than doing it with threads!
Promises are a primitive, and the async and await keywords in JavaScript are just syntactic sugar around promises (as they are around similar constructs in other languages). A promise is just a long-running task that will return a result eventually. Being able to grab a promise as an object and pass it around is super useful at times, and it is something I end up using a lot in my JS/TS code.
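A small sketch of that "grab it and pass it around" usage (illustrative names):

```typescript
// Start the work eagerly; the promise is a first-class handle to it.
function startLoad(url: string): Promise<string> {
  return fetch(url).then((r) => r.text());
}

async function render(pending: Promise<string>): Promise<void> {
  await doOtherSetup();       // overlaps with the fetch already in flight
  const body = await pending; // rendezvous with the result only here
  console.log(body.length);
}

declare function doOtherSetup(): Promise<void>; // hypothetical setup work

render(startLoad('https://example.com/'));
```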
Because it is a language primitive that is also expressed in the type system, more complex systems can safely be built up around it, in the same way that it is easier to build safe(r) complex systems up around threads in languages that have threads as a primitive. (Rust being a great example here of bringing threads into the language, Java being another early example, though their early attempts were not perfect since we've learned a lot since 1995!)
Async/await and promises are a great example of a technology that makes doing easy stuff easy, and makes hard stuff possible.
tl;dr people need to stop complaining that other multitasking/threading paradigm looks different than their preferred one, each has plusses and minuses and one isn't "better" than others, they just serve different purposes.
In C# you'd use a Channel for this, which is pretty easy to use. But of course that is built on top of async/await, with those alone it is far from trivial to implement your specific case.
> Tl;dr the author doesn't understand why having "async operation running in the background" is a useful primitive in a language.
Isn't that covered in the paragraphs starting with?
My beef is with Web Crypto. Why are encrypt and decrypt async?
Because otherwise the main thread will be blocked while those operations take place and the whole UI will freeze.
You'll find this convention in other JS crypto stuff as well, like libraries for password hashing. Eg:
https://www.npmjs.com/package/@node-rs/argon2
https://www.npmjs.com/package/bcrypt
Though the bcrypt package does provide an additional sync API. (You should be using argon2 though.)
Yes, but you can offer both sync and async methods. Web Crypto only has async.
Programmers who are not aware of the pitfalls will use the sync version and build apps that have terrible UX, so it's better not to give them the option unless there's a really good reason to. So what's the use case here?
Cause they can be slow.
And we're still awaiting any of their benefits.
People think too much about these things.
That's why they pay us the big bucks.
No "they" pay you to ship features.
Non paywall
https://freedium.cfd/https%3A%2F%2Fandrewzuo.com%2Fasync-awa...
Yeah, that's why in nodejs I make functions async unless I know really well that they won't need await. Though that seems bad, it's not the worst, given how good async/await actually is.
Unpopular opinion: it makes me want a setting where every function is async by default and every function call is awaited by default, with `nowait` and `sync` as the opposites.
Obligatory link to “What Color is your Function?”
https://journal.stuffwithstuff.com/2015/02/01/what-color-is-...
The author is simply unskilled and lacks critical knowledge.
I was thinking that yesterday afternoon in dealing with some Python, but then found that
    import asyncio
    import something_else

    ...

    asyncio.run(something_else.that_runs_asynchronously(x, z, z))
took care of this.
I love async and await; it's one of the best things to happen in programming.

Simple solution to concurrency.

I appreciate it's hard to grasp at first, but it's literally second nature now; I rarely need to even give it much thought.
CSP is simpler. Simpler is easier to get right.
I used to write a lot of asynchronous servers in C up until a couple of decades ago. I found it easy. Most people didn't. We have better ways of doing things today.
> CSP is simpler. Simpler is easier to get right.
If it's the same CSP I'm thinking of, then yes, but it's only simpler because it relies on enough people on the team having a good grasp of these parts of CS. Based on my own experiences in uni I can tell that courses on formal methods and the like are probably the least popular, taken by a tiny minority of students - it follows then that only a tiny minority of software-writing professionals will have the requisite level of understanding to apply these approaches to their day-job - and those that do are likely already employed within an organization that relies on these formal methods (e.g. safety-critical avionics, Wall St. quants, etc), which in turn helps attract other highly-capable people.
...now contrast those imagined employers with the rest of the software industry: unsexy line-of-business application developers, SaaS dev contract shops, the places where people who couldn't get jobs at Google or OpenAI might end-up working; and also consider the larger-still community of non-professional software writers (people doing VBA in Excel to anyone who simply wants to learn how to make an interactive website for themselves).
CSP is not going to help this latter group. And, in fact, it's this very latter group which drives programming-language design because that market is 100x the size of the formal-methods-ivory-tower people (who are probably using gratis open-source tooling anyway).
Compared to CSP, async/await is something that you can demonstrate to someone with almost zero experience writing software, who probably can't even afford the time to try to understand how it works; but the mechanics of putting the `await` keyword in the right place can be taught in a few hours and suit 95% of their needs.
-----
If languages like C# or JavaScript were designed only to suit people like you or me then the ecosystems around those languages wouldn't be anywhere near as big, nor those languages anywhere near as decently supported and maintained. If the "price" for that is putting-up with a handful of language-warts then I'm happy to make that trade. I've still got Z3 for everything else :)
Uhhhh, just use Elixir? This is a solved problem.
tl;dr author discovered blue/red problem, then went on to discover coroutines and likes coroutines better. Same old discussion, nothing new.
Coroutines via user-mode stack-switching worked just fine for decades. Async/await is really only needed on limited runtime environments like Javascript or WASM where stack switching isn't an option (and at the cost that the compiler needs to turn sequential code into a switch-case state machine under the hood, which then introduces all sorts of problems, from 'function coloring' to requiring special support in debuggers).
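A hedged sketch of that switch-case lowering: roughly the state machine a compiler might emit for `async function f() { const a = await step1(); return step2(a); }`, hand-written and heavily simplified, not any compiler's actual output:

```typescript
function f(
  step1: () => Promise<number>,
  step2: (a: number) => number
): Promise<number> {
  let state = 0;
  return new Promise((resolve) => {
    function resume(value?: number): void {
      switch (state) {
        case 0:
          state = 1;
          step1().then(resume); // suspend: resume here when step1 settles
          return;
        case 1:
          resolve(step2(value!)); // the code that followed the await
          return;
      }
    }
    resume();
  });
}
```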
Everything above machine code is about hiding complexity. That per se is not an issue. Programming in CPS form is madness; async/await is only slightly better than that.
> non-blocking
What is your program going to work on while it waits for the task? Usually nothing. You need to read some data before analyzing it, etc.
While you wait the OS can switch to other threads.
The only question here is whether you want to return that time to the operating system or to your language runtime.
> they’re just hiding the complexity
async/await feels helpful because you can write normal code again! If/else, for loops, etc.

Oh wait, that's also what the UNIX operating system does. It abstracts away all this blocking and context switching so you can write normal code.
> If adding async to a function is too much
The author's point is a good one. You essentially have two languages and two classes of functions: the regular version and the async version. Lots of duplication, and a tendency for everything to become async.
> a skill issue.
I think you don’t understand process scheduling.
while we’re waiting for the OS to ‘save us’ from async/await, let’s not ignore the fact that writing code that doesn’t hang, crash, or block the main thread is a skill for a reason.
> hang
Hang implies there is something you are not responding to.
Let me ask again. What are you imagining your main thread should be doing while it is waiting for essential data?
Responding to new inputs means changing state. But your program is already in another state. Two separate program execution states are best described by two separate threads.
> crash
Crash early, crash often. If invariants can’t be maintained, don’t pretend otherwise.
"Ah, the ‘crash early, crash often’ mantra — truly inspiring. I guess when your program explodes because it can’t handle concurrency, we should just sit back, crack open a cold one, and toast to ‘maintaining invariants.’
And sure, let’s talk about state. If handling multiple states at once is ‘extremely difficult,’ then yes, async/await might not be the best for anyone who panics when their program has to juggle more than one thing. But that's kind of the point.
Async/await is like giving you a leash for your concurrency so you don't need to wrangle state machines on a pogo stick. But hey, if you're happier living on the edge of crashville because 'the OS scheduler will save me,' who am I to interrupt your Zen?