> I consider [having a big benefit at 100% vs an 80/20 rule] a characteristic of type systems in general; a type system that you can rely on is vastly more useful than a type system you can almost rely on, and it doesn’t take much “almost” to greatly diminish the utility of a given type system.
This! This is why I don't particularly care for gradual typing in languages like Python. It's a lot of extra overhead, but you still can't really rely on it for much. TypeScript types are just barely over the hump in terms of being "always" reliable enough to really lean on.
I agree with the 100% rule. The problem with Typescript is how many teams allow “any”. They’ll say, “We’re using TypeScript! The autocomplete is great!” And on the surface, it feels safe. You get some compiler errors when you make a breaking change. But the any’s run through the codebase like holes in Swiss cheese, and you never know when you’ll hit one, until you’ve caused a bug in production. And then they try to deal with it by writing more tests. Having 100% type coverage is far more important.
In my rather small code base I’ve been quite happy with ”unknown” instead of any. It makes me use it less because of the extra checks, and catches the occasional bug, while still having an escape hatch in cases of extensive type wrangling.
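To sketch the difference (the helper and field names here are made up purely for illustration): with `any` the bad access would compile silently, while `unknown` forces a check first.

    // Hypothetical helper: widen JSON.parse's `any` result to `unknown`.
    function parsePort(raw: string): unknown {
      return JSON.parse(raw);
    }

    const value = parsePort('{"port": 8080}');

    // value.port;               // compile error: 'value' is of type 'unknown'
    // const n: number = value;  // compile error: 'unknown' is not assignable to 'number'

    // The compiler insists on a runtime check (or an explicit assertion) before use:
    if (typeof value === "object" && value !== null && "port" in value) {
      const { port } = value as { port: number };
      console.log(`listening on ${port}`);
    }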
The other approach, having an absolutist view of types, can be very constraining and complex even for relatively simple domain problems. Rust, for instance, is imo in diminishing-returns territory. Enums? Everyone loves them, uses them daily and even writes their own out of joy. OTOH, it took years of debate to get GATs implemented (is it now? I haven’t checked), and not because people like and need them, but because they are a necessary technicality to do fundamental things (especially with async).
Typescript's --strict is sometimes a very different ballgame from default. I appreciate why in a brownfield you start with the default, but I don't understand how any project starts greenfield work without strict in 2025. (But also I've fought to get brownfield projects to --strict as fast as possible. Explicit `any` at least is code searchable with the most basic grep skills and gives you a TODO burndown chart for after the fastest conversion to --strict.)
Typescript's --strict still isn't technically Sound, in the functional programming sense, but that gets back to the pragmatism mentioned in the article of trying to get that 80/20 benefit of enough FP purity to reap as many benefits without insisting on the investment to get 100% purity. (Arguably why Typescript beat Flow in the marketplace.)
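One concrete example of the remaining unsoundness (standard TypeScript behavior, nothing specific to the article): under plain --strict, array indexing is typed as `T` even when the index is out of bounds, unless you also enable `noUncheckedIndexedAccess`.

    // Compiles cleanly under --strict, but throws at runtime.
    const names: string[] = ["ada", "grace"];
    const third: string = names[2];     // type says string, value is undefined
    console.log(third.toUpperCase());   // TypeError at runtime

    // With "noUncheckedIndexedAccess": true, names[2] is typed as
    // string | undefined, and the assignment above becomes a compile
    // error until the undefined case is handled.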
> you still can't really rely on it for much
And yet, type annotations in Python are a tremendous improvement and they catch a lot of bugs before they ever appear. Even if I could rely on the type system for nothing it would still catch the bugs that it catches. In fact, there are places where I rely on the type system because I know it does a good job: pure functions on immutable data. And this leads to a secondary benefit: because the type checker is so good at finding errors in pure functions on immutable data, you end up pushing more of your code into those functions.
It may be the exact opposite. You can't express all the desired constraints for your problem domain using just the type system (and at least you shouldn't try, to avoid Turing-tarpit-like issues); you need a readable general-purpose programming language for that.
If you think your type system is both readable and powerful, then why would you need yet another programming language? (Haskell comes to mind as an example of such a language -- I don't know how true that is.) The opposite (a runtime language used at compile time) may also be successful, e.g. Zig.
Gradual typing in Python provides the best of both worlds: things that are easy to express as types, you express as types. On the other hand, you don't need to bend over backwards and refactor half your code just to satisfy your compiler (Rust comes to mind). You can choose the trade-off suitable for your project and be dynamic where it is beneficial. Different projects may require a different boundary. There is no one size fits all.
P.S. As I understand it, the article itself is about "pragmatism beats purity."
On the other hand, if you think of a programming language as a specialized tool then you choose the tool for the job and don’t reach for your swiss army knife to chop down a tree.
The problem with gradually typed languages is that there are few such trees that should be chopped by their blunt blades. At least Rust is the best for a number of things instead of mediocre at all of them.
One counterpoint to this is local self exploratory programming. For that a swiss army knife is ideal, but in those cases who cares about functional programming or abstractions?
I mostly agree with the sentiments in this article. I was once an extremely zealous FP acolyte but eventually I realized that there are a few lessons worth taking from FP and applying to more conventional programming:
1. Pure functions are great to use when you have the opportunity.
2. Recursion works great for certain problems and with pure functions.
3. Higher order functions can be a useful shorthand (but please only use them with pure functions).
Otherwise, I think simple, procedural programming is generally a better default paradigm.
> Otherwise, I think simple, procedural programming is generally a better default paradigm.
I think this is almost the opposite of the conclusion TFA (and in particular the longer-form article linked elsewhere here) draws, which is more like: most standard imperative programming is bad and standard (pure) FP is at least slightly better, but people generally don't draw the right conclusions about how to apply FP lessons to imperative languages.
I was hoping this article would be a little more concrete, but it seems that it's largely talking about takeaways from functional programming in a philosophical, effort-in vs value-out kind of way. This is valuable, but for people unfamiliar with functional programming I'm not sure that it gives much context for understanding.
I agree with the high-level, though. I find that people (with respect to programming languages) focus far too much on the specific, nitpicky details (whether I use the `let` keyword, whether I have explicit type annotations or type inference). I find the global, project-level benefits to be far more tangible.
This is the conclusion of https://jerf.org/iri/blogbooks/functional-programming-lesson... . The concreteness precedes it, this is just the wrap up and summary.
It's a shame that this is not the link that was submitted because that (long) article was a really interesting read that gave me some food for thought and also articulated a bunch of things rather clearly that I've been thinking in a similar form for a while. I'm not sure that I agree with all of it, though (I still prefer maps, folds etc. over explicit loops in many cases, but do agree that this is less important than the overall code architecture).
I see. This is indeed the in-depth breakdown I was looking for, thank you.
I've always thought that there should be mutability of objects within the function that created them, but immutability once the object is returned.
Ultimately one of the major goals of immutability is isolation of side effects.
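Roughly what I have in mind, sketched in TypeScript (illustrative names, and it assumes you're willing to pay for Object.freeze at runtime): mutate freely while building the value, then hand out a frozen, read-only view.

    interface Report {
      readonly title: string;
      readonly lines: readonly string[];
    }

    function buildReport(title: string, rows: readonly number[]): Report {
      // Local mutation: nothing outside this function can observe it yet.
      const lines: string[] = [];
      for (const row of rows) {
        lines.push(`row: ${row}`);
      }
      // Freeze before returning: read-only at the type level (readonly)
      // and at runtime (Object.freeze).
      return Object.freeze({ title, lines: Object.freeze(lines) });
    }

    const report = buildReport("totals", [1, 2, 3]);
    // report.lines.push("oops"); // compile error: 'push' does not exist on readonly string[]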
How does this work out for functions in the middle of the call stack? Can the objects a function creates be mutated by functions they call? Phrased differently, can functions modify their input parameters? If a function returns one of their input parameters (modified or not), does that mean the calling function can no longer mutate it?
Maybe I'm discarding this too readily, but I don't think this idea of "local mutability" has much value -- if an object is mutable, the compiler and runtime have to support mutation, and many optimizations are impossible because every object is mutable somewhere during its lifetime (and objects created in main are mutable for the lifetime of the program).
If we include as an axiom, for the purposes of this conversation, that we must be able to refactor out any part of the "constructor" and that the refactored function must have the same mutation "privileges" as the original creator (which I think is fairly reasonable), this leads you in the direction of something like Rust, I think: it can construct objects and lend bits and pieces of them out in a controlled manner to auxiliary functions, while the value being returned can still have constraints on it.
Local mutability is fantastic and practical... In a language like Haskell where the type system tracks exactly what values are mutable and all mutation is precisely scoped by functions that freeze the value they generate in a way that prevents leaking.
In a language that isn't so precise, it's a lot harder to get value from the idea.
You can get value out of local mutability in languages like Scala or Kotlin. Variables can be declared mutable in the scope of a function, and their value can then later be assigned to a non-mutable variable, for example. Collections also come in mutable and immutable variants (although this has some pitfalls in Kotlin).
I have to say I don’t understand your point! The parent's comment is both clear and a reasonable, common approach to programming.
> Can the objects a function creates be mutated by functions they call?
No
> can functions modify their input parameters?
No
> If a function returns one of their input parameters (modified or not), does that mean the calling function can no longer mutate it?
No. Because the called function isn’t allowed to mutate its inputs, there’s no problem for the caller to mutate it. It’s irrelevant whether the input was also an output of the called function as it cannot mutate it anyway.
I suppose you can get into race conditions between caller and callee if your language provides asynchronicity and no further immutability guarantees. Still, you've eliminated a whole lot of potential bugs.
I mean, this isn't that different from any number of things you do in real life? You took a car to some destination. It is assumed you didn't change the engine. Took it to a mechanic, maybe they did?
More, many modifications are flat out expected. You filled out a job application, you probably don't expect to change the job it is for. But you do expect that you can fill out your pieces, such that it is a modifiable document while you have it. (Back to the car example, it is expected that you used fuel and caused wear on the tires.)
As annoying as they were to deal with, the idea of having "frozen" objects actually fits really well with how many people want to think of things. You open it to get it ready for use. This will involve a fair bit of setup. When done, you expect that you can freeze those and pass them off to something else.
Transactions can also get into this. Not surprising, as they are part of the vocabulary we have built on how to deal with modifications. Same for synchronization and plenty of other terms. None of them go away if you just choose to use immutable objects.
Note that this post is the end of a larger series of posts, collected in one page here: https://jerf.org/iri/blogbooks/functional-programming-lesson...
I realized I don't understand the idea of pure functions anymore. If a function fetches a web page, it is not pure because it modifies some state. But if a function modifies the EAX register, it is still pure. How is creating a socket or changing a buffer different from changing a register value, considering that in all cases outside observers would never know?
Let’s put your uncertainty to rest: at the extreme, any function execution spends both time and energy, both of which are observable side-effects.
So yes, you're right that there is no such thing as an absolutely pure function. "Pure" always assumes that all dependencies are pure themselves. Whether it's a reasonable assumption, and whether it's still useful or not, depends on your program: assuming an API call to be pure is certainly not reasonable for many use cases, but it is reasonable enough to be useful for others.
If a function is pure, it can take in and output 100% unmodifiable values. There will never be any side-effects in a pure function.
In other words, if you need to modify the contents of a variable for a function to run, that's not a pure function. Taking something in and outputting something just like it but with some modifications is allowed, so long as the input is unmodified.
Does that make more sense? You can't modify anything inside a function that originated from outside of the function's scope.
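A made-up illustration (not from the article), in TypeScript: the impure version reaches outside its own scope, while the pure versions only read their inputs and return new values.

    let total = 0; // shared state outside any function

    // Impure: reads and writes state that originates outside the function's scope.
    function addImpure(amount: number): void {
      total += amount;
    }

    // Pure: the result depends only on the arguments, and the arguments are untouched.
    function addPure(current: number, amount: number): number {
      return current + amount;
    }

    // Pure "modification": return a new value instead of changing the input.
    function withDiscount(prices: readonly number[], factor: number): number[] {
      return prices.map((p) => p * factor); // `prices` itself is never changed
    }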
I think they understand that, and are referring to more nuanced side effects. Logging, for example, is a side effect; so is even using a date function. Hitting an API endpoint without a cache may be functional if the response never changes, but do you want that? Usually we want a cache, which is skirting idempotency. The closer you look, the more side effects you see.
It so happens that this was the topic of one of the posts in this series: https://jerf.org/iri/post/2025/fp_lessons_purity/#purity-is-...
I'm assuming from your post you haven't come from there, and we just coincidentally picked a similar example...
Purity is relative to a given level of abstraction?
The author addresses this nicely in an earlier part of the blog book: https://jerf.org/iri/post/2025/fp_lessons_purity/
Dear Jerf, your articles are amazing. Thanks for the insights in this series.
Web development today is literally just massive, massive mutation operations on databases.
Functional programming can't stop it; it just sort of puts a fence around it. The fence makes sense if it's just 10% of your program that you want to fence off. But when the database is literally the core of your application, it's like putting a fence around 90% of the house, leaving you with 10% of pure functional programming.
Most operations are located at the database level. That's where the most bugs occur and that's where most operations occur. You can't really make that functional and pure.
This high level stuff is actually wrong. At the highest level web apps are NOT functional.
I get where he's coming from but he missed the point. Why do functional paradigms fail or not matter so much at the micro level? Because web applications are PRIMARILY not functional by nature. That's why this stuff doesn't matter. You're rewriting for loops into recursion/maps in the 10% of your code that's pure and fenced off from the 90% core of the application.
You want to make functional patterns that are applicable at the high level? You need to change the nature of reality to do that: make it so we never need to mutate a database, and create a functional DSL around that. SQL is not really functional. Barring that, you can keep a mutating database but create a DSL around it that hides the mutation.
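One way to sketch that last idea in TypeScript, under my own assumptions (the command type, table names, and `db` interface are all hypothetical): pure code only describes the writes as data, and a small interpreter at the edge actually performs them.

    // Pure code builds descriptions of mutations instead of performing them.
    type DbCommand =
      | { kind: "insert"; table: string; row: Record<string, unknown> }
      | { kind: "update"; table: string; id: string; patch: Record<string, unknown> };

    // Pure: turns a domain event into the writes it implies. Easy to unit test.
    function commandsForSignup(email: string, referrerId?: string): DbCommand[] {
      const cmds: DbCommand[] = [{ kind: "insert", table: "users", row: { email } }];
      if (referrerId) {
        cmds.push({ kind: "update", table: "referrals", id: referrerId, patch: { credited: true } });
      }
      return cmds;
    }

    // Impure interpreter at the edge; `db` stands in for whatever client you actually use.
    async function run(
      db: { execute(cmd: DbCommand): Promise<void> },
      cmds: DbCommand[],
    ): Promise<void> {
      for (const cmd of cmds) {
        await db.execute(cmd); // real SQL generation elided; only the shape of the idea matters here
      }
    }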
There must be something wrong in the way FP is taught if the takeaway that people have is that it prevents or is somehow opposed to mutation.
On the one hand you have a bunch of FP languages that don't care in the least bit about "purity" (i.e. being side-effect free) or are more pragmatic about it, such as various LISPs, OCaml, Scala or even parts of the JS ecosystem. And on the other hand, there's a lot of research and discussion in e.g. the Haskell community about how to properly deal with side effects. The idea here is not that side effects are bad, but that we want to know where they happen and combine them safely.
> The idea here is not that side effects are bad, but that we want to know where they happen and combine them safely.
Yeah, the idea is not that people gathering in groups of more than two and/or past 21:00 is bad, but that we want to know where it happens and ensure safety for all. Now, your papers, please, or we'll apply the type checker (we'll apply it to y'all anyhow, of course, but we'd like you to cooperate with the inference process).
I don't understand why people get so angry when a compiler points out that their code is broken. Is it better if it runs and does the wrong thing instead?
I am fine with the compiler pointing out broken code. I am not fine with people saying "The code with side effects must be segregated from the pure code, with the typechecker in place to maintain this separation, and we must also keep CONSTANT VIGIL against introducing any more effectful code than strictly necessary -- but of course we don't think that side effects are bad, haha. Why, some of my best friends are side effects, I am not a functional purist" or something like that.
Nobody is forcing you to use Haskell.
Hell, you can write imperative spaghetti in Haskell if you want. I've done it. People will just keep suggesting you fix it, because it's so much more obvious how bad it is when you can so quickly and easily use the type system to guide the process of fixing it.
No you missed the point. I completely get the meaning of segregating IO/mutation away from pure logic.
And my point is, what is the purpose of all of this if 90% of what your app does is mutation and side effects? Functional shell, imperative core indeed, but the shell is literally just a thin layer of skin. The imperative core is a massive black hole.
Functional programming can't save you from a black hole.
Obviously the answer to "this doesn't solve my problems" is "don't use it then". If your problem domain literally is nothing but API calls and DB updates, then you may not benefit from this.
OTOH, in my experience a lot of people underestimate how much pure business logic exists (or can be extracted) in many applications. In apps I've worked on I've found a lot of value in isolating these parts more cleanly. The blogbook series by the author of TFA (linked further upthread) goes into some detail about how to do that even without going fully down the "pure functional programming" rabbithole.
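As a sketch of what that isolation can look like (hypothetical order-handling names; my own illustration, not code from the blogbook): the arithmetic is a pure function, and the handler around it owns all the IO.

    interface LineItem { unitPrice: number; quantity: number; discount: number } // discount in 0..1
    interface Order { items: LineItem[]; taxRate: number }

    // Pure core: no database, clock, or network in sight, so it's trivial to test.
    function orderTotal(order: Order): number {
      const subtotal = order.items.reduce(
        (sum, it) => sum + it.unitPrice * it.quantity * (1 - it.discount),
        0,
      );
      return subtotal * (1 + order.taxRate);
    }

    // Imperative shell: fetch, compute, persist. `orderRepo` is a made-up interface.
    async function priceOrder(
      orderRepo: {
        load(id: string): Promise<Order>;
        saveTotal(id: string, total: number): Promise<void>;
      },
      orderId: string,
    ): Promise<void> {
      const order = await orderRepo.load(orderId); // IO
      const total = orderTotal(order);             // pure
      await orderRepo.saveTotal(orderId, total);   // IO
    }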
>Obviously the answer to "this doesn't solve my problems" is "don't use it then". If your problem domain literally is nothing but API calls and DB updates, then you may not benefit from this.
This is like 99% of web development today. And web development is like 99% of development. It's all very IO heavy and mutation heavy. You can't run from it.
It's why FP is mostly ineffective on the smaller scale because you're already walled off from doing anything that matters in web. Your server is stateless anyway so anything you do in this arena doesn't even matter.
> This is like 99% of web development today.
You've claimed this several times now and I fundamentally disagree. In my 11 years of experience working across some 7 companies, web development has always been more than just 99% side effects. Obviously your experiences may be different, but this generalisation is silly.
In my 15 years of experience across web development, gaming, embedded systems, and high-performance computation, web development specifically has always been 99% side effects. 99 is an exaggeration, but it is not far off; for web it's more like 70 to 80. My point is, it's the overwhelming majority, and anyone who is smart and experienced would know this.
There are a few places where functional can really be the entire stack, and unfortunately in those places the surrounding technological ecosystem is just not well equipped for FP, for historical reasons. For example, CUDA in HPC or AI: it's a very functional process of inputs and outputs. Or gaming, which is also very CPU and graphics intensive. Both are great candidates for FP, but the ecosystem surrounding them is focused on C++.
Web is the opposite. It's literally a style of metaprogramming. Your web server is stateless. It mostly just does authentication and functions as a thin security and routing layer. You are creating a program that takes a route path and translates it into ANOTHER high-level language that's used against a database, and that's web in a nutshell. That database is doing most of the work in your system; it is the main thing you are programming. So you write server programs that write programs for databases, and those servers spend 20% of their time constructing strings in the query language and 80% of the time waiting for the query to complete. Your database is doing all the computation and the server just waits on it.
Why else do you think we don't use Rust or C++ for servers? Why can we use a slow-ass language like Python to do server work? Because the database is the one doing all the work and the servers are just waiting.
If you're going purely stateless with no db, the only thing I can think of in web is chat. But then chat is very IO based which is also not suited for FP. I guess Big data number crunching streams can be thought of as "functional" but those things don't really make up the majority of web either. Maybe video streaming? I feel decoding videos is very library based to be honest.
>Obviously your experiences may be different, but this generalization is silly.
No, the only thing silly here is your opinion. I doubt our experiences are that different. In fact, I'm willing to bet that if you lay out your experience you'll see it's almost entirely as I said. Servers are metaprogrammers weaving code in a query language, forwarding computations to databases, which are inherently mutating and IO by definition. All your servers do is wait for external databases to finish, and databases are the engines that drive the web. Databases are mutating IO freaks of nature, which makes them not amenable to FP, and as a consequence most of the web is not amenable to FP either.
You can have any constellation of microservice architecture; it doesn't matter. You forward a call to a server, that server forwards a call to another server, and all those servers are in the end waiting on something that does the actual computation. Usually a database.
I know this because I've worked on things where functional programming is almost 99% applicable even on the smallest scales: basically, when you're programming the computation itself. Intensive, high-CPU and high-GPU computation, which is rarer in web. You'll see that for these applications most of your code is a giant composition of pure functions. It becomes so modular that it's almost like Legos if you follow the FP paradigm in this arena. It just sucks that it's mostly C++, though Rust is making big moves in this area.
You'll basically never encounter HPC in web. But you would know this if you had the experience you claim to have. And you would know that at the macro scale, web is not functional... It's imperative and mutation based.
> In fact I'm willing to bet, if you lay out your experience you'll see it's almost entirely as I said.
I don't know how to respond to something like that. You can trust me that I know and understand my own experience better than you do.
Easy. No one knows this, but in science, and therefore in reality, proof is impossible. Proof is the domain of mathematics and logic.
But disproof is possible in reality. It can be done by a single counter example.
So if I'm wrong about your experience, why don't you lay out a single example from your experience in web development where what I said is utterly incorrect? Because I literally cannot even imagine a counterexample.
One counterexample is all it takes to disprove my "bet". Even better if the example is extremely common. But it is up to you to decide if you want to debate this. I'm certainly willing to tell you about my entire experience, and all I'm asking for is a single counterexample from you.
* Take a complicated tree structure representing a document management system and transform it into some different tree structure, e.g. by filtering out elements or fields due to missing permissions, or by rewriting it in the way that the elasticsearch index expects it, or a bunch of other reasons.
* Generate a TOC of an HTML document by parsing all the header elements. More generally, anything that's related to parsing (something which I've had to do several times).
* Run a complex (> 50 LOC) validation on some user input.
* Generate a Word document or PDF. Conversely, parse a Word document or PDF.
* Financial computations, such as calculating totals of an order made up of multiple line items with discounts, taxes, additional charges, etc.
(I'm purposely leaving out more non-standard applications, such as pre-LLM chatbots, or an entire mathematical problem solver that doesn't even use a database, but just so you know, they exist too.)
There's also the much more immediate counterpoint that it's of course possible to write web applications in Haskell (or OCaml, Scala, ...), otherwise all these frameworks wouldn't exist.
First one looks like meta programming. IO. You are programming something else.
TOC generation seems trivial: I put a loop here instead of a map and my HTML parsing library does all the work. Parsing in general is usually handled by a library; programmers don't do much work here, tbh. If you're writing your own parser combinators... well, that's rare.
Validation, again, falls in the authentication and security layer. This lives in the 20 percent.
Word to PDF sounds like a legit use, but it also lives in the 20 percent.
Financial computations are also legit, but they live in the 20 percent. The more line items you have, the more you have to shift the computation either to the database or to some number-crunching async job. That's IO and compute.
Chatbots are high IO.
A math solver works, but honestly I went through my entire career only writing one of those, for fun. Non-standard, but these are in the 20 percent.
> There's also the much more immediate counterpoint that it's of course possible to write web applications in Haskell (or OCaml, Scala, ...), otherwise all these frameworks wouldn't exist.
You’re talking about web frameworks right? The kind that lives on top of servers?
The modern-day method of coding these servers is to be non-blocking as much as possible, meaning there's not much room to do functional calculations, as the more calculations you do the more you block a thread. You are forwarding the tasks via IO to task runners or the database, whether or not you use a framework from Haskell or Scala or whatnot.
I still can't really agree with you. Writing code to convert a tree, or Word to PDF, or to parse something is just rare enough to live in the 20 percent. Most of the time you are writing code that writes query code.
Agree to disagree. We can end it here if you want.
You started out by saying 99% of web code is IO, then casually dropped down to 70-80%, as if that was the same thing. 20-30% of a huge codebase is still a significant amount.
I've never said IO is irrelevant to web applications. But in my experience, those 20-30% or whatever can be important, complex, and time-intensive enough (the Word generation was actually one of the most time-consuming parts of the last application I worked on) that fencing them off from the side-effecting code in some way can pay off (one way to do this without going fully PFP is outlined here: https://www.jerf.org/iri/post/2025/fp_lessons_purity/#fp-pur...).
> First one looks like meta programming. IO. You are programming something else.
I don't understand what you're saying. I was working on a document management system whose contents were stored in tree structures. These structures routinely had to be transformed into other structures. That's a very algorithmic task where the IO can be fenced off very easily (and should be fenced off if only for the sake of easier unit testing). This was literally one of the most common things happening in the codebase.
> Chatbots are high IO.
In the sense that every useful program has IO, sure. Otherwise, there's a whole bunch of transformations and intent recognition and skill dispatch and whatnot that happens between the request and the response, and that can happen entirely in memory on one machine (it can also be offloaded wholly or in part to an external service like Amazon Lex, it depends on the use case, but even in the case where we used Lex we ran a bunch of manual transforms on the output for postprocessing).
> The modern day method of coding these servers is to be non blocking as much as possible.
Recent developments (e.g. virtual threads coming to Java) seem to indicate otherwise. Blocking code and non-blocking code are both valid for different types of scenarios. And offloading tasks asynchronously to a task runner doesn't mean that the code running those tasks doesn't have to be written.
---
I really do think we've just worked on very different types of applications, and I just reiterate my point from above (and from the blogpost): There are enough use cases where clearly fencing off IO from algorithmic code pays off. If it doesn't for your use case, then don't do it.
Would you consider TLA+ functional? It sounds like the tension you're describing might be how most distributed consensus protocols are implemented as imperative code, and part of the Raft excursion involved writing a TLA+ proof of the protocol.
https://github.com/ongardie/raft.tla