Reducers, or understanding the shape of functions

Swift is a peculiar language in that it combines many “paradigms” – styles of programming – to create a sort of melting pot where developers can pick and choose which style they prefer. One such paradigm that I’m very interested in is “functional programming”. Wikipedia states:

In computer science, functional programming is a programming paradigm where programs are constructed by applying and composing functions. It is a […] paradigm in which function definitions […] map values to other values, rather than a sequence of imperative statements which update the running state of the program.

Applying and composing functions. Sounds a bit abstract, doesn’t it? What does that actually mean? In this article, I want to show you concrete examples of this paradigm in Swift through reducers, and teach you a core skill in functional programming: understanding the shape of functions.

Functional methods in Swift

As I said before, Swift has deep roots in functional programming. This can best be seen in its standard library, which offers higher-order functions, most notably on Sequences.

A first example of this is the .map { } method. Say we want to go from an array of Car to an array of Driver:
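(A minimal sketch; the exact properties on Car and Driver are placeholders I’ve chosen for illustration.)

struct Car {
    let model: String
    let topSpeed: Int
}

struct Driver {
    let name: String
    let isAdventurous: Bool
}

let cars = [Car(model: "Roadster", topSpeed: 250), Car(model: "Wagon", topSpeed: 140)]

// The imperative way: mutate an array step by step.
var drivers: [Driver] = []
for car in cars {
    drivers.append(Driver(name: "Driver of \(car.model)", isAdventurous: car.topSpeed > 200))
}

// The functional way: describe the transformation.
let drivers2 = cars.map { car in
    Driver(name: "Driver of \(car.model)", isAdventurous: car.topSpeed > 200)
}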

There is an important distinction between the two. In the imperative way, we are going step-by-step telling Swift exactly what mutations we want to happen when and where. Meanwhile, in the functional way, we are simply describing the transformation we want to happen, by giving the .map { } a closure in which we produce a new Driver when given a Car.

Even better, we can make this more succinct and descriptive by providing Driver with a special initializer:
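(Again a sketch; the “adventurousness” rule here is arbitrary.)

extension Driver {
    init(car: Car) {
        self.init(name: "Driver of \(car.model)", isAdventurous: car.topSpeed > 200)
    }
}

let drivers = cars.map(Driver.init(car:))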

Now, we’re truly describing what we want to happen: we want to map each Car in the array into a Driver. As simple as that. No weird mutations, which means no room for error.

A similar thing can be said for the .filter { } method. Instead of mapping over each element of a sequence, we instead include only certain elements from a sequence, based on a condition given some element. It’ll be clearer when I show it to you.

Say we want to create a new Driver array, which only contains the adventurous drivers.
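// The imperative way: conditionally append, one driver at a time.
var adventurousDrivers: [Driver] = []
for driver in drivers {
    if driver.isAdventurous {
        adventurousDrivers.append(driver)
    }
}

// The functional way: describe the condition.
let adventurousDrivers2 = drivers.filter { driver in driver.isAdventurous }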

Once again, in the imperative method, we’re going line by line, conditionally appending the driver if their isAdventurous property is set to true. Meanwhile, in the functional way, we’re once again just describing the condition given a driver, freeing us from having to perform error-prone mutations ourselves. Even better, thanks to SE-0249 - Key Path Expressions as Functions, we can shorten it to this:
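let adventurousDrivers = drivers.filter(\.isAdventurous)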

Clean, right? I think that you now have a good understanding of how map and filter are part of the foundational building blocks of functional programming. They are tools which offer us transformations on basic types like arrays and dictionaries.

Going back to the Wikipedia quote, it seems like we’ve become quite good at applying functions. So the logical next step would be composing functions, right?

The Composition Dilemma

Let’s look at the signature for both map and filter on sequences:

func map<T>(_ transform: (Element) -> T) -> [T]

func filter(_ isIncluded: (Element) -> Bool) -> [Element]

Both of the methods seem to return a new array — map returns an array of some generic type T, and filter returns an array with elements of the same type Element. Since they’re both extensions on sequences, we should be able to just chain these two together.

Say we wanted to go straight from an array of cars to an array of adventurous drivers. Intuitively, we can do this:
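let adventurousDrivers = cars
    .map(Driver.init(car:))
    .filter(\.isAdventurous)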

Looks nice and functional, don’t you think? However, there is a hidden problem lurking in this piece of code. The issue is, these two map and filter methods have no idea about each other’s existence. Therefore, map allocates a whole new array, applies its transformation, and returns the array — and then filter allocates another whole new array, filters in the elements which pass the condition, and returns the second array. We’ve allocated two arrays in an operation which really should only require one.

Let me translate this exact code into imperative code so that you can see what I mean:
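var drivers: [Driver] = []              // first allocation
for car in cars {
    drivers.append(Driver(car: car))
}

var adventurousDrivers: [Driver] = []   // second allocation
for driver in drivers {
    if driver.isAdventurous {
        adventurousDrivers.append(driver)
    }
}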

We’re allocating two separate arrays, and looping over all the elements of our original sequence twice, once to fill each of them.

“So what?”, I hear you say. “Why should I care? Allocating two arrays with a couple of elements is no big deal.” Sure, it doesn’t make a difference with today’s hardware if you have a couple of elements each with a handful of properties.

But once you start getting into the thousands or tens of thousands of elements, where each element holds its own nested sequences with hundreds of elements — that’s when you start noticing stutters and performance drops in your software. And last time I checked, nobody likes performance drops.

Reducers to the rescue

First, a short introduction to reducers. The reduce function is the third foundational building block of functional programming, alongside map and filter. I like to call reduce the “Swiss army knife of functions”, because it can do just about anything — including mapping and filtering. Confused? Let me explain.

Instead of showing the .reduce([]) { } method in action like I did for map and filter, I’m going to show you the shape of a reducer. By shape, I mean the relation between its parameters and return value:
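(A, B) -> A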

As you see, a reducer takes two input parameters, of type A and B, and then reduces the two into a new value of type A. At its core, that’s all there is to a reducer.

So how could that possibly help us in our composition dilemma with map and filter?

Well, in the case of map, we could think of A as the new array we want to create, and B as an element of the old array which we shall transform. Then, the reducer will iterate on each element of the old array, updating the new array each time as we transform each element. Something like this:
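let drivers = cars.reduce([Driver]()) { accumulation, car in
    // Reduce the accumulation and the current element into a new array,
    // by appending the transformed element.
    accumulation + [Driver(car: car)]
}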

This code snippet is definitely a lot to take in. Essentially, for each element in the old array, we’re reducing the new array and the current element, by adding the transformed element to the accumulation.

This means we can define map by only using a reducer:
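(A sketch; I’m naming it reducerMap so it doesn’t clash with the standard library’s map.)

extension Sequence {
    func reducerMap<T>(_ transform: (Element) -> T) -> [T] {
        reduce([T]()) { accumulation, element in
            accumulation + [transform(element)]
        }
    }
}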

Now that we’ve settled that reduce can be used as a generalization over map, let’s see how the same can be said for filter.

Let’s remind ourselves how we defined a reducer:
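(A, B) -> A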

Much like with map, we can think of A as the new array we want to create. However, instead of B being the element of the old array which will be transformed, B is the element of the old array which will conditionally be added to A.

We’re reducing the new array and the current element not by adding the transformed element, but instead by only adding the element if it meets our condition. This means that we can also define filter by only using a reducer:
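(Named reducerFilter, for the same reason as before.)

extension Sequence {
    func reducerFilter(_ isIncluded: (Element) -> Bool) -> [Element] {
        reduce([Element]()) { accumulation, element in
            isIncluded(element) ? accumulation + [element] : accumulation
        }
    }
}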

Now we’ve demonstrated that reduce can also be used as a generalization over filter.

Before we go any further, there’s a pattern I’ve noticed in both the reducerMap and reducerFilter functions:
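// In reducerMap:
accumulation + [transform(element)]

// In reducerFilter:
accumulation + [element]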

This is actually quite an essential part of our reducers, since it defines how we want to reduce the two parameters together. Let’s extract it into its own function for reusability’s sake.
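func append<A>(_ accumulation: [A], _ element: A) -> [A] {
    accumulation + [element]
}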

Interestingly, this function is itself a reducer — it has exactly the shape (A, B) -> A, with A being the array of elements and B being the single element we append to it.

Back to the composition dilemma. How could we restore performance without falling back into the messy mutation-filled world of imperative programming? We could try combining the two map and filter reducers we created up above:
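let adventurousDrivers = cars.reduce([Driver]()) { accumulation, car in
    let driver = Driver(car: car)           // map
    if driver.isAdventurous {               // filter
        return append(accumulation, driver)
    } else {
        return accumulation
    }
}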

Not bad! Now, thanks to the reducer, we’re back to not creating an intermediate array between the mapping and filtering. However, we have in a sense lost some composition — we’re back to doing imperative control flow, and every time we want to add a map or filter to it we’ll have to carefully place it to make sure we’re not making any logic errors. Can’t there be some way to generalize all our logic here into simple, composable functions?

From control flow to functions

An important skill to have when doing functional programming is being able to translate imperative control flow to a composition of functions. To do this, you need to identify certain patterns, and see how you can generalise over them using functions. Let’s look at that reducer from the last code snippet:
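{ accumulation, car in
    let driver = Driver(car: car)           // 1. mapping over an element
    if driver.isAdventurous {               // 2. filtering in an element
        return append(accumulation, driver) // 3. returning the accumulation,
    } else {                                //    with the element appended
        return accumulation                 //    to it or not
    }
}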

We can divide this code into three main sections:

  1. Mapping over an element
  2. Filtering in an element
  3. Returning the accumulation, with the element appended to it or not.

Finally, the whole closure needs to have the shape (A, B) -> A. This means that whichever general functions we create need to ultimately return a function of type (A, B) -> A.

1. Map

Let’s look at the mapping over an element first. Mapping over an element here means applying a function to it which takes it from A -> B, and then reducing that element with the accumulation, as seen in the reducerMap function up above. So what do we need to create such a general map function? We need:

  1. A transform function which takes each element from A to B
  2. A reducer which knows how to combine a transformed element with the accumulation
  3. Eventually, the actual accumulation and element to reduce

We return: a new reducer, one which takes the accumulation and a not-yet-transformed element, and returns a new accumulation.

Tip: The rest of the article will use a concept called function currying. Keep on reading, I’ll try to explain it as clearly as possible, but if you still feel really lost I highly advise you to check out this article.

The thing is, we want to keep this map function as generic and flexible as possible, meaning that we don’t want to force the caller to have to give all three of these things at once. The caller should be able to only give the function that makes each element go from A -> B, and then much later give the reducer function, and then even later plug in the actual accumulation and elements. It’s keeping this flexibility in mind which enables you to create really powerful compositions of functions.

That’s where function currying comes in. With function currying, you can give a function a single argument, and then that function will return a whole new function which wants an argument, and so on. This allows you to be very flexible, because you can store a function with 1 out of 3 arguments filled in, or maybe store another function with 3 out of 4 arguments filled in. Let me give you an example:
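func add(_ a: Int, _ b: Int, _ c: Int, _ d: Int, _ e: Int) -> Int {
    a + b + c + d + e
}

add(1, 2, 3, 4, 5) // 15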

This looks like a nice little function, except for the fact that I’m completely stuck with it. There’s nothing else I can do, other than use different arguments. Meanwhile, if I were to use function currying, a whole world of possibilities opens up:
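func add(_ a: Int) -> (Int) -> (Int) -> (Int) -> (Int) -> Int {
    return { b in
        return { c in
            return { d in
                return { e in
                    return a + b + c + d + e
                }
            }
        }
    }
}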

This looks ugly on purpose – I really wanted to show you how at every step we’re returning a new function (or closure), ultimately returning the addition once we’ve gathered all the parameters. We can do the same thing as with the previous function, except now we call the new function with a single parameter at a time:
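add(1)(2)(3)(4)(5) // 15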

But we can do some more interesting things too:
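let addTwo = add(2) // (Int) -> (Int) -> (Int) -> (Int) -> Int
addTwo(1)(1)(1)(1)  // 6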

Calling add(2) has created a new function which takes 4 “parameters” instead of 5. Interestingly, the result will always be at least 2 (assuming only non-negative integers are input), because we’ve hard-coded the fact that the first parameter is always 2. We can by extension do the same thing with the second parameter:
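let addTwoAndThree = add(2)(3) // (Int) -> (Int) -> (Int) -> Int
addTwoAndThree(1)(1)(1)        // 8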

I hope you now understand the power of function currying, and why we’ll be using it for our generic functions, such as map.

Let’s remind ourselves what we need: the transform function from A to B, the reducer which combines a transformed element with the accumulation, and, eventually, the actual accumulation and element.

And what we return: a new reducer, which takes the accumulation and an untransformed element, and returns a new accumulation.

With this, we can start making a function signature. We’ll have three types: A, the original type of some element; B, the new type of said element after being mapped; and C, the accumulation which we’ll be reducing with the mapped element.

First off, we have the first parameter:
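func map<A, B, C>(_ transform: (A) -> B) // return type still to come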

Looks simple for now. Next, we need the reducer function. Remember, reducers have the shape (X, Y) -> X, with X being the accumulation and Y being the element. This reducer function will obviously need to work on elements of type B and accumulation of type C, so it will have this signature:
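(C, B) -> C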

This seems to make sense. If you’re getting confused by the whole “shape of reducers” concept feel free to scroll back up and re-read the “Reducers to the rescue” section.

Finally, we return the function which’ll take the actual accumulation and element. Think back to the .reduce([]) { ..., ... in } method on sequences — that closure itself, with the brackets and the two parameters, is the function which takes the actual accumulation and element. The accumulation will still be of type C, that still hasn’t changed. However, we still haven’t mapped over the elements yet, they’ve just been given to us by that closure. This means that the elements are still of type A. So, the signature ends up looking like this:
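func map<A, B, C>(_ transform: (A) -> B) -> ((C, B) -> C) -> (C, A) -> C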

Pretty intense, right? It helps to read it out loud in English:

“Map is a function which takes a function from A to B, and returns a new function. This new function takes a reducer which takes C and B and returns C, and it returns a new reducer which takes C and A and also returns C.”

Quite the mouthful, I know. But if you’re able to at least partly understand this, you’ve already made massive strides in understanding the shape of functions. Enough talking, let’s implement this function. First off, we return a new function which takes a reducer of type (C, B) -> C as a parameter. Unfortunately, due to Swift’s syntax, we have to make the signature slightly uglier:
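func map<A, B, C>(
    _ transform: @escaping (A) -> B // @escaping, since the closures below capture it
) -> ((C, B) -> C) -> (C, A) -> C {
    return { reducer in
        // ...
    }
}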

Next, now that we have our reducer, we need to return another new function, which takes a C and an A, aka the actual accumulation and element:
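return { reducer in
    return { accumulation, element in
        // ...
    }
}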

Finally, in this nested function, we need to return a new C. How do we return a new C? Well, our reducer parameter is a function which takes C and B, and returns C. So using it should work quite nicely. However, the element parameter is of type A; we can convert it to an element of type B by using our transform parameter. Nifty, right?
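func map<A, B, C>(
    _ transform: @escaping (A) -> B
) -> ((C, B) -> C) -> (C, A) -> C {
    return { reducer in
        return { accumulation, element in
            // Transform the element from A to B,
            // then reduce it into the accumulation.
            reducer(accumulation, transform(element))
        }
    }
}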

And there you go! You’ve successfully created what fancy functional programming people like to call transducers. However, that isn’t a term of art in Swift, so I prefer calling them higher-order reducers. Similarly to higher-order functions which take a function and return a new function, higher-order reducers take a reducer and return a new reducer.

2. Filter

Filtering an element is actually very similar to mapping over one — except instead of transforming each element with a function, you use a function to decide whether the element should be reduced into the accumulation or not. That means that we’ll only have two generic types, A and C, since we won’t be doing any transformations. We’ll still be taking one parameter, this time a function which takes an A and returns a Bool, deciding whether the element should be reduced or not. So already our function signature will look like this:
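func filter<A, C>(
    _ isIncluded: @escaping (A) -> Bool
) -> ((C, A) -> C) -> (C, A) -> C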

In English, this reads out as:

“Filter is a function which takes a function from A to Bool, and returns a new function. This new function takes a reducer which takes C and A and returns C, and it returns a new reducer which also takes C and A and returns C.”

Implementing it will feel quite similar to map:
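func filter<A, C>(
    _ isIncluded: @escaping (A) -> Bool
) -> ((C, A) -> C) -> (C, A) -> C {
    return { reducer in
        return { accumulation, element in
            // Only reduce the element into the accumulation
            // if it passes the condition.
            isIncluded(element) ? reducer(accumulation, element) : accumulation
        }
    }
}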

And there’s another higher-order reducer you can add to your tool-belt.

3. Return the accumulation

So how about returning the accumulation, with or without the element appended to it? Well, it turns out we’ve already taken care of that logic in our new higher-order reducers. Think about it: that second parameter in the functions, the one we call “reducer” — that’s the function which appends the element to the accumulation! In the map function, we call it on every single transformed element, while in the filter function, we only call it if the isIncluded function returns true.

A reducer function whose job is to append elements to the accumulation and return it… rings a bell, doesn’t it? Turns out it’s the append function we defined way up above, because we saw it as a recurring pattern:
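func append<A>(_ accumulation: [A], _ element: A) -> [A] {
    accumulation + [element]
}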

It seems to me like we have all the chess pieces in place now. We have:

  1. Two higher-order reducers, map and filter, which each take a reducer and return a new reducer
  2. An appending reducer, append, which takes an accumulation and an element, and returns the accumulation with the element appended to it

Once we give our appending reducer to the higher-order reducer, we’ll get back a new reducer which we can pass right into the .reduce([], ...) method! Sounds to me like it’s time for some composition.

Composition craze, or function fever

Indeed, composition is the cornerstone of functional programming. It’s the reason why we made those functions the way we did, so that then they can all fit together like Lego blocks. Before we do anything rash, let’s remind ourselves of the function signature for the .reduce method on sequences:

func reduce<Result>(_ initialResult: Result, _ nextPartialResult: (Result, Element) -> Result) -> Result

What interests us is the second parameter, of type (Result, Element) -> Result. I don’t know about you, but that looks like a reducer to me! And, as we just saw, once we give a higher-order reducer like map a mapping function and then a reducer like append, we get back a reducer with the exact same shape as (Result, Element) -> Result. So we should be able to plug the result right in. Let’s use our mapping cars to drivers example to try it out. We’re going head-first into function currying here, so scroll back up to that section if you need a refresher.

Let’s do this backwards, and think about what we want the end result to be:
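// The end goal: a single reducer we can hand straight to reduce.
let drivers = cars.reduce([], reducer) // where reducer: ([Driver], Car) -> [Driver]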

Since map is a curried function, we can go step-by-step, giving it one parameter at a time. First off is the transform closure, which in this case transforms a Car into a Driver. In the beginning, we defined an initializer on Driver which takes a Car, so we can just use that.
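let carToDriver = map(Driver.init(car:))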

I wonder, what’s the type of the carToDriver function?
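(([Driver], Driver) -> [Driver]) -> ([Driver], Car) -> [Driver]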

This may seem pretty intense, but if you look carefully, you might recognise its shape. It’s just the return type of the map function, with A, B, C replaced by Car, Driver, and [Driver].

Let’s plug in the next parameter. As you see above, it expects a function that does ([Driver], Driver) -> [Driver]. Looks to me like the perfect opportunity for the append function:
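let reducer = carToDriver(append) // ([Driver], Car) -> [Driver]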

And there we have our reducer! We can now plug that reducer into the .reduce([], ...) method:
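let drivers = cars.reduce([], reducer)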

Let’s collapse everything we did into one:
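let drivers = cars.reduce([], map(Driver.init(car:))(append))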

Beautiful, isn’t it? All we’ve done is compose a few functions together, and now we get the full power of map. We can of course do the same thing with filter:
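let reducer: ([Driver], Driver) -> [Driver] = filter(\.isAdventurous)(append)
let adventurousDrivers = drivers.reduce([], reducer)

// Or, collapsed into one:
let adventurousDrivers2 = drivers.reduce([], filter(\.isAdventurous)(append))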

Great! We’ve regained the full power of filter by once again simply composing a few functions together.

But wait a minute, I’m interested in the signature of that first line, where reducer is of type ([Driver], Driver) -> [Driver]. Scroll up a little bit to where we defined the carToDriver function using map. Doesn’t carToDriver expect a parameter with that exact same type signature?

That means even more composition for us. Indeed, we can chain map and filter as much as we want, since they both take a reducer as a parameter, and both return a reducer:
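let adventurousDrivers = cars.reduce(
    [],
    map(Driver.init(car:))(filter(\.isAdventurous)(append))
)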

We’ve finally come to the point where we’ve fully regained the power of declaratively chaining map and filter instead of imperative control flow. However, since we’re in a reduce function, no intermediate arrays between map and filter will be created!

If you understand what’s going on, why the map and filter are chain-able, and why they return a reducer, then you’ve truly understood the shape of functions. You can consider yourself proud.

However, the shape of functions and function currying isn’t the only thing that matters, is it? This is Swift after all! What about aesthetics?

Piping down the drain

An attractive feature of method chaining, like in the very first chained map and filter example, is how easy it is to read. Left and right, up and down, nicely aligned, not too many parentheses. A language designer’s dream. Is there a way to restore some of that readability when using the higher-order reducers?

The main obstacle to readability here is the heavy use of parentheses. It may not seem too bad now, but once you have five or six maps and filters chained together, aesthetics go down the drain. Isn’t it ironic that the solution to this is piping?

What are the components of a function call? There is the name of the function itself, and then there is the parameter given to it, usually surrounded by parentheses. So, we could simply give these two components separately, and then defer the use of parentheses to some underlying function. This is the concept behind backwards piping. With backwards piping, you have the function on the left, an operator (usually <|) in the middle, and the argument on the right.
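(A minimal definition; the operator and its precedence group aren’t part of the standard library, so this is just one common way to sketch them.)

precedencegroup BackwardsApplication {
    associativity: right
}

infix operator <|: BackwardsApplication

// Backwards pipe: apply the function on the left to the argument on the right.
func <| <A, B>(f: (A) -> B, a: A) -> B {
    f(a)
}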

For instance, the cos(_:) function returns the cosine of any number:
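import Foundation

cos(5.0)    // 0.28366...
cos <| 5.0  // same result: 0.28366...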

As you can see, we are piping the number 5 into the function cos. One might think this doesn’t make much difference here, but it has a drastic effect when used with the higher-order reducers:
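let adventurousDrivers = cars.reduce(
    [],
    map(Driver.init(car:)) <| filter(\.isAdventurous) <| append
)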

This looks much cleaner in my opinion, and restores the left-to-right and up-and-down readability that we crave.

Now, although aesthetics are of course very important to Swift, there is something else which is as important: performance.

In-and-out

Let’s think back for a moment to the reason we even started working on these higher-order reducers. It was so that we don’t create intermediate arrays between our map and filter — and that’s something we’ve accomplished, and accomplished very nicely might I add. However, something even worse is going on: we’re creating an intermediate array for every single element.

Another essential skill to have, in addition to understanding the shape of functions, is a more general one: being able to navigate the standard library source code. The standard library lies at the very base of our own code, but it is also a great example of what performant and efficient Swift code looks like. I highly advise anyone who doesn’t do so regularly to surf around the repo and get familiar with it.

I digress. What we want to understand is why an intermediate array is created on each iteration. For that, we’re going into stdlib/public/core/SequenceAlgorithms.swift to look at the implementation of reduce(_:_:):
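(Lightly simplified from the actual source.)

public func reduce<Result>(
    _ initialResult: Result,
    _ nextPartialResult: (Result, Element) throws -> Result
) rethrows -> Result {
    var accumulator = initialResult
    for element in self {
        accumulator = try nextPartialResult(accumulator, element)
    }
    return accumulator
}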

As you can see, we’re just looping over all the elements in the sequence, reducing each element into the accumulator thanks to the reducer function named nextPartialResult, which of course has the shape (A, B) -> A.

However, there’s one line which is the source of our misfortune:
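accumulator = try nextPartialResult(accumulator, element)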

Every time we loop over an element, we create a brand-new accumulation thanks to the nextPartialResult reducer, and then reassign it to the current accumulator. Performance-wise, this isn’t great. Creating and reassigning arrays can be costly operations, which is the exact opposite of what we want.

Thankfully, the Core Team understands this too, and has thus gifted us with a more efficient and performant alternative:
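(Again lightly simplified from the actual source.)

public func reduce<Result>(
    into initialResult: Result,
    _ updateAccumulatingResult: (inout Result, Element) throws -> Void
) rethrows -> Result {
    var accumulator = initialResult
    for element in self {
        try updateAccumulatingResult(&accumulator, element)
    }
    return accumulator
}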

Generally speaking, this implementation is almost identical to the one before. However, there is one big difference: the reducer parameter named updateAccumulatingResult. As you can see, its shape is different, from (A, B) -> A to (inout A, B) -> Void. But what is inout anyway?

In Swift, function parameters are constants, which means that we can’t change them, only copy them or pass them around. However, when a parameter is marked with inout, you’re allowed to mutate it in-place, and those changes are reflected in the value given by the caller of the function.

As an example, we can look at our append function:
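func append<A>(_ accumulation: [A], _ element: A) -> [A] {
    // We can't mutate accumulation in-place,
    // so we return a brand-new array instead.
    accumulation + [element]
}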

Because we can’t mutate the accumulation in-place, we need to return a new one with the new element appended to it. However, with inout, we don’t need to return a new accumulation anymore, we just need to mutate the one given to us. This also means that we return Void now.
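func append<A>(_ accumulation: inout [A], _ element: A) {
    // Mutate the accumulation in-place; nothing to return.
    accumulation.append(element)
}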

These two functions basically do the same thing — they add an element to an array. While the first returns a new array with the element appended to it, the second just appends the element directly.

Remember, our append function is a reducer. So, more generally, we can say that any function of type (A, B) -> A, can also be expressed as (inout A, B) -> Void.

Ok, so now that we understand what inout does, why would this second implementation of reduce(_:_:), called reduce(into:_:), be better for our use-case? Well, since the second one uses inout, it only ever needs to allocate a single array, since that array is then mutated in-place by the reducer parameter as the function loops through each element. Neat, right?

However, we now need to change our higher-order reducers. While the first reduce(_:_:) function expects a reducer of type (A, B) -> A as its second parameter, our new reduce(into:_:) function expects a reducer of type (inout A, B) -> Void. Therefore, our higher-order reducers need to be adapted for this. Let’s remind ourselves what they look like:
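func map<A, B, C>(
    _ transform: @escaping (A) -> B
) -> ((C, B) -> C) -> (C, A) -> C

func filter<A, C>(
    _ isIncluded: @escaping (A) -> Bool
) -> ((C, A) -> C) -> (C, A) -> C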

As we just saw, a reducer of type (A, B) -> A can also be expressed as (inout A, B) -> Void. So, we should just replace the first with the second wherever we see it:
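func map<A, B, C>(
    _ transform: @escaping (A) -> B
) -> ((inout C, B) -> Void) -> (inout C, A) -> Void {
    return { reducer in
        return { accumulation, element in
            reducer(&accumulation, transform(element))
        }
    }
}

func filter<A, C>(
    _ isIncluded: @escaping (A) -> Bool
) -> ((inout C, A) -> Void) -> (inout C, A) -> Void {
    return { reducer in
        return { accumulation, element in
            if isIncluded(element) {
                reducer(&accumulation, element)
            }
        }
    }
}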

Now, using our new definition of append of type (inout A, B) -> Void, we can do the exact same things as before:
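(Assuming the inout versions of map, filter, and append defined above are the ones in scope.)

let adventurousDrivers = cars.reduce(
    into: [],
    map(Driver.init(car:)) <| filter(\.isAdventurous) <| append
)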

Now we’re only ever allocating one array! This is a great boost for performance indeed.

Finishing thoughts

You’ve made it. Quite long, I know, I know. But look how far we’ve come! We started off with a jumbled mess of mutations which don’t scale nor compose, and have ended with a single succinct line of code where we describe what we want to do with our array.

Nevertheless, these higher-order reducers aren’t the programming invention of the century, nor are they essential to the success of your code. No, the real point of learning about them was the journey.

Throughout this article, you’ve learned to understand that functions don’t just have some parameters and an implementation — they have a shape. A shape such as (A, B) -> A, or (A) -> B, or even a more fancy one like (inout A, B) -> Void.

And it’s through understanding this shape of functions that we can learn to apply and compose functions together, in ways which empower incredibly expressive yet efficient code. Thanks for reading!

This article was greatly inspired by Mike Choi’s article called Transducers in Swift. Check it out here, and definitely give him a follow on Twitter and GitHub.