The goal of furrr is to simplify the combination of purrr’s family of mapping functions and future’s parallel processing capabilities. A new set of future_map_*() functions has been implemented, and they can be used as (hopefully) drop-in replacements for the corresponding map_*() functions.

The code draws heavily from the implementations of purrr and future.apply and this package would not be possible without either of them.

What has been implemented?

The full range of map(), map2(), pmap(), walk(), imap(), modify(), and invoke_map() functions have been implemented.

This includes the type-strict variants, like map_dbl() (as future_map_dbl()), and the predicate variants, like map_at() (as future_map_at()).


You can install the released version of furrr from CRAN with:
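```r
install.packages("furrr")
```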

And the development version from GitHub with:
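```r
# install.packages("devtools")
devtools::install_github("DavisVaughan/furrr")
```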


furrr has been designed to behave identically to purrr, so it should feel immediately familiar.

The default backend for future is sequential. This means the code will run out of the box, but it will not run in parallel. The design of future makes it incredibly easy to change this so that your code does run in parallel.
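As a minimal sketch of what that switch looks like (plan() comes from future and is re-exported by furrr):

```r
library(furrr)

# Sequential by default: works out of the box in a single R process
future_map(1:3, ~ .x ^ 2)

# Opt in to parallelism: the exact same call now runs across
# multiple background R sessions
plan(multisession)
future_map(1:3, ~ .x ^ 2)
```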

If you are still skeptical, here is some proof that we are running in parallel.
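For example, a timing check along these lines (assuming a machine with at least 4 cores):

```r
library(furrr)
plan(multisession)

# Four 1-second sleeps: ~4 seconds sequentially, but roughly
# 1 second when the four calls run on separate workers
system.time(
  future_map(1:4, ~ Sys.sleep(1))
)
```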

Progress bars

Who doesn’t love progress bars? For the multiprocess, multicore, and multisession plans, you can activate a progress bar for your long-running task with .progress = TRUE. Note that progress bars are still a bit experimental, so feedback is welcome. You should get a nice progress bar that looks like this:
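For example, assuming a multisession plan:

```r
library(furrr)
plan(multisession)

# A progress bar ticks along as the workers finish elements
future_map(1:100, ~ Sys.sleep(0.1), .progress = TRUE)
```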

A more compelling use case

This example comes from a vignette in rsample. The vignette performs 10-fold cross-validation with 10 repeats of a GLM on the attrition data set. For all of the details and explanation, see the vignette.

The vignette example runs fairly quickly on its own, so to make things more…interesting we are going to use 20-fold CV with 100 repeats.

Set up an rsample tibble of splits for 20-fold CV with 100 repeats.
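A sketch of the setup, using rsample’s vfold_cv() (the object name rs_obj is just illustrative):

```r
library(rsample)

# The attrition data set ships with rsample
data("attrition")

# 20-fold CV, repeated 100 times -> 2000 splits
rs_obj <- vfold_cv(attrition, v = 20, repeats = 100)
```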

The model formula below is going to be used in the GLM.
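In the spirit of the vignette, it might look something like this (the exact predictors here are illustrative):

```r
mod_form <- as.formula(Attrition ~ JobSatisfaction + Gender + MonthlyIncome)
```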

For each split, we want to calculate assessments on the holdout data, so we create a function that applies the model to a split and easily extracts what we need from it.
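A sketch of such a helper, loosely following the vignette (holdout_results is a hypothetical name; it fits a binomial GLM on the analysis set and augments the assessment set with predictions):

```r
library(rsample)
library(broom)

holdout_results <- function(splits, ...) {
  # Fit the model to the analysis (i.e. training) portion
  mod <- glm(..., data = analysis(splits), family = binomial)

  # Predict on the assessment (i.e. holdout) portion
  holdout <- assessment(splits)
  broom::augment(mod, newdata = holdout)
}
```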

Finally, we use purrr to map over all of the splits, apply the model to each one, and extract the results.

First in sequential order…

Then in parallel…
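Assuming a split tibble rs_obj, a formula mod_form, and a per-split helper holdout_results() as in the rsample vignette (all names illustrative), the only change between the two runs is map() versus future_map():

```r
library(purrr)
library(furrr)

# Sequential
rs_obj$results <- map(rs_obj$splits, holdout_results, mod_form)

# Parallel: an identical call, with a parallel plan and future_map()
plan(multisession)
rs_obj$results <- future_map(rs_obj$splits, holdout_results, mod_form)
```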

We don’t get a 4x improvement on my 4-core Mac, but we do get a nice 2x speedup without doing any hard work. The missing gains are likely due to time spent transferring data to each R process, so this penalty shrinks with longer-running tasks, where you might see better performance (for example, 100-fold CV with 100 repeats took 122 seconds sequentially and 48 seconds in parallel). The implementation of future_lapply() includes a scheduling feature, which carried over nicely into furrr: it efficiently breaks the list of splits into 4 equal subsets, and each subset is passed to one core of my machine.

A few notes on performance

Data transfer

It’s important to remember that data has to be passed back and forth between the cores. Whatever performance gain you get from parallelization can be wiped out by moving large amounts of data around. For example, if instead of returning a small results data frame in the example above, we returned the larger glm model object for each split, our performance drops a bit.

Luckily, the glm model is relatively small, so we don’t lose much here, but some model objects can be tens of megabytes in size. For models like those, I would advise wrapping the work you want each core to do into a function and returning only the performance metric you are after. This might mean a little more work on your side, but it keeps the transferred objects small and the performance fast.
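As a hypothetical sketch of the difference (the function and argument names here are illustrative):

```r
library(rsample)

# Heavy: each worker ships an entire glm object back to the
# main session
fit_model <- function(split, form) {
  glm(form, data = analysis(split), family = binomial)
}

# Light: do the scoring on the worker and return one number
fit_and_score <- function(split, form) {
  mod <- glm(form, data = analysis(split), family = binomial)
  pred <- predict(mod, newdata = assessment(split), type = "response")
  mean(pred)
}
```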

This performance drop can be especially prominent when using future_pmap() to iterate over rows and return large objects at each iteration.

Progress bars

Progress bars are best used when iterating over relatively few long-running tasks. For instance, they are great when training over hyperparameters of a deep learning model, but I would not suggest them when iterating over the rows of a 100k-row data frame. I’ve used every trick that I know to make them have minimal performance impact, but you will see degradation when using them with lots of elements to iterate over.

What has not been implemented (yet)?

Found a bug?

Feel free to open an issue, and I will do my best to work through it with you!