
Patterns of Functional Programming: Functional Core - Imperative Shell

One of the central ideas of functional programming is pure functions: functions that have no side effects. But a program made exclusively of functions without side effects can't be useful in the real world, because programs have to affect the real world somehow. Inevitably, this means some parts of our programs must be effectful for the program to be useful.

As I indicated in a previous article, Codata in action, there are some differences between Object Oriented and Functional Programming. One is Codata vs Data. Another is the approach to side effects. An Object Oriented program approaches effects and the complexity they entail by cutting the overall effect into smaller pieces, and assigning each piece to an object. This doesn't reduce complexity, but it offers a way to attack it one piece at a time. Functional Programming, on the other hand, approaches side effects and complexity with immutability and minimization. A Functional Program tries to minimize side effects so that handling them is easier.

One of the patterns that derive from minimizing side effects and maximizing immutability is Functional Core - Imperative Shell, as described in the talk by Gary Bernhardt. It is an architectural pattern that can guide the general design of an application. Let's dissect it in order to understand it.

Functional Core

We want our application to be as purely functional and side-effect-free as possible. With this in mind, we design the core of the application to be purely functional, and we maximize the proportion of this core relative to the shell. We make the business logic purely functional. We make the template rendering purely functional. We make the decision logic purely functional. We make everything we can purely functional. This has several effects:

Effect 1: no mocking required

We can test most of the application with simple unit tests that don't require assuming anything about the environment where the application will run. You don't need to have a database, or mock one. You don't need to mock the network socket, or the file system, or external APIs. In fact, most of the mocking in the tests just goes away. Most of the tests of your application become simple unit tests that don't rely on anything external: simple unit tests that exercise pure functions, tests of the style of "if I call the function with these parameters, it shall return this result".
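
As a minimal sketch, a test for a pure function needs nothing but inputs and expected outputs. The function here is hypothetical, and I'm assuming hspec as the test framework:

import Test.Hspec

-- A hypothetical pure function from the Functional Core.
greeting :: String -> String
greeting name = "Hello, " ++ name ++ "!"

main :: IO ()
main = hspec $
  describe "greeting" $ do
    it "greets the given user" $
      greeting "Ada" `shouldBe` "Hello, Ada!"
    it "copes with an empty name" $
      greeting "" `shouldBe` "Hello, !"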

Effect 2: fast and lightweight testsuite

If most of your tests are unit tests that don't depend on anything external, you just gained a lot. Unit tests that don't have to wait for the hard disk to read or write files, or for the operating system to return, or for the network to complete a round trip, are the fastest available. Even better, if the unit tests can't affect each other because the functionality is pure, there is no way the tests can change behavior when run in a different order, or even in parallel. So that badass processor with a bazillion cores you just bought will be happy to make your testsuite run even faster by running the tests in parallel.
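
For instance, hspec lets you mark a group of tests as safe to run concurrently with its parallel combinator. This is only a minimal sketch; how many tests actually run at once still depends on how the testsuite is compiled and invoked:

import Test.Hspec

main :: IO ()
main = hspec $ parallel $
  describe "pure functions" $ do
    it "can run in any order" $
      reverse [1, 2, 3 :: Int] `shouldBe` [3, 2, 1]
    it "and on any core" $
      sum [1 .. 10 :: Int] `shouldBe` 55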

Effect 3: no externally-induced surprises

How many times have you seen lots of tests fail because an external component has changed? How many flaky tests have you had that would fail when the network is slow, the disk is overloaded, or the database is too slow to start? By not depending on any external components, you don't get surprises when those external components don't behave exactly the way you want them to.

Imperative Shell

Our application can't be purely functional and still affect the external world in any useful way. But once we have extracted most of it into pure functions, all that remains is a thin shell of impure code that interacts with the external world. This thin shell brings more effects:

Effect 4: minimal integration and e2e tests

Integration and e2e tests tend to be flaky, slow, and full of problems. After all, when you write a Selenium e2e test that goes to the website, logs in, does some action and logs out, you are actually testing a lot of things. You are testing the integration of your testsuite with Selenium, the connection of Selenium to your browser, the ability of your browser to read and process websites, the network connection from your browser to the server, the network connection from the server to the database, the overload factor of your server and database, and, on top of that, your code. Any of these many things you test can make your test fail, but you are only interested in failures related to your code. This is why most of the literature out there suggests minimizing e2e tests in order to minimize unexpected test failures.

But, if the impure logic of your application, the logic that interacts with the external world, is minimized, the number of tests needed to achieve good coverage is also minimized. This also helps isolate failures in tests. If something is wrong with the database, only the few e2e tests will fail, while all the unit tests will pass.

Example: a simple CLI utility

As indicated previously, this is an architectural design pattern, and it often involves other implementation design patterns to make it work. In this case, we will design a small CLI utility that will receive some parameters on the command line, and, depending on those parameters, read and write files.

Functional Core 1: Operating on the data of the files

As we indicated, the Functional Core should not read or write files, or interact with the external world in any way other than by receiving parameters and returning results. This means that, for our CLI utility, the Functional Core can't work directly with files. Instead, it has to work with in-memory representations of them. For files, we can go with simple Strings or another byte-level representation. The Functional Core will receive one or several Strings representing the input files, plus some parameters from the command line, and will produce another String representing the file to be written.
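
For concreteness, here is one hypothetical shape for the CLIOptions record used by the operation below; the exact fields are an assumption, and the rest of the article only relies on it carrying the input and output filenames:

-- Hypothetical options record; only the two filenames are used later on.
data CLIOptions = CLIOptions
  { inputFilename  :: FilePath
  , outputFilename :: FilePath
  } deriving (Show, Eq)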

mainOperation :: CLIOptions -> String -> String
mainOperation options input = (lots of advanced processing with fancy algorithms go here)

This has the immediate benefit of testability. Testing mainOperation is simple, as it is testing a pure function. It may hide lots of complexity and run incredibly complex algorithms, but at the end of the day, we pass a combination of CLIOptions and String and we expect a specific String to be returned.
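
A minimal sketch of such a test with hspec could look like the following; the concrete input and expected output are placeholders, since the actual processing is not specified here, and mainOperation and CLIOptions are assumed to be importable from the application:

import Test.Hspec

-- mainOperation and CLIOptions are assumed to come from the application's modules.
spec :: Spec
spec = describe "mainOperation" $
  it "produces the expected output for a known input" $
    mainOperation opts "some well-known input"
      `shouldBe` "whatever output we expect for that input"
  where
    opts = CLIOptions { inputFilename = "input.txt", outputFilename = "output.txt" }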

Imperative Shell: Interconnecting to the outer world

The Imperative Shell has to do the IO operations, reading and writing files, and all of this is effectful. Because these operations are effectful, we have to deal with mocks, external effects and all that stuff that is hairy to test. But we have focused on minimizing this Imperative Shell, this communication with the outside, precisely to minimize the complexity of this part. Let's sketch how it would look:

{-# LANGUAGE NamedFieldPuns #-}

import System.Environment (getArgs)

main :: IO ()
main = do
  -- Read the raw command line arguments from the outside world.
  cmdLineArgs <- getArgs
  case parseArgs cmdLineArgs of
    Left err -> putStrLn err
    Right c@CLIOptions{inputFilename, outputFilename} -> do
      inputFile <- readFile inputFilename
      -- All the thinking happens in the pure core.
      let result = mainOperation c inputFile
      writeFile outputFilename result

Thanks to this architecture, our shell can be minimal. We read the command line parameters, check whether the arguments are valid, open the input file, delegate the main processing to mainOperation, and write the result back to a file. As a result, we only need two integration tests:

  • One that checks the Left branch of the case statement, i.e. a test that confirms that we only report the error and touch no files if the command line arguments are wrong.
  • One that checks the Right branch of the case statement, i.e. a test that confirms that we perform the expected operations if the command line arguments are acceptable.

Where are the thousands of tests required for testing all the advanced logic of the CLI application? All these tests are in the Functional Core, where they don't require command line parameters, or files, or anything.
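
Here is a sketch of the second test, assuming hspec plus the temporary and filepath packages, and assuming main, mainOperation and CLIOptions can be imported from the application. Because the shell only wires things together, the expected file content is simply the pure function applied to the same input:

import System.Environment (withArgs)
import System.FilePath ((</>))
import System.IO.Temp (withSystemTempDirectory)
import Test.Hspec

-- main, mainOperation and CLIOptions are assumed to come from the application.
spec :: Spec
spec = describe "main" $
  it "reads the input file, processes it, and writes the output file" $
    withSystemTempDirectory "cli-test" $ \dir -> do
      let inputPath  = dir </> "input.txt"
          outputPath = dir </> "output.txt"
          opts       = CLIOptions { inputFilename = inputPath, outputFilename = outputPath }
      writeFile inputPath "some well-known input"
      -- Run the real main, but with a fabricated command line.
      withArgs ["-i", inputPath, "-o", outputPath] main
      output <- readFile outputPath
      output `shouldBe` mainOperation opts "some well-known input"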

Functional Core 2: The command line parser

Implicitly, I added another Functional Core while designing the Imperative Shell: the command line parser. getArgs will get me a list of strings, just as I typed them on the command line. But parsing this command line into some CLIOptions is a complex task. Lots of books have been written on lexers, parsers and all that stuff, and we can't just ignore the complexity involved. So I have decided that the command line parser will be another pure function that receives the command line arguments and returns either an error condition or some CLIOptions. It deals neither with how we receive these command line arguments, nor with how we show the error condition back to the user. Those aspects are left to the Imperative Shell.

parseArgs :: [String] -> Either ErrorString CLIOptions
parseArgs args = (lots of lexing and parsing go here)

Again, this is easy to test because it's a pure function. It's just a case of matching inputs and outputs:

  • For the input ["-i", "input.txt", "-o", "output.txt"] the function has to return Right CLIOptions {inputFilename="input.txt", outputFilename="output.txt"}.
  • For ["asdf"], the function has to return Left (Error "Invalid command line parameter: asdf").

Implementation pattern: Interpreter

Indirectly, we have invoked another pattern that is often used along with Functional Core-Imperative Shell: the Interpreter. The logic in the Functional Core must not talk to the outside world, because that's the job of the Imperative Shell and because we want pure functions to remain pure. This means the Functional Core must provide something to the Imperative Shell so that the Imperative Shell can do whatever it needs to do without having to do advanced processing. The approach consists of creating some kind of language that the Functional Core uses to send instructions to the Imperative Shell. In this case, we can see this in parseArgs. parseArgs decides what the Imperative Shell will do (either fail with an error, or continue with more operations), and communicates with it by using the Either datatype, returning Left err to make the Imperative Shell fail with an error, or Right opts to make the Imperative Shell continue with the next operation.

This is often a sub-pattern in the Functional Core-Imperative Shell architecture: the Functional Core does the "thinking", the Imperative Shell does the "doing". The Imperative Shell is as simple and "dumb" as possible, because the advanced processing and "thinking" is reserved for the Functional Core. The Imperative Shell talks to the external world, but as soon as it receives any information from it, it asks the Functional Core what to do with it.
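
To make that "language" idea more tangible, here is a small illustrative sketch, not part of the utility above, in which the Functional Core returns a tiny datatype of commands and the Imperative Shell merely executes them:

-- A tiny "language" the Functional Core can speak. The constructors here
-- are hypothetical, invented purely for illustration.
data Command
  = ReportError String          -- tell the user something went wrong
  | WriteOutput FilePath String -- write this content to this file
  | DoNothing                   -- nothing to do

-- Pure: decides what should happen, but performs no IO.
decide :: Either String (FilePath, String) -> Command
decide (Left err) = ReportError err
decide (Right (path, contents))
  | null contents = DoNothing
  | otherwise     = WriteOutput path contents

-- Impure, but dumb: just executes whatever the core decided.
runCommand :: Command -> IO ()
runCommand (ReportError err)           = putStrLn ("Error: " ++ err)
runCommand (WriteOutput path contents) = writeFile path contents
runCommand DoNothing                   = pure ()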
