Functional programming and F#: Introduction

Computer programmers sometimes mistake brevity for elegance, especially when discussing the relative merits of programming languages. Haskell partisans, for example, love to show how one can implement QuickSort in one line of inscrutable gibberish that looks like somebody had an epileptic seizure with the caps lock on. Haskell is a great language, but not because it saves you some typing. If being concise were really all that important, one might as well define a new programming language called ZipC, whose “compiler” is defined as “gunzip -c $1 | gcc”. You’d get some really concise and “elegant” programs out of that, I’m sure! I’m usually pretty wary of academic languages like Lisp and ML, which often put computer science orthodoxy above usability. I’m just not interested in investing the time to learn a new language just so that I can save some typing and use cute tricks. The time saved is likely to be more than offset by the inefficiency of maintaining competence in multiple special-purpose languages.

F#, however, is the first new language in a long time that I’ve felt is worth taking the time to learn. Developed by Microsoft Research in England, F# is a functional language with roots in ML. Unlike many academic functional languages, however, F# is a pragmatic mix of imperative object-oriented programming and functional constructs. Instead of forcing a paradigm on you, it simply makes several available. It can also leverage the full .NET Framework, and is thus just as capable of implementing a GUI as it is an abstract recursive algorithm. It produces code that’s about as fast as C#, and it should only get faster with improvements in its compiler and in .NET’s JIT compiler. A novel aspect of F# (in the context of .NET languages) is that it can be used as an interactive scripting language from within Visual Studio, allowing for interactive data visualization.

After working with it for a month or so, I find it to be a highly productive and expressive language, especially for numerical algorithms. It manages to be concise and high-level while maintaining coherence and readability. Well-written functions in F# are easier to follow and debug than their C++ or C# counterparts, and there are fewer opportunities for bugs to begin with. More than just cute syntax, it’s a language that actually encourages you to think differently. Programs implemented well in F# are a closer representation of the underlying concept and thinking, and are less obscured with mundane details. Perhaps the best argument of all is empirical: the test program I wrote to compute fractal Newton basins worked the first time I compiled it, something that virtually never happens to me with C or C++.

In this post, I’ll provide a rough overview of functional programming and F#. There are much better introductions out there, but I nonetheless wanted to write this as a way to help myself learn about functional programming. I also figure F# is new enough that many of the people who read this blog might have not yet heard of it, and any small additional exposure this great language gets is worth it. This is not meant to provide a tutorial that will bring you up to speed on F#, but is intended to give you enough of an idea about it that you can decide whether or not you want to learn more.

In a follow-up article, I’ll step through a short program in F# which uses some of the example functions I define below to implement a parallel computation of some pretty fractals generated by running Newton’s method for polynomial roots in the complex plane. In a third post, I’ll talk about some of the libraries available to speed up F# development. Because of the functional nature of F#, libraries written for F# can do some pretty impressive things. For example, the fractal generator detailed in the next post uses a third-party library function that handles all the details of multithreaded task parallelization through the surprisingly simple use of higher-order functions. In fact, parallelizing the program required typing exactly nine characters.

If you’d like to experiment with F#, you will first need some version of Visual Studio. I’m not sure if one of the free Express versions will work. (Someone please let me know if they have the answer to this.) The F# compiler is freely available from Microsoft Research, and integrates nicely with VS 2005. It is also reported to run under Mono on Mac OS X and Linux, though I haven’t verified this myself.

Functions as variables

A functional language is characterized by several key ideas. The core notion is that functions are fundamental objects, on par with variables. They can thus be passed to, and returned by, other functions. This enables some very efficient and conceptually satisfying ways of expressing algorithms, as we’ll see shortly. A functional language does nothing C or C# can’t do (sometimes in the hype of a new language, it’s easy to forget the old Church-Turing thesis); it’s just that it can often do things much more simply, and in a way that is often closer to the way we think.

One oddity in F#, and your first hint that you’re dealing with a different kind of language, is the way function calls are defined. In a sense, an F# function really only ever takes one argument. To get a function to take multiple arguments, you are essentially writing a function which takes a single argument and returns another function which takes the second argument (and so on). For example, a built-in function which adds two integers, ( + ), is listed in F# as having type (int -> int -> int). The interpretation of this odd notation is that it’s a function which takes an integer and returns a function of type (int -> int), which is itself a function which takes an integer and returns an integer. Perhaps a simpler way to think about this type is with the bracketed notation (int -> (int -> int)). When you write

z = ( + ) x y

what is happening, at least conceptually, is that the compiler greedily evaluates ( + ) x, which returns a function, and then it applies that function to y. (Don’t worry, F# also has infix syntax for binary operators so you can also just write the familiar z = x + y.)

One consequence of this is that you can do things like define a new function which increments an integer by 2 in terms of the ( + ) function with the following syntax

let incrtwo = ( + ) 2;;
let shouldbethree = incrtwo 1;;

This technique of making functions with multiple inputs from a chain of higher-order functions, each of which takes a single argument, is known as function currying. Granted, it’s pretty pointless in this static example, but consider the following interactive F# fragment:

> let makeincrementor s = ( + ) s;;
val makeincrementor : int -> (int -> int)
> let incrbyfive = makeincrementor 5;;
val incrbyfive : int -> int
> incrbyfive 6;;
val it : int = 11

The first line defines a function which takes an integer and returns a new function which increments its input by a run-time defined value s. The second line applies this to define a new function which increments an integer by 5. So, function currying can be used to generate new functions dynamically. One exciting aspect of F# being a .NET language is that, in theory, this allows for a nice clean syntax for automatically generating specialized compiled functions on the fly during run time. (If anybody knows whether or not this already actually happens, please leave a comment with the answer.)

All functions in F# are inherently curried functions. Fortunately, there is syntax to write such functions without having to keep track of this fact. You can simply write

let myplus x y = x + y

and you automatically have a curried function of type (int -> int -> int). The fact that it will assume you want integer types all around is part of the automatic type inference of F#, which you will find to be both a blessing and a curse in practice.
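As a minimal sketch of steering that inference (the function name here is mine, purely for illustration), a single type annotation is enough to switch the whole function over to floats:

```fsharp
// Without annotations, myplus above is inferred as int -> int -> int.
// Annotating just one argument changes the inferred type of everything:
let myplusf (x : float) y = x + y   // inferred as float -> float -> float
```

This is typical of working with F#’s inference: you rarely write types, but when the compiler guesses wrong, one well-placed annotation is usually all it takes.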

The use of functions as arguments to other functions allows for tremendous abstraction, and certain “design patterns” which are no more than conceptual models in imperative languages can actually be implemented as actual library routines in functional languages. One very powerful example of so-called higher-order functions is the F# List.mapi function. It takes two arguments: a function and a list. For each element of the list, List.mapi calls the function with two arguments, the element’s integer index in the list and the element itself, and collects the results into a new list. A loop with an incremented local variable is one of the most common idioms in all of programming. How many times have you typed in the same for loop construct in C or C#? And how many times have you accidentally typed in the termination condition incorrectly, or missed a corner case?
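As a throwaway illustration (the list and lambda here are made up for the example), List.mapi replaces the indexed for loop entirely:

```fsharp
// List.mapi hands each element to the function along with its index,
// so there is no counter variable or termination condition to get wrong.
let scaled = List.mapi (fun i x -> float i * x) [2.0; 4.0; 6.0]
// scaled is [0.0; 4.0; 12.0]
```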

As an example of how this is used in F#, below is a function which computes the derivative of a polynomial, passed in as a list of coefficients, ps, in ascending order:

let polyderiv (ps : float list) =
    match ps with
    | [] -> []
    | lowcoef :: pshigh ->
        List.mapi (fun k pcoef -> float (k + 1) * pcoef) pshigh

Pattern Matching

This example also uses another common idiom in functional programming: pattern matching. Program branching and control can often be handled very elegantly by matching an expression against a list of potential patterns. In the example function above, there are two possible situations. If the list is empty, we return an empty list. Otherwise, the list matches against lowcoef::pshigh, which is syntax for “take the first element as the local variable lowcoef and the remaining elements (if any) as pshigh.” This illustrates another powerful aspect of pattern matching: one simultaneously branches control while assigning relevant local variables to be used in the corresponding branch. Not only is this pattern matching approach more powerful and conceptually intuitive than if/then or switch statements, it also has the added side benefit of doing the work of local variable creation. After all, if you are bothering to treat a given pattern as a special case, you probably need to use at least one of the elements that match that pattern.
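A small made-up example of this branch-and-bind behavior (the function is hypothetical, not part of the fractal program):

```fsharp
// Each pattern simultaneously selects a branch and names the parts it needs:
// no separate if-tests, no manual indexing into the list.
let describe xs =
    match xs with
    | [] -> "empty"
    | [x] -> sprintf "one element: %d" x
    | x :: y :: _ -> sprintf "starts with %d then %d" x y
```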

The last line uses the List.mapi higher-order function to apply an anonymous function to each coefficient (except the first), which simply multiplies that coefficient by its position in the original list. Anonymous functions are to the function type what numerals are to the integer type: just as you can write 3 + 7 without having to make two variables to hold each value, likewise you can just write a function without having to assign it a label. This makes for very efficient use of higher-order functions like List.iter or List.mapi, where the simple function given to them as an argument can be defined on the spot. A partially applied curried function is also commonly used in place of an anonymous function.
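For instance (a quick sketch, not code from the polynomial example), an anonymous function and a partially applied operator can be used interchangeably as arguments to List.map:

```fsharp
let doubled = List.map (fun x -> 2 * x) [1; 2; 3]   // anonymous function: [2; 4; 6]
let tripled = List.map (( * ) 3) [1; 2; 3]          // curried operator:   [3; 6; 9]
```

One small caveat: the spaces inside ( * ) matter, because (* without a space opens a comment in F#.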

Immutable Data

Another key idea, and one that is very odd at first, is the notion of immutable data structures. In most cases, when you declare a “variable” in F# you are really just creating a label for a value, and nothing more. You can reassign a new value to that label, but it’s just pure syntax. This seems like a huge impediment, but in practice it’s surprisingly not so, especially in the typical idioms of functional programming. If this doesn’t make sense yet, don’t worry, it will when you see more example code. For now, just consider immutable data structures to be like variables which can never be written to by pointer. Immutable data allows the compiler to do some unique optimizations and makes for efficient use of memory; if you build a new list by prepending an element to an existing list, for example, no copy needs to be made, as the new list simply holds a reference to the old, immutable list as its tail. It also can make concurrent programming much more straightforward: provided you can express your algorithm in terms of immutable data structures, there’s no need to worry about locks, as no thread can have side effects that would affect the others, and your code is automatically thread-safe. As many-core processors loom on the horizon, F# seems a rather timely language.
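A concrete sketch of the memory sharing this enables (illustrative values only):

```fsharp
let xs = [2; 3; 4]
let ys = 1 :: xs   // the tail of ys *is* xs; nothing is copied
// Because xs can never change, this sharing is always safe:
// xs is still [2; 3; 4], and ys is [1; 2; 3; 4].
```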

Recursions

The last functional programming aspect I’ll talk about is recursion. Mathematical induction is a natural way of thinking about many algorithms, and recursion is thus a natural way to implement them that is concise and easily verified as correct. The typical example is computing the Fibonacci sequence: if you know the last two Fibonacci numbers, you just add them to get the next one. A recursive algorithm to compute the nth number would call itself with a request for the (n-1)st and (n-2)nd numbers and then add them together, with control code in the function for handling the base cases of n = 0 and n = 1. As a concrete example, and in keeping with the polynomial theme, here is a rather ugly function which uses recursion to evaluate a polynomial at a given value:

let rec polyval_aux (ps : float list) x xpow accum =
    match ps with
    | [] -> accum
    | coef :: psrest -> polyval_aux psrest x (x * xpow) (accum + coef * xpow)

The rec keyword signifies to the compiler that this function calls itself, so it had better not try to evaluate the right-hand side too greedily. This function certainly isn’t the most computationally efficient way to do this (though it is rather memory efficient), but it’s a simple example of a recursive function, one that illustrates two more common idioms: tail recursion and accumulator passing. Tail recursion is the idea that if the complete final value of the function is simply the result of the recursive call, the stack only has to keep track of one function call at a time. It’s probably not a big deal here, but in cases where the function is recursively called a great number of times, it can matter.
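To make the tail-call point concrete, here is the Fibonacci example described earlier in two forms; the sketch is illustrative, and the names fib, fibAux, and fibFast are mine, not from any library:

```fsharp
// Direct transcription of the inductive definition. Not tail recursive:
// after each call returns, the two results still have to be added.
let rec fib n =
    if n < 2 then n
    else fib (n - 1) + fib (n - 2)

// Tail-recursive version with accumulators: the recursive call *is* the
// entire result, so a single stack frame suffices no matter how large n is.
let rec fibAux a b n =
    if n = 0 then a
    else fibAux b (a + b) (n - 1)

let fibFast n = fibAux 0 1 n
```

Both compute the same sequence, but only the second runs in constant stack space.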

You can also see one of the consequences of immutability: we have to keep passing the state of the computation along. The arguments of the function are a list ps of the polynomial coefficients, the value x at which to evaluate them, followed by two accumulator arguments which carry the state of the computation between calls, a simple stand-in for a continuation. The values of x raised to increasing powers are passed along in xpow, and the running sum of the polynomial is kept in accum. Pattern matching is used as before, except now we use the head of the list, computing its contribution to the polynomial before passing along the remaining terms to the function again. When nothing is left, the accumulated sum is simply returned. The extra arguments are ugly, of course, so we use this as an auxiliary function, to be called from the following wrapper function:

let polyval ps x =
    polyval_aux ps x 1.0 0.0
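As a quick sanity check (the definitions are repeated here so the fragment stands alone, and the test polynomial is arbitrary), evaluating p(x) = 1 + 2x + 3x² at x = 2:

```fsharp
let rec polyval_aux (ps : float list) x xpow accum =
    match ps with
    | [] -> accum
    | coef :: psrest -> polyval_aux psrest x (x * xpow) (accum + coef * xpow)

let polyval ps x = polyval_aux ps x 1.0 0.0

// p(x) = 1 + 2x + 3x^2, so p(2) = 1 + 4 + 12 = 17
let value = polyval [1.0; 2.0; 3.0] 2.0   // 17.0
```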

I think the above implementation of polynomial handling routines has two advantages over a similar set of functions in C. First, it’s really hard to screw up with stupid mistakes, like misallocation of arrays or bad referencing. Second, the intent of the code is more apparent, as the details of loops and counters have been hidden in the library functions. (Though I understand the intent may not be entirely transparent if you’re not used to F# yet.)

If you’d like to learn more about F#, a couple of good places to start are the Flying Frog Consultancy and Microsoft Research. In the next post, I’ll present a stand-alone Windows program that computes Newton basin fractals in 50 lines. Following that, I’ll finish by writing about some of the nascent resources that are starting to crop up for F#, in particular the excellent graphics and numerical libraries under development at Flying Frog.