Functional (or applicative) programming is a subject of growing interest, with a great deal of research being done in Great Britain. It differs from the more conventional, so-called imperative style of programming in being entirely descriptive: instead of giving commands to the computer, one describes objects. Each object is either a primitive object (such as a number) or is generated by applying a function to other objects, so each description has many sub-descriptions embedded in it. One great advantage of functional programming is that the sub-objects may be computed in any order; if many processors are available, this includes the possibility of computing them in parallel.
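The order-independence of sub-descriptions can be illustrated with a small Haskell sketch (the function names here are illustrative, not drawn from any particular system):

```haskell
-- A purely descriptive definition: each object is either primitive
-- or the result of applying a function to other objects.
square :: Int -> Int
square n = n * n

-- The two sub-descriptions 'square 3' and 'square 4' share no data
-- dependencies, so an implementation is free to evaluate them in
-- either order, or in parallel; the result is the same.
result :: Int
result = square 3 + square 4

main :: IO ()
main = print result   -- prints 25
```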
Another advantage is the ease with which executable specifications of problems may be written, tried out and debugged. This is particularly beneficial for programmer productivity.
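As a sketch of what "executable specification" means in practice, the standard quicksort example can be written almost verbatim from its mathematical description and then run directly:

```haskell
-- An executable specification of sorting: the elements smaller than
-- the pivot, then the pivot, then the elements at least as large,
-- each part sorted recursively. This reads like the definition
-- itself, yet can be executed and debugged as a program.
qsort :: Ord a => [a] -> [a]
qsort []     = []
qsort (p:xs) = qsort [x | x <- xs, x < p]
            ++ [p]
            ++ qsort [x | x <- xs, x >= p]

main :: IO ()
main = print (qsort [3,1,4,1,5,9,2,6])
```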
Research is now continuing in a number of directions:
Investigation of the theoretical and practical basis of parallel functional programming is being carried out by studying how various orders of evaluation may be modelled using the pi-calculus. In addition, we are investigating the feasibility of using the pi-calculus as an intermediate language in compilers for functional languages whose object code is designed for parallel machines.
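To make the intermediate-language idea concrete, a fragment of pi-calculus syntax might be rendered as a Haskell data type along the following lines. The constructors and names here are purely illustrative assumptions, not taken from any actual compiler:

```haskell
-- A hypothetical rendering of a pi-calculus fragment as it might
-- appear in a compiler's intermediate representation.
type Name = String

data Proc
  = Nil                    -- the inert process
  | Send Name Name Proc    -- send a name along a channel, then continue
  | Recv Name Name Proc    -- receive a name (binding it), then continue
  | Par  Proc Proc         -- parallel composition
  | New  Name Proc         -- restriction: create a fresh private channel
  deriving Show

-- x<y>.0 | x(z).0 : a send and a matching receive in parallel,
-- which may communicate in either order of scheduling.
example :: Proc
example = Par (Send "x" "y" Nil) (Recv "x" "z" Nil)

main :: IO ()
main = print example
```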
Investigation of functional hyperprogramming, in which program elements may be connected by hyper-links to allow re-use of existing software. This work continues the STAPLE project's research on persistent functional programming, and is investigating the storage structures needed for convenient browsing and linking of a functional program space.
Although parallel computers potentially offer very high performance at comparatively low cost, it is often difficult to obtain that performance without considerable effort. Since functional languages are much more abstract than conventional programming languages, it is believed that they may simplify the problem of writing portable parallel programs, allowing programs to be implicitly sub-divided for parallel execution.
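The sense in which a functional program can be implicitly sub-divided is visible in any divide-and-conquer definition. The following sketch uses only sequential Haskell; in a parallel dialect such as GpH, the independent recursive calls could be annotated (e.g. with `par`, assuming the `parallel` package) and evaluated on different processors without changing the algorithm:

```haskell
-- A divide-and-conquer sum. The two recursive calls operate on
-- disjoint halves of the list and share no data dependencies, so a
-- parallel implementation may evaluate them simultaneously.
psum :: [Int] -> Int
psum []  = 0
psum [x] = x
psum xs  = psum left + psum right   -- independent sub-computations
  where (left, right) = splitAt (length xs `div` 2) xs

main :: IO ()
main = print (psum [1..100])   -- prints 5050
```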
Removing control issues from the programmer means that they must be tackled directly by the implementation. Among the most important issues that need to be addressed are scheduling, dynamic load management, parallel memory management, and static task identification. We have developed two main tools to assist this research: the GranSim simulator, which exposes information about task granularity and dynamic system load in a controlled but highly tunable environment, and the extremely portable GUM implementation, which should run parallel Haskell programs without change on many kinds of parallel architecture. These tools have already confirmed known results, such as the need for rapid task migration for large-grained tasks, and demonstrated interesting new results, such as a general insensitivity to sophisticated data-packing algorithms.
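One simple granularity-control idea related to the issues above can be sketched as follows: fine-grained work items are grouped into larger chunks so that each potential task carries enough work to justify its creation cost. The names are hypothetical and this is not GranSim's or GUM's actual code:

```haskell
-- Group a list of fine-grained work items into chunks of size n.
chunk :: Int -> [a] -> [[a]]
chunk _ [] = []
chunk n xs = take n xs : chunk n (drop n xs)

-- Each chunk would become one parallel task in a real implementation;
-- here the chunks are simply processed sequentially in order.
chunkedSquares :: Int -> [Int] -> [Int]
chunkedSquares n = concatMap (map (^ 2)) . chunk n

main :: IO ()
main = print (chunkedSquares 4 [1..10])
```

Choosing the chunk size is exactly the kind of granularity trade-off a simulator such as GranSim is designed to explore: larger chunks reduce task-creation overhead but limit the available parallelism.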