Mathematics of Program Construction


Diagonals and Block-Ordered Relations

Roland Backhouse and Ed Voermans

Submitted for publication.

More than 70 years ago, Jacques Riguet suggested the existence of an ``analogie frappante'' (striking analogy) between so-called ``relations de Ferrers'' and a class of difunctional relations, members of which we call ``diagonals''. Inspired by his suggestion, we formulate an ``analogie frappante'' linking the notion of a block-ordered relation and the notion of the diagonal of a relation. We formulate several novel properties of the core/index of a diagonal, and use these properties to rephrase our ``analogie frappante''. Loosely speaking, we show that a block-ordered relation is a provisional ordering up to isomorphism and reduction to its core. (Our theorems make this informal statement precise.) Unlike Riguet (and others who follow his example), we avoid almost entirely the use of nested complements to express and reason about properties of these notions: we use factors (aka residuals) instead. The only (and inevitable) exception to this is to show that our definition of a ``staircase'' relation is equivalent to Riguet's definition of a ``relation de Ferrers''. Our ``analogie frappante'' also makes it obvious that a ``staircase'' relation is not necessarily block-ordered, in spite of the mental picture of such a relation presented by Riguet.

See here for the paper.


The Thins Ordering on Relations

Ed Voermans, Jules Desharnais and Roland Backhouse

Submitted for publication.

Earlier papers introduced the notions of a core and an index of a relation (an index being a special case of a core). A limited form of the axiom of choice was postulated ---specifically that all partial equivalence relations (pers) have an index--- and the consequences of adding the axiom to axiom systems for point-free reasoning were explored. In this paper, we define a partial ordering on relations, which we call the ``thins'' ordering. We show that our axiom of choice is equivalent to the property that core relations are the minimal elements of the thins ordering. We also characterise the relations that are maximal with respect to the thins ordering. Apart from our axiom of choice, the axiom system we employ is pared to a bare minimum and admits many models other than concrete relations --- we do not assume, for example, the existence of complements; in the case of concrete relations, the theorem is that the maximal elements of the thins ordering are the empty relation and the equivalence relations. This and other properties of thins provide further evidence that our axiom of choice is a desirable means of strengthening point-free reasoning on relations.

See here for the paper.


An example of a goal-directed, calculational proof

Roland Backhouse, Walter Guttmann and Michael Winter

Accepted for publication in Journal of Functional Programming.

An equivalence relation can be constructed from a given (homogeneous, binary) relation in two steps: first, construct the smallest reflexive and transitive relation containing the given relation (the ``star'' of the relation) and, second, construct the largest symmetric relation that is included in the result of the first step. The fact that the final result is also reflexive and transitive (as well as symmetric), and thus an equivalence relation, is not immediately obvious, although straightforward to prove. Rather than prove that the defining properties of reflexivity and transitivity are satisfied, we establish reflexivity and transitivity constructively by exhibiting a particular starth root --- in a way that emphasises the creative process in its construction. The constructed starth root is fundamental to algorithms that determine the strongly connected components of a graph as well as the decomposition of a graph into its strongly connected components together with an acyclic graph connecting such components.

See here for the paper.


The Index and Core of a Relation, With Applications to the Axiomatics of Relation Algebra

Roland Backhouse and Ed Voermans

Submitted for publication.

We introduce the general notions of an index and a core of a relation. We postulate a limited form of the axiom of choice ---specifically that all partial equivalence relations have an index--- and explore the consequences of adding the axiom to standard axiom systems for point-free reasoning. Examples of the theorems we prove are that a core/index of a difunction is a bijection, and that the so-called ``all or nothing'' axiom used to facilitate pointwise reasoning is derivable from our axiom of choice.

See here for the paper.


On Difunctions

Roland Backhouse and José Nuno Oliveira

Journal of Logical and Algebraic Methods in Programming 134 (2023) 100878

The notion of a difunction was introduced by Jacques Riguet in 1948. Since then it has played a prominent role in database theory; it also has a significant role in the development of a theory of types that includes subtyping and types with laws. Difunctions are also used in program specification and in the characterization of certain types of simulations in process theory. The theory of difunctions is, however, less known in computing than it perhaps should be. The main purpose of the current paper is to give an account of difunction theory in relation algebra, with the aim of making the topic more mainstream. As is common with many important concepts, there are several different but equivalent characterisations of difunctionality, each with its own strength and practical significance. This paper compares different proofs of the equivalence of the characterisations. A well-known property ---first formulated by Riguet--- is that a difunction is a set of completely disjoint rectangles. This property suggests the introduction of the (general) notion of the ``core'' of a relation; we use this notion to give a novel and, we believe, illuminating characterisation of difunctionality as a bijection between the classes of certain partial equivalence relations.

See here for the paper.


The Index and Core of a Relation, With Applications to the Axiomatics of Relation Algebra and Block-Ordered Relations

Roland Backhouse and Ed Voermans

Working Document

We introduce the general notions of an index and a core of a relation. We postulate a limited form of the axiom of choice ---specifically that all partial equivalence relations have an index--- and explore the consequences of adding the axiom to standard axiom systems for point-free reasoning. Examples of the theorems we prove are that a core/index of a difunction is a bijection, and that the so-called ``all or nothing'' axiom used to facilitate pointwise reasoning is derivable from our axiom of choice. We reformulate and generalise a number of theorems originally due to Riguet on polar coverings of a relation. We study the properties of the ``diagonal'' of a relation (called the ``différence'' by Riguet who introduced the concept in 1951). In particular, we formulate and prove a general theorem relating properties of the diagonal of a relation to block-ordered relations; the theorem generalises a property that Riguet called an ``analogie frappante'' between the ``différence'' of a relation and ``relations de Ferrers'' (a special case of block-ordered relations).

See here for the paper.


Difunctions and Block-Ordered Relations

Roland Backhouse

Working Document

Seventy years ago, in a series of publications, Jacques Riguet introduced the notions of a ``relation difonctionnelle'', the ``différence'' of a relation and ``relations de Ferrers''. He also presented a number of properties of these notions, including an ``analogie frappante'' between ``relations de Ferrers'' and the ``différence'' of a relation. Riguet's definitions, particularly of the central concept of the ``différence'' of a relation, use formulae involving nested complements. Riguet's proofs make extensive use of natural language (French), making them difficult to understand. The primary purpose of this paper is to bring Riguet's work up to date using modern calculational methods. Other goals are to document and extend Riguet's work as fully as possible, and to correct extant errors in the literature. We call a ``relation difonctionnelle'' a ``difunctional relation'', the ``différence'' of a relation we call the ``diagonal'' of a relation, and a ``relation de Ferrers'' we call a ``staircase relation'' --- a special case of a ``block-ordered relation''. We avoid as much as possible the use of nested complements by exploiting the left and right factor operators (aka division or residual operators) on relations. We present complete, calculational proofs of two fundamental properties of difunctions: a relation is difunctional if and only if it can be represented by a pair of functional relations, and a relation is difunctional if and only if it is the union of a set of completely disjoint rectangles. The diagonal of a relation (Riguet's ``différence'') is a difunction that plays a very significant rôle in the study of block-ordered relations; accordingly, we study its properties in depth. For completeness, we also present a second method for constructing a difunction from an arbitrary relation: Riguet's ``fermeture difonctionnelle''.
Riguet used an informal, mental picture of a staircase-like structure to introduce ``relations de Ferrers'' in the case of finite relations. Riguet also stated a necessary and sufficient condition for a ``relation de Ferrers'' to be the union of a totally ordered class of rectangles, where the ordering has a property that we call ``polar''. By omitting the totality requirement, we abstract the more general notion of a block-ordered relation. We explore conditions under which a given relation has a non-redundant, polar covering and when it is block-ordered. In doing so, we formulate and prove a theorem establishing an equivalence between the property of a relation being block-ordered and properties of the diagonal of a relation. Our theorem generalises Riguet's ``analogie frappante''. Finally, we consider the special case of staircase relations. We consider different definitions that formalise Riguet's mental picture. Contrary to claims made in the published literature, we show that the definitions are not equivalent in general.

See here for the paper.


Elements of Algorithmic Graph Theory. An Exercise in Combining Precision with Concision

Roland Backhouse, with contributions by Henk Doornbos, Roland Glueck and Jaap van der Woude

Working Document

Algorithmic graph theory ---as taught in many university courses--- focuses on the notions of acyclicity and strongly connected components of a graph, and the related search algorithms. This document is about combining mathematical precision and concision in the context of algorithmic graph theory. Specifically, we use point-free reasoning about paths in graphs (as opposed to pointwise reasoning about paths between nodes in graphs), resorting to pointwise reasoning only where this is unavoidable. Our aim is to use the calculations as the basis of a machine-supported formal verification of graph algorithms in order to assess the current state of automated verification systems. This document extends joint work with Henk Doornbos, Roland Glueck and Jaap van der Woude published in the Journal of Logical and Algebraic Methods in Programming.

See here for the paper.


Components and Acyclicity of Graphs. An Exercise in Combining Precision with Concision

Roland Backhouse, Henk Doornbos, Roland Glueck, Jaap van der Woude

J. Logical and Algebraic Methods in Programming, Vol. 124, January 2022, 100730, DOI:10.1016/jlamp.2021.100730

Central to algorithmic graph theory are the concepts of acyclicity and strongly connected components of a graph, and the related search algorithms. This document is about combining mathematical precision and concision in the presentation of these concepts. Concise formulations are given for, for example, the reflexive-transitive reduction of an acyclic graph, reachability properties of acyclic graphs and the relation to the fundamental concept of ``definiteness'', and the decomposition of paths in a graph via the identification of its strongly connected components and a pathwise homomorphic acyclic subgraph. The relevant properties are established by precise algebraic calculation. The combination of concision and precision is achieved by the use of point-free relation algebra capturing the algebraic properties of paths in graphs, as opposed to the use of pointwise reasoning about paths between nodes in graphs.

See here for the paper.


An analysis of repeated graph search

Roland Backhouse

In G.Hutton (Ed.): MPC2019, LNCS 11825, pp.1--31, 2019. DOI:10.13140/RG2.2.28481.81761

Graph-searching algorithms typically assume that a node is given from which the search begins, but in many applications it is necessary to search a graph repeatedly until all nodes in the graph have been ``visited''. Sometimes a priority function is supplied to guide the choice of node when restarting the search, and sometimes not. We call the node from which a search of the graph is (re)started the ``delegate'' of the nodes found in that repetition of the search, and we analyse the properties of the delegate function. We apply the analysis to establishing the correctness of the second phase of the Kosaraju-Sharir algorithm for computing strongly connected components of a graph.
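
As an informal illustration of the delegate function (a minimal sketch using repeated depth-first search; the graph representation and the priority function below are assumptions of the sketch, not the paper's formalism):

```python
def delegates(graph, priority):
    """Repeatedly search 'graph' (a dict: node -> list of successors),
    restarting from the lowest-priority unvisited node; each node's
    delegate is the node from which the search that first reached it
    was (re)started."""
    delegate = {}
    for root in sorted(graph, key=priority):
        if root in delegate:
            continue
        stack = [root]
        while stack:
            node = stack.pop()
            if node in delegate:
                continue
            delegate[node] = root
            stack.extend(graph[node])
    return delegate

g = {'a': ['b'], 'b': [], 'c': ['b', 'd'], 'd': []}
# restart order a, b, c, d: the search from 'a' visits a and b;
# the next unvisited node is 'c', whose search visits c and d.
print(delegates(g, priority=lambda n: n))
```

Here the priority function fixes the restart order, and every node found in a given repetition of the search shares the same delegate.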

See here for the paper.


Factor Theory Made Calculational

Roland Backhouse

Draft. Comments welcome. DOI: 10.13140/RG.2.2.25718.40009

In 1971, J.H. Conway introduced the notion of the factors of a language and the factor matrix of a regular language. Shortly afterwards, the author extended Conway's factor theory in several ways. The existence of a unique, minimal starth root of the factor matrix, dubbed the factor graph of the language, was established. It was also proved that the factor matrix of a factor of a language is a submatrix of the factor matrix of the language, and the factor graph of a factor of a language is pathwise homomorphic to the factor graph of the language. The latter result was used to derive an algorithm for computing the factor matrix of a language from its factor graph with the property that the resulting regular expressions have star-height at most the cycle-rank of the factor graph, and could be strictly smaller. Using the simple device of naming the factor operators, the current paper revisits Conway's and our own work on factor theory in a calculational style. We give explicit constructions of the factor matrix, the factor graph, submatrices of the factor matrix defined by subfactors and pathwise homomorphisms of factor graphs. We also extend factor theory beyond this earlier work. We formulate the theory more abstractly so that we can rigorously justify the use of the syntactic monoid of a language in calculations involving factors. We also present Conway's theory of approximations of regular languages but extended to this more general abstract context. When specialised to regular languages, we prove the existence of a unique minimal approximating function and compare this with Conway's maximal constant+linear approximating function. The closure algorithm we present does not always construct regular expressions of minimal star-height. However, we speculate in the conclusions on how the theory of approximations might be exploited to develop a novel, effective approach to solving the star-height problem.

See here for the paper.


Factor Theory and the Unity of Opposites

Roland Backhouse

J. Logical and Algebraic Methods in Programming, Vol.85, No. 5, Pp.824--826 (2016)

The theory of factors of a regular language is used to illustrate the unity-of-opposites theorem of Galois connections. Left and right factors of a language are characterised as unions of right- and left-invariant equivalence classes, respectively, and this characterisation is exploited in the construction of the factor graph. The factor graph is a representation of the poset of left factors and, isomorphically by the unity of opposites, the poset of right factors. Two illustrative examples are given, one of which is the so-called failure function used in the Knuth-Morris-Pratt pattern matching algorithm.
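
The failure function mentioned above is computed by the following standard algorithm (a textbook formulation, included only as a concrete reminder of the example; it is not the paper's factor-theoretic construction):

```python
def failure(pattern):
    """Failure function of the Knuth-Morris-Pratt algorithm: f[i] is
    the length of the longest proper prefix of pattern[:i+1] that is
    also a suffix of it."""
    f = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = f[k - 1]  # fall back to the next shorter border
        if pattern[i] == pattern[k]:
            k += 1
        f[i] = k
    return f

print(failure("abaabab"))  # -> [0, 0, 1, 1, 2, 3, 2]
```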

See here for the paper.


In-situ Inversion of a Permutation

Roland Backhouse

Unpublished

A modern presentation of Knuth's Algorithm I for in-situ inversion of a permutation.

See here for the paper.


The Capacity-C Torch Problem

Roland Backhouse and Hai Truong

Science of Computer Programming, 1 May 2015, Vol.102:76-107, doi:10.1016/j.scico.2015.01.003

The torch problem (also known as the bridge problem or the flashlight problem) is about getting a number of people across a bridge as quickly as possible under certain constraints. Although the problem is very simply stated, its solution is surprisingly non-trivial. The case in which there are just four people and the capacity of the bridge is two is a well-known puzzle, widely publicised on the internet. We consider the general problem where the number of people, their individual crossing times and the capacity of the bridge are all input parameters. We present two methods to determine the shortest total crossing time: the first expresses the problem as an integer-programming problem that can be solved by a standard linear-programming package, and the second expresses the problem as a shortest-path problem in an acyclic directed graph, i.e. a dynamic-programming solution. The complexity of the integer-programming solution is difficult to predict; its main purpose is to act as an independent test of the correctness of the results returned by the second solution method. The dynamic-programming solution has best- and worst-case time complexity proportional to the square of the number of people. An empirical comparison of the efficiency of both methods is also presented.

See here for the paper.


First-past-the-post Games

Roland Backhouse

Science of Computer Programming 85, Part B (1 June 2014), pp. 166-203, Selected Papers from Mathematics of Program Construction 2012, Jeremy Gibbons and Pablo Nogueira (Eds.). Available online 13-Aug-2013, Science of Computer Programming (2013), http://authors.elsevier.com/sd/article/S0167642313001640, DOI 10.1016/j.scico.2013.07.007.

Informally, a first-past-the-post game is a (probabilistic) game where the winner is the person who predicts the event that occurs first among a set of events. Examples of first-past-the-post games include so-called block and hidden patterns and the Penney-Ante game invented by Walter Penney. We formalise the abstract notion of a first-past-the-post game, and the process of extending a probability distribution on symbols of an alphabet to the plays of a game. We establish a number of properties of such games, for example, the property that an incomplete first-past-the-post game is also a first-past-the-post game. Penney-Ante games are multi-player games characterised by a collection of regular, prefix-free languages. Analysis of such games is facilitated by a collection of simultaneous (non-linear) equations in languages. Essentially, the equations are due to Guibas and Odlyzko. However, they did not formulate them as equations in languages but as equations in generating functions detailing lengths of words. For such games, we show how to use the equations in languages to calculate the probability of winning and how to calculate the expected length of a game for a given outcome. We also exploit the properties of first-past-the-post games to show how to calculate the probability of winning in the course of a play of the game. In this way, we avoid the construction of a deterministic finite-state machine or the use of generating functions, the two methods traditionally used for the task. We observe that Aho and Corasick's generalisation of the Knuth-Morris-Pratt pattern-matching algorithm can be used to construct the deterministic finite-state machine that recognises the language underlying a Penney-Ante game. The two methods of calculating the probabilities and expected values, one based on the finite-state machine and the other based on the non-linear equations in languages, have been implemented and verified to yield the same results.

See here for the paper.


The Algorithmics of Solitaire-Like Games

Roland Backhouse, Wei Chen and Joao Ferreira

Science of Computer Programming 78 (2013), pp. 2029--2046, DOI: 10.1016/j.scico.2012.07.007

One-person solitaire-like games are explored with a view to using them in teaching algorithmic problem solving. The key to understanding solutions to such games is the identification of invariant properties of polynomial arithmetic. We demonstrate this via three case studies: solitaire itself, tiling problems and a novel class of one-person games.

The known classification of states of the game of (peg) solitaire into 16 equivalence classes is used to introduce the relevance of polynomial arithmetic. Then we give a novel algebraic formulation of the solution to a class of tiling problems. Finally, we introduce an infinite class of challenging one-person games, which we call ``replacement-set games'', inspired by earlier work by Chen and Backhouse on the relation between cyclotomic polynomials and generalisations of the seven-trees-in-one type isomorphism. We present an algorithm to solve arbitrary instances of replacement-set games and we show various ways of constructing infinite (solvable) classes of replacement-set games.
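
The invariant behind the 16-class classification can be sketched as follows: work in the field GF(4), where ω² = ω + 1, weight a peg at cell (i,j) by ω^(i+j) and by ω^(i-j), and sum each weight over all pegs. Since ω^m + ω^(m+1) = ω^(m+2), every jump preserves both sums, so the pair of sums (4 × 4 possibilities) is invariant. The encoding below is an assumption of the sketch, not the paper's notation:

```python
# GF(4) as bitmasks {0, 1, 2, 3}: addition is XOR, and omega = 2
# satisfies omega**2 = omega + 1; the powers 1, omega, omega**2 are
# the elements 1, 2, 3.
OMEGA_POW = [1, 2, 3]

def scores(pegs):
    """The pair of GF(4) scores of a peg-solitaire position; each
    score is invariant under every legal jump."""
    a = b = 0
    for (i, j) in pegs:
        a ^= OMEGA_POW[(i + j) % 3]
        b ^= OMEGA_POW[(i - j) % 3]
    return (a, b)

def jump(pegs, src, over, dst):
    """Apply a jump from src over an adjacent peg into the empty
    cell dst (collinearity of the three cells is not checked)."""
    assert src in pegs and over in pegs and dst not in pegs
    return (pegs - {src, over}) | {dst}

# a jump leaves both scores unchanged
p = {(0, 0), (0, 1), (2, 2), (3, 2)}
q = jump(p, (0, 0), (0, 1), (0, 2))
print(scores(p) == scores(q))  # -> True
```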

See here for the paper.


Exercises in Algorithm Design

Roland Backhouse

Unpublished

This is a collection of questions in algorithm design which might be adapted for use in an examination.

(Until my retirement, I gave several modules on algorithm design. Each year I developed new exercises and projects for the students to undertake. Whenever possible, I avoided reusing them: earlier exercises together with solutions were made available to the students so that they could see what a truly ``model'' solution looks like.)

See here for the paper.


Designing an algorithmic proof of the two-squares theorem

Joao Ferreira

Mathematics of Program Construction. 10th International Conference, June 2010. Lecture Notes In Computer Science vol. 6120, Claude Bolduc, Jules Desharnais, Bechir Ktari (Eds.) pages 140--156. © Springer-Verlag 2010.

We show a new and constructive proof of the two-squares theorem, based on a somewhat unusual, but very effective, way of rewriting the so-called extended Euclid's algorithm. Rather than simply verifying the result ---as is usually done in the mathematical community--- we use Euclid's algorithm as an interface to investigate which numbers can be written as sums of two positive squares. The precise formulation of the problem as an algorithmic problem is the key, since it allows us to use algorithmic techniques and to avoid guessing. The notion of invariance, in particular, plays a central role in our development: it is used initially to observe that Euclid's algorithm can actually be used to represent a given number as a sum of two positive squares, and then it is used throughout the argument to prove other relevant properties. We also show how the use of program inversion techniques can make mathematical arguments more precise.
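
The observation that Euclid's algorithm represents a prime p ≡ 1 (mod 4) as a sum of two squares can be sketched as follows (the classical Serret-Hermite formulation; the paper's calculational development and its use of program inversion are not reproduced here):

```python
def two_squares(p):
    """Write a prime p = 1 (mod 4) as a sum of two squares: run
    Euclid's algorithm on (p, z) where z*z = -1 (mod p); the first
    two remainders below sqrt(p) are the answer."""
    # find z with z^2 = -1 (mod p): for a quadratic non-residue a,
    # z = a**((p-1)/4) works, so search for a suitable a
    for a in range(2, p):
        z = pow(a, (p - 1) // 4, p)
        if (z * z) % p == p - 1:
            break
    r0, r1 = p, z
    while r0 * r0 > p:      # i.e. while r0 > sqrt(p)
        r0, r1 = r1, r0 % r1
    return (r0, r1)

x, y = two_squares(13)
print(x * x + y * y)  # -> 13
```

For example, for p = 13 we may take z = 5 (since 25 ≡ -1); the remainder sequence 13, 5, 3, 2 yields the pair (3, 2), and 9 + 4 = 13.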

See here for the paper.


On Euclid's Algorithm and Elementary Number Theory

Roland Backhouse and Joao Ferreira

Science of Computer Programming, vol.76, no.3, pages 160--180, 2011

Algorithms can be used to prove and to discover new theorems. This paper shows how algorithmic skills in general, and the notion of invariance in particular, can be used to derive many results from Euclid's algorithm. We illustrate how to use the algorithm as a verification interface (i.e., how to verify theorems) and as a construction interface (i.e., how to investigate and derive new theorems).

The theorems that we verify are well-known and most of them are included in standard number-theory books. The new results concern distributivity properties of the greatest common divisor and a new algorithm for efficiently enumerating the positive rationals in two different ways. One way is known and is due to Moshe Newman. The second is new and corresponds to a deforestation of the Stern-Brocot tree of rationals. We show that both enumerations stem from the same simple algorithm. In this way, we construct a Stern-Brocot enumeration algorithm with the same time and space complexity as Newman's algorithm. A short review of the original papers by Stern and Brocot is also included.
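
Newman's algorithm is short enough to state in full: starting from 1, each rational x is followed by 1/(2*floor(x) + 1 - x). The sketch below shows only this known enumeration, not the Stern-Brocot deforestation derived in the paper:

```python
from fractions import Fraction
from math import floor

def rationals():
    """Newman's algorithm: starting from 1, each positive rational x
    is followed by 1/(2*floor(x) + 1 - x); every positive rational
    appears exactly once in the resulting sequence."""
    x = Fraction(1)
    while True:
        yield x
        x = 1 / (2 * floor(x) + 1 - x)

gen = rationals()
print([str(next(gen)) for _ in range(7)])
# -> ['1', '1/2', '2', '1/3', '3/2', '2/3', '3']
```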

See here for the paper.


Principles of Algorithmic Problem Solving

Joao Ferreira

PhD Thesis, School of Computer Science, University of Nottingham, 2010

Algorithmic problem solving provides a radically new way of approaching and solving problems in general by using the advances that have been made in the basic principles of correct-by-construction algorithm design. The aim of this thesis is to provide educational material that shows how these advances can be used to support the teaching of mathematics and computing.

We rewrite material on elementary number theory and we show how the focus on the algorithmic content of the theory allows the systematisation of existing proofs and, more importantly, the construction of new knowledge in a practical and elegant way. For example, based on Euclid's algorithm, we derive a new and efficient algorithm to enumerate the positive rational numbers in two different ways, and we develop a new and constructive proof of the two-squares theorem.

Because the teaching of any subject can only be effective if the teacher has access to abundant and sufficiently varied educational material, we also include a catalogue of teaching scenarios. Teaching scenarios are fully worked out solutions to algorithmic problems together with detailed guidelines on the principles captured by the problem, how the problem is tackled, and how it is solved. Most of the scenarios have a recreational flavour and are designed to promote self-discovery by the students.

Based on the material developed, we are convinced that goal-oriented, calculational algorithmic skills can be used to enrich and reinvigorate the teaching of mathematics and computing.

See here for the paper.


The Elements of Dynamic Programming

Roland Backhouse

Unpublished Lecture Notes

So-called ``dynamic programming'' is a technique for constructing efficient solutions to certain optimisation problems and is often discussed in textbooks on algorithms and operations research. This paper presents a simple example of the use of the technique. All components of the technique are detailed with full mathematical rigour.
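
For readers who have not met the technique, its flavour can be conveyed by a standard textbook instance (not the example developed in the paper): compute the minimum number of coins making a given amount by tabulating the optimal solutions of all smaller amounts.

```python
def min_coins(coins, amount):
    """Dynamic programming: the minimum number of coins summing to
    'amount', built bottom-up from the optimal solutions of all
    smaller amounts."""
    INF = float("inf")
    best = [0] + [INF] * amount
    for a in range(1, amount + 1):
        best[a] = min((best[a - c] + 1 for c in coins if c <= a),
                      default=INF)
    return best[amount]

print(min_coins([1, 5, 6, 8], 11))  # -> 2  (5 + 6)
```

The instance illustrates why tabulation is needed: the greedy choice (largest coin first) gives 8 + 1 + 1 + 1, four coins, whereas the optimal solution uses two.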

See here for the paper.


The Algorithmics of Solitaire-Like Games

Roland Backhouse, Wei Chen and Joao Ferreira

Mathematics of Program Construction. 10th International Conference, June 2010. Lecture Notes In Computer Science vol. 6120, Claude Bolduc, Jules Desharnais, Bechir Ktari (Eds.) pages 1--18. © Springer-Verlag 2010.

Puzzles and games have been used for centuries to nurture problem-solving skills. Although often presented as isolated brain-teasers, the desire to know how to win makes games ideal examples for teaching algorithmic problem solving. With this in mind, this paper explores one-person solitaire-like games.

The key to understanding solutions to solitaire-like games is the identification of invariant properties of polynomial arithmetic. We demonstrate this via three case studies: solitaire itself, tiling problems and a collection of novel one-person games. The known classification of states of the game of (peg) solitaire into 16 equivalence classes is used to introduce the relevance of polynomial arithmetic. Then we give a novel algebraic formulation of the solution to a class of tiling problems. Finally, we introduce an infinite class of challenging one-person games inspired by earlier work by Chen and Backhouse on the relation between cyclotomic polynomials and generalisations of the seven-trees-in-one type isomorphism. We show how to derive algorithms to solve these games.

See here for the paper.


Which Mathematics for the Information Society?

Joao F. Ferreira, Alexandra Mendes, Roland Backhouse and Luis S. Barbosa

Teaching Formal Methods, 2nd International Conference, November 2009. Lecture Notes In Computer Science vol. 5846, Jeremy Gibbons and Jose Nuno Oliveira (Eds.) pages 39--56. © Springer-Verlag 2009.

MathIS is a new project that aims to reinvigorate secondary-school mathematics by exploiting insights of the dynamics of algorithmic problem solving. This paper describes the main ideas that underpin the project. In summary, we propose a central role for formal logic, the development of a calculational style of reasoning, the emphasis on the algorithmic nature of mathematics, and the promotion of self-discovery by the students. These ideas are discussed and the case is made, through a number of examples that show the teaching style that we want to introduce, for their relevance in shaping mathematics training for the years to come. In our opinion, the education of software engineers that work effectively with formal methods and mathematical abstractions should start before university and would benefit from the ideas discussed here.

See here for the paper.


First-past-the-post Games

Roland Backhouse

J. Gibbons and P. Nogueira (Eds.), MPC 2012, LNCS 7342, pp.157-176, 2012. © Springer-Verlag

Informally, a first-past-the-post game is a (probabilistic) game where the winner is the person who predicts the event that occurs first among a set of events. Examples of first-past-the-post games include so-called block and hidden patterns and the Penney-Ante game invented by Walter Penney. We formalise the abstract notion of a first-past-the-post game, and the process of extending a probability distribution on symbols of an alphabet to the plays of a game.

Analysis of first-past-the-post games depends on a collection of simultaneous (non-linear) equations in languages. Essentially, the equations are due to Guibas and Odlyzko but they did not formulate them as equations in languages but as equations in generating functions detailing lengths of words.

Penney-Ante games are two-player games characterised by a collection of regular, prefix-free languages. For such two-player games, we show how to use the equations in languages to calculate the probability of winning. The formula generalises a formula due to John H. Conway for the original Penney-Ante game. At no point in our analysis do we use generating functions. Even so, we are able to calculate probabilities and expected values. Generating functions do appear to become necessary when higher-order cumulatives (for example, the standard deviation) are also required.
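
For the original fair-coin game, Conway's formula can be sketched as follows (a sketch of the classical formula only; the paper's derivation via equations in languages is not reproduced here):

```python
from fractions import Fraction

def leading(x, y):
    """Conway's leading number L(x, y): the sum of 2**(k-1) over
    those k for which the last k symbols of x equal the first k
    symbols of y."""
    return sum(2 ** (k - 1)
               for k in range(1, min(len(x), len(y)) + 1)
               if x[-k:] == y[:k])

def win_probability(a, b):
    """Probability that pattern a appears before pattern b in a fair
    coin-flip sequence, by Conway's formula:
    odds(a beats b) = (L(b,b) - L(b,a)) : (L(a,a) - L(a,b))."""
    odds_a = leading(b, b) - leading(b, a)
    odds_b = leading(a, a) - leading(a, b)
    return Fraction(odds_a, odds_a + odds_b)

print(win_probability("THH", "HHH"))  # -> 7/8
```

The printed value reproduces the well-known fact that THH beats HHH with probability 7/8.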

See here for the paper.


Meeting a Fanclub: A Lattice of Generic Shape Selectors

Roland Backhouse, Richard Bird and Paul Hoogendijk

Accepted for publication in Workshop on Generic Programming WGP09, 30th August 2009. This is the authors' version of the work. It is published here with the permission of ACM for your personal use. Not for redistribution. The definitive version of the article was published in WGP'09, Proceedings of the 2009 ACM SIGPLAN Workshop on Generic Programming, August 30, 2009, pp. 73--83, ISBN: 978-1-60558-510-9/09/08.

The ``fan'' of a datatype F is a relation that holds between a value x and an arbitrary F structure in which the only stored value is x. Fans are used to make precise the notion of the shape of a data structure.

For arbitrary datatypes F, G and H, we consider six different ways of composing their fans in order to construct F structures of G structures of H structures; each of the six imposes a different requirement on the shape of the substructures. We catalogue the relationships between different combinations of the constructions. We apply the result to the shape properties of a natural transformation (aka polymorphic relation) from G structures to FG structures.

See here for the paper.


The Capacity C Torch Problem

Roland Backhouse

In Mathematics of Program Construction 9th International Conference, MPC 2008, Marseille, France, July 15-18, 2008, pages 57-78 © Springer-Verlag

The torch problem (also known as the bridge problem or the flashlight problem) is about getting a number of people across a bridge as quickly as possible under certain constraints. Although the problem is very simply stated, its solution is surprisingly non-trivial. The case in which there are just four people and the capacity of the bridge is two is a well-known puzzle, widely publicised on the internet. We consider the general problem where the number of people, their individual crossing times and the capacity of the bridge are all input parameters. We present an algorithm that determines the shortest total crossing time; the number of primitive computations executed by the algorithm (i.e. the worst-case time complexity of the algorithm) is proportional to the square of the number of people.
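For small instances, the shortest total crossing time can be checked by exhaustive search. A sketch (the function name is ours, and this is a naive Dijkstra search over bridge states, exponential in the number of people, not the paper's quadratic algorithm):

```python
import heapq
from itertools import combinations

def shortest_crossing(times, capacity):
    # Dijkstra over states (people still on the start side, torch side).
    n = len(times)
    everyone = frozenset(range(n))
    start = (everyone, True)        # torch starts on the start side
    best = {start: 0}
    pq = [(0, 0, everyone, True)]   # (elapsed, tiebreak, side, torch)
    tie = 0
    while pq:
        d, _, side, torch = heapq.heappop(pq)
        if d > best.get((side, torch), float("inf")):
            continue                # stale queue entry
        if not side and not torch:
            return d                # everyone (and the torch) across
        pool = side if torch else everyone - side
        for k in range(1, capacity + 1):
            for group in combinations(sorted(pool), k):
                cost = max(times[p] for p in group)
                new_side = side - set(group) if torch else side | set(group)
                state = (new_side, not torch)
                if d + cost < best.get(state, float("inf")):
                    best[state] = d + cost
                    tie += 1
                    heapq.heappush(pq, (d + cost, tie, new_side, not torch))
    return None

# the famous four-person, capacity-two instance
print(shortest_crossing([1, 2, 5, 10], 2))
```

For the well-known instance with crossing times 1, 2, 5 and 10 and capacity 2, the search confirms the optimal total of 17.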

Note added 14/02/2012. The statement of theorem 3 in the published conference paper is incorrect. It has been corrected in this version of the paper. Thanks to Ole Rummel for pointing out the error.

See here for the paper.


Datatype-Generic Termination Proofs

Roland Backhouse and Henk Doornbos

In Theory of Computing Systems (2008) 43: 362--393, DOI: 10.1007/s00224-007-9056-z © Springer-Verlag

Datatype-generic programs are programs that are parameterised by a datatype. We review the allegorical foundations of a methodology of designing datatype-generic programs. The notion of F-reductivity, where F parametrises a datatype, is reviewed and a number of its properties are presented. The properties are used to give concise, effective proofs of termination of a number of datatype-generic programming schemas. The paper concludes with a concise proof of the well-foundedness of a datatype-generic occurs-in relation.

See here for the paper.


Recounting the Rationals. Twice!

Roland Backhouse and Joao Ferreira

In Mathematics of Program Construction 9th International Conference, MPC 2008, Marseille, France, July 15-18, 2008, pages 79-91. © Springer-Verlag

We derive an algorithm that enables the rationals to be enumerated in two different ways. One way is known, and is called Calkin-Wilf-Newman enumeration; the second is new and corresponds to a flattening of the Stern-Brocot tree of rationals. We show that both enumerations stem from the same simple algorithm. In this way, we construct a Stern-Brocot enumeration algorithm with the same time and space complexity as Calkin-Wilf-Newman enumeration.
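The Calkin-Wilf-Newman enumeration mentioned above has a strikingly simple successor function, due to Moshe Newman: the rational after x is 1/(2*floor(x) + 1 - x). A minimal sketch:

```python
from fractions import Fraction

def newman_next(x):
    # Moshe Newman's successor rule for the Calkin-Wilf enumeration:
    # the rational after x is 1 / (2*floor(x) + 1 - x).
    return 1 / (2 * (x.numerator // x.denominator) + 1 - x)

xs = [Fraction(1)]
for _ in range(6):
    xs.append(newman_next(xs[-1]))
# xs enumerates 1, 1/2, 2, 1/3, 3/2, 2/3, 3, ...
```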

See here for the paper.


Datatype-generic Reasoning

Roland Backhouse

In Logical Approaches to Computational Barriers, Arnold Beckmann, Ulrich Berger, Benedikt L\"owe, John V. Tucker (Eds.), Springer-Verlag, LNCS 3988, pp. 21--34 © Springer-Verlag

Datatype-generic programs are programs that are parameterised by a datatype. Designing datatype-generic programs brings new challenges and new opportunities. We review the allegorical foundations of a methodology of designing datatype-generic programs. The effectiveness of the methodology is demonstrated by an extraordinarily concise proof of the well-foundedness of a datatype-generic occurs-in relation.

See here for the paper.


Calculational Combinatorics

Roland Backhouse and Arjan Mooij

Unpublished

We present a new generalised distributivity rule for manipulating quantified expressions. We use this rule to calculate the solution to a well-known light-bulb problem, which is traditionally solved using induction. Our rule appears to have general applicability to combinatorial problems.

See here for the paper.


Exercises in Quantifier Manipulation

Roland Backhouse and Diethard Michaelis

In Mathematics of Program Construction 2006, Tarmo Uustalu (Ed.), Springer-Verlag, LNCS 4014, pp. 70--81 © Springer-Verlag

The Eindhoven quantifier notation is systematic, unlike standard mathematical notation. This has the major advantage that calculations with quantified expressions become more straightforward because the calculational rules need be given just once for a great variety of different quantifiers.

We demonstrate the ease of calculation with finite quantifications by considering a number of examples. Two are simple warm-up exercises, using boolean equality as the quantifier. Three are taken from books of challenging mathematical problems, and one is a problem concocted by the authors to demonstrate the techniques.

See here for the paper.


A De Morgan-like Theorem for Increasing Sequences

Roland Backhouse and Diethard Michaelis

Unpublished. (Shortened version included in "Exercises in Quantifier Manipulation", MPC 2006.)

We formulate and prove a theorem on increasing sequences akin to De Morgan's law in boolean algebra. We apply the theorem to a problem on summing absolute differences, due to Vyacheslav Proizvolov. Our theorem is more general than Proizvolov's, as demonstrated by several novel applications.
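Proizvolov's original problem can be checked concretely: partition {1,...,2n} into two n-element sets, pair the elements of one in increasing order with the elements of the other in decreasing order, and the absolute differences always sum to n squared. A sketch of that special case (the function name is ours; the paper's theorem is more general):

```python
from itertools import combinations

def sum_of_differences(first_set):
    # Proizvolov's identity: partition {1,...,2n} into two n-element
    # sets, sort one increasingly and the other decreasingly, and the
    # absolute differences of the pairs always sum to n^2.
    n = len(first_set)
    rest = set(range(1, 2 * n + 1)) - set(first_set)
    a = sorted(first_set)
    b = sorted(rest, reverse=True)
    return sum(abs(x - y) for x, y in zip(a, b))

# every partition of {1,...,6} into two triples yields 9 = 3^2
assert all(sum_of_differences(c) == 9 for c in combinations(range(1, 7), 3))
```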

See here for the paper.


A Problem of Absolute Differences

Roland Backhouse and Diethard Michaelis

Unpublished. (Included in "Exercises in Quantifier Manipulation", MPC 2006.)

We provide a calculational solution to a problem of summing a sequence of absolute differences.

See here for the paper.


Closure Algorithms and the Star-Height Problem of Regular Languages

Roland Backhouse

PhD thesis, University of London, 1975.

The main motivation for this work is the star-height problem of regular languages proposed by Eggan.

We begin by investigating in some detail analogies between regular algebra and linear algebra. It is shown how, using a few simple tautologies of regular algebra, one can derive product forms for the closure A* of a matrix A analogous to the Gauss and Jordan product forms for (I-A)^{-1} in linear algebra. Methods for finding A* analogous to the iterative methods of linear algebra are also briefly discussed.

Following this we present various results in the theory of factors of regular languages. We show that there is a unique minimal starth root G_Q of the factor matrix |Q| of a regular language Q, and give an algorithm to calculate G_Q. It is then shown that the factor matrix of a factor H of Q is a submatrix of the factor matrix |Q|. This leads to an algorithm to calculate the closure (G_Q)* of G_Q. The algorithm yields a regular expression for Q which is proven to have star-height always less than or equal to (and in some cases strictly less than) the rank of G_Q. Moreover, the algorithm has no analogue in linear algebra.

The algorithm to find (G_Q)* does not always find a minimal star-height expression for Q, and the star-height problem remains open. However, it is suggested that the algorithm may be more generally applicable (than just to factor graphs), thus possibly giving fresh insight into the star-height problem.

See here for the paper.


Factor Graphs and Factor Matrices

Roland Backhouse

Extract from Closure Algorithms and the Star Height Problem of Regular Languages, PhD thesis, University of London, 1975.

Conway (Regular algebra and Finite Machines, Chapman and Hall, 1971) introduced the notion of the factor matrix of a regular language. This document introduces the notion of the factor graph (the transitive reduction of the factor matrix) and shows how it is constructed from the machine, anti-machine and semigroup of a language.

See here for the paper.


Factor Graphs, Failure Functions and Bi-Trees

R.C. Backhouse and R.K. Lutz

In Fourth Colloquium on Automata, Languages and Programming, A. Salomaa and M. Steinby (Eds.), Springer-Verlag, LNCS 52, 1977, pp. 61--75

The factors and factor matrix of a language are defined and their properties stated. It is shown that the factor matrix of a language Q has a unique starth root --- called the factor graph of Q. The Knuth-Morris-Pratt pattern-matching algorithm, its extensions and Weiner's substring identifier algorithm are all shown to correspond to finding the factor graph of some regular language.
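The failure function at the heart of the Knuth-Morris-Pratt algorithm can be sketched as follows (this shows only the standard table computation, not its correspondence to factor graphs established in the paper):

```python
def failure_function(pattern):
    # fail[i] = length of the longest proper border (a prefix that is
    # also a suffix) of pattern[:i+1], as used by Knuth-Morris-Pratt.
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = fail[k - 1]         # fall back to the next shorter border
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    return fail

print(failure_function("ababaca"))
```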

See here for the paper.


A Calculational Presentation of Impartial Two-Person Games

Roland Backhouse and Diethard Michaelis

Abstract presented at RelMiCS8

We develop the theory of two-person impartial games in a calculational style making clear and precise all the logical principles involved.

See here for the paper.


A Comparison of Gaussian and Gauss-Jordan Elimination in Regular Algebra

R.C. Backhouse and B.A. Carré

International Journal of Computer Mathematics 10, 3--4, 311--325, doi: 10.1080/00207168208803290

A comparison is presented in regular algebra of the Gaussian and Gauss-Jordan elimination techniques for solving sparse systems of simultaneous equations. Specifically, the elimination form and product form of the star A^{*} of a matrix A are defined and it is then shown that the product form is never more sparse than the elimination form. This result generalises an earlier one due to Brayton, Gustavson and Willoughby in which it is shown that the product form of the inverse A^{-1} of a matrix A is never more sparse than the elimination form of the inverse. Our result applies both in linear algebra and, more generally, to path-finding problems.

See here for the paper.


Regular Algebra Applied to Path-Finding Problems

R.C. Backhouse and B.A. Carré

J.Inst.Maths.Applics (1975) 15, 161--186. Reproduced with the permission of Oxford University Press.

In an earlier paper, one of the authors presented an algebra for formulating and solving extremal path problems. There are striking similarities between that algebra and the algebra of regular languages, which lead one to consider whether the previous results can be generalized --for instance to path enumeration problems-- and whether the algebra of regular languages can itself be profitably used for the general study of path-finding problems. This paper gives affirmative answers to both these questions.
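A standard instance of the path-finding connection: over the (min, +) algebra, the closure A* of an edge-weight matrix yields all shortest-path weights. A sketch (an illustrative Floyd-Warshall-style closure computation, not the paper's development):

```python
INF = float("inf")

def star(a):
    # Closure A* of a matrix over the (min, +) regular algebra: entry
    # [i][j] becomes the least weight of any path from i to j (assuming
    # no negative cycles).  Structurally this is Floyd-Warshall.
    n = len(a)
    d = [row[:] for row in a]
    for i in range(n):
        d[i][i] = min(d[i][i], 0)   # unit of the star: the empty path
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d

g = [[INF, 3, 8],
     [INF, INF, 2],
     [1, INF, INF]]
# star(g)[0][2] == 5, via the path 0 -> 1 -> 2
```

Substituting other regular algebras (e.g. languages under union and concatenation) for (min, +) gives path enumeration rather than shortest paths, which is the generalisation the paper pursues.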

See here for the paper.


Regular Algebra Applied to Language Problems

Roland Backhouse

Journal of Logic and Algebraic Programming. 66 (2006) 71--111. © Elsevier Science Direct.

Many functions on context-free languages can be expressed in the form of the least fixed point of a function whose definition mimics the grammar of the given language. Examples include the function returning the length of the shortest word in a language, and the function returning the smallest number of edit operations required to transform a given word into a word in a language.

This paper presents the basic theory that explains when a function on a context-free language can be defined in this way. It is shown how the theory can be applied in a methodology for programming the evaluation of such functions.

Specific results include a novel definition of a regular algebra focussing on the existence of so-called ``factors'', and several constructions of non-trivial regular algebras. Several challenging problems are given as examples, some of which are old and some of which are new.

Erratum Page 75. Interchange c and d on the right side of the third displayed equation.

See here for the paper.


Fixed-Point Characterisation of Winning Strategies in Impartial Games

Roland Backhouse and Diethard Michaelis

Abridged version (omitting proofs) in LNCS 3051, Relational and Kleene-Algebraic Methods in Computer Science, Rudolf Berghammer, Bernhard Möller and Georg Struth (Eds.), pp 34--47, 2004 © Springer-Verlag)

We use fixed-point calculus to characterise winning strategies in impartial, two-person games. A byproduct is the fixed-point characterisation of winning, losing and stalemate positions. We expect the results to be most useful in teaching calculational reasoning about least and greatest fixed points.
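For a concrete subtraction game, the winning and losing positions can be computed as the least fixed point of the usual win/lose equations, evaluated bottom-up over the well-founded move relation. A sketch (the function name is ours):

```python
def winning_table(moves, limit):
    # win[p] holds iff some move from position p reaches a losing
    # position: the least fixed point of the win/lose equations for a
    # subtraction game, computed bottom-up (positions with no moves
    # are losing, vacuously).
    win = [False] * (limit + 1)
    for p in range(limit + 1):
        win[p] = any(p >= m and not win[p - m] for m in moves)
    return win

# with moves {1, 2, 3} the losing positions are the multiples of 4
win = winning_table([1, 2, 3], 12)
print([p for p in range(13) if not win[p]])
```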

See here for the paper.


Safety of Abstract Interpretations for Free, via Logical Relations and Galois Connections

Kevin Backhouse and Roland Backhouse

In Science of Computer Programming, vol. 15, nos. 1--2, pp. 153--196, 2004.

Algebraic properties of logical relations on partially ordered sets are studied. It is shown how to construct a logical relation that extends a collection of base Galois connections to a Galois connection of arbitrary higher-order type. ``Theorems-for-free'' is used to show that the construction ensures safe abstract interpretation of parametrically polymorphic functions. The properties are used to show how abstract interpretations of program libraries can be constructed.

See here for the paper.


Generic Properties of Datatypes

Roland Backhouse and Paul Hoogendijk

Presented at Summer School on Generic Programming, Oxford, August 2002 © Springer-Verlag.

Generic programming adds a new dimension to the parametrisation of programs by allowing programs to be dependent on the structure of the data that they manipulate. Apart from the practical advantages of improved productivity that this offers, a major potential advantage is a substantial reduction in the burden of proof -- by making programs more general we can also make them more robust and more reliable. These lectures discuss a theory of datatypes based on the algebra of relations which forms a basis for understanding datatype-generic programs. We review the notion of parametric polymorphism much exploited in conventional functional programming languages and show how it is extended to the higher-order notions of polymorphism relevant to generic programming.

See here for the paper.


Galois Connections and Logical Relations

Kevin Backhouse and Roland Backhouse

Int. Conf. on Mathematics of Program Construction, Dagstuhl, Germany, July 2002. Springer Verlag LNCS 2386, pp. 23--39. © Springer-Verlag

Algebraic properties of logical relations on partially ordered sets are studied. It is shown how to construct a logical relation that extends a collection of base Galois connections to a Galois connection of arbitrary higher-order type. ``Theorems-for-free'' is used to show that the construction ensures safe abstract interpretation of parametrically polymorphic functions.

See here for the paper.


Mathematics and Programming. A Revolution in the Art of Effective Reasoning

Roland Backhouse

Inaugural lecture, School of Computer Science and IT, University of Nottingham, 24th October 2001.

The modern world is highly dependent on the reliable functioning of computer software. The sheer scale of software systems makes their design and implementation a highly demanding intellectual activity. Meeting these demands has inspired a revolution in the way that mathematics, the art of effective reasoning, is conducted and presented. Continued effort is needed in education and research in the mathematical construction of programs, based on the controlled manipulation of mathematical formulae.

See here for the paper.


Mathematics of Recursive Program Construction

Henk Doornbos and Roland Backhouse

Unpublished.

A discipline for the design of recursive programs is presented. The core concept of the discipline is a new induction principle. The principle is based on a property of relations, called reductivity, that generalises the property of admitting induction to one relative to a given datatype. The principle is used to characterise a broad class of recursive equations that have a unique solution and is also related to standard techniques for proving termination of programs. Methods based on the structure of the given datatype for constructing reductive relations are discussed.

See here for the paper.


Find (Refined)

Roland Backhouse

Unpublished.

A new implementation of Hoare's Find program is developed. The program determines the Kth smallest element in an (unordered) array of values and how often it occurs.

We argue that Find provides an excellent example of the design of an algorithm that is non-trivial, correct by construction, and yet within the scope of an introductory module on (formal) program design.

See here for the paper.


Fusion on Languages

Roland Backhouse

10th European Symposium on Programming, ESOP 2001. Springer LNCS 2028, pp. 107--121. © Springer-Verlag

Many functions on context-free languages can be expressed in the form of the least fixed point of a function whose definition mimics the grammar of the given language. This paper presents the basic theory that explains when a function on a context-free language can be defined in this way. The contributions are: a novel definition of a regular algebra capturing division properties, several theorems showing how complex regular algebras are built from simpler ones, and the application of fixed point theory and Galois connections to practical programming problems.

See here for the paper.


The Associativity of Equivalence and the Towers of Hanoi Problem

Roland Backhouse and Maarten Fokkinga

Information Processing Letters, 77 (Feb. 2001) pp 71--76.

Dijkstra and Scholten have argued that greater use should be made of the associativity of equivalence. This note shows how the property is used in specifying the rotation of the disks in the well-known Towers of Hanoi problem.
Keywords: propositional calculus, program proof.

See here for the paper.


Galois Connections and Fixed Point Calculus

Roland Backhouse

Lecture Notes for module Programming Algebra, Nottingham, 2001

Fixed point calculus is about the solution of recursive equations defined by a monotonic endofunction on a partially ordered set. This tutorial presents the basic theory of fixed point calculus together with a number of applications of direct relevance to the construction of computer programs. The tutorial also presents the theory and application of Galois connections between partially ordered sets. In particular, the intimate relation between Galois connections and fixed point equations is amply demonstrated.
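The defining adjunction of a Galois connection can be tested exhaustively on finite fragments of posets. A sketch (names ours) checking the classic connection between multiplication and integer division on the naturals:

```python
def is_galois_connection(f, g, xs, zs):
    # f (lower adjoint) and g (upper adjoint) form a Galois connection
    # between two posets iff  f(x) <= z  <=>  x <= g(z)  for all x, z.
    return all((f(x) <= z) == (x <= g(z)) for x in xs for z in zs)

c = 3
# multiplication by c and integer division by c are Galois connected
# on the naturals:  x*c <= z  <=>  x <= z // c
assert is_galois_connection(lambda x: x * c, lambda z: z // c,
                            range(50), range(50))
```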

See here for the paper.


Fixed Point Calculus

Roland Backhouse

Presented at the Summer School and Workshop on Algebraic and Coalgebraic Methods in the Mathematics of Program Construction, Oxford, April 11--14, 2000.

A shortened version of this paper appears in the published proceedings of the Summer School Algebraic and Coalgebraic Methods in the Mathematics of Program Construction. LNCS 2297, 2002, pp. 89--148. © Springer-Verlag.

Fixed point calculus is about the solution of recursive equations defined by a monotonic endofunction on a partially ordered set. This tutorial discusses applications of fixed point calculus in the construction of computer programs, beginning with standard applications and progressing to recent research.

The basic properties of least and greatest fixed points are presented. Well-foundedness and inductive properties of relations are expressed in terms of fixed points. A class of fixed point equations, called ``hylo'' equations, is introduced. A methodology of recursive program design based on the use of hylo equations is presented. Current research on generalisations of well-foundedness and inductive properties of relations, making these properties relative to a datatype, is introduced.
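On a finite poset, the least fixed point of a monotonic function can be computed by Kleene iteration from the bottom element. A sketch (names and the reachability example are ours):

```python
def lfp(f, bottom):
    # Kleene iteration: for a monotonic f on a finite poset the least
    # fixed point is the limit of bottom, f(bottom), f(f(bottom)), ...
    x = bottom
    while f(x) != x:
        x = f(x)
    return x

# reachability in a graph as the least fixed point of
#   F(S) = {source} U { successors of members of S }
edges = {1: {2, 3}, 2: {4}, 3: {4}, 4: set()}
reach = lfp(lambda s: frozenset({1}) | frozenset(q for p in s for q in edges[p]),
            frozenset())
# reach == frozenset({1, 2, 3, 4})
```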

See here for the paper.


Algebra of Program Termination

Henk Doornbos and Roland Backhouse

Presented at the Summer School and Workshop on Algebraic and Coalgebraic Methods in the Mathematics of Program Construction, Oxford, April 11--14, 2000.

Proceedings of the Summer School Algebraic and Coalgebraic Methods in the Mathematics of Program Construction. LNCS 2297, 2002, pp. 89--148. © Springer-Verlag.

Well-foundedness and inductive properties of relations are expressed in terms of fixed points. A class of fixed point equations, called ``hylo'' equations, is introduced. A methodology of recursive program design based on the use of hylo equations is presented. Current research on generalisations of well-foundedness and inductive properties of relations, making these properties relative to a datatype, is introduced.

See here for the paper.


Towards Tool Support for Program Verification and Construction

Richard Verhoeven and Roland Backhouse

FM '99 -- Formal Methods, Jeanette Wing, Jim Woodcock and Jim Davies (Eds.), LNCS 1709, pp. 1128--1146, September 1999

MathSpad is a document preparation system designed and developed by the authors and oriented towards the calculational construction of programs. PVS (Prototype Verification System) is a theorem checker developed at SRI that has been extensively used for verifying software, in particular in safety-critical applications. This paper describes how these two systems have been combined into one. We discuss the potential benefits of the combination seen from the viewpoint of someone wanting to use formal methods for the construction of computer programs, and we discuss the architecture of the combined system for the benefit of anyone wanting to investigate combining the MathSpad system with other programming tools.

See here for the paper.


Inductive Datatypes with Laws and Subtyping --- A Relational Model

Ed Voermans

Ph.D.thesis, Eindhoven University of Technology, January 1999

(From the introduction)
Inductive datatypes, datatypes where elements of the type occur as ``subcomponents'' of other elements of the type, are an essential feature of all modern programming languages. Commonly used examples of such types are binary trees, where a tree can have other binary trees as subtrees, and cons-lists, where the tail of a cons-list is another cons-list. A standard mathematical method for reasoning about such datatypes and programs operating with these types was developed by Malcolm. He constructed an elegant generic theory of free inductive datatypes using category theory based on the concepts of functors and initial algebras. By generic we mean parameterised by the shape of the datatype.
A limitation of this theory is that it only deals with free datatypes, types without rules specifying equality of elements or restrictions on the construction of elements. In practice there are many common datatypes that are not free. For example, join-lists have associativity laws on the join operation, and height-balanced trees can not be constructed using arbitrary subtrees. Fokkinga extended Malcolm's theory to datatypes with laws, but was not able to handle restrictions on the construction of elements (subtyping).
Other, set-theoretical, theories about inductive datatypes can handle both laws and subtyping but have as disadvantage that they treat laws and subtyping as dual concepts. This complicates reasoning about datatypes that combine both laws and subtyping. An example of a type combining both concepts is the AVL-tree, where different trees can be used to represent the same set of values (law), but where it is not allowed to join two arbitrary AVL-trees to construct a new valid AVL tree (restriction).
The goal of this thesis is to develop a theory about inductive datatypes that can handle laws and subtyping in a uniform way. The theory should predict when (recursively defined) operations are well-defined and when they are uniquely defined. The theory should also provide a sound basis for the construction and verification of generic programs.
The theory of inductive datatypes presented in this thesis was inspired by the category-theoretical approach but uses a point-free relational calculus to model both datatypes and programs. One of the main advantages of using the relational calculus is that it opens up the possibility of working with lattices where extreme solutions to equations are uniquely defined. Category theory always gives solutions ``up to isomorphism'' that are often less suitable for direct manipulation. The extreme solutions of lattice equations provide unique, canonical representations of the concepts that are being modelled. Datatypes and programs are usually specified as solutions to equations.
Another advantage of the lattice structures that are available when working with relations is the abundant possibility for using Galois connections. Identifying Galois connections and using their calculational properties is a recurring theme throughout the thesis. We prefer a calculational style for constructing and presenting proofs and Galois connections are a great tool for this purpose.
We identify a special class of relations that can be used as representatives for datatypes. These datatypes are the elements of a complete lattice where the ordering represents (the combination of) subtyping and quotient formation. Combining these aspects in a single ordering allows us to find solutions for specifications involving both restrictions (subtyping) and laws (quotients). Combining these features is often difficult in other formalisms for datatypes. This ordering is a vital tool for achieving our goal of a uniform treatment of laws and subtyping.
Our datatype construction methods are inspired by categorical datatype theories and we will construct a category where objects and arrows are relations. Categorical notions like functors, natural transformations and F-algebras lead to relational constructions that are useful for the construction of datatypes and programs.
A variant of F-algebras is used for the introduction of inductive datatypes and structural recursion. An important aspect of datatype construction is simulation, implementing one datatype using another datatype. The notion of simulation can easily be formulated in our theory.
Inductive types that simulate each other form equivalence classes. We prove the remarkable result that every equivalence class contains one special representative. The special representatives form a complete lattice, using our special ordering of datatypes. The elements of the lattice represent all inductively defined datatypes for a given induction structure. Using this lattice, we can describe inductive datatypes with both laws and restrictions as an extreme fixpoint. We will give an equivalent characterization of the extreme fixpoint using a Galois connection. This Galois connection, which defines a closure operation, turns out to be very convenient for proving properties of inductive datatypes.
Laws and restrictions can be specified with equations, which can be combined to a single specification of the datatype. Not only are datatypes described as solutions of equations, but recursively defined operations on these inductive datatypes are also specified as solutions of equations. We will show that a large class of ``recursion structure'' equations for operations on inductive datatypes have at most one solution, so they are suitable as a specification.
Another subject investigated in this thesis is conditions under which parameterisation of inductive datatypes with laws and restrictions is possible. Here we demonstrate that, if the law and restriction equations satisfy certain naturality (``polymorphy'') criteria, parameterisation is possible.

See here for the paper.


Final Dialgebras: From Categories to Allegories

Roland Backhouse and Paul Hoogendijk

Theoretical Informatics, Vol. 33, Nos. 4/5, 1999, pp. 401--426

The study of inductive and coinductive types (like finite lists and streams, respectively) is usually conducted within the framework of category theory, which to all intents and purposes is a theory of sets and functions between sets. Allegory theory, an extension of category theory due to Freyd, is better suited to modelling relations between sets as opposed to functions between sets. The question thus arises of how to extend the standard categorical results on the existence of final objects in categories (for example, coalgebras and products) to their existence in allegories. The motivation is to streamline current work on generic programming, in which the use of a relational theory rather than a functional theory has proved to be desirable.

In this paper, we define the notion of a relational final dialgebra and prove, for an important class of dialgebras, that a relational final dialgebra exists in an allegory if and only if a final dialgebra exists in the underlying category of maps. Instances subsumed by the class we consider include coalgebras and products. An important lemma expresses bisimulations in allegorical terms and proves this equivalent to Aczel and Mendler's categorical definition.

See here for the paper.


Generic Programming. An Introduction

Roland Backhouse, Patrik Jansson, Johan Jeuring, Lambert Meertens

Presented at the 3rd International Summer School on Advanced Functional Programming, Braga, Portugal, 12th-19th September, 1998

Lecture notes for an advanced course on functional programming focusing on generic (polytypic) programming. Contents as follows:
Introduction
Algebras, Functors and Datatypes
PolyP
Generic Unification
From Functions to Relations
Solutions to Exercises
Bibliography
Glossary of Symbols
Index

See here for the paper.


Extracting Proofs from Documents

Roland Backhouse and Richard Verhoeven

Presented at Workshop on User Interfaces for Theorem Provers UITP'98, Eindhoven, The Netherlands.

Often, theorem checkers like PVS are used to check an existing proof, which is part of some document. Since there is a large difference between the notations used in the documents and the notations used in the theorem checkers, it is usually a laborious task to convert an existing proof into a format which can be checked by a machine. In the system that we propose, the author is assisted in the process of converting an existing proof into the PVS language and having it checked by PVS.

See here for the paper.


Pair Algebras and Galois Connections

Roland Backhouse

Information Processing Letters, Vol.67, No.4, 31 August 1998, pp.169--176

Most studies of Galois connections begin with a function and ask the question: when is there a second function that is connected to the first? In possibly the very first application of Galois connections directly related to the modern digital computer, Hartmanis and Stearns posed a subtly different question, namely: when does a relation define two functions that are Galois connected? Such a relation they called a ``pair algebra''. In this paper we derive a general, necessary and sufficient condition for a relation between complete lattices to define a Galois connection between the lattices. We also give several examples of pair algebras illustrating why this seemingly forgotten notion is relevant to the science of computing.

See here for the paper.


Calculating a Round-Robin Scheduler

Matteo Vaccari and Roland Backhouse

International Conference on Mathematics of Program Construction, Marstrand, Sweden, June 1998, LNCS 1422, pp.365--382

Kropf has collected together a number of problems that can be used as benchmarks for hardware verification, one of which is a bus arbiter. The function of the bus arbiter is to assign the use of a shared resource on each clock cycle to one out of N subsystems that may want to use it, and in such a way that no subsystem is denied access forever. The most significant component in its implementation is a round-robin scheduler. Rather than verify the existing implementation of the scheduler, this paper shows how to construct a correct implementation from the given requirements. We make use of both point-free and pointwise relation algebra.

See here for the paper.


Calculational Derivation of Circuits

Matteo Vaccari

Ph.D. Thesis, Università degli Studi di Milano, Italy, May 1998

This work is about the calculational construction of circuits. By ``construction'' we mean obtaining a design by means of formal manipulation of a formal specification. ``Calculational'' refers to the style which we employ, that is to try to reduce all program derivations to straightforward calculation.
In other words, in this thesis we aim to develop a style of calculus that aids the designer in the derivation (and the presentation) of circuits.
We start from an established notation called Ruby, developed by Sheeran and Jones, based on the algebra of binary relations. We then develop our own style of Ruby, through a number of derivation case studies.
The main results of this thesis concern the derivation of regular-language recognizing circuits. We show how an existing design, due to Foster and Kung, can be derived and explained; we point out a limitation in Foster and Kung's design and show how it can be partially overcome.
Other case studies we present are a new derivation of a circuit that solves the carre problem, and a derivation of a round-robin scheduler.
We then show how our designs can be implemented in practice. First we show how to simulate the circuits using Hutton's Ruby simulator; then we go all the way to implementing a compiler from Ruby circuits to a CSP-like notation called Tangram.
Additional tool support is provided by proof checkers. Even though our method is supposed to be mainly used with pencil and paper, it is possible and sometimes useful to check the proofs by machine. We use PVS to verify part of our theory and one of the case studies.

See here for the paper.


MathSpad: A System for On-Line Preparation of Mathematical Documents

Roland Backhouse and Richard Verhoeven

Software--Concepts and Tools (1997) 18: 80--89

\mplogo{} is a structure editor designed to facilitate the writing of documents of a mathematical nature. The principal element of \mplogo{} is a system for defining and using ``stencils'' which define the visual and logical structure of notational elements in a document.

This paper overviews the main features of the editor from a user's viewpoint. Particular attention is given to a discussion of the priorities that guided the system's design --- (on screen) readability, flexibility, writability and the integration of prose with mathematical calculation.

See here for the paper.


When do datatypes commute?

Paul Hoogendijk and Roland Backhouse

Category Theory and Computer Science, LNCS 1290, September 1997, pp. 242--260.

Polytypic programs are programs that are parameterised by type constructors (like List), unlike polymorphic programs which are parameterised by types (like Int). In this paper we formulate precisely the polytypic programming problem of ``commuting'' two datatypes. The precise formulation involves a novel notion of higher order polymorphism. We demonstrate via a number of examples the relevance and interest of the problem, and we show that all ``regular datatypes'' (the sort of datatypes that one can define in a functional programming language) do indeed commute according to our specification. The framework we use is the theory of allegories, a combination of category theory with the point-free relation calculus.

See here for the paper.


A Generic Theory of Data Types

Paul Hoogendijk

Ph.D. Thesis, Eindhoven University of Technology, 1997.

No abstract available

See here for the paper.


Deriving a systolic regular language recognizer

Matteo Vaccari and Roland Backhouse

IFIP TC2 Working Conference on Algorithmic Languages and Calculi, February 1997.

We present a derivation of a regular language recognizing circuit originally developed by Foster and Kung. We make use of point-free relation algebra, in a style combining elements from earlier work in the Eindhoven Mathematics of Program Construction group, and from Ruby. First we derive non-systolic recognizers, much in the same way as functional programs are derived. Then we make use of standard circuit transformation techniques, recast in the relation algebra framework, to obtain circuits that are very close to the ones presented by Foster and Kung.

See here for the paper.


Reductivity.

Henk Doornbos and Roland Backhouse.

Science of Computer Programming, 26, pp. 217--236, 1996.

The notion of reductivity of a relation with respect to a datatype is introduced and related to inductivity and initiality. The use of reductivity in constructing terminating programs is discussed. A calculus of reductivity, discussed in more detail in a companion paper, is introduced.

See here for the paper.


Category Theory As Coherently Constructive Lattice Theory

Roland Backhouse, Marcel Bijsterveld, Rik van Geldrop and Jaap van der Woude.

Working Document, July 1994. Latest version, December 1996.

According to the ``proofs as programs'' paradigm of constructive type theory, programs can be derived as a by-product of the process of proving statements about type structures. Inspired by this paradigm, we show in this document how many theorems in category theory can be viewed as ``constructive'' generalisations of theorems in lattice theory, the added value of category theory being that the constructions are ``coherent''. Several applications are presented concerning the construction of isomorphisms between data structures as a by-product of proofs of equality of two languages. The basis for these applications is a categorical fixed point calculus generalising fixed point calculus in lattice theory. In addition several (to our knowledge) novel results in category theory are derived.

A brief outline of each chapter follows:

1. Basic Definitions. An introduction to category theory aimed at making the document self-contained and accessible to those with no knowledge of category theory.

2. Illustrations. Two illustrations of the idea of ``category theory as coherently constructive lattice theory''. The second is ``Yoneda's lemma'': we show how to understand this lemma as a generalisation of an elementary characterisation of the notion of monotonic function in lattice theory.

3. Adjunctions. The categorical notion of an adjunction is introduced as the ``coherently constructive'' counterpart of the notion of a Galois connection. Several (to our knowledge) novel results on adjunctions are derived.

4. Algebras. The notion of an algebra is introduced as the categorical counterpart to the notion of a prefix point of a monotonic function. The map functor is introduced.

5. Fixed Point Calculus. Categorical fixed point rules are derived generalising the rules of the fixed point calculus (rolling rule, diagonal rule, fusion rule etc.)

6. Monads. Properties of closure operators are generalised to monads. In particular the monad decomposition theorem is presented.

7. Applications. Applications of the foregoing to the derivation of isomorphisms between data structures as the by-product of proofs of equality of languages.

See here for the paper.


What is a data type?

Paul Hoogendijk and Oege de Moor

CS report 96-16, Eindhoven University of Technology. November 1996

A program derivation is said to be polytypic if some of its parameters are data types. Polytypic program derivations necessitate a general, non-inductive definition of `data type'. Here we propose such a definition: a data type is a relator that has membership. It is shown how this definition implies various other properties that are shared by all data types. In particular, all data types have a unique strength, and all natural transformations between data types are strong.

See here for the paper.


Reductivity Arguments and Program Construction

Henk Doornbos

Ph.D. Thesis, Eindhoven University of Technology, 1996.

This work is a contribution to the unification of the imperative and functional styles of programming. Specifications and data types are modelled as relations, and recursive programs as equations in relations. This modelling gives the possibility to develop a style of programming in which programs are constructed from specifications in a calculational way, using the calculus of relations. The programming style incorporates both the fold-unfold techniques of Burstall and Darlington and the refinement calculus of Back and Morgan.

To be able to prove properties of programs and data types, concise formulations of ``well-foundedness'' and ``admitting induction'' are given. Because the formulations are compact and are given in terms of relations, properties of programs can be proved in a compact, yet precise, way and in a purely calculational style. The power of the developed techniques is demonstrated by proving various non-trivial theorems.

The property ``well-foundedness'' of relations is generalised by explicitly introducing a data type in the definition. The generalised notion is called ``reductivity'': well-founded with respect to a data type. Reductivity is used to formalise the concept of terminating program.

A calculus of reductive relations is developed. This calculus can be used to construct recursive programs that are guaranteed to terminate. Because reductivity mentions the structure of the data on which the program operates, the influence of data structure on program structure can be made precise.

See here for the paper.


Mathematics of Program Construction

Roland Backhouse

Science of Computer Programming

Invited editorial accompanying special issue.

See here for the paper.


MathSpad, Tool Support for the Calculational Method

Roland Backhouse and Richard Verhoeven

February 1996

MathSpad is a structure editor designed to facilitate the integration of doing mathematics and writing about it. The principal element of MathSpad is a system for defining and using ``stencils'' which define the visual and logical structure of notational elements in a document. Structure-editing features enable the use of MathSpad to perform symbolic calculations, in non-standard algebras, directly on-screen.

This paper discusses the considerations that influenced the design of MathSpad. Particular attention is given to a discussion of the priorities that guided the system's design. A brief review of the stencil mechanism is presented.

See here for the paper.


Categorical Fixed Point Rules

Roland Backhouse, Marcel Bijsterveld, Rik van Geldrop and Jaap van der Woude

D. Pitt, D.E. Rydeheard and P. Johnstone, editors, Category Theory and Computer Science, 6th International Conference, volume 953 of LNCS, pages 159--179. Springer-Verlag, 1995.

A number of lattice-theoretic fixed point rules are generalised to category theory and applied to the construction of isomorphisms between list structures.

See here for the paper.


Induction and Recursion on Datatypes

Henk Doornbos and Roland Backhouse

B. Moller, editor, Mathematics of Program Construction, 3rd International Conference, volume 947 of LNCS, pages 242--256. Springer-Verlag, July 1995.

A new induction principle is introduced. The principle is based on a property of relations, called reductivity, that generalises the property of admitting induction to one relative to a given datatype. The principle is used to characterise a broad class of recursive equations that have a unique solution and is also related to standard techniques for proving termination of programs. Methods based on the structure of the given datatype for constructing reductive relations are discussed. The principle is formulated in the point-free calculus of relations guaranteeing concision without loss of precision and thus ensuring its amenability to calculation.

See here for the paper.


MathSpad: A system for on-line preparation of mathematical documents

Roland Backhouse, Richard Verhoeven and Olaf Weber

Eindhoven University of Technology, Computing Science Report 95/10, March 1995.

No abstract.

See here for the paper.


A database calculus based on strong monads and partial functions.

Eerke Boiten and Paul Hoogendijk.

March 1995.

The research areas of database programming languages and formal program development have a lot to offer to each other. In particular, both areas share a great interest in data types -- in more theoretical approaches, both have reached into category theory, with its ``functor=datatype'' motto. So-called bulk or collection types have been investigated in both research communities.

This paper sketches an approach to calculi for databases which provide multiple data types. The setup has a categorical flavour, but no more than some intuitive understanding of a few basic concepts of category theory is expected of the reader.

The first layer of the calculus consists of operations on basic values that correspond to the traditional tuples of relational databases. These operations are provided by a category with special (viz. associative and semi-commutative) products. The labels of the tuples are provided by the categorical typing.

In the second layer, these operations are lifted to operations on sets, resulting in the traditional relational algebra operators. The collection of operators that allows us to do that turns out to represent an algebra with sets as ``first-class citizens'', the nested relational algebra. The resulting algebra is, if we abstract from sets, one of strong monads with some additional requirements.

The final section discusses the possibilities of combining and interfacing several such monads.

See here for the paper.


Category Theory as Coherently Constructive Lattice Theory: An Illustration

Roland Backhouse and Marcel Bijsterveld

CS report 94-43, Eindhoven University of Technology. November 1994.

Dijkstra and Scholten have formulated a theorem stating that all disjunctivity properties of a predicate transformer are preserved by the construction of least prefix points. An alternative proof of their theorem is presented based on two fundamental fixed point theorems, the abstraction theorem and the fusion theorem, and the fact that suprema in a lattice are defined by a Galois connection. The abstraction theorem seems to be new; the fusion theorem is known but its importance does not seem to be fully recognised.

The abstraction theorem, the fusion theorem, and Dijkstra and Scholten's theorem are then generalised to the context of category theory and shown to be valid. None of the theorems in this context seems to be known, although specific instances of Dijkstra and Scholten's theorem are known.

The main point of the paper is to discuss the process of drawing inspiration from lattice theory to formulate theorems in category theory (first advocated by Lambek in 1968). We advance the view that, in order to contribute to the development of programming methodology, category theory may be profitably regarded as ``constructive'' lattice theory in which the added value lies in establishing that the constructions are ``coherent''.

This paper was specially prepared for presentation at the meeting of IFIP Working Group 2.3 (Programming Methodology), June 1994. Knowledge of (elementary) lattice theory is assumed. Knowledge of category theory is not.

See here for the paper.


A Calculation Approach to Mathematical Induction

Henk Doornbos, Roland Backhouse and Jaap van der Woude

In Theoretical Computer Science 179 (1997) 103--105

Several concise formulations of mathematical induction are presented and proven equivalent. The formulations are expressed in variable-free relation algebra and thus are in terms of relations only, without mentioning the related objects. It is shown that the induction principle in this form, when combined with the explicit use of Galois connections, lends itself very well to use in calculational proofs. Two non-trivial examples are presented. The first is a proof of Newman's lemma. The second is a calculation of a condition under which the union of two well-founded relations is well-founded. In both cases the calculations lead to generalisations of the known results. In the case of the latter example, one lemma generalises three different conditions.

See here for the paper.


Calculating path algorithms

Roland C. Backhouse, J.P.H.W. van den Eijnde, A.J.M. van Gasteren

Science of Computer Programming 22 (1994) 3--19, doi: 10.1016/0167-6423(94)90005-1 © 1994 Elsevier Science B.V.

A calculational derivation is given of two abstract path algorithms. The first is an all-pairs algorithm, two well-known instances of which are Warshall's (reachability) algorithm and Floyd's shortest-path algorithm; instances of the second are Dijkstra's shortest-path algorithm and breadth-first/depth-first search of a directed graph. The basis for the derivations is the algebra of regular languages.
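Both instances of the all-pairs algorithm share a single computational skeleton; only the underlying algebra varies. The following Python sketch is ours, not the paper's: the function name `all_pairs` and the dict-of-dicts graph representation are illustrative assumptions.

```python
import math

def all_pairs(graph, plus, times, zero):
    """Generic all-pairs path algorithm. The two classic instances
    arise by choosing the algebra:
      reachability (Warshall): plus = or,  times = and, zero = False
      shortest paths (Floyd):  plus = min, times = +,   zero = infinity
    graph: dict of dicts giving edge labels between nodes."""
    nodes = list(graph)
    # Initialise with the edge labels (zero where there is no edge).
    d = {i: {j: graph[i].get(j, zero) for j in nodes} for i in nodes}
    for k in nodes:          # successively allow k as an intermediate node
        for i in nodes:
            for j in nodes:
                d[i][j] = plus(d[i][j], times(d[i][k], d[k][j]))
    return d

# Floyd's shortest-path instance on a small graph:
g = {"a": {"b": 1}, "b": {"c": 2}, "c": {}}
dist = all_pairs(g, min, lambda x, y: x + y, math.inf)
# dist["a"]["c"] == 3 (via b); entries are weights of nonempty paths
```

The same call with `plus=lambda x, y: x or y`, `times=lambda x, y: x and y` and `zero=False` over a boolean adjacency structure gives Warshall's reachability algorithm.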

See here for the paper.


Relational programming laws in the tree, list, bag, set hierarchy.

Paul F. Hoogendijk and Roland C. Backhouse

Science of Computer Programming 22 (1994) 1--2, pages 67--105

In this paper we demonstrate that the basic rules and calculational techniques used in two extensively documented program derivation methods can be expressed and, indeed, can be generalised within a relational theory of datatypes. The two methods to which we refer are the so-called ``Bird-Meertens formalism'' for the construction of functional programs and the ``Dijkstra-Feijen calculus'' for the construction of imperative programs.

See here for the paper.


The Schröder-Bernstein Theorem

Roland C. Backhouse

Unpublished. 16th December 1993

We present a construction (as opposed to a verification) of the Schröder-Bernstein theorem.

See here for the paper.


Calculating a Path Algorithm.

Roland C. Backhouse and A.J.M. van Gasteren.

R.S. Bird, C.C. Morgan, and J.C.P. Woodcock, editors, Mathematics of Program Construction. 2nd International Conference, June/July 1992, volume 669 of Lecture Notes in Computer Science, pages 32--44. Springer Verlag, 1993. For extended version see "Calculating Path Algorithms. Roland C. Backhouse, J.P.H.W. van den Eijnde and A.J.M. van Gasteren".

A calculational derivation is given of an abstract path algorithm, one instance of the algorithm being Dijkstra's shortest-path algorithm, another being breadth-first/depth-first search of a directed graph. The basis for the derivation is the algebra of regular languages.

See here for the paper.


Elements of a Relational Theory of Datatypes.

Roland Backhouse and Paul Hoogendijk.

In B. Moller, H.A. Partsch, and S.A. Schuman, editors, Formal Program Development. Proc. IFIP TC2/WG 2.1 State of the Art Seminar, Rio de Janeiro, Jan. 1992, volume 755 of LNCS, pages 7-42. Springer-Verlag, 1993.

The ``Boom hierarchy'' is a hierarchy of types that begins at the level of trees and includes lists, bags and sets. This hierarchy forms the basis for the calculus of total functions developed by Bird and Meertens, and which has become known as the ``Bird-Meertens formalism''.

This paper describes a hierarchy of types that logically precedes the Boom hierarchy. We show how the basic operators of the Bird-Meertens formalism (map, reduce and filter) can be introduced in a logical sequence by beginning with a very simple structure and successively refining that structure.

The context of this work is a relational theory of datatypes, rather than a calculus of total functions. Elements of the theory necessary to the later discussion are summarised at the beginning of the paper.

See here for the paper.


Transformational development of (parallel) programs using skeletons.

E.A. Boiten, A.M. Geerling & H.A. Partsch.

Computing Science in the Netherlands 1993. pages 97--108, 1993.

We describe a framework for the derivation of programs for arbitrary (in particular, parallel) architectures, motivated by a generalization of the derivation process for sequential algorithms. The central concept in this approach is that of a skeleton: on the one hand, a higher-order function at which transformational derivations can be targeted; on the other hand, a representation of an elementary computation on the architecture aimed at. Skeletons thus form a basis for intermediate languages that can be implemented once and for all, as a process separate from individual program developments. The available knowledge on the derivation of (higher-order) functional programs can then be used for deriving parallel ones.

See here for the paper.


(Relational) Programming Laws in the Boom Hierarchy of Types.

Paul F. Hoogendijk.

Mathematics of Program Construction. 2nd International Conference, June/July 1992. Lecture Notes in Computer Science vol. 669, pages 163--190. Springer Verlag, 1993. For extended version see "Relational Programming Laws in the Tree, List, Bag and Set Hierarchy. Paul F. Hoogendijk and Roland C. Backhouse."

In this paper we demonstrate that the basic rules and calculational techniques used in two extensively documented program derivation methods can be expressed, and, indeed, can be generalised within a relational theory of datatypes. The two methods to which we refer are the so-called ``Bird-Meertens formalism'' and the ``Dijkstra-Feijen calculus''.

The current paper forms an abridged, though representative, version of a complete account of the algebraic properties of the Boom hierarchy of types. Missing is an account of extensionality and the so-called cross-product.

See here for the paper.


Monad Decomposition.

Roland C. Backhouse.

Unpublished manuscript, April 1993.

This paper makes a start on a research programme the goal of which is to reformulate theorems in category theory as constructive theorems in lattice theory. It is argued that monads should be formulated as constructive closure operators. In order to support this thesis it is shown how, by adapting the relevant theorem in lattice theory, every unary relator defines a monad. The minimality of the constructed monad is captured by a monad simulation theorem, which is then used to establish a monad decomposition theorem by adapting proofs of the closure decomposition theorem in lattice theory.

See here for the paper.


A Relational Theory of Datatypes

Chritiene Aarts, Roland C. Backhouse, Paul Hoogendijk, Ed Voermans and Jaap van der Woude.

Working document, December 1992.

This paper reports ongoing research into a theory of datatypes based on the calculus of relations. A fundamental concept introduced here is the notion of ``relator'', which is an adaptation of the categorical notion of functor. Axiomatisations of polynomial relators (that is, relators built from the unit type and the disjoint sum and cartesian product relators) are given, following which the general class of initial datatypes is studied. Among the topics discussed are natural polymorphism, junctivity and continuity properties.
The current paper is an incomplete draft and will be supplemented at later dates.

See here for the paper.


A Class of Commuting Relators

Roland Backhouse, Henk Doornbos and Paul Hoogendijk.

Presented at STOP workshop, September 1992.

The zip function is well known to functional programmers. Informally, zip maps two lists of the same length into a single list of pairs whereby ([a1, b1, ...], [a2, b2, ...]) --> [(a1, a2), (b1, b2), ...]. Zip is just one of a whole family of functions that commute the order of two data structures. Whilst zip commutes a pair of lists into a list of pairs, it is not difficult to imagine generalising zip to a function that commutes m lists each of length n into n lists each of length m. Indeed, this latter function is also well known under the name matrix transposition. With a little bit more imagination several other members of the family suggest themselves. For example, there is a function that commutes a tree of lists all of the same length into a list of trees all of the same shape. There is also a function that commutes a rose of trees all of the same shape into a tree of roses all of the same shape.

This paper considers the construction of such functions. We specify the class of so-called ``zippable'' data-structures and consider in detail a large and significant subclass of this class. For this subclass we show how to construct ``zips'' and identify several consequences of their being ``zippable''.
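For the two list instances mentioned above, the behaviour is easy to make concrete. A small Python sketch (illustrative only; `transpose` is our name for the function commuting m lists of length n):

```python
def transpose(xss):
    """Commute m lists, each of length n, into n lists, each of
    length m: the 'matrix transposition' member of the zip family."""
    return [list(column) for column in zip(*xss)]

# zip commutes a pair of lists into a list of pairs:
pairs = list(zip([1, 2, 3], [4, 5, 6]))   # [(1, 4), (2, 5), (3, 6)]
# transpose commutes 2 lists of length 3 into 3 lists of length 2:
rows = transpose([[1, 2, 3], [4, 5, 6]])  # [[1, 4], [2, 5], [3, 6]]
```

Note that `transpose` is its own inverse on rectangular inputs, a property that any notion of ``commuting'' two data structures should be expected to satisfy.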

See here for the paper.


A Relational Perspective on Types With Laws.

Jaap van der Woude and Ed Voermans.

Unpublished manuscript, January 1993.

With relational transformational programming in mind, an extension of a ``lawless'' relational theory of datatypes is proposed in order to study and manipulate quotient types within a Tarski-like calculus of relations. The extended notion of type, pertype (from partial equivalence relation), is shown to admit a complete lattice structure by constructing the order via a Galois connection. A pertyping of relations is developed and inductive pertypes generated by equations are discussed. Pertypes do occur in model theory for lambda-calculus but we are unaware of manipulations with inductive ``lawful'' types based on a simple relational calculus.

See here for the paper.


Demonic Operators and Monotype Factors.

Roland Backhouse and Jaap van der Woude.

Final version in Mathematical Structures in Computer Science, 3(4): 417-433, December 1993.

This paper tackles the problem of constructing a compact, point-free proof of the associativity of demonic composition of binary relations and its distributivity through demonic choice. In order to achieve this goal a definition of demonic composition is proposed in which angelic composition is restricted by means of a so-called ``monotype factor''. Monotype factors are characterised by a Galois connection similar to the Galois connection between composition and factorisation of binary relations. The identification of such a connection is argued to be highly conducive to the desired compactness of calculation.

See here for the paper.


Relational programming, program inversion and the derivation of parsing algorithms.

Ed Knapen.

Graduating Dissertation, Eindhoven University of Technology, November 1993.

The spec calculus is a programming algebra that supports and even encourages the derivation of relational programs in a clear and calculational style.

After a brief introduction of the spec calculus, two particular uses are investigated. First, the notion of program inversion is introduced. This notion from imperative programming allows one to construct, from a given program R that satisfies {P} R {Q}, a program Rinv that satisfies {Q} Rinv {P}. It is shown that, as may be expected, a relational framework allows easy definition of and calculation with inverses of arbitrary programs. Several examples are discussed and used to introduce different methods to calculate implementable inverses.

The second topic is the derivation of precedence parsing algorithms. A simple algorithm for the calculation of the inorder traversal of a tree is transformed in two steps into an algorithm for precedence parsing. The first transformation is the use of program inversion to obtain an algorithm that constructs a tree from its inorder traversal. The second step is a restriction of the non-determinism of the algorithm. It is shown how the resulting algorithm corresponds to precedence parsing algorithms.
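As a small concrete anchor for the second case study (our sketch, not taken from the dissertation), the forward direction, computing the inorder traversal of a binary tree, is a few lines of Python; the derivation in the dissertation inverts such a program to rebuild a tree from its traversal:

```python
class Node:
    """Binary tree node: left subtree, label, right subtree."""
    def __init__(self, left, label, right):
        self.left, self.label, self.right = left, label, right

def inorder(t):
    """Inorder traversal: left subtree first, then the root label,
    then the right subtree (None represents the empty tree)."""
    if t is None:
        return []
    return inorder(t.left) + [t.label] + inorder(t.right)

# The tree for the expression a + b:
t = Node(Node(None, "a", None), "+", Node(None, "b", None))
traversal = inorder(t)  # ["a", "+", "b"]
```

Inverting `inorder` is nondeterministic, since many trees share the same traversal; restricting that nondeterminism (using operator precedences) is what yields a precedence parsing algorithm.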

See here for the paper.


Constructive Lattice Theory.

Roland Backhouse.

Unpublished manuscript. October 1993.

A notion of simulation of one datatype by another is defined as a constructive preorder. A calculus of datatype simulation is then developed by formulating constructive versions of least-fixed-point theorems in lattice theory. The calculus is applied to the construction of several isomorphisms between classes of datatypes. In particular constructive adaptations of theorems in lattice theory about closure operators are shown to yield simulations and isomorphisms between monad structures, and constructive adaptations of theorems in regular algebra are shown to yield isomorphisms between list structures.

See here for the paper.


Domain Operators and Domain Kinds.

Roland Backhouse and Jaap van der Woude.

Unpublished manuscript, September 1993.

The notions of domain operator and domain kind are introduced. Several examples are presented. In particular, it is shown that the partial equivalence relations form a domain kind. The proof involves the construction of a Galois connection demonstrating that the partial equivalence relations form a complete lattice under the so-called domain ordering, thus providing another illustration of the importance of the early recognition of Galois connections.

See here for the paper.


Galois Connections Presented Calculationally

C.J. Aarts.

Graduating Dissertation, Eindhoven University of Technology, July 1992.

(No abstract)

See here for the paper.


Congruences on Initial Algebras

Roland C. Backhouse, Grant Malcolm and Jaap van der Woude

Technical Report, Technische Universiteit Eindhoven, 1991

The notion of relational catamorphism is used to give a necessary and sufficient condition for a relation to be a congruence relation on an abstract data type. The condition is shown to be a generalisation of the homomorphism theorem of universal algebra.

See here for the paper.


On Induced Congruences

Roland Backhouse and Grant Malcolm

Bulletin of the EATCS, vol. 40, pp. 201--206, 1990.

In this note we present a proof of a theorem to be found in Ehrig and Mahr [Fundamentals of Algebraic Specification 1. EATCS Monographs on Theoretical Computer Science, 1985]. The theorem states that a relation constructed from a given function is a congruence relation iff the function is a homomorphism; we go on to generalise the result to the relational homomorphisms treated by the first author in [Naturality of homomorphisms. Lecture Notes, International Summer School on Constructive Algorithmics, vol.3, 1989].

See here for the paper.


Relational Catamorphisms

R.C. Backhouse, P. de Bruin, G. Malcolm, T.S. Voermans and J. van der Woude

In Proceedings of the IFIP TC2/WG2.1 Working Conference Constructing Programs from Specifications, 1991, B. M{\"o}ller (Ed.), Elsevier Science Publishers B.V., pp. 287--318

This paper reports ongoing research into a theory of datatypes based on the calculus of relations. A fundamental concept introduced here is the notion of ``relator'' which is an adaptation of the categorical notion of functor. Relational catamorphisms are then introduced and shown to satisfy a unique extension property. Several further properties are discussed including so-called fusion properties. The paper is concluded by showing how new relators can be constructed by an appropriate choice of relational catamorphism.

See here for the paper.


Some Ergonomics of Mathematical Notation

Roland C. Backhouse

Unpublished, September 1991

Choices of mathematical notation are discussed from the perspective of ergonomics.

See here for the paper.


Constructive Type Theory. A Perspective from Computing Science

Roland C. Backhouse

In Formal Development of Programs and Proofs, Edited by Edsger W. Dijkstra, Addison-Wesley, 1990

The contribution of Martin-Löf's formalization of constructive mathematics is explained from the perspective of a computing scientist. The presentation aims to demonstrate how the theory increases understanding of constructive mathematics and the relation between programs and proofs.

See here for the paper.


Making Formality Work For Us.

R.C. Backhouse

EATCS Bulletin, vol. 38, June 1989, pages 219-249.

Formal reasoning is notoriously long and arduous; in order to use it to reason effectively in the construction of programs it is, therefore, paramount that we design our notations to be both clear and economical. Taking examples from AI, from imperative programming, from the use of the Bird-Meertens formalism and from category theory we demonstrate how the right choice of what to denote and how it is denoted can make significant improvements to formal calculations. Brief mention is also made of the connection between economical notation and properties of type.

See here for the paper.


Do-it-Yourself Type Theory

Roland Backhouse, Paul Chisholm, Grant Malcolm and Erik Saaman

Formal Aspects of Computing (1989) 1: 19--84. © 1989 BCS

This paper provides a tutorial introduction to a constructive theory of types based on, but incorporating some extensions to, that originally developed by Per Martin-L\"of. The emphasis is on the relevance of the theory to the construction of computer programs and, in particular, on the formal relationship between program and data structure. Topics discussed include the principle of propositions as types, free types, congruence types, types with information loss and mutually recursive types. Several examples of program development within the theory are also discussed in detail.
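The principle of propositions as types mentioned in the abstract identifies proofs with programs. A minimal illustration (in modern Lean syntax, not the notation of the paper): a proof of the commutativity of conjunction is a program that swaps the components of a pair.

```lean
-- Propositions as types: the proof term is a function on pairs.
example {A B : Prop} : A ∧ B → B ∧ A :=
  fun ⟨a, b⟩ => ⟨b, a⟩
```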

See here for the paper.


An Exploration of the Bird-Meertens Formalism.

Roland Backhouse.

Technical Report CS8810, Department of Mathematics and Computing Science, University of Groningen, 1988. This paper has been superseded by: "Elements of a relational theory of datatypes. Roland Backhouse and Paul Hoogendijk." and "Relational programming laws in the tree, list, bag and set hierarchy. Paul F. Hoogendijk and Roland C. Backhouse."

Two formalisms that have been used extensively in the last few years for the calculation of programs are the Eindhoven quantifier notation and the formalism developed by Bird and Meertens. Although the former has always been applied with the ultimate goal of deriving imperative programs, and the latter with the ultimate goal of deriving functional programs, there is a remarkable similarity in the formal games that are played. This paper explores the Bird-Meertens formalism by expressing and deriving within it the basic rules applicable in the Eindhoven quantifier notation.
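A flavour of the rules in question: the Eindhoven quantifier notation has a range-splitting rule for quantified sums, which in a Bird-Meertens style becomes an identity about reduce and map. A hypothetical Python sketch (the names `sum_over`, `P`, `Q` are illustrative, not from the paper):

```python
# Range splitting: summing f over a range split into disjoint parts P and Q
# equals the sum of the two partial sums. In Bird-Meertens style:
#   +/ (f * (P ++ Q))  =  (+/ (f * P)) + (+/ (f * Q))
def sum_over(f, xs):
    return sum(map(f, xs))

P, Q = range(0, 5), range(5, 10)
f = lambda i: i * i
assert sum_over(f, list(P) + list(Q)) == sum_over(f, P) + sum_over(f, Q)
```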

See here for the paper.


Non-redundant regular expressions and real arithmetic: Proof of a lemma due to Tarjan

Roland C. Backhouse

Department of Computer Science, Heriot-Watt University, Technical Report no. 9, August 1980

In recent years it has become evident that there is a small core of path-finding algorithms consisting almost exclusively of algorithms known to numerical analysts which solve a system of linear equations Ax = b, where A is a real nxn matrix, x is an nx1 vector of variables and b is an nx1 vector of real constants.
The connection of path-finding problems with the solution of linear equations has been a fruitful one, but it still cannot be claimed that the connection is completely understood. Some authors have attempted to find a minimal axiom system for which the classical techniques will work. An alternative approach, initiated by Backhouse and Carr\'e [1975] and further developed by Tarjan [Tech.Rep. STAN-CS-79-729, April 1979], is to use the algebra of regular expressions, expressing path problems via homomorphisms which map all paths through the graph to the given problem domain.
Tarjan relates the problem of solving the linear system of equations Ax = b in real arithmetic to the problem of finding \emph{non-redundant} regular expressions. Tarjan's results are based on a lemma (lemma 2, ``The hardest result in this paper'') whose proof is incomplete and unnecessarily hedged with qualifications. The present paper gives a complete and accurate proof of Tarjan's lemma. Our proof consists of a reworking of the well-known technique for establishing whether two regular expressions are equal, but with the additional handicap of establishing whether they are equal when viewed as functions of real numbers.
[Note: Subsequent to sending this report to Tarjan, he amended the statement and proof of lemma 2, removing the qualifications I said were unnecessary and adding an additional lemma so that the lemma became lemma 3. The revised paper was published in the Journal of the ACM, vol.\ 28, No.\ 3 (July 1981), pp.\ 577--593. The revised proof uses an argument about multivariate polynomials that are identical on a sufficiently small sphere. I have never understood the proof.]
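The connection between path-finding and linear equations can be illustrated concretely: over the (min, +) semiring, shortest-path distances are the least solution of x = A⊗x ⊕ b, the direct analogue of Ax + b over the reals. A minimal sketch (the function name and iteration scheme are illustrative, not taken from the paper):

```python
import math

INF = math.inf

def solve_min_plus(A, b):
    """Least solution of x = min(A "times" x, b) over the (min, +) semiring,
    where ⊕ = min and ⊗ = +; assumes no negative-weight cycles."""
    n = len(b)
    x = b[:]
    for _ in range(n):  # n rounds suffice: shortest paths are simple
        x = [min(b[i], min(A[i][j] + x[j] for j in range(n)))
             for i in range(n)]
    return x

# Edge-weight matrix of a 3-node graph; b encodes "distance 0 to node 2",
# so the solution is the vector of shortest distances to node 2.
A = [[INF, 1, 5],
     [INF, INF, 2],
     [INF, INF, INF]]
b = [INF, INF, 0]
assert solve_min_plus(A, b) == [3, 2, 0]
```

The same iteration solves other path problems by swapping the semiring, which is the structural point behind the regular-algebra approach.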

See here for the paper.


Factor Graphs, Failure Functions and BiTrees

Roland C.\ Backhouse and Rüdiger K. Lutz

Department of Computer Science, Heriot-Watt University, Technical Report no. 4, October 1976

The factors and factor matrix of a regular language are defined and their properties stated. It is then shown that the factor matrix of a language Q has a unique starth root --- called the factor graph of Q --- which is a recogniser of Q. An algorithm to calculate the factor graph is presented. The Knuth, Morris and Pratt pattern matching algorithm, its extensions and Weiner's substring identifier algorithm are outlined, and all are shown to be equivalent to finding the factor graph of some regular language. Key words and phrases: regular language, factor, factor matrix, pattern matching, string-matching.
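For readers unfamiliar with the Knuth-Morris-Pratt algorithm mentioned in the abstract, its core data structure is the failure function, which for each position records the length of the longest proper prefix of the pattern that is also a suffix ending there. A standard sketch (not the factor-graph construction of the paper itself):

```python
def failure_function(p):
    """KMP failure function: f[i] is the length of the longest proper
    prefix of p[:i+1] that is also a suffix of p[:i+1]."""
    f = [0] * len(p)
    k = 0
    for i in range(1, len(p)):
        while k > 0 and p[i] != p[k]:
            k = f[k - 1]          # fall back to the next shorter border
        if p[i] == p[k]:
            k += 1
        f[i] = k
    return f

assert failure_function("ababaca") == [0, 0, 1, 2, 3, 0, 1]
```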

See here for the paper.