Validity and Correctness in LEO I and II

Abstract of the presentation given at the roundtable What is a (computer) program? at the pre-launch of the PROGRAMme project in Paris, at the CNAM – Conservatoire national des arts et métiers, on 20 October 2017.

The presentation is a historical reconstruction of the procedures used to ensure validity and program correctness in early business computers: the focus of the talk is on the hardware testing, data validation and program correctness techniques designed for LEO I and II in the UK during the 1950s.

As opposed to mathematical and scientific work, which typically requires a small number of highly complex calculations, business computing demanded that a large number of simple calculations be completed in the shortest possible time.

Hardware tests would follow the identification of a fault but, more generally, were performed daily, prior to operational work, to verify that every physical component of the computer was functioning correctly and that the results it produced were reliable, so as to minimise the risk of faults. Maintenance tests were of two kinds: preventive and curative. They included thermionic valve testing and marginal testing, also in conjunction with program tests for hardware.

Data validation procedures were divided into general input checks, manual and automatic checksums, and I/O test programs.
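To give a flavour of the checksum idea in modern terms, the sketch below shows a batch total computed by hand being recomputed by the machine before processing. This is only a loose analogue: the field names and batch format are invented for illustration and are not drawn from LEO documentation.

    # A minimal modern sketch of batch-total checking in the spirit of
    # LEO's data validation: a control total prepared by a clerk
    # accompanies each batch, and the machine recomputes it before
    # processing. Field names and batch format are hypothetical.

    def validate_batch(records, control_total):
        """Recompute the batch total and compare it with the control total."""
        machine_total = sum(record["amount"] for record in records)
        if machine_total != control_total:
            raise ValueError(
                f"checksum mismatch: machine total {machine_total} "
                f"!= control total {control_total}"
            )
        return records  # batch accepted for processing

    # Example: a batch of payroll-style records with a manually prepared total.
    batch = [{"amount": 1200}, {"amount": 830}, {"amount": 455}]
    validate_batch(batch, control_total=2485)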

In LEO, a systematic approach to fault diagnosis was set up at an early stage, together with the development of complex equipment and laborious procedures to process large amounts of data.
Following a well-known distinction already in place for the EDSAC, programs to locate errors were of two kinds: post-mortem routines and checking routines. These practices were aided by the procedure of photographing the cathode-ray-tube image of the store.
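In modern terms, a post-mortem routine prints the contents of the store after a failure so the programmer can inspect the machine state. The sketch below is a loose analogue of that idea; the representation of the store is hypothetical and chosen purely for illustration.

    # A loose modern analogue of a post-mortem routine: after a failure,
    # dump the contents of the "store" (here, a plain list standing in
    # for LEO's mercury-delay-line memory) in fixed-width rows, like a
    # printed dump the programmer would read offline.

    def post_mortem_dump(store, words_per_line=8):
        """Print the store contents in fixed-width rows."""
        for base in range(0, len(store), words_per_line):
            row = store[base:base + words_per_line]
            cells = " ".join(f"{word:06d}" for word in row)
            print(f"{base:04d}: {cells}")

    store = [0] * 32
    store[5] = 123456   # a few nonzero words make the dump readable
    store[17] = 42
    post_mortem_dump(store)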
The process of ensuring program correctness consisted of designing the logic of programs in stages, aided by flowcharting. Complete programs, or parts of them, were then checked on the computer with trial data prepared to test every possible requirement (see the sketch below). A less systematic way of checking program correctness had also been in place since LEO's earliest days: the computer was equipped with a loudspeaker, each application had its own characteristic rhythm of sound, and experienced operators would notice when something had gone wrong.
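Checking a routine against trial data prepared to exercise every requirement resembles, in modern terms, table-driven testing. The sketch below illustrates the idea under stated assumptions: the overtime rule and the test cases are invented for illustration, not taken from a LEO application.

    # A minimal sketch of checking a routine against prepared trial
    # data, in the spirit of exercising every possible requirement.
    # The payroll rule and the cases below are hypothetical.

    def gross_pay(hours, rate):
        """Pay at the basic rate, with time-and-a-half beyond 40 hours."""
        overtime = max(0, hours - 40)
        return rate * (hours - overtime) + rate * 1.5 * overtime

    # Trial data covering the normal case, the boundary, and overtime.
    trial_data = [
        (40, 2.0, 80.0),   # exactly the boundary: no overtime
        (30, 2.0, 60.0),   # under the boundary
        (45, 2.0, 95.0),   # 40 * 2.0 + 5 * 3.0 = 95.0
    ]

    for hours, rate, expected in trial_data:
        assert gross_pay(hours, rate) == expected, (hours, rate)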

Concerns about validity and correctness go back to the very beginning of computing. The methodology and practices developed at LEO show a modern attitude towards these notions, though many of their protocols were rooted in much older accounting and auditing practices. The duality between correctness and reliability, and between validity and efficiency, was also a consequence of the impossibility of delegating even partial control of such issues to an operating system. This would change significantly with the advent of LEO III and its Master Routine.

The full paper, “Validity & Correctness before the OS: the case of LEO I and LEO II” by Elisabetta Mori, Giuseppe Primiero and Rabia Arif, will be published in L. de Mol, G. Primiero (eds.), Reflections on Programming Systems – Historical and Philosophical Aspects, Philosophical Studies Series, Springer (forthcoming 2018).