Abstract of the presentation at the roundtable ‘What is a (computer) program?’ at the prelaunch of the project PROGRAMme in Paris, at the CNAM – Conservatoire national des arts et métiers, on the 20th of October 2017.
The problem of identity is a long-standing one in philosophy. In turn, it is a crucial one for the Philosophy of Computer Science, and in particular for answering the question ‘what is a program?’. Two major strands in the history of such conceptual investigation can be found.
The first one goes back to Frege’s analysis of identity of meanings in his Über Sinn und Bedeutung (1892), i.e. the problem of determining when substitution of identicals in a sentence generates equivalent references, with possibly distinct meanings. This line of research was followed by Quine’s famous motto “No entity without identity” in his Word and Object (1960). Three decades later, Martin-Löf used an adapted version of the Quinean motto in the conceptual description of his proof-terms and types in his Philosophical aspects of Intuitionistic Type Theory (1993): “No Entity without Type; No Type without semantical Identity”.
Although Intuitionistic Type Theory has a close connection to the semantics of programs, Martin-Löf’s analysis is not the first attempt at introducing and considering the identity problem for programs. Milner, in his famous An algebraic definition of simulation between programs (1971), introduces identity explicitly as the relation holding between programs realizing the same algorithm, and expresses this condition as the execution of the same relevant computations. This analysis is famously formalized in terms of (bi)simulation for programs in process algebra.
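For reference, a standard textbook formulation of simulation and bisimulation over labelled transition systems (the setting in which Milner’s notion is usually presented; the notation here is illustrative) runs as follows:

```latex
% Simulation and bisimulation over labelled transition systems
% (standard formulation; notation illustrative, not Milner's original)
A relation $R \subseteq S_1 \times S_2$ between the states of two systems
is a \emph{simulation} if, whenever $(p, q) \in R$ and $p \xrightarrow{a} p'$,
there exists $q'$ such that $q \xrightarrow{a} q'$ and $(p', q') \in R$.
$R$ is a \emph{bisimulation} if both $R$ and its converse $R^{-1}$ are
simulations; two programs are \emph{bisimilar} when some bisimulation
relates their initial states.
```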
One aspect of relevance here is the identification of the level of abstraction at which the analysis of simulation is required to work. A first choice is to work at the highest possible level, defining programs in terms of their algorithmic description. This choice, which is not discussed here, hinges on several problems connected with the very question ‘what is a program?’, which can be answered at any of the appropriate levels of abstraction. A good argument for identifying programs with their algorithmic description is offered by Dean in his Algorithms and the mathematical foundation of Computer Science (2016). A program can then be defined as an equivalence class over the artefacts, in the different languages, that are appropriate to express the algorithm of interest. The interpretation of an algorithm as the equivalence class of all the programs that express it (or the machines that implement it) requires in turn the definition of the equivalence relation, which is usually given in terms of simulation.
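As a toy illustration of this idea (all function names here are hypothetical, and the finite input–output check below is only a crude necessary condition, not the formal simulation relation): two syntactically distinct programs can be regarded as expressing the same algorithm, and hence as candidate members of the same equivalence class.

```python
# Toy illustration: two syntactically distinct programs that plausibly
# express the same algorithm (Euclid's), plus a crude extensional check
# of their agreement on a finite set of inputs. This is a necessary,
# not sufficient, condition for membership in the same equivalence class.

def gcd_recursive(a: int, b: int) -> int:
    """Euclid's algorithm, recursive formulation."""
    return a if b == 0 else gcd_recursive(b, a % b)

def gcd_iterative(a: int, b: int) -> int:
    """Euclid's algorithm, iterative formulation."""
    while b != 0:
        a, b = b, a % b
    return a

def agree_on(p, q, inputs) -> bool:
    """Finite approximation of the equivalence relation:
    the two programs agree on every tested input."""
    return all(p(*i) == q(*i) for i in inputs)

tests = [(12, 18), (7, 3), (100, 0), (0, 5)]
print(agree_on(gcd_recursive, gcd_iterative, tests))  # True
```

A full account would instead compare the step-by-step behaviour of the two programs via a (bi)simulation relation, not merely their input–output agreement.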
A different philosophical tradition that has investigated the problem of identity is analytic metaphysics, which holds that an object is anything satisfying a defined set of identity criteria. Interestingly, this is the tradition followed in the Philosophy of Technology for defining identity for technical artefacts. This research field, however, ignores how the problem could be extended to computational artefacts, and that a formal definition of simulation is already available for that purpose in Theoretical Computer Science. The Philosophy of Computer Science, on the other hand, has so far ignored the problem entirely.
In current research (Angius, Primiero, submitted), we have developed an approach that merges these two strands of philosophical and formal analysis. Its aims can be listed as follows:
1. Distinguish several types of identity relations between programs: Perfect Identity, Exact Copies, Inexact Copies, Approximate Copies;
2. Associate with each a formal representation in terms of simulation and associated operations;
3. Investigate which of the above listed identity criteria holds for each of the identity relations.
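As a toy illustration of how two of these copy relations might be cashed out computationally (the criteria below are illustrative simplifications of my own, not the formal definitions of the submitted paper), one can contrast syntactic identity of program texts with behavioural agreement of textually distinct programs:

```python
# Illustrative simplification (not the paper's formal definitions):
# two of the copy relations, cashed out over program texts.

def is_exact_copy(src1: str, src2: str) -> bool:
    """Exact copy: syntactically identical program texts."""
    return src1 == src2

def is_inexact_copy(p, q, src1: str, src2: str, inputs) -> bool:
    """Inexact copy (simplified): textually different programs that
    nevertheless agree on all tested inputs."""
    return src1 != src2 and all(p(x) == q(x) for x in inputs)

double_a = "def f(x): return x + x"
double_b = "def f(x): return 2 * x"

ns_a, ns_b = {}, {}
exec(double_a, ns_a)  # build the two programs from their texts
exec(double_b, ns_b)

print(is_exact_copy(double_a, double_a))  # True
print(is_exact_copy(double_a, double_b))  # False
print(is_inexact_copy(ns_a["f"], ns_b["f"],
                      double_a, double_b, range(10)))  # True
```

The weaker relations (approximate copies) would relax the agreement condition further, which is where the simulation-based operations mentioned in aim 2 come in.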
This research has several lines of expansion of interest to the current project:
1. An historical analysis, which would investigate the points of convergence and divergence between the metaphysical analysis and the proofs-as-programs tradition.
2. A technical analysis focusing on complexity issues for the operations weaker than bisimulation used to express some of the notions of copy.
3. An epistemological analysis, focusing on the identification of which properties are preserved by copies (e.g. correctness, reliability, robustness, trustworthiness).
4. An ethical analysis, focusing on issues of software copyright, in particular concerning the formulation of quantitative and qualitative criteria extracted from the formal analysis of copies.