State-of-the-art and objectives
With the advent of web technologies and the proliferation of programmable, inter-connectable devices, we are faced today with a powerful and heterogeneous computing environment. This environment is inherently parallel and distributed and, unlike previous computing environments, relies heavily on communication. It therefore calls for a new programming paradigm, sometimes called communication-centred. Moreover, since computation takes place concurrently on all kinds of devices, controlled by parties which possibly do not trust each other, security properties such as the confidentiality and integrity of data become of crucial importance. The challenge is then to develop models, as well as programming abstractions and methodologies, that exploit the rich potential of this new computing environment while harnessing its complexity and resolving its security vulnerabilities. To this end, calculi and languages for communication-centred programming must be security-minded from their very conception, and must use specifications not only for data structures, but also for communication interfaces and security properties. In addition, the need to program and maintain eternal, highly decentralised systems (e.g., those considered by the FP7-FET initiative Forever Yours) underlines the requirement for reliable and dependable applications. The mainly static forms of verification we are accustomed to, however, are no longer sufficient; there is therefore a strong demand for forms of runtime verification.
From the programmer's perspective, this means writing computational units able to adapt to different contexts, which implies the need for flexible specifications depending, for example, on how much a unit is trusted by the environment executing it, or on the resources offered to it. From the system perspective, it requires writing software that coordinates and orchestrates units which are not guaranteed to fully follow the protocols they claim to respect.
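As a rough illustration of what runtime verification of such protocols could look like (all names and the login/query/logout protocol here are hypothetical, chosen only for the sketch), a unit's declared communication protocol can be represented as a finite-state machine and checked by a monitor that observes each message the unit actually sends:

```python
class Violation(Exception):
    """Raised when a component deviates from its declared protocol."""

# A declared protocol as a finite-state machine: state -> {message: next state}.
# The login/query/logout protocol below is purely illustrative.
PROTOCOL = {
    "start":   {"login": "session"},
    "session": {"query": "session", "logout": "done"},
    "done":    {},
}

class ProtocolMonitor:
    """Checks each observed message against the declared protocol at runtime."""
    def __init__(self, protocol, initial="start"):
        self.protocol = protocol
        self.state = initial

    def observe(self, message):
        allowed = self.protocol[self.state]
        if message not in allowed:
            raise Violation(f"{message!r} not allowed in state {self.state!r}")
        self.state = allowed[message]

monitor = ProtocolMonitor(PROTOCOL)
for msg in ["login", "query", "query", "logout"]:
    monitor.observe(msg)   # a compliant trace passes silently
print(monitor.state)       # -> done
```

A trace that deviates from the declared protocol (e.g., a `query` before `login`) raises `Violation`, which is exactly the situation the system layer must handle when orchestrating untrusted units.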
These new needs can be observed at every level of the design and implementation of computational units. In particular, at the level of software development, modern applications tend to be multi-threaded, sometimes distributed, concurrent and interactive. Application components are discovered and attached dynamically at runtime, and they are often asked to cooperate with components whose behaviours can vary over time, can be stochastic, or can simply be unexpected.
Given this evidence, it is easy to predict that the trend towards fragmented, distributed, heterogeneous, dynamic systems will continue at a steady pace. Unfortunately, the complexity of programming such systems is bound to increase accordingly, and currently available methodologies, techniques and tools for the development of correct, dependable, resilient software are largely inadequate. They cannot cope with situations where not all the elements of the picture are under control, for instance when the code of some computational units is not known. Moreover, they are not suited to developing software that adapts its specifications to the new requirements of a dynamic context, whose rules can change over time, possibly stochastically. In short, they do not allow programmers to write the certified, self-adapting and autonomic code that the market will soon require.
Type theories need to be rethought to cope with these new requirements. While the development of systems operating in stochastic environments, or under real-time constraints, has been supported by formal methods such as model checking, there is little work on type theories for such systems to guide research on typing computational units with time-dependent and/or stochastic behaviours.
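A minimal sketch of what checking a time-dependent behaviour at runtime might look like, assuming a hypothetical deadline specification and a simulated component whose response time is stochastic (all names and values below are illustrative, not part of any existing system):

```python
import random
import time

# Hypothetical timed specification: each reply must arrive within DEADLINE
# seconds. A type theory for timed behaviours would express this constraint
# statically; here it is enforced dynamically by a wrapper.
DEADLINE = 0.05

def flaky_reply():
    """Simulated component whose response time is stochastic."""
    time.sleep(random.uniform(0.0, 0.02))
    return "ok"

def timed_call(operation, deadline):
    """Runtime check of a timing constraint: reject replies that arrive late."""
    start = time.monotonic()
    result = operation()
    elapsed = time.monotonic() - start
    if elapsed > deadline:
        raise TimeoutError(f"reply took {elapsed:.3f}s, deadline {deadline}s")
    return result

print(timed_call(flaky_reply, DEADLINE))
```

The point of typing such units would be to discharge checks of this kind statically, or at least to bound the probability of a deadline violation, rather than discovering it only when the call fails.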
The PI has worked essentially on type theories; in the present project she will lead a multi-disciplinary team, collecting competencies from programming language theory, system security, program correctness and stochastic model checking. This synergy is essential for developing the next generation of formal methods, based on type theories, for the specification, design, implementation, verification, deployment, evolution, orchestration and reuse of the computational units of the future.
The project is articulated along the three scenarios, with the four steps for each scenario also providing the focus of the research: first the design and the semantics of the types, then the algorithms and the tools based on those algorithms. These steps will be carried out cyclically, since the development of algorithms and tools can suggest improvements to the design of the types, which will in turn require new semantics. The first two scenarios, trusted and untrusted, are closely related, the former being in part a precondition for the latter, in particular with respect to the definition of flexible protocols; the activities for these two scenarios will therefore be carefully synchronised. The activity for the third scenario, which essentially adds quantitative aspects to the second, will start one year after the beginning of the project, when the types for the first two scenarios will be mature enough.

Because the project is interdisciplinary, merging competences in model checking with competences in type theory, activities at the beginning will be planned to acquaint all participants with the state of the art of the areas they will work on. Indeed, this interdisciplinary approach to types is one of the most innovative elements of the project; it requires, however, continual analysis of the project's advancement. The team will organise monthly meetings (physical or virtual) to monitor progress and, in case of problems, elicit a revised plan and possibly reassign tasks among participants. Given the two-year length of the project, it is important to follow developments in other settings and emerging challenges due to technological evolution; further opportunities to import newly-developed methodologies from related areas will therefore be pursued.