Seminars

Team Seminars

Each research team organizes its own thematic seminar series.

DIENS General Seminar

This seminar gathers general talks designed to be accessible to everyone (undergraduate students, PhD students, researchers).

Regular attendance during one's studies at the ENS (about 24 seminars) can earn 3 ECTS credits towards the ENS diploma.
Students must make sure their details are recorded on the attendance sheet at each seminar, and must request validation of these ECTS credits during their final year.

The person in charge is Marc Pouzet.

Talks take place at 45, rue d'Ulm, 75005 Paris, usually in the "Salle Henri Cartan". Some talks are filmed and can be watched at: savoirs.ens.fr

2020

*Salle Henri Cartan*

How do you synthesize ranking information into a single, most representative global ranking?
This can be modeled as an optimization problem, with several possible choices of objective.
Its theoretical formulation, related to the feedback arc set problem in directed graphs, is difficult to solve exactly.
We will present simple heuristics and analyze them from the viewpoint of approximation quality.
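As a concrete illustration (my own sketch, not necessarily a heuristic from the talk), rankings can be aggregated Copeland-style: order the items by their number of pairwise majority wins.

```python
from itertools import combinations

def aggregate_rankings(rankings):
    """Aggregate several rankings into one global ranking by sorting
    items on pairwise majority wins (a simple Copeland-style heuristic).
    Each ranking is a list of items, best first."""
    items = set(rankings[0])
    wins = {x: 0 for x in items}
    for x, y in combinations(items, 2):
        # In how many input rankings does x precede y?
        x_before = sum(r.index(x) < r.index(y) for r in rankings)
        if x_before * 2 > len(rankings):
            wins[x] += 1
        elif x_before * 2 < len(rankings):
            wins[y] += 1
    # More pairwise wins -> earlier in the global ranking.
    return sorted(items, key=lambda v: -wins[v])

voters = [["a", "b", "c"], ["a", "c", "b"], ["b", "a", "c"]]
print(aggregate_rankings(voters))  # ['a', 'b', 'c']
```

This heuristic runs in polynomial time but does not always minimize the number of pairwise disagreements, which is exactly where the approximation analysis comes in.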


*Salle Henri Cartan*

Computers are widely used in timing-sensitive applications such as controlling vehicles, factories, and medical devices. Today, real-time behavior of programs is a property that emerges from implementations rather than a property that is specified in models. Control over timing behavior of software is difficult to achieve, and timing behavior is neither predictable nor repeatable. I will argue that this problem can be solved by making a commitment to deterministic models that embrace temporal properties as an integral part of the modeling paradigm. I will explain how such models differ from the prevailing practice of modeling low-level computer architectures and estimating worst-case execution time. I will show moreover that deterministic timing is practical today without sacrificing performance for many useful workload models. Specifically, I will describe a class of computer architectures called PRET Machines that deliver deterministic timing with no loss of performance for a family of real-time problems consisting of sporadic event streams with hard deadlines.

*Salle Henri Cartan*

Systems biology is a subfield at the interface between computer science, mathematics, and biology. The goal is to develop original methods for modeling, using dynamical systems, the response of living organisms (from bacteria to humans) to perturbations. The field faces a scalability problem, in particular when modeling the possible molecular interactions between hundreds of bacteria and their host from incomplete data. In this talk, we will explain how an approach based on symbolic reasoning and combinatorial problem solving can fill the gaps left by the usual approaches for identifying dynamical systems, with the aim of advancing biological knowledge.

*Salle Henri Cartan*

2019

*Salle Henri Cartan*

Future information and communication networks will certainly consist of both classical and quantum devices, some of which are expected to be dishonest, with various degrees of functionality, ranging from simple routers to servers executing quantum algorithms. Most of the technology required to achieve advanced stages of a quantum internet is still in its infancy, hence it is very hard to predict the potential use cases. Several applications, however, have already been characterized for the different stages of a quantum network, such as secure delegated quantum computing, quantum key distribution, clock synchronization, leader election, quantum digital signatures, and quantum money, among others. Such applications promise to impact and transform society on multiple levels, including communication, access to information, and security. It would therefore be extremely useful to have a standard framework for describing the protocols relevant to the quantum internet, so that they become available to the diverse quantum information science community. We take the first step in this direction and call this initiative the Quantum Protocol Zoo: an organised collection of protocols that could be implemented (or simulated) in the coming years. In this lecture I present an overview of the field through this new platform and the various communities contributing to it.

*Salle U/V*

*Salle Henri Cartan*


*Salle Henri Cartan*

Fault-tolerant distributed protocols play an important role in many critical/high-availability applications.
The literature distinguishes two main classes, asynchronous vs. synchronous protocols, based on the semantics of parallel composition. Asynchronous protocols are crucial parts of many distributed systems because networks are natively asynchronous, so these algorithms perform better than their synchronous counterparts.
However, their correctness is very hard to establish, due to the challenges of concurrency, faults (message loss or process crash), buffered message queues, and message re-ordering in the network.
In contrast, reasoning about synchronous protocols is simpler, as they are structured in rounds and one only has to consider specific global states at round boundaries. Therefore, synchronous algorithms are less error prone and formal verification becomes tractable.
The question we address in this talk is how to connect these worlds, such that we can take advantage of the more-understandable and verifiable synchronous protocols when reasoning about asynchronous protocols.
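The round structure that makes synchronous protocols easier to reason about can be sketched as a lockstep simulation loop. The `Flooder` example below is an invented toy (flooding the maximum value over a fully connected network), not a protocol from the talk.

```python
def run_synchronous(processes, rounds):
    """Lockstep execution: in each round every process first sends,
    then all messages are delivered, then every process updates.
    Only the global states at round boundaries need inspection."""
    for _ in range(rounds):
        outbox = {pid: p.send() for pid, p in processes.items()}
        for pid, p in processes.items():
            inbox = [m for sender, m in outbox.items() if sender != pid]
            p.receive(inbox)
    return {pid: p.state for pid, p in processes.items()}

class Flooder:
    """Toy flooding process: keeps the maximum value it has seen."""
    def __init__(self, value):
        self.state = value
    def send(self):
        return self.state
    def receive(self, msgs):
        self.state = max([self.state] + msgs)

procs = {i: Flooder(i) for i in range(4)}
print(run_synchronous(procs, rounds=1))  # every process learns the max, 3
```

In an asynchronous setting, the same protocol would need to cope with arbitrary message delays and interleavings; here the round boundary gives a single point at which to state and check invariants.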

*Amphithéâtre Dussane*

The notion of a *model* is well known in physics (a set of mathematical equations that help understand and predict a natural phenomenon), or in formal logic and *model-checking* (a purely abstract object), but what does it mean for hardware/software systems (which are neither purely abstract nor natural objects), and what is the intended use of such hardware/software models?

We will explain what it means to design abstract models for hardware/software systems, the intrinsic trade-off between exploitability (simulation speed for instance) and the level of details included in the model, and how to use these models for simulation and validation purposes.

The emphasis will be on defining operational models of the hardware, on which real software can be executed. This approach is used for the early exploration and validation of a hardware/software system, w.r.t. its functional and extra-functional properties like energy consumption.
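As a toy illustration of such an operational model, an interpreter can execute (toy) code while accumulating extra-functional metrics. The three-instruction ISA and its per-instruction cycle and energy costs below are invented for this sketch.

```python
# Hypothetical per-instruction costs: opcode -> (cycles, energy in nJ).
COSTS = {"add": (1, 1.0), "mul": (3, 2.5), "load": (4, 3.0)}

def simulate(program):
    """Operational model of a toy processor: runs the program and
    tracks cycles and energy alongside the functional result."""
    regs, cycles, energy = {}, 0, 0.0
    for op, dst, a, b in program:
        if op == "add":
            regs[dst] = regs.get(a, 0) + regs.get(b, 0)
        elif op == "mul":
            regs[dst] = regs.get(a, 0) * regs.get(b, 0)
        elif op == "load":
            regs[dst] = a  # load an immediate value
        c, e = COSTS[op]
        cycles += c
        energy += e
    return regs, cycles, energy

prog = [("load", "r1", 6, None), ("load", "r2", 7, None),
        ("mul", "r3", "r1", "r2")]
regs, cycles, energy = simulate(prog)
print(regs["r3"], cycles, energy)  # 42 11 8.5
```

Adding detail (pipelines, caches) would sharpen the cycle and energy estimates at the cost of simulation speed, which is exactly the trade-off discussed above.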

*Salle Henri Cartan, 2e sous sol, Aile Rataud, École normale supérieure, 45 rue d'Ulm, 75005 Paris.*

Models that can be turned into physical systems are usually called programs, while those that are obtained by studying a pre-existing system are usually called abstractions. Thus, programs and abstractions are classes of models that are distinguished by their lineage to systems: programs beget systems, while abstractions are derived from them. Modelling can therefore be defined as the art of inferring “well behaved” abstractions of systems, while programming is the activity of producing systems that are “well-behaved” with respect to their original model. Proving that programs are “sound” with respect to a system is the aim of the well established discipline of program verification. Interestingly, claiming that a system’s abstraction is valid is always relative to an abstraction barrier, and often a disputable statement in the context of evolved systems, for which no specification is at hand. In Biology, good abstractions are generally those that reproduce data well. It is therefore difficult to “prove” that an abstraction is correct because an infinite amount of tests would be needed to assert this. In this presentation we advocate the idea that a biological model should be considered correct when it has been provably obtained by integration of falsifiable data. In this context, we present an “executable knowledge representation” of biological mechanisms in the form of a graph rewriting language named Kappa.
After a brief introduction to molecular biology, we will discuss expressiveness and algorithmic issues of graph rewriting techniques.
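To give a flavor of rule-based graph rewriting in plain Python (this is a much-simplified illustration, not actual Kappa syntax, where a binding rule is written roughly `A(x), B(y) -> A(x!1), B(y!1)`), consider a soup of typed agents and a rule that binds a free A to a free B:

```python
def apply_binding_rule(agents, bonds, a_type, b_type):
    """One application of a binding rule: connect the first free agent
    of type a_type to the first free agent of type b_type.
    agents: {id: type}; bonds: set of frozensets of bound agent ids.
    (Assumes a_type != b_type; Kappa itself is far more general.)"""
    bound = {i for bond in bonds for i in bond}
    free_a = next((i for i, t in agents.items()
                   if t == a_type and i not in bound), None)
    free_b = next((i for i, t in agents.items()
                   if t == b_type and i not in bound), None)
    if free_a is None or free_b is None:
        return False  # the rule's left-hand side has no match
    bonds.add(frozenset((free_a, free_b)))
    return True

agents = {0: "A", 1: "A", 2: "B"}
bonds = set()
while apply_binding_rule(agents, bonds, "A", "B"):
    pass
print(len(bonds))  # only one B is available, so exactly 1 bond forms
```

The combinatorial point is that rules are applied to local patterns, so the reachable graphs never need to be enumerated explicitly.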

*Salle Henri Cartan, 2e sous sol, Aile Rataud, École normale supérieure, 45 rue d'Ulm, Paris 5ème.*

Mathematics is both the language and the instrument that connects our abstract understanding with the physical world, and computer science is the mechanism we use to realise the full potential of our mathematics. Anyone with an aptitude and training in one invariably has an understanding of the other. Together, these two intrinsically-linked disciplines run practically everything in our society. Thus, knowledge of these quickly translates to substantial knowledge and influence on the way the world works. But those who have the greatest ability to understand and manipulate the world hold the greatest capacity to do damage and inflict harm. In this talk I’ll explain that yes, there is ethics in mathematics and computer science, and that it is up to us as the ones wielding these abstract technical tools to make good ethical choices, in order to prevent our work from becoming harmful.

*Salle Henri Cartan, 2e sous sol, Aile Rataud, École normale supérieure, 45 rue d'Ulm, 75005 Paris.*

For normaliens taking the L3 in computer science, or the joint L3 in computer science and mathematics, in 2018-2019.

*Salle Henri Cartan, 2e sous sol, Aile Rataud, École normale supérieure, 45 rue d'Ulm, 75005 Paris.*

In this talk I will attempt to answer the following questions I have been asked quite often: What are the current challenges in dynamic graph algorithms? What are good starting points for people who want to try working in this field? The talk will focus on challenges for basic graph problems (e.g. connectivity, shortest paths, maximum matching), and will survey some existing upper and lower bound results and techniques.
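For readers new to the area, a natural entry point is the purely incremental case: connectivity under edge insertions only, solvable with a union-find structure. The hard territory the talk is concerned with begins when deletions must be supported as well.

```python
class IncrementalConnectivity:
    """Connectivity under edge insertions only, via union-find with
    path halving. Supporting deletions too (the fully dynamic case)
    is much harder and is where the open challenges lie."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def insert_edge(self, u, v):
        self.parent[self.find(u)] = self.find(v)

    def connected(self, u, v):
        return self.find(u) == self.find(v)

g = IncrementalConnectivity(5)
g.insert_edge(0, 1)
g.insert_edge(3, 4)
print(g.connected(0, 1), g.connected(0, 3))  # True False
g.insert_edge(1, 3)
print(g.connected(0, 4))  # True
```

Union-find gives near-constant amortized time per operation, but it cannot undo a union, which is why fully dynamic connectivity needs entirely different techniques.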

*Salle Henri Cartan, 2e sous sol, Aile Rataud, École normale supérieure, 45 rue d'Ulm, Paris 5ème*

*Salle Henri Cartan, 2e sous sol, Aile Rataud, École normale supérieure, 45 rue d'Ulm, Paris 5ème*

2018

*Salle Henri Cartan, 2e sous sol, Aile Rataud, École normale supérieure, 45 rue d'Ulm, Paris 5ème.*

*Salle Henri Cartan, 2e sous sol, Aile Rataud, École normale supérieure, 45 rue d'Ulm, Paris 5ème*

In 1941, Claude Shannon introduced a continuous-time analog model of computation, namely the General Purpose Analog Computer (GPAC). The GPAC is a physically feasible model in the sense that it can be implemented in practice through the use of analog electronics or mechanical devices. It can be proved that the functions computed by a GPAC are precisely the solutions of a special class of differential equations where the right-hand side is a polynomial. Analog computers have since been replaced by their digital counterparts. Nevertheless, one can wonder how the GPAC compares to Turing machines.
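For instance, sin(t) is GPAC-computable because it solves the polynomial system y1' = y2, y2' = -y1 with y1(0) = 0, y2(0) = 1. The sketch below integrates this system with a plain explicit Euler scheme (the numerical method is my choice for illustration, not part of the GPAC model itself):

```python
import math

def gpac_sine(t, steps=100_000):
    """Approximate sin(t) by integrating the polynomial ODE system
    y1' = y2, y2' = -y1, y1(0) = 0, y2(0) = 1 with explicit Euler."""
    h = t / steps
    y1, y2 = 0.0, 1.0
    for _ in range(steps):
        y1, y2 = y1 + h * y2, y2 - h * y1
    return y1

print(abs(gpac_sine(1.0) - math.sin(1.0)) < 1e-4)  # True
```

The right-hand sides are polynomials in (y1, y2), which is exactly the class of systems the GPAC characterization refers to.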

A few years ago, it was shown that Turing-based paradigms and the GPAC have the same computational power. However, this result did not shed any light on what happens at the computational complexity level. In other words, analog computers make no difference to what can be computed; but maybe they could compute faster than a digital computer. A fundamental difficulty of continuous-time models is to define a proper notion of complexity. Indeed, a troubling problem is that many models exhibit the so-called Zeno phenomenon, also known as space-time contraction.

In this talk, I will present results from my thesis that make several fundamental contributions to these questions. We show that the GPAC has the same computational power as the Turing machine at the complexity level as well. As a side effect, we also provide a purely analog, machine-independent characterization of P and of Computable Analysis.

I will present some recent work on the universality of polynomial differential equations. We show that, when we impose no restrictions at all on the system, it is possible to build a fixed equation that is universal, in the sense that it can approximate arbitrarily well any continuous curve over R, simply by changing the initial condition of the system.

If time allows, I will also mention some recent applications of this work showing that chemical reaction networks are strongly Turing complete with the differential semantics.

*Salle Henri Cartan, 2e sous sol, Aile Rataud, École normale supérieure, 45 rue d'Ulm, Paris 5ème.*

Recently, the field of data compression has seen new developments following the introduction of asymmetric numeral systems (ANS) by J. Duda. We propose to revisit the underlying concepts through a simplified version of that initial publication. This will be the occasion for a playful tour of the compression field, seen mainly from the engineering side: information, communication, coding, etc.
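The core of ANS is remarkably small. Below is a hedged, non-streaming sketch of the rANS variant using Python's unbounded integers; production coders additionally renormalize the state into a fixed-width window and emit bits, which this sketch omits.

```python
def rans_encode(message, freq):
    """Non-streaming rANS: the whole message is folded into one big
    integer. freq maps each symbol to its integer frequency."""
    total = sum(freq.values())
    cum, acc = {}, 0
    for s in sorted(freq):  # cumulative frequency of each symbol
        cum[s], acc = acc, acc + freq[s]
    x = 1
    for s in reversed(message):  # encode in reverse so decoding runs forward
        x = (x // freq[s]) * total + cum[s] + (x % freq[s])
    return x, total, cum

def rans_decode(x, n, freq, total, cum):
    """Recover n symbols from the encoder state x."""
    out = []
    for _ in range(n):
        slot = x % total
        s = max(s for s in cum if cum[s] <= slot)  # symbol owning this slot
        out.append(s)
        x = freq[s] * (x // total) + slot - cum[s]
    return "".join(out)

freq = {"a": 3, "b": 1}
x, total, cum = rans_encode("aab", freq)
print(rans_decode(x, 3, freq, total, cum))  # aab
```

Frequent symbols grow the state x more slowly than rare ones, which is how rANS approaches the entropy bound while staying a pair of integer divisions per symbol.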

*Salle Henri Cartan, 2e sous sol, Aile Rataud, École normale supérieure, 45 rue d'Ulm, Paris 5ème.*

The idea of Probabilistic Programming is to use the expressiveness of programming languages to build probabilistic models. A common approach is to take a general-purpose language and extend it with (1) a function to sample a value from a distribution, (2) a function to condition the values of variables in a program via observations, and (3) an inference procedure that builds a distribution for a program using the two previous constructs.
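A minimal, generic sketch of these three ingredients (my own, not the system described in the talk), with likelihood-weighted importance sampling playing the role of the inference procedure:

```python
import random

def infer(model, n_particles=10_000):
    """(3) Inference by likelihood weighting: run the model many times,
    weight each run by its observations, and average the results."""
    total_w, weighted_sum = 0.0, 0.0
    for _ in range(n_particles):
        trace = {"w": 1.0}
        value = model(trace)
        total_w += trace["w"]
        weighted_sum += trace["w"] * value
    return weighted_sum / total_w

def sample_bernoulli(p):
    return random.random() < p  # (1) draw a value from a distribution

def observe(trace, likelihood):
    trace["w"] *= likelihood  # (2) condition on observed data

# Hypothetical model: a coin's bias is either 0.9 or 0.5 with equal prior
# probability; we observed heads and ask how likely the 0.9 bias is.
def coin_model(trace):
    biased = sample_bernoulli(0.5)
    p_heads = 0.9 if biased else 0.5
    observe(trace, p_heads)  # we saw heads
    return 1.0 if biased else 0.0

print(infer(coin_model))  # close to 0.9 / (0.9 + 0.5) ≈ 0.64
```

A reactive variant has to run this kind of inference continuously, over processes that never terminate, which is the extension described below.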

Following this approach, we propose to extend a reactive programming language with probabilistic constructs. This new language enables the modeling of probabilistic reactive systems, that is, probabilistic models in constant interaction with their environment. Examples of such systems include time-series prediction, agent-based systems, and infrastructure self-tuning.

To demonstrate this approach, we started from ReactiveML, a reactive extension of OCaml that supports parallel composition of processes, communication through signals, preemption, and suspension. We extend the language with classic probabilistic constructs (sample and factor, à la WebPPL) and propose an inference scheme for reactive processes, that is, non-terminating functions.

This is joint work with Guillaume Baudart, Martin Hirzel, Kiran Kate, and Avraham Shinnar.

*Salle Henri Cartan, 2e sous sol, Aile Rataud, École normale supérieure, 45 rue d'Ulm, Paris 5ème.*

Verification activities account for about 70% of the overall effort in the development of critical avionics software. Given the steady increase in the size and complexity of this kind of software, classical V&V processes, based on massive testing campaigns and complementary intellectual reviews and analyses, no longer scale up at reasonable cost.

On the other hand, some formal verification techniques have been shown to cope with real-world industrial software, while improving automation. Such techniques have therefore been introduced into avionics verification processes, in order to replace or complement legacy methods.

We will show which formal methods are being used, and how industrial processes are being transformed to take advantage of them, i.e. improve industrial efficiency while maintaining the safety and reliability of avionics systems.

*Salle Henri Cartan, 2e sous sol, Aile Rataud, École normale supérieure, 45 rue d'Ulm, Paris 5ème.*

Working on various industrial case-studies in the past decade, we realized that, given the very quick evolution of hardware platforms, and the growing complexity of embedded software, it is, more than ever, necessary to start with a very general point of view.
On the one hand, we need to understand the physical timing constraints and tolerances as determined by control engineers for a given application (sampling frequencies and jitters, end-to-end latencies, etc.), without mentioning software entities like tasks or scheduling.
On the other hand we need to characterize precisely the computation and communication performances of a given execution platform, and assess their predictability.
Designing the implementation means finding space and time allocations for the application functional parts, in such a way that the physical constraints are met by construction, and in a provable way for certification authorities.
We will use examples with the Kalray MPPA, to illustrate the nature and specification of the timing constraints, the implementation schemes, and how the constraints can be met.

*Salle Henri Cartan, 2e sous sol, Aile Rataud, École normale supérieure, 45 rue d'Ulm, Paris 5ème*

Tor is a way to access the Internet securely and privately.

It's a way to escape the everyday surveillance that companies and governments impose on people. In this talk we will present Tor, both as software and as an organization, and give an overview of new features that have been in development lately.

*Salle Henri Cartan, 2e sous sol, Aile Rataud, École normale supérieure, 45 rue d'Ulm, Paris 5ème*

If biological big data has underlying "big mechanisms", there is little hope that any mental construction can ever comprehend these "large explanatory models". Moreover, because of the combinatorics of protein-protein interactions, no static map, database, or ontology list can suffice to complement human brains in the design of these models. These simple observations have taken a large swath of molecular biologists away from any (formal or semi-formal) modeling activity. Most experimental biologists give little credit to existing models, which they know are partial, outdated, and of little predictive power. Yet, while rejecting models, experimental biologists have to rely on implicit mental constructs in order to decide which experiment to set up. Thus every biologist has a model, either implicitly or explicitly. In this presentation, we argue that the lack of success stories for formal models in systems biology is not due to the impossibility of having accurate and useful models, but rather to the fact that models of interactions are being confused with explanations of biological functions. Our approach is to view models as programs and biological functions as computations. If models are to be treated as programs, what language one should use to assemble them is a critical point. In this talk we will present the issue of modeling biological "big mechanisms", i.e., protein-protein interaction networks whose specification is too combinatorial to be explicitly represented. In particular, we will discuss the lessons that can be learned from the DARPA "BIGMEC" project, in which we took part, and which involved joint efforts from AI experts, biologists, and computer scientists working on language theory.