Publications

Where is VALDO? VAscular Lesions Detection and segmentatiOn challenge at MICCAI 2021

Abstract

Imaging markers of cerebral small vessel disease provide valuable information on brain health, but their manual assessment is time-consuming and hampered by substantial intra- and interrater variability. Automated rating may benefit biomedical research, as well as clinical assessment, but the diagnostic reliability of existing algorithms is unknown. Here, we present the results of the VAscular Lesions Detection and segmentatiOn (Where is VALDO?) challenge that was run as a satellite event at the international conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2021. This challenge aimed to promote the development of methods for automated detection and segmentation of small and sparse imaging markers of cerebral small vessel disease, namely enlarged perivascular spaces (EPVS) (Task 1), cerebral microbleeds (Task 2), and lacunes of presumed vascular origin (Task 3), while leveraging weak and noisy labels. Overall, 12 teams participated in the challenge, proposing solutions for one or more tasks (4 for Task 1-EPVS, 9 for Task 2-Microbleeds, and 6 for Task 3-Lacunes). Multi-cohort data were used in both training and evaluation. Results showed a large variability in performance both across teams and across tasks, with promising results notably for Task 1-EPVS and Task 2-Microbleeds, whereas the results for Task 3-Lacunes are not yet practically useful. The challenge also highlighted inconsistent performance across cases, which may deter use at the individual level while still proving useful at a population level.

Continue reading

Bridging human concepts and computer vision for explainable face verification

By Miriam Doh, Caroline Mazini-Rodrigues, Nicolas Boutry, Laurent Najman, Matei Mancas, Hugues Bersini

2023-10-10

In 2nd international workshop on emerging ethical aspects of AI (BEWARE-23)

Abstract

With Artificial Intelligence (AI) influencing the decision-making process of sensitive applications such as Face Verification, it is fundamental to ensure the transparency, fairness, and accountability of decisions. Although Explainable Artificial Intelligence (XAI) techniques exist to clarify AI decisions, it is equally important to make these decisions interpretable to humans. In this paper, we present an approach that combines computer and human vision to increase the interpretability of the explanations produced for a face verification algorithm. In particular, we are inspired by the human perceptual process to understand how machines perceive the human-semantic areas of a face during face comparison tasks. We use MediaPipe, which provides a segmentation technique that identifies distinct human-semantic facial regions, enabling the analysis of the machine's perception. Additionally, we adapted two model-agnostic algorithms to provide human-interpretable insights into the decision-making process.
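
As a rough illustration of this kind of region-level, model-agnostic analysis (not the exact algorithms adapted in the paper), the sketch below scores each human-semantic facial region by occluding it and measuring the drop in verification similarity. The `embed` function and the `regions` masks (which could come, for instance, from a MediaPipe segmentation) are assumed to be provided by the reader.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def region_importance(img_a, img_b, regions, embed):
    """Occlusion-style, model-agnostic attribution per facial region.

    img_a, img_b : HxWx3 uint8 face images (a verification pair).
    regions      : dict mapping a region name (e.g. 'nose', 'left_eye')
                   to a boolean HxW mask, e.g. derived from a MediaPipe
                   segmentation (hypothetical, supplied by the caller).
    embed        : function mapping an image to an identity embedding.
    """
    base = cosine(embed(img_a), embed(img_b))
    scores = {}
    for name, mask in regions.items():
        occluded = img_a.copy()
        occluded[mask] = 0                      # hide one semantic region
        drop = base - cosine(embed(occluded), embed(img_b))
        scores[name] = drop                     # large drop = region matters
    return scores
```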

Continue reading

Refinement of a ligand activity and representation of topological pharmacophores in a colored network

By Maroua Lejmi, Damien Geslin, Bertrand Cuissart, Ilef Ben Slima, Nida Meddouri, Ronan Bureau, Alban Lepailleur, Amel Borgi, Jean-Luc Lamotte

2023-10-01

In Proceedings of the 11èmes journées de la société française de chémoinformatique

Abstract

The study of Structure-Activity Relationships is a critical aspect of drug design. It enables us to examine ligand interactions and performance towards specific targets, and then to design effective drugs for treating diseases or improving existing medical therapies. In this context, we specifically study the activity of ligands towards kinases using the BCR-ABL dataset. This work introduces a refinement method for the activity of molecules. Instead of considering affinity as a binary activity, a molecule being either active or inactive, the compounds were partitioned into four classes according to their activity: very active, moderately active, slightly active, and inactive. This activity is later used to evaluate molecular descriptors called topological pharmacophores [1]. These pharmacophores provide essential information by representing the key structural features of a molecule. Their quality is determined by measuring their “growth rate”, which corresponds to the ratio of active molecules over inactive ones among the molecules supported by the pharmacophore. In our work, the calculation of the growth rate is based on the activity classes that we have created. Consequently, we obtain three growth-rate measurements, each related to one class of activity. In addition, we proposed to convert this new information on the quality of the pharmacophores into a visual representation called “The Pharmacophore Network” [2]. The latter is a graph whose nodes represent the pharmacophores and whose edges represent the graph-edit distance that separates them. Our goal was to structure the pharmacophore space more finely and to visually detect interesting areas that can be explored. For this purpose, we integrated colors into the Pharmacophore Network, where each color refers to a class of activity.
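
A minimal sketch of the class-wise growth-rate computation described above, assuming each molecule supporting a pharmacophore carries one of the four activity labels; the function name and the example counts are illustrative, not taken from the paper.

```python
from collections import Counter

CLASSES = ("very active", "moderately active", "slightly active", "inactive")

def growth_rates(supporting_molecules):
    """Class-wise growth rates for one topological pharmacophore.

    supporting_molecules : list of activity-class labels (one per molecule
    supported by the pharmacophore), drawn from CLASSES.
    Returns one growth rate per active class: the ratio of molecules of
    that class to inactive molecules among the supporting set.
    """
    counts = Counter(supporting_molecules)
    inactive = counts["inactive"]
    return {
        cls: counts[cls] / inactive if inactive else float("inf")
        for cls in CLASSES[:-1]
    }

# Example: a pharmacophore supported by 12 molecules.
support = ["very active"] * 5 + ["moderately active"] * 3 + \
          ["slightly active"] * 2 + ["inactive"] * 2
print(growth_rates(support))
# {'very active': 2.5, 'moderately active': 1.5, 'slightly active': 1.0}
```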

Continue reading

How to compute the convex hull of a binary shape? A real-time algorithm to compute the convex hull of a binary shape

By Jonathan Fabrizio

2023-09-13

In Journal of Real-Time Image Processing

Abstract

In this article, we present an algorithm to compute the convex hull of a binary shape. Efficient algorithms to compute the convex hull of a set of points have been known for a long time. For a binary shape, the common practice is to rely on one of them: all pixels of the shape are first listed, and then the convex hull is computed on this list of points. The computed convex hull is finally rasterized to provide the final result, as is done, for example, in the popular scikit-image library. To compute the convex hull of an arbitrary set of points, the points of the list that lie on the outline of the convex hull must be selected (for simplicity, we call these points “extrema”). To find them in an arbitrary set of points, it is necessary to browse all the points, but this is not required in the particular case of a binary shape. In this specific situation, the extrema necessarily belong to the inner boundary of the shape, so it is a waste of time to browse all the pixels: most of them can be discarded when searching for the extrema. Based on this analysis, we propose a new method to compute the convex hull dedicated to binary shapes. This method browses as few pixels as possible to select a small subset of boundary pixels, and then deduces the convex hull from this subset only. As the size of the subset is very small, the convex hull is computed in real time. We compare our approach with commonly used methods and library functions to show that it is faster. This comparison shows that, for a very small shape, the difference is small, but when the area of the shape grows, it becomes significant. This leads us to conclude that replacing the current convex hull functions for binary shapes in frequently used libraries with our algorithm would bring a substantial improvement.
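
As a rough illustration of the underlying idea (restricting the hull computation to the inner boundary of the shape rather than to every pixel), here is a minimal NumPy/SciPy sketch. It is not the paper's algorithm, which browses even fewer pixels, but it shows why the boundary suffices.

```python
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial import ConvexHull

def hull_from_boundary(shape_mask):
    """Convex hull of a binary shape, computed from its inner boundary only.

    shape_mask : boolean HxW array, True inside the shape.
    Returns the (row, col) coordinates of the hull vertices.
    """
    # Inner boundary = shape pixels that disappear after one erosion.
    boundary = shape_mask & ~binary_erosion(shape_mask)
    pts = np.argwhere(boundary)          # far fewer points than the full shape
    hull = ConvexHull(pts)
    return pts[hull.vertices]

# Example: a filled disc of radius 40 in a 128x128 image.
yy, xx = np.mgrid[:128, :128]
disc = (yy - 64) ** 2 + (xx - 64) ** 2 <= 40 ** 2
print(hull_from_boundary(disc))          # a short list of boundary vertices
```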

Continue reading

Interactive and real-time typesetting for demonstration and experimentation: ETAP

By Didier Verna

2023-09-01

In TUGboat

Abstract

In general, typesetting experimentation is not a very practical thing to do. WYSIWYG typesetting systems are very reactive but do not offer highly configurable algorithms, and TeX, with its separate development / compilation / visualization phases, is not as interactive as its WYSIWYG competitors. Being able to experiment with typesetting algorithms interactively and in real time is nevertheless desirable, for instance for demonstration purposes, or for rapid prototyping and debugging of new ideas. We present ETAP (Experimental Typesetting Algorithms Platform), a tool written to ease typesetting experimentation and demonstration. ETAP currently provides several paragraph justification algorithms, all with many configuration options such as kerning, ligatures, flexible spaces, sloppiness, hyphenation, etc. The resulting paragraph is displayed with many visual hints as well, such as paragraph, character, and line boxes, baselines, over/underfullness hints, hyphenation clues, etc. All these parameters, along with the desired paragraph width, are adjustable interactively through a GUI, and the resulting paragraph is displayed and updated in real time. But ETAP can also be used without, or in conjunction with, the GUI, as a scriptable application. In particular, it is able to generate all sorts of statistical reports or charts on the behavior of the various algorithms: for instance, the number of over/underfull boxes per paragraph width, the average compression or stretch ratio per line, or whatever else you want. This allows you to quickly demonstrate or evaluate the comparative behavior or merits of the provided algorithms, or of any others you may want to add to the pool.

Continue reading

Layered controller synthesis for dynamic multi-agent systems

By Emily Clement, Nicolas Perrin-Gilbert, Philipp Schlehuber-Caissier

2023-09-01

In Proceedings of the 21st international conference on formal modeling and analysis of timed systems (FORMATS’23)

Abstract

In this paper we present a layered approach to the multi-agent control problem, decomposed into three stages, each building upon the results of the previous one. First, a high-level plan for a coarse abstraction of the system is computed, relying on parametric timed automata augmented with stopwatches, as they allow the simplified dynamics of such systems to be modeled efficiently. In the second stage, an SMT formulation takes the high-level plan, which mainly handles the combinatorial aspects of the problem, and refines it into a more dynamically accurate solution. These two stages are collectively referred to as the SWA-SMT solver. They are correct by construction but lack a crucial feature: they cannot be executed in real time. To overcome this, we use the SWA-SMT solutions as the initial training dataset for our last stage, which aims at obtaining a neural network control policy. We use reinforcement learning to train the policy, and show that the initial dataset is crucial for the overall success of the method.
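
As a hedged illustration of the last stage's starting point (using the SWA-SMT solutions as an initial supervised dataset before reinforcement learning), the sketch below performs simple behavior cloning with PyTorch; the state/action shapes, network size, and training loop are assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

# Hypothetical offline dataset: (state, action) pairs extracted from
# SWA-SMT solutions; the shapes here are illustrative assumptions.
states = torch.randn(1024, 8)     # e.g. positions/velocities of the agents
actions = torch.randn(1024, 2)    # e.g. accelerations chosen by the plan

policy = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Behavior cloning: supervised pretraining on the SWA-SMT dataset.
for epoch in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(policy(states), actions)
    loss.backward()
    opt.step()

# The pretrained policy would then be fine-tuned with a standard
# reinforcement-learning algorithm in the actual environment.
```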

Continue reading

Open Access to Data about Silk Heritage: A Case Study in Digital Information Sustainability

Abstract

This article builds on work conducted and lessons learned within SILKNOW, a research project that aimed at enhancing the preservation and digital dissemination of silk heritage. Taking the project and this heritage typology as a case study in the digital transformation of cultural heritage institutions, it illustrates specific challenges that these institutions must face and demonstrates a few innovative answers to meet those challenges. The methodology combines approaches typical of the humanities with others usual in ICT: it is inductive regarding materials and methods (consisting of a detailed review of existing online repositories and research projects devoted to textile heritage) and descriptive for the results and discussion (which explain at length the development of some tools and resources that responded to the needs detected in the previous analysis). The article reports on the state of the art and recent developments in the field of textile heritage, the tools implemented to allow the semantic access and text analysis of descriptive records associated with silk fabrics, and the spatiotemporal visualization of that information. Finally, it argues that institutional policies, namely the creation and free dissemination of open data related to cultural heritage, are just as important as technical developments, and shows why any future effort in these areas should take data sustainability into account, in both its technical and institutional aspects, since this is the most responsible and reasonable approach in terms of efficient resource allocation.

Continue reading

Dissecting ltlsynt

Abstract

ltlsynt is a tool for synthesizing a reactive circuit satisfying a specification expressed as an LTL formula. ltlsynt generally follows a textbook approach: the LTL specification is translated into a parity game whose winning strategy can be seen as a Mealy machine modeling a valid controller. This article details each step of this approach, and presents various refinements integrated over the years. Some of these refinements are unique to ltlsynt: for instance, ltlsynt supports multiple ways to encode a Mealy machine as an AIG circuit, features multiple simplification algorithms for the intermediate Mealy machine, and bypasses the usual game-theoretic approach for some subclasses of LTL formulas in favor of more direct constructions.
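
To illustrate how a winning strategy can be read as a Mealy machine acting as a controller, here is a toy Python sketch; the example specification and transition table are made up for illustration and do not reflect ltlsynt's actual output format.

```python
class MealyController:
    """A winning strategy viewed as a Mealy machine: in each state, the
    controller reads the environment's inputs and reacts with outputs."""

    def __init__(self, transitions, initial):
        # transitions: {(state, frozenset of true inputs): (next state, outputs)}
        self.transitions = transitions
        self.state = initial

    def step(self, true_inputs):
        self.state, outputs = self.transitions[(self.state, frozenset(true_inputs))]
        return outputs

# Toy controller for the (hypothetical) specification G(req -> X grant):
# whenever 'req' is read, 'grant' is emitted at the next step.
table = {
    ("idle",    frozenset()):        ("idle",    set()),
    ("idle",    frozenset({"req"})): ("pending", set()),
    ("pending", frozenset()):        ("idle",    {"grant"}),
    ("pending", frozenset({"req"})): ("pending", {"grant"}),
}
ctrl = MealyController(table, "idle")
print([ctrl.step(i) for i in [{"req"}, set(), {"req"}, {"req"}, set()]])
# [set(), {'grant'}, set(), {'grant'}, {'grant'}]
```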

Continue reading

Experimenting with additive margins for contrastive self-supervised speaker verification

By Théo Lepage, Réda Dehak

2023-08-20

In Proceedings of the 24th annual conference of the international speech communication association (Interspeech 2023)

Abstract

Continue reading