Publications

A CP-based automatic tool for instantiating truncated differential characteristics

By François Delobel, Patrick Derbez, Arthur Gontier, Loïc Rouquette, Christine Solnon

2023-12-01

In Progress in cryptology – INDOCRYPT 2023

Abstract

An important criterion to assess the security of a cryptographic primitive is its resistance against differential cryptanalysis. For word-oriented primitives, a common technique to determine the number of rounds required to ensure immunity against differential distinguishers is to consider truncated differential characteristics and to count the number of active S-boxes. This provides an upper bound on the probability of the best differential characteristic at a reduced computational cost. However, in order to design very efficient primitives, it may be necessary to evaluate this probability more accurately. This is usually done in a second step, during which one tries to instantiate the truncated differential characteristics with actual values and computes the corresponding probabilities. This step is usually performed with ad-hoc algorithms or with CP or MILP models for generic solvers. In this paper, we present a generic approach for automatically generating these models so as to handle all word-oriented ciphers. Furthermore, the running times needed to solve these models are very competitive with those of all previous dedicated approaches.
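
As a rough illustration of why counting active S-boxes gives an upper bound (a standard back-of-the-envelope argument, not taken from the paper itself): if a truncated characteristic activates a S-boxes and the best differential transition through the S-box has probability at most p_max, then any instantiation of that characteristic has probability at most p_max^a.

```latex
% Standard active-S-box bound (illustrative; the S-box and p_max are generic assumptions,
% not values from the paper). If a truncated characteristic activates a S-boxes, then any
% instantiation X of that characteristic satisfies:
\[
  \Pr[X] \;\le\; p_{\max}^{\,a},
  \qquad\text{e.g. } p_{\max}=2^{-6} \text{ (AES S-box)},\ a=25
  \;\Rightarrow\; \Pr[X] \le 2^{-150}.
\]
```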

Continue reading

Ce que nous savons sur (les) sciences du jeu : Analyse bibliométrique et lexicométrique des articles de la revue (octobre 2013 - mai 2022)

Abstract

On its website, Sciences du jeu describes itself as an “international and interdisciplinary journal whose mission is to develop and promote French-speaking research on play” and “to foster dialogue between social sciences and set off debates on this particular subject”. Created in 2013 following a study day in tribute to the work of Jacques Henriot, its scientific program clearly follows in his footsteps. Indeed, Sciences du jeu defines itself not only as “open to all approaches or methods”, but also as open to “every aspect of play” (including, but not exclusively, video games) and to “research from various fields related to play in a broad sense (objects, structures, situations, experiences, attitudes)”. Ten years after the publication of the journal’s first issue, we may well ask to what extent the articles published to date reflect the approach to play originally promoted by Henriot. What about the references to this author and to the concepts he developed in his work? More generally, which bibliographical references are most frequently used by the journal’s authors? What do they tell us about their conception of play and how they approach it? On which disciplinary approaches and methods are their analyses most often based? What types of games, themes and/or fields are most frequently studied? What are the gray areas and less visible fields? Finally, who are the authors of these papers (in terms of gender and status), where do they come from (in terms of affiliation and disciplinary roots), and how does this influence their perspective on play? To answer these questions, this paper draws on a bibliometric, lexicometric and sociological analysis of a corpus comprising all the articles published in the first seventeen issues of the journal.

Continue reading

Closure and decision properties for higher-dimensional automata

By Amazigh Amrane, Hugo Bazille, Uli Fahrenberg, Krzysztof Ziemiański

2023-12-01

In 20th international colloquium on theoretical aspects of computing (ICTAC’23)

Abstract

Continue reading

Performance evaluation of container management tasks in OS-level virtualization platforms

By Pedro Melo, Lucas Gama, Jamilson Dantas, David Beserra, Jean Araujo

2023-12-01

In 31st IEEE international conference on enabling technologies: Infrastructure for collaborative enterprises (WETICE)

Abstract

Cloud computing is a method for accessing and managing computing resources over the internet, providing flexibility, scalability, and cost-efficiency. It increasingly relies on OS-level virtualization tools such as Docker and Podman, which enable users to create and run containers, widely used for application management. Given its significance in cloud infrastructures, it is crucial to better understand OS-level virtualization performance, especially in tasks related to container management (e.g., creation and destruction). In this paper, we conducted benchmarking tests on Docker and Podman to evaluate their performance in various container management scenarios and with different image sizes. The results revealed that Podman excels at quickly instantiating small containers, while Docker demonstrates superior performance with larger ones.
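
As a minimal sketch of the kind of measurement described here (not the authors' benchmark; the image name, number of runs, and the use of the standard `create`/`rm` CLI subcommands are illustrative assumptions), one could time container creation and removal with both engines like this:

```python
import statistics
import subprocess
import time

def time_create_destroy(engine: str, image: str, runs: int = 10):
    """Time container creation and removal for a given engine ("docker" or "podman")."""
    create_times, remove_times = [], []
    for _ in range(runs):
        t0 = time.perf_counter()
        cid = subprocess.run([engine, "create", image],
                             capture_output=True, text=True, check=True).stdout.strip()
        create_times.append(time.perf_counter() - t0)

        t0 = time.perf_counter()
        subprocess.run([engine, "rm", cid], capture_output=True, check=True)
        remove_times.append(time.perf_counter() - t0)
    return statistics.mean(create_times), statistics.mean(remove_times)

for engine in ("docker", "podman"):
    create, remove = time_create_destroy(engine, "alpine:latest")
    print(f"{engine}: create {create:.3f}s, remove {remove:.3f}s")
```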

Continue reading

An improved spectral extraction method for JWST/NIRSpec fixed slit observations

Abstract

The James Webb Space Telescope is performing beyond our expectations. Its Near Infrared Spectrograph (NIRSpec) provides versatile spectroscopic capabilities in the 0.6-5.3 micrometre wavelength range, where a new window is opening for studying Trans-Neptunian objects in particular. We propose a spectral extraction method for NIRSpec fixed slit observations, with the aim of matching the superior performance of the instrument with the most advanced data processing. We applied this method to the fixed slit dataset of the guaranteed-time observation program 1231, which targets the Plutino 2003 AZ84. We compared the spectra we extracted with those produced by the calibration pipeline.
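
The abstract does not detail the extraction itself; for context, a generic profile-weighted (Horne-style) extraction of a 2-D spectral frame, the family of techniques such pipelines build on, could look like the sketch below. The array layout (cross-dispersion rows by wavelength columns), variable names and synthetic data are our assumptions, not the paper's method.

```python
import numpy as np

def profile_weighted_extract(frame, var, profile):
    """Horne-style optimal extraction: flux = sum(P*D/V) / sum(P^2/V) per wavelength column."""
    p = profile / profile.sum(axis=0, keepdims=True)   # normalised spatial profile
    w = p / var                                        # inverse-variance-weighted profile
    flux = (w * frame).sum(axis=0) / (w * p).sum(axis=0)
    err = np.sqrt(1.0 / (w * p).sum(axis=0))
    return flux, err

# Tiny synthetic example: a Gaussian trace on a 32 x 100 frame with flat noise variance.
rows = np.arange(32)[:, None]
profile = np.exp(-0.5 * ((rows - 16) / 2.0) ** 2) * np.ones((1, 100))
frame = 50.0 * profile / profile.sum(axis=0) + np.random.normal(0, 0.5, (32, 100))
var = np.full((32, 100), 0.25)

flux, err = profile_weighted_extract(frame, var, profile)
print(flux.mean(), err.mean())   # recovered flux per column is close to 50
```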

Continue reading

Where is VALDO? VAscular Lesions Detection and segmentatiOn challenge at MICCAI 2021

Abstract

Imaging markers of cerebral small vessel disease provide valuable information on brain health, but their manual assessment is time-consuming and hampered by substantial intra- and interrater variability. Automated rating may benefit biomedical research, as well as clinical assessment, but the diagnostic reliability of existing algorithms is unknown. Here, we present the results of the VAscular Lesions DetectiOn and Segmentation (Where is VALDO?) challenge that was run as a satellite event at the international conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2021. This challenge aimed to promote the development of methods for automated detection and segmentation of small and sparse imaging markers of cerebral small vessel disease, namely enlarged perivascular spaces (EPVS) (Task 1), cerebral microbleeds (Task 2) and lacunes of presumed vascular origin (Task 3), while leveraging weak and noisy labels. Overall, 12 teams participated in the challenge, proposing solutions for one or more tasks (4 for Task 1-EPVS, 9 for Task 2-Microbleeds and 6 for Task 3-Lacunes). Multi-cohort data were used in both training and evaluation. Results showed a large variability in performance both across teams and across tasks, with promising results notably for Task 1-EPVS and Task 2-Microbleeds, and results that are not yet practically useful for Task 3-Lacunes. The challenge also highlighted performance inconsistency across cases, which may deter use at the individual level while still proving useful at the population level.
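
For readers unfamiliar with how such segmentation outputs are typically scored, a common overlap measure is the Dice coefficient; the sketch below is a generic illustration only, not a reproduction of the challenge's official evaluation protocol, which used task-specific detection and segmentation metrics.

```python
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice overlap between two binary masks (True/1 = lesion voxel)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example with two overlapping 2-D masks.
a = np.zeros((10, 10), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((10, 10), dtype=bool); b[3:7, 3:7] = True
print(f"Dice = {dice(a, b):.3f}")   # 0.5625 for this toy case
```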

Continue reading

Bridging human concepts and computer vision for explainable face verification

By Miriam Doh, Caroline Mazini-Rodrigues, Nicolas Boutry, Laurent Najman, Matei Mancas, Hugues Bersini

2023-10-10

In 2nd international workshop on emerging ethical aspects of AI (BEWARE-23)

Abstract

With Artificial Intelligence (AI) influencing the decision-making process of sensitive applications such as face verification, it is fundamental to ensure the transparency, fairness, and accountability of decisions. Although Explainable Artificial Intelligence (XAI) techniques exist to clarify AI decisions, it is equally important to make these decisions interpretable to humans. In this paper, we present an approach that combines computer and human vision to increase the interpretability of the explanations produced for a face verification algorithm. In particular, we take inspiration from the human perceptual process to understand how machines perceive the human-semantic areas of a face during face comparison tasks. We use MediaPipe, which provides a segmentation technique that identifies distinct human-semantic facial regions, enabling the analysis of the machine's perception. Additionally, we adapted two model-agnostic algorithms to provide human-interpretable insights into the decision-making process.
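
As an illustration of what a model-agnostic, region-level explanation can look like, here is a minimal occlusion-style sketch under our own assumptions: a hypothetical `embed` function returning face embeddings and precomputed boolean `region_masks` (e.g. derived from a MediaPipe segmentation). This is not one of the two algorithms adapted in the paper, only a simple baseline of the same flavour.

```python
import numpy as np

def region_importance(embed, img_a, img_b, region_masks):
    """Occlusion-style attribution over semantic face regions.

    embed(img) -> 1-D embedding vector; region_masks: {"eyes": HxW bool mask, ...}.
    Both are hypothetical interfaces used for illustration.
    """
    def cosine(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    base = cosine(embed(img_a), embed(img_b))   # similarity on the unmodified pair
    scores = {}
    for name, mask in region_masks.items():
        occluded = img_a.copy()
        occluded[mask] = 0                      # blank out one semantic region
        scores[name] = base - cosine(embed(occluded), embed(img_b))
    return scores                               # larger drop = region matters more
```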

Continue reading

Refinement of a ligand activity and representation of topological pharmacophores in a colored network

By Maroua Lejmi, Damien Geslin, Bertrand Cuissart, Ilef Ben Slima, Nida Meddouri, Ronan Bureau, Alban Lepailleur, Amel Borgi, Jean-Luc Lamotte

2023-10-01

In Proceedings of the 11èmes journées de la société française de chémoinformatique

Abstract

Structure-activity relationships are a critical aspect of drug design. They enable us to examine ligand interactions and performance towards specific targets, and then to design effective drugs for treating diseases or improving existing medical therapies. In this context, we specifically study the activity of ligands towards kinases using the BCR-ABL dataset. This work introduces a refinement method for the activity of molecules. Instead of considering affinity as a binary activity, a molecule being either active or inactive, the compounds were partitioned into four classes according to their activity: very active, moderately active, slightly active, and inactive. This activity is later used to evaluate molecular descriptors called topological pharmacophores [1]. These pharmacophores provide essential information by representing the key structural features of a molecule. Their quality is determined by measuring their “growth rate”, which corresponds to the ratio of active molecules over inactive ones among the molecules supported by the pharmacophore. In our work, the calculation of the growth rate is based on the classes of activity that we have created. Consequently, we obtain three measurements of the growth rate, each one related to a class of activity. In addition, we propose to convert this new information on the quality of the pharmacophores into a visual representation called “The Pharmacophore Network” [2]. The latter is a graph whose nodes represent the pharmacophores and whose edges represent the graph-edit distance that separates them. Our goal was to structure the pharmacophore space more finely and to be able to visually detect interesting areas that can be explored. For this purpose, we integrated colors into the Pharmacophore Network, where each color refers to a class of activity.
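
Taking the abstract's definition literally (one growth-rate value per activity class, computed among the molecules supported by a pharmacophore), a toy computation might look like the sketch below; the class names and the exact normalisation are assumptions on our part, not the paper's implementation.

```python
from collections import Counter

def growth_rates(supported_labels,
                 active_classes=("very active", "moderately active", "slightly active")):
    """One growth rate per activity class: ratio of class-c molecules to inactive
    molecules among the molecules supported by the pharmacophore (a literal reading
    of the abstract; the actual measure may be normalised differently)."""
    counts = Counter(supported_labels)
    inactive = counts["inactive"]
    return {c: counts[c] / inactive if inactive else float("inf")
            for c in active_classes}

print(growth_rates(["very active", "very active", "slightly active", "inactive"]))
# {'very active': 2.0, 'moderately active': 0.0, 'slightly active': 1.0}
```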

Continue reading

How to compute the convex hull of a binary shape? A real-time algorithm to compute the convex hull of a binary shape

By Jonathan Fabrizio

2023-09-13

In Journal of Real-Time Image Processing

Abstract

In this article, we present an algorithm to compute the convex hull of a binary shape. Efficient algorithms to compute the convex hull of a set of points have been known for a long time. For a binary shape, the common practice is to rely on one of them: all pixels of the shape are first listed, the convex hull of this list of points is then computed, and the result is finally rasterized, as is done, for example, in the well-known scikit-image library. To compute the convex hull of an arbitrary set of points, the points of the list that lie on the outline of the convex hull must be selected (for simplicity, we call these points “extrema”). Finding them for an arbitrary set of points requires browsing all the points, but this is not necessary in the particular case of a binary shape: in this specific situation, the extrema necessarily belong to the inner boundary of the shape. Browsing all the pixels is therefore a waste of time, as most of them can be discarded when searching for these extrema. Based on this analysis, we propose a new method to compute the convex hull dedicated to binary shapes. This method browses as few pixels as possible to select a small subset of boundary pixels, and then deduces the convex hull from this subset only. As the size of the subset is very small, the convex hull is computed in real time. We compare our approach with commonly used methods and library functions to show that it is faster. This comparison shows that, for very small shapes, the difference is acceptable, but when the area of the shape grows, the difference becomes significant. This leads us to conclude that replacing the current convex-hull functions for binary shapes in frequently used libraries with our algorithm would yield a great improvement.
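
To make the key observation concrete (the convex hull of a binary shape only depends on its boundary pixels), here is a small sketch that extracts boundary pixels and computes the hull of that subset with SciPy. It only illustrates the idea; unlike the paper's algorithm, this naive version still scans every pixel to find the boundary, and the shape used is a hypothetical filled disc.

```python
import numpy as np
from scipy.spatial import ConvexHull

def boundary_pixels(mask: np.ndarray) -> np.ndarray:
    """Coordinates of shape pixels with at least one 4-connected background neighbour
    (found here by scanning the whole image, unlike the paper's method)."""
    padded = np.pad(mask.astype(bool), 1)
    core = padded[1:-1, 1:-1]
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return np.argwhere(core & ~interior)

# Hypothetical example: a filled disc as the binary shape.
yy, xx = np.ogrid[:200, :200]
mask = (yy - 100) ** 2 + (xx - 100) ** 2 < 80 ** 2

pts = boundary_pixels(mask)               # far fewer points than mask.sum()
hull = ConvexHull(pts)                     # hull of the boundary == hull of the shape
print(len(pts), "boundary pixels out of", int(mask.sum()), "shape pixels,",
      len(hull.vertices), "hull vertices")
```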

Continue reading