Publications

Modular and efficient divide-and-conquer SAT solver on top of the Painless framework

By Ludovic Le Frioux, Souheib Baarir, Julien Sopena, Fabrice Kordon

2019-02-13

In Proceedings of the 25th international conference on tools and algorithms for the construction and analysis of systems (TACAS’19)

Abstract

Over the last decade, parallel SATisfiability solving has been widely studied from both theoretical and practical aspects. There are two main approaches. First, divide-and-conquer (D&C) splits the search space, each solver being in charge of a particular subspace. The second one, portfolio, launches multiple solvers in parallel, and the first to find a solution ends the computation. However, although D&C-based approaches seem to be the natural way to work in parallel, portfolio ones experimentally provide better performance. An explanation resides in the difficulty of using the native formulation of the SAT problem (i.e., the CNF form) to compute an a priori good search space partitioning (i.e., one where all parallel solvers process their subspaces in comparable computational time). To avoid this, dynamic load balancing of the search subspaces is implemented. Unfortunately, it is difficult to compare load balancing strategies, since state-of-the-art SAT solvers that appropriately deal with these aspects are hardly adaptable to strategies other than the ones they have been designed for. This paper aims at overcoming this problem by proposing an implementation and evaluation of different types of divide-and-conquer inspired by the literature. These rely on the Painless framework, which provides concurrent facilities to elaborate such parallel SAT solvers. Comparisons of the various strategies are then discussed.
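
To make the division principle concrete, here is a minimal sketch in Python (purely illustrative; Painless itself is a C++ framework, and its API is not shown here) of guiding-path-style splitting, where a pivot variable is fixed to each polarity so that two workers can search complementary subspaces:

    # Clauses use a DIMACS-like encoding: a clause is a list of non-zero
    # integers, and a negative integer denotes a negated variable.
    def split(cnf, v):
        # Fix the pivot variable v to each polarity with a unit clause;
        # each subproblem covers one half of the search space.
        return cnf + [[v]], cnf + [[-v]]

    # (x1 or x2) and (not x1 or x3), split on x1:
    cnf = [[1, 2], [-1, 3]]
    sub_pos, sub_neg = split(cnf, 1)
    # A D&C solver hands sub_pos and sub_neg to two workers and, with
    # dynamic load balancing, re-splits whichever subspace lags behind.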

Continue reading

How to make $n$-D plain maps Alexandrov-well-composed in a self-dual way

By Nicolas Boutry, Thierry Géraud, Laurent Najman

2019-02-04

In Journal of Mathematical Imaging and Vision

Abstract

In 2013, Najman and Géraud proved that by working on a well-composed discrete representation of a gray-level image, we can compute what is called its tree of shapes, a hierarchical representation of the shapes in this image. This way, we can proceed to morphological filtering and to image segmentation. However, the authors did not provide such a representation for the non-cubical case. We propose in this paper a way to compute a well-composed representation of any gray-level image defined on a discrete surface, which is a more general framework than the usual cubical grid. Furthermore, the proposed representation is self-dual in the sense that it treats bright and dark components in the image the same way. This paper can be seen as an extension to gray-level images of the works of Daragon et al. on discrete surfaces.
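
For intuition, in the 2D binary special case an image is well-composed exactly when no 2x2 block exhibits the diagonal "critical configuration". The sketch below (Python with NumPy) checks this property; it only illustrates this special case, not the paper's n-D self-dual construction on gray-level plain maps:

    import numpy as np

    # The two forbidden 2x2 configurations for 2D well-composedness.
    CRITICAL = ([[1, 0], [0, 1]], [[0, 1], [1, 0]])

    def is_well_composed_2d(img):
        # img: 2D binary NumPy array; scan every 2x2 block.
        h, w = img.shape
        return not any(img[i:i+2, j:j+2].tolist() in CRITICAL
                       for i in range(h - 1) for j in range(w - 1))

    print(is_well_composed_2d(np.array([[1, 0], [0, 1]])))  # False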

Continue reading

High throughput automated detection of axial malformations in Medaka embryo

Abstract

Fish embryo models are widely used as screening tools to assess the efficacy and/or toxicity of chemicals. This assessment involves the analysis of embryo morphological abnormalities. In this article, we propose a multi-scale pipeline to allow automated classification of fish embryos (Medaka: Oryzias latipes) based on the presence or absence of spine malformations. The proposed pipeline relies on the acquisition of fish embryo 2D images, on feature extraction based on mathematical morphology operators, and on machine learning classification. After image acquisition, segmentation tools are used to detect the embryo before analysing several morphological features. An approach based on machine learning is then applied to these features to automatically classify embryos according to the presence of axial malformations. We built and validated our learning model on 1459 images with a 10-fold cross-validation, by comparison with the gold standard of 3D observations performed under a microscope by a trained operator. Our pipeline results in correct classification in 85% of the cases included in the database. This percentage is similar to the percentage of success of a trained human operator working on 2D images. The key benefit of our approach is the low computational cost of our image analysis pipeline, which guarantees optimal throughput analysis.
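
As a rough illustration of the validation protocol, the snippet below (Python with scikit-learn) runs a 10-fold cross-validation on pre-extracted features; the file names and the choice of a random forest classifier are assumptions for the sketch, not the paper's exact setup:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    X = np.load("embryo_features.npy")  # hypothetical: one feature row per image
    y = np.load("embryo_labels.npy")    # hypothetical: 1 = axial malformation
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(clf, X, y, cv=10)  # 10-fold cross-validation
    print(scores.mean())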

Continue reading

Finite automata theory based optimization of conditional variable binding

By Jim Newton, Didier Verna

2019-01-14

In ELS 2019, the 12th European Lisp Symposium

Abstract

We present an efficient and highly optimized implementation of destructuring-case in Common Lisp. This macro allows the selection of the most appropriate destructuring lambda list among several, based on the structure and types of the data at run-time, and thereafter dispatches to the corresponding code branch. We examine an optimization technique based on finite automata theory, applied to conditional variable binding and execution, and to type-based pattern matching on Common Lisp sequences. A risk of inefficiency associated with a naive implementation of destructuring-case is that the candidate expression being examined may be traversed multiple times: once for each clause whose format fails to match, and finally once for the successful match. We have implemented destructuring-case in such a way as to avoid multiple traversals of the candidate expression. This article explains how this optimization has been implemented.
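
The single-traversal idea can be sketched outside of Lisp. Below, a hand-built DFA over element types (Python; the paper's macro derives such automata automatically from the destructuring lambda lists) dispatches among two clauses while inspecting each element of the candidate sequence exactly once:

    # Clause A matches (int, int); clause B matches (str, str).
    # States: 0 is the start; 2 accepts clause A; 4 accepts clause B.
    DFA = {0: {int: 1, str: 3}, 1: {int: 2}, 3: {str: 4}}
    ACCEPT = {2: "clause-a", 4: "clause-b"}

    def dispatch(seq):
        state = 0
        for x in seq:                       # each element inspected once
            state = DFA.get(state, {}).get(type(x))
            if state is None:
                return None                 # no clause can match
        return ACCEPT.get(state)            # clause to execute, if any

    print(dispatch([1, 2]))      # clause-a
    print(dispatch(["a", "b"]))  # clause-b

A naive implementation would instead try clause A's pattern on the whole sequence, fail, and start over with clause B.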

Continue reading

Motion compensation in digital holography for retinal imaging

By Julie Rivet, Guillaume Tochon, Serge Meimon, Michel Paques, Michael Atlan, Thierry Géraud

2018-12-19

In Proceedings of the IEEE international symposium on biomedical imaging (ISBI)

Abstract

The measurement of medical images can be hindered by blur and distortions caused by physiological motion. Retinal imaging, especially, is greatly affected by sharp movements of the eye. Stabilization methods have been developed and applied to state-of-the-art retinal imaging modalities; here we intend to adapt them to coherent light detection schemes. In this paper, we experimentally demonstrate cross-correlation-based lateral and axial motion compensation in laser Doppler imaging and in optical coherence tomography by digital holography. Our methods improve lateral and axial image resolution in these innovative instruments and allow better visualization during motion.
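
A bare-bones version of the lateral part of the technique looks as follows (Python with NumPy; a sketch of generic cross-correlation registration, not the paper's full processing chain, which also handles axial motion in holographic data):

    import numpy as np

    def estimate_shift(ref, img):
        # Cross-correlate via FFT; the correlation peak gives the translation.
        cc = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img)))
        peak = np.unravel_index(np.argmax(np.abs(cc)), cc.shape)
        # Map peak coordinates to signed shifts.
        return tuple(p - n if p > n // 2 else p
                     for p, n in zip(peak, cc.shape))

    def compensate(frames):
        # Register every frame onto the first one (lateral motion only).
        ref = frames[0]
        return [np.roll(f, estimate_shift(ref, f), axis=(0, 1))
                for f in frames]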

Continue reading

Taking into account inclusion and adjacency information in morphological hierarchical representations, with application to the extraction of text in natural images and videos

Abstract

The inclusion and adjacency relationships between image regions usually carry contextual information. The latter is widely used, since it tells how regions are arranged in images. The former is usually not taken into account, although it parallels the object-background relationship. The mathematical morphology framework provides several hierarchical image representations. They include the Tree of Shapes (ToS), which encodes the inclusion of level lines, and the hierarchies of segmentation (e.g., alpha-tree, BPT), which are useful in the analysis of the adjacency relationship. In this work, we take advantage of both inclusion and adjacency information in these representations for computer vision applications. We introduce the spatial alignment graph w.r.t. inclusion, which is constructed by adding a new adjacency relationship to the nodes of the ToS. In a simple ToS, such as our Tree of Shapes of Laplacian sign, which encodes the inclusion of morphological Laplacian 0-crossings, this graph reduces to a disconnected graph where each connected component is a semantic group. In other cases, e.g., the classic ToS, the spatial alignment graph is more complex. To address this issue, we expand the shape-spaces morphology framework. Our expansion has two primary results: 1) it allows the manipulation of any graph of shapes; 2) it allows any tree filtering strategy proposed by the connected operators framework. With this expansion, the spatial alignment graph can be analyzed with the help of an alpha-tree. We demonstrate our method on the task of text detection. The experimental results show the efficiency and effectiveness of our methods, which are appealing for mobile applications.
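
To give a flavor of the grouping step, here is a toy version in Python; the bounding-box input, the alignment test, and the union-find grouping are simplifying assumptions, whereas the thesis builds the graph on ToS nodes and analyzes it through shape-spaces morphology:

    from itertools import combinations

    def aligned(b1, b2, tol=0.5):
        # Toy alignment test: enough vertical overlap between two shapes,
        # each given by its (top, bottom) extent.
        overlap = min(b1[1], b2[1]) - max(b1[0], b2[0])
        return overlap > tol * min(b1[1] - b1[0], b2[1] - b2[0])

    def semantic_groups(boxes):
        # Union-find over alignment edges; each connected component of the
        # graph is one candidate semantic group (e.g., a text line).
        parent = list(range(len(boxes)))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for a, b in combinations(range(len(boxes)), 2):
            if aligned(boxes[a], boxes[b]):
                parent[find(a)] = find(b)
        groups = {}
        for i in range(len(boxes)):
            groups.setdefault(find(i), []).append(i)
        return list(groups.values())

    # Three vertically aligned shapes and one outlier.
    print(semantic_groups([(0, 10), (1, 11), (2, 10), (40, 50)]))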

Continue reading

High throughput automated detection of axial malformations in fish embryo

Abstract

Fish embryo models are widely used as screening tools to assess the efficacy and/or toxicity of chemicals. This assessment involves analysing embryo morphological abnormalities. In this article, we propose a multi-scale pipeline to allow automated classification of fish embryos (Medaka: Oryzias latipes) based on the presence or absence of spine malformations. The proposed pipeline relies on the acquisition of fish embryo 2D images, on feature extraction based on mathematical morphology operators, and on machine learning classification. After image acquisition, segmentation tools are used to focus on the embryo before analysing several morphological features. An approach based on machine learning is then applied to these features to automatically classify embryos according to the detection of axial malformations. We built and validated our learning model on 1,459 images with a 10-fold cross-validation, by comparison with the gold standard of 3D observations performed under a microscope by a trained operator. Our pipeline results in correct classification in 85% of the cases included in the database. This percentage is similar to the percentage of success of a trained human operator working on 2D images. Indeed, most of the errors are due to the inherent limitations of 2D images compared to 3D observations. The key benefit of our approach is the low computational cost of our image analysis pipeline, which guarantees optimal throughput analysis.

Continue reading

Intervertebral disc segmentation using mathematical morphology—A CNN-free approach

By Edwin Carlinet, Thierry Géraud

2018-11-26

In Proceedings of the 5th MICCAI workshop & challenge on computational methods and clinical applications for spine imaging (CSI)

Abstract

In the context of the challenge of “automatic InterVertebral Disc (IVD) localization and segmentation from 3D multi-modality MR images” that took place at MICCAI 2018, we have proposed a segmentation method based on simple image processing operators. Most of these operators come from the mathematical morphology framework. Driven by some prior knowledge on IVDs (basic information about their shape and the distance between them) and on their contrast in the different modalities, we were able to segment correctly almost every IVD. The most interesting feature of our method is to rely on the morphological structure called the Tree of Shapes, which is another way to represent the image contents. This structure arranges all the connected components of an image obtained by thresholding into a tree, where each node represents a particular region. Such a structure is actually powerful and versatile for pattern recognition tasks in medical imaging.
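
The underlying structure can be pictured with a simpler cousin, the threshold-component tree. The sketch below (Python with SciPy) nests the connected components of successive thresholds into a tree; it is a max-tree-style illustration, not the self-dual Tree of Shapes the paper actually uses, which also encodes the inclusion of holes:

    import numpy as np
    from scipy import ndimage as ndi

    def component_tree(img, levels):
        # levels are sorted increasingly, so each upper threshold set is
        # included in the previous (lower) one.
        nodes, parents = [], []
        prev_lbl, prev_map = None, {}
        for t in sorted(levels):
            lbl, n = ndi.label(img >= t)
            cur_map = {}
            for i in range(1, n + 1):
                ys, xs = np.nonzero(lbl == i)
                cur_map[i] = len(nodes)
                nodes.append(t)  # the node's threshold level
                # Parent: the component one level below containing this one.
                parents.append(None if prev_lbl is None
                               else prev_map[prev_lbl[ys[0], xs[0]]])
            prev_lbl, prev_map = lbl, cur_map
        return nodes, parents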

Continue reading

Segmentation of gliomas and prediction of patient overall survival: A simple and fast procedure

By Élodie Puybareau, Guillaume Tochon, Joseph Chazalon, Jonathan Fabrizio

2018-11-05

In Proceedings of the workshop on brain lesions (BrainLes), in conjunction with MICCAI

Abstract

In this paper, we propose a fast automatic method that segments gliomas without any manual assistance, using a fully convolutional network (FCN) and transfer learning. From this segmentation, we predict the patient's overall survival using only the results of the segmentation and a home-made atlas. The FCN is the base network of VGG-16, pretrained on ImageNet for natural image classification and fine-tuned with the training dataset of the MICCAI 2018 BraTS Challenge. It relies on the “pseudo-3D” method published at ICIP 2017, which allows for segmenting objects from 2D color images which contain 3D information of MRI volumes. For each nth slice of the volume to segment, we consider three images, corresponding to the (n-1)th, nth, and (n+1)th slices of the original volume. These three gray-level 2D images are assembled to form a 2D RGB color image (one image per channel). This image is the input of the FCN to obtain a 2D segmentation of the nth slice. We process all slices, then stack the results to form the 3D output segmentation. With such a technique, the segmentation of a 3D volume takes only a few seconds. The prediction is based on Random Forests, and has the advantage of not being dependent on the acquisition modality, making it robust to inter-database variability.
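
The “pseudo-3D” slice assembly is easy to sketch (Python with NumPy; the channel ordering and the clamping at the volume borders are assumptions of this sketch):

    import numpy as np

    def pseudo_3d_slices(volume):
        # volume: (depth, H, W) gray-level array. For each slice n, stack
        # slices n-1, n, and n+1 as the three channels of a 2D RGB image;
        # the first and last slices are clamped at the volume borders.
        d = volume.shape[0]
        for n in range(d):
            yield np.stack([volume[max(n - 1, 0)],
                            volume[n],
                            volume[min(n + 1, d - 1)]], axis=-1)

    # Each yielded (H, W, 3) image is fed to the FCN, which returns the 2D
    # segmentation of slice n; stacking all outputs gives the 3D result.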

Continue reading

Representing and computing with types in dynamically typed languages

Abstract

In this report, we present code generation techniques related to run-time type checking of heterogeneous sequences. Traditional regular expressions can be used to recognize well-defined sets of character strings called rational languages, or sometimes regular languages. Newton et al. present an extension whereby a dynamic programming language may recognize a well-defined set of heterogeneous sequences, such as lists and vectors. As with the analogous string matching regular expression theory, matching these regular type expressions can also be achieved by using a finite state machine (a deterministic finite automaton, DFA). Constructing such a DFA can be time-consuming. The approach we chose uses meta-programming to intervene at compile-time, generating efficient functions specific to each DFA, and allowing the compiler to further optimize the functions if possible. The functions are made available for use at run-time. Without this use of meta-programming, the program might otherwise be forced to construct the DFA at run-time. The excessively high cost of such a construction would likely far outweigh the time needed to match a string against the expression. Our technique involves hooking into the Common Lisp type system via the DEFTYPE macro. The first time the compiler encounters a relevant type specifier, the appropriate DFA is created, which may be an $\Omega(2^n)$ operation, from which specific low-level code is generated to match that specific expression. Thereafter, when the type specifier is encountered again, the same pre-generated function can be used. The generated code has $\Theta(n)$ complexity at run-time. A complication of this approach, which we explain in this report, is that to build the DFA we must calculate a disjoint type decomposition, which is time-consuming and also leads to sub-optimal use of TYPECASE in machine-generated code. To handle this complication, we use our own macro OPTIMIZED-TYPECASE in our machine-generated code. Uses of this macro are also implicitly expanded at compile time. Our macro expansion uses BDDs (Binary Decision Diagrams) to optimize the OPTIMIZED-TYPECASE into low-level code, maintaining the TYPECASE semantics but eliminating redundant type checks. In the report we also describe an extension of BDDs to accommodate subtyping in the Common Lisp type system, as well as an in-depth analysis of worst-case sizes of BDDs.
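
The compile-once-then-reuse pattern at the heart of the report can be imitated in a dynamic language as follows (Python; the actual implementation hooks Common Lisp's DEFTYPE and generates code at compile-time, and this sketch only supports plain type sequences, none of the repetition or union operators whose determinization causes the exponential worst case):

    from functools import lru_cache

    @lru_cache(maxsize=None)  # the matcher is built once per expression
    def compile_rte(rte):
        # rte: a tuple of types, e.g. (int, str). A real implementation
        # would determinize an automaton over a richer expression language.
        transitions = {i: {t: i + 1} for i, t in enumerate(rte)}
        accepting = len(rte)
        def match(seq):
            state = 0
            for x in seq:  # a single linear scan: Theta(n) at run-time
                state = transitions.get(state, {}).get(type(x))
                if state is None:
                    return False
            return state == accepting
        return match

    matches = compile_rte((int, str))
    print(matches([3, "a"]))  # True
    print(matches([3, 4]))    # False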

Continue reading