Publications

Implementing Baker’s SUBTYPEP decision procedure

By Léo Valais, Jim Newton, Didier Verna

2019-04-01

In ELS 2019, the 12th European Lisp Symposium

Abstract

We present here our partial implementation of Baker’s decision procedure for SUBTYPEP. In his article “A Decision Procedure for Common Lisp’s SUBTYPEP Predicate”, he claims to provide implementation guidelines for obtaining a SUBTYPEP that is both more accurate than and as efficient as the average implementation. However, he did not provide any serious implementation, and his description is sometimes obscure. In this paper, we present our implementation of part of his procedure, supporting only primitive types, CLOS classes, member, range, and logical type specifiers. We explain, in our own words, our understanding of his procedure, with far more detail and examples than in Baker’s article. We thereby clarify many parts of his description and fill in some of its gaps and omissions. We also argue both for and against some of his choices, and present our alternative solutions. We further provide some proofs that are missing from his article, along with some early efficiency results. We have not released any code yet, but we plan to open-source it as soon as it is presentable.
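The flavor of the problem can be illustrated on range type specifiers alone. The sketch below is purely illustrative (Python rather than Lisp, and the names are ours, not Baker’s): it decides containment for integer range specifiers, one of the specifier families the implementation supports.

```python
# Illustrative sketch (not Baker's procedure): deciding SUBTYPEP for
# integer range specifiers such as (integer 0 10). A bound of None
# plays the role of Common Lisp's * (unbounded).

def range_subtypep(sub, sup):
    """True iff the integer range `sub` is contained in `sup`."""
    sub_lo, sub_hi = sub
    sup_lo, sup_hi = sup
    lo_ok = sup_lo is None or (sub_lo is not None and sub_lo >= sup_lo)
    hi_ok = sup_hi is None or (sub_hi is not None and sub_hi <= sup_hi)
    return lo_ok and hi_ok

# (integer 0 10) is a subtype of (integer 0 100):
assert range_subtypep((0, 10), (0, 100))
# (integer -5 10) is not:
assert not range_subtypep((-5, 10), (0, 100))
# Any range is a subtype of (integer * *):
assert range_subtypep((0, 10), (None, None))
```

The difficulty Baker addresses is that real type specifiers mix such families with logical connectives, where a case-by-case check like this no longer suffices.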

Continue reading

Model checking with generalized Rabin and Fin-less automata

By Vincent Bloemen, Alexandre Duret-Lutz, Jaco van de Pol

2019-04-01

In International Journal on Software Tools for Technology Transfer

Abstract

In the automata-theoretic approach to explicit-state LTL model checking, the synchronized product of the model and an automaton that represents the negated formula is checked for emptiness. In practice, a (transition-based generalized) Büchi automaton (TGBA) is used for this procedure. This paper investigates whether using a more general form of acceptance, namely a transition-based generalized Rabin automaton (TGRA), improves the model checking procedure. TGRAs can have significantly fewer states than TGBAs; however, the corresponding emptiness checking procedure is more involved. With recent advances in probabilistic model checking and LTL-to-TGRA translators, it is only natural to ask whether checking a TGRA directly is more advantageous in practice. We designed a multi-core TGRA checking algorithm and performed experiments on a subset of the models and formulas from the 2015 Model Checking Contest, and generated LTL formulas for models from the BEEM database. While we found little to no improvement by checking TGRAs directly, we show how various aspects of a TGRA’s structure influence the model checking performance. In this paper, we also introduce a Fin-less acceptance condition, which is a disjunction of TGBAs. We show how to convert TGRAs into automata with Fin-less acceptance and how a TGBA emptiness procedure can be extended to check Fin-less automata.
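To fix intuitions about transition-based generalized acceptance: with Inf-only (TGBA) acceptance, a strongly connected component is accepting iff its transitions jointly visit every acceptance set. A toy sketch (our own minimal encoding, not the paper’s multi-core algorithm):

```python
# Toy sketch of transition-based generalized Büchi (Inf-only) acceptance:
# an SCC is accepting iff the marks on its transitions cover all
# acceptance sets {0, ..., n_sets-1}. The edge format (src, dst, marks)
# is our own convention.

def scc_accepting(scc_edges, n_sets):
    seen = set()
    for _src, _dst, marks in scc_edges:
        seen.update(marks)
    return seen >= set(range(n_sets))

# A two-state cycle whose transitions together see marks {0, 1}:
cycle = [(0, 1, {0}), (1, 0, {1})]
assert scc_accepting(cycle, 2)
assert not scc_accepting(cycle, 3)   # mark 2 is never visited
```

With Rabin pairs, which combine Fin and Inf conditions, such a per-SCC check no longer suffices; that is what makes TGRA emptiness more involved and motivates the Fin-less conversion.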

Continue reading

Parallelizing quickref

By Didier Verna

2019-04-01

In ELS 2019, the 12th European Lisp Symposium

Abstract

Quickref is a global documentation project for Common Lisp software. It builds a website containing reference manuals for Quicklisp libraries. Each library is first compiled, loaded, and introspected. From the collected information, a Texinfo file is generated, which is then processed into an HTML one. Because of the large number of libraries in Quicklisp, doing this sequentially may require several hours of processing. We report on our experiments in parallelizing Quickref. Experimental data on the morphology of Quicklisp libraries has been collected. Based on this data, we are able to propose a number of parallelization schemes that reduce the total processing time by a factor of 3.8 to 4.5, depending on the exact situation.
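As a rough illustration of the idea (hypothetical function names; Quickref itself is Common Lisp, this sketch is Python): once the per-library compile/load/introspect/generate pipeline is isolated, building the manuals is naturally parallel across libraries.

```python
# Hypothetical sketch of the parallel scheme: each library's manual is
# produced independently, so per-library jobs can be dispatched to a
# pool of workers. build_manual is a stand-in, not Quickref's API.
from concurrent.futures import ThreadPoolExecutor

def build_manual(library):
    """Stand-in for compile + load + introspect + Texinfo + HTML."""
    return f"{library}.html"

libraries = ["alexandria", "bordeaux-threads", "cl-ppcre"]
with ThreadPoolExecutor(max_workers=4) as pool:
    manuals = list(pool.map(build_manual, libraries))
```

The interesting part, studied in the paper, is which scheme to pick (library-level vs. stage-level parallelism) given how unevenly sized Quicklisp libraries are.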

Continue reading

An equivalence relation between morphological dynamics and persistent homology in 1D

By Nicolas Boutry, Thierry Géraud, Laurent Najman

2019-03-13

In Mathematical Morphology and Its Application to Signal and Image Processing – Proceedings of the 14th International Symposium on Mathematical Morphology (ISMM)

Abstract

We state in this paper a strong relation between Mathematical Morphology and Discrete Morse Theory when working with 1D Morse functions. Specifically, in Mathematical Morphology, a classic way to extract robust markers for segmentation purposes is to use the dynamics. On the other hand, in Discrete Morse Theory, a well-known tool to simplify the Morse-Smale complexes representing the topological information of a Morse function is the persistence. We show that pairing by persistence is equivalent to pairing by dynamics. Furthermore, we prove the self-duality and injectivity of these pairings.
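The equivalence can be probed computationally in the 1D discrete setting. The sketch below (our own illustration, assuming distinct sample values) computes 0-dimensional persistence pairs of a 1D function by union-find over the sublevel filtration; the persistence death − birth of each pair is exactly the dynamics of the corresponding local minimum.

```python
# Illustrative sketch: 0-dimensional persistence pairs of a 1D function
# via its sublevel filtration. Values are assumed distinct. Each finite
# pair (birth, death) matches a local minimum and its dynamics.

def persistence_pairs(f):
    n = len(f)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    active = [False] * n
    birth = {}            # component root -> birth value (its minimum)
    pairs = []
    for i in sorted(range(n), key=lambda k: f[k]):
        active[i] = True
        roots = {find(j) for j in (i - 1, i + 1) if 0 <= j < n and active[j]}
        if not roots:
            birth[i] = f[i]               # a local minimum is born
        else:
            survivor = min(roots, key=lambda r: birth[r])  # elder rule
            for r in roots:
                if r != survivor:
                    pairs.append((birth[r], f[i]))  # younger component dies
                parent[r] = survivor
            parent[i] = survivor
    return sorted(pairs)                  # global minimum never dies

# The minimum of value 1 merges at the saddle of value 2: dynamics 1.
assert persistence_pairs([0, 2, 1, 3]) == [(1, 2)]
```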

Continue reading

Constructing a braid of partitions from hierarchies of partitions

By Guillaume Tochon, Mauro Dalla Mura, Jocelyn Chanussot

2019-03-13

In Mathematical Morphology and Its Application to Signal and Image Processing – Proceedings of the 14th International Symposium on Mathematical Morphology (ISMM)

Abstract

Braids of partitions have been introduced in a theoretical framework as a generalization of hierarchies of partitions, but practical guidelines for deriving such structures remained an open question. In a previous work, we proposed a methodology to build a braid of partitions by experimentally composing cuts extracted from two hierarchies of partitions, notably paving the way for the hierarchical representation of multimodal images. However, we did not provide a formal proof that our proposed methodology yields a braid structure. We remedy this in the present paper and give brief insight into the structural properties of the resulting braid of partitions.

Continue reading

Introducing multivariate connected openings and closings

By Edwin Carlinet, Thierry Géraud

2019-03-13

In Mathematical Morphology and Its Application to Signal and Image Processing – Proceedings of the 14th International Symposium on Mathematical Morphology (ISMM)

Abstract

Component trees provide high-level, hierarchical, and contrast-invariant representations of images, suitable for many image processing tasks. Yet their definition is ill-formed on multivariate data, e.g., color images, multi-modality images, multi-band images, and so on. Common workarounds, such as marginal processing or imposing a total order on data, are not satisfactory and yield many problems, such as artifacts or loss of invariances. In this paper, inspired by the way the Multivariate Tree of Shapes (MToS) has been defined, we propose a definition for a multivariate min-tree or max-tree. We do not impose an arbitrary total ordering on values; we use only the inclusion relationship between components. As a straightforward consequence, we obtain a new class of multivariate connected openings and closings.
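The inclusion relationship underlying min/max-trees is easy to see in the scalar 1D case (a toy illustration of ours; the paper’s contribution is precisely how to keep this structure without a total order on multivariate values):

```python
# Toy scalar illustration of max-tree nodes: the connected components of
# the upper threshold sets {i : f[i] >= t} nest as t increases, and this
# inclusion relation alone is what structures the tree.

def upper_components(f, t):
    """Connected components of {i : f[i] >= t}, as (start, end) ranges."""
    comps, start = [], None
    for i, v in enumerate(f + [float("-inf")]):   # sentinel closes a run
        if v >= t and start is None:
            start = i
        elif v < t and start is not None:
            comps.append((start, i - 1))
            start = None
    return comps

f = [1, 3, 2, 0, 2]
assert upper_components(f, 1) == [(0, 2), (4, 4)]
assert upper_components(f, 2) == [(1, 2), (4, 4)]   # nested in the above
assert upper_components(f, 3) == [(1, 1)]
```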

Continue reading

Spherical fluorescent particle segmentation and tracking in 3D confocal microscopy

By Élodie Puybareau, Edwin Carlinet, Alessandro Benfenati, Hugues Talbot

2019-03-13

In Mathematical Morphology and Its Application to Signal and Image Processing – Proceedings of the 14th International Symposium on Mathematical Morphology (ISMM)

Abstract

Spherical fluorescent particles are micrometer-scale spherical beads used in various areas of physics, chemistry, or biology as markers associated with local physical media. They are useful, for example, in fluid dynamics to characterize flows, diffusion coefficients, viscosity, or temperature; they are used in cell dynamics to estimate mechanical strain and stress at the micrometer scale. In order to obtain these physical measurements, tracking these particles is necessary. Numerous approaches and existing packages, both open-source and proprietary, are available to achieve tracking with a high degree of precision in 2D. However, little such software is available for tracking in 3D. One major difficulty is that 3D confocal microscopy acquisition is typically not fast enough to assume that the beads are stationary during the whole 3D scan. As a result, beads may move between planar scans, and classical approaches to 3D segmentation may yield objects that are not spherical. In this article, we propose a 3D bead segmentation that deals with this situation.

Continue reading

Modular and efficient divide-and-conquer SAT solver on top of the Painless framework

By Ludovic Le Frioux, Souheib Baarir, Julien Sopena, Fabrice Kordon

2019-02-13

In Proceedings of the 25th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS’19)

Abstract

Over the last decade, parallel SATisfiability solving has been widely studied from both theoretical and practical aspects. There are two main approaches. The first, divide-and-conquer (D&C), splits the search space, each solver being in charge of a particular subspace. The second, portfolio, launches multiple solvers in parallel, and the first to find a solution ends the computation. Although D&C-based approaches seem to be the natural way to work in parallel, portfolio ones experimentally provide better performance. One explanation resides in the difficulty of using the native formulation of the SAT problem (i.e., the CNF form) to compute an a priori good search space partitioning (i.e., one where all parallel solvers process their subspaces in comparable computational time). To avoid this, dynamic load balancing of the search subspaces is implemented. Unfortunately, it is difficult to compare load balancing strategies, since state-of-the-art SAT solvers that deal appropriately with these aspects are hardly adaptable to strategies other than the ones they were designed for. This paper aims to overcome this problem by proposing an implementation and evaluation of different types of divide-and-conquer inspired by the literature. These rely on the Painless framework, which provides concurrency facilities to elaborate such parallel SAT solvers. A comparison of the various strategies is then discussed.
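The D&C principle itself is simple; the hard part studied in the paper is balancing the subspaces. A naive illustration of ours (CNF as lists of signed integers, DIMACS-style; sequential here, but each branch of the split is exactly what a worker would receive):

```python
# Naive divide-and-conquer SAT sketch: splitting on a variable creates
# two subspaces; in a parallel solver each would go to its own worker.
# Clauses are lists of nonzero ints: 3 means x3, -3 means NOT x3.

def solve(clauses, n_vars, asg=None):
    asg = asg or {}
    all_sat = True
    for clause in clauses:
        satisfied, unknown = False, False
        for lit in clause:
            val = asg.get(abs(lit))
            if val is None:
                unknown = True
            elif (lit > 0) == val:
                satisfied = True
                break
        if not satisfied:
            if not unknown:
                return None               # clause falsified: prune subspace
            all_sat = False
    if all_sat:
        return asg                        # every clause satisfied
    var = next(v for v in range(1, n_vars + 1) if v not in asg)
    for choice in (True, False):          # the two halves of the D&C split
        result = solve(clauses, n_vars, {**asg, var: choice})
        if result is not None:
            return result
    return None

# (x1 or x2) and (x1 -> x2) and (x2 -> x1) forces both variables true:
assert solve([[1, 2], [-1, 2], [-2, 1]], 2) == {1: True, 2: True}
assert solve([[1], [-1]], 1) is None      # unsatisfiable
```

In practice the two subspaces rarely take comparable time, which is why the dynamic load balancing compared in the paper matters.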

Continue reading

How to make n-D plain maps Alexandrov-well-composed in a self-dual way

By Nicolas Boutry, Thierry Géraud, Laurent Najman

2019-02-04

In Journal of Mathematical Imaging and Vision

Abstract

In 2013, Najman and Géraud proved that by working on a well-composed discrete representation of a gray-level image, we can compute what is called its tree of shapes, a hierarchical representation of the shapes in this image. This way, we can proceed to morphological filtering and to image segmentation. However, the authors did not provide such a representation for the non-cubical case. We propose in this paper a way to compute a well-composed representation of any gray-level image defined on a discrete surface, which is a more general framework than the usual cubical grid. Furthermore, the proposed representation is self-dual in the sense that it treats bright and dark components in the image the same way. This paper can be seen as an extension to gray-level images of the works of Daragon et al. on discrete surfaces.
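In the simplest setting (2D binary images on the cubical grid), well-composedness reduces to forbidding the 2x2 diagonal "critical configurations". The sketch below checks that condition; it is our illustration only, while the paper works with n-D gray-level plain maps on discrete surfaces.

```python
# Illustrative check for 2D binary digital well-composedness: an image is
# well-composed iff no 2x2 block is a diagonal (checkerboard) pattern.

def is_well_composed(img):
    for i in range(len(img) - 1):
        for j in range(len(img[0]) - 1):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            if a == d and b == c and a != b:   # critical configuration
                return False
    return True

assert not is_well_composed([[1, 0],
                             [0, 1]])          # checkerboard: critical
assert is_well_composed([[1, 1],
                         [0, 1]])
```

On well-composed images, 4- and 8-connectivity agree, which is what makes the tree of shapes well defined.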

Continue reading

High throughput automated detection of axial malformations in Medaka embryo

Abstract

Fish embryo models are widely used as screening tools to assess the efficacy and/or toxicity of chemicals. This assessment involves the analysis of embryo morphological abnormalities. In this article, we propose a multi-scale pipeline to allow automated classification of fish embryos (Medaka: Oryzias latipes) based on the presence or absence of spine malformations. The proposed pipeline relies on the acquisition of fish embryo 2D images, on feature extraction based on mathematical morphology operators, and on machine learning classification. After image acquisition, segmentation tools are used to detect the embryo before analysing several morphological features. An approach based on machine learning is then applied to these features to automatically classify embryos according to the presence of axial malformations. We built and validated our learning model on 1459 images with a 10-fold cross-validation by comparison with the gold standard of 3D observations performed under a microscope by a trained operator. Our pipeline results in correct classification in 85% of the cases included in the database. This percentage is similar to the success rate of a trained human operator working on 2D images. The key benefit of our approach is the low computational cost of our image analysis pipeline, which guarantees high-throughput analysis.
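The evaluation protocol can be sketched generically. Below is a pure-Python toy (our own: a nearest-centroid classifier on made-up 1D "malformation scores"); the actual pipeline uses morphological features and a trained model.

```python
# Toy sketch of the 10-fold cross-validation protocol used to validate a
# classifier against gold-standard labels. Features and classifier here
# are stand-ins, not the paper's.

def cv_accuracy(xs, ys, k=10):
    n = len(xs)
    correct = 0
    for f in range(k):
        test_idx = set(range(f, n, k))            # interleaved folds
        train = [(xs[i], ys[i]) for i in range(n) if i not in test_idx]
        # One centroid per class, estimated on the training folds only.
        cent = {c: sum(x for x, y in train if y == c) /
                   max(1, sum(1 for _, y in train if y == c))
                for c in (0, 1)}
        for i in test_idx:
            pred = min((0, 1), key=lambda c: abs(xs[i] - cent[c]))
            correct += pred == ys[i]
    return correct / n

# Well-separated synthetic scores classify perfectly:
xs = [0.1, 0.2, 0.15, 0.9, 0.8, 0.95] * 5
ys = [0, 0, 0, 1, 1, 1] * 5
assert cv_accuracy(xs, ys, k=10) == 1.0
```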

Continue reading