Publications

On the usefulness of clause strengthening in parallel SAT solving

By Vincent Vallade, Ludovic Le Frioux, Souheib Baarir, Julien Sopena, Fabrice Kordon

2020-08-01

In Proceedings of the 12th NASA formal methods symposium (NFM’20)

Abstract

In the context of parallel SATisfiability solving, this paper presents an implementation and evaluation of a clause strengthening algorithm. The developed component can be easily combined with (virtually) any CDCL-like SAT solver. Our implementation is integrated as a part of Painless, a generic and modular framework for building parallel SAT solvers.
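
The abstract does not detail the strengthening procedure itself. As a rough, purely illustrative sketch of what clause strengthening commonly means in a CDCL context (a vivification-style check, not necessarily the variant used in this paper; function names and the naive propagation loop are ours):

```python
def unit_propagate(clauses, assignment):
    """Very naive unit propagation over `clauses` (lists of int literals).
    Returns the extended assignment (a set of literals), or None on conflict."""
    assignment = set(assignment)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(l in assignment for l in clause):
                continue                          # clause already satisfied
            unassigned = [l for l in clause
                          if l not in assignment and -l not in assignment]
            if not unassigned:
                return None                       # clause falsified: conflict
            if len(unassigned) == 1:              # unit clause: force its literal
                assignment.add(unassigned[0])
                changed = True
    return assignment

def strengthen(clause, other_clauses):
    """Try to drop redundant literals from `clause` w.r.t. the rest of the database."""
    kept = list(clause)
    for lit in list(clause):
        rest = [l for l in kept if l != lit]
        result = unit_propagate(other_clauses, {-l for l in rest})
        # `lit` is redundant if assuming the negation of the remaining literals
        # already yields a conflict or implies the negation of `lit`.
        if result is None or -lit in result:
            kept = rest
    return kept

# Example: with (1 ∨ 2) and (¬2 ∨ 3), the clause (1 ∨ 3 ∨ 4) strengthens to (1 ∨ 3).
print(strengthen([1, 3, 4], [[1, 2], [-2, 3]]))
```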

Continue reading

A 4D counter-example showing that DWCness does not imply CWCness in $n$-D

By Nicolas Boutry, Rocio Gonzalez-Diaz, Laurent Najman, Thierry Géraud

2020-07-21

In Combinatorial image analysis: Proceedings of the 20th international workshop (IWCIA 2020)

Abstract

In this paper, we prove that the two flavours of well-composedness called Continuous Well-Composedness (CWCness for short), stating that the boundary of the continuous analog of a discrete set is a manifold, and Digital Well-Composedness (DWCness for short), stating that a discrete set does not contain any critical configuration, are not equivalent in dimension 4. To prove this, we exhibit a configuration of 8 tesseracts (4D cubes) sharing a common corner (vertex) that is DWC but not CWC. This result is surprising since CWCness and DWCness are known to be equivalent in 2D and 3D. To reach our goal, we use local homology.
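
For intuition, digital well-composedness in 2D simply forbids the two "critical configurations" in which foreground pixels meet only at a corner; higher dimensions add analogous constraints on larger blocks, which is what the paper's 4D counter-example is built around. A small sketch of the 2D case (our own illustration, not code from the paper):

```python
import numpy as np

# The two 2D critical configurations: foreground pixels touching only diagonally.
CRITICAL_2D = [np.array([[1, 0],
                         [0, 1]]),
               np.array([[0, 1],
                         [1, 0]])]

def is_digitally_well_composed_2d(img):
    """A binary 2D image is DWC iff no 2x2 block is a critical configuration."""
    img = np.asarray(img, dtype=int)
    for i in range(img.shape[0] - 1):
        for j in range(img.shape[1] - 1):
            block = img[i:i + 2, j:j + 2]
            if any((block == c).all() for c in CRITICAL_2D):
                return False
    return True

print(is_digitally_well_composed_2d([[1, 0], [0, 1]]))  # False: critical configuration
print(is_digitally_well_composed_2d([[1, 1], [0, 1]]))  # True
```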

Continue reading

Euler well-composedness

By Nicolas Boutry, Rocio Gonzalez-Diaz, Maria-Jose Jimenez, Eduardo Paluzo-Hildago

2020-07-21

In Combinatorial image analysis: Proceedings of the 20th international workshop (IWCIA 2020)

Abstract

In this paper, we define a new flavour of well-composedness, called Euler well-composedness, in the general setting of regular cell complexes: A regular cell complex is Euler well-composed if the Euler characteristic of the link of each boundary vertex is $1$. A cell decomposition of a picture $I$ is a pair of regular cell complexes $\big(K(I),K(\bar{I})\big)$ such that $K(I)$ (resp. $K(\bar{I})$) is a topological and geometrical model representing $I$ (resp. its complement $\bar{I}$). Then, a cell decomposition of a picture $I$ is self-dual Euler well-composed if both $K(I)$ and $K(\bar{I})$ are Euler well-composed. We prove in this paper that, first, self-dual Euler well-composedness is equivalent to digital well-composedness in dimensions 2 and 3, and second, in dimension 4, self-dual Euler well-composedness implies digital well-composedness, though the converse is not true.
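
As a reminder (this is the standard definition of the Euler characteristic, not something specific to the paper), for a finite regular cell complex the characteristic is the alternating sum of its cell counts, so the Euler well-composedness condition at a boundary vertex $v$ reads

$$\chi\big(\mathrm{Lk}(v)\big) \;=\; \sum_{i \ge 0} (-1)^{i}\,\#\{i\text{-cells of } \mathrm{Lk}(v)\} \;=\; 1,$$

where $\mathrm{Lk}(v)$ denotes the link of $v$ in the complex.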

Continue reading

Non-iterative methods for image improvement in digital holography of the retina

Abstract

With the increase in the number of people with moderate to severe visual impairment, monitoring and treatment of vision disorders have become major issues in medicine today. At the Quinze-Vingts national ophthalmology hospital in Paris, two optical benches have been set up in recent years to develop two real-time digital holography techniques for the retina: holographic optical coherence tomography (OCT) and laser Doppler holography. The first reconstructs three-dimensional images, while the second allows visualization of blood flow in vessels. Besides problems inherent to the imaging system itself, optical devices are subject to external disturbances, which also cause imaging difficulties and a loss of accuracy. The main obstacles these technologies face are eye motion and eye aberrations. In this thesis, we have introduced several methods for image quality improvement in digital holography, and validated them experimentally. The resolution of holographic images has been improved by robust non-iterative methods: lateral and axial tracking and compensation of translation movements, and measurement and compensation of optical aberrations. This allows us to be optimistic that structures in holographic images of the retina will become more visible and sharper, which could ultimately provide very valuable information to clinicians.
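
One classical non-iterative way to estimate and compensate a lateral translation between two reconstructed frames is phase correlation; the sketch below is our own illustration of that general idea, not the methods developed in the thesis:

```python
import numpy as np

def estimate_shift(ref, moving):
    """Estimate the integer (dy, dx) translation of `moving` relative to `ref`
    by phase correlation (single FFT round trip, no iterative optimization)."""
    F_ref = np.fft.fft2(ref)
    F_mov = np.fft.fft2(moving)
    cross_power = np.conj(F_ref) * F_mov
    cross_power /= np.abs(cross_power) + 1e-12     # keep only the phase
    correlation = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Indices past the Nyquist frequency correspond to negative shifts.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

def compensate(img, dy, dx):
    """Undo the estimated translation (circular shift, sufficient for a sketch)."""
    return np.roll(img, shift=(-dy, -dx), axis=(0, 1))
```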

Continue reading

Practical “paritizing” of Emerson–Lei automata

By Florian Renkin, Alexandre Duret-Lutz, Adrien Pommellet

2020-07-07

In Proceedings of the 18th international symposium on automated technology for verification and analysis (ATVA’20)

Abstract

We introduce a new algorithm that takes a Transition-based Emerson-Lei Automaton (TELA), that is, an $\omega$-automaton whose acceptance condition is an arbitrary Boolean formula on sets of transitions to be seen infinitely or finitely often, and converts it into a Transition-based Parity Automaton (TPA). To reduce the size of the output TPA, the algorithm combines and optimizes two procedures based on a latest appearance record principle, and introduces a partial degeneralization. Our motivation is to use this algorithm to improve our LTL synthesis tool, where producing deterministic parity automata is an intermediate step.
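
To make the two acceptance conditions concrete (our own example, written in the Inf/Fin notation used by formats such as HOA, not taken from the paper): an input TELA may carry an arbitrary formula such as

$$\mathsf{Fin}(0) \wedge \big(\mathsf{Inf}(1) \vee \mathsf{Inf}(2)\big),$$

whereas the output TPA must use a parity condition, e.g. "parity min even 3", which expressed as an Emerson-Lei formula reads

$$\mathsf{Inf}(0) \vee \big(\mathsf{Fin}(1) \wedge \mathsf{Inf}(2)\big),$$

i.e., the least colour seen infinitely often must be even.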

Continue reading

A new minimum barrier distance for multivariate images with applications to salient object detection, shortest path finding, and segmentation

By Minh Ôn Vũ Ngọc, Nicolas Boutry, Jonathan Fabrizio, Thierry Géraud

2020-06-02

In Computer Vision and Image Understanding

Abstract

Distance transforms and the saliency maps they induce are widely used in image processing, computer vision, and pattern recognition. One of the most commonly used distance transforms is the geodesic one. Unfortunately, this distance does not always achieve satisfying results on noisy or blurred images. Recently, a new (pseudo-)distance, called the minimum barrier distance (MBD), which is more robust to pixel variations, has been introduced. Some years later, Géraud et al. proposed a good and fast-to-compute approximation of this distance: the Dahu pseudo-distance. Since this distance was initially developed for grayscale images, we propose here an extension of this transform to multivariate images; we call it the vectorial Dahu pseudo-distance. An efficient way to compute it is provided in this paper. Besides, we provide benchmarks demonstrating that the vectorial Dahu pseudo-distance is more robust than and competitive with other MBD-based distances, which shows how promising this distance is for salient object detection, shortest path finding, and object segmentation.
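
For reference, the scalar minimum barrier distance that the Dahu pseudo-distance approximates, and that this paper extends to multivariate data, is defined as follows: for a path $\pi = (\pi_0,\dots,\pi_k)$ between two pixels $x$ and $y$ of a gray-level image $F$,

$$\tau_F(\pi) = \max_{0 \le i \le k} F(\pi_i) \;-\; \min_{0 \le i \le k} F(\pi_i), \qquad d_F^{\mathrm{MB}}(x, y) = \min_{\pi \in \Pi(x,y)} \tau_F(\pi),$$

where $\Pi(x,y)$ is the set of paths from $x$ to $y$.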

Continue reading

Improving swarming using genetic algorithms

By Étienne Renault

2020-06-02

In Innovations in Systems and Software Engineering: a NASA journal (ISSE)

Abstract

The verification of temporal properties against a given system may require the exploration of its full state space. In explicit model checking, this exploration uses a depth-first search and can be achieved with multiple randomized threads to increase performance. Nonetheless, the topology of the state space and the exploration order can cap the speedup beyond a certain number of threads. This paper proposes a new technique that aims to tackle this limitation by generating artificial initial states using genetic algorithms. Threads are then launched from these states and thus explore different parts of the state space. Our prototype implementation is 10% faster than state-of-the-art algorithms on a general benchmark and 40% faster on a specialized benchmark. Even though we expected an improvement of an order of magnitude, these results are still encouraging since they suggest a new way to handle existing limitations. Empirically, our technique seems well suited to "linear" topologies, i.e., those obtained when combining model checking algorithms with partial-order reduction techniques.
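
The abstract only states that artificial initial states are bred with a genetic algorithm before the swarmed exploration starts. As a purely illustrative sketch of such an outer loop (the state representation, fitness function, and operators are placeholder callbacks, not the paper's):

```python
import random

def evolve_initial_states(random_walk, crossover, mutate, fitness,
                          population_size=32, generations=20, n_threads=8):
    """Breed artificial initial states for a swarmed exploration.

    `random_walk()` returns a candidate state, `fitness(s)` scores it, and
    `crossover(a, b)` / `mutate(s)` produce new candidates; all four are
    model-specific callbacks supplied by the caller.
    """
    population = [random_walk() for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:population_size // 2]
        children = []
        while len(survivors) + len(children) < population_size:
            a, b = random.sample(survivors, 2)
            children.append(mutate(crossover(a, b)))
        population = survivors + children
    # Each thread of the swarm starts its DFS from one of the fittest states.
    return sorted(population, key=fitness, reverse=True)[:n_threads]
```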

Continue reading

Community and LBD-based clause sharing policy for parallel SAT solving

By Vincent Vallade, Ludovic Le Frioux, Souheib Baarir, Julien Sopena, Vijay Ganesh, Fabrice Kordon

2020-06-01

In Proceedings of the 23rd international conference on theory and applications of satisfiability testing (SAT’20)

Abstract

Modern parallel SAT solvers rely heavily on effective clause sharing policies for their performance. The core problem being addressed by these policies can be succinctly stated as "the problem of identifying high-quality learnt clauses", i.e., clauses that, when shared between the worker nodes of a parallel solver, improve its performance. The term "high-quality clauses" is often defined in terms of metrics that solver designers have identified over years of empirical study. Some of the more well-known metrics to identify high-quality clauses for sharing include clause length, literal block distance (LBD), and clause usage in propagation. In this paper, we propose a new metric aimed at identifying high-quality learnt clauses, and a concomitant clause-sharing policy based on a combination of LBD and the community structure of Boolean formulas. The concept of community structure has been proposed as a possible explanation for the extraordinary performance of SAT solvers on industrial instances; hence, it is a natural candidate as a basis for a metric to identify high-quality clauses. More precisely, our metric identifies clauses that have both low LBD and low community number as high-quality for applications such as verification and testing. The community number of a clause C measures the number of different communities of a formula that the variables in C span. We perform an extensive empirical analysis of our metric and clause-sharing policy, and show that our method significantly outperforms state-of-the-art techniques on the benchmarks from the parallel track of the last four SAT competitions.
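
Following the definitions in the abstract, the community number of a clause is simply the number of distinct communities its variables belong to (given a precomputed variable-to-community map), and the sharing policy keeps clauses whose LBD and community number are both low. A minimal sketch; the thresholds are illustrative, not the paper's:

```python
def community_number(clause, var_community):
    """Number of distinct communities spanned by the variables of `clause`.
    `clause` is a list of int literals; `var_community` maps variables to community ids."""
    return len({var_community[abs(lit)] for lit in clause})

def should_share(clause, lbd, var_community, max_lbd=4, max_communities=3):
    """Share a learnt clause only if both its LBD and its community number are low."""
    return lbd <= max_lbd and community_number(clause, var_community) <= max_communities
```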

Continue reading

Using separated inputs for multimodal brain tumor segmentation with 3D U-Net-like architectures

By Nicolas Boutry, Joseph Chazalon, Élodie Puybareau, Guillaume Tochon, Hugues Talbot, Thierry Géraud

2020-06-01

In Proceedings of the 5th international workshop, BrainLes 2019, held in conjunction with MICCAI 2019

Abstract

The work presented in this paper addresses the MICCAI BraTS 2019 challenge devoted to brain tumor segmentation using magnetic resonance images. For each task of the challenge, we proposed and submitted for evaluation an original method. For the tumor segmentation task (Task 1), our convolutional neural network is based on a variant of the U-Net architecture of Ronneberger et al. with two modifications: first, we separate the four convolution parts to decorrelate the weights corresponding to each modality, and second, we provide volumes of size $240 \times 240 \times 3$ as inputs to these convolution parts. This way, we take advantage of the 3D aspect of the input signal, and we do not use the same weights for separate inputs. For the overall survival task (Task 2), we compute explainable features and use a kernel PCA embedding followed by a Random Forest classifier to build a predictor from very few training samples. For the uncertainty estimation task (Task 3), we introduce and compare lightweight methods based on simple principles which can be applied to any segmentation approach. The overall performance of each of our contributions is honorable given their low computational requirements for both training and testing.
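
To illustrate the input separation described above (a hand-written PyTorch sketch under our own assumptions, not the authors' code): each MRI modality gets its own first convolution stage, fed with a $240 \times 240 \times 3$ slab, and the per-modality feature maps are concatenated before a shared U-Net trunk.

```python
import torch
import torch.nn as nn

class SeparatedInputStem(nn.Module):
    """One independent convolution branch per MRI modality (e.g. T1, T1ce, T2, FLAIR).

    Each branch receives a 3-slice slab of one modality, shaped (B, 3, 240, 240),
    so the first-layer weights of different modalities are decorrelated.
    """
    def __init__(self, features_per_branch=16, n_modalities=4):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(3, features_per_branch, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            )
            for _ in range(n_modalities)
        ])

    def forward(self, slabs):
        # `slabs` is a list of n_modalities tensors, each of shape (B, 3, 240, 240).
        features = [branch(slab) for branch, slab in zip(self.branches, slabs)]
        return torch.cat(features, dim=1)   # (B, n_modalities * features_per_branch, 240, 240)

# The concatenated features would then feed a standard U-Net encoder/decoder.
```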

Continue reading