Publications

An image processing library in modern C++: Getting simplicity and efficiency with generic programming

By Michaël Roynard, Edwin Carlinet, Thierry Géraud

2018-10-25

In Proceedings of the 2nd workshop on reproducible research in pattern recognition (RRPR 2018)

Abstract

An Image Processing library has as many usages as it has clients, and each client may expect different services from it. Some clients look for efficient, production-quality algorithms, some for a large tool set, while others want extensibility and genericity to interoperate with their own code base… but in most cases, they all want a simple-to-use and stable product. For a C++ Image Processing library designer, it is difficult to reconcile genericity, efficiency and simplicity at the same time. Modern C++ (post-2011) brings new features that help library developers design a software solution combining those three points. In this paper, we develop a method that uses these facilities to abstract the library components and increase the genericity of the algorithms. Furthermore, this method is not specific to image processing; it can be applied to any C++ scientific library.

Continue reading

Deep neural networks for aberrations compensation in digital holographic imaging of the retina

By Julie Rivet, Guillaume Tochon, Serge Meimon, Michel Pâques, Thierry Géraud, Michael Atlan

2018-10-25

In Proceedings of the SPIE conference on adaptive optics and wavefront control for biological systems V

Abstract

In computational imaging by digital holography, the lateral resolution of retinal images is limited to about 20 microns by the aberrations of the eye. To overcome this limitation, the aberrations have to be canceled. Digital aberration compensation can be performed by post-processing of full-field digital holograms. Aberration compensation has been demonstrated from wavefront measurement, by reconstruction of digital holograms in subapertures, and by measurement of a guide star hologram. Yet, these wavefront measurement methods have limited accuracy in practice. For holographic tomography of the human retina, image reconstruction was demonstrated by iterative digital aberration compensation, by minimization of the local entropy of speckle-averaged tomographic volumes. However, image-based aberration compensation is time-consuming, which prevents real-time image rendering. We are investigating a new digital aberration compensation scheme based on a deep neural network to circumvent the limitations of these aberration correction methods. To train the network, 28,000 anonymized images of eye fundus from patients of the Quinze-Vingts hospital in Paris have been collected, and synthetic interferograms have been reconstructed digitally by simulating the propagation of eye fundus images recorded with standard cameras. With a U-Net architecture, we demonstrate defocus correction of these complex-valued synthetic interferograms. Other aberration orders will be corrected with the same method, to improve lateral resolution up to the diffraction limit in digital holographic imaging of the retina.
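
As a rough, illustrative sketch of the synthetic-data step (not the paper's actual simulation pipeline), one can model defocus as a quadratic phase applied in the Fourier domain of a fundus image; the normalization and the defocus_coeff value below are assumptions made for the example.

```python
import numpy as np

def synthetic_defocus(fundus, defocus_coeff=8.0):
    """Illustrative only: apply a defocus-like aberration to a
    gray-level fundus image, treated here as a field amplitude.
    Defocus is modeled as a quadratic phase exp(i*a*rho^2) in the
    Fourier (pupil) plane; `defocus_coeff` is a made-up parameter."""
    h, w = fundus.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    rho2 = (fx**2 + fy**2) / (fx.max()**2 + fy.max()**2)  # normalized radius^2
    phase = np.exp(1j * defocus_coeff * np.pi * rho2)
    spectrum = np.fft.fft2(fundus.astype(np.complex128))
    # The complex result plays the role of the aberrated interferogram
    # whose defocus the U-Net is trained to undo.
    return np.fft.ifft2(spectrum * phase)
```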

Continue reading

Left atrial segmentation in a few seconds using fully convolutional network and transfer learning

By Élodie Puybareau, Zhou Zhao, Younes Khoudli, Edwin Carlinet, Yongchao Xu, Jérôme Lacotte, Thierry Géraud

2018-10-25

In Proceedings of the workshop on statistical atlases and computational modelling of the heart (STACOM 2018), in conjunction with MICCAI

Abstract

In this paper, we propose a fast automatic method that segments the left atrial cavity from 3D GE-MRIs without any manual assistance, using a fully convolutional network (FCN) and transfer learning. This FCN is the base network of VGG-16, pre-trained on ImageNet for natural image classification and fine-tuned with the training dataset of the MICCAI 2018 Atrial Segmentation Challenge. It relies on the “pseudo-3D” method published at ICIP 2017, which segments objects from 2D color images that encode the 3D information of MRI volumes. For the $n^{\text{th}}$ slice of the volume to segment, we consider three images, corresponding to the $(n-1)^{\text{th}}$, $n^{\text{th}}$, and $(n+1)^{\text{th}}$ slices of the original volume. These three gray-level 2D images are assembled to form a 2D RGB color image (one image per channel). This image is the input of the FCN, which yields a 2D segmentation of the $n^{\text{th}}$ slice. We process all slices, then stack the results to form the 3D output segmentation. With this technique, segmenting the left atrial cavity of a 3D volume takes only a few seconds. We obtain a Dice score of 0.92, both on the training set in our experiments before the challenge and on the test set of the challenge.
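
The pseudo-3D assembly is straightforward to express with numpy. In the sketch below, fcn_segment_2d is a hypothetical stand-in for the fine-tuned VGG-16 FCN, and clamping at the first and last slices is our own assumption, since the abstract does not specify edge handling.

```python
import numpy as np

def pseudo_3d_segment(volume, fcn_segment_2d):
    """Segment a (depth, height, width) gray-level volume slice by
    slice, following the pseudo-3D scheme described in the abstract.
    `fcn_segment_2d` maps a (height, width, 3) RGB image to a 2D mask."""
    depth = volume.shape[0]
    masks = []
    for n in range(depth):
        prev_s = volume[max(n - 1, 0)]          # clamp at the volume edges
        next_s = volume[min(n + 1, depth - 1)]
        # Slices n-1, n, n+1 become the R, G, B channels of one image.
        rgb = np.stack([prev_s, volume[n], next_s], axis=-1)
        masks.append(fcn_segment_2d(rgb))
    # Stack the 2D segmentations back into the 3D output segmentation.
    return np.stack(masks, axis=0)
```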

Continue reading

Document detection in videos captured by smartphones using a saliency-based method

By Minh Ôn Vũ Ngọc, Jonathan Fabrizio, Thierry Géraud

2018-09-20

In International conference on document analysis and recognition workshops (ICDARW)

Abstract

Smartphones are now widely used to digitize paper documents. Document detection is the first important step of the digitization process. Whereas many methods extract lines from contours as candidates for the document boundary, we present in this paper a region-based approach. A key feature of our method is that it relies on visual saliency, using a distance recently introduced in mathematical morphology. We show that the performance of our method is competitive with state-of-the-art methods on the ICDAR SmartDoc 2015 competition dataset.

Continue reading

Recognizing heterogeneous sequences by rational type expression

By Jim Newton, Didier Verna

2018-09-14

In Proceedings of the Meta’18 workshop on meta-programming techniques and reflection

Abstract

We summarize a technique for writing functions that recognize types of heterogeneous sequences in Common Lisp. The technique employs sequence recognition functions that are generated at compile time and evaluated at run time. The technique we demonstrate extends the Common Lisp type system, exploiting the theory of rational languages, Binary Decision Diagrams, and the Turing-complete macro facility of Common Lisp. The resulting system uses meta-programming to move an exponential-complexity operation from run time to compile time, leaving a highly optimized linear-complexity operation for run time.
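
The paper works in Common Lisp; the sketch below merely transposes the core idea to Python. A rational type expression such as (int str)* is compiled (here, by hand) into a DFA whose alphabet is the type of each element, so run-time recognition is a single linear scan. Subtyping and the BDD machinery of the actual system are omitted.

```python
def recognize_int_str_pairs(seq):
    """Hand-compiled DFA for the rational type expression (int str)*:
    state 0 expects an int, state 1 expects a str."""
    transitions = {
        (0, int): 1,
        (1, str): 0,
    }
    state = 0
    for item in seq:
        # Exact type match for brevity; the real system handles subtyping.
        state = transitions.get((state, type(item)), -1)
        if state == -1:      # dead state: no transition matched
            return False
    return state == 0        # accepting iff the last pair is complete

assert recognize_int_str_pairs([1, "a", 2, "b"])
assert not recognize_int_str_pairs([1, "a", 2])
```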

Continue reading

A theoretical and numerical analysis of the worst-case size of reduced ordered binary decision diagrams

By Jim Newton, Didier Verna

2018-08-28

In ACM Transactions on Computational Logic

Abstract

Binary Decision Diagrams (BDDs), and in particular ROBDDs (Reduced Ordered BDDs), are a common data structure for manipulating Boolean expressions, used in integrated circuit design, type inferencers, model checkers, and many other applications. Although the ROBDD is a lightweight data structure to implement, its behavior in terms of memory allocation may not be obvious to the program architect. We explore experimentally, numerically, and theoretically the typical and worst-case ROBDD sizes, in terms of number of nodes and of residual compression ratios compared to unreduced BDDs. While our theoretical results are not surprising, as they are in keeping with previously known results, we believe our method contributes to the current body of research through our experimental and statistical treatment of ROBDD sizes. In addition, we provide an algorithm to calculate the worst-case size. Finally, we present an algorithm for constructing a worst-case ROBDD over a given number of variables. Our approach may be useful to projects deciding whether the ROBDD is the appropriate data structure to use, and in building worst-case examples to test their code.
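
For intuition, the classical worst-case bound (as found, e.g., in Knuth's TAOCP vol. 4A) can be computed in a few lines; this is the textbook formula, not necessarily the exact algorithm of the paper.

```python
def worst_case_robdd_size(n):
    """Classical worst-case node count of an ROBDD on n variables,
    including the two terminal nodes.  Level i holds at most 2**i
    nodes (paths from the root), and at most as many nodes as there
    are Boolean functions of the remaining n-i variables that depend
    on their first variable: 2**(2**(n-i)) - 2**(2**(n-i-1))."""
    internal = sum(
        min(2**i, 2**(2**(n - i)) - 2**(2**(n - i - 1)))
        for i in range(n)
    )
    return internal + 2

print([worst_case_robdd_size(n) for n in range(1, 8)])
# -> [3, 5, 7, 11, 19, 31, 47]
```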

Continue reading

Weakly well-composed cell complexes over $n$D pictures

Abstract

In previous work, we proposed a combinatorial algorithm to “locally repair” the cubical complex $Q(I)$ that is canonically associated with a given 3D picture $I$. The algorithm constructs a 3D polyhedral complex $P(I)$ which is homotopy equivalent to $Q(I)$ and whose boundary surface is a 2D manifold. A polyhedral complex satisfying these properties is called well-composed. In the present paper we extend these results to higher dimensions. We prove that for a given $n$-dimensional picture the obtained cell complex is well-composed in a weaker sense, but is still homotopy equivalent to the initial cubical complex.

Continue reading

Improving parallel state-space exploration using genetic algorithms

By Étienne Renault

2018-06-14

In Proceedings of the 12th international conference on verification and evaluation of computer and communication systems (VECOS’18)

Abstract

The verification of temporal properties against a given system may require the exploration of its full state space. In explicit model checking, this exploration uses a depth-first search (DFS) and can be achieved with multiple randomized threads to increase performance. Nonetheless, the topology of the state space and the exploration order can cap the speedup beyond a certain number of threads. This paper proposes a new technique that aims to tackle this limitation by generating artificial initial states using genetic algorithms. Threads are then launched from these states and thus explore different parts of the state space. Our prototype implementation runs 10% faster than state-of-the-art algorithms. These results demonstrate that this novel approach is worth considering as a way to overcome existing limitations.
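
The abstract does not detail the genetic encoding or operators, so the Python sketch below is only an illustration of the overall shape, under our own assumptions: a candidate is a path encoded as a list of successor choices, the (hypothetical) fitness favors paths visiting many distinct states, and the endpoints of surviving paths become the artificial initial states seeding extra DFS threads.

```python
import random

def endpoint_and_coverage(succ, init, path):
    """Walk `path` from `init`; return (final state, #distinct states).
    `succ` maps a state to the list of its successor states."""
    state, seen = init, {init}
    for choice in path:
        nxt = succ(state)
        if not nxt:
            break
        state = nxt[choice % len(nxt)]
        seen.add(state)
    return state, len(seen)

def breed_seed_states(succ, init, pop_size=20, path_len=50, generations=10):
    """Breed paths whose endpoints serve as artificial initial states."""
    pop = [[random.randrange(8) for _ in range(path_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the half of the population that visits the most states...
        pop.sort(key=lambda p: -endpoint_and_coverage(succ, init, p)[1])
        parents = pop[: pop_size // 2]
        # ...and refill it with one-point crossover plus a point mutation.
        children = []
        for a, b in zip(parents, parents[1:] + parents[:1]):
            cut = random.randrange(1, path_len)
            child = a[:cut] + b[cut:]
            child[random.randrange(path_len)] = random.randrange(8)
            children.append(child)
        pop = parents + children
    return {endpoint_and_coverage(succ, init, p)[0] for p in pop}

# Example on a toy 1000-state system with two transitions per state:
seeds = breed_seed_states(lambda s: [(s + 1) % 1000, (s * 7) % 1000], 0)
```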

Continue reading

Reactive synthesis from LTL specification with Spot

By Thibaud Michaud, Maximilien Colange

2018-06-07

In Proceedings of the 7th workshop on synthesis, SYNT@CAV 2018

Abstract

We present ltlsynt, a new tool for reactive synthesis from LTL specifications. It relies on the efficiency of Spot to translate the input LTL specification into a deterministic parity automaton. The latter yields a parity game, which we solve with Zielonka’s recursive algorithm. The approach taken in ltlsynt was widely believed to be impractical, due to the double-exponential size of the parity game, and to the open status of the complexity of parity game resolution. ltlsynt ranked second in its track at the 2017 edition of the SYNTCOMP competition. This demonstrates the practical applicability of the parity game approach when backed by efficient manipulations of $\omega$-automata such as the ones provided by Spot. We present our approach and report on our experimental evaluation of ltlsynt to better understand its strengths and weaknesses.
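
Zielonka's recursive algorithm is itself compact. Below is a standalone Python version for max-parity games (assuming every vertex has at least one successor), written for illustration and independent of ltlsynt's actual implementation.

```python
def attractor(player, target, vertices, edges, owner):
    """Vertices from which `player` can force the play into `target`."""
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v in vertices - attr:
            succs = [w for w in edges[v] if w in vertices]
            if (owner[v] == player and any(w in attr for w in succs)) or \
               (owner[v] != player and succs and all(w in attr for w in succs)):
                attr.add(v)
                changed = True
    return attr

def zielonka(vertices, edges, owner, priority):
    """Winning regions (win0, win1) of a max-parity game."""
    if not vertices:
        return set(), set()
    p = max(priority[v] for v in vertices)
    i = p % 2                              # player favored by priority p
    top = {v for v in vertices if priority[v] == p}
    a = attractor(i, top, vertices, edges, owner)
    sub = zielonka(vertices - a, edges, owner, priority)
    if not sub[1 - i]:                     # opponent wins nothing below:
        win = [set(), set()]
        win[i] = set(vertices)             # player i wins everywhere
        return tuple(win)
    b = attractor(1 - i, sub[1 - i], vertices, edges, owner)
    sub2 = zielonka(vertices - b, edges, owner, priority)
    win = [set(), set()]
    win[i] = sub2[i]
    win[1 - i] = sub2[1 - i] | b
    return tuple(win)

# Tiny game: priorities alternate 2, 1 along a two-vertex cycle, so the
# highest priority seen infinitely often is 2 (even): player 0 wins all.
V = {'a', 'b'}
E = {'a': ['b'], 'b': ['a']}
owner = {'a': 0, 'b': 1}
priority = {'a': 2, 'b': 1}
print(zielonka(V, E, owner, priority))     # player 0 wins every vertex
```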

Continue reading

A formally-proved algorithm to compute the correct average of decimal floating-point numbers

By Sylvie Boldo, Florian Faissole, Vincent Tourneur

2018-06-01

In Proceedings of the 25th IEEE symposium on computer arithmetic

Abstract

Some modern processors include decimal floating-point units, with a conforming implementation of the IEEE 754-2008 standard. Unfortunately, many algorithms from the computer arithmetic literature are no longer correct when computations are done in radix 10. This is in particular the case for the computation of the average of two floating-point numbers. Several radix-2 algorithms are available, including one that provides the correct rounding, but none of them holds in radix 10. This paper presents a new radix-10 algorithm that computes the correctly rounded average. To guarantee a higher level of confidence, we also provide a Coq formal proof of our theorems, which takes gradual underflow into account. Note that our formal proof has been generalized to ensure that this algorithm is correct when computations are done in any even radix.
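
The radix-10 pitfall is easy to reproduce with Python's decimal module. In a toy 2-digit decimal format with round-to-nearest-even, the naive (x + y) / 2 suffers a double rounding and misses the correctly rounded average by one ulp. The counterexample below is our own and does not show the paper's corrected algorithm.

```python
from decimal import Decimal, getcontext, ROUND_HALF_EVEN

getcontext().rounding = ROUND_HALF_EVEN
getcontext().prec = 2                      # toy 2-digit decimal format

x, y = Decimal("8.8"), Decimal("0.94")

# Naive average: x + y rounds 9.74 -> 9.7, then 9.7 / 2 = 4.85 rounds
# (ties-to-even) to 4.8: two roundings instead of one.
print((x + y) / 2)                         # 4.8

# The exact average is 4.87, whose correct 2-digit rounding is 4.9.
getcontext().prec = 10
exact = (x + y) / 2                        # 4.87, computed exactly
getcontext().prec = 2
print(+exact)                              # 4.9 (one ulp above the naive result)
```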

Continue reading