Publications

The tree of shapes turned into a max-tree: A simple and efficient linear algorithm

By Edwin Carlinet, Thierry Géraud, Sébastien Crozet

2018-05-10

In Proceedings of the 24th IEEE International Conference on Image Processing (ICIP)

Abstract

The Tree of Shapes (ToS) is a morphological tree-based representation of an image that encodes the inclusion of its level lines. It features many invariances to image changes, which makes it well suited to many applications in image processing and pattern recognition. In this paper, we propose a way of turning its computation into a Max-Tree computation. The latter has been widely studied, and many efficient algorithms (including parallel ones) have been developed. Furthermore, we develop a specific optimization to speed up the common 2D case. The result is a simple and efficient algorithm, running in linear time with a low memory footprint, that outperforms the current state of the art. For reproducible research purposes, we distribute our code as free software.
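
For readers unfamiliar with the target structure, here is a minimal single-threaded sketch of the classical union-find max-tree construction (in the spirit of Berger et al.), i.e., the kind of off-the-shelf computation such a reduction can target. It is an illustration, not the paper's optimized algorithm; the connectivity parameter and the (parent, order) encoding are our own choices.

    import numpy as np

    def max_tree(f, connectivity=((-1, 0), (1, 0), (0, -1), (0, 1))):
        """Classical union-find max-tree of a 2D grayscale image.

        Returns (parent, order): parent maps each pixel (flat index) to its
        canonical parent; order lists pixels from highest to lowest level,
        so leaves come before their ancestors.
        """
        h, w = f.shape
        flat = f.ravel()
        order = np.argsort(flat, kind="stable")[::-1]    # decreasing gray level
        parent = np.full(flat.size, -1, dtype=np.int64)  # -1 = not yet processed
        zpar = np.full(flat.size, -1, dtype=np.int64)    # union-find forest

        def find(x):                                     # root, with path halving
            while zpar[x] != x:
                zpar[x] = zpar[zpar[x]]
                x = zpar[x]
            return x

        for p in order:                                  # flood from the peaks down
            p = int(p)
            parent[p] = zpar[p] = p
            y, x = divmod(p, w)
            for dy, dx in connectivity:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and zpar[ny * w + nx] != -1:
                    r = int(find(ny * w + nx))           # neighbor component's root
                    if r != p:
                        parent[r] = zpar[r] = p          # attach it below p
        for p in reversed(order):                        # canonicalize: point at level roots
            p = int(p)
            q = parent[p]
            if flat[parent[q]] == flat[q]:
                parent[p] = parent[q]
        return parent, order

    # Tiny example: two bright blobs on a dark background -> 3 tree nodes.
    img = np.array([[0, 2, 0],
                    [0, 0, 0],
                    [0, 3, 3]])
    parent, order = max_tree(img)
    flat = img.ravel()
    nodes = [p for p in range(flat.size)
             if parent[p] == p or flat[parent[p]] != flat[p]]
    print(len(nodes))  # 3: the background and the two blobs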

Continue reading

Segmentation of white matter hyperintensities in a few seconds using a convolutional neural network and transfer learning

By Élodie Puybareau, Yongchao Xu, Joseph Chazalon, Isabelle Bloch, Thierry Géraud

2018-05-04

In Actes du congrès Reconnaissance des Formes, Image, Apprentissage et Perception (RFIAP), special session “Deep Learning, Deep in France”

Abstract

In this paper, we propose a fast and automatic method for segmenting white matter hyperintensities (WMH) in 3D brain MRI images, using a fully convolutional network (FCN) and transfer learning. The FCN is the Visual Geometry Group (VGG) neural network, pre-trained on ImageNet for natural image classification and fine-tuned on the training data of the MICCAI WMH challenge. For each slice of the volume to be segmented, we consider three images: the T1 acquisition, the FLAIR acquisition, and the result of a morphological operator applied to the FLAIR, the top-hat, which highlights small structures of high intensity. These three 2D images are assembled into a three-channel 2D image, interpreted as a color image, which is then passed to the FCN to obtain the 2D segmentation of the corresponding slice. All slices are processed in this way to form the 3D output segmentation. With this technique, WMH segmentation of a 3D brain volume takes about 10 seconds, pre-processing included. Our technique ranked 6th out of 20 participants in the MICCAI WMH challenge.
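
A compact sketch of the slice-wise pipeline described above. It assumes co-registered volumes of shape (slices, H, W); fcn_segment_slice is a hypothetical callable standing in for the fine-tuned VGG FCN, which is not reproduced here.

    import numpy as np
    from scipy import ndimage as ndi

    def segment_wmh_volume(t1, flair, fcn_segment_slice, tophat_size=5):
        """Slice-wise WMH segmentation: T1, FLAIR and a FLAIR top-hat are
        stacked into one pseudo-color image per slice, segmented in 2D,
        and the 2D masks are re-stacked into a 3D segmentation."""
        masks = []
        for t1_slice, flair_slice in zip(t1, flair):
            # The white top-hat of the FLAIR slice highlights small,
            # high-intensity structures such as hyperintensities.
            tophat = ndi.white_tophat(flair_slice, size=tophat_size)
            # Assemble the three 2D images as the channels of one "color" image.
            rgb = np.stack([t1_slice, flair_slice, tophat], axis=-1)
            masks.append(fcn_segment_slice(rgb))
        return np.stack(masks, axis=0)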

Continue reading

A linear-complexity algorithm for computing the tree of shapes

By Edwin Carlinet, Sébastien Crozet, Thierry Géraud

2018-05-04

In Actes du congrès Reconnaissance des Formes, Image, Apprentissage et Perception (RFIAP)

Abstract

The tree of shapes (ToS) is a hierarchical morphological representation of an image that encodes the inclusion of its level lines. It is invariant to certain changes in the image, which makes it an ideal tool for developing pattern recognition applications. In this paper, we propose a method for turning its construction into a max-tree computation. The latter has been widely studied in recent years, and efficient algorithms (including parallel ones) already exist. We also propose an optimization that speeds up its computation in the classical case of 2D images. The result is a simple, efficient algorithm that runs in time linear in the number of pixels, has a low memory footprint, and outperforms state-of-the-art algorithms.
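
Once the ToS is available through a max-tree, standard tree machinery applies to it. As a hypothetical illustration (not code from the paper's distribution), here is an area filter over the (parent, order) encoding produced by the union-find sketch accompanying the English version of this paper above:

    import numpy as np

    def area_filter(f, parent, order, min_area):
        """Grain filter (area opening) on a canonical max-tree: nodes whose
        area is below min_area take the value of their closest kept ancestor.
        parent/order follow the union-find sketch above: order lists pixels
        leaves-first and every parent pointer targets a level root."""
        flat = f.ravel()
        area = np.ones(flat.size, dtype=np.int64)
        for p in order:                      # children are visited before parents
            p = int(p)
            if parent[p] != p:
                area[parent[p]] += area[p]   # accumulate subtree areas bottom-up
        out = flat.copy()
        for p in reversed(order):            # root first: ancestors decided already
            p = int(p)
            q = int(parent[p])
            # Same-level pixel, or a node that is too small: inherit the
            # filtered value of the (level-root) ancestor.
            if p != q and (flat[p] == flat[q] or area[p] < min_area):
                out[p] = out[q]
        return out.reshape(f.shape)

    # On the 3x3 example above, area_filter(img, parent, order, min_area=2)
    # erases the single-pixel blob of level 2 and keeps the two-pixel blob
    # of level 3.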

Continue reading

Parallel model checking algorithms for linear-time temporal logic

Abstract

Model checking is a fully automated, formal method for demonstrating the absence of bugs in reactive systems. Here, bugs are violations of properties expressed in Linear-time Temporal Logic (LTL). A fundamental challenge to its application is the exponential explosion of the number of system states. The current chapter discusses the use of parallelism to overcome this challenge. We revisit the textbook automata-theoretic approach, which reduces the model checking problem to the graph problem of finding accepting cycles. We discuss several parallel algorithms that attack this problem in various ways, each with different characteristics: depth-first search (DFS) based algorithms rely on heuristics for good parallelization, but exhibit low complexity and good on-the-fly behavior; breadth-first search (BFS) based approaches, on the other hand, offer good parallel scalability and support distributed parallelism. In addition, we present several simpler model checking tasks that still solve a large and important subset of the LTL model checking problem, and show how these can be exploited to yield more efficient algorithms. In particular, we provide simplified DFS-based search algorithms and show that the BFS-based algorithms exhibit optimal runtimes in certain cases.
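
To make the reduction to cycle detection concrete, here is a minimal sequential nested DFS (after Courcoubetis et al.) that reports whether an accepting cycle is reachable in a product automaton; the parallel algorithms discussed in the chapter are elaborations of searches of this kind. The successors and is_accepting callbacks are illustrative, not an API from the chapter.

    def nested_dfs(initial, successors, is_accepting):
        """Sequential nested DFS: returns True iff some accepting state lies
        on a cycle reachable from `initial`, i.e. the LTL property is
        violated. Linear time: each state is visited at most twice."""
        blue, red = set(), set()
        seed = None

        def dfs_red(s):                    # inner search: close a cycle on seed
            red.add(s)
            for t in successors(s):
                if t == seed or (t not in red and dfs_red(t)):
                    return True
            return False

        def dfs_blue(s):                   # outer search, in DFS post-order
            nonlocal seed
            blue.add(s)
            for t in successors(s):
                if t not in blue and dfs_blue(t):
                    return True
            if is_accepting(s):            # backtracking from an accepting state:
                seed = s                   # look for a cycle through it
                return dfs_red(s)
            return False

        return dfs_blue(initial)

    # Tiny product automaton: 0 -> 1 -> 2 -> 1 with state 1 accepting.
    graph = {0: [1], 1: [2], 2: [1]}
    print(nested_dfs(0, lambda s: graph[s], lambda s: s == 1))  # True: 1->2->1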

Continue reading

Strategies in typecase optimization

By Jim Newton, Didier Verna

2018-04-05

In ELS 2018, the 11th European Lisp Symposium

Abstract

We contrast two approaches to optimizing the Common Lisp typecase macro expansion. The first approach is based on heuristics intended to estimate the run-time performance of certain type checks involving Common Lisp type specifiers. Depending on the code size, this technique may exhaustively search the space of permutations of the type checks in order to find the optimal order. With the second technique, we represent a typecase form as a type specifier, encapsulating the side-effecting, non-Boolean parts so as to appear compatible with the Common Lisp type algebra operators. The encapsulated expressions are specially handled so that the Common Lisp type algebra functions preserve them; after a process of Boolean reduction, we can unwrap them into efficient Common Lisp code, maintaining the appropriate side effects while eliminating unnecessary type checks. Both approaches allow us to identify unreachable code, test the clauses for exhaustiveness, and eliminate type checks that are determined to be redundant.
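
The flavor of the second technique can be conveyed with ordinary set algebra standing in for the Common Lisp type algebra: each clause's test is reduced by what the earlier clauses already cover, unreachable clauses disappear, and when the clauses are exhaustive, the final check can be dropped. This is only a conceptual analogue in Python; the paper's machinery operates on real Common Lisp type specifiers inside the typecase expansion.

    def reduce_clauses(clauses, universe):
        """Boolean reduction of typecase-like clauses; frozensets of atomic
        "types" stand in for Common Lisp type specifiers."""
        covered = frozenset()
        reduced = []
        for types, action in clauses:
            residual = types - covered         # earlier clauses match first
            if not residual:
                continue                       # unreachable clause: drop it
            reduced.append((residual, action))
            covered |= residual
        if covered >= universe and reduced:    # exhaustive: last check is redundant
            reduced[-1] = (None, reduced[-1][1])  # None: match unconditionally
        return reduced

    universe = frozenset({"fixnum", "bignum", "float", "string"})
    clauses = [
        (frozenset({"fixnum", "bignum"}), "integer-action"),
        (frozenset({"fixnum"}), "dead-action"),           # subsumed: unreachable
        (frozenset({"float", "string"}), "other-action"),
    ]
    print(reduce_clauses(clauses, universe))
    # The dead clause is gone, and the final clause carries no type check.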

Continue reading

Parallel computation of component trees on distributed memory machines

By Markus Götz, Gabriele Cavallaro, Thierry Géraud, Matthias Book, Morris Riedel

2018-04-02

In IEEE Transactions on Parallel and Distributed Systems

Abstract

Component trees are region-based representations that encode the inclusion relationship of the threshold sets of an image. These representations are among the most promising strategies for the analysis and interpretation of spatial information in complex scenes, as they allow the simple and efficient implementation of connected filters. This work proposes a new, efficient hybrid algorithm for the parallel computation of two particular component trees—the max- and min-tree—in shared and distributed memory environments. For the node-local computation, a modified version of Salembier's flooding-based algorithm is employed. A novel tuple-based merging scheme allows the acquired partial images to be merged into a globally correct view. Using the proposed approach, a speed-up of up to 44.88 could be achieved on eight-bit grayscale images using 128 processing cores. This is more than a five-fold increase over the state-of-the-art shared-memory algorithm, while requiring only one thirty-second of the memory.
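
The premise of the first sentence is easy to see on a toy example: the upper threshold sets of an image only grow as the threshold decreases, so their connected components organize into a tree (the max-tree). A tiny single-machine illustration, unrelated to the paper's distributed algorithm:

    import numpy as np
    from scipy import ndimage as ndi

    # Upper threshold sets X_t = {x : f(x) >= t} of a small 1D image. They
    # are nested, so their connected components, ordered by inclusion, are
    # exactly the nodes of the max-tree.
    f = np.array([1, 3, 2, 3, 0, 2])
    for t in range(int(f.max()), -1, -1):
        labels, ncomp = ndi.label(f >= t)
        print(f"t={t}: {ncomp} component(s), labels={labels}")
    # t=3 yields two components (the two peaks); t=0 yields one: the whole image.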

Continue reading

Method combinators

By Didier Verna

2018-03-25

In ELS 2018, the 11th European Lisp Symposium

Abstract

In traditional object-oriented languages, the dynamic dispatch algorithm is hardwired: for every polymorphic call, only the most specific method is used. CLOS, the Common Lisp Object System, goes beyond the traditional approach by providing an abstraction known as method combinations: when several methods are applicable, it is possible to select several of them, decide in which order they will be called, and how to combine their results, essentially making the dynamic dispatch algorithm user-programmable.

Although a powerful abstraction, method combinations are under-specified in the Common Lisp standard, and the MOP, the Meta-Object Protocol underlying many implementations of CLOS, worsens the situation by either contradicting it or providing unclear protocols. As a consequence, too much freedom is granted to conforming implementations, and the exact or intended behavior of method combinations is unclear and not necessarily coherent with the rest of CLOS.

In this paper, we provide a detailed analysis of the problems posed by method combinations, the consequences of their lack of proper specification in one particular implementation, and a MOP-based extension called method combinators, aiming at correcting these problems and possibly offering new functionality.
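
To see what the abstraction buys, here is a deliberately tiny Python analogue of one method combination: instead of invoking only the most specific applicable method, the dispatcher collects every applicable method, most specific first, and folds their results with a user-chosen operator, mirroring CLOS's built-in + combination. All names are illustrative; actual method combinations live in the CLOS and MOP machinery the paper analyzes.

    class Animal: pass
    class Dog(Animal): pass

    # Kept sorted most-specific-first, as CLOS sorts applicable methods.
    methods = [(Dog, lambda o: 10), (Animal, lambda o: 1)]

    def combined_call(obj, methods, combine):
        """Call every applicable method and combine the results, instead of
        hardwiring "only the most specific one" as traditional dispatch does."""
        return combine(fn(obj) for cls, fn in methods if isinstance(obj, cls))

    print(combined_call(Dog(), methods, sum))     # 11: both methods contribute
    print(combined_call(Animal(), methods, sum))  # 1: only the Animal method applies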

Continue reading

The challenge of cerebral magnetic resonance imaging in neonates: A new method using mathematical morphology for the segmentation of structures including diffuse excessive high signal intensities

Abstract

Preterm birth is a multifactorial condition associated with increased morbidity and mortality. Diffuse excessive high signal intensity (DEHSI) has recently been described on T2-weighted MR sequences in this population and is thought to be associated with neuropathologies. To date, no robust and reproducible method to assess the presence of white matter hyperintensities has been developed, perhaps explaining the current controversy over their prognostic value. The aim of this paper is to propose a new semi-automated framework to detect DEHSI on neonatal brain MR images, which have a particular pattern due to the physiological lack of complete myelination of the white matter. A novel method for semi-automatic segmentation of neonatal brain structures and DEHSI, based on mathematical morphology and on max-tree representations of the images, is thus described. It is a mandatory first step to identify and clinically assess homogeneous cohorts of neonates for DEHSI and/or the volume of any other segmented structures. Implemented in a user-friendly interface, the method makes it straightforward to select relevant markers of the structures to be segmented and, if needed, to apply manual corrections. This method responds to the increasing need to provide medical experts with semi-automatic tools for image analysis, and overcomes the limitations of visual analysis alone, which is prone to subjectivity and variability. Experimental results demonstrate that the method is accurate, with excellent reproducibility and very few manual corrections needed. Although the method was initially intended for images acquired at 1.5T, which corresponds to usual clinical practice, preliminary results on images acquired at 3T suggest that the proposed approach can be generalized.

Continue reading

White matter hyperintensities segmentation in a few seconds using fully convolutional network and transfer learning

By Yongchao Xu, Thierry Géraud, Élodie Puybareau, Isabelle Bloch, Joseph Chazalon

2018-02-06

In Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, 3rd International Workshop, BrainLes 2017, held in conjunction with MICCAI 2017, Quebec City, QC, Canada, September 14, 2017, revised selected papers

Abstract

In this paper, we propose a fast and automatic method for segmenting white matter hyperintensities (WMH) in 3D brain MRI images, using a fully convolutional network (FCN) and transfer learning. The FCN is the Visual Geometry Group (VGG) network, pre-trained on ImageNet for natural image classification and fine-tuned on the training data of the MICCAI WMH challenge. For each slice of the volume to be segmented, we consider three images: the T1 acquisition, the FLAIR acquisition, and the result of a morphological top-hat operator applied to the FLAIR, which highlights small structures of high intensity. These three 2D images are assembled into a three-channel 2D image that is interpreted as a color image and passed to the FCN, yielding the 2D segmentation of the corresponding slice. All slices are processed in this way to form the 3D output segmentation. With this technique, segmenting the WMH of a 3D brain volume takes about 10 seconds, pre-processing included. Our technique ranked 6th out of 20 participants in the MICCAI WMH challenge.

Continue reading

Lisp, jazz, aikido

By Didier Verna

2018-02-05

In The Art, Science and Engineering of Programming Journal

Abstract

The relation between Science (what we can explain) and Art (what we can’t) has long been acknowledged, and while every science contains an artistic part, every art form also needs a bit of science. Among all scientific disciplines, programming holds a special place for two reasons. First, the artistic part is not only undeniable but also essential. Second, and much like in a purely artistic discipline, the act of programming is driven partly by the notion of aesthetics: the pleasure we have in creating beautiful things.

Even though the importance of aesthetics in the act of programming is now unquestioned, more could still be written on the subject. The field called “psychology of programming” focuses on the cognitive aspects of the activity, with the goal of improving the productivity of programmers. While many scientists have emphasized their concern for aesthetics and the impact it has on their activity, few computer scientists have actually written about their thought process while programming.

What makes us like or dislike such and such language or paradigm? Why do we shape our programs the way we do? By answering these questions from the angle of aesthetics, we may be able to shed some new light on the art of programming. Starting from the assumption that aesthetics is an inherently transversal dimension, it should be possible for every programmer to find the same aesthetic driving force in every creative activity they undertake, not just programming, and in doing so, get deeper insight on why and how they do things the way they do.

On the other hand, because our aesthetic sensitivities are so personal, all we can really do is relate our own experiences and share them with others, in the hope that it will inspire them to do the same. My personal life has been revolving around three major creative activities, of equal importance: programming in Lisp, playing Jazz music, and practicing Aikido. But why so many of them, why so different ones, and why these specifically?

By introspecting my personal aesthetic sensitivities, I eventually realized that my tastes in the scientific, artistic, and physical domains are all motivated by the same driving forces, hence unifying Lisp, Jazz, and Aikido as three expressions of a single essence, not so different after all. Lisp, Jazz, and Aikido are governed by a limited set of rules which remain simple and unobtrusive. Conforming to them is a pleasure. Because Lisp, Jazz, and Aikido are inherently introspective disciplines, they also invite you to transgress the rules in order to find your own. Breaking the rules is fun. Finally, if Lisp, Jazz, and Aikido unify so many paradigms, styles, or techniques, it is not by mere accumulation but because they live at the meta-level and let you reinvent them. Working at the meta-level is an enlightening experience.

Understand your aesthetic sensitivities and you may gain considerable insight into your own psychology of programming. Mine is perhaps common to most lispers. Perhaps also common to other programming communities, but that is for the reader to decide…

Continue reading