Didier Verna

Estimation of the noise level function for color images using mathematical morphology and non-parametric statistics

By Baptiste Esteban, Guillaume Tochon, Edwin Carlinet, Didier Verna

2022-04-08

In Proceedings of the 26th International Conference on Pattern Recognition

Abstract

Noise level information is crucial for many image processing tasks, such as image denoising. To estimate it, it is necessary to find homogeneous areas within the image that contain only noise. Rank-based methods have proven efficient for this task. In the past, we proposed a method to estimate the noise level function (NLF) of grayscale images using the tree of shapes (ToS). This method, which relies on the connected components extracted from the ToS computed on the noisy image, has the advantage of adapting to the image content, unlike square-block approaches, but it remains restricted to grayscale images. In this paper, we extend our ToS-based method to color images. Unlike grayscale images, the pixel values in multivariate images have no natural order relationship, which is a well-known issue when working with mathematical morphology and rank statistics. We propose to use the multivariate ToS to retrieve homogeneous regions. We derive an order relationship for the multivariate pixel values thanks to a complete lattice learning strategy and use it to compute the rank statistics. The obtained multivariate NLF is composed of one NLF per channel. The performance of the proposed method is compared with that obtained using square blocks, and validates the soundness of the multivariate ToS structure for this task.
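
For context, a minimal sketch of the kind of noise model underlying an NLF, written here under the assumption of signal-dependent additive noise treated channel by channel (the paper's exact formulation may differ in its details):

  % Minimal signal-dependent noise model, one NLF per channel c.
  % Sketch under assumptions; the notation is illustrative only.
  \[
    v_c(x) = u_c(x) + n_c(x), \qquad
    n_c(x) \sim \mathcal{N}\!\left(0, \sigma_c^2\bigl(u_c(x)\bigr)\right),
  \]
  \[
    \mathrm{NLF}_c : u \longmapsto \sigma_c(u),
  \]
  % where u_c is the noise-free channel, v_c the observed one, and each
  % sigma_c is estimated from rank statistics computed over the homogeneous
  % regions extracted from the multivariate tree of shapes.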

ETAP: Experimental typesetting algorithms platform

By Didier Verna

2022-03-01

In ELS 2022, the 15th European Lisp Symposium

Abstract

We present the early development stages of ETAP, a platform for experimenting with typesetting algorithms. The purpose of this platform is twofold: while its primary objective is to provide building blocks for quickly and easily designing and testing new algorithms (or variations on existing ones), it can also be used as an interactive, real-time demonstrator for many features of digital typography, such as kerning, hyphenation, or ligaturing.
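
To give a concrete idea of the kind of algorithm such a platform hosts, here is a minimal Common Lisp sketch of a greedy (first-fit) line-breaking routine. This is an illustration only, not ETAP's actual API, and real algorithms must also account for kerning, hyphenation, and ligatures.

  ;; Greedy (first-fit) line breaking over a list of word widths.
  ;; Illustrative sketch only; not ETAP's interface.
  (defun greedy-line-breaks (word-widths line-width &key (space-width 1))
    "Break the list WORD-WIDTHS into lines whose total width (words plus
inter-word spaces) does not exceed LINE-WIDTH, using a first-fit strategy.
Return a list of lines, each line being a list of word widths."
    (let ((lines '()) (current '()) (used 0))
      (dolist (w word-widths)
        (cond ((null current)                          ; first word on the line
               (setf current (list w) used w))
              ((<= (+ used space-width w) line-width)  ; word still fits
               (push w current)
               (incf used (+ space-width w)))
              (t                                       ; start a new line
               (push (nreverse current) lines)
               (setf current (list w) used w))))
      (when current (push (nreverse current) lines))
      (nreverse lines)))

  ;; Example: (greedy-line-breaks '(3 4 2 5) 8) => ((3 4) (2 5))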

A corpus processing and analysis pipeline for Quickref

By Antoine Hacquard, Didier Verna

2021-05-01

In ELS 2021, the 14th European Lisp Symposium

Abstract

Quicklisp is a library manager working with your existing Common Lisp implementation to download and install around 2000 libraries from a central archive. Quickref, an application itself written in Common Lisp, automatically generates, by introspection, technical documentation for every library in Quicklisp, and produces a website for this documentation. In this paper, we present a corpus processing and analysis pipeline for Quickref. This pipeline consists of a set of natural language processing blocks allowing us to analyze Quicklisp libraries, based on natural language sources such as README files, docstrings, or symbol names. The ultimate purpose of this pipeline is the generation of a keyword index for Quickref, although other applications such as word clouds or topic analysis are also envisioned.
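
As an illustration of the kind of natural language sources such a pipeline ingests, here is a small Common Lisp sketch, not the actual Quickref code, that collects the docstrings attached to a package's exported symbols:

  ;; Collect the docstrings of a package's exported symbols, as raw text
  ;; for a downstream NLP stage.  Illustrative sketch only.
  (defun exported-docstrings (package-designator)
    "Return the docstrings of the exported symbols of PACKAGE-DESIGNATOR."
    (let ((docs '()))
      (do-external-symbols (symbol (find-package package-designator)
                                   (nreverse docs))
        (dolist (kind '(function variable type))
          (let ((doc (documentation symbol kind)))
            (when doc (push doc docs)))))))

  ;; Example: (exported-docstrings :alexandria) returns a list of strings
  ;; suitable for tokenization, word clouds, or topic analysis.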

Quickref: Common Lisp reference documentation as a stress test for Texinfo

By Didier Verna

2019-11-06

In TUGboat

Abstract

Quickref is a global documentation project for the Common Lisp ecosystem. It creates reference manuals automatically by introspecting libraries and generating corresponding documentation in Texinfo format. The Texinfo files may subsequently be converted into PDF or HTML. Quickref is non-intrusive: software developers do not have anything to do to get their libraries documented by the system. Quickref may be used to create a local website documenting your current, partial, working environment, but it is also able to document the whole Common Lisp ecosystem at once. The result is a website containing almost two thousand reference manuals. Quickref provides a Docker image for an easy recreation of this website, but a public version is also available and actively maintained. Quickref constitutes an enormous and successful stress test for Texinfo. In this paper, we give an overview of the design and architecture of the system, describe the challenges and difficulties in generating valid Texinfo code automatically, and put some emphasis on the currently remaining problems and deficiencies.
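
One recurring difficulty when emitting Texinfo automatically is that Lisp identifiers may contain Texinfo's special characters @, { and }, which must be escaped. The following Common Lisp routine is a minimal sketch of that single issue, not Quickref's actual escaping code, which presumably handles many more cases (node names, menu entries, anchors, and so on):

  ;; Escape Texinfo's special characters: @, { and } must be written
  ;; as @@, @{ and @} respectively.  Minimal sketch only.
  (defun texinfo-escape (string)
    "Return a copy of STRING with Texinfo special characters escaped."
    (with-output-to-string (out)
      (loop :for char :across string
            :do (when (member char '(#\@ #\{ #\}))
                  (write-char #\@ out))
                (write-char char out))))

  ;; Example: (texinfo-escape "update-instance@foo") => "update-instance@@foo"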

Implementing Baker’s SUBTYPEP decision procedure

By Léo Valais, Jim Newton, Didier Verna

2019-04-01

In ELS 2019, the 12th European Lisp Symposium

Abstract

We present here our partial implementation of Baker’s decision procedure for SUBTYPEP. In his article “A Decision Procedure for Common Lisp’s SUBTYPEP Predicate”, he claims to provide implementation guidelines for obtaining a SUBTYPEP that is more accurate than, and as efficient as, the average implementation. However, he did not provide any serious implementation, and his description is sometimes obscure. In this paper, we present our implementation of part of his procedure, currently supporting only primitive types, CLOS classes, member, range, and logical type specifiers. We explain, in our own words, our understanding of his procedure, with much more detail and more examples than in Baker’s article. We thereby clarify many parts of his description and fill in some of its gaps or omissions. We also argue for and against some of his choices and present our alternative solutions. We further provide some proofs that might be missing in his article and some early efficiency results. We have not released any code yet, but we plan to open-source it as soon as it is presentable.
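
For context, SUBTYPEP returns two values: whether its first argument names a subtype of the second, and whether that answer is certain. The third form below shows the uncertainty case that motivates a more accurate implementation (its exact behavior may vary across implementations):

  ;; SUBTYPEP returns two values: subtype-p and certain-p.
  (subtypep '(integer 0 10) 'number)     ; => T,   T    certainly a subtype
  (subtypep 'string 'integer)            ; => NIL, T    certainly not a subtype
  (subtypep '(satisfies oddp) 'integer)  ; => NIL, NIL  in many implementations:
                                         ;    SATISFIES types cannot be reasoned
                                         ;    about, so the answer is "unknown".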

Parallelizing quickref

By Didier Verna

2019-04-01

In ELS 2019, the 12th European Lisp Symposium

Abstract

Quickref is a global documentation project for Common Lisp software. It builds a website containing reference manuals for Quicklisp libraries. Each library is first compiled, loaded, and introspected. From the collected information, a Texinfo file is generated, which is then processed into an HTML one. Because of the large number of libraries in Quicklisp, doing this sequentially may require several hours of processing. We report on our experiments in parallelizing Quickref. Experimental data on the morphology of Quicklisp libraries has been collected. Based on this data, we are able to propose a number of parallelization schemes that reduce the total processing time by a factor of 3.8 to 4.5, depending on the exact situation.
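
Although this is not Quickref's actual code, the general shape of a library-level parallelization can be sketched with the lparallel library. BUILD-REFERENCE-MANUAL is a hypothetical per-library processing function, and in practice loading arbitrary libraries may call for process-level isolation rather than threads:

  ;; Sketch of a library-level parallelization using lparallel.
  ;; BUILD-REFERENCE-MANUAL is hypothetical: it stands for the per-library
  ;; compile/load/introspect/Texinfo/HTML processing described above.
  (ql:quickload :lparallel)

  (defun build-all-manuals (libraries &key (workers 4))
    "Process LIBRARIES in parallel with WORKERS worker threads."
    (let ((lparallel:*kernel* (lparallel:make-kernel workers)))
      (unwind-protect
           (lparallel:pmapcar #'build-reference-manual libraries)
        (lparallel:end-kernel))))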

Finite automata theory based optimization of conditional variable binding

By Jim Newton, Didier Verna

2019-01-14

In ELS 2019, the 12th European Lisp Symposium

Abstract

We present an efficient and highly optimized implementation of destructuring-case in Common Lisp. This macro selects, among several given destructuring lambda lists, the most appropriate one based on the structure and types of the data at run time, and then dispatches to the corresponding code branch. We examine an optimization technique based on finite automata theory, applied to conditional variable binding and execution, and to type-based pattern matching on Common Lisp sequences. A risk of inefficiency in a naive implementation of destructuring-case is that the candidate expression may be traversed multiple times: once for each clause whose format fails to match, and finally once for the successful match. We have implemented destructuring-case in such a way as to avoid multiple traversals of the candidate expression. This article explains how this optimization has been implemented.
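
To make the inefficiency concrete, here is a deliberately naive destructuring-case written in standard Common Lisp: it simply tries each clause with DESTRUCTURING-BIND and falls through on failure, re-traversing the candidate expression for every non-matching clause. This is a sketch of the problem, not the paper's implementation:

  ;; Naive version: each failing clause costs a full DESTRUCTURING-BIND
  ;; attempt on the expression.  (Caveat of this sketch: a clause whose
  ;; body legitimately returns NIL would incorrectly fall through.)
  (defmacro naive-destructuring-case (expression &body clauses)
    (let ((value (gensym "VALUE")))
      `(let ((,value ,expression))
         (or ,@(loop :for (lambda-list . body) :in clauses
                     :collect `(ignore-errors
                                 (destructuring-bind ,lambda-list ,value
                                   ,@body)))
             (error "No clause matched ~S" ,value)))))

  ;; Example:
  ;; (naive-destructuring-case '(point 1 2)
  ;;   ((tag x y z) (list :3d tag x y z))
  ;;   ((tag x y)   (list :2d tag x y)))
  ;; => (:2D POINT 1 2), after the first clause fails and the list is
  ;;    traversed a second time.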

Recognizing heterogeneous sequences by rational type expression

By Jim Newton, Didier Verna

2018-09-14

In Proceedings of the Meta’18 Workshop on Meta-Programming Techniques and Reflection

Abstract

We summarize a technique for writing functions that recognize types of heterogeneous sequences in Common Lisp. The technique employs sequence recognition functions generated at compile time and evaluated at run time. The technique we demonstrate extends the Common Lisp type system, exploiting the theory of rational languages, Binary Decision Diagrams, and the Turing-complete macro facility of Common Lisp. The resulting system uses meta-programming to move an exponential-complexity operation from run time to compile time, leaving a highly optimized linear-complexity operation for run time.
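
As a hand-written analogue of what such a generated recognizer does at run time, consider the pattern "one integer followed by any number of strings". The sketch below only mimics the single linear traversal performed at run time and is not the paper's actual API:

  ;; Hand-written recognizer for "an INTEGER followed by zero or more
  ;; STRINGs".  The paper's system derives such recognizers automatically,
  ;; at compile time, from rational type expressions; this sketch only
  ;; illustrates the linear-time run-time check.
  (defun integer-then-strings-p (sequence)
    "Return true if SEQUENCE is a list of one integer followed by strings."
    (and (consp sequence)
         (typep (first sequence) 'integer)
         (every (lambda (element) (typep element 'string))
                (rest sequence))))

  ;; Examples:
  ;; (integer-then-strings-p '(42 "a" "b"))  => true
  ;; (integer-then-strings-p '(42 "a" 5))    => false
  ;; (integer-then-strings-p '("a" "b"))     => false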

A theoretical and numerical analysis of the worst-case size of reduced ordered binary decision diagrams

By Jim Newton, Didier Verna

2018-08-28

In ACM Transactions on Computational Logic

Abstract

Binary Decision Diagrams (BDDs), and in particular ROBDDs (Reduced Ordered BDDs), are a common data structure for manipulating Boolean expressions, used in integrated circuit design, type inferencers, model checkers, and many other applications. Although the ROBDD is a lightweight data structure to implement, its behavior in terms of memory allocation may not be obvious to the program architect. We explore experimentally, numerically, and theoretically the typical and worst-case ROBDD sizes in terms of number of nodes and residual compression ratios, as compared to unreduced BDDs. While our theoretical results are not surprising, as they are in keeping with previously known results, we believe our method contributes to the current body of research through our experimental and statistical treatment of ROBDD sizes. In addition, we provide an algorithm to calculate the worst-case size. Finally, we present an algorithm for constructing a worst-case ROBDD of a given number of variables. Our approach may be useful to projects deciding whether the ROBDD is the appropriate data structure to use, and in building worst-case examples to test their code.
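
For reference, the level-by-level counting argument behind such worst-case constructions can be sketched as follows (a sketch of the classical bound; the paper's exact counting conventions, for instance regarding terminal nodes, may differ):

  % Classical level-by-level bound on the worst-case ROBDD size over n
  % variables (sketch; counting conventions may differ from the paper's).
  \[
    |\mathrm{ROBDD}_{\max}(n)| \;=\; 2 \;+\; \sum_{i=1}^{n}
      \min\!\left( 2^{\,i-1},\; 2^{2^{\,n-i+1}} - 2^{2^{\,n-i}} \right)
  \]
  % The first term of the min counts the nodes reachable from the root at
  % level i; the second counts the Boolean functions of the remaining
  % n-i+1 variables that actually depend on the variable at level i; the
  % leading 2 accounts for the two terminal nodes.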
