
WP4 TRAIL/ARIAC kick-off at UMONS

Published on 2 February 2022
Written by Hugo Bohy
The fourth work package (WP4) of the ARIAC project was kicked off on 2 February 2022 at UMONS (Houdain site). At the beginning of the meeting, Prof. Thierry Dutoit introduced WP4, which focuses on optimized AI implementations. The event was moderated by Dr. Matei Mancas and attended by 40 AI researchers.

The ARIAC project (Applications and Research for Trusted Artificial Intelligence) was designed by TRAIL. TRAIL brings together the five French-speaking universities and the four approved Walloon research centers (CRA). Its ambition is to pool artificial intelligence research in the Wallonia-Brussels Federation.

The ARIAC project is organized around five work packages:

  • Human-AI Interaction,
  • Trust mechanisms for AI,
  • Model-AI integration,
  • Optimized AI implementations,
  • TRAIL Factory

The event took place on Wednesday, February 2, 2022 at UMONS (Rue de Houdain 9, 7000 Mons, Auditorium 23) and was streamed online.

Event webpage: https://web.umons.ac.be/isia/en/event/wp4-kickoff-meeting/

 

WP4 TRAIL/ARIAC kick-off program:

*******************************

1:00 p.m.: Thierry Dutoit (UMONS)
Introduction to WP4 and VIP slides: better community!

1:15 p.m.: Sidi Mahmoudi (UMONS)
Edge AI: implementing algorithms on devices with low computing power

1:45 p.m.: Antonio Garcia-Diaz (ULB)
This presentation focuses on Antonio’s research into neural architecture search (NAS) techniques, the results obtained so far, and his current and future contributions to TRAIL.

2:00 p.m.: Sarah Klein (SIRRIS)
With an increasing installed base of sensors collecting data at high temporal resolution, huge amounts of raw data need to be transferred to a backend system. In many industrial use cases this is challenging but usually doable. As soon as (often resource-constrained) devices are in the field, however, sending high-frequency data in real time over a mobile connection becomes very challenging and expensive.

One way to facilitate data transfer from the edge to a central backend is to compress the data before sending it. Recently, several algorithms have been proposed to handle compression of time series on constrained devices. In our presentation, we will give an overview of different classes of compression methods and their application in industrial use cases.
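To make the idea concrete, here is a minimal sketch of delta encoding, one common lossless technique for slowly varying sensor series; this is an illustration of the general principle, not one of the specific algorithms surveyed in the talk.

```python
# Hypothetical sketch: delta encoding stores the first sample and then only
# the successive differences, which are typically small and cheap to transmit.

def delta_encode(samples):
    """Store the first sample, then only successive differences."""
    if not samples:
        return []
    deltas = [samples[0]]
    for prev, cur in zip(samples, samples[1:]):
        deltas.append(cur - prev)
    return deltas

def delta_decode(deltas):
    """Reconstruct the original series by cumulative summation."""
    samples = []
    total = 0
    for d in deltas:
        total += d
        samples.append(total)
    return samples

readings = [1000, 1001, 1003, 1003, 1002, 1005]
encoded = delta_encode(readings)          # [1000, 1, 2, 0, -1, 3]
assert delta_decode(encoded) == readings  # lossless round trip
```

The small deltas fit in far fewer bits than the raw values, which is exactly what matters on a metered mobile connection.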

2:15 p.m.: Nathan Hubens (UMONS)
Presentation of pruning and the FasterAI library: https://github.com/nathanhubens/fasterai
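For readers unfamiliar with pruning, the sketch below shows unstructured magnitude pruning, the basic idea behind pruning toolkits; this is an illustrative NumPy example, not the FasterAI API itself.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of smallest-magnitude weights.

    Illustrative sketch of unstructured magnitude pruning; not the
    FasterAI API, just the general technique it builds on.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude serves as the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

w = np.array([[0.5, -0.1], [0.02, -0.8]])
pruned = magnitude_prune(w, 0.5)  # removes the two smallest-magnitude weights
```

After pruning, half the connections are exactly zero, which sparse formats and specialized kernels can exploit.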

2:35 p.m.: Antoine Vanderschueren (UCLouvain)
We present a new, simple, and effective method for training sparse neural networks. Our method is based on decoupling the forward and backward passes: the weights used in the forward pass are a thresholded version of the weights maintained in the backward pass. This decoupling allows the micro-updates produced by gradient descent to accumulate, which can reactivate weights that were previously set to zero during training. At the end of training, the connections with zero weight are pruned.
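A hypothetical toy version of the decoupled forward/backward idea (not the authors' implementation): dense weights receive all gradient updates, while the forward pass sees only a thresholded copy, so a zeroed connection can drift back above the threshold and be reactivated.

```python
import numpy as np

def forward_weights(dense_w, threshold):
    """Forward pass uses a hard-thresholded (sparse) view of the dense weights."""
    return np.where(np.abs(dense_w) >= threshold, dense_w, 0.0)

rng = np.random.default_rng(0)
dense_w = rng.normal(scale=0.1, size=(4, 4))  # dense weights kept for the backward pass
threshold = 0.08

for step in range(100):
    sparse_w = forward_weights(dense_w, threshold)
    # ... compute the loss with sparse_w; a dummy gradient stands in here for
    # the real backward pass (updates are applied to the dense weights).
    grad = rng.normal(scale=0.01, size=dense_w.shape)
    dense_w -= 0.1 * grad  # micro-updates accumulate, possibly re-crossing the threshold

# After training, connections still below the threshold are pruned for good.
final_w = forward_weights(dense_w, threshold)
```

The key point is that thresholding is applied per step to a view, not destructively to the stored weights.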

3:00 p.m.: break

3:20 p.m.: Lucile Dierckx (UCLouvain)
This short presentation covers the main ideas of a paper that has just been accepted at a conference, on combining the detection and classification of multiple bats in audio recordings.
The challenge is that the datasets available for this task support either detection or single-label classification, but not their combination. Indeed, the labels of the available datasets indicate either only the positions of the calls in the recordings, or only the species heard in a file. In addition, datasets labeled for multi-label classification of bat calls are rare.
We show how we manage to perform both detection and multi-label classification by creating artificial labels based on those commonly available. We also show how we evaluate the model’s performance not only on the generated dataset, but also on the available hard labels.
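One simple way to build artificial multi-label examples from single-label recordings is to mix clips of different species and take the union of their labels; the sketch below illustrates that general idea only, and is not the paper's actual pipeline.

```python
import numpy as np

def mix_clips(clip_a, label_a, clip_b, label_b, n_species):
    """Overlay two single-species clips and build a multi-hot target.

    Hypothetical helper illustrating artificial multi-label generation;
    `label_a`/`label_b` are species indices.
    """
    mixed = clip_a + clip_b          # overlay the two audio signals
    target = np.zeros(n_species)
    target[label_a] = 1.0            # both species are now present
    target[label_b] = 1.0
    return mixed, target

rng = np.random.default_rng(1)
clip_a, clip_b = rng.normal(size=16000), rng.normal(size=16000)
mixed, target = mix_clips(clip_a, 0, clip_b, 3, n_species=5)
# target == [1, 0, 0, 1, 0]
```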

3:30 p.m.: Noé Tits (Flowchase)
Use of self-supervised learning techniques to obtain a meaningful representation of phonemes in a latent space. This would provide metrics and distances to evaluate pronunciation and detect errors.

4:00 p.m.: Maxime Zanella (UCLouvain/UMONS)
Active learning, with a focus on its usefulness in continual (“lifelong”) learning.
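As background for this talk: the most common active-learning query strategy is uncertainty sampling, sketched below as a generic illustration (an assumption about standard practice, not the speaker's method). The model requests labels for the pool examples it is least confident about.

```python
import numpy as np

def least_confident(probs, n_queries):
    """Return indices of the `n_queries` samples with the lowest max class probability."""
    confidence = probs.max(axis=1)
    return np.argsort(confidence)[:n_queries]

# Predicted class probabilities for 4 unlabeled samples (2 classes).
probs = np.array([
    [0.95, 0.05],   # confident
    [0.55, 0.45],   # uncertain -> good candidate to label
    [0.80, 0.20],
    [0.51, 0.49],   # most uncertain
])
query_idx = least_confident(probs, 2)  # -> indices 3 and 1
```

Labeling only the queried samples concentrates annotation effort where the model learns the most, which is also why active learning pairs naturally with continual learning on drifting data.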