Research team

Internet Data Lab (IDLab)

Expertise

Research interests:
- Visual Representation Learning
- Model Explanation and Interpretation
- Collective Representations and Relational Learning
- Disentangled Representation Learning

IDLab - Internet and Data Lab 01/01/2021 - 31/12/2026

Abstract

The IOF consortium IDLab is composed of academic supervisors at the IDLab Research Group, a UAntwerp research group with members from the Faculty of Science and the Faculty of Applied Engineering. IDLab develops innovative digital solutions along two main research lines: (1) Internet technologies, focusing on wireless networking and the Internet of Things (IoT), and (2) Data science, focusing on distributed intelligence and Artificial Intelligence (AI). The mission of the IDLab consortium is to be the number one research and innovation partner in Flanders, and a leading partner worldwide, in the above research areas, especially as applied to a city and its metropolitan surroundings (industry, ports and roads).

To realize this mission, IDLab pursues integrated solutions from both an application and a technology perspective. From an application point of view, we explicitly provide solutions for all stakeholders in metropolitan areas, aiming to cross-fertilize these applications. From a technological point of view, our research spans hardware prototyping, connectivity and AI, enabling us to provide a complete integrated solution to our industrial partners, from sensor to software. Over the past years, IDLab has been connecting the city and its surroundings with sensors and actuators. It is now time to (1) reliably and efficiently connect the resulting data in an integrated way and (2) turn them into meaningful insights and intelligent actions. This matches perfectly with the two main research lines that we want to valorise extensively in the upcoming years.

The IDLab consortium has a unique position in the Flemish ecosystem to realize this mission, as it is strategically placed across different research and innovation stakeholders: (1) IDLab is a research group embedded in the Strategic Research Centre imec, a leading research institute in the domain of nano-electronics and, more recently through groups such as IDLab, in the domain of digital technology. (2) IDLab has a strategic link with IDLab Ghent, a research group at Ghent University. While each group has its own research activities, we define a common strategy, and within the Flemish ecosystem we are perceived as the leading partner in the research we are performing. (3) IDLab is the co-founder of The Beacon, an Antwerp-based innovation ecosystem where start-ups, scale-ups and other players working on IoT and AI solutions for the city, logistics, mobility and Industry 4.0 come together. (4) Within the valorisation activities at UAntwerp, IDLab contributes to the domain 'Metropolitanism, Smart City and Mobility'.

To realize our valorisation targets, IDLab will define four valorisation programs: VP1: Emerging technologies for next-generation IoT; VP2: Human-like artificial intelligence; VP3: Learning at the edge; VP4: Deterministic communication networks. Each of these valorisation programs is led by one of the (co-)promoters of the IDLab consortium, and every program is composed of two or three innovation lines. This way, the IDLab research will be translated into a clear program offer towards our (industrial) partners, allowing us to build a tailored offer. Each valorisation program will contribute to the different IOF objectives, but in a differentiated manner: based on our current experience, some valorisation programs focus more on local partners, while others mainly target international and EU-funded research projects.


Learning-based representations for the automation of hyperspectral microscopic imaging and predictive maintenance. 01/09/2020 - 31/08/2024

Abstract

In this project we will focus on designing a model for representation learning that enables the detection of pollution in microscopic samples at the earliest possible time from hyperspectral images (HSI). Current methods for this task operate on RGB images derived from HSIs. Taking this into account, we will focus our efforts on designing a method capable of analyzing the full raw data cube that composes each HSI sample and identifying potential signals that enable the accurate detection of pollution in the sample. In addition, as industrial customers become increasingly aware of the growing maintenance costs and downtime caused by unexpected machinery failures, predictive maintenance solutions are gaining interest among biopharma companies that want to maintain a competitive advantage. To address this issue, we will investigate methods to analyze data traces coming from different sources, e.g. computer logs, operator reports, the quality of the collected samples, etc., in order to identify temporal patterns that can serve as strong indicators of an anomaly that is likely to occur in the near future on the monitored systems. Finally, for both of the tasks mentioned above, model explanation algorithms will be investigated and designed so that the predictions made by the respective models can be justified. Moreover, these explanation algorithms will serve to debug the trained models and to assess their validity and robustness against artifacts, e.g. biases, data leakage, etc., introduced during the training stage.
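
As a rough illustration of the contrast between working on an RGB projection and working on the full data cube, the sketch below scores every pixel of a hyperspectral cube against a reference "clean" spectrum using the spectral angle. The file names, array shapes and threshold are hypothetical placeholders, and the technique shown is a generic baseline rather than the method proposed in this project.

```python
# Minimal sketch: per-pixel spectral angle against a clean reference spectrum.
# All names, shapes and thresholds are illustrative assumptions, not project values.
import numpy as np

def spectral_angle_map(cube: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Spectral angle (radians) between each pixel spectrum of a (H, W, bands)
    cube and a single (bands,) reference spectrum."""
    flat = cube.reshape(-1, cube.shape[-1]).astype(np.float64)       # (H*W, bands)
    dots = flat @ reference                                          # (H*W,)
    norms = np.linalg.norm(flat, axis=1) * np.linalg.norm(reference)
    cos = np.clip(dots / np.maximum(norms, 1e-12), -1.0, 1.0)
    return np.arccos(cos).reshape(cube.shape[:2])                    # (H, W)

# Hypothetical usage: flag pixels whose spectrum deviates strongly from the
# signature of an uncontaminated sample.
cube = np.load("sample_hsi_cube.npy")            # assumed shape (H, W, bands)
clean_reference = np.load("clean_spectrum.npy")  # assumed shape (bands,)
pollution_mask = spectral_angle_map(cube, clean_reference) > 0.15   # placeholder threshold
```

A learned representation would replace the fixed reference spectrum, but the point stands: the full spectral dimension is available per pixel, which an RGB projection discards.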


Multimodal Relational Interpretation for Deep Models. 01/05/2020 - 30/04/2024

Abstract

Interpretation and explanation of deep models are critical for the wide adoption of systems that rely on them. Model interpretation consists in gaining insight into the information learned by a model from a set of examples. Model explanation focuses on justifying the predictions made by a model on a given input. While there is a continuously growing amount of work addressing the task of model explanation, its interpretation counterpart has received significantly less attention. In this project we aim to take a solid step forward in the interpretation and understanding of deep neural networks. More specifically, we will focus our efforts on four complementary directions: first, reducing the computational costs of model interpretation algorithms and improving the clarity of the visualizations they produce; second, developing interpretation algorithms capable of discovering complex structures encoded in the models being interpreted; third, developing algorithms that produce multimodal interpretations based on different types of data, such as images and text; and fourth, proposing an evaluation protocol to objectively assess the performance of model interpretation algorithms. As a result, we aim to propose a set of principles and foundations that can be followed to improve the understanding of any existing or future deep, complex model.
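
To make the explanation/interpretation distinction concrete, the sketch below shows one widely used form of model explanation (vanilla gradient saliency), which justifies a single prediction by attributing it to input pixels. It is a generic illustration under assumed shapes and models, not the interpretation approach developed in this project.

```python
# Minimal sketch of a common model-explanation technique (vanilla gradient
# saliency). It explains one prediction on one input; interpretation, by
# contrast, asks what the model has learned across many examples.
import torch
import torchvision.models as models

def saliency_map(model: torch.nn.Module, image: torch.Tensor) -> torch.Tensor:
    """Per-pixel saliency for the model's top prediction on an image of shape (1, 3, H, W)."""
    model.eval()
    image = image.detach().clone().requires_grad_(True)
    logits = model(image)                                  # (1, num_classes)
    top_class = logits.argmax(dim=1).item()
    logits[0, top_class].backward()                        # gradient of the winning class score
    return image.grad.abs().max(dim=1).values.squeeze(0)   # (H, W) pixel importance

# Hypothetical usage: an untrained torchvision classifier and a random input,
# kept deliberately simple so the example is self-contained.
model = models.resnet18(weights=None)
dummy_input = torch.rand(1, 3, 224, 224)
explanation = saliency_map(model, dummy_input)
```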
