Applied Engineering


Attend a PhD defence or find the archive of concluded doctoral research

'Resource Allocation for Intelligent Reflecting Surface Aided Wireless Networks' (15/07/2024)

Jalal Faghih Mohammadi Jalali

  • Monday 15 July 2024
  • 1 p.m.
  • Stadscampus - office s.C.101 & 002
  • Promotors: prof. dr. Jeroen Famaey & prof. dr. Rafael Berkvens
  • Faculty of Applied Engineering


In today’s world, achieving reliable wireless communication is crucial but challenging. This work introduces Intelligent Reflecting Surfaces (IRSs), a groundbreaking technology poised to transform connectivity. IRSs act as smart, invisible "mirrors" that precisely bend and direct wireless signals, ensuring strong connections by overcoming obstacles.

An IRS consists of a sophisticated planar array with numerous passive or active elements that individually manipulate electromagnetic waves, reshaping the wireless signal propagation environment. By adjusting the phase and amplitude of these elements, an IRS can steer signals toward intended receivers, creating optimized communication paths even when direct Line of Sight (LoS) is obstructed. This capability enhances connectivity in various environments, from urban areas to indoor spaces, while reducing energy consumption due to its passive operation.

This work presents IRS as a key enabler for advanced technologies, enhancing their performance and efficiency. IRS improves power efficiency in multi-user Simultaneous Wireless Information and Power Transfer (SWIPT) networks, supporting both energy harvesting and data transmission. Integration of IRS into Ultra-Reliable Low-Latency Communication (URLLC) and Machine Type Communication (MTC) systems significantly reduces latency and increases reliability. Additionally, IRS benefits Virtual Reality (VR) users by mitigating path loss or blockages, ensuring immersive experiences without latency or quality loss.

IRS also optimizes Mobile Edge Computing (MEC) by improving signal delivery for efficient edge data processing. This is critical for applications requiring instantaneous feedback and high data integrity, such as autonomous vehicles and industrial automation. The work explores IRS deployment across a broad frequency spectrum, from Frequency Range 1 (FR1) to Frequency Range 2 (FR2), including millimeter-Wave (mmWave) and TeraHertz (THz) frequencies, highlighting its profound impact on future telecommunications.

To evaluate IRS-assisted networks, this work defines Key Performance Indicators (KPIs) such as data rate, power efficiency, energy efficiency, Signal-to-Interference-plus-Noise Ratio (SINR), transmit signal power budget, and received power strength. These KPIs help assess and optimize network performance through efficient resource allocation policies. Addressing the non-linear, non-convex, and Mixed Integer Nonlinear Programming (MINLP) problems associated with resource allocation, the work employs strategies to simplify and solve these complex problems. Techniques like convex relaxation and approximation are used to make these problems more manageable.
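
A toy illustration of how the KPIs listed above relate to one another: the sketch below computes the SINR for a simple additive-interference link budget, the corresponding Shannon achievable rate, and the resulting energy efficiency. All powers and the bandwidth are illustrative assumptions, not parameters from the thesis.

```python
import math

def sinr(p_signal_w, p_interference_w, p_noise_w):
    """Signal-to-interference-plus-noise ratio (linear scale)."""
    return p_signal_w / (p_interference_w + p_noise_w)

def achievable_rate_bps(bandwidth_hz, sinr_linear):
    """Shannon bound on the data rate: B * log2(1 + SINR)."""
    return bandwidth_hz * math.log2(1.0 + sinr_linear)

def energy_efficiency(rate_bps, p_total_w):
    """Delivered bits per joule of consumed power."""
    return rate_bps / p_total_w

# 1 mW received signal, 0.1 mW interference, 0.01 mW noise, 10 MHz band.
gamma = sinr(1e-3, 1e-4, 1e-5)
rate = achievable_rate_bps(10e6, gamma)
eff = energy_efficiency(rate, 0.5)   # 0.5 W total power budget (assumed)
```

An IRS improves these KPIs by raising the received signal power — the numerator of the SINR — without spending extra transmit power, which is why its passive operation is attractive from an energy-efficiency standpoint.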

Algorithms developed in this work solve the optimization problems either globally or sub-optimally, using optimization solvers and simulations. Advanced mathematical tools, such as the big-M method and Successive Convex Approximation (SCA), are employed to linearize and approximate non-convex terms. Iterative solutions refine resource allocation designs, ensuring optimal performance despite initial complexity.
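
To make the SCA idea concrete, the sketch below applies successive convex approximation to a toy difference-of-convex problem: minimize f(x) = x⁴ − 2x², written as g(x) − h(x) with g(x) = x⁴ and h(x) = 2x² both convex. Each iteration replaces h by its linear lower bound at the current point, and the resulting convex surrogate x⁴ − 4·x_k·x is minimized in closed form (4x³ = 4x_k, i.e. x = x_k^(1/3)). The objective is an illustrative stand-in, not a resource-allocation problem from the work.

```python
def sca_minimize(x0, iters=60):
    """Iteratively minimize x^4 - 2x^2 via linearization of the concave part."""
    x = x0
    for _ in range(iters):
        # closed-form minimizer of the convex surrogate x^4 - 4*x_k*x
        x = (1.0 if x >= 0 else -1.0) * abs(x) ** (1.0 / 3.0)
    return x

x_star = sca_minimize(2.0)  # iterates shrink toward the minimizer x = 1
```

Each surrogate touches the true objective at the current iterate and bounds it from above, so the objective value decreases monotonically — the same mechanism that makes SCA attractive for non-convex rate and power expressions.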

Simulations demonstrate the performance improvements achievable through IRS-assisted networks, validating theoretical models and confirming practical feasibility. By exploring various IRS configurations and their implementation in different environments, the study showcases IRS's adaptability and versatility. This work establishes IRS as a pivotal technology, enhancing SWIPT networks, URLLC, MTC, MEC, VR, mmWave, and THz applications, paving the way for robust, efficient, and engaging communication ecosystems.
Lay summary (societal relevance): Intelligent Reflecting Surfaces (IRSs) revolutionize connectivity by enhancing wireless communication reliability, reducing energy consumption, and enabling advanced technologies like VR, autonomous vehicles, and smart cities. This innovation promises more efficient, eco-friendly, and robust communication systems, significantly improving daily life and technological progress for society.

'Sustainability assessment of roads containing reclaimed asphalt pavement (RAP) – Decision support based on LCA & LCCA during road design' (4/07/2024)

Ben Moins


In 2020, the European Commission launched its Green Deal, aiming for Europe to become the first climate-neutral continent by 2050. A big part of this plan involves the construction industry, which is responsible for about 40% of global greenhouse gas (GHG) emissions. Specifically, roads contribute over 15% of worldwide emissions and are one of the main users of resources in Europe. Since 90% of European roads are paved with asphalt, the asphalt industry plays a crucial role in reducing these emissions.

The main idea of this research is that for the road industry to become more sustainable, it needs to use fewer new materials and carefully study the impacts of its processes from the start. This dissertation focuses on using life cycle assessment (LCA) and life cycle cost analysis (LCCA) to examine pavement designs that include recycled asphalt. It tackles two main challenges: optimizing the recycling of asphalt and evaluating the benefits of using more recycled asphalt in new roads. The goal is to create a comprehensive approach that combines LCA, LCCA, and the use of recycled asphalt to make the road industry more sustainable.
The study of existing research highlighted the growing importance of LCA and LCCA in making pavement construction more sustainable. However, it found inconsistencies and gaps in these methods, such as the lack of standard rules and varying system boundaries for assessments. It also emphasized the need to consider the full life cycle of materials, including their end-of-life phase. Moreover, the study explored how to combine LCA and LCCA into a single measure by assigning a monetary value to environmental impacts.
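
An illustrative sketch of combining LCA and LCCA into one measure: monetize life-cycle GHG emissions with a carbon price and discount them alongside construction and maintenance costs into a single net present value. All prices, quantities and the discount rate below are hypothetical, not the dissertation's data.

```python
def npv(cash_flows, rate=0.03):
    """Net present value of (year, amount) cash flows at a fixed discount rate."""
    return sum(amount / (1.0 + rate) ** year for year, amount in cash_flows)

carbon_price = 100.0  # EUR per tonne CO2-eq (assumed)

# Hypothetical 30-year pavement: initial construction plus two overlays.
costs = [(0, 500_000.0), (15, 120_000.0), (30, 120_000.0)]   # EUR
emissions = [(0, 400.0), (15, 90.0), (30, 90.0)]             # t CO2-eq

# Monetize the environmental burden and fold it into one figure.
monetized = [(year, tonnes * carbon_price) for year, tonnes in emissions]
total_npv = npv(costs) + npv(monetized)
```

A design with more reclaimed asphalt would typically lower both lists, and the single NPV makes the trade-off between cost and environmental burden directly comparable across designs.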

The research found that environmental impacts shift throughout the different phases of a pavement's life, highlighting the importance of considering the entire life cycle, especially when using waste materials. Key environmental impacts include global warming, fine particulate matter, fossil resource scarcity, and human health risks. The study suggested that focusing on these factors could simplify LCA for green public procurement, although care should be taken given how impacts shift between life cycle phases.

The dissertation underscores the need to include durability in the sustainability of asphalt pavements. It explored three methods to account for material performance: adjusting layer thickness while keeping the same service life, finding break-even points with performance evaluation, and using detailed service life predictions. Results showed the importance of holistic modelling that considers both material quality and sustainability. Comparing standard service life estimates with predicted service life revealed that conservative estimates might undervalue the impact of performance on sustainability.

The research stressed the urgency for the asphalt industry to take immediate action to achieve net-zero emissions by 2050. An industry-wide study showed that GHG emissions could be reduced by up to 23.7% yearly by 2030, and up to 40% with the best available technologies. However, achieving carbon neutrality requires economically viable and proven strategies, highlighting the need for urgent action and innovation.
The dissertation emphasized the importance of using recycled asphalt to reduce environmental and economic impacts in road construction. Recycled asphalt can replace both new bitumen and aggregate without major production changes. The study found that the end-of-waste location, where recycled asphalt changes from waste to a secondary material, affects various impact categories. This shows the complexity of sustainability comparisons in pavement studies. Simulations suggested that higher recycling rates improve pavement durability, though results vary with different mixtures and recycling rates. Overall, the integration of recycled asphalt is crucial for improving pavement sustainability across different layers and mixtures.

'Multi-Agent Communication and Behaviour Training using Reinforcement Learning' (3/07/2024)

Simon Vanneste


There are many real-world problems where distributed systems must work together to achieve a common goal. Artificial intelligence has started to play an important role in our lives, so we investigated how it can be used to develop these distributed systems. In this research, we explore how different intelligent entities (agents) can work together and communicate with each other. We employ reinforcement learning to allow the agents to learn how to communicate and how to behave. In reinforcement learning, an agent learns which actions to take based on the reward it receives. This training method allows the agents to develop a custom communication protocol that is thoroughly integrated with the trained behaviour. In the first part of the thesis, we investigated how these kinds of methods can be used in real-life applications. Next, we developed multiple algorithms to learn a communication protocol. Finally, we explored how to train these systems in a decentralized way.
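
The reinforcement-learning loop described above — an agent learning which actions to take from reward alone — can be sketched with minimal tabular Q-learning: an agent on a 1-D corridor of 5 cells learns to walk right to the goal cell. This is a toy single-agent setup, not the thesis's multi-agent communication environment.

```python
import random

def q_learning(episodes=300, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    n_states, goal = 5, 4
    q = [[0.0, 0.0] for _ in range(n_states)]  # actions: 0 = left, 1 = right
    for _ in range(episodes):
        s = 0
        while s != goal:
            # epsilon-greedy action selection with random tie-breaking
            if rng.random() < eps or q[s][0] == q[s][1]:
                a = rng.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == goal else 0.0       # reward only at the goal
            # temporal-difference update toward the bootstrapped target
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learning()
# After training, "right" should dominate in every non-goal state.
```

In the multi-agent setting studied in the thesis, the action space additionally includes messages to other agents, so the same reward-driven update shapes a communication protocol alongside the behaviour.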

'Active Infrared Thermography for the Inspection of Paintings' (14/06/2024)

Michaël Hillen


Active infrared thermography (AIRT) is a non-destructive imaging method that can reveal information about the internal structure of an object. Here it was used to study wooden panel paintings by visualizing damage to the paintings and the structure of the wooden support. An automated mobile measurement system was developed for measuring large paintings in situ. This system was employed to study several paintings from the collection of the Royal Museum of Fine Arts Antwerp (KMSKA). The AIRT results were compared to other established imaging methods in the field: X-ray radiography (XRR), infrared reflectography (IRR), and macroscopic X-ray fluorescence (MA-XRF). Ultimately, it was shown that AIRT provides complementary information to these methods and would be a valuable addition to the conservator’s toolkit.

'Optimizing the Ductwork Design of Centralized Air Distribution Systems for New Buildings and Retrofits: A Holistic Simulation-Based Approach' (14/06/2024)

Zakarya Kabbara


This research introduces a new method for optimizing the design of HVAC ductwork systems for both new buildings and renovations. Using advanced simulations, the method identifies optimized designs that enhance the system's performance while also minimizing its life cycle costs.

'Distributed Microphone Arrays for Passive Acoustic Localization Across Spatial and Temporal Scales' (6/06/2024)

Erik Verreycken


To an electronics engineer, the world around us is quantified using sensors. These sensors fall into two main categories: passive, where existing phenomena are measured, and active, where the system emits a signal itself. Active sensing modalities include sonar, which emits sound and records echoes with microphones to sense objects, GPS that triangulates receiver position using time delays from satellite signals, and MRI, utilizing strong magnetic fields to visualize the interior of a human body. In contrast, passive sensing modalities require no signal emission and instead rely only on existing phenomena for measurement. Examples include cameras, microphones, seismometers, and magnetic compasses.

Passive acoustic localization refers to a set of measurement techniques utilizing pre-existing audio to localize and analyze the audio source. Typically, a microphone array is employed. Spatial variation in the microphone locations (two microphones cannot be placed at the same position) results in slight differences in the recorded audio at each microphone. These differences can manifest as timing disparities, attributed to varying sound travel distances to each microphone, or as differences in intensity due to microphone position and the radiation pattern of the sound source, i.e. the difference in audio strength depending on the angle. Analyzing these differences allows deducing properties of the sound source, including position, path, point of acoustic focus, radiation pattern, etc.
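
The timing-disparity idea above can be sketched in a few lines: estimate the time-difference-of-arrival (TDOA) between two microphones by cross-correlation, then convert it to a far-field direction of arrival. The sample rate, microphone spacing and the synthetic "call" are illustrative assumptions, not the thesis hardware.

```python
import numpy as np

fs = 48_000        # sample rate in Hz (assumed)
c = 343.0          # speed of sound in m/s
d = 0.5            # microphone spacing in m (assumed)

rng = np.random.default_rng(0)
call = rng.standard_normal(4096)      # stand-in for a recorded vocalization
true_delay = 30                       # mic 2 receives the call 30 samples late
mic1 = call
mic2 = np.roll(call, true_delay)

# The peak of the full cross-correlation gives the lag estimate.
xcorr = np.correlate(mic2, mic1, mode="full")
lag = int(np.argmax(xcorr)) - (len(call) - 1)

tau = lag / fs                                           # TDOA in seconds
doa = np.degrees(np.arcsin(np.clip(c * tau / d, -1.0, 1.0)))
```

With more microphones, multiple pairwise TDOAs can be intersected to recover a full 3-D source position rather than a single angle — which is why the accuracy of the microphone positions matters so much.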

The main advantage of passive localization is that it can be deployed unobtrusively, i.e. without disturbing the sound emitting object that is being studied. In biological applications, this means there is no need to capture the animal to attach sensors to it. In industry, this means no machines have to be brought down to be retrofitted with sensors. In public infrastructure, this means no major infrastructure works and downtime are needed. The main advantage of acoustic localization is that it can be applied to any animal, object, etc. that emits sound and only these already occurring sounds are needed to perform the analysis.

The main reason for us to study passive acoustic localization is that it enables us to study animals in their natural environment. Passive acoustic localization allows us to study bats without disturbing their natural behavior. We can just sit by areas that are frequented by bats, set up our array and investigate what makes them the aerial acrobats that they are.

The main downside of passive acoustic localization is that it can only be performed when the animal or object is making a sound. This limits its applicability to vocalizing animals during their vocalizations, or larger animals that make sound by moving physically. Passive acoustic localization is therefore very well suited to study animals that use vocalizations for navigation, such as bats and some species of cetaceans. A second downside is that passive acoustic localization is limited in range based on the amplitude and directionality of the sound source. To overcome this problem, larger and denser arrays or networks of microphones must be constructed. Conventionally, these arrays are built with expensive microphones and expensive recording hardware, which can limit the size or number of microphones that can be deployed in a single experiment. Furthermore, in larger microphone arrays, it may not be possible to capture or sample all microphones using the same recording device. This in turn can create disparities in the timings of the recorded audio, which has an adverse effect on the algorithms used for passive acoustic localization, many of which require a very high degree of synchronization. Finally, the algorithms for passive acoustic localization described throughout the literature and this thesis require the microphone positions to be known, because the accuracy of the algorithm depends on how accurately the microphone positions are known. Retrieving the microphone positions may not be a trivial task when the array is constructed in a remote area, or may require a considerable amount of error-prone human work to measure all positions.

The key contributions of this thesis are situated around the development of a framework for constructing microphone arrays for passive acoustic localization in a biological context. The framework is composed of a novel hardware platform that enables the construction of microphone arrays of (nearly) arbitrary size and collects data in a manner that allows for acoustic localization and other methods of analysis. We also optimized our array for end-user convenience.

The first contribution is the development of a hardware platform for the construction of microphone arrays. We have explored the usage of MEMS microphones that are orders of magnitude cheaper than current commercial solutions. In this thesis, we prove that these MEMS microphones can be used for acoustic localization and analysis of acoustic signals. We further prove that the platform can be used to create microphone arrays of nearly arbitrary size.

The second contribution deals with data collection and synchronization. In any system dealing with acoustic localization, knowing the timing of the acoustic signal is critical. We created a novel synchronization technique by adding a synchronization channel to our data that can be wired or wireless and that enables us to synchronize data from different microphones/sensors in a post-processing step.

The third contribution concerns the usability of the system and can be split into two sub-contributions. The first is automatic spatial array calibration. We exploit the fact that our small-scale arrays can be described as a single collection of microphones, i.e. an array with six degrees of freedom, instead of individual microphones with three degrees of freedom each. We also propose a calibration tool that exploits those properties to achieve a quicker, more robust calibration of the array. The second sub-contribution is situated in some key concepts that enable us to write better, more efficient algorithms. The terms small-scale array and large-scale array are introduced to describe different kinds of arrays, which are constructed using the same hardware components but whose differences can be exploited in software to analyze or localize acoustic signals more efficiently.

Finally, the fourth contribution is a biological experiment performed with the framework on the hunting behavior of pallid bats. This experiment shows that the roughness of the surface on which prey is placed has a significant effect on the capture efficiency of pallid bats, which use a trawling hunting strategy. These results were published in Nature Communications Biology.

'Gaussian Processes for 3D Measurements' (29/05/2024)

Ivan De Boi

  • Wednesday 29 May 2024
  • 3.30 p.m.
  • Campus Middelheim - office m.A.143 (aula Patrice Lumumba)
  • Promotors: prof. dr. Rudi Penne & dr. Pieter Jorissen
  • Faculty of Applied Engineering


On the usage of probabilistic machine learning methods for calibrating 3D measuring devices based on straight lines.
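
A minimal Gaussian-process regression sketch (RBF kernel, jittered observations) illustrates the kind of probabilistic model applied here to calibration tasks. The 1-D test function, kernel choice and hyperparameters are illustrative assumptions, not thesis settings.

```python
import numpy as np

def rbf(a, b, length=0.5, var=1.0):
    """Squared-exponential covariance between two 1-D input sets."""
    return var * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

def gp_posterior(x_train, y_train, x_test, noise=1e-4):
    """Posterior mean and standard deviation at the test inputs."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_test)
    Kss = rbf(x_test, x_test)
    mean = Ks.T @ np.linalg.solve(K, y_train)
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

# Fit samples of sin(x) and predict at an unseen input.
x_train = np.linspace(0.0, 2.0 * np.pi, 15)
y_train = np.sin(x_train)
mean, std = gp_posterior(x_train, y_train, np.array([np.pi / 2]))
```

Unlike a plain curve fit, the posterior standard deviation quantifies how much the model trusts each prediction — useful when a calibrated measuring device must report uncertainty, not just a value.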

'Artificial Intelligence for Industrial Process Control: Modeling, Optimization and Explainability' (28/05/2024)

Furkan Elmaz


Artificial Intelligence (AI) has been at the forefront of recent technological advancements. Various sectors have started integrating AI into their workflows, and this transformation is expected to accelerate. However, certain industries have been struggling to adopt AI despite its promising potential documented in the literature. Industrial processes with strict constraints and infrastructural inertia in particular show a strong tendency towards more conventional, reliable and time-tested methods. This situation creates a "gap" between academic AI research and industrial practice. This thesis aims to investigate and tackle this gap by addressing the needs and limitations of industrial process control applications from a pragmatic perspective.

This thesis is structured around three core pillars: modeling, optimization, and explainability. In the modeling pillar, we propose a completely data-driven hybrid AI methodology to efficiently combine expert knowledge with real-life data to address the data variance limitation. The proposed methodology is applied to an HVAC use case to predict indoor temperature. The optimization pillar proposes the use of Reinforcement Learning (RL) in a pharmaceutical process and shows its potential as a process optimization tool even in the case of multiple constraints. This pillar was also verified experimentally in a real-life plant, further proving its practical viability. Lastly, the explainability pillar proposes an Explainable AI framework, aiming to generate human-understandable explanations from the RL agent which not only increases the transparency and trustworthiness but also allows us to gain insights and pave the way to enhancing our fundamental knowledge about the process itself.

Through the integration and utilization of these pillars, a cyclical pattern reveals itself: our fundamental knowledge and real-life data, when combined, allow us to develop accurate and reliable predictive models. The utilization of RL for optimization then enables us to find novel, more performant strategies capable of outperforming conventional approaches. Finally, with the application of XAI, we can unpack and understand the new strategies RL generates, which further deepens our fundamental understanding and fuels the next cycle. Therefore, going beyond resolving the resistance towards AI applications, this thesis also proposes a sustainable and iterative approach to gradually integrate AI into industrial process control workflows, turning AI from a "magical" tool into a crucial asset that can be effectively utilized.

'Development of electrochemical steps for glucose electrooxidation to value-added products' (22/04/2024)

Giulia Moggia


Carbohydrates are renewable, inexpensive and widely available organic raw materials. Only 3–5% of them have industrial uses; the rest decays and is recycled along natural pathways. One interesting finding in this field has been the recognition that acids derived from sugars have potential uses in fine chemistry. The biggest challenge in the use of carbohydrates as raw materials in fine chemistry is to achieve their direct and regioselective oxidation in aqueous media, which is difficult by classical chemical methods without a preliminary protection strategy. Electroorganic approaches currently fascinate academic and industrial researchers because of their high potential for industrial application. Electrocatalytic organic synthesis provides a powerful tool to control the reaction rate and selectivity through electrode potential and current, and represents a promising alternative to traditional industrial methods. In fact, electrosynthesis is naturally suited to obey the principles of Green Chemistry, owing to several environmentally favorable features: reduced energy consumption, use of renewable raw materials, and decreased emission of pollutants or toxic raw materials. Despite its sustainable nature and its potential to electrify the industry, replacing traditional, non-sustainable production processes for a broad range of fine chemicals, electrochemical synthesis methods are still very underdeveloped compared to their traditional alternatives.
More research is needed to better understand electrochemical processes and address the main challenges that prevent their application at industrial scale: the still unsatisfactory selectivity and/or productivity, the electrodes' limited lifetime and the insufficient know-how on up-scaling. This PhD thesis is specifically dedicated to the study of electrocatalytic routes for the selective oxidation of glucose to gluconic and glucaric acid (both commercially relevant carbohydrate derivatives). The aim is thus to investigate the factors that determine the selectivity of the reaction towards the two products of interest, including the choice of catalyst and the reaction conditions, and, as such, to unravel the reaction mechanism behind it. To this end, a combination of electrochemical and analytical techniques is used, where microscopic surface analysis, used for morphological characterization, is linked to the electrocatalytic performance.
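
A Faraday's-law sketch of the bookkeeping behind selectivity and productivity claims in electrooxidation: relate the charge passed at the electrode to the amount of product formed at a given faradaic efficiency. The current, duration and efficiency below are illustrative assumptions, not thesis results.

```python
F = 96485.0  # Faraday constant, C/mol

def moles_product(current_a, time_s, n_electrons, faradaic_eff):
    """Moles of product formed from charge Q = I * t at a given efficiency."""
    return current_a * time_s * faradaic_eff / (n_electrons * F)

# Glucose -> gluconic acid is a 2-electron oxidation: 50 mA for 1 h at
# 80% faradaic efficiency yields roughly 0.75 mmol of product.
n_gluconic = moles_product(0.05, 3600.0, 2, 0.8)
```

The same charge split across competing products (e.g. the deeper oxidation to glucaric acid) is exactly what selectivity measurements resolve, which is why controlling electrode potential and current matters.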

'Optimizing Simulated-assisted Verification of Safety Properties of Cyber-Physical Systems' (17/01/2024)

Mehrdad Moradi


The validation of the safety properties of Cyber-Physical Systems (CPS) requires tremendous effort, as the complexity of cyber-physical systems keeps increasing. A well-known approach for the safety validation of CPS is Fault Injection (FI). Fault injection is a testing technique that aids in understanding how the system behaves when stressed in an unusual way. The goal of fault injection is to find a catastrophic fault that can cause the system to fail by injecting faults into it. These catastrophic faults are less likely to occur, and finding them requires tremendous labor and cost, as the fault space is enormous and multidimensional. Therefore, traditional fault injection methods are not effective in terms of the number and severity of the faults found.
In this thesis, we utilize simulation-based fault injection in system models, which enables the test engineer to identify faults in the early phases of system development. We first performed a systematic literature review to categorize the existing methods, fault models, and metrics for system models. Then, we propose a fault injection method that injects faults into MATLAB/Simulink models, treated as white-box models, using model transformation. We also worked on fault injection in black-box models, based on the Functional Mock-up Interface (FMI). Next, we investigated multiple methods to increase the efficiency (in terms of the total number of critical faults and execution time) of fault injection using sensitivity analysis, reinforcement learning (RL), and Generative Adversarial Networks (GANs). These methods utilize high-level domain knowledge of the model under test to set up the fault injection simulation. The proposed methods automatically configure faults in the model under test and find catastrophic faults that can violate the safety properties of the model in the early stages of system development.
We compared the proposed method (RL-based and GAN-based) with random-based fault injection, and our proposed method outperformed random-based fault injection in terms of the severity or number of faults found.
We also demonstrated our method in Hazard Analysis and Risk Assessment (HARA), as specified in ISO 26262 (the automotive functional safety standard), which identifies malfunctions that could lead to hazards and rates their risks.
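
A toy sketch of the random fault-injection baseline that guided methods are compared against: a hypothetical speed controller whose safety property is "speed never exceeds 10 units", with a fault modeled as a sensor stuck at a random value from a random time step onward. This is entirely illustrative — not the thesis's Simulink/FMI tooling.

```python
import random

def violates_safety(stuck_value, stuck_at_step, steps=50, limit=10.0):
    """Simulate a proportional speed controller with a stuck-sensor fault."""
    speed, target = 0.0, 8.0
    for t in range(steps):
        reading = stuck_value if t >= stuck_at_step else speed
        speed += 0.5 * (target - reading)    # controller trusts the reading
        if speed > limit:                    # safety property violated
            return True
    return False

def random_fault_injection(trials=200, seed=1):
    """Sample faults uniformly and keep those that break the property."""
    rng = random.Random(seed)
    faults = ((rng.uniform(0.0, 16.0), rng.randrange(50)) for _ in range(trials))
    return [fault for fault in faults if violates_safety(*fault)]

critical_faults = random_fault_injection()
```

A guided (e.g. RL- or GAN-based) method would instead bias sampling toward regions where violations were already observed — here, low stuck values injected early — finding more and more severe faults in fewer simulation runs.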