Public defense PhD Christine Vanovermeire - January 2014

10 January 2014 at 4 pm

Supply chains are expected to be increasingly efficient. This does not only encompass low costs for transport, storage and so on: more and more, supply chains are also judged on their service levels and carbon footprint.

A possible solution is to collaborate horizontally: companies with similar transportation needs can plan their logistics together, such as companies that share many mutual clients, or companies that transport long-haul to similar regions.

When collaborating horizontally, it is crucial that partners adapt to one another and are flexible with regard to the terms of their delivery. It is, however, often not possible for every partner to show the same level of flexibility. This research demonstrates the importance of flexibility in generating collaborative profits. Moreover, contrary to previous research, which investigates the impact of flexibility at the level of one company or one logistics alliance, this research focuses on the interactions between the characteristics of the different partners. Indeed, one profitable combination that emerged from this research pairs a company with little flexibility with a company that has a high degree of flexibility.

However, when companies display different degrees of flexibility, the issue of gain sharing becomes even more difficult. What share of the gains should a flexible partner receive in comparison to its less flexible partner? This research analyses different gain sharing methods and develops a method that incentivizes flexibility.

Finally, this research focuses on collaborative planning. An optimization model that integrates the cost allocation is proposed. In this way, the model guarantees an efficient operational plan in which the changes to the terms of delivery are acceptable to all partners.


ORBEL 29 conference announcement - April 2014

We are pleased to announce that the next ORBEL conference will take place in Antwerp on 5-6 February 2015!

Article on data mining in Universiteit Antwerpen Magazine - December 2013

Read about big data and our data mining research team in the Universiteit Antwerpen Magazine of December 2013.

Public defense PhD Dirk van der Linden - June 2014

27 June 2014 at 3 pm

Complex software systems typically have a modular structure, and the modules may be developed in a distributed manner. These modules need to collaborate to fulfil their functional tasks. This implies that modules have an impact on other modules, through the coupling their collaboration requires.

Changes or extensions to such a system often cause an exponential increase in development, maintenance and test costs. Consequently, the integration of innovative ideas into the existing system is restrained. Over time, extensions become so expensive that it may even be 'cheaper' to rewrite the system in order to re-enable further extensions.

Within the research group, Normalized Systems Theory has been developed to enable the building of systems that are immune to this phenomenon; in other words, systems for which the impact (effort) of a change is related to the new functionality, not to the size of the existing system.

This work investigated the application of Normalized Systems Theory in the domain of automation systems. How should this theory be applied? What implications can be expected for typical development environments, architectures and existing industrial standards? Is it possible to introduce constraints on existing systems in order to achieve higher evolvability?

This research started from a number of reference models, i.e., three industrial standards and the Normalized Systems Elements, which are theoretical definitions of evolvable software building blocks. The results include software artifacts that comply with both the industrial standards and Normalized Systems Theory.

During the realization of a number of proof-of-concept systems, various difficulties cropped up; solving them led to the definition of two additional theorems about instance traceability. Finally, a set of design rules is formulated that facilitates the implementation of Normalized Systems Theory in automation systems.

Article on the Corporate social responsibility project in Universiteit Antwerpen Magazine - March 2014

Read about the Corporate social responsibility project for students, which has been developed by our department, in the Universiteit Antwerpen Magazine of March 2014.

EURO Award for the Best EJOR Paper 2014 for David Martens and co-authors - July 2014

Prof. David Martens and co-authors received the EURO Award for the Best EJOR Paper 2014 with the paper:

Verbeke W., Dejaeger K., Martens D., Hur J., Baesens B., "New insights into churn prediction in the telecommunication sector: A profit driven data mining approach".

The awards were presented at the closing session of the IFORS 2014 conference in Barcelona (Spain) in July 2014.


Article on the bike request scheduling project in the newspaper De Standaard - August 2014

Read the article on the bike request scheduling project of Prof. dr. Kenneth Sörensen in De Standaard of 29 August 2014 (pdf - 2,6 Mb).


Prof. Dr. Aviel Verbruggen in the national VRT news about possible power shortages - August 2014

App predicts dance hits through data mining - November 2014

Dorien Herremans and Prof. Kenneth Sörensen explain in the national VTM news of 26 November 2014 how their app can calculate the probability that a song will become a dance hit.

An article on the app also appeared in the newspaper De Standaard of 26 November 2014 (pdf - 0,632 Mb).

Public defense PhD Enric Junqué de Fortuny - October 2014

14 October 2014 at 5:45 pm

Data Mining for Auditing and Fraud Detection - November 2014

In cooperation with the Antwerp Tax Academy David Martens organizes a study day on Data Mining for Auditing and Fraud Detection on 6 November 2014.

More information and the registration page can be found on the Antwerp Tax Academy website.


Public defense PhD Dorien Herremans - December 2014

12 December 2014 at 6 pm

Can a computer compose music? Can a computer give us insight into what makes a dance song a hit? These questions are examined in this research, by applying quantitative methods from the domain of operations research to problems from music.
In the first part, a music generation system is developed that can generate a continuous stream of classical music. The music adheres to the strict rules of counterpoint, a musical style that lies at the basis of contemporary Western music. The system is based on a variable neighbourhood search algorithm and was implemented as an Android app, FuX. The name of the app refers to Johann Joseph Fux, the composer whose treatise Gradus ad Parnassum codified the rules of counterpoint. The app is freely available in the Google Play store.
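At its core, variable neighbourhood search improves a candidate solution by cycling through several neighbourhood moves and restarting from the first move after every improvement. A heavily simplified sketch follows; the melody representation, the two moves and the "smoothness" objective are invented stand-ins, not the counterpoint rules used in the dissertation:

```python
import random

def variable_neighbourhood_search(initial, objective, neighbourhoods, max_iter=1000):
    """Generic VNS skeleton: cycle through the neighbourhoods, restarting
    from the first one whenever an improvement is found."""
    best = initial
    best_cost = objective(best)
    for _ in range(max_iter):
        k = 0
        while k < len(neighbourhoods):
            candidate = neighbourhoods[k](best)   # perturb in neighbourhood k
            cost = objective(candidate)
            if cost < best_cost:                  # improvement: restart at k = 0
                best, best_cost = candidate, cost
                k = 0
            else:
                k += 1
    return best, best_cost

# Toy objective: total interval size, a simplistic proxy for the counterpoint
# preference for stepwise melodic motion.
def smoothness(melody):
    return sum(abs(a - b) for a, b in zip(melody, melody[1:]))

def change_one_note(melody):
    m = list(melody)
    m[random.randrange(len(m))] += random.choice([-2, -1, 1, 2])
    return m

def swap_two_notes(melody):
    m = list(melody)
    i, j = random.sample(range(len(m)), 2)
    m[i], m[j] = m[j], m[i]
    return m

random.seed(0)
melody, cost = variable_neighbourhood_search(
    [60, 67, 72, 55, 64], smoothness, [change_one_note, swap_two_notes])
print(cost)  # never greater than the initial smoothness of 38
```

In the actual system the objective scores a fragment against the rules of counterpoint rather than this toy smoothness measure.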
In the second part of this research, models are created that give more insight into a certain musical style. This enables us to generate music in a given style automatically, without having a formal description of the style from music theory. Musical style characteristics of three composers were extracted by scanning a large collection of existing musical pieces. Based on these data, a model was created that allows us to classify musical pieces by composer (Bach, Beethoven and Haydn). This model was plugged into the FuX app, which enables the generation of music with the characteristics of a certain composer. The user of the system can configure the proportions in which the generated music contains the musical characteristics of the three composers: Bach, but with a hint of Haydn; 50% Beethoven and 50% Bach; every combination is possible.
Complex Markov models were also built that describe the counterpoint style and music for the bagana, an Ethiopian lyre. Based on these models, different evaluation metrics were developed that can be used by the music generation system to generate music in a certain style. This was combined with an efficient technique to generate cyclic, structured music.
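The role of a Markov model as an evaluation metric can be illustrated with a toy example: estimate first-order transition probabilities from pieces in the target style, then score a new sequence by its log-likelihood under those probabilities. The note sequences below are invented; the models in the dissertation are far richer:

```python
import math
from collections import Counter, defaultdict

def fit_markov(sequences):
    """Estimate first-order transition probabilities from example sequences."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

def log_likelihood(model, seq, floor=1e-6):
    """Score a sequence under the model; unseen transitions get a small
    floor probability so the metric stays finite."""
    return sum(math.log(model.get(a, {}).get(b, floor))
               for a, b in zip(seq, seq[1:]))

# Invented toy corpus in some 'style': mostly stepwise motion between scale degrees.
corpus = [[1, 2, 3, 2, 1], [3, 2, 1, 2, 3], [1, 2, 1, 2, 3]]
model = fit_markov(corpus)

in_style = log_likelihood(model, [1, 2, 3, 2, 1])
out_of_style = log_likelihood(model, [1, 3, 1, 3, 1])
print(in_style > out_of_style)  # True: the stepwise sequence scores higher
```

A generator can then use such a score as one of its evaluation metrics, preferring candidate fragments with higher likelihood under the style model.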
In the last part of this research, data mining was used to build classification models that predict whether a dance song will reach a top 10 hit position rather than a lower listed position. This system was implemented as a free online tool that enables users to upload their own mp3. The system returns the probability that the song will be a top 10 hit.

Peter Goos received the Shewell Award of the American Society for Quality - November 2014

In November 2014 Peter Goos received the Shewell Award for the best presentation (I-Optimal Designs for Mixture Experiments) and supporting documents at the Fall Technical Conference (San Antonio, 17-18 October 2014). Co-authors are Utami Syafitri (University of Antwerp) and Bradley Jones (SAS Institute). It is the second time Peter Goos has received the Shewell Award; the first was in 2004 in Roanoke.

Article on the future of nuclear power in the newspaper "De Standaard" - September 2014

Read the article on the future of nuclear power by prof. dr. Aviel Verbruggen in De Standaard of 20 September 2014 (pdf - 1,3 Mb).

Public defense PhD Marco Castro - January 2015

7 January 2015 at 2 pm

The travelling salesperson problem with hotel selection (TSPHS) is a recently proposed variant of the travelling salesperson problem. The motivation of the TSPHS is that it is not always possible to visit each of the customers or cities assigned to a salesperson in a single work shift. The TSPHS has several practical applications, which include the case of employees or salespeople in a company who need to design their work trips, the programming of a fleet of trucks that have to travel long distances and need to split the entire journey into several days, and the routing of electric vehicles that need to recharge at one of the available stations, among others. In this dissertation, three variants of the TSPHS are tackled, namely the single travelling salesperson problem with hotel selection, the multiple travelling salesperson problem with hotel selection, and the travelling salesperson problem with multiple time windows and hotel selection. Efficient metaheuristics are developed for the first two variants, while an exact solution approach is developed for the last variant.
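The hotel-selection aspect can be illustrated with a deliberately simple sketch (not one of the algorithms developed in the dissertation): given a fixed customer sequence on a line, a shift is closed at the nearest hotel whenever the next customer would no longer fit within the shift length. All positions and limits below are invented:

```python
def split_into_shifts(customers, hotels, max_shift):
    """Greedy sketch: visit customers in the given order; whenever travelling
    to the next customer and then to its nearest hotel would exceed the shift
    length, close the current shift at the hotel nearest the current position.
    Positions are points on a line; distance is absolute difference."""
    nearest = lambda pos: min(hotels, key=lambda h: abs(h - pos))
    shifts, shift = [], []
    pos, used = hotels[0], 0.0           # every shift starts and ends at a hotel
    for c in customers:
        step = abs(c - pos)
        if shift and used + step + abs(nearest(c) - c) > max_shift:
            shifts.append((shift, nearest(pos)))     # end shift at a hotel
            pos, used, shift = nearest(pos), 0.0, []
            step = abs(c - pos)
        shift.append(c)
        used += step
        pos = c
    shifts.append((shift, nearest(pos)))
    return shifts

# Hypothetical instance: five customers along a line, hotels at 0, 5 and 10.
shifts = split_into_shifts([2, 4, 6, 8, 9], hotels=[0, 5, 10], max_shift=6)
for customers_in_shift, hotel in shifts:
    print(customers_in_shift, "-> hotel at", hotel)
# [2, 4] -> hotel at 5
# [6, 8, 9] -> hotel at 10
```

A real TSPHS solver must of course also optimize the visiting order and the hotel choices jointly, which is what makes the problem hard.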


David Martens talks about the dance hit app in the TV programme Reyers Laat - December 2014

David Martens talks about the dance hit app in the TV programme Reyers Laat of 1 December 2014.

You can watch the entire episode. The interview with David Martens starts at 19:03 min. 

Article about the necessity of entrepreneurship courses in the Universiteit Antwerpen Magazine of December 2014

Read the article by Johan Braet about entrepreneurship courses in the Universiteit Antwerpen Magazine of December 2014.

Public defense PhD Luca Tallarico - February 2015

18 February 2015 at 4 pm

The physical transportation of cash plays a vital role in our daily lives. In 2012, worldwide cash transactions were estimated at $11.6 trillion, with a growth of 1.75% between 2008 and 2012. Due to the nature of the transported goods, crime is a significant challenge and carriers are constantly exposed to serious security threats such as robberies. Attacks on vehicles are by no means rare, although the number of incidents and the average losses differ from country to country. Despite the attention that researchers have devoted to vehicle routing problems, the issue of security during the transportation of cash has gained greater consideration in the academic world only very recently. However, specific contributions in this field are still limited.

This thesis aims to develop risk-effective tools to support decision makers when planning safe and cost effective vehicle routes. The proposed models and the related optimization techniques find their natural applicability in the cash-in-transit sector, even though they can be easily extended to other domains such as the transportation of dangerous goods and/or the design of patrol routes for security agents. A variety of vehicle routing problems are described, where a risk constraint limits the risk exposure of each vehicle. Several real-life settings such as the presence of hard time windows and/or multiple depots are also considered. In addition, a multi-objective decision model, where both risk and travel costs need to be minimized, is developed to improve the decision process. Finally, a generalization of the peripatetic routing problem is presented to increase security by enhancing route unpredictability. 
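One common way to quantify risk in the cash-in-transit literature is to let it grow with both the amount of cash on board and the distance travelled; the risk constraint then caps this quantity per vehicle. A minimal sketch assuming this multiplicative risk model (all figures are invented):

```python
def route_risk(legs):
    """Risk of a route, modelled here as the sum over legs of the cash on
    board times the leg length. `legs` is a list of (distance, cash_on_board)."""
    return sum(distance * cash for distance, cash in legs)

def is_feasible(legs, risk_limit):
    """A route satisfies the risk constraint when its accumulated
    risk stays within the limit."""
    return route_risk(legs) <= risk_limit

# Hypothetical pick-up route: the cash on board grows at every stop.
legs = [(10, 0), (5, 200), (8, 350), (12, 500)]  # (km, k-euro on board)
print(route_risk(legs))                      # 10*0 + 5*200 + 8*350 + 12*500 = 9800
print(is_feasible(legs, risk_limit=10000))   # True
```

In a routing model such a constraint rules out long legs with a fully loaded vehicle, which is exactly the exposure the planner wants to avoid.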

Several effective algorithms based on metaheuristics are developed to solve these complex optimization problems. The approaches are tested in an academic context, but their implementation may be very useful in real-life applications, allowing route planners to save precious time that can be dedicated to other value-adding activities. Since these metaheuristics produce near-optimal solutions in limited computation time, they could be embedded in companies' ICT systems, bridging the gap between the academic world and business practitioners.

ORBEL 29 conference organised at the University of Antwerp - 5 and 6 February 2015

The ORBEL 29 conference took place at the University of Antwerp on 5 and 6 February 2015 and was organised by Kenneth Sörensen and his team.

Click the image below for a photo impression of the ORBEL 29 conference!

Public defense PhD Daniel Palhazi Cuervo - June 2015

Experimentation is arguably one of the fundamental pillars that enable the creation of new knowledge. The appropriate execution and analysis of a carefully controlled experiment is the main (and perhaps the only) way to establish a cause-effect relationship. The purpose of an experiment is to identify the influence that a set of experimental variables has on the process under study. The design of an experiment mainly consists of determining the number of experimental runs, the settings of the experimental variables in each run, and the sequence in which the runs need to be executed. This should be done with the purpose of maximizing the amount of information produced by the experiment. Several standard experimental designs have been proposed to achieve this goal. Although these designs have very good properties, they cannot always be applied to the complex experimental scenarios found in practice. A better strategy is to generate a custom design that is specifically tailored to the characteristics of the process. This approach is called optimal design of experiments, and its goal is to find the best possible design that can be carried out for the experimental scenario at hand. To this end, this approach treats the generation of a design as an optimization problem and makes use of different optimization algorithms to solve it.
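As a small illustration of treating design generation as an optimization problem, consider D-optimality, one standard measure of design quality, which maximizes the determinant of the information matrix X'X. For a straight-line model the criterion reduces to a simple expression; the two four-run designs compared below are invented:

```python
def d_criterion(xs):
    """D-criterion for the straight-line model y = b0 + b1*x:
    det(X'X) with X = [[1, x] for each run], i.e. n*sum(x^2) - (sum x)^2."""
    n = len(xs)
    s, ss = sum(xs), sum(x * x for x in xs)
    return n * ss - s * s

# Two candidate 4-run designs on the interval [-1, 1] (made-up examples):
spread = [-1.0, -1.0, 1.0, 1.0]   # runs pushed to the extremes
centre = [-0.5, 0.0, 0.0, 0.5]    # runs clustered near the centre

print(d_criterion(spread))  # 16.0
print(d_criterion(centre))  # 2.0 -> the spread design is D-better
```

An optimization algorithm searches the space of feasible designs for the one that maximizes such a criterion, which is exactly where metaheuristics come into play.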

The benefits of using algorithmic techniques for the optimal design of experiments have been extensively documented in the literature. This approach, however, has been criticized by important members of the statistical community and is not yet considered a routine practice. One of their main arguments is that the designs generated by algorithmic methods do not always match the quality of the standard experimental designs. Many statisticians are therefore reluctant to opt for this approach and remain faithful to standard designs.

This dissertation addresses such criticism levelled against this approach: it proposes new and more efficient algorithmic techniques for the generation of optimal experimental designs. These techniques are based on a family of optimization algorithms known as metaheuristics. As shown by an extensive set of computational experiments, the proposed algorithms are able to generate designs with better quality and in shorter execution times than other algorithmic methods. Additionally, it is also shown how the flexibility of these algorithms can be leveraged in order to generate new designs that, in many cases, have better properties than the standard designs.

This dissertation is divided into two parts. The first part focuses on the generation of optimal designs of industrial experiments. These experiments are widely used for quality control in the development and improvement of products and processes. The second part focuses on the generation of optimal designs of stated choice experiments. These experiments are widely used to study how people make choices and to identify the elements that drive people’s preferences.

Public defense PhD Utami Dyah Syafitri - October 2015

19 October 2015

A mixture experiment is an experiment in which the experimental factors are ingredients of a mixture, and the response depends only on the relative proportions of the ingredients.  Special features of mixture experiments are the two main constraints that the levels of the experimental factors all lie between zero and one, and that the sum of the levels of all experimental factors is one.
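The two defining constraints translate directly into a feasibility check; a minimal sketch with invented proportions:

```python
def is_valid_mixture(x, tol=1e-9):
    """Check the two mixture constraints: each ingredient proportion lies
    between zero and one, and all proportions sum to one."""
    return all(0.0 <= xi <= 1.0 for xi in x) and abs(sum(x) - 1.0) <= tol

print(is_valid_mixture([0.2, 0.3, 0.5]))   # True
print(is_valid_mixture([0.6, 0.6, -0.2]))  # False: a proportion is negative
```

Because the proportions sum to one, the design region is a simplex rather than a cube, which is why mixture experiments need their own design theory.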

In this dissertation, we discuss and address two main topics. The first topic is inspired by the fact that, although mixture experiments usually are intended to predict the response(s) for all possible formulations of the mixture and to identify optimal proportions for each of the ingredients, little research has been done concerning their I-optimal design. In this dissertation, we provide the first detailed overview of the literature on the I-optimal design of mixture experiments and identify several contradictions. We present I-optimal continuous designs for various Scheffé models and contrast them with the published results.

The second topic is related to additional constraints in mixture experiments. The problems are inspired by De Ketelaere, Goos, and Brijs (2011) who discuss a baking experiment. In some cases, the available stock of certain ingredients is  limited. This type of constraint substantially complicates the search for optimal designs for mixture experiments. We propose a variable neighborhood descent algorithm to tackle the problem.

Another problem in the baking experiment is that, in order to create "new" flours to bake bread from, the experimenters not only used the pure flours but they also mixed the sample flours. The experiment was complicated because the chemical component proportions could not be manipulated directly, but only indirectly, by mixing the sample flours. To tackle the problem, we propose a modified coordinate-exchange algorithm. 
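A plain coordinate-exchange algorithm (without the modifications proposed in the dissertation) sweeps over every coordinate of the design, replaces it by the best candidate level, and repeats until nothing changes. A sketch for the simple straight-line model under D-optimality, with invented candidate levels:

```python
def d_criterion(xs):
    """det(X'X) for the straight-line model y = b0 + b1*x."""
    n = len(xs)
    s, ss = sum(xs), sum(x * x for x in xs)
    return n * ss - s * s

def coordinate_exchange(design, levels, criterion):
    """Sweep over each run, try every candidate level for its coordinate and
    keep any strict improvement; repeat until a full sweep changes nothing."""
    design = list(design)
    improved = True
    while improved:
        improved = False
        for i in range(len(design)):
            for level in levels:
                trial = design[:i] + [level] + design[i + 1:]
                if criterion(trial) > criterion(design):
                    design = trial
                    improved = True
    return design

levels = [-1.0, -0.5, 0.0, 0.5, 1.0]
start = [0.0, 0.0, 0.0, 0.0]          # a deliberately poor starting design
best = coordinate_exchange(start, levels, d_criterion)
print(sorted(best), d_criterion(best))  # [-1.0, -1.0, 1.0, 1.0] 16.0
```

The modified algorithm in the dissertation must additionally keep every exchange inside the mixture simplex, since the coordinates there are ingredient proportions that cannot be varied independently.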

Prof. dr. Kenneth Sörensen received the Research Award of the Faculty of Applied Economics - October 2015

At the Faculty Meeting of Applied Economics on 12 October 2015, prof. dr. Kenneth Sörensen received the Faculty of Applied Economics (FAE) Research Award for his research achievements.

Prof. Steven Van Passel co-organised an interdisciplinary PhD course on sustainability assessment for the low-carbon economy


This interdisciplinary PhD expert course for young researchers on the state of the art in Techno-Economic Assessment (TEA), Life Cycle Analysis (LCA) and Integrated Assessment (IA) took place between 30 May and 1 June 2017.

The course attracted about 30 enthusiastic PhD researchers from 7 different countries and 15 different institutions, spanning several disciplines such as engineering, economics, science and architecture.

Click here to find out more details.

Prof. David Martens wins "CIONET European Research Paper of the Year 2017"

The CIONET European Research Paper of the Year is an award that distinguishes the best European research paper according to criteria including, but not limited to, inspiring new practices and paradigms, provocative thinking, rigour of the research method, and practical implications.

The jury decided to grant the award of European Research Paper of the Year 2017 to the paper:

Mining Massive Fine-Grained Behavior Data to Improve Predictive Analytics


Martens, David – University of Antwerp

Provost, Foster – New York University

Clark, Jessica – New York University

Junqué de Fortuny, Enric – Erasmus University Rotterdam (former PhD student at the University of Antwerp)


For more details please click the following link.