Research team

Engineering Management

Expertise

I specialize in statistical modelling in business and industry, design and analysis of industrial experiments, design and analysis of choice experiments or stated preference studies, and operations research.

Integrating inventory and transportation planning in horizontal logistics collaborations: operational and strategic support for companies to form more efficient coalitions. 01/10/2017 - 31/08/2022

Abstract

The aim of this project is to study the effect of inventory policy and its interactions with vehicle routing in horizontal collaborations of different companies. This will be done through a series of rigorously designed experiments, using statistical design of experiments methodology.

Researcher(s)

Research team(s)

Empirical and methodological challenges in choice experiments 01/01/2017 - 31/12/2021

Abstract

Economic values are usually revealed in the marketplace. However, no such mechanism exists to reveal people's relative values for goods and services that are currently not bought and sold in the marketplace. Still, scientists would like to know the monetary value people attribute to them. We want to be able to carry out cost-benefit analyses to determine the welfare effects of technological innovations or public policy, to forecast new product success, and to understand the degree to which behavior is consistent with preferences and beliefs. Choice experiments (CEs) are arguably the most popular method currently used in preference and willingness-to-pay (WTP) elicitation studies, both in hypothetical and non-hypothetical settings. Originally, the method was developed for marketing and transport studies, but in the last two decades it has spread to environmental and resource economics, agricultural and food economics, and health economics. The ever-growing body of literature on CEs emphasizes the increasing role they are playing. In this elicitation method, respondents are generally asked to make choices between multiple alternatives, also called profiles, which are described by a number of attributes with different levels. Consequently, through nonlinear regression models, generally based on random utility theory (RUT), the utility each attribute (level) contributes to the good or service under study can be quantified and translated into (marginal) willingness to pay. To a large extent, the design of the CE drives the precision and the validity of the conclusions, and it is therefore considered a key aspect of the planning of a CE. Designing a CE involves selecting the profiles to be used in the experiment. The current state of the art is the Bayesian optimal design method.
However, the design and analysis methods for CEs are constantly improving, along with the discrete choice models themselves and the growing number of applications in different fields. Research on empirical and methodological advances in CEs faces the following challenges. First, RUT assumes that the respondent acts in a fully compensatory manner based on stable preferences. This has been found to be a demanding assumption. Hence, it is up to empirical research to determine what causes these assumptions to be violated and how sensitive the obtained estimates are to such violations. Second, the debate concerning what drives (out) hypothetical bias, i.e. the difference between what people say they are willing to pay in a hypothetical survey question and what they actually pay in a non-hypothetical experiment when money is really on the line or in real-life situations, has not been closed. Third, most CEs are hitherto single-site and/or single-case studies. Consequently, spatial and socio-cultural effects are often ignored, which impedes generalization. Despite the vast number of studies, findings often remain context-specific and cross-case comparisons are limited. Researchers from various applied economic disciplines continuously improve the ways of designing, collecting and analyzing choice data in search of behavioral insights as well as efficient policy development. While some informal connections between several of the participating groups are already in place, a more formal setup would provide a driving force for more rapid knowledge dissemination and state-of-the-art development of expertise. Therefore, it is important for Flanders to create a united and multi-disciplinary platform to keep up to date with the latest developments on CEs and to gather sufficient critical mass to be able to compete with other consortia for publications and project funding.
Moreover, with this scientific research network, we aim to provide a platform for postdoctoral researchers to exchange knowledge and to more easily and intensively collaborate intra- and internationally.
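The random-utility step described above, from estimated attribute coefficients to choice probabilities and marginal WTP, can be sketched as follows. The attribute levels and coefficients below are hypothetical illustrations, not estimates from any study mentioned here:

```python
import numpy as np

def logit_choice_probabilities(X, beta):
    """Conditional logit: P(alt j) = exp(x_j' beta) / sum_k exp(x_k' beta).

    X    : (n_alternatives, n_attributes) attribute matrix of one choice set
    beta : (n_attributes,) taste coefficients (assumed already estimated)
    """
    utilities = X @ beta                         # deterministic utilities
    expu = np.exp(utilities - utilities.max())   # numerically stable softmax
    return expu / expu.sum()

# Hypothetical choice set: columns are [price, quality].
X = np.array([[10.0, 1.0],
              [12.0, 2.0],
              [15.0, 3.0]])
beta = np.array([-0.4, 1.2])   # negative price coefficient, positive quality

p = logit_choice_probabilities(X, beta)
print(p)                       # ≈ [0.251, 0.374, 0.374], sums to 1

# Marginal willingness to pay for one extra unit of quality:
# WTP = -(beta_quality / beta_price)
wtp_quality = -beta[1] / beta[0]
print(wtp_quality)             # ≈ 3 monetary units per unit of quality
```

The WTP ratio is the standard translation from utility space to money space in RUT-based models; with Bayesian estimation one would average this ratio over posterior draws rather than plug in point estimates.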

Researcher(s)

Research team(s)

Task complexity, framing effects and post-hoc individual-level model analysis in discrete choice experiments. 01/10/2015 - 30/09/2018

Abstract

This project deals with three important issues in discrete choice experiments (DCEs), which are widely used to study preferences for attributes of competing products or services in various areas of economics. To maximize the information content of the data from DCEs, it is crucial to design the experiments optimally. In our research so far, we have focused on improving the statistical quality of DCEs. However, the statistical quality is not the only aspect to consider. The response quality of a DCE is at least as important and depends on whether respondents can answer the choice questions well, that is, whether the choice questions are not too complex. The framing or labelling of the attributes and attribute levels also plays a key role: positive frames generally stimulate risk-averse responding, as opposed to negative frames. Accounting for each of these two difficulties in the design and analysis of DCEs makes up a part of this project. The designs we aim to construct will score well on overall quality, which includes both statistical quality and response quality. A final part of the project is devoted to post-hoc individual-level discrete choice modelling, in which we show how to use individual preferences for market segmentation and the construction of indifference maps.

Researcher(s)

Research team(s)

Discretion and work conditionality in welfare practice in Europe. 01/01/2014 - 31/12/2017

Abstract

This research project has a substantive and a methodological goal. With regard to the substantive goal, the investigation of the implementation of activation practices has remained largely one-sided. We aim to explore interactions between three levels of characteristics: client, social worker and agency. The main research question is, 'How important is the discretionary freedom of social assistance agencies and social workers in deciding on a claimant's duty to work?'

Researcher(s)

Research team(s)

An integer linear programming approach to construct cost-efficient multi-stratum experimental designs for product and process innovation. 01/01/2013 - 31/12/2016

Abstract

This project's objective is to develop methods for creating cost-efficient plans for industrial experiments involving large numbers of factors that yield more and better information and thus enable faster innovation. To this end, we exploit the class of non-regular orthogonal arrays to construct designs for multi-stratum experiments, and we apply state-of-the-art integer linear programming techniques from operations research/management science to the design of experiments.
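The orthogonal arrays referred to above satisfy a combinatorial balance property: in every subset of columns of a given size (the strength), each combination of levels occurs equally often. A small check of this property for an assumed two-level array follows; the ILP construction itself requires a solver and is beyond a short sketch:

```python
import numpy as np
from itertools import combinations, product

def is_orthogonal_array(D, strength=2):
    """Check whether a two-level design (entries -1/+1) is an orthogonal
    array of the given strength: in every subset of `strength` columns,
    each level combination occurs equally often."""
    n_runs, n_cols = D.shape
    n_combos = 2 ** strength
    if n_runs % n_combos:
        return False
    target = n_runs // n_combos
    for cols in combinations(range(n_cols), strength):
        sub = D[:, cols]
        for combo in product((-1, 1), repeat=strength):
            count = np.sum(np.all(sub == combo, axis=1))
            if count != target:
                return False
    return True

# OA(4, 3, 2, 2): 4 runs, 3 two-level columns, strength 2
D = np.array([[-1, -1,  1],
              [-1,  1, -1],
              [ 1, -1, -1],
              [ 1,  1,  1]])
print(is_orthogonal_array(D))   # True
```

Strength-2 balance is what guarantees that main effects can be estimated independently of one another, which is why these arrays are attractive building blocks for multi-stratum designs.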

Researcher(s)

Research team(s)

Cost-efficient staggered-level designs for product and process innovation. 01/10/2012 - 30/09/2016

Abstract

The purpose of this project is to develop new cost-efficient experimental plans that yield more and better information and thus enable faster innovation in the presence of factors with hard-to-change levels. To this end, we use the new family of staggered-level experimental designs. We develop construction methods for staggered-level designs with any number of hard-to-change factors. A novel feature of our approach is that, unlike published work on the design of split-(split-)plot experiments, it aims at a precise estimation of all factor effects as well as of the variance components in the estimated model. To this end, we apply state-of-the-art combinatorial optimization techniques to the design of experiments.

Researcher(s)

Research team(s)

Discovering housing preferences using discrete choice experiments 01/01/2012 - 31/12/2012

Abstract

The aim of this project is to run an online discrete choice experiment to quantify housing preferences of young people in Antwerp. We will ask a panel of respondents to choose between different housing accommodations ranging from collective housing to individual studios. The study will allow us to validate our latest Bayesian design methodology and to define a policy plan in the housing market.

Researcher(s)

Research team(s)

Design of discrete choice experiments adapted to the respondent's cognitive process. 01/10/2011 - 30/09/2015

Abstract

Discrete choice experiments (DCEs), which involve respondents choosing among alternatives presented in choice sets, are widely used to study preferences for attributes of products or services in various economic fields. To maximize the power of the statistical inference from DCE data, it is crucial to design the experiments optimally. Most research in this area focuses on optimizing the design of DCEs under the simplifying assumption that respondents make compensatory decisions. This means that unattractive levels of one attribute can be compensated for by attractive levels of another attribute. However, the assumption of compensatory decision-making often proves unrealistic. This research project studies three scenarios in which respondents depart from the compensatory decision rule when making choices: (i) respondents ignore attributes in the decision making because there are too many, (ii) respondents favor certain attributes because of their position in the description of the alternatives, and (iii) respondents favor certain alternatives because of their position in the choice set. Pro-actively accounting for respondents' cognitive processes when constructing optimal DCEs in these scenarios will result in more practical designs for DCEs, with applications in marketing, transportation, environmental and health economics.

Researcher(s)

Research team(s)

Tools to improve the Six Sigma Process for total quality management. 01/08/2011 - 31/01/2012

Abstract

This project represents a research contract awarded by the University of Antwerp. The supervisor provides the Antwerp University research mentioned in the title of the project under the conditions stipulated by the university.

Researcher(s)

Research team(s)

Design and validation of models for short-term and long-term media mix investment optimization. 01/01/2011 - 31/12/2014

Abstract

The project investigates the impact of advertising investments, media mix allocation and advertising share of voice on short-term advertising and brand effects, on consumer activation (gathering extra information, visiting websites and generating word-of-mouth), and on their impact on long-term brand effects.

Researcher(s)

Research team(s)

Online conjoint choice experiments for the detection of latent market segments. 01/08/2010 - 31/05/2011

Abstract

This project aims at developing adaptive approaches for conducting online conjoint choice studies to detect an unknown number of latent market segments. The new approach will build on optimal design of experiments methodology.

Researcher(s)

Research team(s)

DEVACOE - Design of experiments for variance component estimation. 01/04/2010 - 31/03/2012

Abstract

This project aims to perform groundbreaking work in the optimal design of experiments for estimating variance components in random effects models and for the joint estimation of fixed effects and variance components in mixed effects models, in general, and split-plot, strip-plot and split-split-plot models, in particular. The design of experiments for an efficient estimation of variance components is one of the remaining challenges in the field of optimal experimental design, so the successful completion of this project would be a major breakthrough in statistical design of experiments.

Researcher(s)

Research team(s)

StatUA, a forum for applied statistics. 01/01/2010 - 31/12/2013

Abstract

This project represents a research contract awarded by the University of Antwerp. The supervisor provides the Antwerp University research mentioned in the title of the project under the conditions stipulated by the university.

Researcher(s)

Research team(s)

Improving the Six Sigma Total Quality Management Program. 20/10/2009 - 19/10/2010

Abstract

This project proposal aims at facilitating and improving the Measure phase and the Improve phase of the DMAIC strategy. The Measure phase deals with the quality of the measurement equipment. This is important because it is impossible to collect high-quality data without a proper measurement system. We will seek novel methods for carrying out measurement studies (also known as gauge R&R studies) that improve data quality while reducing the cost in the Measure phase. We will also develop methodology for conducting better experiments in the Improve phase. The focus will be on cost-efficient data collection by means of nested experimental designs. Our work on the Measure and Improve phases of the DMAIC strategy is of utmost importance for the success of Six Sigma quality improvement projects because, without adequate measurements and high-quality data, it is impossible to make well-informed decisions to improve quality. A common feature of the research problems in the Measure phase and the Improve phase is that they require an optimal data collection approach or experimental design. More specifically, it is important that the collected data allow a precise quantification of variability.
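The "precise quantification of variability" the abstract calls for amounts to estimating variance components. A minimal sketch with simulated data and one-way random-effects ANOVA estimators; a full gauge R&R study would use a crossed operator-by-part layout, which is omitted here for brevity:

```python
import numpy as np

def gauge_rr_oneway(y):
    """ANOVA variance-component estimates for a one-way random-effects
    layout: rows = parts, columns = repeated measurements of each part.

    Returns (repeatability, part_to_part) variance estimates."""
    p, r = y.shape
    part_means = y.mean(axis=1)
    grand_mean = y.mean()
    ms_between = r * np.sum((part_means - grand_mean) ** 2) / (p - 1)
    ms_within = np.sum((y - part_means[:, None]) ** 2) / (p * (r - 1))
    repeatability = ms_within                            # measurement error
    part_to_part = max((ms_between - ms_within) / r, 0.0)  # truncated at 0
    return repeatability, part_to_part

# Simulated study: 50 parts, 6 repeat measurements each (assumed values).
rng = np.random.default_rng(1)
true_part_sd, true_error_sd = 2.0, 0.5
parts = rng.normal(10.0, true_part_sd, size=(50, 1))
y = parts + rng.normal(0.0, true_error_sd, size=(50, 6))

rep, p2p = gauge_rr_oneway(y)
print(rep, p2p)   # close to 0.25 (= 0.5^2) and 4.0 (= 2.0^2)
```

A measurement system is judged adequate when the repeatability component is small relative to the part-to-part component, which is exactly why a precise estimate of both variances matters.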

Researcher(s)

Research team(s)

Innovation and reduction of time-to-market through designed experimentation. 01/01/2009 - 31/12/2012

Abstract

This project aims at developing novel search algorithms for finding optimal experimental designs. For that purpose, the newest methods for combinatorial optimization in operational research, involving metaheuristics, will be used in a new application area, the design of experiments. Both single-objective and multi-objective problems will receive our attention.
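The search problem described above can be illustrated with the simplest such algorithm, a greedy point-exchange heuristic for the D-optimality criterion; the metaheuristics the project targets refine this basic scheme to escape local optima. The model and candidate set below are illustrative assumptions, not taken from the project:

```python
import numpy as np
from itertools import product

def d_optimal_exchange(candidates, n_runs, n_iter=50, seed=0):
    """Greedy point-exchange search for a D-optimal exact design:
    repeatedly swap a design point for a candidate point whenever the
    swap increases det(X'X)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(candidates), size=n_runs, replace=True)
    X = candidates[idx].copy()

    def logdet(M):
        sign, val = np.linalg.slogdet(M.T @ M)
        return val if sign > 0 else -np.inf

    best = logdet(X)
    for _ in range(n_iter):
        improved = False
        for i in range(n_runs):
            for c in candidates:
                trial = X.copy()
                trial[i] = c
                val = logdet(trial)
                if val > best + 1e-10:
                    X, best = trial, val
                    improved = True
        if not improved:          # no single swap helps: local optimum
            break
    return X

# Candidate set: 3x3 grid in two factors; model columns (1, x1, x2).
levels = [-1.0, 0.0, 1.0]
candidates = np.array([[1.0, a, b] for a, b in product(levels, levels)])
design = d_optimal_exchange(candidates, n_runs=4)
```

For this tiny first-order model the heuristic recovers the known optimum, the 2x2 factorial at the corners; for realistic multi-stratum problems the search space is where metaheuristics earn their keep.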

Researcher(s)

Research team(s)

Optimal experimental design for quantitative electron microscopy. 01/01/2008 - 31/12/2011

Abstract

The aim of this research project is to apply state-of-the-art methods from optimal design of experiments in the field of electron microscopy. These methods will allow electron microscopists to evaluate, compare, and optimize experiments in terms of the attainable precision with which structure parameters, the atom positions in particular, can be measured. Moreover, statistical experimental design provides the possibility to decide whether new instrumental developments result in significantly higher attainable precision. The highest attainable precision determines the theoretical limit to quantitative electron microscopy.

Researcher(s)

Research team(s)

Robust Statistical process control with applications to control the quality of animal feed (Type3). 01/12/2007 - 30/11/2009

Abstract

The goal of this project is to develop a framework for building robust calibration models for heterogeneous spectral data in the context of a quality control process. Because of the complex, high-dimensional nature of spectral data, more advanced multivariate statistical models than multiple linear regression will be needed. As a matter of fact, the complex and heterogeneous nature of the data necessitates the use of robust versions of principal components regression and partial least squares regression. One objective of the project is to determine the method that leads to the best calibration model and to suggest improvements to the robust principal components method currently used. During the development of the framework for building calibration models, substantial attention will also be paid to robust preprocessing techniques (including data clustering methods), to model validation, and to an out-of-control action plan that will allow the predictions made by the calibration model to be used on the work floor. By using robust statistical techniques, we hope to build calibration models for a more reliable monitoring of whether or not a production process is in statistical control. The calibration models should also allow a correct identification of batches that are outside the specification limits. For these purposes, the graphical tools accompanying the robust statistical methods will be exploited. These tools, which are called regression outlier maps, classify the samples into regular data, vertical outliers, bad leverage points or good leverage points. For quality control purposes, it is crucial to inspect the vertical outliers and bad leverage points, because they correspond to samples with a large response value but a normal spectrum, and to samples with abnormal response values and aberrant spectra. The use of robust preprocessing techniques and robust data clustering will contribute to the predictive quality of the calibration models.
The newly developed procedures will be tested extensively on simulated data sets, real data sets from companies such as SESVANDERHAVE, and on two data sets of Aveve Veevoeding. One Aveve data set contains measurements of ground samples, while the other contains spectra of unground samples. The data were collected for the purpose of building calibration models for quality control purposes. Building a satisfactory calibration model using these data with standard methods available in software, however, turned out to be impossible. This is due to the heterogeneous structure of the data, and suggests that the use of robust preprocessing methodology and robust multivariate statistical techniques, some of which are still to be developed, is required. An additional interesting research question at Aveve is whether the data set with measurements for the unground samples could be used to construct good calibration models. An affirmative answer would allow Aveve to use unground samples only, and to skip the operator-dependent and time-consuming grinding operation. Furthermore, the Aveve case study can be used to assess the usefulness of the data clustering methods for improving the predictive power of the robust calibration models. This is because each of the two data sets contains measurements for different kinds of animal feed. Ideally, the robust framework developed in this project should lead to calibration models that can readily be implemented at Aveve's production plants. This practical implementation will require an automated out-of-control action plan, the development of which is one of the main objectives of this project.

Researcher(s)

Research team(s)

Optimal run orders for central composite experimental designs in the presence of serial correlation. 01/01/2006 - 31/12/2009

Abstract

The aim of the project is to search for run orders for the central composite design that allow an efficient estimation of the fixed parameters of the statistical model when the observations are serially correlated. Objectives: (1) computation of optimal run orders for known serial correlation structures; (2) computation of so-called robust run orders that perform well for several correlation structures, adopting and comparing (i) a Bayesian approach and (ii) a maximin approach.
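The criterion behind such comparisons can be sketched as follows: under serially correlated errors, the information about the fixed parameters is X'V^-1 X (the GLS information matrix), so two run orders of the same design generally score differently. A minimal illustration with an assumed AR(1) correlation structure and a 2x2 factorial, not the central composite design itself:

```python
import numpy as np

def ar1_covariance(n, rho):
    """AR(1) correlation matrix: V[i, j] = rho ** |i - j|."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def d_criterion(X, rho):
    """log det of the GLS information matrix X' V^-1 X under AR(1) errors."""
    V = ar1_covariance(len(X), rho)
    M = X.T @ np.linalg.solve(V, X)
    sign, val = np.linalg.slogdet(M)
    return val

# Two run orders of the same 2x2 factorial; columns are (1, x1, x2).
order_a = np.array([[1, -1, -1],
                    [1, -1,  1],
                    [1,  1, -1],
                    [1,  1,  1]], dtype=float)
order_b = order_a[[0, 3, 1, 2]]   # same points, different time order

rho = 0.5
print(d_criterion(order_a, rho), d_criterion(order_b, rho))
```

With positive serial correlation, run orders that change factor levels more often between consecutive runs tend to carry more information; at rho = 0 all run orders of the same design are equivalent, which is why run order is usually ignored under independent errors.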

Researcher(s)

Research team(s)

Client-oriented vehicle routing. 01/01/2006 - 31/12/2009

Abstract

The goals of this project are (1) to develop a branch-and-price algorithm based on a suitable mathematical programming formulation, in order to solve small and medium sized instances and (2) to develop a metaheuristic to quickly find good solutions to large-scale instances.

Researcher(s)

Research team(s)

Robust multivariate methods with missing data. 01/05/2005 - 30/04/2009

Abstract

Missing values appear in real-world data due to temporary malfunctioning or technical limitations of the measurement equipment. It also happens in survey experiments that people fail to answer every question in the poll, or that patients miss their weekly visit to the doctor and thus produce a missing value in the data set. Multivariate statistical methods like principal component analysis (PCA), principal component regression (PCR) or partial least squares regression (PLS) are the appropriate way to handle high-dimensional data sets. Since real-world data sets can also contain outliers, which influence the estimates made by standard statistical methods, we will focus on the use of robust alternatives for PCA [Hubert, Rousseeuw and Verboven 2002; Hubert, Rousseeuw and Vanden Branden 2005], PCR [Hubert and Verboven 2003], and PLS [Hubert and Vanden Branden 2003]. However, these algorithms are not yet able to deal with missing values. One approach to treat missing data is to use a multiple imputation (MI) method [Rubin 1987]. MI methods generate M complete data sets and combine the results of the M analyses into a final result. First, we will compare the MI method for classical PCA and PCR with other approaches. Next, we will propose a new algorithm that is computationally fast and can easily be adapted to the robust PCA and PCR algorithms. Finally, these new robust PCA and PCR methods for missing data will be thoroughly tested on simulated data as well as real data.
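The combine step of multiple imputation (Rubin's rules) can be sketched for the simplest possible estimand, a column mean; the project's robust PCA/PCR setting replaces this toy estimator, but the pooling logic is the same. All data below are simulated:

```python
import numpy as np

def multiple_imputation_mean(x, M=20, seed=0):
    """Rubin-style multiple imputation for the mean of a column with
    missing values (NaN). Each imputation draws replacements from a
    normal distribution fit to the observed values; the M estimates
    are then pooled with Rubin's combination rules."""
    rng = np.random.default_rng(seed)
    obs = x[~np.isnan(x)]
    mu, sd = obs.mean(), obs.std(ddof=1)
    n, n_mis = len(x), int(np.isnan(x).sum())

    estimates, variances = [], []
    for _ in range(M):
        filled = x.copy()
        filled[np.isnan(filled)] = rng.normal(mu, sd, size=n_mis)
        estimates.append(filled.mean())             # per-imputation estimate
        variances.append(filled.var(ddof=1) / n)    # its sampling variance

    q_bar = np.mean(estimates)                      # pooled point estimate
    w_bar = np.mean(variances)                      # within-imputation variance
    b = np.var(estimates, ddof=1)                   # between-imputation variance
    total_var = w_bar + (1 + 1 / M) * b             # Rubin's rule
    return q_bar, total_var

# Simulated column: 200 values from N(5, 1), 40 of them set to missing.
rng = np.random.default_rng(42)
x = rng.normal(5.0, 1.0, size=200)
x[rng.choice(200, size=40, replace=False)] = np.nan

est, var = multiple_imputation_mean(x)
print(est, var)   # pooled mean near 5.0
```

The between-imputation term b is what single imputation throws away: it is the extra uncertainty caused by not knowing the missing values, and omitting it is what makes single-imputation standard errors too small.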

Researcher(s)

Research team(s)

The design of paired comparison studies in the presence of order effects. 01/05/2005 - 31/12/2006

Abstract

In order to quantify the extent to which a group of people value the different characteristics of products, services, travel modes or health states, researchers in marketing, transportation or health economics perform paired comparison experiments in which respondents have to perform one or several comparisons of two alternatives. The purpose of this project is to develop a methodology to design this type of experiment in a statistically efficient way when order effects may influence the outcomes of the comparisons.

Researcher(s)

Research team(s)

Optimal design of marketing experiments. 01/01/2005 - 31/12/2008

Abstract

In marketing, numerous experiments are carried out to explore the preferences of potential consumers. Typically, these experiments present a series of (mostly hypothetical) products or services to a number of respondents. The respondents are then requested to specify their preference in terms of a choice for a certain product or service, or in terms of the price they are willing to pay for the products or services. If respondents receive a choice task, then the experiment is called a discrete choice experiment. If respondents have to value each of the administered products or services, then the experiment is denoted as a conjoint study. Despite the wide dissemination of both types of experiments, as well as of the sophisticated estimation methods for the associated statistical models, research on proper designs for discrete choice experiments and conjoint studies is rather scant. The aim of this project is to develop efficient experimental designs for both kinds of studies.

Researcher(s)

Research team(s)