Research team

Expertise

Designing and developing efficient numerical algorithms for large- to huge-scale (non)smooth (non)convex optimization problems, analysing the complexity and convergence of such algorithms, and applying them to real-world problems in the fields of inverse problems, machine learning, and systems biology.

HugeOPT: Krylov accelerated splitting methods for huge-scale network optimization. 01/10/2023 - 30/09/2027

Abstract

Efficiently solving huge-scale optimization problems is of great importance in science and technology. Many optimization problems can be described using an underlying network structure. For example, the optimal assignment of a crew to a given flight schedule can be formulated as an optimization problem over a very large graph. Similarly, constraint-based modelling of biochemical networks leads to optimization problems with millions of unknowns. A network typically induces useful structure in the constraints, which can be exploited by suitably chosen linear algebra techniques. Current off-the-shelf software cannot efficiently solve such problems when the number of variables is very large, especially when the objective function is nonconvex and possibly contains a nonsmooth term. Hence, there is a need for high-performance, structure-exploiting algorithms. In this project we aim to develop a wide range of efficient optimization algorithms for both convex and nonconvex problems, possibly containing a nonsmooth term in the objective function, by exploiting the structure that arises from the networks. High-performance implementations of the resulting algorithms will be made available in an open-source software package so that non-expert practitioners can easily use them.
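
The kind of structure exploitation the abstract alludes to can be illustrated with a small sketch: many splitting methods (e.g., ADMM-type schemes) reduce each iteration to a linear system whose matrix inherits the sparsity of the network, and such a system can be solved with a Krylov method using only matrix-vector products. The toy graph and the parameter rho below are hypothetical; this is a generic illustration, not the project's algorithm.

```python
# Hedged sketch: a conjugate-gradient (Krylov) solve of the graph-Laplacian
# linear system that typically appears as a subproblem inside a splitting
# method on a network. The toy graph and rho are made up for illustration.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]      # toy network
n = 4
rows, cols = zip(*edges)
A = sp.coo_matrix((np.ones(len(edges)), (rows, cols)), shape=(n, n))
A = A + A.T                                           # symmetric adjacency
L = sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A   # graph Laplacian

rho = 1.0                  # hypothetical proximal weight from the splitting
b = np.random.default_rng(0).standard_normal(n)

# L + rho*I is sparse and positive definite, so CG needs only sparse
# matrix-vector products per iteration -- the structure the network provides.
x, info = cg(L + rho * sp.eye(n), b)
assert info == 0           # info == 0 signals convergence
```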

Project type(s)

  • Research Project

Evaluation complexity of non-Euclidean optimization methods. 01/10/2022 - 30/09/2026

Abstract

In the last two decades, the emergence of mathematical modelling for big data has led to large- to huge-scale optimization problems with special structures. Moreover, many such practical problems do not have Lipschitz (Hölder) continuous derivatives. Because of this, and because of the sheer volume of data, classical optimization methods cannot be applied to these problems, which increases the demand for new algorithms that are convergent and computationally reasonable for solving these structured optimization problems. Designing, analyzing, and implementing efficient optimization algorithms for nonsmooth and nonconvex problems is therefore the subject of this proposal. More precisely, we assume that some parts of the objective function are (high-order) relatively smooth and develop first-, second-, and high-order non-Euclidean methods using generalized Bregman distances to find approximate (high-order) critical points of the objective function. In addition, we analyze the evaluation complexity of these non-Euclidean methods, which serves as a measure of their efficiency. We finally apply the developed algorithms to applications from signal and image processing, machine learning, data science, and systems biology.
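
For readers unfamiliar with the non-Euclidean setting, the following sketch shows the basic building block: a Bregman (mirror) gradient step in which the Euclidean distance is replaced by the Bregman distance of a kernel h. With the Burg entropy kernel h(x) = -Σ log x_i the step has a closed form, and Poisson-type objectives like the one below are smooth relative to h even though their gradients are not Lipschitz (Bauschke, Bolte, and Teboulle). The toy data and constants are illustrative assumptions, not part of the project.

```python
# A minimal sketch of a Bregman (non-Euclidean) gradient step, assuming
# the Burg-entropy kernel h(x) = -sum(log x) on the positive orthant.
# The toy objective and all constants are illustrative.
import numpy as np

def bregman_step(x, grad, L):
    """One step: solve grad_h(x_next) = grad_h(x) - grad/L.
    For Burg entropy, grad_h(x) = -1/x, which gives a closed form."""
    return 1.0 / (1.0 / x + grad / L)

# Toy Poisson-type objective f(x) = sum_i (a_i@x - b_i*log(a_i@x)); such f
# is smooth *relative* to Burg entropy with constant L = sum(b), even
# though its gradient blows up near the boundary of the domain.
rng = np.random.default_rng(0)
a = rng.uniform(0.5, 1.5, size=(20, 5))
b = rng.uniform(1.0, 2.0, size=20)
L = b.sum()                      # relative-smoothness constant

x = np.ones(5)
for _ in range(200):
    r = a @ x
    grad = a.T @ (1.0 - b / r)   # gradient of f at x
    x = bregman_step(x, grad, L)
```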

Project type(s)

  • Research Project

Taming Nonconvexity in Structured Low-Rank Optimization. 01/01/2022 - 31/12/2025

Abstract

Recent advances have made possible the acquisition, storage, and processing of very large amounts of data, strongly impacting many branches of science and engineering. A way to interpret these data and explore their features is to use techniques that represent them as low-dimensional objects amenable to intuitive interpretation by domain-specific experts. Such techniques typically factorize the data into two or more structured objects (e.g., orthogonal matrices or sparse tensors) with a lower rank than the original data. These factorizations can usually be formulated as solutions to large-scale nonconvex optimization problems; it is of interest to develop fast algorithms to solve them and, in particular, algorithms that provably converge to useful solutions. Many methods cannot enforce this guarantee; however, we argue that recent developments by the applicants on Bregman proximal envelopes and on block and multi-block relatively smooth functions are excellent tools for developing such algorithms. In short, this project aims (i) to introduce and study a very general formulation of nonsmooth, structured low-rank optimization, (ii) to establish conditions under which this formulation is tractable (even if nonconvex), (iii) to design provably convergent algorithms to address it, and (iv) to apply and test the new model and algorithms on problems from two domains: image processing and genomic analysis.
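
As a concrete instance of the problem class described above, the sketch below runs PALM-style alternating proximal-gradient steps on a toy sparse low-rank factorization, min over U, V of 0.5*||X - U V^T||_F^2 + lam*||V||_1, with block step sizes taken from the Lipschitz constants of the partial gradients. The data, rank, and lam are hypothetical, and this generic scheme stands in for, rather than reproduces, the Bregman-based algorithms the project proposes.

```python
# Hedged sketch: PALM-style alternating proximal-gradient steps for
# min_{U,V} 0.5*||X - U V^T||_F^2 + lam*||V||_1 (toy sparse low-rank model).
import numpy as np

def soft_threshold(z, t):
    """Prox of t*||.||_1: componentwise shrinkage."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

rng = np.random.default_rng(1)
X = rng.standard_normal((30, 20))        # hypothetical data matrix
r, lam = 3, 0.1                          # hypothetical rank and penalty
U = rng.standard_normal((30, r))
V = rng.standard_normal((20, r))

for _ in range(100):
    R = U @ V.T - X
    # Spectral norms bound the Lipschitz constants of the partial
    # gradients, giving safe block step sizes.
    LU = max(np.linalg.norm(V, 2) ** 2, 1e-8)
    U = U - (1.0 / LU) * (R @ V)         # smooth block: plain gradient step
    R = U @ V.T - X
    LV = max(np.linalg.norm(U, 2) ** 2, 1e-8)
    V = soft_threshold(V - (1.0 / LV) * (R.T @ U), lam / LV)  # prox step
```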

Project type(s)

  • Research Project