Multimodal Relational Interpretation for Deep Models. 01/05/2020 - 30/04/2024


Interpretation and explanation of deep models are critical for the wide adoption of systems that rely on them. Model interpretation consists of gaining insight into the information learned by a model from a set of examples. Model explanation focuses on justifying the predictions made by a model on a given input. While there is a continuously increasing amount of effort addressing the task of model explanation, its interpretation counterpart has received significantly less attention. In this project we aim to take a solid step forward in the interpretation and understanding of deep neural networks. More specifically, we will focus our efforts on four complementary directions: first, reducing the computational costs of model interpretation algorithms and improving the clarity of the visualizations they produce; second, developing interpretation algorithms capable of discovering complex structures encoded in the models to be interpreted; third, developing algorithms to produce multimodal interpretations based on different types of data, such as images and text; and fourth, proposing an evaluation protocol to objectively assess the performance of model interpretation algorithms. As a result, we aim to propose a set of principles and foundations that can be followed to improve the understanding of any existing or future deep complex model.


Research team(s)

Project type(s)

  • Research Project