Abstract
Explainable AI (XAI) is a set of tools that supports the interpretation of Machine Learning (ML) predictions. In the financial industry, research and applications of these tools mainly propose solutions and provide insights from the firms’ perspective. XAI2BIP’s objectives are threefold. First, it aims to validate the use of state-of-the-art explainable ML algorithms. Second, it intends to design a database infrastructure that enables explainable ML methods to be deployed, providing tools for customers. Finally, it plans to simulate and evaluate markets in which banks and financial institutions have the same level of information for decision-making as retail and commercial customers. XAI2BIP will fill three main gaps in the literature. First, it proposes benchmark XAI methods for unsupervised algorithms, which are far less explored than XAI methods for supervised models. Second, it delivers a database infrastructure and tools that comply with the most recent regulations on privacy and ethics in AI. Third, it simulates a market in which the power of information to support decision-making is equally distributed between financial institutions and customers, providing insights into the market’s economic growth and its main drivers. To accomplish these objectives, XAI2BIP first implements ML pipelines and XAI tools to forecast Customer Relationship Management (CRM) metrics. Second, it designs a database infrastructure that accounts for sustainability factors (e.g., ethics and the FAIR data principles). Third, it proposes XAI frameworks that can be applied to unsupervised ML algorithms. Finally, it uses agent-based models to simulate a virtual market in which customers and businesses have the same level of information (enabled through the XAI frameworks), allowing analysis of the economic impact and of the players’ behavior in this market.
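The gap around explaining unsupervised algorithms can be illustrated with a minimal surrogate-model sketch: cluster assignments from an unsupervised model are treated as labels for a supervised surrogate, whose feature importances then indicate which inputs drive the clustering. This is one common workaround, not the project's actual method; the data and library choices below (scikit-learn) are illustrative assumptions.

```python
# Sketch: explaining KMeans clusters via a supervised surrogate model.
# Data is synthetic: feature 0 separates two "customer segments", feature 1 is noise.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = np.vstack([
    np.column_stack([rng.normal(0, 1, 200), rng.normal(0, 1, 200)]),
    np.column_stack([rng.normal(6, 1, 200), rng.normal(0, 1, 200)]),
])

# Step 1: unsupervised clustering (no labels available).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Step 2: surrogate classifier trained to mimic the cluster assignments.
surrogate = RandomForestClassifier(random_state=0).fit(X, labels)

# Step 3: permutation importance on the surrogate explains which
# features the clustering effectively depends on.
result = permutation_importance(surrogate, X, labels, n_repeats=5, random_state=0)
print(result.importances_mean)  # feature 0 should dominate
```

Model-agnostic explainers such as SHAP can replace permutation importance in step 3; the surrogate-model pattern itself stays the same.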
Role UT
The University of Twente (UT) is the coordinator of the project. The UT team includes the following individuals: