Authors:
Moncef Garouani, Adeel Ahmad and Mourad Bouneffa
Affiliation:
Univ. Littoral Côte d’Opale, LISIC, Laboratoire d’Informatique Signal et Image de la Côte d’Opale, France
Keyword(s):
Explainable Artificial Intelligence, Meta-Learning, Shapley Values, Autoencoder, Meta-Features Importance.
Abstract:
Meta-learning, the ability of a machine learning model to adapt and improve across a wide range of tasks, has gained significant attention in recent years. A crucial aspect of meta-learning is the use of meta-features, high-level characteristics of the data that can guide the learning process. However, determining the importance of different meta-features in a specific context is a challenging task. In this paper, we propose the use of Shapley values as a method for explaining the importance of meta-features in the meta-learning process. Shapley values are a well-established concept from cooperative game theory, originally used for the fair distribution of a payout among a group of players based on each player's individual contribution. More recently, they have been applied in machine learning to quantify the contribution of individual features to a model's predictions. We show that Shapley values provide a better understanding of meta-features by evaluating their importance, which, in the context of meta-learning, may help improve model performance. Our results demonstrate that Shapley values can provide insight into the relative importance of different meta-features and into how they interact in the learning process. These insights can be used to optimize meta-learning models, resulting in more accurate and effective predictions. Overall, this work concludes that Shapley values can be a useful tool for guiding the design of meta-features and for improving the performance of meta-learning algorithms.
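To make the game-theoretic idea concrete, the following is a minimal, self-contained sketch of exact Shapley value computation over a toy set of meta-features. The meta-feature names, the scoring function, and its interaction term are illustrative assumptions, not taken from the paper; a real meta-learner's predicted performance would replace `meta_model_score`.

```python
# Hypothetical sketch: exact Shapley values for three illustrative meta-features.
from itertools import combinations
from math import factorial

META_FEATURES = ["n_instances", "n_features", "class_entropy"]  # assumed names

def meta_model_score(coalition):
    """Toy stand-in for a meta-learner's predicted performance when only
    the meta-features in `coalition` are available (the 'payout')."""
    base = {"n_instances": 0.10, "n_features": 0.05, "class_entropy": 0.20}
    score = sum(base[f] for f in coalition)
    # Assumed interaction: class entropy is more informative alongside dataset size.
    if "n_instances" in coalition and "class_entropy" in coalition:
        score += 0.05
    return score

def shapley_value(feature, features, value_fn):
    """Weighted average of `feature`'s marginal contribution over all coalitions."""
    others = [f for f in features if f != feature]
    n = len(features)
    total = 0.0
    for k in range(len(others) + 1):
        for subset in combinations(others, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += weight * (value_fn(set(subset) | {feature})
                               - value_fn(set(subset)))
    return total

values = {f: shapley_value(f, META_FEATURES, meta_model_score)
          for f in META_FEATURES}

# Efficiency property: the attributions sum to the full-coalition score.
assert abs(sum(values.values()) - meta_model_score(set(META_FEATURES))) < 1e-9
```

Here the interaction bonus is split evenly between the two interacting meta-features, illustrating how Shapley values expose not only individual importance but also how meta-features combine. Exact computation enumerates all 2^n coalitions, so in practice approximation methods (as in SHAP) are used for larger meta-feature sets.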