Risk Sensitivity and Entropy Regularization in Prototype-based Learning
Authors: Christos Mavridis and Erfaun Noorani
Conference: 30th Mediterranean Conference on Control and Automation, pp. 194-199, Athens, Greece
Date: June 28 - July 1, 2022
Prototype-based learning methods have been extensively studied as fast, recursive, data-driven, interpretable, and robust learning algorithms. We study the effect of entropy regularization in prototype-based learning with respect to (i) robustness to the dataset and the initial conditions, and (ii) the generalization properties of the learned representation. A duality relationship, with respect to a Legendre-type transform, between free energy and Kullback-Leibler divergence measures is used to show that entropy-regularized prototype-based learning is connected to exponential objectives associated with risk-sensitive learning. We use these results to motivate the development of entropy-regularized prototype-based learning algorithms by highlighting their properties, including (i) memory and computational efficiency, (ii) gradient-free training rules, and (iii) the ability to simulate an annealing optimization process that results in progressively growing competitive-learning neural network architectures.
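To make the idea concrete, here is a minimal sketch of entropy-regularized prototype updates at a fixed temperature T. It is not the paper's algorithm, only an illustrative soft-assignment scheme under standard assumptions: associations follow a Gibbs distribution over squared distances (the free-energy view), and each prototype moves to the probability-weighted mean of the data. The function name, its parameters, and the use of squared Euclidean distance are all illustrative choices.

```python
import numpy as np

def entropy_regularized_prototypes(X, n_protos=4, T=1.0, n_iters=50, seed=0):
    """Soft (entropy-regularized) prototype updates at temperature T.

    Association probabilities follow a Gibbs distribution over squared
    distances; each prototype is updated to the probability-weighted
    mean of the data. As T -> 0 this recovers hard vector quantization;
    lowering T gradually mimics an annealing schedule.
    """
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), n_protos, replace=False)].astype(float)
    for _ in range(n_iters):
        # Squared distances, shape (n_samples, n_protos).
        d2 = ((X[:, None, :] - W[None, :, :]) ** 2).sum(-1)
        # Gibbs association probabilities: softmax of -d2 / T.
        logits = -d2 / T
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        P = np.exp(logits)
        P /= P.sum(axis=1, keepdims=True)
        # Gradient-free, weighted-mean update of each prototype.
        W = (P.T @ X) / P.sum(axis=0)[:, None]
    return W
```

In a full annealing implementation T would be decreased over an outer loop, with prototypes split as the effective number of distinct cluster centers grows.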