Stefan Wildhagen, Simon Michalowsky, Jan Feiling, Christian Ebenbauer
11:00 - 11:20 | Mon 17 Dec | Dazzle | MoA01.4
We consider perturbation-based extremum seeking, which recovers an approximate gradient of an analytically unknown objective function from measurements. Using classical needle variation analysis, we explicitly quantify the recovered gradient in the scalar case. We reveal that it corresponds to an averaged gradient of the objective function, even for very general extremum seeking systems. From this, we derive a recursion that represents the learning dynamics along the recovered gradient. These results give rise to the interpretation that extremum seeking actually optimizes a function other than the original one. From this insight emerges a new perspective on the global optimization of functions with local extrema: because the gradient is averaged over a certain time period, local extrema might be evened out in the learning dynamics. Moreover, a multidimensional extension of the scalar results is given.
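To make the mechanism concrete, below is a minimal sketch of scalar perturbation-based extremum seeking in its classical sinusoidal-dither form (a standard scheme, not necessarily the exact system analyzed in the paper): the input is perturbed by a sinusoid, the measured objective value is demodulated by the same sinusoid, and the result acts as a gradient-like update. All parameter values (`a`, `omega`, `k`) are illustrative assumptions.

```python
import math

def extremum_seeking(J, theta0, a=0.2, omega=20.0, k=-5.0, dt=1e-3, T=50.0):
    """Euler-discretized scalar extremum seeking.

    J      : objective, known only through measurements J(u)
    theta0 : initial parameter estimate
    a      : dither amplitude (larger a averages the gradient
             over a wider window of the objective)
    omega  : dither frequency
    k      : adaptation gain (negative drives theta toward a minimizer)
    """
    theta = theta0
    for i in range(int(T / dt)):
        t = i * dt
        dither = a * math.sin(omega * t)
        y = J(theta + dither)           # measurement of the unknown objective
        theta += dt * k * dither * y    # demodulation: gradient-like update
    return theta

# Quadratic objective with its minimum at u = 1.5; the scheme recovers
# (on average) the gradient and converges to a neighborhood of 1.5.
est = extremum_seeking(lambda u: (u - 1.5) ** 2, theta0=0.0)
```

Averaging the update over one dither period shows that `theta` effectively follows `k * a**2 / 2` times the gradient of `J`; since the measurement is taken at `theta + dither` rather than at `theta`, the quantity recovered is an averaged gradient over the dither window, which is the effect the abstract exploits for smoothing out local extrema.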