Simple Distributed Control of Networked CPS: Indirect and Direct Communications
Date: April 01, 2014
We consider a system of N agents, each picking actions from a finite set and receiving a payoff that depends on the actions of all agents. The exact form of the payoffs is unknown; each agent can only measure the values of its own payoff. A decentralized algorithm, recently proposed by Young et al. and in the author's earlier work, leads in this setting to the agents picking welfare-optimizing actions under some restrictive assumptions on the payoff structure. We have very recently improved the algorithm to incorporate the exchange of certain bit-valued information between the agents over a directed communication graph. An interaction graph is introduced to encode known interactions in the system. The restrictions on the payoff structure are eliminated, and conditions guaranteeing convergence to welfare-minimizing actions with probability 1 are derived under the assumption that the union of the interaction and communication graphs is strongly connected.

We next consider the continuous-time, continuous-state-space version of the problem, based on ideas from extremum seeking control. We prove local convergence of the proposed algorithm to an arbitrarily small neighborhood of a local minimizer of the welfare function. The analysis includes a novel treatment of a dynamic consensus algorithm that may be of independent interest. Stochastic methods are employed throughout the proofs.

Our results show how indirect communications (signaling between the agents via their interactions through the system) and direct communications (messages sent directly between the agents) can complement each other and lead to simple distributed control algorithms with remarkably good performance. Applications include the management of wind farms, coverage problems in sensor networks, and robotic swarm formation control. We close by describing current and future research directions.
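As a toy illustration of the payoff-based setting (agents observe only their own payoff values, never the payoff functions themselves), the following sketch has each agent occasionally experiment with a new action and keep it only if its measured payoff did not drop. The function names, the experimentation rule, and all parameters are illustrative assumptions; this simple revert-on-loss rule is not the welfare-optimizing algorithm from the talk.

```python
import random

def payoff_based_learning(payoffs, n_agents, n_actions, eps=0.05, steps=20000, seed=0):
    """Toy payoff-based learning (illustrative, not the talk's algorithm).

    Each step, one agent experiments with probability eps: it tries a random
    action, measures its own new payoff, and reverts if the payoff dropped.
    `payoffs(joint)` returns the list of measured payoffs, one per agent.
    """
    rng = random.Random(seed)
    joint = [rng.randrange(n_actions) for _ in range(n_agents)]  # joint action profile
    current = payoffs(joint)                                     # measured payoffs
    for _ in range(steps):
        i = rng.randrange(n_agents)          # one agent considers experimenting
        if rng.random() < eps:
            old = joint[i]
            joint[i] = rng.randrange(n_actions)
            new = payoffs(joint)
            if new[i] < current[i]:          # agent's own payoff dropped: revert
                joint[i] = old
            else:                            # otherwise keep the new action
                current = new
    return joint

# Example: two agents, each paid 1 iff their actions coincide (a coordination game)
coord = lambda a: [1.0, 1.0] if a[0] == a[1] else [0.0, 0.0]
result = payoff_based_learning(coord, n_agents=2, n_actions=2)
```

In the coordination example the matched profiles are absorbing for this rule, so the agents end up coordinated; the talk's algorithm handles far more general payoff structures with the help of the communication graph.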
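For the continuous-time part, the following minimal single-agent sketch shows the classic sinusoidal extremum seeking idea the talk builds on: perturb the input, demodulate the measured payoff to estimate the gradient, and descend toward a local minimizer, all without knowing the payoff function. The gains and the Euler discretization are illustrative assumptions, not values from the talk.

```python
import math

def extremum_seek(J, x0, a=0.1, omega=5.0, k=0.5, dt=0.001, T=200.0):
    """Gradient-free minimization of J via sinusoidal extremum seeking.

    The input is perturbed by a*sin(omega*t); multiplying the measured payoff
    by sin(omega*t) and averaging recovers (approximately) dJ/dx, which is
    then used for gradient descent. Parameters here are illustrative.
    """
    x = x0        # estimate of the minimizer
    t = 0.0
    for _ in range(int(T / dt)):
        probe = x + a * math.sin(omega * t)             # perturbed input
        y = J(probe)                                    # measured payoff only
        grad_est = (2.0 / a) * y * math.sin(omega * t)  # demodulated gradient estimate
        x -= k * grad_est * dt                          # descend toward a minimizer
        t += dt
    return x

# Example: quadratic welfare with minimizer at 2.0
x_star = extremum_seek(lambda u: (u - 2.0) ** 2, x0=0.0)
```

Averaging shows the demodulated signal tracks J'(x), so x converges to an arbitrarily small neighborhood of the local minimizer (shrinking with the perturbation amplitude a), which mirrors the neighborhood-convergence result stated in the abstract.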