
Budget Allocation in Binary Opinion Dynamics

01 January 2017


Over the last few decades, considerable research effort has been devoted to modeling the process of opinion formation among agents that interact through a social network. DeGroot's [5] and Friedkin-Johnsen's [8] models are classic references. Another classic model is the voter model [4, 9], in which each agent holds a binary opinion, 0 or 1, and at each time step chooses one of its neighbors uniformly at random and adopts that neighbor's opinion as its own. Other works based on the voter model incorporate stubborn agents [12] and biased agents [10]. Moreover, in recent years a growing literature has addressed the manipulation of opinions in social networks [6, 7, 11].

In this work, we are interested in the most efficient use of a budget over time to manipulate a social network. The idea is to promote an opinion by paying agents to supplant their true opinions. We model opinions as binary values, with 1 (0) representing a supportive (non-supportive) opinion. We frame the problem of designing sequential payment strategies as a discounted Markov decision process (DMDP). DMDPs have been widely used to formulate decision-making problems in science and engineering (see, e.g., [1, 2, 3]); one of their main applications is the computation of optimal decisions (i.e., actions) over time that maximize the expected reward (equivalently, minimize the expected cost). We first focus on a fully connected network in which agents change their opinions according to a voter model. We provide the corresponding Bellman equations, give a structural characterization of the associated value function and the optimal payment strategy, and compute the optimal payment via backward dynamic programming; a worked example shows how to solve the stated problem in practice.
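The voter-model dynamics described above can be sketched in a few lines. This is a minimal illustrative simulation, not code from the paper: it assumes a fully connected network, synchronous updates, and a uniformly random "neighbor" (any other agent) for each update.

```python
import random

def voter_step(opinions):
    """One synchronous voter-model step on a fully connected network:
    each agent picks another agent uniformly at random and adopts
    that agent's opinion as its own."""
    n = len(opinions)
    snapshot = opinions[:]  # all agents update from the same snapshot
    new = opinions[:]
    for i in range(n):
        j = random.choice([k for k in range(n) if k != i])
        new[i] = snapshot[j]
    return new

# Run until consensus (the absorbing states of the voter model) or a step cap.
opinions = [random.randint(0, 1) for _ in range(20)]
for _ in range(500):
    opinions = voter_step(opinions)
    if len(set(opinions)) == 1:
        break
```

On a finite connected network the voter model reaches consensus (all 0 or all 1) with probability one, which is what a payment strategy would try to bias toward opinion 1.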
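To make the DMDP formulation and the backward dynamic programming concrete, the following is a hedged toy sketch, not the paper's exact model. It assumes the state is the number s of agents truly holding opinion 1, the action a is how many 0-holders are paid to display opinion 1 during the step (at a hypothetical per-agent cost and per-step budget), the per-step reward is s minus the payments, and the transition kernel is a simplified asynchronous voter step (one random agent updates by copying a random other agent's displayed opinion).

```python
def backward_induction(n, horizon, gamma=0.9, cost=0.5, budget=3):
    """Backward dynamic programming for a toy payment DMDP on a fully
    connected network of n agents.  Returns the value function at
    time 0 and the time-dependent optimal payment policy.  The
    transition probabilities are an illustrative approximation, not
    the paper's exact kernel."""
    V = [0.0] * (n + 1)                      # terminal values
    policy = [[0] * (n + 1) for _ in range(horizon)]
    for t in reversed(range(horizon)):
        V_new = [0.0] * (n + 1)
        for s in range(n + 1):
            best_q, best_a = float("-inf"), 0
            for a in range(min(budget, n - s) + 1):
                # One random agent updates by copying a random other
                # agent's *displayed* opinion; s + a agents display 1.
                # The count of displayed 1s among the n-1 others is
                # capped at n - 1 (the updater excludes itself).
                p_up = ((n - s) / n) * min(s + a, n - 1) / (n - 1)
                p_down = (s / n) * (n - s - a) / (n - 1)
                p_stay = 1.0 - p_up - p_down
                ev = (p_up * V[min(s + 1, n)]
                      + p_down * V[max(s - 1, 0)]
                      + p_stay * V[s])
                # Bellman backup: immediate reward plus discounted
                # expected continuation value.
                q = s - cost * a + gamma * ev
                if q > best_q:
                    best_q, best_a = q, a
            V_new[s] = best_q
            policy[t][s] = best_a
        V = V_new
    return V, policy

V, policy = backward_induction(n=10, horizon=20)
```

The inner maximization is the right-hand side of the Bellman equation; sweeping t backwards from the horizon yields a finite-horizon approximation of the discounted problem. Inspecting `policy` shows, for each time and state, how many agents the sketch would pay.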