Multi-agent ordering

Given a set of agents A = \{ a_{0}, a_{1}, \dotsc, a_{n} \} and a set of entities V = \{ v_{0}, v_{1}, \dotsc, v_{m} \}, each agent independently provides preferences I_a \colon V^{2} \to \{ -1, 0, +1\} over a subset S_{a} of the possible E = V^{2} pairs of entities. From these preferences, we wish to form a poset that orders the entities from least preferred to most preferred.

An agent expresses a preference over an individual pair e_{i,j} in one of three ways: -1, the agent prefers v_{j} to v_{i}; 0, the agent is indifferent; +1, the agent prefers v_{i} to v_{j}. Let G_{a} = (V, S_{a}) be the undirected graph representing the preferences of an individual agent.

To form a consensus we are interested in G = \sum_{a \in A} G_{a}. Let M be the adjacency matrix representing G, defined by M_{i,j} = \sum_{a \in A} I_a (e_{i,j}). It should be evident that M_{i,i} = 0 and M_{i,j} = -M_{j,i}. We simplify M by removing all negative entries, yielding a weighted directed graph with adjacency matrix N_{i,j} = \max{(0, M_{i,j})}.
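To make the construction concrete, here is a minimal Python sketch (mine, not part of the original description) of building M and N. Representing each agent's preferences as a dictionary mapping an ordered pair (i, j) to I_a(e_{i,j}) is my own assumption.

```python
def consensus_matrix(preferences, num_entities):
    """M[i][j] = sum over agents of I_a(e_{i,j}); antisymmetric with zero diagonal.

    `preferences` is assumed to be a list with one dict per agent, each mapping
    an ordered pair (i, j) to a vote in {-1, 0, +1}, with each pair listed once."""
    M = [[0] * num_entities for _ in range(num_entities)]
    for prefs in preferences:
        for (i, j), vote in prefs.items():
            M[i][j] += vote
            M[j][i] -= vote  # maintain M[j][i] = -M[i][j]
    return M


def directed_matrix(M):
    """N[i][j] = max(0, M[i][j]): keep only the net (winning) direction."""
    return [[max(0, x) for x in row] for row in M]
```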

Let deg_{j}^{in} = \sum_{i = 0}^{m} N_{i, j} be the sum of weights into vertex v_{j}, deg_{i}^{out} = \sum_{j = 0}^{m} N_{i,j} be the sum of weights out of vertex v_{i}, and \delta_i = (deg_{i}^{in}, deg_{i}^{out}) be the resulting tuple for each vertex in V. Ordering the vertices by decreasing first element of \delta_i, then by increasing second element, gives the poset P of entities ordered from least preferred to most preferred.
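Continuing the sketch, the degree tuples \delta_i and the resulting ordering might be computed as follows; the sort key (decreasing in-degree, then increasing out-degree) follows the worked example below.

```python
def preference_order(N):
    """Return entity indices ordered from least to most preferred:
    sort by decreasing in-degree weight, ties broken by increasing out-degree weight."""
    n = len(N)
    deg_in = [sum(N[i][j] for i in range(n)) for j in range(n)]
    deg_out = [sum(N[i][j] for j in range(n)) for i in range(n)]
    # delta_i = (deg_in[i], deg_out[i])
    return sorted(range(n), key=lambda i: (-deg_in[i], deg_out[i]))
```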

Consider the following example: on a merchant’s website, customers A = \{a,b,c,d,e,f\} independently provide comparisons between products that the merchant sells. The merchant wishes to know his customers’ preferences about his products V = \{1,2,3,4,5,6\}.

Customers have provided the following set of preferences:

[Table: customers' pairwise product preferences]

From these preferences we produce the consensus graph, then the directed graph, and finally compute \delta = \begin{pmatrix} (1,4) & (4,3) & (2,0) & (1,4) & (0,2) & (4,0) \end{pmatrix}, which, when used to form P, yields P = \{6,2,3,1,4,5\}. The merchant can infer from this analysis that his customers' least preferred product is 6 and their most preferred product is 5.
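As a quick check, sorting the \delta tuples from the example with the same key reproduces P (breaking the tie between products 1 and 4 by product index is my own assumption):

```python
delta = {1: (1, 4), 2: (4, 3), 3: (2, 0), 4: (1, 4), 5: (0, 2), 6: (4, 0)}
P = sorted(delta, key=lambda v: (-delta[v][0], delta[v][1]))
print(P)  # [6, 2, 3, 1, 4, 5] -- least preferred first
```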

Now, this model assumes that the agents are being truthful in their preferences. It is conceivable that a consortium of agents could force an entity to become the least or most preferred entity by providing false preferences. This premise presents two questions: first, how do we identify fallacious behavior, and second, how do we prevent it? The answer to the latter is simple: we add an additional term \tau_a \in [0,1] when constructing M_{i,j} = \sum_{a \in A} (1 - \tau_a) I_a (e_{i,j}), where 0 indicates that the agent provides factual preferences and 1 indicates non-factual preferences. The former question is more involved and ties into calculating \tau_a.
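A sketch of the weighted construction, reusing the representation from the earlier sketch; the per-agent weights \tau are assumed to be supplied from elsewhere:

```python
def weighted_consensus_matrix(preferences, tau, num_entities):
    """M[i][j] = sum over agents of (1 - tau_a) * I_a(e_{i,j})."""
    M = [[0.0] * num_entities for _ in range(num_entities)]
    for prefs, t in zip(preferences, tau):
        for (i, j), vote in prefs.items():
            M[i][j] += (1.0 - t) * vote
            M[j][i] -= (1.0 - t) * vote
    return M
```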

Let's consider two plausible hypotheses: a single agent either tells the truth about a single comparison e or he does not. An agent is telling the truth about a comparison if his preference I_{a}(e) agrees with the majority preference \sum_{\alpha \in A} I_{\alpha}(e); otherwise, he is not. Let \phi_{a}(e) = 1 when I_{a}(e)\sum_{\alpha \in A} I_{\alpha}(e) < 0, and 0 otherwise, formalize this membership. Thus, we may define \displaystyle \tau_{a} = \frac{\sum_{e \in S_{a}} \phi_{a}(e)}{ \lvert S_{a} \rvert } as the fraction of a's comparisons that deviate from the majority.
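In code, \tau_a might be computed against the unweighted majority like so (a sketch reusing consensus_matrix from above):

```python
def trust_penalties(preferences, num_entities):
    """tau_a = fraction of agent a's comparisons that oppose the majority sign."""
    M = consensus_matrix(preferences, num_entities)
    tau = []
    for prefs in preferences:
        disagreements = sum(1 for (i, j), vote in prefs.items()
                            if vote * M[i][j] < 0)  # phi_a(e) = 1
        tau.append(disagreements / len(prefs) if prefs else 0.0)
    return tau
```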

This formalization assumes that there is an objective comparison; naturally, it would not hold for subjective comparisons.

Consider the following example: seven robots are asked to order a collection of five blocks from smallest to largest, and the result is an incorrectly ordered list. We wish to identify the defective robot(s), remove their preferences, and reproduce the corrected list. The following table summarizes each robot's preferences, read left to right, top to bottom.

[Table: robots' pairwise block-ordering preferences]

Going through the process above we arrive at an order of \{0, 1, 3, 2, 4\}, which is incorrect, so we look at the values for this first iteration: \tau_{a}^{(0)} = \begin{pmatrix}0 & 0.25 & 0 & 0 & 0 & 0.\overline{1} & 0\end{pmatrix}. Robots a_{1} and a_{5} are more likely than their peers to deviate from the established majority. Applying these values, we continue the process iteratively until we arrive at \{0,1,2,3,4\} using values of \tau_{a}^{(2)} = \begin{pmatrix}0 & 0.41\overline{6} & 0 & 0 & 0 & 0.\overline{2} & 0 \end{pmatrix}. This process can be visualized by the iterations of N below. Edge weights represent the consensus: thicker lines for the majority, thinner for the minority.

[Figure: iterations of N; thicker edges indicate stronger consensus]
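One plausible reading of the iterative scheme, reusing the helpers sketched earlier: recompute the weighted consensus with the current \tau, recompute \tau against that consensus, and stop once the ordering stabilizes. The stopping rule and iteration cap are my own assumptions, not something spelled out above.

```python
def iterate_ordering(preferences, num_entities, max_iters=10):
    """Iteratively down-weight dissenting agents until the ordering stabilizes."""
    tau = [0.0] * len(preferences)
    previous = None
    for _ in range(max_iters):
        M = weighted_consensus_matrix(preferences, tau, num_entities)
        order = preference_order(directed_matrix(M))
        if order == previous:
            break
        previous = order
        # Recompute tau_a against the current (weighted) consensus sign.
        tau = []
        for prefs in preferences:
            disagree = sum(1 for (i, j), vote in prefs.items() if vote * M[i][j] < 0)
            tau.append(disagree / len(prefs) if prefs else 0.0)
    return previous, tau
```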

While not previously alluded to, it may take several iterations to achieve a correct ordering. This raises a few questions: under what conditions will the correct ordering exist, how many iterations will it take to reach it, and is the number of iterations finite? The last is left to the reader.

As a thought experiment, it is evident that there must be at least \lvert V \rvert - 1 preferences for a correct ordering to be produced: starting at any one vertex, there must exist a path to every other vertex, otherwise a consensus cannot be achieved because G could be decomposed into a set of disjoint subgraphs and no ordering over all of V would be possible.
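A simple connectivity check over the undirected union of all comparisons captures this requirement; the representation matches the earlier sketches and assumes at least one entity:

```python
def is_connected(preferences, num_entities):
    """True if the undirected comparison graph connects all entities."""
    adjacency = {v: set() for v in range(num_entities)}
    for prefs in preferences:
        for (i, j) in prefs:
            adjacency[i].add(j)
            adjacency[j].add(i)
    seen, stack = {0}, [0]  # depth-first search from vertex 0
    while stack:
        v = stack.pop()
        for w in adjacency[v]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == num_entities
```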

If there is an objective order P^{\star}, it is still possible to arrive at a different order P^{\prime} after successive iterations if the majority preferences are incorrect with respect to P^{\star}. Thus, we cannot guarantee that the process will converge to P^{\star}. If the majority preferences are correct with respect to P^{\star}, then convergence will be achieved as the weight of dissenting agents' preferences tends towards zero.

Written by lewellen

2010-01-01 at 8:00 am

Posted in Graph Theory
