Multi-agent ordering
Given a set of agents $A$ and a set of entities $E$, agents independently provide preferences between pairs of entities for a subset of the possible pairs in $E \times E$. From these preferences, we wish to form a poset that contains the entities in order of least preferred to most preferred.
An agent $a$ may prefer an individual pair $(x, y)$ in the following ways: $p_a(x, y) = 1$, the agent prefers $x$ to $y$; $p_a(x, y) = 0$, the agent is indifferent; $p_a(x, y) = -1$, the agent prefers $y$ to $x$. Let $G_a$ be the undirected graph that represents the preferences of an individual agent.
To form a consensus we are interested in the aggregate of all agents' graphs, $G$. Let the adjacency matrix $M$ represent $G$. In doing so, we may define $M_{x,y}$ in the following way: $M_{x,y} = \sum_{a \in A} p_a(x, y)$. It should be evident that $M_{x,y} = -M_{y,x}$ and $M_{x,x} = 0$. We wish to simplify $G$ by turning it into a directed graph whereby all negative entries are removed: each positive entry $M_{x,y} > 0$ becomes a directed edge from $x$ to $y$ with weight $M_{x,y}$. We shall define $G'$ to be this directed graph.
Let $d^{\mathrm{in}}(v)$ be the sum of weights into vertex $v$, $d^{\mathrm{out}}(v)$ be the sum of weights out of vertex $v$, and $S$ be the set of tuples $\bigl(d^{\mathrm{out}}(v),\, d^{\mathrm{in}}(v)\bigr)$ for each vertex in $G'$. Thus, let $S$, along with the relation: increasing first element, then by decreasing second element, be the poset $P$ of entities ordered from least preferred to most preferred.
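As a concrete illustration, here is a minimal Python sketch of the construction above; the names consensus_matrix and order_entities are mine, and the optional weights argument anticipates the truthfulness term introduced later.

```python
from collections import defaultdict
from typing import Dict, List, Optional, Tuple

# A preference p_a(x, y): +1 means a prefers x to y, 0 indifferent, -1 prefers y to x.
Preference = Tuple[str, str, int]  # (x, y, value)

def consensus_matrix(prefs_by_agent: Dict[str, List[Preference]],
                     weights: Optional[Dict[str, float]] = None) -> Dict[Tuple[str, str], float]:
    """M[x, y] = sum over agents a of w_a * p_a(x, y); antisymmetric with zero diagonal."""
    weights = weights or {}
    M = defaultdict(float)
    for agent, prefs in prefs_by_agent.items():
        w = weights.get(agent, 1.0)
        for x, y, p in prefs:
            M[(x, y)] += w * p
            M[(y, x)] -= w * p  # antisymmetry: M[x, y] = -M[y, x]
    return M

def order_entities(M: Dict[Tuple[str, str], float]) -> List[str]:
    """Return entities from least preferred to most preferred."""
    # G': keep only the positive entries of M as directed edges x -> y with weight M[x, y].
    d_in, d_out = defaultdict(float), defaultdict(float)
    entities = set()
    for (x, y), m in M.items():
        entities.update((x, y))
        if m > 0:
            d_out[x] += m  # weight out of x: evidence that x is preferred to others
            d_in[y] += m   # weight into y: evidence that others are preferred to y
    # Increasing out-weight, then decreasing in-weight: least preferred comes first.
    return sorted(entities, key=lambda v: (d_out[v], -d_in[v]))
```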
Consider the following example: on a merchant's website, customers independently provide comparisons between products that the merchant sells. The merchant wishes to know his customers' preferences about his products.
Customers have provided a set of pairwise preferences between the products.
The consensus graph $G$ is produced from these preferences, followed by the directed graph $G'$, from which $S$ is computed; $S$, when used to form $P$, yields the ordering of the products. The merchant can infer from this analysis that the customers' least preferred product is 6, and the most preferred product is 5.
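Using the sketch above with some hypothetical comparisons (the original table of customer preferences is not reproduced here; this data is simply chosen so that product 6 comes out least preferred and product 5 most preferred):

```python
# Hypothetical customer comparisons, not the post's original data.
prefs = {
    "customer 1": [("5", "2", 1), ("2", "6", 1)],  # prefers 5 to 2 and 2 to 6
    "customer 2": [("5", "6", 1), ("3", "6", 1)],
    "customer 3": [("2", "3", 0), ("5", "3", 1)],  # indifferent between 2 and 3
}
M = consensus_matrix(prefs)
print(order_entities(M))  # e.g. ['6', '3', '2', '5']: 6 least preferred, 5 most preferred
```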
Now, this model assumes that the agents are being truthful in their preferences. It is conceivable that a consortium of agents could force an entity to become the least or most preferred entity by providing false preferences. This premise presents two questions: how do we identify fallacious behavior, and how do we prevent it? The answer to the latter is simple: we add an additional term $w_a \in [0, 1]$ when constructing $M$, so that $M_{x,y} = \sum_{a \in A} w_a\, p_a(x, y)$, where $w_a = 1$ indicates that the agent provides factual preferences and $w_a = 0$ indicates non-factual preferences. The former question is more involved and ties into calculating $w_a$.
Let's consider two plausible hypotheses: a single agent tells the truth, or does not tell the truth, about a single comparison $(x, y)$. An agent is telling the truth about a single comparison if his preference $p_a(x, y)$ is identical to the majority preference $\operatorname{sgn}(M_{x,y})$. An agent is not telling the truth if his preference is not identical to the majority preference. Let $m_a(x, y) = 1$ when $p_a(x, y) = \operatorname{sgn}(M_{x,y})$, and $m_a(x, y) = 0$ otherwise, formalize this membership. Thus, we may define $w_a = \frac{1}{\lvert C_a \rvert} \sum_{(x, y) \in C_a} m_a(x, y)$, where $C_a$ is the set of pairs that agent $a$ has compared, as the probability that $a$'s comparisons are majority comparisons.
This formalization assumes that there is an objective comparison; naturally, it would not hold for subjective comparisons.
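This calculation can be sketched as follows, continuing the earlier code; agent_weights is an illustrative name, and a preference counts toward the majority when it matches the sign of the corresponding consensus entry.

```python
def sign(v: float) -> int:
    """Sign of a consensus entry: +1, 0 or -1."""
    return (v > 0) - (v < 0)

def agent_weights(prefs_by_agent, M):
    """w_a: fraction of agent a's comparisons whose preference matches sgn(M[x, y])."""
    weights = {}
    for agent, prefs in prefs_by_agent.items():
        if not prefs:
            weights[agent] = 1.0  # no comparisons: nothing contradicts the majority
            continue
        agree = sum(1 for x, y, p in prefs if p == sign(M.get((x, y), 0.0)))
        weights[agent] = agree / len(prefs)
    return weights
```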
Consider the following example: seven robots are asked to order a collection of five blocks from smallest to largest, and the result is an incorrectly ordered list. We wish to identify the defective robot(s), remove their preferences, and reproduce the corrected list. The following table summarizes each robot's preferences, reading from the top left to the right, then down to the bottom.
Going through the process above, we arrive at an ordering that is incorrect, so we look at the values of $w_a$ for this first iteration. The robots with the lowest values of $w_a$ are more likely than their peers to deviate from the established majority. So, applying these values, we continue to perform this process iteratively, recomputing $M$ and $w_a$, until we arrive at the correct ordering. This process can be visualized through the iterations of $G'$ below. Edge weights represent the consensus: thicker lines for the majority, thinner for the minority.
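The iterative process can be sketched as a simple fixed-point loop over the two functions above; the iteration cap and stopping tolerance are illustrative choices rather than part of the original description.

```python
def iterate_ordering(prefs_by_agent, max_iters=20, tol=1e-9):
    """Recompute M and w_a until the agent weights stop changing, then order the entities."""
    weights = {a: 1.0 for a in prefs_by_agent}  # first iteration: every agent fully trusted
    for _ in range(max_iters):
        M = consensus_matrix(prefs_by_agent, weights)
        new_weights = agent_weights(prefs_by_agent, M)
        if all(abs(new_weights[a] - weights[a]) < tol for a in weights):
            break
        weights = new_weights
    M = consensus_matrix(prefs_by_agent, weights)
    return order_entities(M), weights
```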
While not previously alluded to, it may take several iterations to achieve a correct ordering. This raises a few questions: under what conditions does a correct ordering exist, how many iterations will it take to reach it, and is the number of iterations finite? The last question is left to the reader.
As a thought experiment, it is evident that there have to be at least $\lvert E \rvert - 1$ preferences for a correct ordering to be produced: starting at any one vertex, there must exist a path to every other vertex; otherwise, a consensus cannot be achieved because $G$ could be decomposed into a set of disjoint subgraphs and no ordering over all entities is possible.
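One way to test this condition is to check that the comparison graph is connected when edge directions are ignored; the helper below is again an illustration rather than anything from the original description.

```python
def is_connected(prefs_by_agent) -> bool:
    """True if every compared entity can reach every other, ignoring edge direction."""
    adjacency = defaultdict(set)
    for prefs in prefs_by_agent.values():
        for x, y, _ in prefs:
            adjacency[x].add(y)
            adjacency[y].add(x)
    if not adjacency:
        return True
    start = next(iter(adjacency))
    seen, stack = {start}, [start]
    while stack:
        for u in adjacency[stack.pop()]:
            if u not in seen:
                seen.add(u)
                stack.append(u)
    return len(seen) == len(adjacency)
```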
If there is an objective order $O$, it is still possible to arrive at a separate order $O'$ after successive iterations if the majority preferences are all incorrect with respect to $O$. Thus, we cannot guarantee that the process will converge to $O$. If the majority preferences are correct with respect to $O$, then convergence to $O$ will be achieved as the weight of the dissenting agents' preferences tends towards zero.