In multiagent systems (MAS), the outcome of an agent's actions usually depends on the actions of other agents, which may have conflicting goals. Moreover, since the other agents may be unknown and need not be benevolent, an agent generally cannot assume that others are willing to help without getting anything in return. If each agent simply took the actions that are individually best, the result would often be sub-optimal for all of them, as in the well-known prisoner's dilemma. Therefore, agents in a MAS need to negotiate over which actions each will take.
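The prisoner's dilemma mentioned above can be made concrete with a small sketch. The payoff numbers below are the standard textbook values, not taken from this text; they show that defecting is each agent's individually best action, yet mutual defection leaves both worse off than mutual cooperation.

```python
# Minimal prisoner's dilemma sketch (standard textbook payoffs, assumed here).
# payoffs[(my_action, other_action)] = (my_payoff, other_payoff)
C, D = "cooperate", "defect"
payoffs = {
    (C, C): (3, 3),  # mutual cooperation: good for both
    (C, D): (0, 5),  # I cooperate, the other defects: I am exploited
    (D, C): (5, 0),  # I defect, the other cooperates: I exploit
    (D, D): (1, 1),  # mutual defection: bad for both
}

def best_response(other_action):
    """The action maximizing my payoff, given the other's fixed action."""
    return max([C, D], key=lambda a: payoffs[(a, other_action)][0])

# Defecting is the best response to either action (a dominant strategy)...
assert best_response(C) == D and best_response(D) == D
# ...yet the resulting outcome (D, D) pays each agent less than (C, C),
# which is exactly why the agents would benefit from negotiating.
print(payoffs[(D, D)], "<", payoffs[(C, C)])  # → (1, 1) < (3, 3)
```

Negotiation mechanisms aim to let self-interested agents reach outcomes like (C, C) that individual best responses alone cannot sustain.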