Abstract
This paper deals with game theory. The fascination of game theory emerges from the fact that it shows us how we cannot simply derive conclusions about outcomes in competitive settings from psychological facts about the competitors. The complete set of utility functions, along with specifications of the extent to which the agents are privy to one another's utility functions, determines the equilibrium strategies available to them. These and many more are studied in this work. Introduction The origin of games has been vaguely assigned to the inborn tendency of mankind to amuse itself. Games have no geographical boundaries, and game playing is found in all parts of the world, whether in the underdeveloped areas of Africa or in developed countries (Adeosun and Adetunde (2008)). The Babylonian Talmud is the compilation of ancient law and tradition set down during the first five centuries A.D. which serves as the basis of Jewish religious, criminal and civil law. In 1985 it was recognized that the Talmud anticipates the modern theory of cooperative games: each of its solutions corresponds to the nucleolus of an appropriately defined game. In a letter dated November 1713, James Waldegrave provided the first known mixed strategy solution to a two-person game. Waldegrave wrote the letter, about a two-person version of the card game le Her, to Pierre-Rémond de Montmort, who in turn wrote to Nicolas Bernoulli, including in his letter a discussion of the Waldegrave solution. Waldegrave's solution is a minimax mixed strategy equilibrium, but he made no extension of his result to other games, and expressed concern that a mixed strategy "does not seem to be in the usual rules of play" of games of chance. 
The first theorem of game theory asserts that chess is strictly determined, i.e. chess has only one individually rational payoff profile in pure strategies. This theorem was published by Zermelo (1913) in his paper "Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels" and hence is referred to as Zermelo's Theorem. Émile Borel published four notes on strategic games and an erratum to one of them. Borel gave the first modern formulation of a mixed strategy, along with finding the minimax solution for two-person games with three or five possible strategies. Initially he maintained that games with more possible strategies would not have a minimax solution, but by 1927 he considered this an open question, as he had been unable to find a counterexample. John von Neumann (1928) proved the minimax theorem in his article "Zur Theorie der Gesellschaftsspiele". It states that every two-person zero-sum game with finitely many pure strategies for each player is determined, i.e. when mixed strategies are admitted, this variety of game has precisely one individually rational payoff vector. The proof makes use of some topology and of functional calculus. This paper also introduced the extensive form of a game. F. Zeuthen's book "Problems of Monopoly and Economic Warfare" was published in 1930. In Chapter IV he proposed a solution to the bargaining problem which Harsanyi later showed is equivalent to Nash's bargaining solution. R.A. Fisher (1934) independently discovered Waldegrave's solution to the card game le Her. Fisher reported his work in the paper "Randomisation and an Old Enigma of Card Play". John von Neumann and Oskar Morgenstern's publication (1944) expounds two-person zero-sum theory; this book is also the seminal work in areas of game theory such as the notion of a cooperative game with transferable utility (TU), its coalitional form and its von Neumann-Morgenstern stable sets. 
It was also the account of axiomatic utility theory given here that led to its widespread adoption within economics. In January 1950, Melvin Dresher and Merrill Flood carried out, at the RAND Corporation, the experiment which introduced the game now known as the Prisoner's Dilemma. The famous story associated with this game is due to A.W. Tucker, "A Two-Person Dilemma" (memo, Stanford University). Howard Raiffa independently conducted unpublished experiments with the Prisoner's Dilemma. In four papers between 1950 and 1953, John Nash made contributions to both non-cooperative game theory and bargaining theory. In two papers, "Equilibrium Points in N-Person Games" (1950) and "Non-cooperative Games" (1951), Nash proved the existence of a strategic equilibrium for non-cooperative games - the Nash equilibrium - and proposed the "Nash program", in which he suggested approaching the study of cooperative games via their reduction to non-cooperative form. In his two papers on bargaining theory, "The Bargaining Problem" (1950) and "Two-Person Cooperative Games" (1953), he founded axiomatic bargaining theory, proved the existence of the Nash bargaining solution and provided the first execution of the Nash program. George W. Brown (1951) described and discussed a simple iterative method for approximating solutions of discrete zero-sum games in his paper "Iterative Solutions of Games by Fictitious Play". John Charles C. McKinsey (1952) published the first textbook on game theory, "Introduction to the Theory of Games". The notion of the Core as a general solution concept was developed by L.S. Shapley and D.B. Gillies ("Some Theorems on N-Person Games"). The Core is the set of allocations that cannot be improved upon by any coalition. Lloyd Shapley (1953), in his paper "A Value for N-Person Games", characterized, by a set of axioms, a solution concept that associates with each coalitional game a unique outcome. This solution is now known as the Shapley value. 
Also, Lloyd Shapley (1953), in his paper "Stochastic Games", showed that for the strictly competitive case, with future payoffs discounted at a fixed rate, such games are determined and have optimal strategies that depend only on the game being played, not on the history or even on the date, i.e. the strategies are stationary. Extensive form games allow the modeler to specify the exact order in which players have to make their decisions and to formulate assumptions about the information possessed by the players in all stages of the game. H.W. Kuhn's (1953) paper "Extensive Games and the Problem of Information" includes the formulation of extensive form games which is currently used, as well as some basic theorems pertaining to this class of games. Differential games were developed by Rufus Isaacs in the early 1950s. They grew out of the problem of formulating and solving military pursuit games. The notion of a strong equilibrium was introduced by R.J. Aumann (1959) in the paper "Acceptable Points in General Cooperative N-Person Games". The relationship between Edgeworth's idea of the contract curve and the Core was pointed out by Martin Shubik (1959) in his paper "Edgeworth Market Games". One limitation of this paper is that Shubik worked within the confines of TU games, whereas Edgeworth's idea is more appropriately modeled as an NTU game. One of the first books to take an explicitly non-cooperative game-theoretic approach to modeling oligopoly was Martin Shubik's "Strategy and Market Structure: Competition, Oligopoly, and the Theory of Games". It also contains an early statement of the Folk Theorem. Near the end of this decade came the first studies of repeated games. The main result to appear at this time was the Folk Theorem. This states that the equilibrium outcomes of an infinitely repeated game coincide with the feasible and strongly individually rational outcomes of the one-shot game on which it is based. 
The development of NTU (non-transferable utility) games made cooperative game theory more widely applicable. Von Neumann and Morgenstern stable sets were investigated in the NTU context in the Aumann and Peleg (1960) paper "Von Neumann and Morgenstern Solutions to Cooperative Games without Side Payments". Karl Borch's (1962) paper "Automobile Insurance" indicates how game theory can be applied to determine premiums for different classes of insurance when the required total premiums for all classes are given. Borch suggests that the Shapley value will give reasonable premiums for all classes of risk. O.N. Bondareva (1963) established that for a TU game the core is non-empty if and only if the game is balanced. The reference translates as "Some Applications of Linear Programming Methods to the Theory of Cooperative Games". R.J. Aumann (1964) introduced and discussed the idea of the bargaining set in his paper "The Bargaining Set for Cooperative Games". The bargaining set includes the core but, unlike it, is never empty for TU games. Carlton E. Lemke and J.T. Howson, Jr. (1964) described an algorithm for finding a Nash equilibrium in a bimatrix game, thereby giving a constructive proof of the existence of an equilibrium point, in their paper "Equilibrium Points in Bimatrix Games". The paper also shows that, except for degenerate situations, the number of equilibria in a bimatrix game is odd. R. Selten (1965), in his article "Spieltheoretische Behandlung eines Oligopolmodells mit Nachfrageträgheit", introduced the idea of refinements of the Nash equilibrium with the concept of (subgame) perfect equilibria. Infinitely repeated games with incomplete information were born in a paper by R.J. Aumann and M. Maschler in 1966, titled "Game-Theoretic Aspects of Gradual Disarmament". In his paper "A General Theory of Rational Behaviour in Game Situations", John Harsanyi (1966) gave the now most commonly used definition distinguishing between cooperative and non-cooperative games. 
A game is cooperative if commitments - agreements, promises, threats - are fully binding and enforceable. It is non-cooperative if commitments are not enforceable. In the article "The Core of an N-Person Game", H.E. Scarf (1967) extended the notion of balancedness to NTU games, then showed that every balanced NTU game has a non-empty core. In a series of three papers, "Games with Incomplete Information Played by Bayesian Players", Parts I, II and III, John Harsanyi constructed the theory of games of incomplete information. This laid the theoretical groundwork for information economics, which has become one of the major themes of economics and game theory. William Lucas (1968), in his paper "A Game with no Solution", answered the long-standing question as to whether stable sets always exist: they do not. David Schmeidler (1969) introduced the nucleolus in his paper "The Nucleolus of a Characteristic Function Game". The nucleolus always exists, is unique, is a member of the kernel, and for any non-empty core is always in it. For a coalitional game to be a market game it is necessary that it and all its subgames have non-empty cores, i.e. that the game be totally balanced. In "Market Games", L.S. Shapley and Martin Shubik (1969) proved that this necessary condition is also sufficient. In 1972, Oskar Morgenstern founded the International Journal of Game Theory. John Maynard Smith (1972) introduced the concept of an evolutionarily stable strategy (ESS) to evolutionary game theory in an essay "Game Theory and the Evolution of Fighting". The ESS concept has since found increasing use within the economics (and biology) literature. In the traditional view of strategy randomization, the players use a randomizing device to decide their actions. John Harsanyi (1973) was the first to break away from this view with his paper "Games with Randomly Disturbed Payoffs: A New Rationale for Mixed Strategy Equilibrium Points". For Harsanyi, nobody really randomizes. 
The appearance of randomization is due to the payoffs not being exactly known to all; each player, who knows his own payoff exactly, has a unique optimal action against his estimate of what the others will do. The publication of R.J. Aumann and L.S. Shapley's (1974) "Values of Non-Atomic Games" deals with values for large games in which all the players are individually insignificant (non-atomic games). R.J. Aumann (1974) proposed the concept of a correlated equilibrium in his paper "Subjectivity and Correlation in Randomized Strategies". The introduction of trembling hand perfect equilibria occurred in the paper "Reexamination of the Perfectness Concept for Equilibrium Points in Extensive Games" by Reinhard Selten (1975). This paper was the true catalyst for the "refinement industry" that has developed around the Nash equilibrium. E. Kalai and M. Smorodinsky (1975), in their article "Other Solutions to Nash's Bargaining Problem", replaced Nash's independence of irrelevant alternatives axiom with a monotonicity axiom. The resulting solution is known as the Kalai-Smorodinsky solution. S.C. Littlechild and G.F. Thompson (1977) were among the first to apply the nucleolus to the problem of cost allocation, with the article "Aircraft Landing Fees: A Game Theory Approach". They used the nucleolus, along with the Core and the Shapley value, to calculate fair and efficient landing and take-off fees for Birmingham airport. R.J. Aumann (1981) published a survey of repeated games. This survey first proposed the idea of applying the notion of an automaton to describe a player in a repeated game. A second idea from the survey is to study the interactive behaviour of boundedly rational players by studying a game with an appropriately restricted set of strategies. These ideas have given birth to a large and growing literature. David M. Kreps and Robert Wilson (1982) extended the idea of subgame perfection to games of imperfect information. They call this extended idea of equilibrium sequential equilibrium. It is detailed in their paper "Sequential Equilibria". 
A. Rubinstein (1982) considered a non-cooperative approach to bargaining in his paper "Perfect Equilibrium in a Bargaining Model". He considered an alternating-offer game where offers are made sequentially until one is accepted. There is no bound on the number of offers that can be made, but there is a cost to delay for each player. Rubinstein showed that the subgame perfect equilibrium is unique when each player's cost of time is given by some discount factor delta. Following the work of Gale and Shapley, A.E. Roth (1984) applied matching theory to the assignment of medical interns to hospitals. In his paper "The Evolution of the Labour Market for Medical Interns and Residents: A Case Study in Game Theory" he found that American hospitals had developed, in the 1950s, a method of assignment that is a point in the core. For a Bayesian game the question arises as to whether or not it is possible to construct a situation for which there is no set of types large enough to contain all the private information that players are supposed to have. In their paper "Formulation of Bayesian Analysis for Games with Incomplete Information", J.F. Mertens and S. Zamir (1985) showed that it is not possible to do so. Following Aumann, the theory of automata is now being used to formulate the idea of bounded rationality in repeated games. One of the first articles to take this approach was A. Neyman's 1985 paper "Bounded Complexity Justifies Cooperation in the Finitely Repeated Prisoner's Dilemma".

A few games that have been programmed for play on digital computers are identified below, together with their rules:

1) Tic-Tac-Toe
Many special-purpose machines today play the game of Tic-Tac-Toe, and programs have been written for many digital computers. Charles Babbage conceived, as far back as the 1800s, the idea of playing Tic-Tac-Toe on a machine.

2) Go
This Japanese game is very popular among computer people. The game is played with black and white stones on a board containing 361 intersection points. 
The rules of Go are simple, and no complete mathematical theory of the game is known. It is estimated that there are around 10^172 different board positions possible during the course of a game, so it is easily seen that it would be impossible to calculate all the various board configurations. This is one of the reasons that Go is such an interesting game to play on a computer.

3) Pentominoes
A polyomino is a figure formed by joining unit squares along their edges. Pentominoes are five-square polyominoes, and it is possible to construct 12 different pentominoes. A pentomino game is played by arranging the 12 pentominoes into rectangular boxes of various sizes: 3 by 20, 4 by 15, 5 by 12, or 6 by 10. Computers have been used to generate many solutions to the pentomino game. In fact, a computer program found that there are two solutions for the 3 by 20 configuration, 1010 for the 5 by 12 configuration and 2339 for the most popular size, the 6 by 10 rectangular configuration.

4) Knight's Tour
The strange moves of the chess knight make its operations fascinating. The knight may move two rows up or down and one column left or right, or one row up or down and two columns left or right, on the chessboard. An interesting game is to move the knight to every square on the chessboard without landing on any square twice. There are many different tours, and digital computers have been used to determine many of them.

5) Go-Moku
This is a two-player game played on a 19 by 19 lined Go board. Each player has 180 stones and places them, on alternate moves, on an intersection of the board. The object is to obtain five adjacent stones in a row, either vertically, horizontally, or diagonally. The player doing this wins the game. Several computer programs have been written to play this game.

6) 15 Puzzle
It consists of a square box containing squares with the numbers 1 to 15 and one blank square. Any one of the numbers to the immediate right, left, top, or bottom of the blank square can be moved into the blank space. 
The object of the puzzle is to start with a specific number arrangement and finish with a different arrangement. There is one slight catch to the puzzle: there are 10,461,394,944,000 number arrangements that are impossible to obtain, and the same number of possible arrangements. A computer program of around 100 machine-language statements can determine whether a specified number arrangement of the 15 Puzzle is possible or impossible.

7) Nim
This is an ancient mathematical game. It is played by two people, or by one person and a computer, playing alternately. Before play starts, an arbitrary number of objects is put in an arbitrary number of piles, in no specific order. Then each player in his turn removes as many objects as he wishes from any pile (but from only one pile), taking at least one object. The player who takes the last object is the winner of the game.

8) Slot Machines
A computer can be used to simulate the operation of a slot machine. Instead of pulling the handle as one would do on a real slot machine, the action is started by pointing a light pen at a start position on the display console. The computer generates a three-symbol combination composed of the following symbols: cherries, oranges, melons, bars, bells, lemons and plums. This symbol combination, along with an indicated payoff, is displayed on the cathode ray tube of the display console. The computer system provides a printed listing of the slot's identification, the money invested in the machine by slot enthusiasts and the amount of payoff. The computer can easily keep track of the operation of several hundred slot machines.

9) Prime Numbers
An integer greater than one is called a prime number if and only if the only positive integers that exactly divide it are itself and the number one. How does one determine if a number is prime? One way is to write down a large number of integers and simply cross off the composite numbers (numbers that are divisible by numbers other than themselves and the number 1). 
This simple procedure is relatively easy to use when one wants to determine only a few prime numbers; however, it would be a rather lengthy operation to determine all the primes less than 200,000, or to determine whether 209267 is a prime number. A computer can easily determine if a number is prime by using a method similar to that of Eratosthenes. A computer was used, for example, to show that 2^11213 - 1, a 3376-digit number, is prime.

10) Magic Squares
Magic squares were known to the ancients and were thought to possess mystic and magical powers because of their unusual nature. These magic squares have little practical value; however, they provide stimulating problems for programmer training.

Other games that have been programmed for play on digital computers are: war games, Checkers, Chess, Blackjack, Roulette, etc.

==================================================================================================

2. TYPES OF GAMES
There are various types of gaming activities. The simplest type of game is one which has only two players, and where the gain of one is the loss of the other. Such a game is called a zero-sum two-person game.

2.1 Two-Person Game
A game that involves only two players is called a two-person game. Exactly two players are required to play this type of game at a time: one player cannot play it alone, and the number of players must not exceed two.

2.1.1 Zero-Sum Game
This is the type of game in which the sum of the gains and the losses equals zero. Here, the gains (payoffs) equal the losses (payoffs). If the gains are represented as positive values, the losses will be represented as negative values; both have the same magnitude.

2.1.2 Two-Person Zero-Sum Game
This is a game involving only two players in which there is just one play and the game is over. If both use their best strategies, one player will lose what the other gains, resulting in a zero sum when the payoffs to both are added together. 
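As a minimal sketch (not part of the original text, with an illustrative payoff matrix chosen here), the zero-sum property of sections 2.1.1 and 2.1.2 can be expressed directly in code: store the payoffs to player A, and player B's payoff for any pair of choices is then the negative of A's.

```python
# Sketch: a two-person zero-sum game stored as a payoff matrix for player A.
# Rows are A's courses of action, columns are B's courses of action.
# The entries below are illustrative, not taken from the text.
payoffs_to_A = [
    [ 3, -1],
    [-2,  4],
]

def payoff_to_B(i, j):
    """In a zero-sum game, B's gain is exactly A's loss."""
    return -payoffs_to_A[i][j]

# Check the zero-sum property for every combination of choices.
for i in range(2):
    for j in range(2):
        assert payoffs_to_A[i][j] + payoff_to_B(i, j) == 0
print("zero-sum property holds for all outcomes")
```

Because B's payoffs are fully determined by A's, a single matrix suffices to describe the whole game, which is why the solution methods in the following sections work on one payoff matrix only.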
2.1.3 N-Person Game
This is a game involving more than two players. This type of game does not give a zero-sum game; that is, the magnitudes of the gains and losses of the individual players need not balance. In fact, each player is rated according to his or her performance, and at the end the results are computed. The game may be played more than once before the results are computed, as against the two-person game.

2.1.4 Non-Zero-Sum Game
Any game whose payoffs do not sum to zero is called a non-zero-sum game. That is, the gains and losses of the players, when added together, give a non-zero result.

2.2 COMPETITIVE SITUATION
A competitive situation is called a game if it has, for example, the following properties:
a) There are a finite number of participants, called players.
b) Each player has a finite number of possible courses of action.
c) A play occurs when each player chooses one of his courses of action. (The choices are assumed to be made simultaneously, i.e., no player knows the choice of another until he has decided on his own.)
d) Every combination of courses of action determines an outcome which results in a gain to each player. (A loss is considered a negative gain.)

2.3 SOLUTION OF A GAME
The solution of a game involves finding:
a) The best strategies for both players.
b) The value of the game.
In this situation, both players use their best strategies, which are stable in the sense that neither player can increase his gain by deviating from his initial strategy once he becomes aware of his opponent's.

2.4 STRATEGY OF A PLAYER
The strategy of a player is the decision rule he uses to decide which course of action he should employ. This strategy may be a pure strategy or a mixed strategy.

2.4.1 Pure Strategy
A pure strategy is a decision always to select the same course of action.

2.4.2 Mixed Strategy
A mixed strategy is a decision to choose at least two courses of action with fixed probabilities; for example, 
if a player decides to use just two courses of action with equal probability, he might spin a coin to decide which one to choose. The advantage of a mixed strategy is that an opponent is always kept guessing as to which course of action will be selected on any particular occasion.

2.5 BEST STRATEGY
We define "best strategy" on the basis of the minimax criterion of optimality, explained below. This states that if a player lists the worst possible outcomes of all his potential courses of action, he will choose that course of action which corresponds to the best of these worst outcomes. The implication of this criterion is that the opponent is an extremely shrewd player who will ensure that, whatever course of action we pick, our gain is kept to a minimum.

2.6 VALUE OF A GAME
The value of a game is the expected gain of player A if both players use their best strategies.

2.6.1 Minimax Criterion of Optimality
The best strategy is defined on the basis of the minimax criterion of optimality. This states that if a player lists the worst possible outcomes of all his potential strategies, he will choose that strategy which corresponds to the best of these worst outcomes. The implication of this criterion is that the opponent is an extremely shrewd player who will ensure that, whatever our strategy, our gain is kept to a minimum.

2.7 STABLE SOLUTION
A stable solution can exist in terms of pure strategies only when the payoff matrix has a saddle point. If there is no such saddle point, the strategies are mixed strategies, and the problem becomes one of evaluating the probabilities with which each course of action should be selected. Consider the following game of Matching Pennies. Two players, A and B, each put down a penny. If the coins match, i.e. both are heads or both are tails, A collects them both; otherwise B collects them both. 
The payoff matrix for this game (payoffs to A) is given below:

            B: Heads   B: Tails
A: Heads       +1         -1
A: Tails       -1         +1

Intuitively, it can be seen that it is not a good plan for either player to decide in advance to play either of his pure strategies. Success in this game lies in attempting to anticipate the opposing player's course of action. A player could score over his opponent if he detected any pattern in his opponent's strategy or noticed that his opponent had a preference for either heads or tails. The opponent may only obviate such detection by selecting his courses of action at random, such that the probability of choosing either heads or tails is 1/2. Such a strategy may be represented as (1/2, 1/2). A player may employ this strategy, for example, by tossing the coin. If player A used this strategy he would win, on average, as often as he would lose, and his average or expected gain would be zero. This would be true whatever strategy player B adopted, whether he played heads throughout, tails throughout, or used the same strategy as A. If player A uses the strategy (1/2, 1/2) he cannot lose, on average, whatever B decides to do. Similar reasoning also holds for player B. As there is no strategy for either player which will ensure a positive gain, the strategy (1/2, 1/2) is the optimal strategy for both players according to the minimax criterion. The situation where both players use this strategy is stable in the sense that when either player realizes the other's strategy he has no incentive to change his own. This intuitive analysis affords a clue to the solution of games which do not have saddle points.

2.8 WEIGHTED AVERAGE OF THE POSSIBLE OUTCOMES
Consider the game with the following payoff matrix (payoffs to A), where A chooses a row and B chooses a column:

            B: I    B: II
A: I          a       b
A: II         c       d

If this game is to have no saddle point, the two largest elements of the matrix must constitute one of the diagonals. We assume that this is so and that, therefore, both players use mixed strategies. Our task is to determine the probabilities with which both players choose their courses of action. 
Let player A use his first course of action with probability x and, therefore, his second course of action with probability (1-x). Let player B's strategy, similarly, be (y, 1-y). The expected gain to A if B plays his course of action I throughout is ax + c(1-x). Similarly, the expected gain to A if B plays his course of action II throughout is bx + d(1-x). Thus A's expected gain if B plays (y, 1-y) is

E(x, y) = y[ax + c(1-x)] + (1-y)[bx + d(1-x)].

2.9 Informal Definition
A game is a set of acts by 1 to n rational Dennettian agents and possibly a non-rational Dennettian agent (a random mechanism) called nature, where at least one Dennettian agent has control over the outcome of the set of acts and where the Dennettian agents are potentially in conflict, in the sense that one Dennettian agent could rank outcomes differently from the others. A strategy for a particular Dennettian agent i is a vector that specifies the acts that i will take in response to all possible acts by the other agents. A Dennettian agent i is rational if and only if, for given strategies of the other agents, the set of acts specified by i's strategy secures the available consequence which is most highly ranked by i. Nature is a generator of probabilistic influences on outcomes: technically, it is the unique Dennettian agent in a game that is not rational. Dennettian agents: a Dennettian agent is a unit that acts, where an act is any move that potentially influences future allocations. A game may be represented either in extensive form, that is, using a "tree" structure of the sort that is familiar to decision theorists, where each player's strategy is a path through the tree, or in strategic form. A game in strategic form is a list (N, S, P), where:
- N is the set of players, and the index i designates a particular agent i.
- S is the strategy space for the agents, where Si is the set of possible strategies for i.
- P(s) is a vector of payoff functions, one for each agent, excluding player 0 (nature); 
each payoff function specifies the consequence, for the agent in question, of the strategies specified for all agents.

2.10 APPLICATIONS
Game theory has, of course, been extensively used in microeconomic analysis, where its record of accurate predictions has been impressive in areas such as industrial organization theory, the theory of the firm, and auction theory. In macroeconomics and political science its use has been more controversial, since in such applications it is often difficult to establish that the specified game is in fact an accurate representation of the empirical phenomenon being modeled. For example, it has been commonplace to suggest that the nuclear standoff between the United States and the Soviet Union during the Cold War was a case of the Prisoner's Dilemma. However, it is far from obvious that the leaderships of either country in fact attached to the outcomes the payoffs that the Prisoner's Dilemma requires of their utility functions. Game theory has also been fruitfully applied in evolutionary biology, where species and/or genes are treated as players.
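The solution procedure of sections 2.7 and 2.8 can be sketched in code. This is an illustrative sketch, not from the original text: it first tests for a saddle point (maximin equal to minimax), and otherwise applies the mixed-strategy formulas obtained by equating A's expected gains ax + c(1-x) and bx + d(1-x), which give x = (d-c)/(a-b-c+d), y = (d-b)/(a-b-c+d), and game value v = (ad-bc)/(a-b-c+d).

```python
# Sketch: solve a 2x2 two-person zero-sum game with payoff matrix
# [[a, b], [c, d]] (payoffs to A; A picks a row, B picks a column).

def solve_2x2(a, b, c, d):
    rows = [[a, b], [c, d]]
    maximin = max(min(row) for row in rows)        # best of A's worst outcomes
    minimax = min(max(col) for col in zip(*rows))  # best of B's worst outcomes
    if maximin == minimax:
        # Saddle point: stable solution in pure strategies, value = maximin.
        return {"value": maximin, "pure": True}
    # No saddle point: mixed strategies from equating A's expected gains
    # against each of B's pure strategies.
    D = a - b - c + d
    x = (d - c) / D          # probability A plays his first course of action
    y = (d - b) / D          # probability B plays his first course of action
    v = (a * d - b * c) / D  # value of the game
    return {"value": v, "x": x, "y": y, "pure": False}

# Matching Pennies from section 2.7: no saddle point, and the solver
# recovers the optimal strategy (1/2, 1/2) with game value zero.
print(solve_2x2(1, -1, -1, 1))
```

Applied to Matching Pennies (a = d = 1, b = c = -1), the formulas give x = y = 1/2 and v = 0, agreeing with the intuitive analysis in section 2.7.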