The explanation below is tentative. Later I will put a more elaborate PDF version on my web page. When we simulate robots in a grid world on a PC, we have to design two things.

[1] One is to design a world for the robots to explore. Some examples:
(1) A very simple grid with no obstacles such as corridors or walls. Two robots play like Tom and Jerry: Tom tries to catch Jerry, and Jerry tries not to be caught.
(2) A grid world where many obstacles and/or foods are located. A robot explores the world to find a specific position, which might be called a goal/exit (see jeferson.pdf -- I'll put it on my web page later).
(3) A simple grid like a desert, where a robot in its jeep should maximize its penetration distance with limited fuel.
(4) A road map of a small town, where the roads are not very complicated but have a couple of traffic signals. Multiple car robots move along the roads following the traffic signals. The task should be to avoid traffic jams.
(5) Etc.

[2] The other is how we make the robot(s) move. We have several options:
(1) A random walker.
(2) A finite state machine (see also jeferson.pdf).
(3) A neural network whose inputs are what the robot sees immediately in front of it (e.g. food, gas, obstacle, etc.) and whose output is an action (e.g. go straight, turn right/left, take the food, do nothing, etc.).
(4) A fuzzy controller (later I will find an appropriate paper and put it on my web page).
(5) Autonomous agents who learn how to move by reinforcement learning.
(6) Quantum robots (this would be too specific for you, but it is one of my current interests).

In any case, define a set of actions (go straight, turn right, take fuel, etc.) and specify how the robot chooses one of its actions (e.g. at random, by a genetic algorithm, by reinforcement learning, etc.).

So we have enormous choices. If your topic is still open internationally, let's write a paper together. Expecting your good job.

Akira
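To make the Tom-and-Jerry idea concrete, here is a minimal sketch of that world. Everything specific in it is my own assumption, not fixed by the description above: a 10x10 grid (SIZE), Jerry as a pure random walker, and Tom as a simple greedy pursuer that steps toward Jerry each turn.

```python
import random

SIZE = 10  # hypothetical grid size; the description above does not fix one

def random_step(pos):
    """Jerry is a random walker: move one cell in a random direction, clipped to the grid."""
    x, y = pos
    dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    return (min(max(x + dx, 0), SIZE - 1), min(max(y + dy, 0), SIZE - 1))

def greedy_step(pos, target):
    """Tom moves one cell along the axis with the larger distance to Jerry."""
    x, y = pos
    tx, ty = target
    if abs(tx - x) >= abs(ty - y):
        x += (tx > x) - (tx < x)
    else:
        y += (ty > y) - (ty < y)
    return (x, y)

def chase(max_steps=1000, seed=0):
    """Run the pursuit; return the step at which Tom catches Jerry, or None."""
    random.seed(seed)
    tom, jerry = (0, 0), (SIZE - 1, SIZE - 1)
    for step in range(max_steps):
        if tom == jerry:
            return step  # caught
        jerry = random_step(jerry)
        tom = greedy_step(tom, jerry)
    return None  # not caught within the limit
```

Swapping `greedy_step` for a learned or rule-based controller is exactly where the options in part [2] plug in.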
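Option (2), the finite state machine, can be written as a lookup table mapping (state, percept) pairs to (action, next-state) pairs. The two states ("WANDER", "FEED"), the percept names, and the action names below are all hypothetical placeholders I chose for illustration, not taken from jeferson.pdf.

```python
# Hypothetical two-state controller: wander until food is seen, then feed.
# Table: (current state, percept) -> (action, next state)
FSM = {
    ("WANDER", "food"):     ("take-food",   "FEED"),
    ("WANDER", "obstacle"): ("turn-right",  "WANDER"),
    ("WANDER", "clear"):    ("go-straight", "WANDER"),
    ("FEED",   "food"):     ("take-food",   "FEED"),
    ("FEED",   "obstacle"): ("turn-left",   "WANDER"),
    ("FEED",   "clear"):    ("go-straight", "WANDER"),
}

def run_fsm(percepts, start="WANDER"):
    """Feed a sequence of percepts through the table; return the actions taken."""
    state, actions = start, []
    for p in percepts:
        action, state = FSM[(state, p)]
        actions.append(action)
    return actions
```

For example, `run_fsm(["clear", "food", "obstacle"])` yields `["go-straight", "take-food", "turn-left"]`: the third action differs from what "WANDER" would do, because seeing the food changed the machine's state.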
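Option (5), reinforcement learning, can be sketched with tabular Q-learning on the simplest possible world: a one-dimensional corridor where the agent must learn to walk to the goal cell. The corridor length, rewards, and learning parameters below are assumptions of mine for the sake of a runnable example.

```python
import random

N = 5                # corridor length (cells 0 .. N-1); hypothetical
GOAL = N - 1
ACTIONS = [-1, +1]   # step left / step right

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning: Q[(state, action)] converges toward expected return."""
    random.seed(seed)
    Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            # epsilon-greedy action choice: mostly greedy, sometimes random
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: Q[(s, act)])
            s2 = min(max(s + a, 0), N - 1)
            r = 1.0 if s2 == GOAL else -0.01   # goal reward, small step cost
            best_next = max(Q[(s2, b)] for b in ACTIONS)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
    return Q
```

After training, acting greedily with respect to Q walks straight to the goal; the same loop works unchanged on a 2-D grid once the state is a pair (x, y) and the action set matches part [2].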