(Assuming multiple optimal behaviors exist: if an agent always produces one identical behavior out of the many, even if that behavior is optimal, can it be said to be intelligent? This should hold not only in a changing environment but also in a stable environment where adaptation is not necessary.) How can we get deep into a desert with limited fuel? 1) recurrent 2) adaptive => what if the jeep can go back to base to refuel?

R. Aharonov-Barki, T. Beker, and E. Ruppin, "Emergence of memory-driven command neurons in evolved artificial agents."

Fully recurrent ANN controllers. The life cycle of an agent lasts 150 time steps, each step consisting of one sensory reading, one network update, and one motor action.

==================================================================================================

Evolutionary Robots with On-line Self-Organization and Behavioral Fitness
Dario Floreano and Joseba Urzelai

The combination of evolution and learning is typically achieved by evolving neural controllers that learn with an off-the-shelf algorithm, such as reinforcement learning. ... Here we suggest evolving the adaptive characteristics of a controller instead of combining evolution with off-the-shelf algorithms. The method consists of encoding on the genotype a set of four local Hebb rules for each synapse, but not the synaptic weights, and letting the synapses use these rules to adapt their weights online, always starting from random values at the beginning of life. In other words, these controllers can rely less on genetically inherited invariants and must develop on the fly the connection weights necessary to achieve the task. However, as time goes on, the synapses start to change their values using the genetically specified rules every 100 ms (the time necessary for a full sensory-motor loop on the physical robot). Notice that synaptic adaptation occurs on-line while the robot moves and that the network self-organizes without external supervision or reinforcement signals. The controller used in these experiments is a fully-recurrent discrete-time neural network. (A sketch contrasting fixed-weight and plastic controllers in this spirit appears at the end of these notes.)

==================================================================================================

Spikes that count: rethinking spikiness in neurally embedded systems
Keren Saggie, Alon Keinan, Eytan Ruppin

==================================================================================================

Neural Processing of Counting in Evolved Spiking and McCulloch-Pitts Agents
Keren Saggie-Wexler, Alon Keinan, Eytan Ruppin

We evolve agents with two types of neurocontrollers: networks of McCulloch-Pitts neurons, and spiky networks of discrete-time integrate-and-fire neurons. The agent is equipped with a set of sensors, motors, and a fully recurrent neurocontroller. The goal of this study is to single out the effects of integration and memory in the integrate-and-fire model; this type of dynamics may be useful in performing tasks that require memory, such as the counting task. (A sketch of the two neuron models follows.)
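A minimal sketch of the two neuron update rules as I read them from the paper. The threshold of 0.05, the reset after firing, the zero refractory period, and the memory factor in [0, 1] are stated in the paper; the function names and the resting voltage of 0 are my own assumptions.

```python
import numpy as np

THRESHOLD = 0.05  # firing threshold used in all evolutions (per the paper)

def mcculloch_pitts_step(weights, inputs):
    """Memoryless neuron: fires iff the current weighted input exceeds the threshold."""
    return float(weights @ inputs > THRESHOLD)

def integrate_and_fire_step(weights, inputs, voltage, memory_factor):
    """Discrete-time integrate-and-fire neuron: the voltage integrates past
    activity, decayed by an evolved memory factor in [0, 1]; after firing it
    is reset to the resting voltage (assumed 0) with zero refractory period."""
    voltage = memory_factor * voltage + weights @ inputs
    if voltage > THRESHOLD:
        return 1.0, 0.0   # spike, then reset to resting voltage
    return 0.0, voltage   # no spike, the voltage persists to the next step
```

With a memory factor of 0 the integrate-and-fire update reduces to the McCulloch-Pitts one, so evolving the memory factors lets evolution choose, per neuron, how much temporal integration to use.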
Network updating is synchronous: in every time step a sensory reading occurs, network activity is updated, and a motor action is taken according to the resulting activity in the designated output neurons. That is, the mouth motor neuron should be open (a value of 0) in the (K − 1)th waiting step, and closed (a value of 1) in the Kth step, the state of this motor having no effect for the first K − 2 steps. Hence, in essence, the agent has to learn to count precisely to K. The agent has a limited life span of 150 sensorimotor steps. To facilitate the evolution of agents performing the counting task, waiting steps (steps in which the agent does not move or turn) are not counted as part of the life span. In both types of networks, a neuron fires if its voltage exceeds a threshold (set to 0.05 in all evolutions). After firing, the voltage of a spiky neuron is reset to the resting voltage, with zero refractory period. A genetic algorithm is used to evolve the synaptic weights W_ij (in the range [−1, 1]) of both types of networks. For the spiky neurocontrollers, the memory factors are evolved as well, in the range [0, 1], allowing different neurons to perform a different amount of integration over time.

done before 4.4

==================================================================================================

Evolving Adaptive Neural Networks with and without Adaptive Synapses
Kenneth O. Stanley, Bobby D. Bryant, Risto Miikkulainen

Thus, one way to achieve adaptive solutions is to evolve neural networks with plastic synapses, i.e. adaptive networks. Yet although synaptic plasticity facilitates such adaptation, recurrent networks with fixed weights can also respond differently in changing environmental conditions. Accordingly, we ran five 350-generation runs with fixed connection weights and five 500-generation runs with plastic synapses (the former runs took fewer generations to converge). All runs were free to utilize recurrent connections, but only the adaptive runs could utilize adaptive synapses. (See the fixed-vs-plastic sketch below.)
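To make the fixed-vs-plastic contrast concrete, here is a minimal sketch assuming a plain fully recurrent network: the fixed-weight controller keeps its evolved weights for life, while the plastic controller starts from random weights and updates every synapse at each sensory-motor step with a genetically selected local Hebb rule, in the spirit of Floreano and Urzelai's four-rule genotype. The rule set below is illustrative only (their exact rules differ, e.g. they self-limit weight growth), and all names are mine.

```python
import numpy as np

# Four illustrative local Hebb rules, loosely after Floreano & Urzelai's
# four-rule genotype; the (pre - 0.5) / (post - 0.5) offsets assume
# sigmoid activations in [0, 1].
HEBB_RULES = [
    lambda w, pre, post: (1.0 - w) * pre * post,       # plain Hebb, self-limiting
    lambda w, pre, post: post * (pre - 0.5),           # presynaptic-gated
    lambda w, pre, post: pre * (post - 0.5),           # postsynaptic-gated
    lambda w, pre, post: (pre - 0.5) * (post - 0.5),   # covariance-like
]

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def step_fixed(W, x):
    """Fixed-weight recurrent controller: the evolved W is used as-is for life."""
    return sigmoid(W @ x)

def step_plastic(W, x, rule_idx, eta=0.1):
    """Plastic controller: W is random at birth; at every sensory-motor step
    each synapse adapts with its own genetically selected Hebb rule."""
    y = sigmoid(W @ x)
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            W[i, j] += eta * HEBB_RULES[rule_idx[i, j]](W[i, j], x[j], y[i])
    np.clip(W, 0.0, 1.0, out=W)  # simplification: weights kept in [0, 1]
    return y
```

In this scheme the genotype carries rule_idx (plus, in the real model, learning rates and signs) rather than W itself, so every newborn controller must re-develop its weights online; a fixed-weight controller, by contrast, can only "adapt" through its recurrent activation dynamics, which is exactly the contrast these runs were set up to test.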