plenary lectures at ACS 2007: many approaches to modeling semantic memories have been proposed; they are not very useful in real-life applications because they lack knowledge comparable to the common sense that humans have, and they cannot be implemented in a computationally efficient way. [maybe in my paper some day] You have one, too, unless it broke and you haven't replaced it (understandable but unusual). Shouldn't you be using it for more than reheating coffee? ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ I'd decided to make a combination frittata and Spanish-style tortilla (impure, I know, but they're close enough), ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ I don't know about the vitamins (although Harold McGee makes the same point in his article today ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ on the science of microwaving), but in other respects, she's right. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ My conclusion to that point was, if you can steam it, you can microwave it. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Like Ms. Kafka's, it takes things too far. But that is understandable: When we want to prove a point, we become extreme. -------------------------------------------------------------------------------------------------- We've come to the point where we're not looking at the microwave only to simplify but to find what produces the best results,... -------------------------------------------------------------------------------------------------- [yet another] The textual record begins with "Mechanical Problems," ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ moves to Rome and then through the medieval Islamic world to the Renaissance. It ends, finally, with Newton, who described many of the basic laws of mechanics in the 17th century. => According to (Rote and Zhang
1996) the textual record of the jeep problem began with the 52nd problem in the "{\it Propositiones ad acuendos iuvenes}" (in Latin), attributed to Alcuin of York (AD 732-804), as a problem of a camel carrying grain in a desert, and ends with the currently popular form, a mathematical game for which we have an analytical solution. Rote, G. and G. Zhang (1996) "Optimal Logistics for Expeditions: the Jeep Problem with Complete Refilling." Bericht No. 71. pp ???-???. There are a surprising number of old, and extremely old, scientific texts that have survived the ravages of time in one form or another. The Archimedes Web site lists far more than 100, including Euclid's geometry, Hero of Alexandria's Roman-era technical manual on crossbows and catapults, medieval treatises on algebra and mechanics by Jordanus de Nemore and Galileo's 17th-century defense of a heliocentric solar system. "Mechanical Problems" arrived later in the Renaissance, along with Greek copies of Aristotle's works, rediscovered in libraries, monasteries and other Middle East repositories. It inspired many commentaries by Renaissance scholars and was read by Galileo and other theorists. Indeed, "Mechanical Problems" is in many respects as useful today as it was 2,500 years ago, as anyone who has twiddled the weights on a health club scale can attest. [yet another] What sparked his interest, Mr. Paglen recalled, were Vice President Dick Cheney's remarks as the Pentagon and World Trade Center smoldered. On "Meet the Press," he said the nation would engage its "dark side" to find the attackers and justice. "We've got to spend time in the shadows," Mr. Cheney said. "It's going to be vital for us to use any means at our disposal, basically, to achieve our objective."
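Returning to the jeep problem mentioned earlier: the analytical solution of the game has a well-known closed form, sketched below in Python. The function name and interface are mine; the facts encoded are the classic results that, with $n$ tankfuls of fuel and a tank of capacity 1, the maximum one-way distance is $1 + 1/3 + 1/5 + \cdots + 1/(2n-1)$ and the maximum round-trip distance is $1/2 + 1/4 + \cdots + 1/(2n)$.

```python
from fractions import Fraction

def jeep_max_distance(n, round_trip=False):
    """Maximum distance reachable with n tankfuls of fuel (tank capacity = 1).

    One-way ("crossing the desert"): 1 + 1/3 + 1/5 + ... + 1/(2n-1).
    Round trip ("exploring the desert"): 1/2 + 1/4 + ... + 1/(2n).
    Exact rational arithmetic via fractions.Fraction.
    """
    if round_trip:
        return sum(Fraction(1, 2 * k) for k in range(1, n + 1))
    return sum(Fraction(1, 2 * k - 1) for k in range(1, n + 1))

print(jeep_max_distance(3))                   # 1 + 1/3 + 1/5 = 23/15
print(jeep_max_distance(3, round_trip=True))  # 1/2 + 1/4 + 1/6 = 11/12
```

Since the harmonic series diverges, either distance can be made arbitrarily large given enough fuel caches, which is the point of the puzzle.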
"Black World," a 2006 display of his photographs at Bellwether, a gallery in Chelsea, showed "anonymous-looking buildings in parched landscapes shot through a shimmering heat haze," Holland Cotter wrote in The New York Times, adding that the images "seem to emit a buzz of mystery as they turn military surveillance inside out: here the surveillant is surveilled." ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ His book explores this idea and seeks to decode the symbols. Many patches show the Greek letter sigma, which Mr. Paglen identifies as a technical term for how well an object reflects radar waves, a crucial parameter in developing stealthy jets. A patch from a Groom Lake unit shows the letter sigma with the "buster" slash running through it, as in the movie "Ghost Busters." "Huge Deposit -- No Return" reads its caption. Huge Deposit, Mr. Paglen writes, "indicates the bomb load deposited by the bomber on its target, while 'No Return' refers to the absence of a radar return, meaning the aircraft was undetectable to radar." [do not alwasy use "on the other hand"] in contrast, rated nothing higher than a (tactfully worded) "development opportunity." nyt Many a Shanghai dumpling gets slurped to the accompaniment of chat about superdelegates. The shift is not seismic but shows the world is looking beyond the Bush administration. ~~~~~~~~~~~~~ America-in-Asia remains a Japanese priority, ugly incidents at Okinawa notwithstanding. (in spite of) ~~~~~~~~~~~~~~~ iccnai population based search => done Thus,publishing a special issue to integrate multidisciplinary studies on BMI is very timely for catalyzing further development in this active field of research. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ => We hope this article will catalyze ... dice-throwing nature: Dr. Arkani-Hamed said concerning worries about the death of the Earth or universe, "Neither has any merit." 
He pointed out that because of the dice-throwing nature of quantum physics, there was some probability of almost anything happening. There is some minuscule probability, he said, "the Large Hadron Collider might make dragons that might eat us up." => icnnai title: 2-d jeep problem --- can a jeep survive in a desert? section 2 ... goal of this research The result solves a problem of indisputable difficulty in its field. => Techniques of genetic and evolutionary computation are being increasingly applied to difficult real-world problems - often yielding results that are not merely interesting, but competitive with the work of creative and inventive humans. => done reference -------------------------------------------------------------------------------------------------- w.pdf Kenneth O. Stanley, Bobby D. Bryant, and Risto Miikkulainen (2003) "Evolving Adaptive Neural Networks with and without Adaptive Synapses." Proceedings of the 2003 IEEE Congress on Evolutionary Computation (CEC-2003) pp ???-??? => done "A potentially powerful application of evolutionary computation (EC) is to evolve neural networks for automated control tasks. However, in such tasks environments can be unpredictable and fixed control policies may fail when conditions suddenly change. Thus, there is a need to evolve neural networks that can adapt, i.e. change their control policy dynamically as conditions change. In this paper, we examine two methods for evolving neural networks with dynamic policies. The first method evolves recurrent neural networks with fixed connection weights, relying on internal state changes to lead to changes in behavior. The second method evolves local rules that govern connection weight changes. The surprising experimental result is that the former method can be more effective than evolving networks with dynamic weights, calling into question the intuitive notion that networks with dynamic synapses are necessary for evolving solutions to adaptive tasks."
Though Stanley et al. wrote "environments can be unpredictable and fixed control policies may fail when conditions suddenly change. Thus, there is a need to evolve neural networks that can adapt, i.e. change their control policy dynamically as conditions change." In our paper => Though an empty desert will not suddenly change, what we want is not a deterministic behavior fixed after learning. further: "The first method evolves recurrent neural networks with fixed connection weights, relying on internal state changes to lead to changes in behavior." we are not interested in solutions which are static after applying a machine-learning technique or computational evolution. \begin{quote} "... fixed control policies may fail when conditions suddenly change. Thus, there is a need to ... change their control policy dynamically as conditions change." \end{quote} even if fixed control policies do not fail (since in our case conditions will not suddenly change, there is no need to change the control policy dynamically), we do not think this behaviour is intelligent. [here] {\it "Natural organisms are constantly faced with unforeseen circumstances and generally adapt to them very well. They can do it because their nervous systems are plastic, i.e. not fixed at birth. Thus, one way to achieve adaptive solutions is to evolve neural networks with plastic synapses, i.e. adaptive networks."} Well, let's look at their method in a little more detail. They quoted Floreano and Urzelai (2000), which reads "The local learning rules ... helped networks quickly alter their functionality, facilitating a policy transition from one task to another." Floreano, D., and Urzelai, J. (2000) "Evolutionary Robots with Online Self-Organization and Behavioral Fitness." Neural Networks, No. 13, pp. 4431--4434 (?). a policy change during the network's lifetime.
the connection weights could change over the network's lifetime according to different evolved rules at each connection. They confirmed {\it "... by evolving local Hebbian learning rules for specific connections in the network, the connection weights could change over the network's lifetime according to different evolved rules at each connection. The results indeed confirmed that networks with evolved plasticity were able to adapt to change in the environment."} A more interesting finding of theirs is: "The results indeed confirmed that networks with evolved plasticity were able to adapt to change in the environment. However, the experiments also yielded a surprising result: Recurrent networks with fixed connection weights could also solve the task. Moreover, the fixed-weight networks did so faster and more reliably than the adaptive networks. Floreano's local rules included a plain Hebbian rule, which strengthens the connection proportionally to correlated activation, and rules for weakening the connection when activations do not correlate." That way, only two parameters are needed to express a rule, keeping the search space to a minimum. Let $x$ and $y$ be the activities of the incoming and outgoing neurons, respectively, and $w_{\max}$ be the highest weight magnitude in the network. If the connection is excitatory, the change in weight magnitude can be expressed as Eq. (2), where $\eta_1$ is the Hebbian learning rate and $\eta_2$ is the decay rate, which controls how fast the connection weakens when the presynaptic node does not affect the postsynaptic node. Inhibitory connections adapt as Eq. (3). Thus they evolved chromosomes including $\eta_1$, $\eta_2$ and the weight values of all the connections in the network, like $((\eta_{11}, \eta_{21}, w_1), (\eta_{12}, \eta_{22}, w_2), \cdots, (\eta_{1n}, \eta_{2n}, w_n))$, assuming $n$ connections in total, where $(\eta_{1k}, \eta_{2k})$ are the $\eta_1$ and $\eta_2$ for the $k$-th connection.
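A minimal sketch of this two-parameter local rule and the chromosome layout described above. Since Eqs. (2)-(3) are not reproduced in these notes, the particular update formula below is an assumed stand-in (Hebbian strengthening on correlated activity, decay when the presynaptic node fires without driving the postsynaptic one), not the published rule; the function name and the clamping bound are mine.

```python
def update_weight(w, x, y, eta1, eta2, w_max=5.0):
    """One local update of an excitatory connection weight magnitude.

    x, y   : activities of the pre- and postsynaptic neurons, in [0, 1].
    eta1   : Hebbian learning rate (strengthens on correlated activity).
    eta2   : decay rate (weakens when the presynaptic node is active
             but the postsynaptic node is not).
    NOTE: assumed stand-in form; the notes cite Eqs. (2)-(3) without
    reproducing them.
    """
    delta = eta1 * x * y - eta2 * x * (1.0 - y)
    # Keep the magnitude in [0, w_max], as the rule bounds weights.
    return max(0.0, min(w_max, w + delta))

# Chromosome layout as described: one (eta1, eta2, w) triple per connection,
# i.e. ((eta_11, eta_21, w_1), ..., (eta_1n, eta_2n, w_n)).
chromosome = [(0.1, 0.05, 1.0), (0.2, 0.01, 0.5)]
```

With only two rule parameters per connection, the evolutionary search space stays small, which is exactly the point the notes make.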
They concluded: "The fixed-weight solutions exploited a clever strategy of switching their internal state, represented by recurrent connections, while adaptive evolution found more complex holistic solutions." -------------------------------------------------------------------------------------------------- Stolle and Precup~\cite{stol} also showed an interesting experiment exploiting reinforcement learning. Stolle, M. and D.~Precup (????) "Learning Options in Reinforcement Learning." We present an empirical study of robot navigation in which the environment is an empty gridworld with no obstacles. Markov Decision Processes. Goal: the agent will be asked to perform different goal-achievement tasks in an environment that is otherwise the same over time. Reinforcement Learning: => done the agent takes a sequence of primitive actions on a discrete, fixed time scale. On each time step, the agent observes the state of its environment and chooses an action. Then, one time step later, the agent receives a reward and the environment transitions to a next state. The transition to a new state occurs according to a probability that depends only on the current state and action, regardless of the path taken by the agent before the current state. A policy is defined as a probability distribution for picking actions in each state. The goal of the agent is to find a policy that maximizes the total reward received over time. ================================================================================================== In such tasks the correct policy depends on an aspect of the environment that varies randomly from one trial to the next and is not immediately observable but must be discovered through exploration. The results indeed confirmed that networks with evolved plasticity were able to adapt to change in the environment.
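The RL setup summarized above (observe state, choose action, receive reward, Markov transition, policy maximizing total reward) can be sketched as tabular Q-learning on the empty gridworld. All hyperparameters, the reward of -1 per step, and the goal position are illustrative choices of mine, not taken from Stolle and Precup.

```python
import random

def q_learning(size=5, goal=(4, 4), episodes=2000,
               alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-learning on an empty size x size gridworld.

    Reward is -1 per step until the goal is reached (then 0), so the
    agent learns shortest paths. Deterministic moves; walls clamp.
    """
    actions = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up
    Q = {(r, c): [0.0] * 4 for r in range(size) for c in range(size)}
    random.seed(0)
    for _ in range(episodes):
        s = (0, 0)
        while s != goal:
            # epsilon-greedy action selection
            if random.random() < eps:
                a = random.randrange(4)
            else:
                a = max(range(4), key=lambda i: Q[s][i])
            dr, dc = actions[a]
            ns = (min(size - 1, max(0, s[0] + dr)),
                  min(size - 1, max(0, s[1] + dc)))
            r = 0.0 if ns == goal else -1.0
            Q[s][a] += alpha * (r + gamma * max(Q[ns]) - Q[s][a])
            s = ns
    return Q

def greedy_path_length(Q, start=(0, 0), goal=(4, 4), size=5, cap=50):
    """Follow the greedy policy; returns the number of steps taken."""
    actions = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    s, steps = start, 0
    while s != goal and steps < cap:
        dr, dc = actions[max(range(4), key=lambda i: Q[s][i])]
        s = (min(size - 1, max(0, s[0] + dr)),
             min(size - 1, max(0, s[1] + dc)))
        steps += 1
    return steps
```

After training, the greedy policy walks the 8-step shortest path from (0, 0) to (4, 4); the point is that the learned policy is a lookup table over states, exactly the "probability distribution for picking actions in each state" described above (here collapsed to a deterministic argmax).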
However, the experiments also yielded a surprising result: Recurrent networks with fixed connection weights could also solve the task. Moreover, the fixed-weight networks did so faster and more reliably than the adaptive networks. Local Learning Rules: Each connection in an adapting network follows a rule that governs how its weight changes. The space of possible rules included a plain Hebbian rule, which strengthens the connection proportionally to correlated activation, and rules for weakening the connection when activations do not correlate. An important question for future research is whether fixed-weight recurrent networks can scale up to more difficult tasks, or whether there exist some tasks for which dynamic synapses and holistic solutions are necessary. Characterizing those situations where dynamic synapses provide an advantage will contribute to our general understanding of adaptation and help to explain its use in nature. (2) Markov Decision Processes (MDPs) and reinforcement learning (RL) The second environment is an empty gridworld with no obstacles. Although the environment does not contain bottleneck states, our approach still finds useful options, which essentially allow the agent to travel around the environment more quickly. the agent will be asked to perform different goal-achievement tasks in an environment that is otherwise the same over time. not a different task but the identical task, performed in a different way than before. In this paper, we assume that the agent is confronted with goal-achievement tasks, which take place in a fixed environment. This setup is quite natural when thinking about human activities. For instance, every day we might be cooking a different breakfast, but the kitchen layout is the same from day to day. prematurely converged to a meaningless solution. We will take into consideration how NN, FSM, etc. approach this issue. almighty genetic programming; it will also inspire some future scientists.
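The claim above that options let the agent "travel around the environment more quickly" even without bottleneck states can be illustrated with a toy macro-action. The option below is hand-coded purely for illustration (Stolle and Precup learn options; they do not hand-code them), and the function name is mine: it bundles many primitive moves into one decision.

```python
def apply_option(state, direction, size=5):
    """A hand-coded option on an empty gridworld: repeat one primitive
    move until hitting the wall.

    Returns (resulting_state, number_of_primitive_steps_taken).
    Illustrative only; learned options would come with an initiation
    set, an internal policy, and a termination condition.
    """
    dr, dc = direction
    r, c = state
    steps = 0
    while 0 <= r + dr < size and 0 <= c + dc < size:
        r, c = r + dr, c + dc
        steps += 1
    return (r, c), steps

# From (0, 0), two option-level decisions reach (4, 4), versus eight
# primitive-level decisions:
s, n1 = apply_option((0, 0), (1, 0))  # slide down to the wall -> (4, 0)
s, n2 = apply_option(s, (0, 1))       # slide right to the wall -> (4, 4)
```

Eight primitive steps are still executed (n1 + n2 == 8), but the agent only makes two decisions, which is why options speed up both planning and learning in such environments.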
I don't recall growing too excited about the old textbook problems involving locomotives lumbering at different velocities out of cities A and B. But I would have paid attention to two cars traveling 200 miles an hour separated by inches. [concluding remarks] to put it frankly, we have had no good results as of today. => done [From The Little Prince] => done -------------------------------------------------------------------------------------------------- He answered, "Oh, come on! You know!" as if we were talking about something quite obvious. And I was forced to make a great mental effort to understand this problem all by myself. => done A geographer is too important to go wandering about. He never leaves his study. But he receives the explorers there. He questions them, and he writes down what they remember. => done From a mountain as high as this one, he said to himself, I'll get a view of the whole planet and all the people on it... But he saw nothing but rocky peaks as sharp as needles. => done I made an exasperated gesture. It is absurd looking for a well, at random, in the vastness of the desert. But even so, we started walking. => done This domain requires adaptation because a fixed network cannot change its policy in midcourse. ==================== An FSM is OK but redundant and deterministic as well, so probabilistic ... quote at the head of a section: the Dan Brown scene where she searched for a switch in a perfectly dark room. OHP. human case -- when I beg pardon, one guy repeats the same phrase a little louder; another, more intelligent guy changes the phrase.