done missing
--------------------------------------------------------
Akira Imada               03 07
Alexander Doudkin         -- -- 06 07
Alexander Kolesnikov      -- -- 04 08
Anastasia Papastergiou    03 04 09
Anatoly Sachenko          -- -- 13 18
Anna Samborska-Owczarek   10 13
Athanasios Hatzigaidas    03 04 05
Bora Kumova               05 09 17
Daichi Sirano             05 10 19 20 21 22 23 24 25
Helmut Mayer              02 08 11
Hubert Roth               -- -- 16 17
Irina Bausova             02 07
Izabela Rejer             03 15 17
Juan Santos
Lipo Wang                 06 15 16
Olga Kurasova             09 10 17
Poramate Manoonpong       02 14 24
Qiangfu Zhao              13 18
Saulius Maskeliunas       12
Sergey Listopad           05 11 15
Sevil Sen                 02 14
Sung-Bae Cho              10 15
Vincenzo Piuri            07 08 11
Vladimir Golovko          04 08 16 18
Vladimir Red'ko           06 14 16
Volodymyr Turchenko       09 12 14
Wieslaw Pietruszkiewicz   04 06 11
Xu Lisheng                08 12 18
Yuichi Sakumura           12 13
(62 as of 13 April 2010)
--------------------------------------------------------------------------------------------------
later done
-----------------------------------------------------------------
02 => 2 2 plus 2 -2 (sevil, helmut)
03 => 1 -3 1 1 => reject
04 => 1 2 2 2
05 => 2 2 1 -1
06 => 2 2 2
07 => 1 2 -2
08 => 2 2 2 plus 2 (helmut)
09 => 2 2 2 plus 2 (turchenko)
10 => 2 -2 2 2
11 => -1 2 -3 -2 => reject
12 => 1 1 1 plus 1 (turchenko)
13 => 3 2 2
14 => 0 0 0 plus 1 (turchenko)
15 => 3 2 2 1
16 => 2 -1 3
17 => 2 1 -2
18 => 0 1 2
(62)
19 => -1 cyrano
20 => -2 cyrano
21 => -1 cyrano
22 => 2 cyrano
23 => -2 cyrano
24 => 3 poramate 0 cyrano
25 => -2 cyrano
(8)
==================================================================================================
If you have a specific comment on a specific paper, please send me its txt.
==================================================================================================
First of all, we are very sorry for the delay in announcing the review results to the authors. We had planned to assign each of the submitted papers to four reviewers, so that the review would be fair and helpful in refining each paper toward its camera-ready version.
However, despite a one-week extension of the review deadline, we still have papers with fewer than four reviews as of today. We cannot wait any longer, so it is time to make a decision with the currently available review results. Thank you for your patience. We are now very happy to notify you that your paper has been accepted. Any review that arrives late will be sent to the authors, though it will not affect the decision. The camera-ready deadline was supposed to be 1 May, but we have decided to extend it to 7 May at the latest. After that, we would have a problem printing the conference proceedings in time for the start of the conference. So, hopefully before the extended deadline. We would appreciate it if you would check the author's instructions carefully once again. We changed them slightly: the title and the text of the Abstract should now be in bold font, and page numbers should be excluded. (The author's kit has already been updated.) Thank you in advance for your great cooperation. We look forward to your carefully prepared camera-ready version, and so do our proceedings. See you soon, Akira

PS Later we will send all of you the letter of invitation via both e-mail and surface mail, which will be necessary to apply for an entry visa to Belarus.

<only the text above was sent to the authors>
==================================================================================================
Subject: Comments later sent from other reviewers

Header: When we sent the notification to the authors, we could no longer wait for the results from the late reviewers. Some of the late reviews have now arrived. The results do not affect the acceptance/rejection decision, but we forward the comments here. We hope they help you revise the camera-ready version.
--------------------------------------------------------------------------------------------------
Applying genetic algorithm techniques together with neural networks is a promising approach for active queue management in networks using TCP. The authors present their approach and compare it with some other techniques.
- In the abstract, the authors should mention the application of genetic algorithms. The abstract should be improved.
- The related work could be more comprehensive. The authors should mention other research from the literature besides the works they used for comparison.
- The work being criticized (using BP) needs more analysis, instead of simply being labeled "wrong". I suggest the authors find a better way of criticizing that work.
- The authors have space. They could use it to extend sections 3 and 4. Furthermore, a better analysis of the results is needed. Please comment more on the results and add more explanation for figures 9, 10, and 11.
It is interesting research, but the authors need to improve the presentation of the work. In particular, they should explain the experimental results. The authors have enough space to extend the paper.
--------------------------------------------------------------------------------------------------
You present an approach to congestion control in TCP/IP networks using an artificial neural network (ANN) controller generated by an evolutionary algorithm. The topic is definitely interesting and within the scope of the conference, and the general structure of the paper is good; however, there are a number of serious flaws, which makes acceptance of the paper in its current form rather improbable. First, the linguistic quality is not sufficient. Many sentences read like a literal translation from your native language. Second, most approaches are described in a very condensed way without giving the reader any chance to understand the details.
It is a very good idea to present the conventional congestion control algorithms; however, you are just throwing all kinds of details at the reader without giving any explanations or general descriptions. Either describe them briefly without details, or present the details in a logical, structured, and precise manner. Third, as far as I can tell from the paper, you made only a single evolutionary run. As you know, an evolutionary algorithm is a non-deterministic method; only with a number of repetitions (runs) can you get a statistically sound result concerning the performance of your method. I will address some of these points in more detail below. In order to improve the quality of the paper I would like to suggest the following:
- thoroughly improve the language
- improve the description of the conventional algorithms and the ANN approach
- spell out abbreviations when you first use them
Some specific remarks (please excuse my shortcut style):
Abstract
- spell out TCP, RED, PID, etc.
(later you can use the abbreviations)
Intro
- "In reference [3]" -> "In [3]"
- you state that training a linear network with BP is "wrong"; it is not, as you certainly can train a linear network with BP. In general, a nonlinear network may have better performance, but it is not "wrong" to use BP for a linear network
- in English, capital letters are used only at the beginning of a sentence (or, e.g., for names)
2
- explain the model in equ (1); it is not enough to throw the equations at the reader
3
- as said before, either give only a short general description or a precise, detailed one; again, just writing down formulas and numbers is certainly not sufficient
4
- in essence you are using a feed-forward network with a single recurrent connection (which already makes it a recurrent network; you do not use these widely recognized terms). For your specific architecture, which is essentially an Elman/Jordan network, you can apply BP (through time)
- again, you do not explain the fitness function; also, here it is Q, whereas before it was q. I am not sure what exactly the error signal e is, and I do not understand why N is an input to the network when N is constant, as you suggest later in the experiment
- no details are given on the evolutionary algorithm: encoding, crossover method, mutation method, selection method, parameters like mutation probability, and so on
- you evolved the network with a constant queue size, yet later it also performs better with a varying queue size. This is a nice result and you should stress it more; also, discuss a bit more what you mean by "performing more effectively"
Good luck!
--------------------------------------------------------------------------------------------------
Dear author(s), you present an approach to function approximation using Bayesian regularization with orthogonal basis functions. Though I cannot check the correctness of all the derivations, the paper makes an excellent impression. It is well-organized, written in good English (for some problems, see below), and the algorithm is presented in a clear and precise manner. The only point of critique concerns section 6, where the examples and the corresponding results should be discussed in more detail. Also, some remarks should be made on the quality of the approximation compared to some standard methods.
Some specific remarks (please excuse my shortcut style):
Title
- better "Bayesian Regularization of Function Approximation Using OrthogonAlized BasEs"; the A is a typo, and the E is the plural, which is better here, or you could say "an orthogonalized basis"... this also points to the only language problem: you often use incorrect articles. As a rule of thumb, in the singular you use an article, e.g., "we analyze the/an example", while in the plural there is no article, "we analyze examples". Of course there are exceptions, but in most cases you will be correct using this simple rule
Abstract
- do not give citations in the abstract
Intro
- "able to produce(s)"
- "If there is (a) noise": well, here is the first exception ;-) The "a" would be correct by the rule above, but not with "noise", as it is in essence a singular word with plural meaning (I am no linguist, so there may be a more correct way to describe this situation)
- "Mathematically, ..."
- spell out EM = Expectation Maximization (I believe)
- "eightieths" -> "eighties"
- the last sentence is unclear; "constrainTs"
2
- I would suggest numbering all equations
- "choosing (in) the model H."
- "P(D" -> "P(D)"
3
- Let's = Let us
- "it is dependes" -> "it depends"
- the Z is not explained; it is quite complex, as can be seen later
- is there some rationale behind choosing \Omega? It is essential for P(h|H), and it is a more or less ad-hoc assumption for this prior, which may or may not be justified. But I guess this comes down to the old discussion on the problems of Bayesian statistics ;-)
4
- "Maximization of the total probability...": in this equation, where does the right side come from, and who chose the M? It is very close to equ (1)?
5
- just looking at equation (6), it must hold that L >= N, which you state later, but does it follow directly from equ (6)?
- equ (9) and later: what is S?
- last paragraph: some words got lost, and at first I did not get the meaning. OK, I think I get it now, but it should be rephrased
6
- "on A regular mesh"
- "velocity" could be omitted
- "for and 64" -> "for 16 and 64"
- in the case of non-orthogonal basis functions the computational cost seems to increase quite dramatically (as you have to compute the inverse of a matrix)?
- "Fourier basis...": words missing? In any case, you should explicitly give the functions, the Haar wavelet and the Fourier xy... if I remember correctly, the basis functions for Fourier analysis are orthogonal!? Also, L is missing
- in the last paragraph it is not clear whether you compare the approximation of your method with the approximation done by an RBF, or whether you use the basis functions of an RBF for your approximation
In any case, section 6 should be made more precise and all relevant details should be mentioned.
Good luck!
--------------------------------------------------------------------------------------------------
The paper is in the field of ICNNAI 2010. I ranked this paper as "weak accept" because too few experimental results on face detection and face recognition are presented. We can see some results in Table 2, but they are presented for only 45 test images, which is too few in my opinion. It is also not clear how many images are used for training the model. If this paper is accepted by the organizers, more results on more cases should be included in the experimental section. One possible solution for face detection, using a convolutional neural network and a cascade of neural network classifiers [see the references below], should be mentioned in the analysis of the state of the art, because this method showed a better detection rate than Viola and Jones and could therefore be used in the detection stage.
The conclusion "Our computer face recognition system is able to handle large amounts of data due to proposed "the biggest face" approach and regions that most likely contain eyes." is not proven in the paper.
1. Paliy I., Kurylyak Y., Sachenko A., Madani K., Chohra A. Improved Neural Network-based Face Detection Method using Color Images. Proceedings of the Third International Workshop on Artificial Neural Networks and Intelligent Information Processing (ANIIP 2007), Angers (France), 2007, pp. 107-114.
2. Paliy I., Sachenko A., Kurylyak Y., Boumbarov O., Sokolov S. Combined Approach to Face Detection for Biometric Identification Systems. Proceedings of the 5th IEEE International Workshop on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications, 21-23 September 2009, Rende (Cosenza), Italy, pp. 425-429.
--------------------------------------------------------------------------------------------------
Generally, the paper looks interesting. However, some important issues, from the point of view of neural network (NN) usage, are not addressed well. For example, it is not clear how the NN works. What is the architecture of the MLP? How many input, hidden, and output neurons are there? What is the training algorithm of the MLP? The authors mention that they use an adaptive number of hidden neurons, which is very general, but how many? How can we adapt them to the input task in the described case? We can try and then choose the number of hidden neurons needed for some task; this is a normal research issue, but it is completely missed in the paper. The main question: how do the authors prepare (preprocess) the bankruptcy data in order to use them with the NN? How do the authors convert the response of the MLP back into some explanation of the feature selection?
Finally, the sentence "The results of the experiments present the accuracy for experiments with 3-fold cross-validation or done on the training set (Figure 4) and (Figure 4 respectively)" mentions the same figure twice; this is a typo which should be corrected. I mark this paper "rather accept", and I suggest the Organizing Committee not accept it without a detailed explanation of the questions asked above.
--------------------------------------------------------------------------------------------------
This is a very general paper. The author(s) describe well-known models for the parallelization and distribution of evolutionary computations, but in my opinion the description is not very informative. They describe the models theoretically, "in words", giving the reader only a general impression of them. This material has already been described several times in other papers, so I doubt the usefulness of this paper. I mark this paper "weak accept" because the topic is very timely. I recommend the organizing committee accept this paper only if the authors add, for each of the described models (global, island, cellular, and hybrid), a numerical estimate of some parameters of the parallelization, for example the speed-up or the efficiency of parallelization. Such numerical estimates, collected from the referenced papers, would let the reader judge the technical characteristics of each model, and the value of the paper would therefore increase, which is good for the ICNNAI proceedings. The authors should also pay attention to the use of references: the paper has 22 references, and the first 10 are outdated. Moreover, the authors refer only to references 3, 4, 5, 6, and 22; the other references are not cited in the text of the paper. I recommend reducing the number of references and citing only one or two papers per model.
==================================================================================================
(1) What is in my mind currently is: No.
19, 20, 21, 22, and 24 will be accepted. No. 25 will be rejected because of its very poor presentation in addition to its unusually late submission. No. 23 will be rejected because of its problematic contents. As for No. 23, I have no intention of actually doing it, but what if I asked de Castro, for example, to review this paper? This would not be so abnormal. Not often, but once in a while, a journal or a conference sends me a submitted paper in which one of my old papers is cited. In most cases, the text is full of the results of "copy and paste" from my paper cited therein. Then I just send a simple review saying "It is plagiarism." Nevertheless, if your opinion is to accept EITHER 25 or 24, OR BOTH, then I can agree with you, reluctantly though. (2) I hope this e-mail reaches you during the daytime, and I will wait for your reply until 6 p.m. on Thursday. As you know, our mail server has not worked properly these past couple of days. As we are running out of time, even if there is no reply from you I will send the notification to each of the authors, except for Nos. 23 and 25, at 6 p.m. on Thursday. As for Nos. 23 and 25, however, I will not send any notification to the authors until I receive your opinion. Akira, except for No. 22
==================================================================================================
Dear Authors, First of all, it is our pleasure to announce that your paper has been accepted for presentation at the ICNNAI-2010 conference. We are very sorry for the delay in notifying the authors of late submissions of the review results. We had sent the papers submitted after the submission deadline to reviewers who had already started, or even completed, reviewing the normally submitted papers. Some were reluctant to take on these additional reviews; in fact, some reviewers politely refused. It was psychological, and we understood how they felt.
Thus far, we have made an effort to assign each paper to as many reviewers as possible, both so that the review would be as fair as possible and to help the authors improve their submissions toward the camera-ready versions. But the fact is, we are still waiting for the reviews of the newly assigned papers. Considering the reality that we are running out of time, we have decided to make a decision now, even with an insufficient number of reviews. Every late-submitted paper now has at least one review, while the normally submitted papers received four reviews each. So, it is our pleasure now to announce that your paper has been accepted for presentation at the ICNNAI-2010 conference. The camera-ready deadline was supposed to be the 1st of May, but we have decided to extend it to the 7th of May at the latest. After that, we would have a problem printing the conference proceedings in time for the start of the conference. So, hopefully before the extended deadline; the earlier the better. We would appreciate it if you would check the author's instructions carefully once again. We changed them slightly: the title and the text of the Abstract should now be in bold font. Also, page numbers should be excluded. (The author's kit has already been updated.) Thank you in advance for your great cooperation. We expect your carefully prepared camera-ready paper, so that we can create wonderful-looking proceedings. Sorry again for this late notification, as well as for the insufficient number of reviews. Any reviews sent to us hereafter will be forwarded to you via e-mail.
See you soon, ICNNAI-2010 Chairs Vladimir Golovko & Akira Imada

PS Later we will send you the official invitation letter via both e-mail and surface mail, which you will also need when applying for an entry visa to Belarus if you live outside Belarus or Russia. So please send us your preferred physical address as soon as possible.