Although the Introduction section has been improved considerably, the newest draft is still not easy to understand. Since Chapter 1, Artificial Neural Network (ANN), is a general explanation of what an ANN is, and most of that description has no direct relation to the theme of this dissertation, the Divide and Conquer approach, I start my comments with Chapter 2.

As for the English, the author opens with: "Apart from explicit solution of problem by specialized "one-piece" algorithm, there exist a number of solutions, which have modular structure. In modular structure modules could have some defined and regularized structure or be more or less randomly connected, ending up at completely independent and individual modules." Let me try to paraphrase; one possible interpretation is: besides the explicit solution usually obtained by solving a problem with a so-called one-piece algorithm, there sometimes exist a number of solutions with a modular structure, in which each module may be one of those already defined, is usually completely independent of the others, is connected to some of the other modules more or less at random, and together with them forms a regularized overall structure. Still one long sentence, though. However, I assume this English version of the dissertation is a complementary one that accompanies the original French version, so let us set aside the issue of the English wording for the time being. We are running out of time, are we not?

Well, the Divide and Conquer approach essentially consists of the following three steps:
1) divide the target problem into sub-problems of similar structure;
2) solve each of the sub-problems;
3) combine the results to obtain the solution to the original problem.

The author describes a number of previous divide-and-conquer approaches in the ANN research field. These are
* Mixture of Experts (no reference),
* Multi-modelling, and
* Hybridization
as general approaches, and
* Intelligent Hybrid Systems,
* Neural Network Ensemble Concept,
* Models of Experts Mixture,
* Dynamic Cell Structure Architecture, and
* Active Learning
as those particularly based on ANN. Although their purpose is described as reducing the complexity and run-time of the algorithm, no description of the individual methods is given, and they remain totally unclear. Since it is difficult to guess what each of these methods looks like from its name alone, a short summary of each method, and ideally an example of applying it to a problem, one method after another, would be far more important than the very general and detailed explanation of ANN in Chapter 1.

The author then gives a formula to estimate time complexity. This section is followed by two alternatives to the divide-and-conquer approach, i.e., Committee Machines and Multi-Agent Systems (MAS). What I cannot understand is that the author says these two alternatives are less important than Divide and Conquer, and yet each of these two less important approaches receives a much more detailed explanation, including pseudo-code for the algorithm.

The section then moves on to "Clustering", which has not even been explained so far in this chapter: "Decomposition modules could be based on any efficient clustering algorithm. We have chosen to use for decomposition ANN structures similar in activity to well-known k-means clustering algorithm. Clustering is described in details in the section 2.4 of this work in connection with modular algorithms." And in fact there is a description of "Clustering" in Section 2.4, but not of the method itself, that is, how an ANN could be used to "decompose" the problem.
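To check my own reading of this passage, here is a minimal sketch of the clustering-based decomposition as I understand it; scikit-learn's KMeans stands in for the "ANN structures similar in activity to k-means", a small decision tree stands in for the local model, and the data set and every name below are my own illustration, not anything taken from the dissertation.

```python
# A minimal divide-and-conquer sketch (my own illustration, not the author's code):
# 1) divide the data with a clustering step (plain k-means here), 2) solve each
# sub-problem with a small local model, 3) combine by routing every pattern to
# the model of its own cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=10, random_state=0)

# 1) Divide: partition the database into k sub-databases.
k = 4
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

# 2) Solve: train one simple local model on each sub-database.
experts = {c: DecisionTreeClassifier(max_depth=3, random_state=0)
              .fit(X[km.labels_ == c], y[km.labels_ == c])
           for c in range(k)}

# 3) Combine: a new pattern is handled by the expert of its nearest cluster.
def predict(X_new):
    return np.array([experts[c].predict(x[None])[0]
                     for c, x in zip(km.predict(X_new), X_new)])

print("training accuracy:", (predict(X) == y).mean())
```

If this divide / solve / combine skeleton is indeed what is meant, then a summary of roughly this length for each of the listed approaches, together with a small example, would help the reader far more than the general ANN material of Chapter 1.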
Chapter 3 is devoted to "Complexity Estimation". As the author writes at the beginning of the chapter, complexity estimation is a supportive tool for classification: the task of decomposing a problem starts with estimating the complexity of that problem.

Section 3.5, Presentation of selected methods, is organised as follows.
3.5.1 Indirect Bayes error estimation: after six preparatory subsections, Shannon's entropy is given (in 3.5.7), but is this a measure of complexity?
3.5.2 Non-parametric Bayes error estimation and bounds: the last subsection of this subsection gives the formula of the "overlap sum", but again this is not an explicit measure of complexity.
3.5.3 Measures related to space partitioning: Class Discriminability Measures, Purity Measure, Neighborhood Separability, Collective Entropy.
3.5.4 Other Measures: 3.5.4.1 Correlation-based approach, 3.5.4.2 Fisher discriminant ratio, 3.5.4.3 Interclass distance measures (scatter matrices), 3.5.4.4 Volume of the overlap region, 3.5.4.5 Feature efficiency, 3.5.4.6 Minimum Spanning Tree (MST), 3.5.4.7 Inter-intra cluster distance, 3.5.4.8 Space covered by epsilon neighborhoods.

After this series of sub-sub-subsections, in the last subsection, 3.5.5 Ensemble of estimators, the author writes: "The computation of several methods at once is potentially more difficult than one, but using several simple methods could be faster than one complex method." According to this, each of the items above is a method of complexity estimation, and the author seems to propose to "pick some simple methods and combine them somehow". However, in my humble opinion, none of these methods is a method for estimating complexity.

3.6 Complexity Estimation Conclusion. This section starts with: "Classification complexity estimation methods present great variability." Hence, we now learn that what has so far been called Complexity Estimation is in fact classification complexity. No wonder Kolmogorov complexity, the most popular complexity measure, was not referred to. The section then concludes: "Not every method is suitable to estimate a complexity of multi-class classification problem - some are designed only to two-class problems, and as such they need special procedures to accommodate them to multi-class problem (like counting the average of complexities of all two-class combinations) what usually result in combinatorial increase in computational complexity." That's it. My guess seems to be correct. This is the end of Part I, and we may expect to see how the author estimates classification complexity in Part II.

Finally, Chapter 5. Let us expect a concrete implementation of T-DTS. As before, the original text needs to be paraphrased. The Decomposition Unit serves to divide the database into several sub-databases; the unit is constructed using Kohonen's Self-Organizing Maps. ...... Finally, each sub-database is fed to one of the Processing Units, which is one of LVQ, MLP, LN, Perceptrons, GRNN, or Probabilistic Networks; the selection is made by "assignment rules". The author introduces here a couple of different rules, but which one is actually used remains unclear. The text continues: "The trained model is used later in processing of the data patterns assigned to the Processing Unit by assignment rules, as specified in the previous section. Processing Unit models used in this work are: LVQ, MLP, LN, Perceptrons, GRNN, and Probabilistic Networks."
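Once more, to fix my own understanding rather than to describe the author's implementation, the following is a rough sketch of the pipeline I believe Chapter 5 is describing; the hand-rolled Kohonen-style map, the choice of an MLP as Processing Unit, the nearest-prototype assignment rule, and all parameters below are my assumptions only.

```python
# A rough reconstruction of the decomposition-then-processing pipeline as I read
# Chapter 5 (my own assumptions throughout, not the dissertation's code): a tiny
# Kohonen-style map plays the Decomposition Unit, one small MLP is trained per
# map node as a Processing Unit, and the "assignment rule" is assumed to be
# nearest-prototype matching.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=400, noise=0.2, random_state=0)

# Decomposition Unit: a 1-D Kohonen map with n_nodes prototypes, trained online.
rng = np.random.default_rng(0)
n_nodes = 3
W = X[rng.choice(len(X), n_nodes, replace=False)].copy()  # prototype initialisation
for t in range(2000):
    x = X[rng.integers(len(X))]
    lr = 0.5 * (1.0 - t / 2000)                            # decaying learning rate
    bmu = int(np.argmin(np.linalg.norm(W - x, axis=1)))    # best-matching unit
    for j in range(n_nodes):
        h = np.exp(-abs(j - bmu))                          # simple neighbourhood kernel
        W[j] += lr * h * (x - W[j])

def assign(A):
    """Assignment rule assumed here: each pattern goes to its nearest prototype."""
    return np.argmin(np.linalg.norm(A[:, None, :] - W[None, :, :], axis=2), axis=1)

# Processing Units: one small MLP per sub-database (a degenerate, single-class
# cell just keeps a constant answer).
units = {}
cells = assign(X)
for c in range(n_nodes):
    Xc, yc = X[cells == c], y[cells == c]
    if len(yc) == 0 or len(np.unique(yc)) < 2:
        units[c] = int(yc[0]) if len(yc) else 0
    else:
        units[c] = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                                 random_state=0).fit(Xc, yc)

# Recall: route each test pattern through the assignment rule to its unit.
X_test, y_test = make_moons(n_samples=100, noise=0.2, random_state=1)
pred = np.array([units[c].predict(x[None])[0] if hasattr(units[c], "predict")
                 else units[c]
                 for c, x in zip(assign(X_test), X_test)])
print("test accuracy:", (pred == y_test).mean())
```

Whether this is really what T-DTS does, and in particular which of the introduced assignment rules is actually applied, is exactly what the text leaves unclear.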
Here the author gives examples from two categories, (1) model identification and (2) data classification: two from the first category and three from the second. The first one is ... The problem is