Automatica, August 2001, Volume 37, No. 8

Today the term neural network, or more appropriately artificial neural network, has come to mean any computing architecture that consists of massively parallel interconnections of simple computing elements. As an area of research it is of great interest for the insights it may provide into the kind of highly parallel computation carried out by physiological nervous systems. Over the past decades it has drawn researchers from across the scientific spectrum, attracted by the opportunity for cooperative research among scholars from different fields. The main motivation has come from the fact that a variety of problems, such as speech recognition and image classification, that are complex and difficult to solve using digital computers are accomplished easily by people and animals, suggesting the existence of different and more efficient computational principles.

In the 1980s elaborate feedforward networks were constructed and demonstrated empirically, through simulation studies, to approximate quite well nearly all functions encountered in practical applications. This led Hornik, Stinchcombe, and White to raise the following question in their seminal paper [1] concerning the ultimate capabilities of such networks: "Are the successes observed to date reflective of some deep and fundamental approximation capability, or are they mere flukes...?" The rest, as we well know, is history. As a result of the work of numerous authors [2]-[4], it came to be realized that neural networks are capable of universal approximation in a very precise and satisfactory sense, and the study of neural networks left its empirical origins to become a mathematical discipline.
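The universal approximation results of [1]-[4] can be summarized, in one common form (stated here informally, not verbatim from any one of those papers), as follows:

```latex
% Informal statement: a single hidden layer suffices for uniform
% approximation of continuous functions on compact sets.
% Let $\sigma$ be a continuous sigmoidal activation. For every
% continuous $f : K \to \mathbb{R}$, with $K \subset \mathbb{R}^n$
% compact, and every $\varepsilon > 0$, there exist an integer $N$,
% weights $w_i \in \mathbb{R}^n$, and scalars $\alpha_i, b_i$ with
\[
  \sup_{x \in K} \left| f(x) - \sum_{i=1}^{N} \alpha_i \,
    \sigma\!\left(w_i^{\top} x + b_i\right) \right| < \varepsilon .
\]
```

That is, the set of single-hidden-layer networks is dense in the space of continuous functions on compact sets, which is the precise sense in which the empirically observed approximation ability proved to be no fluke.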

Even as the above theoretical developments were in progress, empirical investigations continued and were concerned almost entirely with problems in function approximation, optimization, and pattern recognition. In a 1990 paper [5], Narendra and Parthasarathy suggested that feedforward neural networks could also be used as components in feedback systems. The approximation capabilities of such networks could be used in the design of both identifiers and controllers. Following the publication of [5], there was a frenzy of activity in the area of neural-network based control. A profusion of methods was suggested for controlling nonlinear systems. As in the case of function approximation described earlier, much of the research was heuristic in nature, but it provided empirical evidence that neural networks could outperform traditional methodologies in many applications. History repeated itself, and it soon became evident that more formal methods would be needed to quantitatively assess the scope and limitations of neural-network based control.
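The identification idea of [5] can be illustrated with a minimal sketch (not the authors' exact architecture; the plant, network size, and learning rate below are illustrative assumptions): a one-hidden-layer network trained by gradient descent as a series-parallel identifier, in which the measured plant output, rather than the network's own prediction, is fed back during training.

```python
import numpy as np

# Hypothetical scalar plant (an assumption for illustration):
#   y(k+1) = 0.5*y(k) + sin(u(k))
def plant(y, u):
    return 0.5 * y + np.sin(u)

rng = np.random.default_rng(0)

# One-hidden-layer network: yhat(k+1) = W2 @ tanh(W1 @ [y(k), u(k)] + b1) + b2
n_h = 20
W1 = rng.normal(0.0, 1.0, (n_h, 2))
b1 = np.zeros(n_h)
W2 = rng.normal(0.0, 0.1, n_h)
b2 = 0.0

lr = 0.02
y = 0.0
for k in range(20000):
    u = rng.uniform(-2, 2)            # persistently exciting random input
    y_next = plant(y, u)              # measured plant output
    x = np.array([y, u])
    h = np.tanh(W1 @ x + b1)
    yhat = W2 @ h + b2
    e = yhat - y_next                 # one-step prediction error
    # gradient descent on 0.5*e^2
    gh = e * W2 * (1.0 - h**2)
    W2 -= lr * e * h
    b2 -= lr * e
    W1 -= lr * np.outer(gh, x)
    b1 -= lr * gh
    y = y_next                        # series-parallel: feed back the TRUE output

# evaluate one-step prediction error on fresh data
errs = []
y = 0.0
for k in range(500):
    u = rng.uniform(-2, 2)
    y_next = plant(y, u)
    h = np.tanh(W1 @ np.array([y, u]) + b1)
    errs.append(abs((W2 @ h + b2) - y_next))
    y = y_next
print(f"mean |prediction error| = {np.mean(errs):.4f}")
```

In the series-parallel configuration the identifier remains a static feedforward map during training, so standard static-network training applies; the parallel configuration, which feeds back the model's own prediction, raises the stability questions discussed in [5].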

From a systems theoretic point of view, neural networks are finitely parametrized, effectively computable, and practically implementable families of transformations. Consequently, they contain the essential characteristics needed for the design of identifiers and controllers for complex systems, where nonlinearities, uncertainty, and complexity play a major role. Such systems are arising with increasing frequency due to the demands of a rapidly advancing technology. Neural-network based control naturally leads to problems in nonlinear control and nonlinear adaptive control. For the above reasons, the past decade has witnessed great activity in the field, with increased awareness on the part of researchers that such problems can be addressed within the framework of mathematical control theory. The fact that neural networks raise a wealth of interesting theoretical questions has also been responsible for attracting experts from well established areas of control theory, and this, in turn, has helped in making the field more rigorous from a theoretical standpoint. Finally, the publication of books, e.g. [6], which collect important contributions to the field, attests to the fact that the evolution of neural-network based control from an art to a scientific discipline is finally underway.

This issue offers a glimpse of the status of research in the field as of 2001. From a large number of submitted papers, thirteen were selected for inclusion in this special issue after a very detailed review process. These papers can be broadly classified into three groups. The first group consists of six papers which address the output regulation and tracking problems in nonlinear systems. The first paper, by Gang and Yao, adopts a robust control philosophy to design performance-oriented control laws. Following this, Zhang and Wang use a power series approximation method and propose a class of recurrent neural networks for output regulation. The paper by Arslan and Basar investigates robust controller design for a class of nonlinear systems with structurally unknown dynamics, while Wang and Huang design a control law to achieve disturbance rejection and asymptotic tracking. Calise, Hovakimyan, and Idan propose a direct adaptive design procedure for systems with both structured parametric uncertainty and unmodeled dynamics. A new approach to the tracking problem, based on the estimation of the derivative of a Lyapunov function, is proposed by Rovithakis for a class of nonlinear systems that are affine in the control.

The second set of four papers extends existing ideas of neural-network based control in new directions. The use of neural networks for obtaining numerical solutions to optimal control problems that arise in systems described by partial differential equations is the subject of the paper by Padhi, Balakrishnan, and Randolph. Wang and Wan describe a structured neural network implementing the gradient projection algorithm to solve a quadratic programming problem in constrained model predictive control. Chen and Narendra use the multiple model approach proposed by the second author to switch between a linear and a nonlinear controller to assure both stability and performance. A first attempt at identifying nonlinear stochastic continuous-time systems using differential neural networks is made in the paper by Poznyak and Ljung.

The third set of papers proposes methods that have direct application in nonlinear control problems. Selmic and Lewis describe a dynamic inversion compensation scheme for control in the presence of backlash. Parisini and Sacone propose a two-level hybrid control scheme for nonlinear control systems, and apply it to traffic control on freeways. Finally, Sundararajan and Saratchandran describe an on-line learning neuro-control scheme for aircraft controller design.

Taken together, the thirteen papers make an excellent collection, providing a snapshot of the current status of the field. They demonstrate the great progress that has been made since the publication of [5]. At the same time, they also reveal that we are still a long way from developing systematic procedures for the design of neural-network based controllers.

Kumpati S. Narendra

Center for Systems Science

Electrical Engineering Department

Yale University.

Frank L. Lewis

Automation and Robotics Research Institute

The University of Texas at Arlington.

**References**

[1] K. Hornik, M. Stinchcombe and H. White (1989). Multilayer feedforward networks are universal approximators. Neural Networks, 2, 359-366.

[2] K. Funahashi (1989). On the approximate realization of continuous mappings by neural networks. Neural Networks, 2, 183-192.

[3] G. Cybenko (1989). Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals, and Systems, 2, 303-314.

[4] K. Hornik, M. Stinchcombe and H. White (1990). Universal approximation of an unknown mapping and its derivatives using multilayer feedforward networks. Neural Networks, 3, 551-560.

[5] K. S. Narendra and K. Parthasarathy (1990). Identification and control of dynamical systems using neural networks. IEEE Transactions on Neural Networks, 1, 4-27.

[6] F. L. Lewis, S. Jagannathan and A. Yesildirek (1999). Neural Network Control of Robot Manipulators and Nonlinear Systems. Taylor & Francis, UK.