Recurrent Neural Networks for Prediction, Part 7
Pages: 19
File type: pdf
Size: 287.24 KB
Document information:
Stability Issues in RNN Architectures. Perspective: The focus of this chapter is on stability and convergence of relaxation realised through NARMA recurrent neural networks. Unlike other commonly used approaches, which mostly exploit Lyapunov stability theory, the main mathematical tool employed in this analysis is the contraction mapping theorem (CMT), together with the fixed point iteration (FPI) technique. This enables derivation of the asymptotic stability (AS) and global asymptotic stability (GAS) criteria for neural relaxive systems. For rigour, existence, uniqueness, convergence and convergence rate are considered, and the analysis is provided for a range of activation functions and recurrent neural network architectures.
Content extracted from the document:
Recurrent Neural Networks for Prediction, by Danilo P. Mandic and Jonathon A. Chambers. Copyright © 2001 John Wiley & Sons Ltd. ISBNs: 0-471-49517-4 (Hardback); 0-470-84535-X (Electronic).

7 Stability Issues in RNN Architectures

7.1 Perspective

The focus of this chapter is on stability and convergence of relaxation realised through NARMA recurrent neural networks. Unlike other commonly used approaches, which mostly exploit Lyapunov stability theory, the main mathematical tool employed in this analysis is the contraction mapping theorem (CMT), together with the fixed point iteration (FPI) technique. This enables derivation of the asymptotic stability (AS) and global asymptotic stability (GAS) criteria for neural relaxive systems. For rigour, existence, uniqueness, convergence and convergence rate are considered, and the analysis is provided for a range of activation functions and recurrent neural network architectures.

7.2 Introduction

Stability and convergence are key issues in the analysis of dynamical adaptive systems, since the analysis of the dynamics of an adaptive system can boil down to the discovery of an attractor (a stable equilibrium) or some other kind of fixed point. In neural associative memories, for instance, the locally stable equilibrium states (attractors) store information and form neural memory. Neural dynamics in that case can be considered from two aspects: convergence of state variables (memory recall), and the number, position, local stability and domains of attraction of equilibrium states (memory capacity). Conveniently, LaSalle's invariance principle (LaSalle 1986) is used to analyse the state convergence, whereas stability of equilibria is analysed using some sort of linearisation (Jin and Gupta 1996). In addition, the dynamics and convergence of learning algorithms for most types of neural networks may be explained and analysed using fixed point theory.

Let us first briefly introduce some basic definitions. The full definitions and further details are given in Appendix I. Consider the following linear, finite dimensional, autonomous system of order N (stability of systems of this type is discussed in Appendix H):

    y(k) = Σ_{i=1}^{N} a_i(k) y(k − i) = a^T(k) y(k − 1).    (7.1)

Definition 7.2.1 (see Kailath (1980) and LaSalle (1986)). The system (7.1) is said to be asymptotically stable in Ω ⊆ R^N if, for any y(0), lim_{k→∞} y(k) = 0, for a(k) ∈ Ω.

Definition 7.2.2 (see Kailath (1980) and LaSalle (1986)). The system (7.1) is globally asymptotically stable if, for any initial condition and any sequence a(k), the response y(k) tends to zero asymptotically.

For NARMA systems realised via neural networks, we have

    y(k + 1) = Φ(y(k), w(k)).    (7.2)

Let Φ(k, k_0, Y_0) denote the trajectory of the state change for all k ≥ k_0, with Φ(k_0, k_0, Y_0) = Y_0. If Φ(k, k_0, Y*) = Y* for all k ≥ 0, then Y* is called an equilibrium point. The largest set D(Y*) for which this is true is called the domain of attraction of the equilibrium Y*. If D(Y*) = R^N and if Y* is asymptotically stable, then Y* is said to be asymptotically stable in the large, or globally asymptotically stable.

It is important to clarify the difference between asymptotic stability and absolute stability. Asymptotic stability may depend upon the input (initial conditions), whereas global asymptotic stability does not depend upon initial conditions. Therefore, for an absolutely stable neural network, the system state will converge to one of the asymptotically stable equilibrium states regardless of the initial state and the input signal.
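To make the contraction mapping argument concrete, here is a minimal Python sketch (an illustration, not the book's code; the tanh map, the 0.9 spectral-norm scaling and the tolerance are assumptions chosen for the example). It relaxes a NARMA-style state map y(k + 1) = Φ(y(k)) whose weight matrix is scaled so that Φ is a contraction; the CMT then guarantees a unique fixed point reached from any initial state, i.e. GAS behaviour:

```python
# Fixed point iteration (FPI) on a contractive neural map -- illustrative sketch.
import numpy as np

def phi(y, w):
    """One relaxation step y -> tanh(W y); tanh is 1-Lipschitz."""
    return np.tanh(w @ y)

rng = np.random.default_rng(0)
N = 4
w = rng.standard_normal((N, N))
w *= 0.9 / np.linalg.norm(w, 2)   # spectral norm 0.9 < 1, so phi is a contraction

for scale in (0.1, 1.0, 10.0):    # widely different initial conditions
    y = scale * rng.standard_normal(N)
    for k in range(1, 1001):
        y_next = phi(y, w)
        if np.linalg.norm(y_next - y) < 1e-12:   # successive iterates agree
            break
        y = y_next
    # Every trajectory relaxes to the same fixed point (here y* = 0).
    print(f"initial scale {scale:5.1f}: converged after {k} iterations, "
          f"||y|| = {np.linalg.norm(y):.1e}")
```

Pushing the spectral norm towards 1 slows convergence (the rate is governed by the contraction constant), while scaling it above 1 breaks the contraction condition, and convergence from arbitrary initial states is no longer guaranteed.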
The equilibrium points include the isolated minima as well as the maxima and saddle points. The maxima and saddle points are not stable equilibrium points. Robust stability for the above discussed systems is still under investigation (Bauer et al. 1993; Jury 1978; Mandic and Chambers 2000c; Premaratne and Mansour 1995).

In conventional nonlinear systems, the system is said to be globally asymptotically stable, or asymptotically stable in the large, if it has a unique equilibrium point which is globally asymptotically stable in the sense of Lyapunov. In this case, for an arbitrary initial state x(0) ∈ R^N, the state trajectory φ(k, x(0), s) will converge to the unique equilibrium point x*, satisfying

    x* = lim_{k→∞} φ[k, x(0), s].    (7.3)

Stability in this context has been considered in terms of Lyapunov stability and M-matrices (Forti and Tesi 1994; Liang and Yamaguchi 1997). To apply the Lyapunov method to a dynamical system, a neural system has to be mapped onto a new system for which the origin is at an equilibrium point. If the network is stable, its 'energy' will decrease to a minimum as the system approaches and attains its equilibrium state. If a function that maps the objective function onto an 'energy function' can be found, then the network is guaranteed to converge to its equilibrium state (Hopfield and ...

[Figure: fixed point iteration, plotting the mapping K(x) against the line y = x; vertical axis K(x), y.]
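The Lyapunov 'energy' argument above can also be illustrated in code. The following is a hedged sketch (the network size, random symmetric weights and update schedule are assumptions for illustration, not the book's construction): a discrete Hopfield-style network whose energy E(s) = −(1/2) s^T W s never increases under asynchronous updates, so the state relaxes to a stable equilibrium, a local minimum of the energy:

```python
# Energy descent in a discrete Hopfield-style network -- illustrative sketch.
import numpy as np

rng = np.random.default_rng(1)
N = 16
W = rng.standard_normal((N, N))
W = (W + W.T) / 2.0               # symmetric weights ...
np.fill_diagonal(W, 0.0)          # ... with no self-feedback

def energy(s):
    """Lyapunov candidate E(s) = -1/2 s^T W s."""
    return -0.5 * s @ W @ s

s = rng.choice([-1.0, 1.0], size=N)    # arbitrary initial state
E_prev = energy(s)
for sweep in range(1, 51):
    flipped = False
    for i in rng.permutation(N):       # asynchronous, one neuron at a time
        new = 1.0 if W[i] @ s >= 0.0 else -1.0
        if new != s[i]:
            s[i], flipped = new, True
    E = energy(s)
    assert E <= E_prev + 1e-12         # energy is non-increasing
    E_prev = E
    if not flipped:                    # no change in a full sweep: equilibrium
        break
print(f"equilibrium after {sweep} sweeps, final energy {energy(s):.3f}")
```

With symmetric weights and zero self-feedback, each asynchronous flip can only lower E or leave it unchanged, which is exactly the monotone 'energy' decrease described in the paragraph above.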