Recurrent Neural Networks for Prediction, Part 3
Pages: 16
File type: PDF
File size: 181.43 KB
Document information:
Network Architectures for Prediction. Perspective: The architecture, or structure, of a predictor underpins its capacity to represent the dynamic properties of a statistically nonstationary discrete time input signal and hence its ability to predict or forecast some future value. This chapter therefore provides an overview of available structures for the prediction of discrete time signals.
Content extracted from the document:
Recurrent Neural Networks for Prediction. Authored by Danilo P. Mandic, Jonathon A. Chambers. Copyright © 2001 John Wiley & Sons Ltd. ISBNs: 0-471-49517-4 (Hardback); 0-470-84535-X (Electronic).

3 Network Architectures for Prediction

3.1 Perspective

The architecture, or structure, of a predictor underpins its capacity to represent the dynamic properties of a statistically nonstationary discrete time input signal and hence its ability to predict or forecast some future value. This chapter therefore provides an overview of available structures for the prediction of discrete time signals.

3.2 Introduction

The basic building blocks of all discrete time predictors are adders, delayers, multipliers and, for the nonlinear case, zero-memory nonlinearities. The manner in which these elements are interconnected describes the architecture of a predictor. The foundations of linear predictors for statistically stationary signals are found in the work of Yule (1927), Kolmogorov (1941) and Wiener (1949). The later studies of Box and Jenkins (1970) and Makhoul (1975) were built upon these fundamentals. Such linear structures are very well established in digital signal processing and are classified as either finite impulse response (FIR) or infinite impulse response (IIR) digital filters (Oppenheim et al. 1999). FIR filters are generally realised without feedback, whereas IIR filters utilise feedback to limit the number of parameters necessary for their realisation.[1] The presence of feedback implies that the consideration of stability underpins the design of IIR filters. In statistical signal modelling, FIR filters are better known as moving average (MA) structures, and IIR filters are named autoregressive (AR) or autoregressive moving average (ARMA) structures. The most straightforward version of a nonlinear filter structure can be formulated by including a nonlinear operation in the output stage of an FIR or an IIR filter. These represent simple examples of nonlinear autoregressive (NAR), nonlinear moving average (NMA) or nonlinear autoregressive moving average (NARMA) structures (Nerrand et al. 1993). Such filters have immediate application in the prediction of discrete time random signals that arise from some nonlinear physical system, as for certain speech utterances. These filters, moreover, are strongly linked to single-neuron neural networks.

[1] FIR filters can be represented by IIR filters; in practice, however, it is not possible to represent an arbitrary IIR filter with an FIR filter of finite length.

The neuron, or node, is the basic processing element within a neural network. The structure of a neuron is composed of multipliers, termed synaptic weights, or simply weights, which scale the inputs; a linear combiner to form the activation potential; and a certain zero-memory nonlinearity to model the activation function. Different neural network architectures are formulated by the combination of multiple neurons with various interconnections, hence the term connectionist modelling (Rumelhart et al. 1986). Feedforward neural networks, as for FIR/MA/NMA filters, have no feedback within their structure. Recurrent neural networks, on the other hand, similarly to IIR/AR/NAR/NARMA filters, exploit feedback and hence have much more potential structural richness. Such feedback can either be local to the neurons or global to the network (Haykin 1999b; Tsoi and Back 1997).
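To make the correspondence between nonlinear filters and single-neuron networks concrete, here is a minimal Python sketch (not from the book; the function name, weights and test signal are illustrative assumptions). It implements a one-step NARMA-style predictor as a single neuron: a tapped delay line of past inputs supplies the moving-average part, fed-back past outputs supply the autoregressive part, and a zero-memory nonlinearity forms the output.

```python
import numpy as np

def narma_neuron_predict(x, w_in, w_fb, bias=0.0, phi=np.tanh):
    """Single-neuron NARMA-style one-step predictor (illustrative sketch).

    y_hat(k) = phi( sum_i w_in[i]*x(k-1-i) + sum_j w_fb[j]*y_hat(k-1-j) + bias )

    The delayed inputs play the role of the MA (feedforward/FIR) part,
    and the fed-back delayed outputs play the AR (recurrent/IIR) part.
    """
    p, q = len(w_in), len(w_fb)
    y_hat = np.zeros(len(x))
    for k in range(len(x)):
        # Tapped delay line of past inputs (zero-padded before time 0).
        x_past = [x[k - 1 - i] if k - 1 - i >= 0 else 0.0 for i in range(p)]
        # Fed-back past predictions: the source of recurrence.
        y_past = [y_hat[k - 1 - j] if k - 1 - j >= 0 else 0.0 for j in range(q)]
        activation = np.dot(w_in, x_past) + np.dot(w_fb, y_past) + bias
        y_hat[k] = phi(activation)  # zero-memory nonlinearity
    return y_hat

# Hypothetical usage on a noisy sinusoid; the weights are hand-picked, not trained.
x = np.sin(0.1 * np.arange(200)) + 0.05 * np.random.randn(200)
y_hat = narma_neuron_predict(x, w_in=[0.6, 0.3], w_fb=[0.4])
```

With `phi` set to the identity and `w_fb` empty, the same routine reduces to a linear FIR/MA predictor; keeping `w_fb` but dropping the nonlinearity gives an IIR/ARMA-style structure, mirroring the filter taxonomy described above.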
When the inputs to a neural network are delayed versions of a discrete time random input signal, the correspondence between the architectures of nonlinear filters and neural networks is evident.

From a biological perspective (Marmarelis 1989), the prototypical neuron is composed of a cell body (soma), a tree-like element of fibres (dendrites) and a long fibre (axon) with sparse branches (collaterals). The axon is attached to the soma at the axon hillock and, together with its collaterals, ends at synaptic terminals (boutons), which are employed to pass information on to other neurons through synaptic junctions. The soma contains the nucleus and is attached to the trunk of the dendritic tree, from which it receives incoming information. The dendrites are conductors of input information to the soma, i.e. input ports, and usually exhibit a high degree of arborisation.

The possible architectures for nonlinear filters or neural networks are manifold. The state-space representation from system theory is established for linear systems (Kailath 1980; Kailath et al. 2000) and provides a mechanism for the representation of structural variants. An insightful canonical form for neural networks is provided by Nerrand et al. (1993) through the exploitation of a state-space representation, which facilitates a unified treatment of the architectures of neural networks.[2]

3.3 Overview

The chapter begins with an explanation of the concept of prediction of a statistically stationary discrete time random signal. The building blocks for the realisation of linear and nonlinear predictors are then discussed. These same building blocks are also shown to be the basic elements necessary for the realisation of a neuron. Emphasis is placed upon the particular zero-memory nonlinearities used in the output of nonlinear filters and the activation functions of neurons.

An aim of this chapter is to highlight the correspondence between the structures in nonlinear filtering and neural networks, so as to remove the apparent boundaries between the work of practitioners in control, signal processing and neural engineering. Conventional linear filter models for discrete time random signals are introduced and, …
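The state-space view mentioned above can be illustrated with a short sketch. This is a minimal, assumed form, s(k+1) = Φ(A s(k) + B x(k)), y(k) = C s(k+1) + D x(k); the matrices, dimensions and driving signal are illustrative, and this is not claimed to be the specific canonical form of Nerrand et al. (1993).

```python
import numpy as np

def state_space_rnn_step(s, x, A, B, C, D, phi=np.tanh):
    """One time step of a recurrent network written in state-space form.

    s_next = phi(A @ s + B @ x)   # state (recurrent/feedback) equation
    y      = C @ s_next + D @ x   # output (observation) equation

    With phi the identity this is the standard linear state-space model;
    the zero-memory nonlinearity phi turns it into a recurrent network.
    """
    s_next = phi(A @ s + B @ x)
    y = C @ s_next + D @ x
    return s_next, y

# Hypothetical dimensions: three state neurons, scalar input and output.
rng = np.random.default_rng(0)
A = 0.5 * rng.standard_normal((3, 3))  # state-to-state (feedback) weights
B = rng.standard_normal((3, 1))        # input-to-state weights
C = rng.standard_normal((1, 3))        # state-to-output weights
D = np.zeros((1, 1))                   # direct input-to-output path

s = np.zeros((3, 1))                   # initial state
for k in range(10):
    x = np.array([[np.sin(0.2 * k)]])
    s, y = state_space_rnn_step(s, x, A, B, C, D)
```

Restricting which entries of A are nonzero corresponds to making the feedback local to individual neurons rather than global to the network, the structural distinction drawn earlier (Haykin 1999b; Tsoi and Back 1997).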
Related keywords: neural networks, artificial neural networks, network prediction