Book Introduction
Neural Networks and Learning Machines (English Edition): PDF | EPUB | TXT | Kindle ebook download
![Neural Networks and Learning Machines (English Edition)](https://www.shukui.net/cover/1/33412064.jpg)
- Author: Simon Haykin (Canada)
- Publisher: China Machine Press, Beijing
- ISBN: 9787111265283
- Publication year: 2009
- Listed page count: 906
- File size: 85 MB
- File page count: 939
- Subjects: artificial neural networks (English); machine learning (English)
Table of Contents
Introduction
1. What is a Neural Network?
2. The Human Brain
3. Models of a Neuron
4. Neural Networks Viewed As Directed Graphs
5. Feedback
6. Network Architectures
7. Knowledge Representation
8. Learning Processes
9. Learning Tasks
10. Concluding Remarks
Notes and References
Chapter 1 Rosenblatt's Perceptron
1.1 Introduction
1.2 Perceptron
1.3 The Perceptron Convergence Theorem
1.4 Relation Between the Perceptron and Bayes Classifier for a Gaussian Environment
1.5 Computer Experiment: Pattern Classification
1.6 The Batch Perceptron Algorithm
1.7 Summary and Discussion
Notes and References
Problems
Chapter 2 Model Building through Regression
2.1 Introduction
2.2 Linear Regression Model: Preliminary Considerations
2.3 Maximum a Posteriori Estimation of the Parameter Vector
2.4 Relationship Between Regularized Least-Squares Estimation and MAP Estimation
2.5 Computer Experiment: Pattern Classification
2.6 The Minimum-Description-Length Principle
2.7 Finite Sample-Size Considerations
2.8 The Instrumental-Variables Method
2.9 Summary and Discussion
Notes and References
Problems
Chapter 3 The Least-Mean-Square Algorithm
3.1 Introduction
3.2 Filtering Structure of the LMS Algorithm
3.3 Unconstrained Optimization: A Review
3.4 The Wiener Filter
3.5 The Least-Mean-Square Algorithm
3.6 Markov Model Portraying the Deviation of the LMS Algorithm from the Wiener Filter
3.7 The Langevin Equation: Characterization of Brownian Motion
3.8 Kushner's Direct-Averaging Method
3.9 Statistical LMS Learning Theory for Small Learning-Rate Parameter
3.10 Computer Experiment I: Linear Prediction
3.11 Computer Experiment II: Pattern Classification
3.12 Virtues and Limitations of the LMS Algorithm
3.13 Learning-Rate Annealing Schedules
3.14 Summary and Discussion
Notes and References
Problems
Chapter 4 Multilayer Perceptrons
4.1 Introduction
4.2 Some Preliminaries
4.3 Batch Learning and On-Line Learning
4.4 The Back-Propagation Algorithm
4.5 XOR Problem
4.6 Heuristics for Making the Back-Propagation Algorithm Perform Better
4.7 Computer Experiment: Pattern Classification
4.8 Back Propagation and Differentiation
4.9 The Hessian and Its Role in On-Line Learning
4.10 Optimal Annealing and Adaptive Control of the Learning Rate
4.11 Generalization
4.12 Approximations of Functions
4.13 Cross-Validation
4.14 Complexity Regularization and Network Pruning
4.15 Virtues and Limitations of Back-Propagation Learning
4.16 Supervised Learning Viewed as an Optimization Problem
4.17 Convolutional Networks
4.18 Nonlinear Filtering
4.19 Small-Scale Versus Large-Scale Learning Problems
4.20 Summary and Discussion
Notes and References
Problems
Chapter 5 Kernel Methods and Radial-Basis Function Networks
5.1 Introduction
5.2 Cover's Theorem on the Separability of Patterns
5.3 The Interpolation Problem
5.4 Radial-Basis-Function Networks
5.5 K-Means Clustering
5.6 Recursive Least-Squares Estimation of the Weight Vector
5.7 Hybrid Learning Procedure for RBF Networks
5.8 Computer Experiment: Pattern Classification
5.9 Interpretations of the Gaussian Hidden Units
5.10 Kernel Regression and Its Relation to RBF Networks
5.11 Summary and Discussion
Notes and References
Problems
Chapter 6 Support Vector Machines
6.1 Introduction
6.2 Optimal Hyperplane for Linearly Separable Patterns
6.3 Optimal Hyperplane for Nonseparable Patterns
6.4 The Support Vector Machine Viewed as a Kernel Machine
6.5 Design of Support Vector Machines
6.6 XOR Problem
6.7 Computer Experiment: Pattern Classification
6.8 Regression: Robustness Considerations
6.9 Optimal Solution of the Linear Regression Problem
6.10 The Representer Theorem and Related Issues
6.11 Summary and Discussion
Notes and References
Problems
Chapter 7 Regularization Theory
7.1 Introduction
7.2 Hadamard's Conditions for Well-Posedness
7.3 Tikhonov's Regularization Theory
7.4 Regularization Networks
7.5 Generalized Radial-Basis-Function Networks
7.6 The Regularized Least-Squares Estimator: Revisited
7.7 Additional Notes of Interest on Regularization
7.8 Estimation of the Regularization Parameter
7.9 Semisupervised Learning
7.10 Manifold Regularization: Preliminary Considerations
7.11 Differentiable Manifolds
7.12 Generalized Regularization Theory
7.13 Spectral Graph Theory
7.14 Generalized Representer Theorem
7.15 Laplacian Regularized Least-Squares Algorithm
7.16 Experiments on Pattern Classification Using Semisupervised Learning
7.17 Summary and Discussion
Notes and References
Problems
Chapter 8 Principal-Components Analysis
8.1 Introduction
8.2 Principles of Self-Organization
8.3 Self-Organized Feature Analysis
8.4 Principal-Components Analysis: Perturbation Theory
8.5 Hebbian-Based Maximum Eigenfilter
8.6 Hebbian-Based Principal-Components Analysis
8.7 Case Study: Image Coding
8.8 Kernel Principal-Components Analysis
8.9 Basic Issues Involved in the Coding of Natural Images
8.10 Kernel Hebbian Algorithm
8.11 Summary and Discussion
Notes and References
Problems
Chapter 9 Self-Organizing Maps
9.1 Introduction
9.2 Two Basic Feature-Mapping Models
9.3 Self-Organizing Map
9.4 Properties of the Feature Map
9.5 Computer Experiments I: Disentangling Lattice Dynamics Using SOM
9.6 Contextual Maps
9.7 Hierarchical Vector Quantization
9.8 Kernel Self-Organizing Map
9.9 Computer Experiment II: Disentangling Lattice Dynamics Using Kernel SOM
9.10 Relationship Between Kernel SOM and Kullback-Leibler Divergence
9.11 Summary and Discussion
Notes and References
Problems
Chapter 10 Information-Theoretic Learning Models
10.1 Introduction
10.2 Entropy
10.3 Maximum-Entropy Principle
10.4 Mutual Information
10.5 Kullback-Leibler Divergence
10.6 Copulas
10.7 Mutual Information as an Objective Function to be Optimized
10.8 Maximum Mutual Information Principle
10.9 Infomax and Redundancy Reduction
10.10 Spatially Coherent Features
10.11 Spatially Incoherent Features
10.12 Independent-Components Analysis
10.13 Sparse Coding of Natural Images and Comparison with ICA Coding
10.14 Natural-Gradient Learning for Independent-Components Analysis
10.15 Maximum-Likelihood Estimation for Independent-Components Analysis
10.16 Maximum-Entropy Learning for Blind Source Separation
10.17 Maximization of Negentropy for Independent-Components Analysis
10.18 Coherent Independent-Components Analysis
10.19 Rate Distortion Theory and Information Bottleneck
10.20 Optimal Manifold Representation of Data
10.21 Computer Experiment: Pattern Classification
10.22 Summary and Discussion
Notes and References
Problems
Chapter 11 Stochastic Methods Rooted in Statistical Mechanics
11.1 Introduction
11.2 Statistical Mechanics
11.3 Markov Chains
11.4 Metropolis Algorithm
11.5 Simulated Annealing
11.6 Gibbs Sampling
11.7 Boltzmann Machine
11.8 Logistic Belief Nets
11.9 Deep Belief Nets
11.10 Deterministic Annealing
11.11 Analogy of Deterministic Annealing with Expectation-Maximization Algorithm
11.12 Summary and Discussion
Notes and References
Problems
Chapter 12 Dynamic Programming
12.1 Introduction
12.2 Markov Decision Process
12.3 Bellman's Optimality Criterion
12.4 Policy Iteration
12.5 Value Iteration
12.6 Approximate Dynamic Programming: Direct Methods
12.7 Temporal-Difference Learning
12.8 Q-Learning
12.9 Approximate Dynamic Programming: Indirect Methods
12.10 Least-Squares Policy Evaluation
12.11 Approximate Policy Iteration
12.12 Summary and Discussion
Notes and References
Problems
Chapter 13 Neurodynamics
13.1 Introduction
13.2 Dynamic Systems
13.3 Stability of Equilibrium States
13.4 Attractors
13.5 Neurodynamic Models
13.6 Manipulation of Attractors as a Recurrent Network Paradigm
13.7 Hopfield Model
13.8 The Cohen-Grossberg Theorem
13.9 Brain-State-in-a-Box Model
13.10 Strange Attractors and Chaos
13.11 Dynamic Reconstruction of a Chaotic Process
13.12 Summary and Discussion
Notes and References
Problems
Chapter 14 Bayesian Filtering for State Estimation of Dynamic Systems
14.1 Introduction
14.2 State-Space Models
14.3 Kalman Filters
14.4 The Divergence Phenomenon and Square-Root Filtering
14.5 The Extended Kalman Filter
14.6 The Bayesian Filter
14.7 Cubature Kalman Filter: Building on the Kalman Filter
14.8 Particle Filters
14.9 Computer Experiment: Comparative Evaluation of Extended Kalman and Particle Filters
14.10 Kalman Filtering in Modeling of Brain Functions
14.11 Summary and Discussion
Notes and References
Problems
Chapter 15 Dynamically Driven Recurrent Networks
15.1 Introduction
15.2 Recurrent Network Architectures
15.3 Universal Approximation Theorem
15.4 Controllability and Observability
15.5 Computational Power of Recurrent Networks
15.6 Learning Algorithms
15.7 Back Propagation Through Time
15.8 Real-Time Recurrent Learning
15.9 Vanishing Gradients in Recurrent Networks
15.10 Supervised Training Framework for Recurrent Networks Using Nonlinear Sequential State Estimators
15.11 Computer Experiment: Dynamic Reconstruction of Mackey-Glass Attractor
15.12 Adaptivity Considerations
15.13 Case Study: Model Reference Applied to Neurocontrol
15.14 Summary and Discussion
Notes and References
Problems
Bibliography
Index