
Index

Neural Smithing: Supervised Learning in Feedforward Artificial Neural Networks
Russell D. Reed and Robert J. Marks II
Copyright © 1999 Massachusetts Institute of Technology


S

SAB (self-adapting back propagation), 142
Sample size. See Size
Saturation. See Sigmoid saturation
Scaled conjugate gradient descent, 168
Scaling, 189. See also Gain scaling
Schemata theory, 188
Search then converge method, 147-148
Self-repair, 4
Sensitivity methods, for pruning, 220, 221-225, 237
Separability. See Linear separability
Sigmoid function, 1, 54, 315-318
bias weights and, 231
error surface and, 113-116, 118-119, 123, 132
for single-layer networks, 15
Sigmoid-like functions, 51, 315-318
Sigmoid saturation, 68, 69, 70
algorithm variations and, 141
gain scaling and, 133
ill-conditioning and, 128
random initialization and, 97, 102
Sigmoid scaling, 283-287
Sign function, 317
Simplex search, 161
Simplicity, 241. See also Complexity
Simulated annealing, 157, 176-178
Single-hidden-layer networks, 33-41, 124
Single-layer networks
hyperplane geometry and, 15-18, 20-23
importance of, 20
learning rules for, 23-30
limitations of, 18, 19
linear separability and, 18-23
local minima for, 123-124
Size. See also Pruning methods
algorithm selection and, 157
capacity versus, 41-47
complexity and, 241-242, 244-247
constructive methods and, 197, 199
depth versus, 38-41
local minima and, 126
model mismatch and, 248
training time and, 68, 71, 199
Skeletonization, 221-222
Small-signal analysis, 90-95
Soft weight sharing, 269
Squashing function, 1, 51. See also Sigmoid function
Stacked generalization, 273
Stair-steps, 113-114, 121. See also Step function
Standard optimization. See Classical optimization methods
Star topology, 114-115
Static noise, 290-291
Step function, 1, 113-114, 121, 317
in Adaline networks, 29
hard-limiters and, 134
momentum and, 92, 94-95
in perceptron learning algorithm, 24
for single-layer networks, 15, 24, 29
Step size parameter, 57
Stochastic methods, 60, 62, 72, 158, 175-179
Structure
constructive methods and, 197-217
pruning and, 219-237
selection of, 197
Subpopulations, and genetic algorithm, 192-193
Subproblems, 255
Sum of squared errors (SSE) function, 9, 50, 52, 54
error surface and, 117, 134
generalization and, 259
jitter and, 280
learning rate and, 71, 77
pruning and, 221, 227, 230
SuperSAB, 142
Supervised learning. See also Training
in back-propagation, 49
definition of, 7
introduction to, 7-14
model of, 7-11
Symbolic rule systems, 109-110. See also Rule-based systems
Symmetric functions, 40
Symmetries, weight-space, 118-119
Symmetry breaking, 97
System energy, and simulated annealing, 177
