There are a number of common activation functions in use with neural networks. This is not an exhaustive list.
The activation function of a node defines the output of that node given an input or set of inputs. A standard computer chip circuit can be seen as a digital network of activation functions that are "ON" (1) or "OFF" (0), depending on the input. This is similar to the behavior of the linear perceptron in neural networks. However, it is the nonlinear activation function that allows such networks to compute nontrivial problems using only a small number of nodes. In artificial neural networks this function is also called a transfer function.
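To make the idea concrete, here is a minimal sketch of a single node: it computes a weighted sum of its inputs plus a bias, then passes the result through an activation function. The `step` function mimics the digital ON/OFF behavior described above, while `logistic` is a smooth nonlinear alternative (the function and parameter names here are illustrative, not from any particular library):

```python
import math

def node_output(inputs, weights, bias, activation):
    """A node's output: the activation applied to the weighted sum."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return activation(z)

def step(z):
    """Binary threshold: 'ON' (1) or 'OFF' (0), like a digital circuit."""
    return 1 if z >= 0 else 0

def logistic(z):
    """Smooth nonlinear activation in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

# Same node, two activations:
node_output([1, 0], [0.5, 0.5], -0.25, step)      # fires: 0.25 >= 0
node_output([1, 0], [0.5, 0.5], -0.25, logistic)  # a value between 0 and 1
```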
The derivative, f'(x), varies among these functions. See the following table:
Activation Function      f(x)              f'(x)
Linear                   x                 1
Logistic                 1/(1 + e^(-x))    f(x)(1 - f(x))
Hyperbolic tangent       tanh(x)           1 - f(x)^2
Squash (softsign)        x/(1 + |x|)       1/(1 + |x|)^2