layers.activations
Activation functions for neural networks.
Supported Activations
Argument name | Implementation (Docstring) |
---|---|
silu+moderate | combine silu and moderate |
tanh | pompon.layers.activations.tanh |
exp | pompon.layers.activations.exp |
gauss | pompon.layers.activations.gaussian |
erf | pompon.layers.activations.erf |
moderate | pompon.layers.activations.moderate |
silu | pompon.layers.activations.silu |
How to add a custom activation function?
- Modify pompon (recommended)
  - Implement the JAX function in `pompon.layers.activations`.
  - Add the custom name in `pompon.layers.basis.Phi._get_activation`.
  - Specify the name as an NNMPO argument.
  - Give us your pull requests! (Optional)
- Override the `activation` attribute (see the sketch below)
  - Define a `func: Callable[[jax.Array], jax.Array]` object with JAX.
  - Set the attribute `NNMPO.basis.phi{i}.activation = func` for i=0,1,…,f-1.
The 0-th basis is always 1 because of the implementation in `pompon._jittables._forward_q2phi`.
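A minimal sketch of the override route, assuming an already constructed NNMPO instance (here called `model`) with `f` input coordinates; the `NNMPO.basis.phi{i}.activation` attribute pattern is taken from the list above, while the model and activation shown here are purely illustrative:

```python
import jax
import jax.numpy as jnp

# A custom activation is a plain JAX callable: jax.Array -> jax.Array.
def func(x: jax.Array) -> jax.Array:
    # Illustrative choice; any smooth elementwise JAX function works.
    return jnp.tanh(x) * jnp.exp(-(x ** 2))

# `model` (an NNMPO instance) and `f` (its number of coordinates) are
# placeholders in this sketch.
for i in range(f):
    getattr(model.basis, f"phi{i}").activation = func
```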
Functions
Name | Description |
---|---|
Bspline | B-spline basis function |
chebyshev_recursive | Chebyshev polynomial basis function |
combine | Combine activation functions |
erf | Error function activation function |
exp | Exponential activation function |
extend_grid | Extend grid points for B-spline basis function |
gaussian | Gaussian activation function |
leakyrelu | Leaky rectified linear unit activation function |
legendre_recursive | Legendre polynomial basis function |
moderate | Moderate activation function |
polynomial | Polynomial basis function |
polynomial_recursive | Calculate polynomial basis recursively |
relu | Rectified linear unit activation function |
silu | Sigmoid linear unit activation function |
softplus | Softplus activation function |
tanh | Hyperbolic tangent activation function |
Bspline
B-spline basis function
- This activation is experimental and may not be stable.
- One should fix `w` and `b` to 1.0 and 0.0, respectively.
- The input `x` must be in [-1, 1].
\[ \phi_n(x) = B_{n,k}(x) \]
Parameters
Name | Type | Description | Default |
---|---|---|---|
x | Array | input with shape (D, f) where D is the number of data points. | required |
grid | Array | grid points with shape (f, N) where N is the number of grid points. | required |
k | int | order of B-spline basis function | 0 |
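A hedged usage sketch built only from the parameter tables in this section; the import path and the need to pre-pad the grid with extend_grid are assumptions:

```python
import jax.numpy as jnp
from pompon.layers.activations import Bspline, extend_grid  # assumed import path

D, f, N, k = 5, 2, 8, 3
x = jnp.linspace(-1.0, 1.0, D * f).reshape(D, f)      # inputs must lie in [-1, 1]
grid = jnp.tile(jnp.linspace(-1.0, 1.0, N), (f, 1))   # (f, N) grid points per coordinate

# Assumption: an order-k evaluation wants the grid padded by k knots on each
# side, which is what extend_grid is taken to provide here.
phi = Bspline(x, extend_grid(grid, k_extend=k), k=k)
```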
chebyshev_recursive
Chebyshev polynomial basis function
- This activation is experimental and may not be stable.
- One should fix `w` and `b` to 1.0 and 0.0, respectively.
- The input `x` must be in [-1, 1].
\[ \phi_n(x) = T_n(x) \quad (n=1,2,\cdots,N-1) \]
- By using this function, the model can be regressed to a Chebyshev polynomial function.
- This function should be used with [functools.partial](https://docs.python.org/3/library/functools.html#functools.partial) as follows:
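The original snippet is not reproduced in this page; below is a minimal sketch, assuming the import path shown and that chebyshev_recursive takes the number of basis functions as a keyword argument N (by analogy with polynomial_recursive):

```python
from functools import partial
import jax.numpy as jnp
from pompon.layers.activations import chebyshev_recursive  # assumed import path

# Fix every argument except x so the result maps jax.Array -> jax.Array.
func = partial(chebyshev_recursive, N=5)  # N is an assumed keyword argument

x = jnp.linspace(-1.0, 1.0, 10).reshape(5, 2)  # inputs must lie in [-1, 1]
phi = func(x)
# The callable can then be attached as an activation,
# e.g. NNMPO.basis.phi0.activation = func
```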
combine
Combine activation functions
Parameters
Name | Type | Description | Default |
---|---|---|---|
x | Array | input with shape (D, f) where D is the number of data points. | required |
funcs | tuple | list of activation functions | required |
Returns
Name | Type | Description |
---|---|---|
Array | Array | output with shape (D, f) |
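For instance, the silu+moderate entry in the table at the top of this page combines silu and moderate; a minimal sketch, assuming the import path shown and that funcs can be fixed as a keyword via functools.partial:

```python
from functools import partial
import jax.numpy as jnp
from pompon.layers.activations import combine, silu, moderate  # assumed import path

# Fix funcs so the result maps a (D, f) array to a (D, f) array.
silu_moderate = partial(combine, funcs=(silu, moderate))

x = jnp.linspace(-2.0, 2.0, 12).reshape(6, 2)
phi = silu_moderate(x)  # shape (D, f), per the Returns table
```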
erf
Error function activation function
\[ \phi(x) = \mathrm{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2} dt \]
- W. Koch et al., J. Chem. Phys. 141(2), 021101 (2014) adopted this function in multiplicative artificial neural networks.
- It can be analytically integrated with a Gaussian basis.
- Empirically, it needs a large number of basis functions.
- Its shape is almost the same as the sigmoid function.
exp
Exponential activation function
\[ \phi(x) = e^{-|x|} \]
extend_grid
Extend grid points for B-spline basis function
Parameters
Name | Type | Description | Default |
---|---|---|---|
grid | Array | grid points with shape (f, N) where N is the number of grid points. | required |
k_extend | int | order of B-spline basis function | 0 |
gaussian
Gaussian activation function
\[ \phi(x) = -e^{-x^2} \]
leakyrelu
Leaky rectified linear unit activation function
\[ \phi(x) = \max(\alpha x, x) \]
legendre_recursive
Legendre polynomial basis function
- This activation is experimental and may not be stable.
- One should fix `w` and `b` to 1.0 and 0.0, respectively.
- The input `x` must be in [-1, 1].
\[ \phi_n(x) = P_n(x) \quad (n=1,2,\cdots,N-1) \]
- By using this function, the model can be regressed to a Legendre polynomial function.
- This function should be used with [functools.partial](https://docs.python.org/3/library/functools.html#functools.partial) as follows:
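As with chebyshev_recursive, the original snippet is not reproduced here; a minimal sketch under the same assumptions (import path, and a keyword argument N for the number of basis functions):

```python
from functools import partial
from pompon.layers.activations import legendre_recursive  # assumed import path

# N is an assumed keyword argument; func then maps a (D, f) array in [-1, 1]
# to Legendre basis values and can be attached as an activation,
# e.g. NNMPO.basis.phi0.activation = func
func = partial(legendre_recursive, N=5)
```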
moderate
Moderate activation function
\[ \phi(x) = 1 - e^{-x^2} + \epsilon x^2 \]
- W. Koch et al., J. Chem. Phys. 151, 064121 (2019) adopted this function in multiplicative neural network potentials.
- It increases moderately outside the region spanned by the ab initio sample points.
polynomial
Polynomial basis function
- This activation is experimental and may not be stable.
- One should fix `w` and `b` to 1.0 and 0.0, respectively.
\[ \phi_n(x) = x^n \quad (n=1,2,\cdots,N-1) \]
- When \(N\) is too large, this function is numerically unstable.
- When \(N=1\), it is equivalent to a linear activation function.
- By using this function, the model can be regressed to a polynomial function.
- This function should be used with [functools.partial](https://docs.python.org/3/library/functools.html#functools.partial) as follows:
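A minimal sketch under the same assumptions as for the recursive variants (import path, and that the number of basis functions N is a keyword argument):

```python
from functools import partial
import jax.numpy as jnp
from pompon.layers.activations import polynomial  # assumed import path

func = partial(polynomial, N=4)  # N is an assumed keyword argument

x = jnp.linspace(-1.0, 1.0, 10).reshape(5, 2)
phi = func(x)
# The callable can also be attached as an activation,
# e.g. NNMPO.basis.phi0.activation = func
```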
polynomial_recursive
Calculate polynomial basis recursively
- This activation is experimental and may not be stable.
- One should fix `w` and `b` to 1.0 and 0.0, respectively.
\[ \phi_n(x) = x^n = x^{n-1} \cdot x \]
Parameters
Name | Type | Description | Default |
---|---|---|---|
x | Array | input with shape (D, f) where D is the number of data points. | required |
N | int | maximum degree of polynomial basis | required |
k | int | current degree of polynomial basis | 1 |
Returns
Name | Type | Description |
---|---|---|
Array | Array | ϕ = output with shape (D, f) |
ϕ = D @ [x^1, x^2, …, x^N]
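The recursion \(x^n = x^{n-1} \cdot x\) can be illustrated with a stand-alone re-implementation that follows the documented parameters x, N, and k; this is a sketch, not the library code:

```python
import jax.numpy as jnp

def polynomial_recursive_sketch(x, N, k=1):
    # phi_n(x) = phi_{n-1}(x) * x, starting from phi_1(x) = x.
    if k == N:
        return x
    return x * polynomial_recursive_sketch(x, N, k + 1)

x = jnp.linspace(-1.0, 1.0, 6).reshape(3, 2)
assert jnp.allclose(polynomial_recursive_sketch(x, N=3), x ** 3)
```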
relu
Rectified linear unit activation function
\[ \phi(x) = \max(0, x) \]
This function is not suitable for force field regression.
silu
Sigmoid linear unit activation function
\[ \phi(x) = x \cdot \sigma(x) = x \cdot \frac{1}{1 + e^{-x}} \]
softplus
Softplus activation function
\[ \phi(x) = \log(1 + e^x) \]
tanh
Hyperbolic tangent activation function
\[ \phi(x) = \tanh(x) = \frac{e^x - e^{-x}}{e^x + e^{-x}} \]