layers.activations

Activation functions for neural networks.

Supported Activations

Argument name   Implementation (docstring)
silu+moderate   pompon.layers.activations.combine (silu and moderate)
tanh            pompon.layers.activations.tanh
exp             pompon.layers.activations.exp
gauss           pompon.layers.activations.gaussian
erf             pompon.layers.activations.erf
moderate        pompon.layers.activations.moderate
silu            pompon.layers.activations.silu

How to add a custom activation function?

  • Modify pompon (recommended)
    1. Implement a JAX function in pompon.layers.activations.
    2. Add a custom name in pompon.layers.basis.Phi._get_activation.
    3. Specify the name as an NNMPO argument.
    4. Send us a pull request! (optional)
  • Override the activation attribute
    1. Define a func: Callable[[jax.Array], jax.Array] object using JAX.
    2. Set the attribute NNMPO.basis.phi{i}.activation = func for i = 0, 1, …, f-1 (see the sketch below).
Warning

The 0-th basis is always the constant 1, because of the implementation in pompon._jittables._forward_q2phi.
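
A minimal sketch of the second approach (overriding the attribute). The model object and the number of coordinates f are assumed to exist already; only the custom callable below is new:

    import jax
    import jax.numpy as jnp

    # Any Callable[[jax.Array], jax.Array] can serve as an activation.
    def shifted_softplus(x: jax.Array) -> jax.Array:
        # softplus shifted so that phi(0) = 0
        return jnp.log(1.0 + jnp.exp(x)) - jnp.log(2.0)

    # Quick check on a batch with shape (D, f)
    print(shifted_softplus(jnp.linspace(-1.0, 1.0, 6).reshape(3, 2)))

    # Wiring it into an already constructed NNMPO instance `nnmpo`
    # (hypothetical object), following the attribute pattern above:
    # for i in range(f):
    #     getattr(nnmpo.basis, f"phi{i}").activation = shifted_softplus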

Functions

Name Description
Bspline B-spline basis function
chebyshev_recursive Chebyshev polynomial basis function
combine Combine activation functions
erf Error function activation function
exp Exponential activation function
extend_grid Extend grid points for B-spline basis function
gaussian Gaussian activation function
leakyrelu Leaky rectified linear unit activation function
legendre_recursive Legendre polynomial basis function
moderate Moderate activation function
polynomial Polynomial basis function
polynomial_recursive Calculate polynomial basis recursively
relu Rectified linear unit activation function
silu Sigmoid linear unit activation function
softplus Softplus activation function
tanh Hyperbolic tangent activation function

Bspline

layers.activations.Bspline(x, grid, k=0)

B-spline basis function

Caution
  • This activation is experimental and may not be stable.
  • One should fix w and b to 1.0 and 0.0, respectively.
  • The input x must be in [-1, 1].

\[ \phi_n(x) = B_{n,k}(x) \]

Parameters

Name Type Description Default
x Array input with shape (D, f) where D is the number of data points. required
grid Array grid points with shape (f, N) where N is the number of grid points. required
k int order of B-spline basis function 0
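
A minimal usage sketch under the shapes stated above; the grid construction and the pairing with extend_grid are illustrative assumptions, not a prescribed workflow:

    import jax.numpy as jnp
    from pompon.layers.activations import Bspline, extend_grid

    D, f, N, k = 10, 3, 8, 3
    x = jnp.linspace(-1.0, 1.0, D * f).reshape(D, f)    # input in [-1, 1], shape (D, f)
    grid = jnp.tile(jnp.linspace(-1.0, 1.0, N), (f, 1)) # grid points, shape (f, N)
    grid = extend_grid(grid, k_extend=k)                # extend knots for order k (assumed usage)
    phi = Bspline(x, grid, k=k)
    print(phi.shape)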

chebyshev_recursive

layers.activations.chebyshev_recursive(x, N, k=1)

Chebyshev polynomial basis function

Caution
  • This activation is experimental and may not be stable.
  • One should fix w and b to 1.0 and 0.0, respectively.
  • The input x must be in [-1, 1].

\[ \phi_n(x) = T_n(x) \quad (n=1,2,\cdots,N-1) \]

  • By using this function, the model can be regressed to a Chebyshev polynomial function.
  • This function should be used with functools.partial
    (https://docs.python.org/3/library/functools.html#functools.partial) as follows:
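
A minimal sketch, analogous to the polynomial example below; N = 5 is an arbitrary example value:

    import functools
    from pompon.layers.activations import chebyshev_recursive

    # Fix N (keeping the default k) so the resulting callable takes only x,
    # as the activation interface expects.
    func = functools.partial(chebyshev_recursive, N=5)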

combine

layers.activations.combine(x, funcs, split_indices)

Combine activation functions

Parameters

Name Type Description Default
x Array input with shape (D, f) where D is the number of data points. required
funcs tuple list of activation functions required

Returns

Name Type Description
Array Array output with shape (D, f)
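
A minimal usage sketch. Reading split_indices as the feature indices at which x is split, so that funcs[i] acts on the i-th segment, is an assumption:

    import jax.numpy as jnp
    from pompon.layers.activations import combine, silu, moderate

    x = jnp.ones((10, 4))  # (D, f) = (10, 4)
    # Assumed semantics: the first two features go through silu,
    # the remaining two through moderate.
    phi = combine(x, funcs=(silu, moderate), split_indices=(2,))
    print(phi.shape)       # expected (10, 4)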

erf

layers.activations.erf(x)

Error function activation function

\[ \phi(x) = \mathrm{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2} dt \]

exp

layers.activations.exp(x)

Exponential activation function

\[ \phi(x) = e^{-|x|} \]

extend_grid

layers.activations.extend_grid(grid, k_extend=0)

Extend grid points for B-spline basis function

Parameters

Name Type Description Default
grid Array grid points with shape (f, N) where N is the number of grid points. required
k_extend int order of B-spline basis function 0

gaussian

layers.activations.gaussian(x)

Gaussian activation function

\[ \phi(x) = -e^{-x^2} \]

leakyrelu

layers.activations.leakyrelu(x, alpha=0.01)

Leaky rectified linear unit activation function

\[ \phi(x) = \max(\alpha x, x) \]

legendre_recursive

layers.activations.legendre_recursive(x, N, k=1)

Legendre polynomial basis function

Caution
  • This activation is experimental and may not be stable.
  • One should fix w and b to 1.0 and 0.0, respectively.
  • The input x must be in [-1, 1].

\[ \phi_n(x) = P_n(x) \quad (n=1,2,\cdots,N-1) \]

  • By using this function, the model can be regressed to a Legendre polynomial function.
  • This function should be used with functools.partial
    (https://docs.python.org/3/library/functools.html#functools.partial) as follows:
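
A minimal sketch, analogous to the Chebyshev example above; N = 5 is an arbitrary example value:

    import functools
    from pompon.layers.activations import legendre_recursive

    # Fix N so the resulting callable takes only x.
    func = functools.partial(legendre_recursive, N=5)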

moderate

layers.activations.moderate(x, ε=0.05)

Moderate activation function

\[ \phi(x) = 1 - e^{-x^2} + \epsilon x^2 \]

polynomial

layers.activations.polynomial(x, N)

Polynomial basis function

Caution
  • This activation is experimental and may not be stable.
  • One should fix w and b to 1.0 and 0.0, respectively.

\[ \phi_n(x) = x^n \quad (n=1,2,\cdots,N-1) \]

  • When \(N\) is too large, this function is numerically unstable.
  • When \(N=1\), it is equivalent to a linear activation function.
  • By using this function, the model can be regressed to a polynomial function.
  • This function should be used with functools.partial
    (https://docs.python.org/3/library/functools.html#functools.partial) as follows:

    import functools
    func = functools.partial(polynomial, N=3)

polynomial_recursive

layers.activations.polynomial_recursive(x, N, k=1)

Calculate polynomial basis recursively

Caution
  • This activation is experimental and may not be stable.
  • One should fix w and b to 1.0 and 0.0, respectively.

\[ \phi_n(x) = x^n = x^{n-1} \cdot x \]

Parameters

Name Type Description Default
x Array input with shape (D, f) where D is the number of data points. required
N int maximum degree of polynomial basis required
k int current degree of polynomial basis 1

Returns

Name Type Description
Array Array output \(\phi = x^N\) with shape (D, f), computed recursively as \(x^N = x^{N-1} \cdot x\)
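
A minimal sketch, assuming the default call returns \(x^N\) built by the recursion above:

    import jax.numpy as jnp
    from pompon.layers.activations import polynomial_recursive

    x = jnp.full((4, 2), 2.0)           # (D, f) = (4, 2)
    phi = polynomial_recursive(x, N=3)  # expected x**3, i.e. all entries 8.0 (assumption)
    print(phi)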

relu

layers.activations.relu(x)

Rectified linear unit activation function

\[ \phi(x) = \max(0, x) \]

Note

This function is not suitable for force field regression because its derivative is discontinuous at \(x = 0\), so the predicted forces (gradients of the model) are not smooth.

silu

layers.activations.silu(x)

Sigmoid linear unit activation function

\[ \phi(x) = x \cdot \sigma(x) = x \cdot \frac{1}{1 + e^{-x}} \]

softplus

layers.activations.softplus(x)

Softplus activation function

\[ \phi(x) = \log(1 + e^x) \]

tanh

layers.activations.tanh(x)

Hyperbolic tangent activation function

\[ \phi(x) = \tanh(x) = \frac{e^x - e^{-x}}{e^x + e^{-x}} \]