optimizer.sweeper.sweep

optimizer.sweeper.sweep(
    tt,
    basis,
    y,
    nsweeps=2,
    maxdim=30,
    cutoff=0.01,
    optax_solver=None,
    opt_maxiter=1000,
    opt_tol=None,
    opt_lambda=0.0,
    onedot=False,
    use_CG=False,
    use_scipy=False,
    use_jax_scipy=False,
    method='L-BFGS-B',
    ord='fro',
    auto_onedot=True,
)

Tensor-train sweep optimization: the TT cores are optimized site by site (one-dot) or pairwise (two-dot), sweeping back and forth along the train.

Parameters

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `tt` | `TensorTrain` | The tensor-train model. | required |
| `basis` | `list[Array]` | The basis functions. | required |
| `y` | `Array` | The target values. | required |
| `nsweeps` | `int` | The number of sweeps. | `2` |
| `maxdim` | `(int, list[int])` | The maximum rank for the TT sweep. | `30` |
| `cutoff` | `(float, list[float])` | The ratio of truncated singular values for the TT sweep. Not used when a one-dot core is optimized. | `0.01` |
| `optax_solver` | `optax.GradientTransformation` | The optimizer for the TT sweep. Defaults to `None`, in which case the optax optimizer is not used. | `None` |
| `opt_maxiter` | `int` | The maximum number of iterations for the TT sweep. | `1000` |
| `opt_tol` | `(float, list[float])` | The gradient convergence criterion for the TT sweep. Defaults to `None`, i.e. `opt_tol = cutoff`. | `None` |
| `opt_lambda` | `float` | The L2 regularization parameter for the TT sweep. Supported only when `use_CG=True`. | `0.0` |
| `onedot` | `bool` | Whether to optimize a one-dot or a two-dot core. Defaults to `False`, i.e. two-dot core optimization. | `False` |
| `use_CG` | `bool` | Whether to use the conjugate gradient method for the TT sweep. Defaults to `False`. CG is suitable for one-dot core optimization. | `False` |
| `use_scipy` | `bool` | Whether to use `scipy.optimize.minimize` for the TT sweep. Defaults to `False`. When enabled, the L-BFGS-B method is used (see `method`); GPU is not supported. | `False` |
| `use_jax_scipy` | `bool` | Whether to use `jax.scipy.optimize.minimize` for the TT sweep. Defaults to `False`. This optimizer supports only the BFGS method, which exhausts GPU memory. | `False` |
| `method` | `str` | The optimization method for `scipy.optimize.minimize`. Defaults to `'L-BFGS-B'`. Note that `jax.scipy.optimize.minimize` supports only `'BFGS'`. | `'L-BFGS-B'` |
| `ord` | `str` | The norm for scaling the initial core. Defaults to `'fro'`, the Frobenius norm. `'max'` (maximum absolute value) and `'fro'` (Frobenius norm) are supported. | `'fro'` |
| `auto_onedot` | `bool` | Whether to switch to one-dot core optimization automatically once the maximum rank is reached. Defaults to `True`. This will cause overfitting early in the optimization. | `True` |
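A minimal usage sketch follows. Only the `sweep` call and its keyword arguments come from the signature above; the import path prefix and the construction of `tt`, `basis`, and `y` are assumptions that depend on the rest of the library.

```python
import optax

from optimizer.sweeper import sweep  # import path assumed from the qualified name above

# Assumed to exist already (constructors are library-specific):
#   tt    : TensorTrain  -- the tensor-train model to optimize
#   basis : list[Array]  -- basis functions evaluated per mode
#   y     : Array        -- target values

# Two-dot sweeps with an SVD cutoff, using optax Adam for the core updates.
sweep(
    tt,
    basis,
    y,
    nsweeps=4,                      # a few passes over the train
    maxdim=30,                      # cap on the TT rank during the sweep
    cutoff=1e-2,                    # ratio of truncated singular values
    optax_solver=optax.adam(1e-3),  # gradient-based core optimizer
    opt_maxiter=500,
    auto_onedot=True,               # switch to one-dot once maxdim is reached
)
```

Passing `use_CG=True` together with `onedot=True` instead selects the conjugate gradient path, which also honors `opt_lambda` for L2 regularization.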