legendre_decomp

Functions

  • LD(X[, B, order, n_iter, lr, eps, error_tol, ngd, ...]) – Compute a many-body tensor approximation.

  • LD_MBA(X[, I, order, n_iter, lr, eps, error_tol, ...]) – Compute a many-body tensor approximation.

Package Contents

legendre_decomp.LD(X, B=None, order=2, n_iter=10, lr=1.0, eps=1e-05, error_tol=1e-05, ngd=True, ngd_lstsq=False, verbose=True, gpu=True, exit_abs=False, dtype=None)

Compute a many-body tensor approximation.

Parameters:
  • X (numpy.typing.NDArray[numpy.float64]) – Input tensor.

  • B (numpy.typing.NDArray[numpy.intp] | list[tuple[int, Ellipsis]] | None) – B tensor. If None, a default B tensor of the given order is constructed.

  • order (int) – Order of the default B tensor, used when B is not provided.

  • n_iter (int) – Maximum number of iterations.

  • lr (float) – Learning rate.

  • eps (float) – Epsilon parameter (see the paper for details).

  • error_tol (float) – KL-divergence tolerance used as the stopping criterion for the iteration.

  • ngd (bool) – Use natural gradient.

  • ngd_lstsq (bool) – Use natural gradient computed via lstsq to avoid singular matrices.

  • verbose (bool) – Print debug messages.

  • gpu (bool) – Use GPU (CUDA or ROCm depending on the installed CuPy version).

  • exit_abs (bool) – If True, use abs(kl - kl_prev) as the iteration exit criterion instead of kl - kl_prev, which the previous implementation used (possibly incorrectly).

  • dtype (numpy.dtype | None) – Data type used for computation. By default, it is inferred from the input data.

Returns:
  • all_history_kl – KL divergence history.

  • scaleX – Scaled X tensor.

  • Q – Q tensor.

  • theta – Theta parameters.
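
A minimal usage sketch for LD (not taken from the package itself): it assumes the four values listed under Returns come back as the tuple (all_history_kl, scaleX, Q, theta) and that the input tensor is non-negative; the tensor shape and keyword values are illustrative only.

```python
# Hypothetical usage sketch for legendre_decomp.LD; it assumes the return value
# is the tuple (all_history_kl, scaleX, Q, theta) listed under "Returns".
import numpy as np

import legendre_decomp

# Illustrative non-negative 3-mode input tensor.
rng = np.random.default_rng(0)
X = rng.random((4, 5, 6))

# Default second-order interaction tensor B, natural gradient, CPU only.
all_history_kl, scaleX, Q, theta = legendre_decomp.LD(
    X,
    order=2,
    n_iter=50,
    lr=1.0,
    error_tol=1e-6,
    ngd=True,
    gpu=False,
    verbose=False,
)

print("final KL divergence:", all_history_kl[-1])
print("approximation Q shape:", Q.shape)
```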

legendre_decomp.LD_MBA(X, I=None, order=2, n_iter=100, lr=1.0, eps=1e-05, error_tol=1e-05, init_theta=None, init_theta_mask=None, ngd=True, ngd_lstsq=True, verbose=True, gpu=True, dtype=None)

Compute a many-body tensor approximation.

Parameters:
  • X (numpy.typing.NDArray[numpy.float64]) – Input tensor.

  • I (List[Tuple[int, Ellipsis]] | None) – A list of index tuples representing slices with nonzero elements in the parameter tensor, e.g. [(0, 1), (2,), (1, 3)].

  • n_iter (int) – Maximum number of iterations.

  • lr (float) – Learning rate.

  • eps (float) – Epsilon parameter (see the paper for details).

  • error_tol (float) – KL-divergence tolerance used as the stopping criterion for the iteration.

  • ngd (bool) – Use natural gradient.

  • ngd_lstsq (bool) – Use natural gradient computed via lstsq to avoid singular matrices.

  • verbose (bool) – Print debug messages.

  • order (int) – Order of the default parameter tensor, used when I is not provided.

  • init_theta (numpy.typing.NDArray[numpy.float64] | None) – Initial theta values.

  • init_theta_mask (numpy.typing.NDArray[numpy.float64] | None) – Mask for the initial theta values.

  • gpu (bool) – Use GPU (CUDA or ROCm depending on the installed CuPy version).

  • dtype (numpy.dtype | None) – Data type used for computation. By default, it is inferred from the input data.

Returns:
  • all_history_kl – KL divergence history.

  • scaleX – Scaled X tensor.

  • Q – Q tensor.

  • theta – Theta parameters.
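
A corresponding sketch for LD_MBA, under the same assumption about the returned tuple; the index list I follows the format shown for the I parameter, and the specific tuples and tensor shape are illustrative only.

```python
# Hypothetical usage sketch for legendre_decomp.LD_MBA; it assumes the same
# (all_history_kl, scaleX, Q, theta) return tuple as LD.
import numpy as np

import legendre_decomp

rng = np.random.default_rng(1)
X = rng.random((4, 5, 6))  # illustrative non-negative 3-mode tensor

# Slices of the parameter tensor allowed to be nonzero: the two-body
# slices (0, 1) and (1, 2) plus the one-body slice (2,).
I = [(0, 1), (1, 2), (2,)]

all_history_kl, scaleX, Q, theta = legendre_decomp.LD_MBA(
    X,
    I=I,
    n_iter=100,
    lr=1.0,
    ngd=True,
    ngd_lstsq=True,
    gpu=False,
    verbose=False,
)

print("final KL divergence:", all_history_kl[-1])
```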