legendre_decomp.module_mixture_mba

Classes

LDComponent

Functions

xp_get(val)

mixQ(components[, xp])

MixLD_MBA(X, components[, n_round, n_iter, lr, eps, ...]) – Compute many-body tensor approximation.

Module Contents

legendre_decomp.module_mixture_mba.xp_get(val)
class legendre_decomp.module_mixture_mba.LDComponent
I: List[Tuple[int, ...]]
theta: numpy.typing.NDArray[numpy.float64] | None = None
theta_mask: numpy.typing.NDArray[numpy.float64] | None = None
Q: numpy.typing.NDArray[numpy.float64] | None = None
gamma: numpy.typing.NDArray[numpy.float64] | None = None
pi: float = 1.0
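
A minimal construction sketch, assuming LDComponent is a dataclass whose only required field is I; the shapes and values below are illustrative assumptions, not part of the documented API.

    import numpy as np
    from legendre_decomp.module_mixture_mba import LDComponent

    # One component whose parameter tensor has nonzero slices on the
    # index tuples (0, 1) and (2,); the other fields keep their defaults.
    comp = LDComponent(I=[(0, 1), (2,)])

    # Optional fields such as Q and pi can be set afterwards, e.g. a
    # normalized Q tensor and a mixture weight (illustrative values).
    Q = np.random.rand(4, 4, 4)
    comp.Q = Q / Q.sum()
    comp.pi = 0.5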
legendre_decomp.module_mixture_mba.mixQ(components, xp=cp)
Parameters:

xp (types.ModuleType)
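
A usage sketch under the assumptions that the default cp refers to CuPy, that components is a list of LDComponent instances with Q and pi populated, and that the return value is the mixture of the components' Q tensors; none of this is confirmed by the signature alone.

    import numpy as np
    from legendre_decomp.module_mixture_mba import LDComponent, mixQ

    # Two components with illustrative normalized Q tensors and mixture weights.
    Q1, Q2 = np.random.rand(4, 4, 4), np.random.rand(4, 4, 4)
    c1 = LDComponent(I=[(0, 1)], Q=Q1 / Q1.sum(), pi=0.6)
    c2 = LDComponent(I=[(1, 2)], Q=Q2 / Q2.sum(), pi=0.4)

    # Pass numpy as xp to compute on the CPU instead of the CuPy default.
    Q_mix = mixQ([c1, c2], xp=np)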

legendre_decomp.module_mixture_mba.MixLD_MBA(X, components, n_round=300, n_iter=100, lr=1.0, eps=1e-05, error_tol=1e-05, em_tol=1e-05, ngd=True, ngd_lstsq=True, verbose=True, verbose_ld=True, gpu=True, dtype=None)

Compute many-body tensor approximation.

Parameters:
  • X (numpy.typing.NDArray[numpy.float64]) – Input tensor.

  • components (LDComponent) – Mixture components. Each component's I field is a list of index tuples that represent slices with nonzero elements in the parameter tensor, e.g. [(0,1),(2,),(1,3)].

  • n_round (int) – Maximum number of EM rounds.

  • n_iter (int) – Maximum number of iterations.

  • lr (float) – Learning rate.

  • eps (float) – Small constant (see paper).

  • error_tol (float) – KL divergence tolerance for the iterations.

  • em_tol (float) – KL divergence tolerance for the EM rounds.

  • ngd (bool) – Use natural gradient.

  • ngd_lstsq (bool)

  • verbose (bool) – Print debug messages.

  • verbose_ld (bool) – Print debug messages from the Legendre decomposition steps.

  • gpu (bool)

  • dtype (numpy.dtype | None)

Returns:
  • all_history_kl – KL divergence history.

  • scaleX – Scaled X tensor.

  • Q – Q tensor.

  • theta – Theta tensor.
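
A minimal end-to-end sketch. It assumes LDComponent is a dataclass, that MixLD_MBA returns the four values listed above as a tuple in that order, and uses a small random tensor and arbitrary index tuples purely for illustration.

    import numpy as np
    from legendre_decomp.module_mixture_mba import LDComponent, MixLD_MBA

    # Small nonnegative input tensor (illustrative shape).
    rng = np.random.default_rng(0)
    X = rng.random((8, 8, 8))

    # Two mixture components; I lists the index tuples of the parameter
    # slices each component uses (format as in the docstring example).
    components = [
        LDComponent(I=[(0, 1), (2,)], pi=0.5),
        LDComponent(I=[(1, 2), (0,)], pi=0.5),
    ]

    # Assumed to return the four values documented above as a tuple.
    all_history_kl, scaleX, Q, theta = MixLD_MBA(
        X,
        components,
        n_round=50,
        n_iter=20,
        gpu=False,       # stay on NumPy/CPU for this sketch
        verbose=False,
        verbose_ld=False,
    )

    print("KL history length:", len(all_history_kl))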