legendre_decomp.module_mba#

CUDA-enabled Legendre decomposition calculations.

Functions#

xp_get(val)

kl(P, Q[, xp])

Kullback-Leibler divergence.

get_eta(Q, D[, xp])

Eta tensor.

get_h(theta, D[, xp])

H tensor.

get_q(h[, gpu, xp])

Q tensor.

get_slice(key, D)

initialize_theta(keys, shape[, theta0_flag, xp])

compute_G(eta, mask[, xp])

LD_MBA(X[, I, order, n_iter, lr, eps, error_tol, ...])

Compute many-body tensor approximation.

get_weight(shape[, I_x, order, xp])

compute_nbody(theta, shape[, I_x, order, dtype, gpu, ...])

recons_nbody(X_out, D[, rescale, dtype, gpu])

Module Contents#

legendre_decomp.module_mba.xp_get(val)#
legendre_decomp.module_mba.kl(P, Q, xp=cp)#

Kullback-Leibler divergence.

Parameters:
  • P (numpy.typing.NDArray[numpy.float64]) – P tensor

  • Q (numpy.typing.NDArray[numpy.float64]) – Q tensor

  • xp (ModuleType) – Array module, either numpy (CPU) or cupy

Returns:

KL divergence.

Return type:

numpy.float64
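
For intuition, a minimal NumPy sketch of the same computation (assuming P and Q are already normalized, strictly positive tensors; passing xp=cupy keeps the computation on the GPU):

```python
import numpy as np

def kl(P, Q, xp=np):
    # KL(P || Q) = sum over all cells of P * log(P / Q);
    # assumes both tensors are normalized and strictly positive
    return float(xp.sum(P * xp.log(P / Q)))

P = np.array([[0.5, 0.25], [0.125, 0.125]])
Q = np.full((2, 2), 0.25)   # uniform reference distribution
d = kl(P, Q)                # equals 0.25 * ln(2)
```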

legendre_decomp.module_mba.get_eta(Q, D, xp=cp)#

Eta tensor.

Parameters:
  • Q (numpy.typing.NDArray[numpy.float64]) – Q tensor

  • D (int) – Dimensionality

  • xp (ModuleType) – Array module, either numpy (CPU) or cupy

Returns:

Eta tensor.

Return type:

numpy.typing.NDArray[numpy.float64]
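
In Legendre decomposition the eta coordinates are tail sums of Q over the index poset. A 2-D sketch under that assumption (the library version additionally takes the dimensionality D and the array module xp):

```python
import numpy as np

def get_eta(Q):
    # eta[i, j] = sum of Q[i', j'] over all i' >= i and j' >= j,
    # computed as a cumulative sum over the reversed axes
    rev = Q[::-1, ::-1]
    return np.cumsum(np.cumsum(rev, axis=0), axis=1)[::-1, ::-1]

Q = np.full((2, 2), 0.25)   # uniform distribution on a 2x2 tensor
eta = get_eta(Q)
# eta[0, 0] sums the whole tensor, so it is always 1
```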

legendre_decomp.module_mba.get_h(theta, D, xp=cp)#

H tensor.

Parameters:
  • theta (numpy.typing.NDArray[numpy.float64]) – Theta tensor

  • D (int) – Dimensionality

  • xp (ModuleType) – Array module, either numpy (CPU) or cupy

Returns:

H tensor.

Return type:

numpy.typing.NDArray[numpy.float64]
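
In Legendre decomposition, h is typically the cumulative sum of theta over the index poset (log Q up to normalization). A 2-D sketch under that assumption:

```python
import numpy as np

def get_h(theta):
    # h[i, j] = sum of theta[i', j'] over all i' <= i and j' <= j
    return np.cumsum(np.cumsum(theta, axis=0), axis=1)

theta = np.zeros((2, 2))
theta[0, 1] = 1.0
h = get_h(theta)
# theta[0, 1] contributes to every h[i, j] with i >= 0 and j >= 1
```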

legendre_decomp.module_mba.get_q(h, gpu=True, xp=cp)#

Q tensor.

Parameters:
  • h (numpy.typing.NDArray[numpy.float64]) – H tensor

  • gpu (bool)

  • xp (ModuleType) – Array module, either numpy (CPU) or cupy

Returns:

Updated Q.

Return type:

numpy.typing.NDArray[numpy.float64]
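
Q is the normalized exponential of h. A sketch (subtracting the maximum before exponentiating is an assumption for numerical stability, not necessarily what the library does):

```python
import numpy as np

def get_q(h):
    # normalized exponential family distribution over the tensor cells
    e = np.exp(h - h.max())   # shift by the max to avoid overflow
    return e / e.sum()

h = np.zeros((2, 2))
Q = get_q(h)                  # zero h gives the uniform distribution
```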

legendre_decomp.module_mba.get_slice(key, D)#
legendre_decomp.module_mba.initialize_theta(keys, shape, theta0_flag=False, xp=cp)#
Parameters:

xp (types.ModuleType) – Array module, either numpy (CPU) or cupy

legendre_decomp.module_mba.compute_G(eta, mask, xp=cp)#
Parameters:

xp (types.ModuleType) – Array module, either numpy (CPU) or cupy
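
For the natural-gradient step, G is the Fisher information of the exponential family: for index tuples u and v it is typically eta at the componentwise maximum of u and v, minus the product of the individual etas. A sketch using an explicit key list in place of the library's mask argument (an assumption about the interface):

```python
import numpy as np

def compute_G(eta, keys):
    # Fisher information over the free parameters:
    #   G[a, b] = eta[u v-join] - eta[u] * eta[v]
    # where the join is the componentwise maximum of index tuples u, v
    n = len(keys)
    G = np.empty((n, n))
    for a, u in enumerate(keys):
        for b, v in enumerate(keys):
            w = tuple(max(i, j) for i, j in zip(u, v))
            G[a, b] = eta[w] - eta[u] * eta[v]
    return G

eta = np.array([[1.0, 0.5], [0.5, 0.25]])   # eta of the uniform 2x2 distribution
keys = [(0, 1), (1, 0), (1, 1)]             # free parameters; (0, 0) is fixed by normalization
G = compute_G(eta, keys)                    # symmetric positive semidefinite
```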

legendre_decomp.module_mba.LD_MBA(X, I=None, order=2, n_iter=100, lr=1.0, eps=1e-05, error_tol=1e-05, init_theta=None, init_theta_mask=None, ngd=True, ngd_lstsq=True, verbose=True, gpu=True, dtype=None)#

Compute many-body tensor approximation.

Parameters:
  • X (numpy.typing.NDArray[numpy.float64]) – Input tensor.

  • I (List[Tuple[int, Ellipsis]] | None) – A list of index tuples that represent slices with nonzero elements in the parameter tensor, e.g. [(0,1),(2,),(1,3)]

  • n_iter (int) – Maximum number of iterations.

  • lr (float) – Learning rate.

  • eps (float) – (see paper).

  • error_tol (float) – KL divergence tolerance for the iteration.

  • ngd (bool) – Use natural gradient.

  • verbose (bool) – Print debug messages.

  • order (int)

  • init_theta (numpy.typing.NDArray[numpy.float64] | None)

  • init_theta_mask (numpy.typing.NDArray[numpy.float64] | None)

  • gpu (bool)

  • dtype (numpy.dtype | None)

Returns:

Tuple (all_history_kl, scaleX, Q, theta) – KL divergence history, scaled X tensor, Q tensor, and theta tensor.

Return type:

tuple
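
Putting the pieces together, a minimal plain-gradient sketch of the iteration for a 2-D tensor with an unrestricted theta (LD_MBA additionally restricts theta to the slices given by I and order, can precondition the step with the Fisher matrix when ngd=True, and runs on the GPU via cupy):

```python
import numpy as np

def get_h(theta):
    # cumulative sum of theta over the index poset
    return np.cumsum(np.cumsum(theta, axis=0), axis=1)

def get_q(h):
    # normalized exponential of h
    e = np.exp(h - h.max())
    return e / e.sum()

def get_eta(Q):
    # tail sums of Q over the index poset
    return np.cumsum(np.cumsum(Q[::-1, ::-1], axis=0), axis=1)[::-1, ::-1]

def kl(P, Q):
    return float(np.sum(P * np.log(P / Q)))

rng = np.random.default_rng(0)
P = rng.random((3, 3))
P /= P.sum()                 # analogue of scaleX: normalize the input tensor
eta_P = get_eta(P)

theta = np.zeros(P.shape)    # analogue of init_theta: start from the uniform Q
history = []
for _ in range(2000):
    Q = get_q(get_h(theta))
    history.append(kl(P, Q))
    # plain gradient step on KL(P || Q); the gradient in theta is eta_Q - eta_P
    theta -= 0.5 * (get_eta(Q) - eta_P)
```

With the full (unrestricted) theta the model can represent P exactly, so the KL history decays toward zero; restricting theta to low-order slices yields the many-body approximation instead.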

legendre_decomp.module_mba.get_weight(shape, I_x=None, order=2, xp=cp)#
Parameters:

xp (types.ModuleType) – Array module, either numpy (CPU) or cupy

legendre_decomp.module_mba.compute_nbody(theta, shape, I_x=None, order=2, dtype=None, gpu=True, verbose=False)#
legendre_decomp.module_mba.recons_nbody(X_out, D, rescale=True, dtype=None, gpu=True)#