legendre_decomp.naive#

Implementations taken from the original Jupyter notebook.

Functions#

kl(P, Q)

Kullback-Leibler divergence.

get_eta(Q, D)

Eta tensor.

get_h(theta, D)

H tensor.

LD(X[, B, order, n_iter, lr, eps, error_tol, ngd, verbose])

Compute many-body tensor approximation.

Module Contents#

legendre_decomp.naive.kl(P, Q)#

Kullback-Leibler divergence.

Parameters:
  • P (numpy.typing.NDArray[numpy.float64]) – P tensor

  • Q (numpy.typing.NDArray[numpy.float64]) – Q tensor

Returns:

KL divergence.

Return type:

numpy.float64
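
The documented behavior can be illustrated with a minimal, self-contained sketch (a plain NumPy implementation of the KL divergence between two normalized, strictly positive tensors — not the library's own code):

```python
import numpy as np

def kl(P, Q):
    # KL(P || Q) = sum P * log(P / Q); assumes both tensors are
    # normalized and strictly positive
    return float(np.sum(P * np.log(P / Q)))

P = np.array([[0.1, 0.2], [0.3, 0.4]])
Q = np.full((2, 2), 0.25)  # uniform reference distribution
d = kl(P, Q)               # non-negative, zero iff P == Q
```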

legendre_decomp.naive.get_eta(Q, D)#

Eta tensor.

Parameters:
  • Q (numpy.typing.NDArray[numpy.float64]) – Q tensor

  • D (int) – Dimensionality

Returns:

Eta tensor.

Return type:

numpy.typing.NDArray[numpy.float64]
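
In the Legendre-decomposition setting, the eta tensor collects tail sums of Q over the index poset, eta(v) = sum of Q(u) over all u >= v. A hedged pure-NumPy sketch of that definition (the library's get_eta also takes a dimensionality argument D, which this version infers from Q.ndim):

```python
import numpy as np

def get_eta(Q):
    # eta(v) = sum of Q(u) over all u >= v on the index grid,
    # computed as a reversed cumulative sum along every axis
    eta = Q.copy()
    for ax in range(Q.ndim):
        eta = np.flip(np.cumsum(np.flip(eta, axis=ax), axis=ax), axis=ax)
    return eta
```

For a normalized Q, the entry of eta at the all-zeros index is the total mass, i.e. 1.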

legendre_decomp.naive.get_h(theta, D)#

H tensor.

Parameters:
  • theta (numpy.typing.NDArray[numpy.float64]) – Theta tensor

  • D (int) – Dimensionality

Returns:

H tensor.

Return type:

numpy.typing.NDArray[numpy.float64]
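
Assuming the H tensor is the cumulative theta field h(v) = sum of theta(u) over all u <= v (so that Q is proportional to exp(h)), as in the Legendre-decomposition literature, a pure-NumPy sketch is the mirror image of get_eta (again inferring D from the array rank):

```python
import numpy as np

def get_h(theta):
    # h(v) = sum of theta(u) over all u <= v, i.e. a forward
    # cumulative sum along every axis; then Q is proportional to exp(h)
    h = theta.copy()
    for ax in range(theta.ndim):
        h = np.cumsum(h, axis=ax)
    return h
```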

legendre_decomp.naive.LD(X, B=None, order=2, n_iter=10, lr=1.0, eps=1e-05, error_tol=1e-05, ngd=True, verbose=True)#

Compute many-body tensor approximation.

Parameters:
  • X (numpy.typing.NDArray[numpy.float64]) – Input tensor.

  • B (numpy.typing.NDArray[numpy.intp] | list[tuple[int, ...]] | None) – B tensor (parameter index set); if None, a default basis of the given order is constructed.

  • order (int) – Order of the default tensor B, used when B is not provided.

  • n_iter (int) – Maximum number of iterations.

  • lr (float) – Learning rate.

  • eps (float) – Small positive constant for numerical stability (see paper).

  • error_tol (float) – KL divergence tolerance used as the stopping criterion for the iteration.

  • ngd (bool) – Use natural gradient.

  • verbose (bool) – Print debug messages.

Returns:
  • all_history_kl – KL divergence history.

  • scaleX – Scaled X tensor.

  • Q – Q tensor.

  • theta – Theta tensor.

Return type:

tuple
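
The overall iteration the docstring describes can be sketched end to end. This is a simplified, self-contained version under several assumptions: plain gradient descent (the ngd=False path), B taken as the full index set rather than a restricted basis, and Q renormalized directly each step instead of solving for the normalizing theta entry; it is not the library's implementation.

```python
import numpy as np

def _h(theta):
    # cumulative theta field: h(v) = sum of theta(u) for u <= v
    h = theta.copy()
    for ax in range(theta.ndim):
        h = np.cumsum(h, axis=ax)
    return h

def _eta(Q):
    # tail sums: eta(v) = sum of Q(u) for u >= v
    eta = Q.copy()
    for ax in range(Q.ndim):
        eta = np.flip(np.cumsum(np.flip(eta, axis=ax), axis=ax), axis=ax)
    return eta

def ld_sketch(X, lr=1.0, n_iter=10, error_tol=1e-5):
    # Scale X to a probability tensor and take its eta as the target.
    scaleX = X / X.sum()
    eta_hat = _eta(scaleX)
    theta = np.zeros_like(scaleX)
    history = []
    for _ in range(n_iter):
        Q = np.exp(_h(theta))
        Q /= Q.sum()  # normalize directly (simplification)
        history.append(float(np.sum(scaleX * np.log(scaleX / Q))))
        if len(history) > 1 and abs(history[-2] - history[-1]) < error_tol:
            break
        # gradient of KL(scaleX || Q) with respect to theta
        theta -= lr * (_eta(Q) - eta_hat)
    return history, scaleX, Q, theta

history, scaleX, Q, theta = ld_sketch(
    np.array([[1.0, 2.0], [3.0, 4.0]]), lr=0.5, n_iter=300
)
```

With the full index set as the basis the model can fit the target exactly, so the KL history decreases toward zero; restricting B yields the low-order many-body approximation instead.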