Inverse

Compute derivative table inverse

ndmap.inverse.inverse(order: tuple[int, ...], state: torch.Tensor, knobs: list[torch.Tensor], data: list, *, solve: Callable | None = None, jacobian: Callable | None = None) → list

Compute inverse of input derivative table

Note: the input table is assumed to represent a mapping that maps (parametric) zero to (parametric) zero. The input state and knobs are deviation variables and should be equal to zero.

Parameters:
  • order (tuple[int, ...]) – computation order

  • state (State) – state fixed point

  • knobs (Knobs) – knobs value

  • data (Table) – input derivative table

  • solve (Optional[Callable]) – linear solver(matrix, vector)

  • jacobian (Optional[Callable]) – torch.func.jacfwd (default) or torch.func.jacrev

Return type:

Table

Examples

>>> import torch
>>> from ndmap.derivative import derivative
>>> from ndmap.inverse import inverse
>>> from ndmap.propagate import propagate
>>> def fn(x):
...     q, p = x
...     return torch.stack([q, p + q + q**2])
>>> x = torch.tensor([0.0, 0.0], dtype=torch.float64)
>>> t = derivative((2, ), fn, x)
>>> inverse((2, ), x, [], t)
[[],
tensor([[ 1.,  0.],
        [-1.,  1.]], dtype=torch.float64),
tensor([[[ 0.,  0.],
         [ 0.,  0.]],

        [[-2.,  0.],
         [ 0.,  0.]]], dtype=torch.float64)]

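As a quick cross-check (a minimal sketch using only plain torch calls, not part of the ndmap API), the first-order block of the inverse table should equal the matrix inverse of the Jacobian of fn at the fixed point:

>>> import torch
>>> def fn(x):
...     q, p = x
...     return torch.stack([q, p + q + q**2])
>>> x = torch.tensor([0.0, 0.0], dtype=torch.float64)
>>> m = torch.func.jacfwd(fn)(x)
>>> torch.linalg.inv(m)
tensor([[ 1.,  0.],
        [-1.,  1.]], dtype=torch.float64)
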
>>> import torch
>>> from ndmap.derivative import derivative
>>> from ndmap.inverse import inverse
>>> from ndmap.propagate import propagate
>>> def fn(x, k):
...     q, p = x
...     a, b = k
...     return torch.stack([q, p + (1 + a)*q + (1 + b)*q**2])
>>> x = torch.tensor([0.0, 0.0], dtype=torch.float64)
>>> k = torch.tensor([0.0, 0.0], dtype=torch.float64)
>>> t = derivative((2, 1), fn, x, k)
>>> inverse((2, 1), x, [k], t)
[[[], []],
[tensor([[ 1.,  0.],
         [-1.,  1.]], dtype=torch.float64),
tensor([[[ 0.,  0.],
         [ 0.,  0.]],

        [[-1.,  0.],
         [ 0.,  0.]]], dtype=torch.float64)],
[tensor([[[ 0.,  0.],
          [ 0.,  0.]],

         [[-2.,  0.],
          [ 0.,  0.]]], dtype=torch.float64),
tensor([[[[ 0.,  0.],
          [ 0.,  0.]],

         [[ 0.,  0.],
          [ 0.,  0.]]],


        [[[ 0.,  0.],
          [ 0.,  0.]],

         [[-2.,  0.],
          [ 0.,  0.]]]], dtype=torch.float64)]]
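
The keyword-only arguments can be used to swap the defaults. A minimal sketch (reusing the mapping from the first example; torch.linalg.solve and torch.func.jacrev are standard torch calls used here for illustration, not ndmap internals):

>>> import torch
>>> from ndmap.derivative import derivative
>>> from ndmap.inverse import inverse
>>> def fn(x):
...     q, p = x
...     return torch.stack([q, p + q + q**2])
>>> x = torch.tensor([0.0, 0.0], dtype=torch.float64)
>>> t = derivative((2, ), fn, x)
>>> s = inverse((2, ), x, [], t,
...             solve=lambda matrix, vector: torch.linalg.solve(matrix, vector),
...             jacobian=torch.func.jacrev)

The result s should coincide with the output of the first example above.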