Index

Derivative table representation utilities

ndmap.index.build(table: list, sequence: dict, shape: dict, unique: dict) → None

Build derivative table representation from a given reduced representation

Note: the table is assumed to represent a mapping or a scalar-valued function. Note: the input container is modified in place.

Parameters:
  • table (Table) – container

  • sequence (dict[tuple[int, ...], tuple[tuple[int, ...], ...]]) – sequence of monomial indices with repetitions (see index function)

  • shape (dict[tuple[int, ...], tuple[int, ...]]) – output tensor shape

  • unique (dict[tuple[int, ...], Tensor]) – unique values

Return type:

None

Examples

>>> import torch
>>> from ndmap.derivative import derivative
>>> from ndmap.index import build, reduce
>>> def fn(x, y): x1, x2 = x; y1, = y; return torch.stack([x1*y1 + x2, x2**2]).sum()
>>> x = torch.tensor([0.0, 0.0])
>>> y = torch.tensor([0.0])
>>> t = derivative((2, 2), fn, x, y)
>>> s = derivative((2, 2), lambda x, y: x.sum(), x, y)
>>> build(s, *reduce((2, 1), t))
>>> s
[[tensor(0.), tensor([0.]), tensor([[0.]])],
 [tensor([0., 1.]), tensor([[1., 0.]]), tensor([[[0., 0.]]])],
 [tensor([[0., 0.],
          [0., 2.]]),
  tensor([[[0., 0.],
           [0., 0.]]]),
  tensor([[[[0., 0.],
            [0., 0.]]]])]]
>>> import torch
>>> from ndmap.util import equal
>>> from ndmap.derivative import derivative
>>> from ndmap.index import build, reduce
>>> def fn(x, y): x1, x2 = x; y1, = y; return torch.stack([x1*y1 + x2, x2**2])
>>> x = torch.tensor([0.0, 0.0])
>>> y = torch.tensor([0.0])
>>> t = derivative((2, 2), fn, x, y)
>>> s = derivative((2, 2), lambda x, y: x, x, y)
>>> build(s, *reduce((2, 1), t))
>>> equal(t, s)
True
ndmap.index.index(dimension: tuple[int, ...], order: tuple[int, ...]) → numpy.ndarray

Generate monomial index table with repetitions for given dimensions and corresponding orders

Note: the output length is product(dimension**order)

Parameters:
  • dimension (tuple[int, ...], positive) – monomial dimensions

  • order (tuple[int, ...], non-negative) – derivative orders (total monomial degrees)

Returns:

monomial index table with repetitions

Return type:

Array

Example

>>> from ndmap.index import index
>>> index((2, 2), (3, 1))
array([[3, 0, 1, 0],
       [3, 0, 0, 1],
       [2, 1, 1, 0],
       [2, 1, 0, 1],
       [2, 1, 1, 0],
       [2, 1, 0, 1],
       [1, 2, 1, 0],
       [1, 2, 0, 1],
       [2, 1, 1, 0],
       [2, 1, 0, 1],
       [1, 2, 1, 0],
       [1, 2, 0, 1],
       [1, 2, 1, 0],
       [1, 2, 0, 1],
       [0, 3, 1, 0],
       [0, 3, 0, 1]])
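As a sanity check on the note above, the output length can be computed directly; a minimal sketch using only the standard library, with the dimensions and orders taken from the example:

```python
import math

# From the example above: two blocks, each of dimension 2,
# with derivative orders 3 and 1 respectively.
dimension = (2, 2)
order = (3, 1)

# Output length is product(dimension**order): 2**3 * 2**1 = 16,
# matching the 16 rows returned by index((2, 2), (3, 1)).
length = math.prod(d ** o for d, o in zip(dimension, order))
print(length)  # 16
```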
ndmap.index.reduce(dimension: tuple[int, ...], table: list) → tuple[dict, dict, dict]

Generate reduced representation of a given derivative table

Note: the table is assumed to represent a mapping or a scalar-valued function

Parameters:
  • dimension (tuple[int, ...]) – table derivative dimension

  • table (Table) – input derivative table

Returns:

(sequence, shape, unique)

Return type:

tuple[dict, dict, dict]

Examples

>>> import torch
>>> from ndmap.derivative import derivative
>>> from ndmap.index import reduce
>>> def fn(x, y): x1, x2 = x; y1, = y; return torch.stack([x1*y1 + x2, x2**2]).sum()
>>> x = torch.tensor([0.0, 0.0])
>>> y = torch.tensor([0.0])
>>> t = derivative((2, 1), fn, x, y)
>>> t
[[tensor(0.), tensor([0.])],
 [tensor([0., 1.]), tensor([[1., 0.]])],
 [tensor([[0., 0.],
          [0., 2.]]),
  tensor([[[0., 0.],
           [0., 0.]]])]]
>>> sequence, shape, unique = reduce((2, 1), t)
>>> sequence
{(0, 0): ((0, 0, 0),),
 (0, 1): ((0, 0, 1),),
 (1, 0): ((1, 0, 0), (0, 1, 0)),
 (1, 1): ((1, 0, 1), (0, 1, 1)),
 (2, 0): ((2, 0, 0), (1, 1, 0), (1, 1, 0), (0, 2, 0)),
 (2, 1): ((2, 0, 1), (1, 1, 1), (1, 1, 1), (0, 2, 1))}
>>> shape
{(0, 0): torch.Size([]),
 (0, 1): torch.Size([1]),
 (1, 0): torch.Size([2]),
 (1, 1): torch.Size([1, 2]),
 (2, 0): torch.Size([2, 2]),
 (2, 1): torch.Size([1, 2, 2])}
>>> unique
{(0, 0, 0): tensor(0.),
 (0, 0, 1): tensor(0.),
 (1, 0, 0): tensor(0.),
 (0, 1, 0): tensor(1.),
 (1, 0, 1): tensor(1.),
 (0, 1, 1): tensor(0.),
 (2, 0, 0): tensor(0.),
 (1, 1, 0): tensor(0.),
 (0, 2, 0): tensor(2.),
 (2, 0, 1): tensor(0.),
 (1, 1, 1): tensor(0.),
 (0, 2, 1): tensor(0.)}
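For this example, the three dictionaries fit together as follows: taking the unique values in the order listed by sequence and reshaping to the recorded shape recovers the corresponding table entry. A plain-Python sketch for the printed (2, 0) entry (torch is avoided here so the snippet stands alone):

```python
# Values copied from the printed output above for signature (2, 0).
sequence_20 = ((2, 0, 0), (1, 1, 0), (1, 1, 0), (0, 2, 0))
shape_20 = (2, 2)
unique = {(2, 0, 0): 0.0, (1, 1, 0): 0.0, (0, 2, 0): 2.0}

# Collect the unique values in sequence order, then reshape to 2x2;
# note (1, 1, 0) appears twice, once per mixed-partial position.
flat = [unique[key] for key in sequence_20]
rows, cols = shape_20
entry = [flat[r * cols:(r + 1) * cols] for r in range(rows)]
print(entry)  # [[0.0, 0.0], [0.0, 2.0]]
```

This matches the tensor([[0., 0.], [0., 2.]]) entry of the derivative table printed above.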
>>> import torch
>>> from ndmap.derivative import derivative
>>> from ndmap.index import reduce
>>> def fn(x, y): x1, x2 = x; y1, = y; return torch.stack([x1*y1 + x2, x2**2])
>>> x = torch.tensor([0.0, 0.0])
>>> y = torch.tensor([0.0])
>>> t = derivative((2, 1), fn, x, y)
>>> sequence, shape, unique = reduce((2, 1), t)
>>> sequence
{(0, 0): ((0, 0, 0),),
 (0, 1): ((0, 0, 1),),
 (1, 0): ((1, 0, 0), (0, 1, 0)),
 (1, 1): ((1, 0, 1), (0, 1, 1)),
 (2, 0): ((2, 0, 0), (1, 1, 0), (1, 1, 0), (0, 2, 0)),
 (2, 1): ((2, 0, 1), (1, 1, 1), (1, 1, 1), (0, 2, 1))}
>>> shape
{(0, 0): torch.Size([2]),
 (0, 1): torch.Size([2, 1]),
 (1, 0): torch.Size([2, 2]),
 (1, 1): torch.Size([2, 1, 2]),
 (2, 0): torch.Size([2, 2, 2]),
 (2, 1): torch.Size([2, 1, 2, 2])}
>>> unique
{(0, 0, 0): tensor([0., 0.]),
 (0, 0, 1): tensor([0., 0.]),
 (1, 0, 0): tensor([0., 0.]),
 (0, 1, 0): tensor([1., 0.]),
 (1, 0, 1): tensor([1., 0.]),
 (0, 1, 1): tensor([0., 0.]),
 (2, 0, 0): tensor([0., 0.]),
 (1, 1, 0): tensor([0., 0.]),
 (0, 2, 0): tensor([0., 2.]),
 (2, 0, 1): tensor([0., 0.]),
 (1, 1, 1): tensor([0., 0.]),
 (0, 2, 1): tensor([0., 0.])}
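Comparing the two reductions above, the vector-valued case differs from the scalar one only in that every recorded shape gains a leading output dimension (here 2). A small check, with both shape tables copied from the printed outputs as plain tuples:

```python
# Shapes copied from the scalar-valued and vector-valued outputs above.
scalar_shape = {(0, 0): (), (0, 1): (1,), (1, 0): (2,),
                (1, 1): (1, 2), (2, 0): (2, 2), (2, 1): (1, 2, 2)}
vector_shape = {(0, 0): (2,), (0, 1): (2, 1), (1, 0): (2, 2),
                (1, 1): (2, 1, 2), (2, 0): (2, 2, 2), (2, 1): (2, 1, 2, 2)}

# Prepending the output dimension to each scalar shape reproduces
# the vector-valued shapes exactly.
print(all(vector_shape[key] == (2, *value)
          for key, value in scalar_shape.items()))  # True
```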