Parametric fixed point
Computation of dynamic and parametric fixed points
- ndmap.pfp.chain_point(power: int, function: Callable, point: torch.Tensor, *pars: tuple) → torch.Tensor [source]
Generate the chain (orbit) of a given fixed point
Note: can be mapped over the point argument
- Parameters:
power (int, positive) – function power
function (Mapping) – input function
point (Tensor) – fixed point
*pars (tuple) – additional function arguments
- Return type:
Tensor
Examples
>>> from math import pi
>>> import torch
>>> from ndmap.util import nest
>>> mu = 2.0*pi*torch.tensor(1/3 - 0.01)
>>> kq, ks, ko = torch.tensor([0.0, 0.25, -0.25])
>>> def mapping(x):
...     q, p = x
...     q, p = q*mu.cos() + p*mu.sin(), p*mu.cos() - q*mu.sin()
...     q, p = q, p + (kq*q + ks*q**2 + ko*q**3)
...     return torch.stack([q, p])
>>> xi = 2.0*torch.rand((128, 2), dtype=torch.float64) - 1.0
>>> fp = torch.func.vmap(lambda x: fixed_point(32, mapping, x, power=3))(xi)
>>> fp = clean_point(3, mapping, fp, epsilon=1.0E-12)
>>> torch.func.vmap(lambda x: chain_point(3, mapping, x))(fp).shape
torch.Size([2, 3, 2])
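A follow-on sketch (not part of the library's doctests), reusing mapping, fp, chain_point, and nest from the example above. Assuming chain_point returns the points of the period-3 orbit, every point in each returned chain should itself be a fixed point of the threefold map:

    import torch
    from ndmap.util import nest

    # one chain per cleaned candidate, shape (candidates, period, dimension)
    chain = torch.func.vmap(lambda x: chain_point(3, mapping, x))(fp)
    for orbit in chain:
        for point in orbit:
            # each orbit point closes after three applications of the map
            assert torch.allclose(point, nest(3, mapping)(point), atol=1.0E-9)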
- ndmap.pfp.check_point(power: int, function: Callable, point: torch.Tensor, *pars: tuple, epsilon: float = 1e-12) → bool [source]
Check that a fixed point candidate has the given prime period
- Parameters:
power (int, positive) – function power / prime period
function (Mapping) – input function
point (Tensor) – fixed point candidate
*pars (tuple) – additional function arguments
epsilon (float, default=1.0E-12) – tolerance epsilon
- Return type:
bool
Examples
>>> from math import pi
>>> import torch
>>> from ndmap.util import nest
>>> mu = 2.0*pi*torch.tensor(1/3 - 0.01)
>>> kq, ks, ko = torch.tensor([0.0, 0.25, -0.25])
>>> def mapping(x):
...     q, p = x
...     q, p = q*mu.cos() + p*mu.sin(), p*mu.cos() - q*mu.sin()
...     q, p = q, p + (kq*q + ks*q**2 + ko*q**3)
...     return torch.stack([q, p])
>>> xi = torch.tensor([0.00, 0.00])
>>> fp = fixed_point(32, mapping, xi, power=3)
>>> torch.allclose(fp, nest(3, mapping)(fp))
True
>>> check_point(3, mapping, fp, epsilon=1.0E-6)
False
>>> xi = torch.tensor([1.25, 0.00])
>>> fp = fixed_point(32, mapping, xi, power=3)
>>> torch.allclose(fp, nest(3, mapping)(fp))
True
>>> check_point(3, mapping, fp, epsilon=1.0E-6)
True
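For intuition only (this is not necessarily how check_point is implemented): a prime-period test amounts to requiring that the candidate returns to itself after power iterations but not after any proper divisor of power. The helper has_prime_period below is hypothetical:

    import torch
    from ndmap.util import nest

    def has_prime_period(power, mapping, point, epsilon=1.0E-12):
        # must return to itself after `power` iterations
        if not torch.allclose(point, nest(power, mapping)(point), atol=epsilon):
            return False
        # and must not return after any proper divisor of `power`
        divisors = [q for q in range(1, power) if power % q == 0]
        return not any(
            torch.allclose(point, nest(q, mapping)(point), atol=epsilon)
            for q in divisors
        )

This matches the doctest above: the origin closes after three iterations but already has period one, so it is rejected, while the nonzero candidate is accepted.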
- ndmap.pfp.clean_point(power: int, function: Callable, point: torch.Tensor, *pars: tuple, epsilon: float = 1e-12) → torch.Tensor [source]
Clean fixed point candidates
- Parameters:
power (int, positive) – function power / prime period
function (Mapping) – input function
point (Tensor) – fixed point candidates
*pars (tuple) – additional function arguments
epsilon (float, optional, default=1.0E-12) – tolerance epsilon
- Return type:
Tensor
Examples
>>> from math import pi
>>> import torch
>>> from ndmap.util import nest
>>> mu = 2.0*pi*torch.tensor(1/3 - 0.01)
>>> kq, ks, ko = torch.tensor([0.0, 0.25, -0.25])
>>> def mapping(x):
...     q, p = x
...     q, p = q*mu.cos() + p*mu.sin(), p*mu.cos() - q*mu.sin()
...     q, p = q, p + (kq*q + ks*q**2 + ko*q**3)
...     return torch.stack([q, p])
>>> xi = 2.0*torch.rand((128, 2), dtype=torch.float64) - 1.0
>>> fp = torch.func.vmap(lambda x: fixed_point(32, mapping, x, power=3))(xi)
>>> fp.shape
torch.Size([128, 2])
>>> clean_point(3, mapping, fp, epsilon=1.0E-12).shape
torch.Size([2, 2])
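As an illustration of what cleaning involves (not the library's implementation), a naive version could drop candidates that did not converge to a period-3 point and merge near-duplicates by rounding before deduplication. The helper naive_clean below is hypothetical:

    import torch
    from ndmap.util import nest

    def naive_clean(power, mapping, points, epsilon=1.0E-12, decimals=9):
        # keep candidates that actually close after `power` iterations
        keep = [x for x in points
                if torch.allclose(x, nest(power, mapping)(x), atol=epsilon)]
        if not keep:
            return torch.empty(0, points.shape[-1], dtype=points.dtype)
        # merge near-duplicates by rounding, then deduplicate row-wise
        rounded = torch.round(torch.stack(keep), decimals=decimals)
        return torch.unique(rounded, dim=0)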
- ndmap.pfp.fixed_point(limit: int, function: Callable, guess: torch.Tensor, *pars: tuple, power: int = 1, epsilon: float | None = None, factor: float = 1.0, alpha: float = 0.0, solve: Callable | None = None, roots: torch.Tensor | None = None, jacobian: Callable | None = None) → torch.Tensor [source]
Estimate (dynamical) fixed point
Note: can be mapped over the initial guess and/or other input function arguments if epsilon=None
- Parameters:
limit (int, positive) – maximum number of Newton iterations
function (Mapping) – input mapping
guess (Tensor) – initial guess
*pars (tuple) – additional function arguments
power (int, positive, default=1) – function power / fixed point order
epsilon (Optional[float], default=None) – tolerance epsilon
factor (float, default=1.0) – step factor (learning rate)
alpha (float, positive, default=0.0) – regularization alpha
solve (Optional[Callable]) – linear solver, called as solve(matrix, vector)
roots (Optional[Tensor], default=None) – known roots to avoid
jacobian (Optional[Callable]) – torch.func.jacfwd (default) or torch.func.jacrev
- Return type:
Tensor
Examples
>>> from math import pi
>>> import torch
>>> from ndmap.util import nest
>>> mu = 2.0*pi*torch.tensor(1/3 - 0.01)
>>> kq, ks, ko = torch.tensor([0.0, 0.25, -0.25])
>>> def mapping(x):
...     q, p = x
...     q, p = q*mu.cos() + p*mu.sin(), p*mu.cos() - q*mu.sin()
...     q, p = q, p + (kq*q + ks*q**2 + ko*q**3)
...     return torch.stack([q, p])
>>> xi = torch.tensor([1.25, 0.00])
>>> fp = fixed_point(32, mapping, xi, power=3)
>>> torch.allclose(fp, nest(3, mapping)(fp))
True
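Per the note above, the search can also be mapped over a batch of initial guesses when epsilon=None (the default), as the chain_point and clean_point examples do. A minimal sketch reusing mapping from the example above:

    import torch

    # batch of random initial guesses, one Newton search per row
    xi = 2.0*torch.rand((64, 2), dtype=torch.float64) - 1.0
    fps = torch.func.vmap(lambda x: fixed_point(32, mapping, x, power=3))(xi)
    # candidates can then be filtered with check_point or clean_point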
- ndmap.pfp.matrix(power: int, function: Callable, point: torch.Tensor, *pars: tuple, jacobian: Callable = torch.func.jacfwd) → torch.Tensor [source]
Compute (monodromy) matrix around given fixed point
- Parameters:
power (int, positive) – function power / prime period
function (Mapping) – input function
point (Tensor) – fixed point candidate
*pars (tuple) – additional function arguments
jacobian (Callable, default=torch.func.jacfwd) – torch.func.jacfwd or torch.func.jacrev
- Return type:
Tensor
Examples
>>> from math import pi
>>> import torch
>>> from ndmap.util import nest
>>> mu = 2.0*pi*torch.tensor(1/3 - 0.01)
>>> kq, ks, ko = torch.tensor([0.0, 0.25, -0.25])
>>> def mapping(x):
...     q, p = x
...     q, p = q*mu.cos() + p*mu.sin(), p*mu.cos() - q*mu.sin()
...     q, p = q, p + (kq*q + ks*q**2 + ko*q**3)
...     return torch.stack([q, p])
>>> xi = torch.tensor([1.25, 0.00])
>>> fp = fixed_point(32, mapping, xi, power=3)
>>> matrix(3, mapping, fp)
tensor([[ 0.9770,  0.8199],
        [-0.5659,  0.5486]])
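A small follow-on sketch reusing mapping and fp from the example above: linear stability of the periodic orbit can be judged from the monodromy eigenvalue magnitudes, as the parametric_fixed_point example below also does with torch.linalg.eigvals:

    import torch

    m = matrix(3, mapping, fp)
    values = torch.linalg.eigvals(m)
    # the orbit is linearly unstable if any eigenvalue magnitude exceeds one
    unstable = bool((values.abs() > 1.0 + 1.0E-9).any())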
- ndmap.pfp.newton(function: Callable, guess: torch.Tensor, *pars: tuple, factor: float = 1.0, alpha: float = 0.0, solve: Callable | None = None, roots: torch.Tensor | None = None, jacobian: Callable | None = None) → torch.Tensor [source]
Perform one Newton root search step
- Parameters:
function (Mapping) – input function
guess (Tensor) – initial guess
*pars – additional function arguments
factor (float, default=1.0) – step factor (learning rate)
alpha (float, positive, default=0.0) – regularization alpha
solve (Optional[Callable]) – linear solver, called as solve(matrix, vector)
roots (Optional[Tensor], default=None) – known roots to avoid
jacobian (Optional[Callable]) – torch.func.jacfwd (default) or torch.func.jacrev
- Return type:
Tensor
Examples
>>> import torch
>>> def fn(x):
...     return (x - 5)**2
>>> x = torch.tensor(4.0)
>>> for _ in range(16):
...     x = newton(fn, x, solve=lambda matrix, vector: vector/matrix)
>>> torch.allclose(x, torch.tensor(5.0))
True
>>> def fn(x):
...     x1, x2 = x
...     return torch.stack([(x1 - 5)**2, (x2 + 5)**2])
>>> x = torch.tensor([4.0, -4.0])
>>> for _ in range(16):
...     x = newton(fn, x, solve=lambda matrix, vector: torch.linalg.pinv(matrix) @ vector)
>>> torch.allclose(x, torch.tensor([5.0, -5.0]))
True
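For reference only (not the library's code): the scalar case above is equivalent to the textbook update x -> x - f(x)/f'(x). A sketch with torch.func.jacfwd supplying the derivative, using a hypothetical helper naive_newton_step:

    import torch

    def naive_newton_step(function, guess):
        # one textbook Newton step for a scalar function of a scalar
        value = function(guess)
        slope = torch.func.jacfwd(function)(guess)
        return guess - value/slope

    # iterate toward the double root at x = 5
    x = torch.tensor(4.0)
    for _ in range(32):
        x = naive_newton_step(lambda x: (x - 5.0)**2, x)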
- ndmap.pfp.parametric_fixed_point(order: tuple[int, ...], state: torch.Tensor, knobs: list[torch.Tensor], function: Callable, *pars: tuple, power: int = 1, solve: Callable | None = None, jacobian: Callable | None = None) → list [source]
Compute parametric fixed point
- Parameters:
order (tuple[int, ...], non-negative) – derivative orders with respect to the knobs
state (State) – fixed point state
knobs (Knobs) – knob values
function (Callable) – input function
*pars (tuple) – additional function arguments
power (int, positive, default=1) – function power
solve (Optional[Callable]) – linear solver, called as solve(matrix, vector)
jacobian (Optional[Callable]) – torch.func.jacfwd (default) or torch.func.jacrev
- Return type:
Table
Examples
>>> from math import pi
>>> import torch
>>> from ndmap.util import flatten
>>> from ndmap.util import nest
>>> from ndmap.evaluate import evaluate
>>> from ndmap.propagate import propagate
>>> mu = 2.0*pi*torch.tensor(1/5 - 0.01, dtype=torch.float64)
>>> k = torch.tensor([0.25, -0.25], dtype=torch.float64)
>>> def mapping(x, k):
...     q, p = x
...     a, b = k
...     q, p = q*mu.cos() + p*mu.sin(), p*mu.cos() - q*mu.sin()
...     return torch.stack([q, p + a*q**2 + b*q**3])
>>> xi = torch.tensor([0.75, 0.25], dtype=torch.float64)
>>> fp = fixed_point(32, mapping, xi, k, power=5)
>>> fp
tensor([0.7279, 0.4947], dtype=torch.float64)
>>> torch.allclose(fp, nest(5, lambda x: mapping(x, k))(fp))
True
>>> torch.linalg.eigvals(matrix(5, mapping, fp, k))
tensor([1.3161+0.j, 0.7598+0.j], dtype=torch.complex128)
>>> pfp = parametric_fixed_point((4, ), fp, [k], mapping, power=5)
>>> out = propagate((2, 2), (0, 4), pfp, [k], lambda x, k: nest(5, mapping, k)(x, k))
>>> all(torch.allclose(x, y) for x, y in zip(*map(lambda x: flatten(x, target=list), (pfp, out))))
True
>>> dk = torch.tensor([0.01, -0.01], dtype=torch.float64)
>>> fp = fixed_point(32, mapping, xi, k + dk, power=5)
>>> fp
tensor([0.7163, 0.4868], dtype=torch.float64)
>>> evaluate(pfp, [fp, dk])
tensor([0.7163, 0.4868], dtype=torch.float64)
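A follow-on sketch reusing mapping, xi, k, pfp, fixed_point, and evaluate from the example above, and mirroring the evaluate(pfp, [fp, dk]) call in the doctest. Since the table truncates at fourth order in the knobs, the parametric prediction is expected to track a direct Newton search increasingly well as the knob deviation shrinks:

    import torch

    for scale in (2.0, 1.0, 0.5):
        dk = scale*torch.tensor([0.01, -0.01], dtype=torch.float64)
        # fixed point recomputed directly at the perturbed knob values
        direct = fixed_point(32, mapping, xi, k + dk, power=5)
        # fixed point predicted from the parametric expansion
        approx = evaluate(pfp, [direct, dk])
        print(scale, float((direct - approx).norm()))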