OrthoMDTransform¶

class menpofit.transform.OrthoMDTransform(model, transform_cls, source=None)[source]¶
Bases: GlobalMDTransform

A transform that couples an alignment transform to a statistical model together with a global similarity transform, such that the weights of the transform are fully specified by both the weights of the statistical model and the weights of the similarity transform. The model is assumed to generate an instance which is then transformed by the similarity transform; the result defines the target landmarks of the transform. If no source is provided, the mean of the model is defined as the source landmarks of the transform.
This transform (in contrast to GlobalMDTransform) additionally orthonormalises the global and the model basis against each other, so that orthogonality and normalisation are enforced across the unified basis.

Parameters:
- model (OrthoPDM or subclass) – A linear statistical shape model (Point Distribution Model) that also has a global similarity transform that is orthonormalised with the shape basis.
- transform_cls (subclass of menpo.transform.Alignment) – A class of menpo.transform.Alignment. The align constructor will be called on this with the source and target landmarks. The target is set to the points generated from the model using the provided weights - the source is either given or set to the model's mean.
- source (menpo.shape.PointCloud or None, optional) – The source landmarks of the transform. If None, the mean of the model is used.
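The orthonormalisation step described above can be sketched with a QR decomposition over the stacked similarity and shape components. This is a conceptual illustration with random placeholder bases, not menpofit's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_similarity, n_shape = 20, 4, 6

# Stand-ins for the global similarity basis and the shape model basis.
G = rng.standard_normal((n_features, n_similarity))
S = rng.standard_normal((n_features, n_shape))

# Orthonormalise the combined basis: every column becomes unit-norm and
# orthogonal to every other column, across both sub-bases.
Q, _ = np.linalg.qr(np.hstack([G, S]))

gram = Q.T @ Q
assert np.allclose(gram, np.eye(n_similarity + n_shape))
```

The key property is that after this step no shape component has any overlap with the global similarity components, so the two parameter groups act independently.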
- Jp()¶
Compute the parameters' Jacobian, as shown in [1].
Returns: Jp ((n_params, n_params) ndarray) – The parameters' Jacobian.

References
[1] G. Papandreou and P. Maragos, "Adaptive and Constrained Algorithms for Inverse Compositional Active Appearance Model Fitting", Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008.
- apply(x, batch_size=None, **kwargs)¶
Applies this transform to x.
If x is Transformable, x will be handed this transform object to transform itself non-destructively (a transformed copy of the object will be returned).
If not, x is assumed to be an ndarray. The transformation will be non-destructive, returning the transformed version.
Any kwargs will be passed to the specific transform _apply() method.

Parameters:
- x (Transformable or (n_points, n_dims) ndarray) – The array or object to be transformed.
- batch_size (int, optional) – If not None, this determines how many items from the numpy array will be passed through the transform at a time. This is useful for operations that require large intermediate matrices to be computed.
- kwargs (dict) – Passed through to _apply().
Returns: transformed (type(x)) – The transformed object or array.
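The batch_size behaviour can be mimicked for a plain ndarray with a generic chunked loop. apply_batched below is a hypothetical helper written for illustration, not the menpo implementation:

```python
import numpy as np

def apply_batched(transform_fn, x, batch_size=None):
    """Apply transform_fn to the rows of x, optionally in chunks of
    batch_size rows, so that large intermediate matrices are never
    materialised all at once."""
    if batch_size is None:
        return transform_fn(x)
    chunks = [transform_fn(x[i:i + batch_size])
              for i in range(0, len(x), batch_size)]
    return np.concatenate(chunks)

points = np.arange(10, dtype=float).reshape(5, 2)
double = lambda p: 2.0 * p

# Batched and unbatched application give identical results.
assert np.array_equal(apply_batched(double, points, batch_size=2),
                      apply_batched(double, points))
```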
- apply_inplace(*args, **kwargs)¶
Deprecated as public supported API; use the non-mutating apply() instead.
For internal performance-specific uses, see _apply_inplace().
- as_vector(**kwargs)¶
Returns a flattened representation of the object as a single vector.
Returns: vector ((N,) ndarray) – The core representation of the object, flattened into a single vector. Note that this is always a view back on to the original object, but is not writable.
- compose_after(transform)¶
Returns a TransformChain that represents this transform composed after the given transform:

c = a.compose_after(b)
c.apply(p) == a.apply(b.apply(p))

a and b are left unchanged.
This corresponds to the usual mathematical formalism for the compose operator, o.
Parameters: transform (Transform) – Transform to be applied before self.
Returns: transform (TransformChain) – The resulting transform chain.
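The composition contract can be demonstrated with plain functions standing in for transforms. This is a sketch of the semantics only, not the TransformChain machinery itself:

```python
import numpy as np

# Two stand-in "transforms": a scaling and a translation.
a = lambda p: 2.0 * p          # applied second
b = lambda p: p + 1.0          # applied first

# c = a.compose_after(b) means c.apply(p) == a.apply(b.apply(p)).
c = lambda p: a(b(p))

p = np.array([1.0, 2.0])
assert np.array_equal(c(p), a(b(p)))            # contract holds
assert np.array_equal(c(p), np.array([4.0, 6.0]))  # 2 * (p + 1)
```

Note the asymmetry: compose_before(b) would instead give b(a(p)), so the two methods only agree when a and b commute.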
- compose_after_from_vector_inplace(delta)¶
Composes two transforms together based on the first order approximation proposed in [1].
Parameters: delta ((N,) ndarray) – Vectorized ModelDrivenTransform to be applied before self.
Returns: transform (self) – self, updated to the result of the composition.

References
[1] G. Papandreou and P. Maragos, "Adaptive and Constrained Algorithms for Inverse Compositional Active Appearance Model Fitting", Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2008.
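The intuition behind a first-order composition is that, for small parameters, composing two model-driven transforms is approximately the same as adding their parameter vectors. The toy model below is exactly linear in its parameters, so the identity holds exactly; for a real ModelDrivenTransform it holds only to first order:

```python
import numpy as np

rng = np.random.default_rng(1)
basis = rng.standard_normal((6, 3))   # toy linear model: offset = basis @ params

def apply_params(params, points_flat):
    return points_flat + basis @ params

p_self = np.array([0.01, -0.02, 0.005])
delta  = np.array([-0.004, 0.003, 0.01])
x = rng.standard_normal(6)

# Applying delta first and then self equals applying the summed parameters.
composed = apply_params(p_self, apply_params(delta, x))
assert np.allclose(composed, apply_params(p_self + delta, x))
```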
- compose_before(transform)¶
Returns a TransformChain that represents this transform composed before the given transform:

c = a.compose_before(b)
c.apply(p) == b.apply(a.apply(p))

a and b are left unchanged.
Parameters: transform (Transform) – Transform to be applied after self.
Returns: transform (TransformChain) – The resulting transform chain.
- copy()¶
Generate an efficient copy of this object.
Note that Numpy arrays and other Copyable objects on self will be deeply copied. Dictionaries and sets will be shallow copied, and everything else will be assigned (no copy will be made).
Classes that store state other than numpy arrays and immutable types should overwrite this method to ensure all state is copied.
Returns: type(self) – A copy of this object.
- d_dp(points)¶
The derivative of this ModelDrivenTransform with respect to the parametrisation, evaluated at points.
This is done by chaining the derivative of the points with respect to the source landmarks of the transform (dW/dL) together with the Jacobian of the linear model with respect to its weights (dX/dp).
Parameters: points ((n_points, n_dims) ndarray) – The spatial points at which the derivative should be evaluated.
Returns: d_dp ((n_points, n_parameters, n_dims) ndarray) – The Jacobian with respect to the parametrisation.
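The chain rule behind d_dp can be sketched as a tensor contraction of the two stated Jacobians. The exact tensor layout in menpofit may differ; this sketch with random placeholder arrays only illustrates how the shapes compose:

```python
import numpy as np

n_points, n_landmarks, n_params, n_dims = 7, 5, 4, 2
rng = np.random.default_rng(2)

# dW/dL: derivative of the warped points wrt the source landmarks
# (each point i, each landmark l, landmark dimension k, output dimension d).
dW_dL = rng.standard_normal((n_points, n_landmarks, n_dims, n_dims))
# dX/dp: derivative of the model-generated landmarks wrt the model weights.
dX_dp = rng.standard_normal((n_landmarks, n_params, n_dims))

# Chain rule: sum over the landmarks and their spatial dimensions.
d_dp = np.einsum('ilkd,lpk->ipd', dW_dL, dX_dp)
assert d_dp.shape == (n_points, n_params, n_dims)
```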
- from_vector(vector)¶
Build a new instance of the object from its vectorized state.
self is used to fill out the missing state required to rebuild a full object from its standardized flattened state. This is the default implementation, which is a deepcopy of the object followed by a call to from_vector_inplace(). This method can be overridden for a performance benefit if desired.
Parameters: vector ((n_parameters,) ndarray) – Flattened representation of the object.
Returns: object (type(self)) – A new instance of this class.
- from_vector_inplace(vector)¶
Deprecated; use the non-mutating from_vector() instead.
For internal usage in performance-sensitive spots, see _from_vector_inplace().
Parameters: vector ((n_parameters,) ndarray) – Flattened representation of this object.
- has_nan_values()¶
Tests if the vectorized form of the object contains nan values or not. This is particularly useful for objects with unknown values that have been mapped to nan values.
Returns: has_nan_values (bool) – If the vectorized object contains nan values.
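The check is roughly equivalent to testing the vectorized form with numpy, as in this standalone sketch:

```python
import numpy as np

def has_nan_values(vector):
    # True if any entry of the flattened representation is NaN.
    return bool(np.isnan(vector).any())

assert not has_nan_values(np.array([0.0, 1.5, -2.0]))
assert has_nan_values(np.array([0.0, np.nan, -2.0]))
```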
- pseudoinverse_vector(vector)¶
The vectorized pseudoinverse of a provided vector instance. Syntactic sugar for self.from_vector(vector).pseudoinverse.as_vector(). On ModelDrivenTransform this is especially fast - we just negate the vector provided.
Parameters: vector ((P,) ndarray) – A vectorized version of self.
Returns: pseudoinverse_vector ((N,) ndarray) – The pseudoinverse of the vector provided.
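The negation shortcut works because the transform's effect is linear in its parameter vector, so applying the negated vector undoes the original. A toy demonstration with a placeholder linear model:

```python
import numpy as np

rng = np.random.default_rng(3)
basis = rng.standard_normal((8, 3))    # toy linear model

def apply_vec(vector, x):
    # A transform that is linear in its parameter vector.
    return x + basis @ vector

v = np.array([0.2, -0.1, 0.05])
x = rng.standard_normal(8)

# Applying v and then its negation recovers the original points.
assert np.allclose(apply_vec(-v, apply_vec(v, x)), x)
```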
- set_target(new_target)¶
Update this object so that it attempts to recreate the new_target.
Parameters: new_target (PointCloud) – The new target that this object should try and regenerate.
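For a model-driven transform, retargeting essentially means projecting the new target onto the model basis to recover the weights that best recreate it. This is a simplified sketch with an orthonormal placeholder basis; the real implementation also handles the global similarity component:

```python
import numpy as np

rng = np.random.default_rng(4)
mean = rng.standard_normal(10)
# Orthonormal columns, so projection is just a matrix product.
basis, _ = np.linalg.qr(rng.standard_normal((10, 3)))

def set_target(new_target):
    # Recover the weights that best recreate new_target under the model.
    weights = basis.T @ (new_target - mean)
    return weights, mean + basis @ weights

# A target lying in the model subspace is recreated exactly.
w_true = np.array([1.0, -0.5, 0.25])
target = mean + basis @ w_true
weights, recreated = set_target(target)
assert np.allclose(weights, w_true)
assert np.allclose(recreated, target)
```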
- has_true_inverse¶
Whether the transform has a true inverse.
Type: bool
- n_dims¶
The number of dimensions that the transform supports.
Type: int
- n_dims_output¶
The number of dimensions of the output of the transform. None if the output of the transform is not dimension specific.
Type: int or None
- n_parameters¶
The total number of parameters.
Type: int
- target¶
The current menpo.shape.PointCloud that this object produces.
Type: menpo.shape.PointCloud