UnifiedAAMCLM

class menpofit.unified_aam_clm.base.UnifiedAAMCLM(images, group=None, holistic_features=<function no_op>, reference_shape=None, diagonal=None, scales=(0.5, 1.0), expert_ensemble_cls=<class 'menpofit.clm.expert.ensemble.CorrelationFilterExpertEnsemble'>, patch_shape=(17, 17), context_shape=(34, 34), sample_offsets=None, transform=<class 'menpofit.transform.piecewiseaffine.DifferentiablePiecewiseAffine'>, shape_model_cls=<class 'menpofit.modelinstance.OrthoPDM'>, max_shape_components=None, max_appearance_components=None, sigma=None, boundary=3, response_covariance=2, patch_normalisation=<function no_op>, cosine_mask=True, verbose=False)[source]

Bases: object

Class for training a multi-scale unified holistic AAM and CLM as presented in [1]. Please see the references for AAMs and CLMs in their respective base classes.

Parameters
  • images (list of menpo.image.Image) – The list of training images.

  • group (str or None, optional) – The landmark group that will be used to train the model. If None and the images only have a single landmark group, then that is the one that will be used. Note that all the training images need to have the specified landmark group.

  • holistic_features (closure or list of closure, optional) – The features that will be extracted from the training images. Note that the features are extracted before warping the images to the reference shape. If list, then it must define a feature function per scale. Please refer to menpo.feature for a list of potential features.

  • reference_shape (menpo.shape.PointCloud or None, optional) – The reference shape that will be used for building the AAM. The purpose of the reference shape is to normalise the size of the training images. The normalisation is performed by rescaling all the training images so that the scale of their ground truth shapes matches the scale of the reference shape. Note that the reference shape is rescaled with respect to the diagonal before performing the normalisation. If None, then the mean shape will be used.

  • diagonal (int or None, optional) – This parameter is used to rescale the reference shape so that the diagonal of its bounding box matches the provided value. In other words, this parameter controls the size of the model at the highest scale. If None, then the reference shape does not get rescaled.

  • scales (float or tuple of float, optional) – The scale value of each scale. They must be provided in ascending order, i.e. from lowest to highest scale. If float, then a single scale is assumed.

  • expert_ensemble_cls (subclass of ExpertEnsemble, optional) – The class to be used for training the ensemble of experts. The most common choice is CorrelationFilterExpertEnsemble.

  • patch_shape ((int, int) or list of (int, int), optional) – The shape of the patches to be extracted. If a list is provided, then it defines a patch shape per scale.

  • context_shape ((int, int) or list of (int, int), optional) – The context shape for the convolution. If a list is provided, then it defines a context shape per scale.

  • sample_offsets ((n_offsets, n_dims) ndarray or None, optional) – The offsets at which to sample within each patch: (0, 0) is the centre of the patch (no offset) and (1, 0) samples 1 pixel away from the centre along the first axis. If None, then no sample_offsets are applied.

  • transform (subclass of DL and DX, optional) – A differentiable warp transform object, e.g. DifferentiablePiecewiseAffine or DifferentiableThinPlateSplines.

  • shape_model_cls (subclass of OrthoPDM, optional) – The class to be used for building the shape model. The most common choice is OrthoPDM.

  • max_shape_components (int, float, list of those or None, optional) – The number of shape components to keep. If int, then it sets the exact number of components. If float, then it defines the variance percentage that will be kept. If list, then it should define a value per scale. If a single number, then this will be applied to all scales. If None, then all the components are kept. Note that the unused components will be permanently trimmed.

  • max_appearance_components (int, float, list of those or None, optional) – The number of appearance components to keep. If int, then it sets the exact number of components. If float, then it defines the variance percentage that will be kept. If list, then it should define a value per scale. If a single number, then this will be applied to all scales. If None, then all the components are kept. Note that the unused components will be permanently trimmed.

  • sigma (float or None, optional) – If not None, the input images are smoothed with an isotropic Gaussian filter with the specified standard deviation.

  • boundary (int, optional) – The number of pixels to be left as a safe margin on the boundaries of the reference frame (has potential effects on the gradient computation).

  • response_covariance (int, optional) – The covariance of the generated Gaussian response.

  • patch_normalisation (callable, optional) – The normalisation function to be applied on the extracted patches.

  • cosine_mask (bool, optional) – If True, then a cosine mask (Hanning function) will be applied on the extracted patches.

  • verbose (bool, optional) – If True, then the progress of building the model will be printed.

References

[1] J. Alabort-i-Medina and S. Zafeiriou. “Unifying holistic and parts-based deformable model fitting”, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
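
A minimal training sketch is shown below. The dataset path, the 'PTS' landmark group and the igo feature are placeholders chosen for illustration; any list of images with attached landmarks can be used.

    import menpo.io as mio
    from menpo.feature import igo
    from menpofit.unified_aam_clm.base import UnifiedAAMCLM

    # Hypothetical dataset path; each image is assumed to carry a 'PTS'
    # landmark group imported alongside it.
    images = list(mio.import_images('/path/to/trainset/', verbose=True))

    model = UnifiedAAMCLM(images, group='PTS', holistic_features=igo,
                          diagonal=150, scales=(0.5, 1.0),
                          patch_shape=(17, 17), verbose=True)
    print(model)

The trained model is typically wrapped in a UnifiedAAMCLMFitter (see build_fitter_interfaces below) in order to fit new images.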

appearance_reconstructions(appearance_parameters, n_iters_per_scale)[source]

Method that generates the appearance reconstructions given a set of appearance parameters. This is to be combined with a UnifiedAAMCLMResult object, in order to generate the appearance reconstructions of a fitting procedure.

Parameters
  • appearance_parameters (list of (n_params,) ndarray) – A set of appearance parameters per fitting iteration. It can be retrieved as a property of a UnifiedAAMCLMResult object.

  • n_iters_per_scale (list of int) – The number of iterations per scale. This is necessary in order to figure out which appearance parameters correspond to the model of each scale. It can be retrieved as a property of a UnifiedAAMCLMResult object.

Returns

appearance_reconstructions (list of menpo.image.Image) – List of the appearance reconstructions that correspond to the provided parameters.
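
As a usage sketch, both arguments can be taken directly from a fitting result; result below is assumed to be a UnifiedAAMCLMResult produced by fitting an image with a fitter built on this model.

    # Reconstruct the appearance image at every fitting iteration.
    reconstructions = model.appearance_reconstructions(
        result.appearance_parameters, result.n_iters_per_scale)

    # One reconstructed appearance image per iteration.
    print(len(reconstructions))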

build_fitter_interfaces(sampling)[source]

Method that builds the correct fitting interface for a UnifiedAAMCLMFitter.

Parameters

sampling (list of int or ndarray or None) – It defines a sampling mask per scale. If int, then it defines the sub-sampling step of the sampling mask. If ndarray, then it explicitly defines the sampling mask. If None, then no sub-sampling is applied.

Returns

fitter_interfaces (list) – The list of fitting interfaces per scale.
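
This method is normally invoked internally when a UnifiedAAMCLMFitter is constructed. A direct call might look like the sketch below; the two entries are placeholders matching the default two scales.

    # No sub-sampling at the lower scale, a sub-sampling step of 2 at the
    # highest scale (values chosen purely for illustration).
    interfaces = model.build_fitter_interfaces([None, 2])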

instance(shape_weights=None, appearance_weights=None, scale_index=-1)[source]

Generates a novel instance of the AAM part of the model given a set of shape and appearance weights. If no weights are provided, then the mean AAM instance is returned.

Parameters
  • shape_weights ((n_weights,) ndarray or list or None, optional) – The weights of the shape model that will be used to create a novel shape instance. If None, the weights are assumed to be zero, thus the mean shape is used.

  • appearance_weights ((n_weights,) ndarray or list or None, optional) – The weights of the appearance model that will be used to create a novel appearance instance. If None, the weights are assumed to be zero, thus the mean appearance is used.

  • scale_index (int, optional) – The scale to be used.

Returns

image (menpo.image.Image) – The AAM instance.
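
The sketch below generates a synthetic instance at the highest scale by perturbing the leading shape and appearance components; the weight values are arbitrary, and model is assumed to be the trained UnifiedAAMCLM from the earlier sketch.

    # Perturb the first two shape components and the first appearance
    # component; the remaining components keep their mean (zero) weights.
    image = model.instance(shape_weights=[1.0, -0.5],
                           appearance_weights=[0.5],
                           scale_index=-1)
    image.view()  # visualisation requires a matplotlib backend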

random_instance(scale_index=-1)[source]

Generates a random instance of the AAM part of the model.

Parameters

scale_index (int, optional) – The scale to be used.

Returns

image (menpo.image.Image) – The AAM instance.
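
For a quick qualitative check of the learned variation, a random instance can be drawn at the highest scale (again assuming a trained model):

    image = model.random_instance(scale_index=-1)
    image.view()  # visualisation requires a matplotlib backend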

shape_instance(shape_weights=None, scale_index=-1)[source]

Generates a novel shape instance given a set of shape weights. If no weights are provided, the mean shape is returned.

Parameters
  • shape_weights ((n_weights,) ndarray or list or None, optional) – The weights of the shape model that will be used to create a novel shape instance. If None, the weights are assumed to be zero, thus the mean shape is used.

  • scale_index (int, optional) – The scale to be used.

Returns

instance (menpo.shape.PointCloud) – The shape instance.
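
The sketch below reconstructs the mean shape and a shape perturbed along the first component (assuming a trained model; the weight value is arbitrary):

    mean_shape = model.shape_instance()              # zero weights -> mean shape
    perturbed = model.shape_instance(shape_weights=[2.0], scale_index=-1)
    perturbed.view()  # visualisation requires a matplotlib backend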

property n_scales

Returns the number of scales.

Type

int