MaskedAAM

class menpofit.aam.MaskedAAM(images, group=None, holistic_features=<function no_op>, reference_shape=None, diagonal=None, scales=(0.5, 1.0), patch_shape=(17, 17), shape_model_cls=<class 'menpofit.modelinstance.OrthoPDM'>, max_shape_components=None, max_appearance_components=None, verbose=False, batch_size=None)[source]

Bases: AAM

Class for training a multi-scale patch-based Masked Active Appearance Model. The appearance of this model is formulated by simply masking an image with a patch-based mask.

Parameters
  • images (list of menpo.image.Image) – The list of training images.

  • group (str or None, optional) – The landmark group that will be used to train the AAM. If None and the images only have a single landmark group, then that is the one that will be used. Note that all the training images need to have the specified landmark group.

  • holistic_features (closure or list of closure, optional) – The features that will be extracted from the training images. Note that the features are extracted before warping the images to the reference shape. If list, then it must define a feature function per scale. Please refer to menpo.feature for a list of potential features.

  • reference_shape (menpo.shape.PointCloud or None, optional) – The reference shape that will be used for building the AAM. The purpose of the reference shape is to normalise the size of the training images. The normalization is performed by rescaling all the training images so that the scale of their ground truth shapes matches the scale of the reference shape. Note that the reference shape is rescaled with respect to the diagonal before performing the normalisation. If None, then the mean shape will be used.

  • diagonal (int or None, optional) – This parameter is used to rescale the reference shape so that the diagonal of its bounding box matches the provided value. In other words, this parameter controls the size of the model at the highest scale. If None, then the reference shape does not get rescaled.

  • scales (float or tuple of float, optional) – The scale value of each scale. They must be provided in ascending order, i.e. from lowest to highest scale. If float, then a single scale is assumed.

  • patch_shape ((int, int), optional) – The size of the patches of the mask that is used to sample the appearance vectors.

  • shape_model_cls (subclass of PDM, optional) – The class to be used for building the shape model. The most common choice is OrthoPDM.

  • max_shape_components (int, float, list of those or None, optional) – The number of shape components to keep. If int, then it sets the exact number of components. If float, then it defines the variance percentage that will be kept. If list, then it should define a value per scale. If a single number, then this will be applied to all scales. If None, then all the components are kept. Note that the unused components will be permanently trimmed.

  • max_appearance_components (int, float, list of those or None, optional) – The number of appearance components to keep. If int, then it sets the exact number of components. If float, then it defines the variance percentage that will be kept. If list, then it should define a value per scale. If a single number, then this will be applied to all scales. If None, then all the components are kept. Note that the unused components will be permanently trimmed.

  • verbose (bool, optional) – If True, then the progress of building the AAM will be printed.

  • batch_size (int or None, optional) – If an int is provided, then the training is performed in an incremental fashion on image batches of size equal to the provided value. If None, then the training is performed directly on all the images.
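The size normalisation controlled by reference_shape and diagonal boils down to a simple rescaling: the reference shape is scaled so that its bounding-box diagonal equals diagonal. The sketch below illustrates that computation with plain numpy on a toy 2D point cloud; rescale_to_diagonal is a hypothetical helper, not part of the menpofit API.

```python
import numpy as np

def rescale_to_diagonal(points, diagonal):
    """Rescale a 2D point cloud so its bounding-box diagonal equals `diagonal`."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    current_diagonal = np.sqrt(((maxs - mins) ** 2).sum())
    return points * (diagonal / current_diagonal)

# A toy 'reference shape' whose bounding box is 3 x 4 (diagonal 5).
shape = np.array([[0.0, 0.0], [3.0, 4.0], [1.0, 2.0]])
rescaled = rescale_to_diagonal(shape, 100)

mins, maxs = rescaled.min(axis=0), rescaled.max(axis=0)
new_diag = np.sqrt(((maxs - mins) ** 2).sum())  # → 100.0
```

Once the reference shape has this fixed diagonal, every training image is rescaled so that the scale of its ground-truth shape matches it, which keeps appearance sampling consistent across the training set.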

appearance_reconstructions(appearance_parameters, n_iters_per_scale)

Method that generates the appearance reconstructions given a set of appearance parameters. This is to be combined with an AAMResult object, in order to generate the appearance reconstructions of a fitting procedure.

Parameters
  • appearance_parameters (list of (n_params,) ndarray) – A set of appearance parameters per fitting iteration. It can be retrieved as a property of an AAMResult object.

  • n_iters_per_scale (list of int) – The number of iterations per scale. This is necessary in order to figure out which appearance parameters correspond to the model of each scale. It can be retrieved as a property of an AAMResult object.

Returns

appearance_reconstructions (list of menpo.image.Image) – List of the appearance reconstructions that correspond to the provided parameters.
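The role of n_iters_per_scale is pure bookkeeping: the flat list of per-iteration parameters has to be split into consecutive chunks, one chunk per scale. A minimal sketch of that split, using placeholder strings instead of real parameter arrays:

```python
# Hypothetical values: 2 scales, 3 iterations at the low scale, 2 at the high one.
appearance_parameters = [f"params_{i}" for i in range(5)]
n_iters_per_scale = [3, 2]

per_scale = []
start = 0
for n in n_iters_per_scale:
    # Each scale's model reconstructs only the parameters from its own iterations.
    per_scale.append(appearance_parameters[start:start + n])
    start += n
# per_scale[0] holds the scale-0 parameters, per_scale[1] the scale-1 parameters.
```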

build_fitter_interfaces(sampling)

Method that builds the correct Lucas-Kanade fitting interface. It only applies if you wish to fit the AAM with a Lucas-Kanade algorithm (i.e. LucasKanadeAAMFitter).

Parameters

sampling (list of int or ndarray or None) – It defines a sampling mask per scale. If int, then it defines the sub-sampling step of the sampling mask. If ndarray, then it explicitly defines the sampling mask. If None, then no sub-sampling is applied.

Returns

fitter_interfaces (list) – The list of Lucas-Kanade interfaces, one per scale.
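The three accepted forms of sampling can be pictured as different ways of building one boolean mask over the model's pixels. The helper below is a conceptual numpy sketch (make_sampling_mask is hypothetical, not a menpofit function):

```python
import numpy as np

def make_sampling_mask(n_pixels, sampling):
    """Build a boolean sampling mask over `n_pixels` model pixels."""
    if sampling is None:
        return np.ones(n_pixels, dtype=bool)       # no sub-sampling: use every pixel
    if isinstance(sampling, int):
        mask = np.zeros(n_pixels, dtype=bool)
        mask[::sampling] = True                    # keep every `sampling`-th pixel
        return mask
    return np.asarray(sampling, dtype=bool)        # explicit, user-supplied mask

mask = make_sampling_mask(10, 3)
# mask.sum() → 4 (pixels 0, 3, 6 and 9 are sampled)
```

A coarser mask (larger int step) speeds up each Lucas-Kanade iteration at the cost of using less appearance information.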

increment(images, group=None, shape_forgetting_factor=1.0, appearance_forgetting_factor=1.0, verbose=False, batch_size=None)

Method to increment the trained AAM with a new set of training images.

Parameters
  • images (list of menpo.image.Image) – The list of training images.

  • group (str or None, optional) – The landmark group that will be used to train the AAM. If None and the images only have a single landmark group, then that is the one that will be used. Note that all the training images need to have the specified landmark group.

  • shape_forgetting_factor ([0.0, 1.0] float, optional) – Forgetting factor that weights the relative contribution of new samples vs old samples for the shape model. If 1.0, all samples are weighted equally and, hence, the result is exactly the same as performing batch PCA on the concatenated list of old and new samples. If <1.0, more emphasis is put on the new samples.

  • appearance_forgetting_factor ([0.0, 1.0] float, optional) – Forgetting factor that weights the relative contribution of new samples vs old samples for the appearance model. If 1.0, all samples are weighted equally and, hence, the result is exactly the same as performing batch PCA on the concatenated list of old and new samples. If <1.0, more emphasis is put on the new samples.

  • verbose (bool, optional) – If True, then the progress of building the AAM will be printed.

  • batch_size (int or None, optional) – If an int is provided, then the training is performed in an incremental fashion on image batches of size equal to the provided value. If None, then the training is performed directly on all the images.
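The effect of a forgetting factor f can be understood through the effective weight each batch ends up with: the most recent batch has weight 1, and a batch seen k increments earlier is down-weighted by f**k. A conceptual sketch (effective_weights is a hypothetical helper, not how menpofit implements incremental PCA internally):

```python
def effective_weights(n_batches, f):
    """Relative weight of each batch after all increments, oldest first.

    The newest batch has weight 1; a batch seen k increments earlier
    has been multiplied by the forgetting factor k times.
    """
    return [f ** (n_batches - 1 - i) for i in range(n_batches)]

effective_weights(3, 1.0)  # → [1.0, 1.0, 1.0]  (equivalent to batch PCA)
effective_weights(3, 0.5)  # → [0.25, 0.5, 1.0] (new samples dominate)
```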

instance(shape_weights=None, appearance_weights=None, scale_index=-1)

Generates a novel AAM instance given a set of shape and appearance weights. If no weights are provided, then the mean AAM instance is returned.

Parameters
  • shape_weights ((n_weights,) ndarray or list or None, optional) – The weights of the shape model that will be used to create a novel shape instance. If None, the weights are assumed to be zero, thus the mean shape is used.

  • appearance_weights ((n_weights,) ndarray or list or None, optional) – The weights of the appearance model that will be used to create a novel appearance instance. If None, the weights are assumed to be zero, thus the mean appearance is used.

  • scale_index (int, optional) – The scale to be used.

Returns

image (menpo.image.Image) – The AAM instance.
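Both the shape and appearance parts of an instance come from the same linear-model recipe: the mean plus a weighted sum of the PCA components. The toy numpy sketch below shows that recipe on made-up data; it is not menpofit code, and with zero weights it reduces to the mean, matching the documented default behaviour.

```python
import numpy as np

# Toy linear model: 2 components over a 4-dimensional feature vector.
mean = np.zeros(4)
components = np.array([[1.0, 0.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0, 0.0]])   # (n_components, n_features)
weights = np.array([2.0, -1.0])

# instance = mean + sum_i weights[i] * components[i]
instance = mean + weights @ components
# instance → array([ 2., -1.,  0.,  0.])
```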

random_instance(scale_index=-1)

Generates a random instance of the AAM.

Parameters

scale_index (int, optional) – The scale to be used.

Returns

image (menpo.image.Image) – The AAM instance.
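A common way to obtain plausible random weights for a PCA model is to draw Gaussian samples scaled by the square root of each component's eigenvalue, so the random instance respects the variance learned from the training data. A sketch under that assumption (the eigenvalues here are made-up toy values):

```python
import numpy as np

rng = np.random.default_rng(0)
eigenvalues = np.array([4.0, 1.0])   # toy per-component variances
# Standard-normal draws scaled to the model's per-component standard deviation.
weights = rng.standard_normal(2) * np.sqrt(eigenvalues)
```

The resulting weights would then be passed through the same mean-plus-components synthesis used by instance().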

property n_scales

Returns the number of scales.

Type

int