encodermap.callbacks package#

Submodules#

encodermap.callbacks.callbacks module#

Callbacks to mix into the Autoencoder classes.

class encodermap.callbacks.callbacks.CheckpointSaver(parameters: Optional[AnyParameters] = None)[source]#

Bases: EncoderMapBaseCallback

Callback that saves an encodermap.models model.

on_checkpoint_step(epoch: int, logs: Optional[dict] = None) None[source]#

Overwrites parent class’ on_checkpoint_step method.

Uses encodermap.misc.saving_loading_models.save_model to save the model. Luckily, the keras callbacks contain the model as an attribute (self.model).
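The checkpoint logic can be sketched in plain Python. This is an illustrative stand-in, not the actual encodermap implementation: the `checkpoint_step` interval and the `save_fn` callable are assumptions standing in for encodermap's parameters and `save_model`.

```python
class CheckpointSketch:
    """Illustrative checkpoint logic: call a save function whenever the
    training step hits a checkpoint interval. Not the actual encodermap
    implementation; `checkpoint_step` and `save_fn` are stand-ins."""

    def __init__(self, checkpoint_step: int, save_fn):
        self.checkpoint_step = checkpoint_step
        self.save_fn = save_fn  # e.g. a closure around model.save(path)

    def on_train_batch_end(self, step: int) -> None:
        # Save only on whole checkpoint intervals, skipping step 0.
        if step > 0 and step % self.checkpoint_step == 0:
            self.save_fn(step)


saved = []
cb = CheckpointSketch(checkpoint_step=5, save_fn=saved.append)
for step in range(11):
    cb.on_train_batch_end(step)
# saved now records the steps at which a save was triggered.
```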

class encodermap.callbacks.callbacks.EarlyStop(patience: int = 0)[source]#

Bases: Callback

Stops training when the loss is at its minimum, i.e. when the loss stops decreasing.

Parameters:

patience (int) – Number of epochs to wait after the minimum has been hit. After this number of epochs without improvement, training stops.

__init__(patience: int = 0) None[source]#

Instantiate the EarlyStop class.

Parameters:
  • patience (int) – Number of training steps to wait after the minimum has been hit. Training is halted after this number of steps without improvement.

on_train_batch_end(batch: int, logs: Optional[dict] = None) None[source]#

Gets the current loss at the end of the batch and compares it to previous batches.

on_train_begin(logs: Optional[dict] = None) None[source]#

Sets some attributes at the beginning of training.

on_train_end(logs: Optional[dict] = None) None[source]#

Prints a message after training if an early stop occurred.
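The patience mechanism described above can be illustrated with a small pure-Python sketch. The class and method names here (`PatienceSketch`, `on_step`) are illustrative, not encodermap's API; only the patience logic mirrors the documented behavior.

```python
class PatienceSketch:
    """Illustrative patience-based early stopping (not the actual
    encodermap EarlyStop). Stops once more than `patience` consecutive
    steps pass without a new loss minimum."""

    def __init__(self, patience: int = 0):
        self.patience = patience
        self.best = float("inf")
        self.wait = 0
        self.stopped_step = None

    def on_step(self, step: int, loss: float) -> bool:
        if loss < self.best:
            # New minimum: remember it and reset the wait counter.
            self.best = loss
            self.wait = 0
            return False
        # No improvement: stop once patience is exhausted.
        self.wait += 1
        if self.wait > self.patience:
            self.stopped_step = step
            return True
        return False


stopper = PatienceSketch(patience=2)
for i, loss in enumerate([3.0, 2.0, 2.5, 2.6, 2.7]):
    if stopper.on_step(i, loss):
        break
```

With a patience of 2, the run above stops at step 4: the minimum of 2.0 was hit at step 1, and three later steps failed to improve on it.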

class encodermap.callbacks.callbacks.IncreaseCartesianCost(parameters: Optional[ADCParameters] = None, start_step: int = 0)[source]#

Bases: Callback

Callback for the encodermap.autoencoder.AngleDihedralCartesianEncoderMap.

This callback implements the soft-start of the cartesian cost.

__init__(parameters: Optional[ADCParameters] = None, start_step: int = 0) None[source]#

Instantiate the callback.

Parameters:
  • parameters (Optional[ADCParameters]) – Can be either None or an instance of encodermap.parameters.ADCParameters. These parameters define the steps at which the cartesian cost scaling factor needs to be adjusted. If None is provided, the default values (None, None), i.e. no cartesian cost, will be used. Defaults to None.

  • start_step (int) – The current step of the training. This argument is important if training is stopped and resumed while the cartesian cost is being scaled. It will usually be loaded from a file in the saved model.

calc_current_cartesian_cost_scale(epoch)[source]#

Calculates the current cartesian distance scale, based on the parameters self.a, self.b, and self.p.cartesian_cost_scale.
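A plausible sketch of such a soft-start schedule, assuming a linear ramp between two step boundaries a and b. The function name, signature, and the linear shape of the ramp are assumptions for illustration, not encodermap's actual formula.

```python
def soft_start_scale(step: int, a: int, b: int, final_scale: float = 1.0) -> float:
    """Hypothetical soft-start schedule (illustrative only): the scaling
    factor is 0 before step `a`, ramps linearly between `a` and `b`,
    and stays at `final_scale` afterwards."""
    if step < a:
        return 0.0
    if step >= b:
        return final_scale
    # Linear interpolation between the two boundaries.
    return final_scale * (step - a) / (b - a)
```

For example, with boundaries (10, 20) the scale is 0.0 at step 5, 0.5 at step 15, and 1.0 from step 20 on.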

on_train_batch_end(batch: int, logs: Optional[dict] = None)[source]#

Sets the value of the keras backend variable self.current_cartesian_cost_scale.

class encodermap.callbacks.callbacks.ProgressBar(parameters: Optional[AnyParameters] = None)[source]#

Bases: EncoderMapBaseCallback

Progress bar callback. Pass it to model.fit() and make sure to set verbosity to zero.

on_summary_step(epoch: int, logs: Optional[dict] = None) None[source]#

Update the progress bar after an epoch with the current loss.

Parameters:
  • epoch (int) – Current epoch. Will be automatically passed by tensorflow.

  • logs (Optional[dict]) – Also automatically passed by tensorflow. Contains metrics and losses. logs[‘loss’] will be written to the progress bar.

on_train_batch_end(batch: int, logs: Optional[dict] = None) None[source]#

Overwrites the parent class’ on_train_batch_end and adds a progress-bar update.

on_train_begin(logs: Optional[dict] = None) None[source]#

Simply creates the progressbar once training starts.

on_train_end(logs: Optional[dict] = None) None[source]#

Closes the progress bar.
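As an illustration of the callback hooks above, here is a minimal text progress bar. Encodermap's actual ProgressBar wraps a real progress-bar widget; this stand-in just writes the completion percentage and current loss to a stream, and all names in it are hypothetical.

```python
import io
import sys


class ProgressSketch:
    """Minimal stand-in for a progress-bar callback: counts batches and
    writes the completion percentage and current loss to a stream."""

    def __init__(self, total_batches: int, stream=None):
        self.total = total_batches
        self.n = 0
        self.stream = stream if stream is not None else sys.stdout

    def on_train_begin(self) -> None:
        self.n = 0  # reset the counter when training starts

    def on_train_batch_end(self, loss: float) -> None:
        self.n += 1
        pct = 100 * self.n // self.total
        # Carriage return rewrites the same line, like a progress bar.
        self.stream.write(f"\r[{pct:3d}%] loss={loss:.3f}")

    def on_train_end(self) -> None:
        self.stream.write("\n")  # close the bar


buf = io.StringIO()
bar = ProgressSketch(total_batches=4, stream=buf)
bar.on_train_begin()
for loss in [1.0, 0.8, 0.6, 0.5]:
    bar.on_train_batch_end(loss)
bar.on_train_end()
```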

class encodermap.callbacks.callbacks.TensorboardWriteBool(parameters: Optional[AnyParameters] = None)[source]#

Bases: Callback

This class saves the value of the keras variable log_bool.

Based on this variable, data is either written to tensorboard or not.

__init__(parameters: Optional[AnyParameters] = None) None[source]#

Instantiate the class.

Parameters:

parameters (Union[encodermap.Parameters, encodermap.ADCParameters, None], optional) – Parameters that are used to check when data should be written to tensorboard. If None is passed, the default values (check them with print(em.ADCParameters.defaults_description())) will be used. Defaults to None.

on_train_batch_end(batch: int, logs: Optional[dict] = None) None[source]#

Sets the value of the keras backend variable log_bool.

This method does not use the batch argument, because the variable self.current_training_step is used instead.
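The gating logic can be sketched as a counter-based boolean. The `summary_step` interval and the modulo rule are assumptions for illustration; only the use of an internal step counter instead of the batch argument mirrors the documented behavior.

```python
class WriteGateSketch:
    """Illustrative write gate: a boolean that becomes True every
    `summary_step` training steps. It tracks its own step counter
    instead of using the batch argument, so the count continues
    across epochs."""

    def __init__(self, summary_step: int = 10):
        self.summary_step = summary_step
        self.current_training_step = 0
        self.log_bool = False

    def on_train_batch_end(self) -> None:
        # Gate on the internal counter, then advance it.
        self.log_bool = self.current_training_step % self.summary_step == 0
        self.current_training_step += 1


gate = WriteGateSketch(summary_step=10)
write_steps = []
for step in range(21):
    gate.on_train_batch_end()
    if gate.log_bool:
        write_steps.append(step)
```

Anything downstream that writes to tensorboard would then check the boolean instead of recomputing the schedule itself.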

encodermap.callbacks.metrics module#

Module contents#