encodermap.callbacks package#
Submodules#
encodermap.callbacks.callbacks module#
Callbacks to plug into the Autoencoder classes.
- class encodermap.callbacks.callbacks.CheckpointSaver(parameters: Optional[AnyParameters] = None)[source]#
Bases:
EncoderMapBaseCallback
Callback that saves an encodermap.models model.
- class encodermap.callbacks.callbacks.EarlyStop(patience: int = 0)[source]#
Bases:
Callback
Stop training when the loss is at its min, i.e. the loss stops decreasing.
- Parameters:
patience (int) – Number of epochs to wait after the minimum has been hit. After this number of epochs without improvement, training stops.
- __init__(patience: int = 0) None [source]#
Instantiate the EarlyStop class.
- Parameters:
patience (int) – Number of training steps to wait after the minimum has been hit. Training is halted after this number of steps without improvement.
- on_train_batch_end(batch: int, logs: Optional[dict] = None) None [source]#
Gets the current loss at the end of the batch and compares it to previous batches.
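The patience logic described above can be sketched in plain Python. This is an illustration only, not encodermap's actual implementation: the real EarlyStop subclasses the Keras Callback class and reads the loss from the logs dict passed by TensorFlow.

```python
class PatienceTracker:
    """Minimal sketch of the patience logic behind an early-stop callback.

    Illustration only: the real EarlyStop is a Keras callback and reads
    the loss from the `logs` dict in on_train_batch_end.
    """

    def __init__(self, patience: int = 0) -> None:
        self.patience = patience       # steps tolerated without improvement
        self.best = float("inf")       # best (lowest) loss seen so far
        self.wait = 0                  # steps since last improvement
        self.stopped = False

    def update(self, loss: float) -> bool:
        """Record the current loss; return True once training should stop."""
        if loss < self.best:
            self.best = loss
            self.wait = 0
        else:
            self.wait += 1
            if self.wait > self.patience:
                self.stopped = True
        return self.stopped
```

With `patience=2`, the tracker tolerates two consecutive non-improving steps and signals a stop on the third.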
- class encodermap.callbacks.callbacks.IncreaseCartesianCost(parameters: Optional[ADCParameters] = None, start_step: int = 0)[source]#
Bases:
Callback
Callback for the encodermap.autoencoder.AngleDihedralCartesianEncoderMap.
This callback implements the soft-start of the cartesian cost.
- __init__(parameters: Optional[ADCParameters] = None, start_step: int = 0) None [source]#
Instantiate the callback.
- Parameters:
parameters (Optional[ADCParameters]) – Can be either None or an instance of encodermap.parameters.ADCParameters. These parameters define the steps at which the cartesian cost scaling factor is adjusted. If None is provided, the default values (None, None), i.e. no cartesian cost, will be used. Defaults to None.
start_step (int) – The current step of the training. This argument is important if training is stopped while the cartesian cost is still being scaled. This argument will usually be loaded from a file in the saved model.
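The soft-start idea can be sketched as a ramp on the cartesian cost scaling factor. The function below is hypothetical: the actual schedule (its shape and the step boundaries) is defined by the ADCParameters instance, and a linear ramp between two assumed steps `start` and `end` is used here purely for illustration.

```python
def cartesian_cost_scale(step: int, start: int, end: int) -> float:
    """Hypothetical linear soft-start for the cartesian cost scaling factor.

    Returns 0.0 before `start`, 1.0 after `end`, and a linear
    interpolation in between. The real schedule comes from the
    ADCParameters passed to IncreaseCartesianCost.
    """
    if step <= start:
        return 0.0
    if step >= end:
        return 1.0
    return (step - start) / (end - start)
```

A `start_step` restored from a saved model would simply resume the ramp at the correct point instead of restarting it from zero.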
- class encodermap.callbacks.callbacks.ProgressBar(parameters: Optional[AnyParameters] = None)[source]#
Bases:
EncoderMapBaseCallback
Progress bar callback. Pass it to model.fit() and make sure to set verbosity to zero.
- on_summary_step(epoch: int, logs: Optional[dict] = None) None [source]#
Update the progress bar after an epoch with the current loss.
- Parameters:
epoch (int) – Current epoch. Will be automatically passed by tensorflow.
logs (Optional[dict]) – Also automatically passed by tensorflow. Contains metrics and losses. logs[‘loss’] will be written to the progress bar.
- on_train_batch_end(batch: int, logs: Optional[dict] = None) None [source]#
Overrides the parent class’ on_train_batch_end and adds a progress-bar update.
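The update mechanism can be sketched as follows. This is an assumption-laden stand-in: the real ProgressBar is a Keras callback driven by model.fit(), and the `summary_step` interval used here is a hypothetical stand-in for the interval defined in the encodermap parameters.

```python
from typing import Optional


class SummaryProgress:
    """Sketch of a progress readout refreshed every `summary_step` batches.

    Illustration only: encodermap's ProgressBar hooks into model.fit()
    via the Keras callback API; here we just record formatted updates.
    """

    def __init__(self, summary_step: int = 10) -> None:
        self.summary_step = summary_step
        self.updates = []  # formatted progress lines, newest last

    def on_train_batch_end(self, batch: int, logs: Optional[dict] = None) -> None:
        # logs['loss'] is written to the progress readout, mirroring
        # how TensorFlow passes metrics to on_train_batch_end.
        if logs is not None and batch % self.summary_step == 0:
            self.updates.append(f"step {batch}: loss={logs['loss']:.3f}")
```

Because the callback draws its own output, verbosity in model.fit() should be zero so the two displays do not interleave.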
- class encodermap.callbacks.callbacks.TensorboardWriteBool(parameters: Optional[AnyParameters] = None)[source]#
Bases:
Callback
This class saves the value of the keras variable log_bool.
Based on this variable, summary data is either written to tensorboard or not.
- __init__(parameters: Optional[AnyParameters] = None) None [source]#
Instantiate the class.
- Parameters:
parameters (Union[encodermap.Parameters, encodermap.ADCParameters, None], optional) – Parameters that will be used to check when data should be written to tensorboard. If None is passed, default values (check them with print(em.ADCParameters.defaults_description())) will be used. Defaults to None.
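The gating idea behind log_bool can be sketched like this. The class and the `summary_step` parameter are hypothetical: the real callback stores a Keras variable and derives the write interval from the encodermap parameters object; this sketch only shows the flip-flop behavior that summary ops would check.

```python
class WriteGate:
    """Sketch of the log_bool gate: a flag telling summary ops whether to
    write to TensorBoard on the current step.

    Hypothetical illustration; the real TensorboardWriteBool stores a
    Keras variable and reads the interval from encodermap parameters.
    """

    def __init__(self, summary_step: int = 10) -> None:
        self.summary_step = summary_step
        self.log_bool = False  # summary ops write only while this is True

    def on_train_batch_begin(self, batch: int) -> None:
        # Enable writing only on every `summary_step`-th batch.
        self.log_bool = batch % self.summary_step == 0
```

Gating writes this way keeps TensorBoard logs small while leaving the training loop itself unchanged.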