encodermap.loss_functions package#
Submodules#
encodermap.loss_functions.loss_classes module#
Losses for EncoderMap.
All losses in EncoderMap inherit from tf.keras.losses.Loss and thus can be easily paired with other models.
encodermap.loss_functions.loss_functions module#
Loss functions for EncoderMap.
- angle_loss(model, parameters=None, callback=None)[source]#
Encodermap angle loss.
Calculates distances between true and predicted angles. Respects periodicity in a [-a, a] interval if the provided parameters have a periodicity of 2 * a.
Note
The interval should be (-a, a], but due to floating point precision we can’t make this distinction here.
- Parameters:
model (tf.keras.Model) – A model you want to use the loss function on.
parameters (Optional[AnyParameters]) – The parameters. If None is provided, default values (check them with print(em.Parameters.defaults_description())) are used. Defaults to None.
callback (Optional[tf.keras.callbacks.Callback]) – A write_bool callback that prevents a tensorboard write when parameters.summary_step is set to larger values. This saves disk space, as costs do not need to be logged at every training step.
- Returns:
A loss function.
- Return type:
Callable
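To illustrate the periodicity handling described above, here is a minimal sketch of a periodic difference; the helper name periodic_diff and the use of tf.round are illustrative assumptions, not EncoderMap's exact implementation:
>>> import tensorflow as tf
>>> # Sketch: wrap angle differences back into the periodic interval
>>> # before comparing, assuming a periodicity of 2 * a.
>>> def periodic_diff(y_true, y_pred, periodicity):
...     d = y_true - y_pred
...     # shift differences into [-periodicity / 2, periodicity / 2]
...     return d - tf.round(d / periodicity) * periodicity
>>> d = periodic_diff(tf.constant([3.1]), tf.constant([-3.1]), 2 * 3.14159265)
>>> bool(tf.abs(d[0]) < 0.1)  # 3.1 and -3.1 are close on the circle
True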
- auto_loss(model, parameters=None, callback=None)[source]#
Encodermap auto_loss.
Use in custom training loops or in model.fit() training.
- Parameters:
model (tf.keras.Model) – A model you want to use the loss function on.
parameters (Optional[AnyParameters]) – The parameters. If None is provided, default values (check them with print(em.Parameters.defaults_description())) are used. Defaults to None.
callback (Optional[tf.keras.callbacks.Callback]) – A write_bool callback that prevents a tensorboard write when parameters.summary_step is set to larger values. This saves disk space, as costs do not need to be logged at every training step.
- Returns:
A loss function.
- Return type:
Callable
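As a rough illustration of what such an auto cost computes, here is a hedged sketch assuming a plain mean-square variant; EncoderMap selects the actual variant via its parameters:
>>> import tensorflow as tf
>>> # Sketch, assuming a mean-square variant of the auto cost.
>>> def auto_loss_sketch(y_true, y_pred):
...     return tf.reduce_mean(tf.square(y_true - y_pred))
>>> x = tf.ones((4, 3))
>>> float(auto_loss_sketch(x, x))
0.0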
- cartesian_distance_loss(model, parameters=None, callback=None)[source]#
Encodermap cartesian distance loss.
Calculates sigmoid-weighted distances between the pairwise cartesians and the latent. Uses sketch-map’s sigmoid function to transform the high-dimensional space of the input and the low-dimensional latent space.
Note
Make sure to provide the pairwise cartesian distances. The output of the latent will be compared to the input.
Note
If the model contains two layers, the first layer is assumed to be the decoder. If the model contains more layers, one layer needs to be named ‘latent’ (case-insensitive).
- Parameters:
model (tf.keras.Model) – A model you want to use the loss function on.
parameters (Optional[AnyParameters]) – The parameters. If None is provided, default values (check them with print(em.Parameters.defaults_description())) are used. Defaults to None.
callback (Optional[tf.keras.callbacks.Callback]) – A write_bool callback that prevents a tensorboard write when parameters.summary_step is set to larger values. This saves disk space, as costs do not need to be logged at every training step.
- Returns:
A loss function.
- Return type:
Callable
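The pairwise cartesian distances mentioned in the note can be computed per batch along these lines; this is an illustrative sketch, and the helper pairwise_dist and the numerical clamp are assumptions:
>>> import tensorflow as tf
>>> # Sketch: Euclidean pairwise distances within a batch, as they could
>>> # be fed to the sigmoid weighting described above.
>>> def pairwise_dist(x):
...     sq = tf.reduce_sum(tf.square(x), axis=1)
...     d2 = sq[:, None] - 2.0 * tf.matmul(x, x, transpose_b=True) + sq[None, :]
...     return tf.sqrt(tf.maximum(d2, 1e-12))  # clamp avoids sqrt of negatives
>>> x = tf.constant([[0.0, 0.0], [3.0, 4.0]])
>>> print(pairwise_dist(x).shape)
(2, 2)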
- cartesian_loss(model, scale_callback=None, parameters=None, log_callback=None, print_current_scale=False)[source]#
Encodermap cartesian loss.
Calculates the difference between input and output pairwise distances. Adjustments to this cost function via the soft_start parameter need to be made via a callback that re-compiles the model during training. For this, the soft_start parameter of the outer function is used. It must be either 0 or 1, indexing the first or second element of the cartesian_cost_scale_soft_start tuple. The callback should also be provided when model.fit() is executed.
- Three cases are possible (a sketch of this schedule follows this entry):
Case 1: step < cartesian_cost_scale_soft_start[0]: cost_scale = 0
Case 2: cartesian_cost_scale_soft_start[0] <= step <= cartesian_cost_scale_soft_start[1]: cost_scale = p.cartesian_cost_scale / (cartesian_cost_scale_soft_start[1] - cartesian_cost_scale_soft_start[0]) * step
Case 3: step > cartesian_cost_scale_soft_start[1]: cost_scale = p.cartesian_cost_scale
Note
Make sure to provide the pairwise cartesian distances. This function will be adjusted as training increases via a callback. See encodermap.callbacks.callbacks.IncreaseCartesianCost for more info.
- Parameters:
model (tf.keras.Model) – The model to use the loss function on.
scale_callback (Optional[encodermap.callbacks.IncreaseCartesianCost]) – A callback that re-compiles the model with an increased cartesian cost scale during training. Defaults to None.
parameters (Optional[AnyParameters]) – The parameters. If None is provided, default values (check them with print(em.ADCParameters.defaults_description())) are used. Defaults to None.
soft_start (Union[int, None], optional) – How to scale the cartesian loss. The encodermap.parameters.ADCParameters class contains a two-tuple of integers. These integers can be used to scale this loss function. If soft_start is 0, the first value of ADCParameters.cartesian_cost_scale_soft_start will be used. If it is 1, the second. If it is None, or both values of ADCParameters.cartesian_cost_scale_soft_start are None, the cost will not be scaled. Defaults to None.
print_current_scale (bool, optional) – Whether to print the current scale. Is used in testing. Defaults to False.
log_callback (Optional[tf.keras.callbacks.Callback]) – A write_bool callback that prevents a tensorboard write when parameters.summary_step is set to larger values. Defaults to None.
- Returns:
A loss function. Can be used in either custom training or model.fit().
- Return type:
Callable
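The sketch referenced in the case list above mirrors the documented formula in plain Python; step, scale, and soft_start follow the notation above:
>>> # Sketch of the piecewise soft-start schedule, with
>>> # soft_start = cartesian_cost_scale_soft_start = (a, b).
>>> def cost_scale(step, scale, soft_start):
...     a, b = soft_start
...     if step < a:       # Case 1: cartesian cost not yet active
...         return 0.0
...     if step <= b:      # Case 2: linear ramp-up, as in the formula above
...         return scale / (b - a) * step
...     return scale       # Case 3: full cartesian cost scale
>>> cost_scale(5, 1.0, (10, 20)), cost_scale(30, 1.0, (10, 20))
(0.0, 1.0)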
- center_loss(model, parameters=None, callback=None)[source]#
Encodermap center_loss.
Use in custom training loops or in model.fit() training.
Note
If the model contains two layers, the first layer is assumed to be the decoder. If the model contains more layers, one layer needs to be named ‘latent’ (case-insensitive).
- Parameters:
model (tf.keras.Model) – A model you want to use the loss function on.
parameters (Optional[AnyParameters]) – The parameters. If None is provided, default values (check them with print(em.Parameters.defaults_description())) are used. Defaults to None.
callback (Optional[tf.keras.callbacks.Callback]) – A write_bool callback that prevents a tensorboard write when parameters.summary_step is set to larger values. This saves disk space, as costs do not need to be logged at every training step.
- Raises:
Exception – When no bottleneck/latent layer can be found in the model.
- Returns:
A loss function.
- Return type:
Callable
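As a hedged illustration of the idea, a center cost can be as simple as penalizing the mean squared magnitude of the latent points; this is an assumption for illustration, not the exact EncoderMap formula:
>>> import tensorflow as tf
>>> # Sketch: penalize latent points that drift away from the origin.
>>> def center_loss_sketch(latent):
...     return tf.reduce_mean(tf.square(latent))
>>> float(center_loss_sketch(tf.zeros((8, 2))))
0.0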
- dihedral_loss(model, parameters=None, callback=None)[source]#
Encodermap dihedral loss.
Calculates distances between true and predicted dihedral angles. Respects periodicity in a [-a, a] interval if the provided parameters have a periodicity of 2 * a.
Note
The interval should be (-a, a], but due to floating point precision we can’t make this distinction here.
- Parameters:
model (tf.keras.Model) – A model you want to use the loss function on.
parameters (Optional[AnyParameters]) – The parameters. If None is provided, default values (check them with print(em.Parameters.defaults_description())) are used. Defaults to None.
callback (Optional[tf.keras.callbacks.Callback]) – A write_bool callback that prevents a tensorboard write when parameters.summary_step is set to larger values. This saves disk space, as costs do not need to be logged at every training step.
- Returns:
A loss function.
- Return type:
Callable
- distance_loss(model, parameters=None, callback=None)[source]#
Encodermap distance_loss.
Transforms the space using the sigmoid function first proposed by sketch-map.
Note
If the model contains two layers, the first layer is assumed to be the decoder. If the model contains more layers, one layer needs to be named ‘latent’ (case-insensitive).
- Parameters:
model (tf.keras.Model) – A model you want to use the loss function on.
parameters (Optional[AnyParameters]) – The parameters. If None is provided, default values (check them with print(em.Parameters.defaults_description())) are used. Defaults to None.
callback (Optional[tf.keras.callbacks.Callback]) – A write_bool callback that prevents a tensorboard write when parameters.summary_step is set to larger values. This saves disk space, as costs do not need to be logged at every training step.
- Raises:
Exception – When no bottleneck/latent layer can be found in the model.
- Returns:
A loss function.
- Return type:
Callable
References:
Ceriotti, M., Tribello, G. A., & Parrinello, M. (2011). Simplifying the representation of complex free-energy landscapes using sketch-map. Proceedings of the National Academy of Sciences, 108(32), 13023–13028.
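The referenced sigmoid has the closed form s(r) = 1 - (1 + (2^(a/b) - 1) (r/sigma)^a)^(-b/a). A direct transcription follows; the parameter values below are placeholders, not EncoderMap defaults:
>>> # Sketch-map sigmoid from Ceriotti et al. (2011); sigma, a, b below
>>> # are placeholder values, not EncoderMap defaults.
>>> def sketchmap_sigmoid(r, sigma=1.0, a=6.0, b=12.0):
...     return 1.0 - (1.0 + (2.0 ** (a / b) - 1.0) * (r / sigma) ** a) ** (-b / a)
>>> round(sketchmap_sigmoid(1.0), 6)  # r == sigma maps to 1/2
0.5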
- loss_combinator(*losses)[source]#
Calculates the sum of a list of losses and returns a combined loss.
- Parameters:
*losses (Callable) – Variable length argument list of loss functions.
- Returns:
A combined loss function that can be used in custom training or with model.fit().
- Return type:
Callable
Example
>>> import encodermap as em
>>> from encodermap import loss_functions
>>> import tensorflow as tf
>>> import numpy as np
>>> tf.random.set_seed(1)  # fix the random state to make the doctest reproducible
>>> model = tf.keras.Sequential([
...     tf.keras.layers.Dense(100, kernel_regularizer=tf.keras.regularizers.l2(), activation='relu'),
...     tf.keras.layers.Dense(2, kernel_regularizer=tf.keras.regularizers.l2(), activation='relu'),
...     tf.keras.layers.Dense(100, kernel_regularizer=tf.keras.regularizers.l2(), activation='relu'),
... ])
>>> # Set up losses and bundle them using the loss combinator
>>> auto_loss = loss_functions.auto_loss(model)
>>> reg_loss = loss_functions.regularization_loss(model)
>>> loss = loss_functions.loss_combinator(auto_loss, reg_loss)
>>> # Compile the model. model.fit() usually takes a tuple of (data, classes), but in
>>> # regression learning the data needs to be provided twice, hence fit(data, data).
>>> model.compile(tf.keras.optimizers.Adam(), loss=loss)
>>> data = np.random.random((100, 100))
>>> history = model.fit(x=data, y=data, verbose=0)
>>> tf.random.set_seed(None)  # reset the seed
>>> loss = history.history['loss'][0]
>>> print(type(loss))
<class 'float'>
- reconstruction_loss(model)[source]#
Simple autoencoder reconstruction loss.
Use in custom training loops or in model.fit() training.
- Parameters:
model (tf.keras.Model) – A model you want to use the loss function on.
- Returns:
- A loss function to be used in custom training or model.fit(). The function takes the following arguments:
y_true (tf.Tensor): The true tensor.
y_pred (tf.Tensor, optional): The output tensor. If not supplied, the model will be called to get this tensor. Defaults to None.
step (int): A step for tensorboard callbacks. Defaults to None.
- Return type:
Callable
Examples
>>> import tensorflow as tf
>>> import encodermap as em
>>> from encodermap import loss_functions
>>> model = tf.keras.Model()
>>> loss = loss_functions.reconstruction_loss(model)
>>> x = tf.random.normal(shape=(10, 10))
>>> loss(x, x).numpy()
0.0
- regularization_loss(model, parameters=None, callback=None)[source]#
Regularization loss of an arbitrary tf.keras.Model.
Use in custom training loops or in model.fit() training. The loss is obtained as tf.math.add_n(model.losses).
- Parameters:
model (tf.keras.Model) – A model you want to use the loss function on.
parameters (Optional[AnyParameters]) – The parameters. If None is provided, default values (check them with print(em.Parameters.defaults_description())) are used. Defaults to None.
callback (Optional[tf.keras.callbacks.Callback]) – A write_bool callback that prevents a tensorboard write when parameters.summary_step is set to larger values. This saves disk space, as costs do not need to be logged at every training step.
- Returns:
A loss function.
- Return type:
Callable
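Because the loss simply sums model.losses, any layer with a registered regularizer contributes. A small self-contained sketch:
>>> import tensorflow as tf
>>> # Layers with kernel regularizers populate `model.losses`; the
>>> # regularization loss sums them with tf.math.add_n.
>>> model = tf.keras.Sequential([
...     tf.keras.layers.Dense(10, kernel_regularizer=tf.keras.regularizers.l2(0.01))
... ])
>>> _ = model(tf.zeros((1, 4)))  # building the model registers the penalty
>>> reg = tf.math.add_n(model.losses)
>>> print(reg.shape)
()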
Module contents#
EncoderMap’s loss functions.