Loss functions#

Loss functions for encodermap

Todo

  • Debug AutoGraph for the distance cost, which currently emits the following warning:

    WARNING: AutoGraph could not transform <function sigmoid_loss at 0x00000264AB761040> and will run it as-is.
    Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, export AUTOGRAPH_VERBOSITY=10) and attach the full output.
    Cause: module 'gast' has no attribute 'Index'
    To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert

encodermap.loss_functions.loss_functions._do_nothing(*args)[source]#

This function does nothing. One of the functions provided to tf.cond.

encodermap.loss_functions.loss_functions._summary_cost(name, cost)[source]#

This function logs a scalar under the given name. One of the functions provided to tf.cond.

encodermap.loss_functions.loss_functions.angle_loss(model, parameters=None, callback=None)[source]#

Encodermap angle loss. Calculates distances between true and predicted angles.

Respects periodicity in a [-a, a] interval if the provided parameters have a periodicity of 2 * a.

Note

The interval should be (-a, a], but due to floating point precision we can’t make this distinction here.

Parameters:
  • model (tf.keras.Model) – The model to use the loss function on.

  • parameters (Union[encodermap.ADCParameters, None], optional) – The parameters. If None is provided default values (check them with print(em.ADCParameters.defaults_description())) are used. Defaults to None.

Returns:

A loss function. Can be used in either custom training or model.fit().

Return type:

function
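
A minimal sketch of how a difference respecting this periodicity can be computed (illustrative NumPy only; periodic_difference is a hypothetical helper, not encodermap's implementation):

>>> import numpy as np
>>> def periodic_difference(y_true, y_pred, periodicity=2 * np.pi):
...     # Wrap the raw difference back into [-a, a], where a = periodicity / 2.
...     d = y_true - y_pred
...     return d - np.round(d / periodicity) * periodicity
>>> # 355 deg and 5 deg are only 10 deg apart once periodicity is respected.
>>> print(periodic_difference(355.0, 5.0, periodicity=360.0))
-10.0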

encodermap.loss_functions.loss_functions.auto_loss(model, parameters=None, callback=None)[source]#

Encodermap auto_loss.

Use in custom training loops or in model.fit() training.

Parameters:
  • model (tf.keras.Model) – A model you want to use the loss function on.

  • parameters (Union[encodermap.Parameters, None], optional) – The parameters. If None is provided default values (check them with print(em.Parameters.defaults_description())) are used. Defaults to None.

Returns:

A loss function.

Return type:

function

encodermap.loss_functions.loss_functions.basic_loss_combinator(*losses)[source]#

Calculates the sum of a list of losses and returns a combined loss.

The basic loss combinator does not write to summary. Can be used for debugging.
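
A minimal sketch of what such a combinator can look like (an illustrative assumption about the internals, not the actual source):

>>> def basic_loss_combinator(*losses):
...     def combined_loss(y_true, y_pred=None):
...         # Sum the individual losses without writing summaries.
...         return sum(loss(y_true, y_pred) for loss in losses)
...     return combined_loss
>>> combined = basic_loss_combinator(lambda t, p=None: 1.0, lambda t, p=None: 2.0)
>>> combined(None)
3.0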

encodermap.loss_functions.loss_functions.cartesian_distance_loss(model, parameters=None, callback=None)[source]#

Encodermap cartesian distance loss. Calculates sigmoid-weighted distances between pairwise cartesians and latent.

Uses sketch-map’s sigmoid function to transform the high-dimensional space of the input and the low-dimensional space of the latent.

Make sure to provide the pairwise cartesian distances. The output of the latent will be compared to the input.

Note

If the model contains two layers, the first layer will be assumed to be the decoder. If the model contains more layers, one layer needs to be named ‘latent’ (case insensitive).

Parameters:
  • model (tf.keras.Model) – The model to use the loss function on.

  • parameters (Union[encodermap.ADCParameters, None], optional) – The parameters. If None is provided default values (check them with print(em.ADCParameters.defaults_description())) are used. Defaults to None.

Returns:

A loss function. Can be used in either custom training or model.fit().

Return type:

function

encodermap.loss_functions.loss_functions.cartesian_loss(model, scale_callback=None, parameters=None, log_callback=None, print_current_scale=False)[source]#

Encodermap cartesian loss. Calculates sigmoid-weighted distances between the pairwise cartesians of the input and the output.

Uses sketch-map’s sigmoid function to transform the high-dimensional space of the input and the high-dimensional space of the output.

Adjustments to this cost function via the soft_start parameter need to be made through a callback that re-compiles the model during training. For this, the soft_start parameter of the outer function is used. It must be either 0 or 1, indexing the 1st or 2nd element of the cartesian_cost_scale_soft_start tuple. The callback should also be provided when model.fit() is executed.

Three cases are possible:
  • Case 1: step < cartesian_cost_scale_soft_start[0]: cost_scale = 0

  • Case 2: cartesian_cost_scale_soft_start[0] <= step <= cartesian_cost_scale_soft_start[1]:

    cost_scale = p.cartesian_cost_scale / (cartesian_cost_scale_soft_start[1] - cartesian_cost_scale_soft_start[0]) * step

  • Case 3: cartesian_cost_scale_soft_start[1] < step: cost_scale = p.cartesian_cost_scale
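
A minimal sketch of this schedule (cost_scale_at is a hypothetical helper that simply reproduces the three cases above):

>>> def cost_scale_at(step, cartesian_cost_scale, soft_start):
...     a, b = soft_start
...     if step < a:  # Case 1: cost switched off
...         return 0.0
...     if step <= b:  # Case 2: linear ramp
...         return cartesian_cost_scale / (b - a) * step
...     return cartesian_cost_scale  # Case 3: full scale
>>> [cost_scale_at(s, 1.0, (0, 100)) for s in (50, 100, 500)]
[0.5, 1.0, 1.0]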

Make sure to provide the pairwise cartesian distances. This function will be adjusted as training progresses via a callback. See encodermap.callbacks.callbacks.IncreaseCartesianCost for more info.

Parameters:
  • model (tf.keras.Model) – The model to use the loss function on.

  • parameters (Union[encodermap.ADCParameters, None], optional) – The parameters. If None is provided default values (check them with print(em.ADCParameters.defaults_description())) are used. Defaults to None.

  • soft_start (Union[int, None], optional) – How to scale the cartesian loss. The ADCParameters class contains a two-tuple of integers. These integers can be used to scale this loss function. If soft_start is 0, the first value of ADCParameters.cartesian_cost_scale_soft_start will be used; if it is 1, the second. If it is None, or both values of ADCParameters.cartesian_cost_scale_soft_start are None, the cost will not be scaled. Defaults to None.

  • print_current_scale (bool, optional) – Whether to print the current scale. Is used in unittesting. Defaults to False.

Raises:
  • Exception – When no bottleneck/latent layer can be found in the model.

  • Exception – When soft_start is greater than 1 and can’t index the two-tuple.

Returns:

A loss function. Can be used in either custom training or model.fit().

Return type:

function

encodermap.loss_functions.loss_functions.center_loss(model, parameters=None, callback=None)[source]#

Encodermap center_loss.

Use in custom training loops or in model.fit() training.

Parameters:
  • model (tf.keras.Model) – A model you want to use the loss function on.

  • parameters (Union[encodermap.Parameters, None], optional) – The parameters. If None is provided default values (check them with print(em.Parameters.defaults_description())) are used. Defaults to None.

Note

If the model contains two layers, the first layer will be assumed to be the decoder. If the model contains more layers, one layer needs to be named ‘latent’ (case insensitive).

Raises:

Exception – When no bottleneck/latent layer can be found in the model.

Returns:

A loss function.

Return type:

function

encodermap.loss_functions.loss_functions.dihedral_loss(model, parameters=None, callback=None)[source]#

Encodermap dihedral loss. Calculates distances between true and predicted dihedral angles.

Respects periodicity in a [-a, a] interval if the provided parameters have a periodicity of 2 * a.

Note

The interval should be (-a, a], but due to floating point precision we can’t make this distinction here.

Parameters:
  • model (tf.keras.Model) – The model to use the loss function on.

  • parameters (Union[encodermap.ADCParameters, None], optional) – The parameters. If None is provided default values (check them with print(em.ADCParameters.defaults_description())) are used. Defaults to None.

Returns:

A loss function. Can be used in either custom training or model.fit().

Return type:

function

encodermap.loss_functions.loss_functions.distance_loss(model, parameters=None, callback=None)[source]#

Encodermap distance_loss.

Transforms space using sigmoid function first proposed by sketch-map.

Parameters:
  • model (tf.keras.Model) – A model you want to use the loss function on.

  • parameters (Union[encodermap.Parameters, None], optional) – The parameters. If None is provided default values (check them with print(em.Parameters.defaults_description())) are used. Defaults to None.

Note

If the model contains two layers, the first layer will be assumed to be the decoder. If the model contains more layers, one layer needs to be named ‘latent’ (case insensitive).

Raises:

Exception – When no bottleneck/latent layer can be found in the model.

Returns:

A loss function.

Return type:

function

References:

@article{ceriotti2011simplifying,
  title={Simplifying the representation of complex free-energy landscapes using sketch-map},
  author={Ceriotti, Michele and Tribello, Gareth A and Parrinello, Michele},
  journal={Proceedings of the National Academy of Sciences},
  volume={108},
  number={32},
  pages={13023--13028},
  year={2011},
  publisher={National Acad Sciences}
}
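
A hypothetical usage sketch (layer sizes and names are illustrative; per the note above, a model with more than two layers needs a layer named ‘latent’):

>>> import tensorflow as tf
>>> from encodermap import loss_functions
>>> model = tf.keras.Sequential([
...     tf.keras.layers.Dense(64, activation='tanh'),
...     tf.keras.layers.Dense(2, name='latent'),
...     tf.keras.layers.Dense(64, activation='tanh'),
... ])
>>> loss = loss_functions.distance_loss(model)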
encodermap.loss_functions.loss_functions.loss_combinator(*losses)[source]#

Calculates the sum of a list of losses and returns a combined loss.

Parameters:

*losses – Variable length argument list of loss functions.

Returns:

A combined loss function that can be used in custom training or with model.fit()

Return type:

function

Example

>>> import encodermap as em
>>> from encodermap import loss_functions
>>> import tensorflow as tf
>>> import numpy as np
>>> tf.random.set_seed(1) # fix random state to pass doctest :)
>>> model = tf.keras.Sequential([
...     tf.keras.layers.Dense(100, kernel_regularizer=tf.keras.regularizers.l2(), activation='relu'),
...     tf.keras.layers.Dense(2, kernel_regularizer=tf.keras.regularizers.l2(), activation='relu'),
...     tf.keras.layers.Dense(100, kernel_regularizer=tf.keras.regularizers.l2(), activation='relu')
... ])
>>> # Set up losses and bundle them using the loss combinator
>>> auto_loss = loss_functions.auto_loss(model)
>>> reg_loss = loss_functions.regularization_loss(model)
>>> loss = loss_functions.loss_combinator(auto_loss, reg_loss)
>>> # Compile the model. model.fit() usually takes (data, labels), but an
>>> # autoencoder reconstructs its own input. That's why we use fit(data, data).
>>> model.compile(tf.keras.optimizers.Adam(), loss=loss)
>>> data = np.random.random((100, 100))
>>> history = model.fit(data, data, verbose=0)
>>> tf.random.set_seed(None) # reset seed
>>> # This weird contraption is also there to make the output predictable and pass tests
>>> # Somehow the tf.random.set_seed(1) does not work here. :(
>>> loss = history.history['loss'][0]
>>> print(loss)  # doctest: +SKIP
2.6
>>> print(type(loss))
<class 'float'>
encodermap.loss_functions.loss_functions.old_distance_loss(model, parameters=None)[source]#
encodermap.loss_functions.loss_functions.reconstruction_loss(model)[source]#

Simple autoencoder reconstruction loss.

Use in custom training loops or in model.fit() training.

Parameters:

model (tf.keras.Model) – A model you want to use the loss function on.

Returns:

A loss function to be used in custom training or model.fit(). The function takes the following arguments:

  • y_true (tf.Tensor) – The true tensor.

  • y_pred (tf.Tensor, optional) – The output tensor. If not supplied, the model will be called to get this tensor. Defaults to None.

  • step (int) – A step for tensorboard callbacks. Defaults to None.

Return type:

function

Examples

>>> import tensorflow as tf
>>> import encodermap as em
>>> from encodermap import loss_functions
>>> model = tf.keras.Model()
>>> loss = loss_functions.reconstruction_loss(model)
>>> x = tf.random.normal(shape=(10, 10))
>>> loss(x, x).numpy()
0.0
encodermap.loss_functions.loss_functions.regularization_loss(model, parameters=None, callback=None)[source]#

Regularization loss of an arbitrary tf.keras.Model.

Use in custom training loops or in model.fit() training. The loss is obtained as tf.math.add_n(model.losses).

Parameters:

model (tf.keras.Model) – A model you want to use the loss function on.

Returns:

A loss function.

Return type:

function
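
A small sketch of where the summed terms come from (the layer and regularizer are arbitrary illustrations):

>>> import tensorflow as tf
>>> model = tf.keras.Sequential([
...     tf.keras.layers.Dense(2, kernel_regularizer=tf.keras.regularizers.l2()),
... ])
>>> model.build(input_shape=(None, 2))
>>> # model.losses now holds the l2 penalty; the loss sums such terms:
>>> reg = tf.math.add_n(model.losses)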

encodermap.loss_functions.loss_functions.side_dihedral_loss(model, parameters=None, callback=None)[source]#

Encodermap sidechain dihedral loss. Calculates distances between true and predicted sidechain dihedral angles.

Respects periodicity in a [-a, a] interval if the provided parameters have a periodicity of 2 * a.

Note

The interval should be (-a, a], but due to floating point precision we can’t make this distinction here.

Parameters:
  • model (tf.keras.Model) – The model to use the loss function on.

  • parameters (Union[encodermap.ADCParameters, None], optional) – The parameters. If None is provided default values (check them with print(em.ADCParameters.defaults_description())) are used. Defaults to None.

Returns:

A loss function. Can be used in either custom training or model.fit().

Return type:

function

encodermap.loss_functions.loss_functions.sigmoid_loss(parameters=None, periodicity_overwrite=None)[source]#

Sigmoid loss closure for use in distance cost and cartesian distance cost.

The outer function prepares a callable sigmoid function, which can then be called with just y_true and y_pred.

Parameters:
  • parameters (Union[encodermap.Parameters, None], optional) – The parameters. If None is provided default values (check them with print(em.Parameters.defaults_description())) are used. Defaults to None.

  • periodicity_overwrite (Union[float, None], optional) – Cartesian distance cost is always non-periodic. To make sure no periodicity is applied to the data, set periodicity_overwrite to float(‘inf’). If None is provided, the periodicity of the parameters class (default 2*pi) will be used. Defaults to None.

Returns:

A function that takes y_true and y_pred. Both need to be of the same shape.

Return type:

function
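
For reference, a minimal NumPy sketch of the sketch-map sigmoid (Ceriotti et al., 2011) that this closure evaluates; sigma, a, and b correspond to entries of dist_sig_parameters in the parameters class, and the exact wiring inside encodermap may differ:

>>> import numpy as np
>>> def sketchmap_sigmoid(r, sigma, a, b):
...     return 1.0 - (1.0 + (2.0 ** (a / b) - 1.0) * (r / sigma) ** a) ** (-b / a)
>>> # A distance equal to sigma is mapped to exactly 0.5; much smaller
>>> # distances map toward 0, much larger ones toward 1.
>>> print(sketchmap_sigmoid(1.0, sigma=1.0, a=6.0, b=6.0))
0.5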