Loss functions#

Loss functions for EncoderMap.

_do_nothing(*args, **kwargs)[source]#

This function does nothing. One of the functions provided to tf.cond.

Return type:

None

_summary_cost(name, cost)[source]#

This function logs a scalar cost under the given name. One of the functions provided to tf.cond.

Parameters:
  • name (str)

  • cost (Tensor)

Return type:

None
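
A minimal eager-mode sketch of how these two helpers pair up with tf.cond (the write_bool tensor and the stand-in functions below are illustrative, not the library's exact code):

>>> import tensorflow as tf
>>> write_bool = tf.constant(False)  # stands in for the write_bool callback's value
>>> cost = tf.constant(1.5)
>>> def log_cost():   # plays the role of _summary_cost("cost", cost)
...     tf.print("cost:", cost)
>>> def skip():       # plays the role of _do_nothing()
...     pass
>>> tf.cond(write_bool, true_fn=log_cost, false_fn=skip)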

angle_loss(model, parameters=None, callback=None)[source]#

Encodermap angle loss.

Calculates distances between true and predicted angles. Respects periodicity in a [-a, a] interval if the provided parameters have a periodicity of 2 * a.

Note

The interval should be (-a, a], but due to floating point precision we can’t make this distinction here.

Parameters:
  • model (tf.keras.Model) – A model you want to use the loss function on.

  • parameters (Optional[AnyParameters]) – The parameters. If None is provided, default values (check them with print(em.Parameters.defaults_description())) are used. Defaults to None.

  • callback (Optional[tf.keras.callbacks.Callback]) – A write_bool callback that prevents a tensorboard write when parameters.summary_step is set to larger values. This saves disk space, as costs do not need to be logged at every training step.

Returns:

A loss function.

Return type:

Callable
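
The periodic distance itself can be sketched like this (a minimal illustration of the idea, not the library's exact implementation; the periodicity is 2 * a):

>>> import tensorflow as tf
>>> def periodic_distance(y_true, y_pred, periodicity=2 * 3.14159265):
...     # wrap absolute differences back into [0, a], with a = periodicity / 2
...     d = tf.abs(y_true - y_pred)
...     return tf.reduce_mean(tf.minimum(d, periodicity - d))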

auto_loss(model, parameters=None, callback=None)[source]#

Encodermap auto_loss.

Use in custom training loops or in model.fit() training.

Parameters:
  • model (tf.keras.Model) – A model you want to use the loss function on.

  • parameters (Optional[AnyParameters]) – The parameters. If None is provided, default values (check them with print(em.Parameters.defaults_description())) are used. Defaults to None.

  • callback (Optional[tf.keras.callbacks.Callback]) – A write_bool callback that prevents a tensorboard write when parameters.summary_step is set to larger values. This saves disk space, as costs do not need to be logged at every training step.

Returns:

A loss function.

Return type:

Callable
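
A hedged sketch of the custom-training-loop usage (the toy model and data are placeholders; the returned callable is assumed to take (y_true, y_pred), as in the model.fit() example further below):

>>> import tensorflow as tf
>>> import numpy as np
>>> from encodermap import loss_functions
>>> model = tf.keras.Sequential([
...     tf.keras.layers.Dense(2, activation='relu'),
...     tf.keras.layers.Dense(100, activation='relu'),
... ])
>>> loss = loss_functions.auto_loss(model)
>>> optimizer = tf.keras.optimizers.Adam()
>>> data = np.random.random((32, 100)).astype('float32')
>>> with tf.GradientTape() as tape:
...     cost = loss(data, model(data))
>>> grads = tape.gradient(cost, model.trainable_variables)
>>> _ = optimizer.apply_gradients(zip(grads, model.trainable_variables))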

basic_loss_combinator(*losses)[source]#

Calculates the sum of a list of losses and returns a combined loss.

The basic loss combinator does not write to the tensorboard summary and can be used for debugging.

Parameters:

*losses (Callable) – Variable length argument list of loss functions.

Return type:

Callable
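
Conceptually, the combinator closes over the given callables and sums their results; a minimal sketch of the idea (not necessarily the exact source):

>>> def basic_loss_combinator(*losses):
...     def combined_loss(y_true, y_pred=None):
...         return sum(loss(y_true, y_pred) for loss in losses)
...     return combined_loss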

cartesian_distance_loss(model, parameters=None, callback=None)[source]#

Encodermap cartesian distance loss.

Calculates sigmoid-weighted distances between pairwise cartesians and the latent. Uses sketch-map's sigmoid function to transform the high-dimensional space of the input and the low-dimensional space of the latent.

Note

Make sure to provide the pairwise cartesian distances. The output of the latent will be compared to the input.

Note

If the model contains two layers, the first layer will be assumed to be the decoder. If the model contains more layers, one layer needs to be named 'latent' (case-insensitive).

Parameters:
  • model (tf.keras.Model) – A model you want to use the loss function on.

  • parameters (Optional[AnyParameters]) – The parameters. If None is provided, default values (check them with print(em.Parameters.defaults_description())) are used. Defaults to None.

  • callback (Optional[tf.keras.callbacks.Callback]) – A write_bool callback that prevents a tensorboard write when parameters.summary_step is set to larger values. This saves disk space, as costs do not need to be logged at every training step.

Returns:

A loss function.

Return type:

Callable

cartesian_loss(model, scale_callback=None, parameters=None, log_callback=None, print_current_scale=False)[source]#

Encodermap cartesian loss.

Calculates the difference between input and output pairwise distances. Adjustments to this cost function via the soft_start parameter need to be made via a callback that re-compiles the model during training. For this, the soft_start parameter of the outer function is used. It must be either 0 or 1, indexing the first or second element of the cartesian_cost_scale_soft_start tuple. The callback should also be provided when model.fit() is executed.

Three cases are possible:
  • Case 1: step < cartesian_cost_scale_soft_start[0]: cost_scale = 0

  • Case 2: cartesian_cost_scale_soft_start[0] <= step <= cartesian_cost_scale_soft_start[1]:

    cost_scale = p.cartesian_cost_scale / (cartesian_cost_scale_soft_start[1] - cartesian_cost_scale_soft_start[0]) * step

  • Case 3: cartesian_cost_scale_soft_start[1] < step: cost_scale = p.cartesian_cost_scale
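
A minimal sketch of this schedule in plain Python, mirroring the three cases above (hypothetical helper, not the library's implementation):

>>> def cost_scale(step, soft_start, cartesian_cost_scale):
...     a, b = soft_start  # parameters.cartesian_cost_scale_soft_start
...     if step < a:
...         return 0.0
...     if step <= b:
...         return cartesian_cost_scale / (b - a) * step
...     return cartesian_cost_scale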

Note

Make sure to provide the pairwise cartesian distances. This function will be adjusted as training increases via a callback. See encodermap.callbacks.callbacks.IncreaseCartesianCost for more info.

Parameters:
  • model (tf.keras.Model) – The model to use the loss function on.

  • scale_callback (Optional[encodermap.callbacks.IncreaseCartesianCost]) – A callback that re-compiles the model with an increased cartesian cost scale during training. Defaults to None.

  • parameters (Optional[AnyParameters]) – The parameters. If None is provided, default values (check them with print(em.ADCParameters.defaults_description())) are used. Defaults to None.

  • soft_start (Union[int, None], optional) – How to scale the cartesian loss. The encodermap.parameters.ADCParameters class contains a two-tuple of integers. These integers can be used to scale this loss function. If soft_start is 0, the first value of ADCParameters.cartesian_cost_scale_soft_start will be used. If it is 1, the second. If it is None, or both values of ADCParameters.cartesian_cost_scale_soft_start are None, the cost will not be scaled. Defaults to None.

  • print_current_scale (bool, optional) – Whether to print the current scale. Used in testing. Defaults to False.

  • log_callback (Optional[tf.keras.callbacks.Callback]) – A write_bool callback that prevents a tensorboard write when parameters.summary_step is set to larger values. Defaults to None.

Raises:
  • Exception – When no bottleneck/latent layer can be found in the model.

  • Exception – When soft_start is greater than 1 and can’t index the two-tuple.

Returns:

A loss function. Can be used in either custom training or model.fit().

Return type:

Callable

center_loss(model, parameters=None, callback=None)[source]#

Encodermap center_loss.

Use in custom training loops or in model.fit() training.

Parameters:
  • model (tf.keras.Model) – A model you want to use the loss function on.

  • parameters (Optional[AnyParameters]) – The parameters. If None is provided default values (check them with print(em.Parameters.defaults_description())) are used. Defaults to None.

  • callback (Optional[tf.keras.callbacks.Callback]) – A write_bool callback, that prevents a tensorboard write when parameters.summary_step is set to greater values. This saves disk-space, as costs are not needed to be logged every training step.

Note

If the model contains two layers, the first layer will be assumed to be the decoder. If the model contains more layers, one layer needs to be named 'latent' (case-insensitive).

Raises:

Exception – When no bottleneck/latent layer can be found in the model.

Returns:

A loss function.

Return type:

Callable

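The center cost pulls latent points toward the origin. A hedged sketch of the core term (the scaling by parameters.center_cost_scale is an assumption based on the parameter name):

>>> import tensorflow as tf
>>> def center_cost(latent, center_cost_scale=1.0):
...     # mean squared distance of the latent points from the origin
...     return center_cost_scale * tf.reduce_mean(tf.square(latent))
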
dihedral_loss(model, parameters=None, callback=None)[source]#

Encodermap dihedral loss.

Calculates distances between true and predicted dihedral angles. Respects periodicity in a [-a, a] interval if the provided parameters have a periodicity of 2 * a.

Note

The interval should be (-a, a], but due to floating point precision we can’t make this distinction here.

Parameters:
  • model (tf.keras.Model) – A model you want to use the loss function on.

  • parameters (Optional[AnyParameters]) – The parameters. If None is provided, default values (check them with print(em.Parameters.defaults_description())) are used. Defaults to None.

  • callback (Optional[tf.keras.callbacks.Callback]) – A write_bool callback that prevents a tensorboard write when parameters.summary_step is set to larger values. This saves disk space, as costs do not need to be logged at every training step.

Returns:

A loss function.

Return type:

Callable

distance_loss(model, parameters=None, callback=None)[source]#

Encodermap distance_loss.

Transforms the space using the sigmoid function first proposed by sketch-map.

Parameters:
  • model (tf.keras.Model) – A model you want to use the loss function on.

  • parameters (Optional[AnyParameters]) – The parameters. If None is provided, default values (check them with print(em.Parameters.defaults_description())) are used. Defaults to None.

  • callback (Optional[tf.keras.callbacks.Callback]) – A write_bool callback that prevents a tensorboard write when parameters.summary_step is set to larger values. This saves disk space, as costs do not need to be logged at every training step.

Note

If the model contains two layers, the first layer will be assumed to be the decoder. If the model contains more layers, one layer needs to be named 'latent' (case-insensitive).

Raises:

Exception – When no bottleneck/latent layer can be found in the model.

Returns:

A loss function.

Return type:

Callable

References:

@article{ceriotti2011simplifying,
  title={Simplifying the representation of complex free-energy landscapes using sketch-map},
  author={Ceriotti, Michele and Tribello, Gareth A and Parrinello, Michele},
  journal={Proceedings of the National Academy of Sciences},
  volume={108},
  number={32},
  pages={13023--13028},
  year={2011},
  publisher={National Acad Sciences}
}
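
The sketch-map sigmoid referenced above is, in the usual (sig, a, b) parametrization, sig(r) = 1 - (1 + (2**(a/b) - 1) * (r/sig)**a) ** (-b/a). A minimal sketch (parameter names follow the dist_sig_parameters convention; this is an illustration, not the library's exact code):

>>> import tensorflow as tf
>>> def sketchmap_sigmoid(r, sig, a, b):
...     return 1 - (1 + (2 ** (a / b) - 1) * (r / sig) ** a) ** (-b / a)
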
loss_combinator(*losses)[source]#

Calculates the sum of a list of losses and returns a combined loss.

Parameters:

*losses (Callable) – Variable length argument list of loss functions.

Returns:

A combined loss function that can be used in custom training or with model.fit()

Return type:

Callable

Example

>>> import encodermap as em
>>> from encodermap import loss_functions
>>> import tensorflow as tf
>>> import numpy as np
>>> tf.random.set_seed(1) # fix random state to pass doctest :)
...
>>> model = tf.keras.Sequential([
...     tf.keras.layers.Dense(100, kernel_regularizer=tf.keras.regularizers.l2(), activation='relu'),
...     tf.keras.layers.Dense(2, kernel_regularizer=tf.keras.regularizers.l2(), activation='relu'),
...     tf.keras.layers.Dense(100, kernel_regularizer=tf.keras.regularizers.l2(), activation='relu')
... ])
...
>>> # Set up losses and bundle them using the loss combinator
>>> auto_loss = loss_functions.auto_loss(model)
>>> reg_loss = loss_functions.regularization_loss(model)
>>> loss = loss_functions.loss_combinator(auto_loss, reg_loss)
...
>>> # Compile model, model.fit() usually takes a tuple of (data, classes) but in
>>> # regression learning the data needs to be provided twice. That's why we use fit(data, data)
>>> model.compile(tf.keras.optimizers.Adam(), loss=loss)
>>> data = np.random.random((100, 100))
>>> history = model.fit(x=data, y=data, verbose=0)
>>> tf.random.set_seed(None) # reset seed
...
>>> # This indirection is there to make the output predictable and pass tests.
>>> # Somehow tf.random.set_seed(1) does not work here. :(
>>> loss = history.history['loss'][0]
>>> print(loss)  # doctest: +SKIP
2.6
>>> print(type(loss))
<class 'float'>
old_distance_loss(model, parameters=None)[source]#

reconstruction_loss(model)[source]#

Simple autoencoder reconstruction loss.

Use in custom training loops or in model.fit() training.

Parameters:

model (tf.keras.Model) – A model you want to use the loss function on.

Returns:

A loss function to be used in custom training or model.fit(). The function takes the following arguments:
  • y_true (tf.Tensor) – The true tensor.

  • y_pred (tf.Tensor, optional) – The output tensor. If not supplied, the model will be called to get this tensor. Defaults to None.

  • step (int) – A step for tensorboard callbacks. Defaults to None.

Return type:

Callable

Examples

>>> import tensorflow as tf
>>> import encodermap as em
>>> from encodermap import loss_functions
>>> model = tf.keras.Model()
>>> loss = loss_functions.reconstruction_loss(model)
>>> x = tf.random.normal(shape=(10, 10))
>>> loss(x, x).numpy()
0.0
regularization_loss(model, parameters=None, callback=None)[source]#

Regularization loss of an arbitrary tf.keras.Model.

Use in custom training loops or in model.fit() training. The loss is obtained as tf.math.add_n(model.losses).

Parameters:
  • model (tf.keras.Model) – A model you want to use the loss function on.

  • parameters (Optional[AnyParameters]) – The parameters. If None is provided, default values (check them with print(em.Parameters.defaults_description())) are used. Defaults to None.

  • callback (Optional[tf.keras.callbacks.Callback]) – A write_bool callback that prevents a tensorboard write when parameters.summary_step is set to larger values. This saves disk space, as costs do not need to be logged at every training step.

Returns:

A loss function.

Return type:

Callable
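
Since the loss simply sums model.losses, any layer regularizer contributes automatically. A minimal sketch of what the returned callable evaluates (toy model assumed):

>>> import tensorflow as tf
>>> model = tf.keras.Sequential(
...     [tf.keras.layers.Dense(10, kernel_regularizer=tf.keras.regularizers.l2(0.01))]
... )
>>> model.build(input_shape=(None, 4))
>>> reg = tf.math.add_n(model.losses)  # what the returned loss computes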

side_dihedral_loss(model, parameters=None, callback=None)[source]#

Encodermap sidechain dihedral loss.

Calculates distances between true and predicted sidechain dihedral angles. Respects periodicity in a [-a, a] interval if the provided parameters have a periodicity of 2 * a.

Note

The interval should be (-a, a], but due to floating point precision we can’t make this distinction here.

Parameters:
  • model (tf.keras.Model) – A model you want to use the loss function on.

  • parameters (Optional[AnyParameters]) – The parameters. If None is provided, default values (check them with print(em.Parameters.defaults_description())) are used. Defaults to None.

  • callback (Optional[tf.keras.callbacks.Callback]) – A write_bool callback that prevents a tensorboard write when parameters.summary_step is set to larger values. This saves disk space, as costs do not need to be logged at every training step.

Returns:

A loss function.

Return type:

Callable

sigmoid_loss(parameters=None, periodicity_overwrite=None, dist_sig_parameters_overwrite=None)[source]#

Sigmoid loss closure for use in distance cost and cartesian distance cost.

The outer function prepares the callable sigmoid loss, which can then be called with just y_true and y_pred.

Parameters:
  • parameters (Optional[AnyParameters]) – The parameters. If None is provided, default values (check them with print(em.Parameters.defaults_description())) are used. Defaults to None.

  • periodicity_overwrite (Optional[float]) – The cartesian distance cost is always non-periodic. To make sure no periodicity is applied to the data, set periodicity_overwrite to float('inf'). If None is provided, the periodicity of the parameters class (default 2*pi) will be used. Defaults to None.

  • dist_sig_parameters_overwrite (Optional[tuple[float, ...]]) – Distance costs for the AngleDihedralCartesianEncoderMap class come in two flavors. The regular distance cost compares the encoder inputs to the latent and uses sketch-map's sigmoid function to weight the data accordingly. The cartesian distance cost, on the other hand, compares the latent and the pairwise distances of the input CA coordinates. This cost function uses different sigmoid parameters (because the CA distances don't lie in a periodic space). The tuple of 6 floats provided for dist_sig_parameters_overwrite will supersede the dist_sig_parameters in the parameters argument. Defaults to None.

Returns:

A function that takes y_true and y_pred. Both need to be of the same shape.

Return type:

Callable
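
A hedged usage sketch of the closure (default em.Parameters() assumed; the non-periodic variant mirrors how the cartesian distance cost uses it):

>>> import encodermap as em
>>> from encodermap import loss_functions
>>> sig_loss = loss_functions.sigmoid_loss(
...     parameters=em.Parameters(),
...     periodicity_overwrite=float('inf'),
... )
>>> # sig_loss can now be called with y_true and y_pred of the same shape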