Autoencoder Classes#
Autoencoder#
- class Autoencoder(parameters=None, train_data=None, model=None, read_only=False, sparse=False)[source]#
Main Autoencoder class. Provides all high-level functions.
This is the main class for neural networks inside EncoderMap. The class prepares the data (batching and shuffling) and creates a tf.keras.Model of layers specified by the attributes of the encodermap.Parameters class. Depending on which parent/child class is instantiated, a combination of various cost functions is set up. Callbacks to Tensorboard are also set up.
- train_data#
The numpy array of the train data passed at init.
- Type:
np.ndarray
- p#
An encodermap.Parameters class containing all info needed to set up the network.
- Type:
AnyParameters
- dataset#
The dataset that is actually used in training the keras model. The dataset is a batched, shuffled, infinitely-repeating dataset.
- Type:
tensorflow.data.Dataset
- read_only#
Variable telling the class whether it is allowed to write to disk (False) or not (True).
- Type:
bool
- callbacks#
A list of tf.keras.callbacks.Callback subclasses changing the behavior of the model during training. Some standard callbacks are always present like:
- encodermap.callbacks.callbacks.ProgressBar:
A progress bar callback using tqdm giving the current progress of training and the current loss.
- CheckPointSaver:
A callback that saves the model every parameters.checkpoint_step steps into the main directory. This callback will only be used when read_only is False.
- TensorboardWriteBool:
A callback that contains a boolean Tensor that will be True or False, depending on the current training step and the summary_step in the parameters class. The loss functions use this callback to decide whether they should write to Tensorboard. This callback will only be present when read_only is False and parameters.tensorboard is True.
You can append your own callbacks to this list before executing self.train(); a short sketch follows below.
- Type:
list[Any]
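A minimal sketch of appending a custom callback before training; the PrintLoss class and its printing behavior are illustrative assumptions, not part of EncoderMap:
>>> import tensorflow as tf
>>> import encodermap as em
>>> class PrintLoss(tf.keras.callbacks.Callback):
...     """Hypothetical callback that prints the loss every 100 batches."""
...     def on_train_batch_end(self, batch, logs=None):
...         if batch % 100 == 0:
...             print(f"batch {batch}: loss {logs['loss']:.3f}")
>>> e_map = em.EncoderMap(read_only=True)
>>> e_map.callbacks.append(PrintLoss())
>>> history = e_map.train()  # doctest: +SKIP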
- encoder#
The encoder submodel of self.model.
- Type:
tf.keras.Model
- decoder#
The decoder submodel of self.model.
- Type:
tf.keras.Model
- loss#
A list of loss functions passed to the model when it is compiled. When the main Autoencoder class is used and parameters.loss is 'emap_cost', this list comprises center_cost, regularization_cost, and auto_cost. When the EncoderMap subclass is used and parameters.loss is 'emap_cost', distance_cost is added to the list. When parameters.loss is not 'emap_cost', the loss can either be a string ('mse') or a function; both are acceptable loss arguments when compiling a keras model.
- Type:
Sequence[Callable]
- plot_network()[source]#
Tries to plot the network. For this method to work, graphviz, pydot, and pydotplus need to be installed.
- Return type:
None
- generate()[source]#
Same as decode. For the AngleDihedralCartesianEncoderMap class, this will build a protein structure.
Note
Performance of tensorflow is not only dependent on your system's hardware and how the data is presented to the network (for this, check out https://www.tensorflow.org/guide/data_performance), but also on how you compiled tensorflow. Normal tensorflow (pip install tensorflow) is built without CPU extensions so that it works on many CPUs. However, tensorflow can greatly benefit from CPU instructions like AVX2 and AVX512, which can bring a speed-up of 300% in linear algebra computations. By building tensorflow from source, you can activate these extensions. However, the speed-up of using tensorflow with a GPU dwarfs the CPU speed-up. To check whether a GPU is available, run: print(len(tf.config.list_physical_devices('GPU'))). Refer to these pages to install tensorflow for the best performance: https://www.tensorflow.org/install/pip and https://www.tensorflow.org/install/gpu
Examples
>>> import encodermap as em
>>> # without providing any data, default parameters and a 4D
>>> # hypercube as input data will be used.
>>> e_map = em.EncoderMap(read_only=True)
>>> print(e_map.train_data.shape)
(16000, 4)
>>> print(e_map.dataset)
<BatchDataset element_spec=(TensorSpec(shape=(None, 4), dtype=tf.float32, name=None), TensorSpec(shape=(None, 4), dtype=tf.float32, name=None))>
>>> print(e_map.encode(e_map.train_data).shape)
(16000, 2)
Instantiate the Autoencoder class.
- Parameters:
parameters (Union[encodermap.Parameters, None], optional) – The parameters to be used. If None is provided default values (check them with print(em.Parameters.defaults_description())) are used. Defaults to None.
train_data (Union[np.ndarray, tf.data.Dataset, None], optional) –
- The train data. Can be one of the following:
- None: If None is provided points on the edges of a
4-dimensional hypercube will be used as train data.
- np.ndarray: If a numpy array is provided, it will be
transformed into a batched tf.data.Dataset by first making it an infinitely repeating dataset, shuffling it, and then batching it with a batch size specified by parameters.batch_size.
- tf.data.Dataset: If a dataset is provided it will be
used without making any adjustments. Make sure that the dataset uses float32 as its type.
Defaults to None.
model (Union[tf.keras.models.Model, None], optional) – Providing a keras model to this argument will make the Autoencoder/EncoderMap class use this model instead of the predefined ones. Make sure the model can accept EncoderMap’s loss functions. If None is provided the model will be built using the specifications in parameters. Defaults to None.
read_only (bool, optional) – Whether the class is allowed to write to disk (False) or not (True). Defaults to False and will allow the class to write to disk.
sparse (bool)
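A minimal instantiation sketch with random data; the array shape and the use of the EncoderMap subclass (rather than Autoencoder directly) are illustrative choices:
>>> import numpy as np
>>> import encodermap as em
>>> train_data = np.random.random((1000, 10)).astype("float32")
>>> e_map = em.EncoderMap(train_data=train_data, read_only=True)
>>> print(e_map.train_data.shape)
(1000, 10)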
- add_images_to_tensorboard(*args, **kwargs)[source]#
Adds images of the latent space to tensorboard.
- Parameters:
data (Optional[Union[np.ndarray, Sequence[np.ndarray]]]) – The input data that will be passed through the encoder part of the autoencoder. If None is provided, a set of 10_000 points from self.train_data will be taken. A list[np.ndarray] is needed for the functional API of the AngleDihedralCartesianEncoderMap, which takes a list of [angles, dihedrals, side_dihedrals]. Defaults to None.
image_step (Optional[int]) – The interval in which to plot images to tensorboard. If None is provided, the image_step will be the same as Parameters.summary_step. Defaults to None.
max_size (int) – The maximum size of the high-dimensional data that is projected. Prevents excessively large datasets from being projected at every image_step. Defaults to 10_000.
scatter_kws (Optional[dict[str, Any]]) – A dict with items that plotly.express.scatter() will accept. If None is provided, a dict of {'size_max': 10, 'opacity': 0.2} will be passed to px.scatter(), which sets an appropriate scatter-point size for the dataset sizes EncoderMap is usually used for.
hist_kws (Optional[dict[str, Any]]) – A dict with items that encodermap.plot.plotting._plot_free_energy() will accept. If None is provided, a dict of {'bins': 50} will be passed to encodermap.plot.plotting._plot_free_energy(). You can choose a colormap here by providing {'bins': 50, 'cmap': 'plasma'} for this argument.
additional_fns (Optional[Sequence[Callable]]) – A list of functions that will accept the low-dimensional output of the Autoencoder latent/bottleneck layer and return a tf.Tensor that can be logged by tf.summary.image(). See the notebook 'writing_custom_images_to_tensorboard.ipynb' in tutorials/notebooks_customization for more info. If None is provided, no additional functions will be used to plot to tensorboard. Defaults to None.
when (Literal["epoch", "batch"]) – When to log the images. Can be either 'batch', in which case the images will be logged after every training step, or 'epoch', in which case images are only written every image_step epochs. Defaults to 'epoch'.
save_to_disk (bool) – Whether to also write the images to disk.
args (Any)
kwargs (Any)
- Return type:
None
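A hedged usage sketch; that em.Parameters accepts tensorboard as a keyword argument is an assumption here (em.ADCParameters accepts keywords in the example further below):
>>> import encodermap as em
>>> p = em.Parameters(tensorboard=True)  # doctest: +SKIP
>>> e_map = em.EncoderMap(p)  # doctest: +SKIP
>>> e_map.add_images_to_tensorboard(image_step=100, save_to_disk=True)  # doctest: +SKIP
>>> history = e_map.train()  # doctest: +SKIP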
- decode(data)[source]#
Calls the decoder part of the model.
The AngleDihedralCartesianEncoderMap class will, unlike the other two classes, output a list of np.ndarray.
- Parameters:
data (np.ndarray) – The data to be passed to the decoder part of the model. Make sure that the shape of the data matches the number of neurons in the latent space.
- Returns:
- Outputs from the decoder part.
For AngleDihedralCartesianEncoderMap, this will be a list of np.ndarray.
- Return type:
Union[list[np.ndarray], np.ndarray]
- encode(data=None)[source]#
Calls encoder part of self.model.
- Parameters:
data (Optional[np.ndarray]) – The data to be passed to the encoder part. Can be either a numpy ndarray or None. If None is provided, a set of 10_000 points from the provided train data will be taken. Defaults to None.
- Returns:
The output from the bottleneck/latent layer.
- Return type:
np.ndarray
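A round-trip sketch through encode() and decode(), using the default 4D hypercube data from the class example above; that the decoded output recovers the input dimensionality is an assumption about the symmetric architecture:
>>> import encodermap as em
>>> e_map = em.EncoderMap(read_only=True)
>>> lowd = e_map.encode(e_map.train_data)
>>> print(lowd.shape)
(16000, 2)
>>> highd = e_map.decode(lowd)
>>> print(highd.shape)  # doctest: +SKIP
(16000, 4)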
- classmethod from_checkpoint(checkpoint_path, train_data=None, sparse=False, use_previous_model=False, compat=False)[source]#
Reconstructs the class from a checkpoint.
- Parameters:
checkpoint_path (Union[str, Path]) – The path to the checkpoint. Can be either a directory, in which case the most recently saved model will be loaded, or a direct .keras file, in which case this specific model will be loaded.
train_data (Optional[np.ndarray]) – Optionally provide the train data here. Defaults to None.
sparse (bool) – Whether the reloaded model should be sparse.
use_previous_model (bool) – Set this flag to True, if you load a model from an in-between checkpoint step (e.g., to continue training with different parameters). If you have the files saved_model_0.keras, saved_model_500.keras and saved_model_1000.keras, setting this to True and loading the saved_model_500.keras will back up the saved_model_1000.keras.
compat (bool) – Whether to use compatibility mode when missing or wrong parameter files are present. In this special case, some assumptions about the network architecture are made from the model, and the parameters in parameters.json are overwritten accordingly (a backup will also be made).
- Returns:
An instance of the Autoencoder class.
- Return type:
Autoencoder
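A reload sketch; the checkpoint directory runs/run0 is hypothetical:
>>> import encodermap as em
>>> e_map = em.EncoderMap.from_checkpoint("runs/run0")  # doctest: +SKIP
>>> lowd = e_map.encode(e_map.train_data)  # doctest: +SKIP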
- generate(data)[source]#
Duplication of self.decode.
In Autoencoder and EncoderMap this method is equivalent to decode(). In AngleDihedralCartesianEncoderMap this method will be overwritten to produce output molecular conformations.
- Parameters:
data (np.ndarray) – The data to be passed to the decoder part of the model. Make sure that the shape of the data matches the number of neurons in the latent space.
- Returns:
- Outputs from the decoder part. For
AngleDihedralCartesianEncoderMap, this will either be an mdtraj.Trajectory or MDAnalysis.Universe.
- Return type:
np.ndarray
- plot_network()[source]#
Tries to plot the network using pydot, pydotplus and graphviz. Doesn’t raise an exception if plotting is not possible.
Note
Refer to this guide to install these programs: https://stackoverflow.com/questions/47605558/importerror-failed-to-import-pydot-you-must-install-pydot-and-graphviz-for-py
- Return type:
None
- save(step=None)[source]#
Saves the model to the current path defined in parameters.main_path.
- Parameters:
step (Optional[int]) – This argument does not save the model at the given training step; it changes the filename suffix used for saving from a datetime-based string to the given step (e.g., saved_model_500.keras).
- Returns:
- When the model has been saved, the Path will
be returned. If the model could not be saved, None will be returned.
- Return type:
Union[None, Path]
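A sketch of saving with and without the step argument; the saved_model_500.keras filename follows the pattern mentioned in from_checkpoint and is illustrative:
>>> path = e_map.save()  # datetime-based filename  # doctest: +SKIP
>>> path = e_map.save(step=500)  # e.g., saved_model_500.keras  # doctest: +SKIP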
- set_train_data(data)[source]#
Resets the train data for reloaded models.
- Parameters:
data (ndarray | DatasetV2)
- Return type:
None
- train()[source]#
Starts the training of the model.
- Returns:
- If training succeeds, an
instance of tf.keras.callbacks.History is returned. If not, None is returned.
- Return type:
Union[tf.keras.callbacks.History, None]
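A sketch of training and inspecting the returned History; the 'loss' key is the standard keras History entry and an assumption for EncoderMap's custom losses:
>>> history = e_map.train()  # doctest: +SKIP
>>> print(history.history["loss"][-1])  # doctest: +SKIP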
- property decoder: Model#
Decoder part of the model.
- Type:
tf.keras.Model
- property encoder: Model#
Encoder part of the model.
- Type:
tf.keras.Model
EncoderMap#
- class EncoderMap(parameters=None, train_data=None, model=None, read_only=False, sparse=False)[source]#
Complete copy of the Autoencoder class, but adds a distance cost scaled by the Sketch-map sigmoid parameters.
Instantiate the EncoderMap class.
- Parameters:
parameters (Union[encodermap.Parameters, None], optional) – The parameters to be used. If None is provided default values (check them with print(em.Parameters.defaults_description())) are used. Defaults to None.
train_data (Union[np.ndarray, tf.data.Dataset, None], optional) –
- The train data. Can be one of the following:
- None: If None is provided points on the edges of a
4-dimensional hypercube will be used as train data.
- np.ndarray: If a numpy array is provided, it will be
transformed into a batched tf.data.Dataset by first making it an infinitely repeating dataset, shuffling it, and then batching it with a batch size specified by parameters.batch_size.
- tf.data.Dataset: If a dataset is provided it will be
used without making any adjustments. Make sure that the dataset uses float32 as its type.
Defaults to None.
model (Union[tf.keras.models.Model, None], optional) – Providing a keras model to this argument will make the Autoencoder/EncoderMap class use this model instead of the predefined ones. Make sure the model can accept EncoderMap’s loss functions. If None is provided the model will be built using the specifications in parameters. Defaults to None.
read_only (bool, optional) – Whether the class is allowed to write to disk (False) or not (True). Defaults to False and will allow the class to write to disk.
sparse (bool)
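A complete training sketch; the n_steps keyword for em.Parameters is an assumption (check print(em.Parameters.defaults_description()) for the available parameters):
>>> import numpy as np
>>> import encodermap as em
>>> data = np.random.random((5000, 10)).astype("float32")
>>> p = em.Parameters(n_steps=100)  # doctest: +SKIP
>>> e_map = em.EncoderMap(p, data)  # doctest: +SKIP
>>> history = e_map.train()  # doctest: +SKIP
>>> lowd = e_map.encode(data)  # doctest: +SKIP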
- classmethod from_checkpoint(checkpoint_path, train_data=None, sparse=False, use_previous_model=False, compat=False)[source]#
Reconstructs the class from a checkpoint.
- Parameters:
checkpoint_path (Union[str, Path]) – The path to the checkpoint. Can be either a directory, in which case the most recently saved model will be loaded, or a direct .keras file, in which case this specific model will be loaded.
train_data (Optional[np.ndarray]) – Optionally provide the train data here. Defaults to None.
sparse (bool) – Whether the reloaded model should be sparse.
use_previous_model (bool) – Set this flag to True, if you load a model from an in-between checkpoint step (e.g., to continue training with different parameters). If you have the files saved_model_0.keras, saved_model_500.keras and saved_model_1000.keras, setting this to True and loading the saved_model_500.keras will back up the saved_model_1000.keras.
compat (bool) – Whether to use compatibility mode when missing or wrong parameter files are present. In this special case, some assumptions about the network architecture are made from the model, and the parameters in parameters.json are overwritten accordingly (a backup will also be made).
- Returns:
An instance of the EncoderMap class.
- Return type:
EncoderMap
AngleDihedralCartesianEncoderMap#
- class AngleDihedralCartesianEncoderMap(trajs=None, parameters=None, model=None, read_only=False, dataset=None, ensemble=False, use_dataset_when_possible=True, deterministic=False)[source]#
Uses a different __init__ method than the Autoencoder class. Uses callbacks to tune in the cartesian cost.
Overwritten methods: _set_up_callbacks and generate.
Examples
>>> import encodermap as em
>>> from pathlib import Path
>>> # Load two trajectories
>>> test_data = Path(em.__file__).parent.parent / "tests/data"
>>> test_data.is_dir()
True
>>> xtcs = [test_data / "1am7_corrected_part1.xtc", test_data / "1am7_corrected_part2.xtc"]
>>> tops = [test_data / "1am7_protein.pdb", test_data / "1am7_protein.pdb"]
>>> trajs = em.load(xtcs, tops)
>>> print(trajs)
encodermap.TrajEnsemble object. Current backend is no_load. Containing 2 trajectories. Not containing any CVs.
>>> # load CVs
>>> # This step can be omitted. The AngleDihedralCartesianEncoderMap class automatically loads CVs
>>> trajs.load_CVs('all')
>>> print(trajs.CVs['central_cartesians'].shape)
(51, 474, 3)
>>> print(trajs.CVs['central_dihedrals'].shape)
(51, 471)
>>> # create some parameters
>>> p = em.ADCParameters(periodicity=360, use_backbone_angles=True, use_sidechains=True,
...                      cartesian_cost_scale_soft_start=(6, 12))
>>> # Standard is functional model, as it offers more flexibility
>>> print(p.model_api)
functional
>>> print(p.distance_cost_scale)
None
>>> # Instantiate the class
>>> e_map = em.AngleDihedralCartesianEncoderMap(trajs, p, read_only=True)
Model...
>>> # dataset contains these inputs:
>>> # central_angles, central_dihedrals, central_cartesians, central_distances, sidechain_dihedrals
>>> print(e_map.dataset)
<BatchDataset element_spec=(TensorSpec(shape=(None, 472), dtype=tf.float32, name=None), TensorSpec(shape=(None, 471), dtype=tf.float32, name=None), TensorSpec(shape=(None, 474, 3), dtype=tf.float32, name=None), TensorSpec(shape=(None, 473), dtype=tf.float32, name=None), TensorSpec(shape=(None, 316), dtype=tf.float32, name=None))>
>>> # output from the model contains the following data:
>>> # out_angles, out_dihedrals, back_cartesians, pairwise_distances of inp cartesians, pairwise of back-mapped cartesians, out_side_dihedrals
>>> for data in e_map.dataset.take(1):
...     pass
>>> out = e_map.model(data)
>>> print([i.shape for i in out])
[TensorShape([256, 472]), TensorShape([256, 471]), TensorShape([256, 474, 3]), TensorShape([256, 112101]), TensorShape([256, 112101]), TensorShape([256, 316])]
>>> # get output of latent space by providing central_angles, central_dihedrals, side_dihedrals
>>> latent = e_map.encoder([data[0], data[1], data[-1]])
>>> print(latent.shape)
(256, 2)
>>> # Rebuild central_angles, central_dihedrals and side_dihedrals from latent
>>> dih, ang, side_dih = e_map.decode(latent)
>>> print(dih.shape, ang.shape, side_dih.shape)
(256, 472) (256, 471) (256, 316)
Instantiate the AngleDihedralCartesianEncoderMap class.
- Parameters:
trajs (Optional[TrajEnsemble]) – The trajectories to be used as input. If trajs contains no CVs, the correct CVs will be loaded. Can be None, in which case the dataset argument should be provided. Defaults to None.
parameters (Optional[em.ADCParameters]) – The parameters for the current run. Can be set to None and the default parameters will be used. Defaults to None.
model (Optional[tf.keras.models.Model]) – The keras model to use. You can provide your own model with this argument. If set to None, the model will be built to the specifications of parameters using the functional API. Defaults to None.
read_only (bool) – Whether to write anything to disk (False) or not (True). Defaults to False.
dataset (Optional[tf.data.Dataset]) – The dataset argument takes precedence over the trajs argument. If None, the dataset will be constructed from the trajs argument (see em.trajinfo.TrajEnsemble.tf_dataset for more info). Defaults to None.
ensemble (bool) – Whether to allow non-defined features when featurizing the provided trajs. Only takes effect, when the trajs don’t already have the features (central_cartesians, central_distances, central_angles, central_dihedrals, side_dihedrals) loaded. Defaults to False.
use_dataset_when_possible (bool) – Whether to use the trajs method tf_dataset() to get a dataset for training, or to construct a dataset from the numpy arrays of the trajs' CVs. For large datasets, the first method can be advantageous, as not all data will end up in memory and the dataset can be larger than the memory allows. For small datasets, the second method is faster, as all data is in memory. Defaults to True.
deterministic (bool)
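If you want to build the dataset argument yourself, a sketch along the following lines might work. The shapes and the input ordering (central_angles, central_dihedrals, central_cartesians, central_distances, side_dihedrals) mirror the 1am7 example above; everything else (random arrays, shuffle buffer, the batch_size attribute on ADCParameters) is an assumption:
>>> import numpy as np
>>> import tensorflow as tf
>>> import encodermap as em
>>> # hypothetical CV arrays for 51 frames
>>> angles = np.random.random((51, 472)).astype("float32")
>>> dihedrals = np.random.random((51, 471)).astype("float32")
>>> cartesians = np.random.random((51, 474, 3)).astype("float32")
>>> distances = np.random.random((51, 473)).astype("float32")
>>> side_dihedrals = np.random.random((51, 316)).astype("float32")
>>> p = em.ADCParameters(use_backbone_angles=True, use_sidechains=True)
>>> ds = tf.data.Dataset.from_tensor_slices(
...     (angles, dihedrals, cartesians, distances, side_dihedrals))
>>> ds = ds.shuffle(51).repeat().batch(p.batch_size)  # doctest: +SKIP
>>> e_map = em.AngleDihedralCartesianEncoderMap(
...     parameters=p, dataset=ds, read_only=True)  # doctest: +SKIP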
- add_images_to_tensorboard(*args, **kwargs)[source]#
Adds images of the latent space to tensorboard.
- Parameters:
data (Optional[Union[np.ndarray, Sequence[np.ndarray]]]) – The input data that will be passed through the encoder part of the autoencoder. If None is provided, a set of 10_000 points from self.train_data will be taken. A list[np.ndarray] is needed for the functional API of the AngleDihedralCartesianEncoderMap, which takes a list of [angles, dihedrals, side_dihedrals]. Defaults to None.
image_step (Optional[int]) – The interval in which to plot images to tensorboard. If None is provided, the image_step will be the same as Parameters.summary_step. Defaults to None.
max_size (int) – The maximum size of the high-dimensional data that is projected. Prevents excessively large datasets from being projected at every image_step. Defaults to 10_000.
scatter_kws (Optional[dict[str, Any]]) – A dict with items that plotly.express.scatter() will accept. If None is provided, a dict of {'size_max': 10, 'opacity': 0.2} will be passed to px.scatter(), which sets an appropriate scatter-point size for the dataset sizes EncoderMap is usually used for.
hist_kws (Optional[dict[str, Any]]) – A dict with items that encodermap.plot.plotting._plot_free_energy() will accept. If None is provided, a dict of {'bins': 50} will be passed to encodermap.plot.plotting._plot_free_energy(). You can choose a colormap here by providing {'bins': 50, 'cmap': 'plasma'} for this argument.
additional_fns (Optional[Sequence[Callable]]) – A list of functions that will accept the low-dimensional output of the Autoencoder latent/bottleneck layer and return a tf.Tensor that can be logged by tf.summary.image(). See the notebook 'writing_custom_images_to_tensorboard.ipynb' in tutorials/notebooks_customization for more info. If None is provided, no additional functions will be used to plot to tensorboard. Defaults to None.
when (Literal["epoch", "batch"]) – When to log the images. Can be either 'batch', in which case the images will be logged after every training step, or 'epoch', in which case images are only written every image_step epochs. Defaults to 'epoch'.
save_to_disk (bool) – Whether to also write the images to disk.
args (Any)
kwargs (Any)
- Return type:
None
- decode(data)[source]#
Calls the decoder part of the model.
The AngleDihedralCartesianEncoderMap class will, unlike the other two classes, output a list of np.ndarray.
- Parameters:
data (np.ndarray) – The data to be passed to the decoder part of the model. Make sure that the shape of the data matches the number of neurons in the latent space.
- Returns:
- Outputs from the decoder part.
For AngleDihedralCartesianEncoderMap, this will be a list of np.ndarray.
- Return type:
Union[list[np.ndarray], np.ndarray]
- encode(data=None)[source]#
Runs the central_angles, central_dihedrals, (and, if used during training, side_dihedrals) through the encoder part of the model. Make sure that data has the correct shape.
- Parameters:
data (Sequence[np.ndarray]) – Provide a sequence of central_angles and central_dihedrals; if you used side_dihedrals during training, append these to the end of the sequence.
- Returns:
The latent space representation of the provided data.
- Return type:
np.ndarray
- classmethod from_checkpoint(trajs, checkpoint_path, dataset=None, use_previous_model=False, compat=False)[source]#
Reconstructs the model from a checkpoint.
Although the model can be loaded from disk without any form of data and still yield the correct input and output shapes, it is required to provide either trajs or dataset to double-check that the correct model will be reloaded.
This is also why the sparse argument is not needed here, as sparsity of the input data is a property of the provided TrajEnsemble.
- Parameters:
trajs (Union[None, TrajEnsemble]) – Either None (in which case, the argument dataset is required), or an instance of TrajEnsemble, which was used to instantiate the AngleDihedralCartesianEncoderMap, before it was saved to disk.
checkpoint_path (Union[Path, str]) – The path to the checkpoint. Can either be the path to a .keras file or to a directory containing .keras files, in which case the most recently created .keras file will be used.
dataset (Optional[tf.data.Dataset]) – If trajs is not provided, a dataset is required to make sure the input shapes match the model, that is stored on the disk.
use_previous_model (bool) – Set this flag to True, if you load a model from an in-between checkpoint step (e.g., to continue training with different parameters). If you have the files saved_model_0.keras, saved_model_500.keras and saved_model_1000.keras, setting this to True and loading the saved_model_500.keras will back up the saved_model_1000.keras.
compat (bool) – Whether to use compatibility mode when missing or wrong parameter files are present. In this special case, some assumptions about the network architecture are made from the model, and the parameters in parameters.json are overwritten accordingly (a backup will also be made).
- Returns:
An instance of AngleDihedralCartesianEncoderMap.
- Return type:
AngleDihedralCartesianEncoderMapType
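A reload sketch; the checkpoint path is hypothetical, and trajs should be the same ensemble that was used before the model was saved:
>>> import encodermap as em
>>> trajs = em.load(xtcs, tops)  # same files as in the class example above  # doctest: +SKIP
>>> e_map = em.AngleDihedralCartesianEncoderMap.from_checkpoint(
...     trajs, "runs/run0/saved_model_1000.keras")  # doctest: +SKIP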
- generate(points: ndarray, top: str | int | Topology | None, backend: Literal['mdtraj'], progbar: Any | None) Trajectory [source]#
- generate(points: ndarray, top: str | int | Topology | None, backend: Literal['mdanalysis'], progbar: Any | None) Universe
Overrides the parent class’ generate method and builds a trajectory.
Instead of just providing data to decode using the decoder part of the network, this method also takes a molecular topology as its top argument. This topology is then used to rebuild a time-resolved trajectory.
- Parameters:
points (np.ndarray) – The low-dimensional points from which the trajectory should be rebuilt.
top (Union[str, int, mdtraj.Topology, None]) – The topology to be used for rebuilding the trajectory. This should be a string pointing towards a <*.pdb, *.gro, *.h5> file. Alternatively, None can be provided, in which case the internal topology (self.top) of this class is used. Defaults to None.
backend (str) –
Defines which MD python package will be used to build the trajectory, and thereby also what type this method returns. Needs to be one of the following:
- "mdtraj"
- "mdanalysis"
- Returns:
- The trajectory after
applying the decoded structural information. The type of this depends on the chosen backend parameter.
- Return type:
Union[mdtraj.Trajectory, MDAnalysis.Universe]
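A generation sketch; the random latent points are for illustration, top=None uses the internal topology as described above, and mdtraj's Trajectory.save_pdb writes the result:
>>> import numpy as np
>>> points = np.random.random((100, 2)).astype("float32")  # hypothetical latent points
>>> traj = e_map.generate(points, top=None, backend="mdtraj")  # doctest: +SKIP
>>> traj.save_pdb("generated.pdb")  # doctest: +SKIP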
- static get_train_data_from_trajs(trajs, p, attr='CVs', max_size=-1)[source]#
Builds train data from a TrajEnsemble.
- Parameters:
trajs (TrajEnsemble) – A TrajEnsemble instance.
p (encodermap.parameters.ADCParameters) – An instance of encodermap.parameters.ADCParameters.
attr (str) – Which attribute to get from TrajEnsemble. This defaults to ‘CVs’, because ‘CVs’ is usually a dict containing the CV data. However, you can build the train data from any dict in the TrajEnsemble.
max_size (int) – Set this to the desired size if you only want a subset of the CV data. Defaults to -1 (use all data).
- Returns:
- A tuple containing the following:
- bool: A bool that shows whether some ‘CV’ values are np.nan (True),
which will be used to decide whether the sparse training will be used.
- list[np.ndarray]: An array of features fed into the autoencoder,
concatenated along the feature axis. The order of the features is: central_angles, central_dihedrals, (side_dihedrals if p.use_sidechain_dihedrals is True).
- dict[str, np.ndarray]: The training data as a dict. Containing
all values in trajs.CVs.
- Return type:
tuple[bool, list[np.ndarray], dict[str, np.ndarray]]
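A usage sketch, with trajs and p as in the class example above:
>>> sparse, train_data, train_data_dict = (
...     em.AngleDihedralCartesianEncoderMap.get_train_data_from_trajs(trajs, p)
... )  # doctest: +SKIP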
- plot_network()[source]#
Tries to plot the network using pydot, pydotplus and graphviz. Doesn’t raise an exception if plotting is not possible.
Note
Refer to this guide to install these programs: https://stackoverflow.com/questions/47605558/importerror-failed-to-import-pydot-you-must-install-pydot-and-graphviz-for-py
- Return type:
None
- save(step=None)[source]#
Saves the model to the current path defined in parameters.main_path.
- Parameters:
step (Optional[int]) – This argument does not save the model at the given training step; it changes the filename suffix used for saving from a datetime-based string to the given step (e.g., saved_model_500.keras).
- Returns:
- When the model has been saved, the Path will
be returned. If the model could not be saved, None will be returned.
- Return type:
Union[None, Path]
- set_train_data(data)[source]#
Resets the train data for reloaded models.
- Parameters:
data (TrajEnsemble)
- Return type:
None
- train_for_references(subsample=100, maxiter=500)[source]#
Calculates the angle, dihedral, and cartesian costs with respect to so-called references, which can be used to bring these costs to a similar magnitude.
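A sketch of calling this before the main training; the subsample and maxiter values are illustrative:
>>> e_map.train_for_references(subsample=10, maxiter=100)  # doctest: +SKIP
>>> history = e_map.train()  # doctest: +SKIP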
- property decoder: Model#
The decoder Model.
- Type:
tf.keras.Model
- property encoder: Model#
The encoder Model.
- Type:
tf.keras.Model