{
"cells": [
{
"cell_type": "markdown",
"id": "c2be1906-f144-410e-b65e-35538dded4f7",
"metadata": {},
"source": [
"# Learning Rate Schedulers\n",
"\n",
"**Welcome**\n",
"\n",
"Welcome to the Learning Rate Schedulers tutorial. Learning rate schedulers can help us dynamically adjust the learning rate of the Adam optimization algorithm. That way, we can decrease the learning rate as we approach the minima of the cost function.\n",
"\n",
"Run this notebook on Google Colab:\n",
"\n",
"[](https://colab.research.google.com/github/AG-Peter/encodermap/blob/main/tutorials/notebooks_customization/04_learning_rate_schedulers.ipynb)\n",
"\n",
"Find the documentation of EncoderMap:\n",
"\n",
"https://ag-peter.github.io/encodermap\n",
"\n",
"**Goals:**\n",
"\n",
"In this tutorial you will learn:\n",
"\n",
"* [Why we can profit from learning rate schedulers](#why)\n",
"* [How to log the current learning rate to TensorBoard](#log_to_tb)\n",
"* [How to implement a learning rate scheduler with an exponentially decaying learning rate](#lr_implementation)"
]
},
{
"cell_type": "markdown",
"id": "65568a51-1a34-4727-938b-955e135f94ce",
"metadata": {},
"source": [
"### For Google colab only:\n",
"\n",
"If you're on Google colab, please uncomment these lines and install EncoderMap."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "408f06e7",
"metadata": {
"execution": {
"iopub.execute_input": "2024-12-29T12:56:15.540794Z",
"iopub.status.busy": "2024-12-29T12:56:15.540426Z",
"iopub.status.idle": "2024-12-29T12:56:15.543339Z",
"shell.execute_reply": "2024-12-29T12:56:15.542642Z"
}
},
"outputs": [],
"source": [
"# !wget https://gist.githubusercontent.com/kevinsawade/deda578a3c6f26640ae905a3557e4ed1/raw/b7403a37710cb881839186da96d4d117e50abf36/install_encodermap_google_colab.sh\n",
"# !sudo bash install_encodermap_google_colab.sh"
]
},
{
"cell_type": "markdown",
"id": "016b78d1",
"metadata": {},
"source": [
"If you're on Google Colab, you also want to download the data we will use:"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "16654191",
"metadata": {
"execution": {
"iopub.execute_input": "2024-12-29T12:56:15.545091Z",
"iopub.status.busy": "2024-12-29T12:56:15.544991Z",
"iopub.status.idle": "2024-12-29T12:56:15.546994Z",
"shell.execute_reply": "2024-12-29T12:56:15.546688Z"
}
},
"outputs": [],
"source": [
"# !wget https://raw.githubusercontent.com/AG-Peter/encodermap/main/tutorials/notebooks_starter/asp7.csv"
]
},
{
"cell_type": "markdown",
"id": "43decefb",
"metadata": {},
"source": [
"## Import Libraries\n",
"\n",
"Before we can start exploring the learning rate scheduler, we need to import some libraries."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "c6be3e13",
"metadata": {
"execution": {
"iopub.execute_input": "2024-12-29T12:56:15.548627Z",
"iopub.status.busy": "2024-12-29T12:56:15.548495Z",
"iopub.status.idle": "2024-12-29T12:56:19.262440Z",
"shell.execute_reply": "2024-12-29T12:56:19.261663Z"
}
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/home/kevin/git/encoder_map_private/encodermap/__init__.py:194: GPUsAreDisabledWarning: EncoderMap disables the GPU per default because most tensorflow code runs with a higher compatibility when the GPU is disabled. If you want to enable GPUs manually, set the environment variable 'ENCODERMAP_ENABLE_GPU' to 'True' before importing EncoderMap. To do this in python you can run:\n",
"\n",
"import os; os.environ['ENCODERMAP_ENABLE_GPU'] = 'True'\n",
"\n",
"before importing encodermap.\n",
" _warnings.warn(\n"
]
}
],
"source": [
"import os\n",
"import numpy as np\n",
"import encodermap as em\n",
"import tensorflow as tf\n",
"import pandas as pd\n",
"from pathlib import Path\n",
"%load_ext autoreload\n",
"%autoreload 2"
]
},
{
"cell_type": "markdown",
"id": "05def998",
"metadata": {},
"source": [
"We wil work in the directory `runs/lr_scheduler`. We will create it now."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "234a1751-d9f5-4e8c-8a7f-cef281c2117c",
"metadata": {
"execution": {
"iopub.execute_input": "2024-12-29T12:56:19.264779Z",
"iopub.status.busy": "2024-12-29T12:56:19.264426Z",
"iopub.status.idle": "2024-12-29T12:56:19.286073Z",
"shell.execute_reply": "2024-12-29T12:56:19.285209Z"
}
},
"outputs": [],
"source": [
"(Path.cwd() / \"runs/lr_scheduler\").mkdir(parents=True, exist_ok=True)"
]
},
{
"cell_type": "markdown",
"id": "674b6f11-95d6-4634-ba56-be87a5fb6677",
"metadata": {},
"source": [
"\n",
"\n",
"## Why learning rate schedulers? A linear regression example"
]
},
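{
"cell_type": "markdown",
"id": "f0e1d2c3-1111-4aaa-8bbb-whyleadin001",
"metadata": {},
"source": [
"To see why a decaying learning rate helps, here is a minimal sketch (plain NumPy on a toy dataset, independent of EncoderMap): we fit a straight line with stochastic gradient descent and compare a constant learning rate to an exponentially decaying one. With the decaying schedule, the fitted parameters typically end up much closer to the true values instead of bouncing around the minimum."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f0e1d2c3-2222-4aaa-8bbb-whysketch002",
"metadata": {},
"outputs": [],
"source": [
"# Minimal sketch: linear regression y = 3*x + 1 + noise, fitted with\n",
"# stochastic gradient descent. We compare a constant learning rate with\n",
"# an exponentially decaying one.\n",
"import numpy as np\n",
"\n",
"rng = np.random.default_rng(42)\n",
"x = rng.uniform(-1, 1, size=200)\n",
"y = 3.0 * x + 1.0 + rng.normal(scale=0.3, size=200)  # true w=3, b=1\n",
"\n",
"def fit(lr_fn, n_steps=500, batch_size=8):\n",
"    w, b = 0.0, 0.0\n",
"    for step in range(n_steps):\n",
"        idx = rng.integers(0, len(x), size=batch_size)\n",
"        residual = w * x[idx] + b - y[idx]\n",
"        # gradients of the mean squared error on the minibatch\n",
"        w -= lr_fn(step) * 2 * np.mean(residual * x[idx])\n",
"        b -= lr_fn(step) * 2 * np.mean(residual)\n",
"    return w, b\n",
"\n",
"for name, lr_fn in [\n",
"    (\"constant lr=0.9\", lambda step: 0.9),\n",
"    (\"decaying lr    \", lambda step: 0.9 * np.exp(-0.01 * step)),\n",
"]:\n",
"    w, b = fit(lr_fn)\n",
"    print(f\"{name}: w={w:.3f}, b={b:.3f}, error={abs(w - 3) + abs(b - 1):.3f}\")"
]
},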
{
"cell_type": "code",
"execution_count": null,
"id": "809d7c7e-f9ed-4b79-beda-3293c96263c9",
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "markdown",
"id": "c1063547",
"metadata": {},
"source": [
"\n",
"\n",
"## Log the current learning rate to Tensorboard\n",
"\n",
"Before we implement some dynamic learning rates we want to find a way to log the learning rate to tensorboard.\n",
"\n",
"### Running tensorboard on Google colab\n",
"\n",
"To use tensorboard in google colabs notebooks, you neet to first load the tensorboard extension\n",
"\n",
"```python\n",
"%load_ext tensorboard\n",
"```\n",
"\n",
"And then activate it with:\n",
"\n",
"```python\n",
"%tensorboard --logdir .\n",
"```\n",
"\n",
"The next code cell contains these commands. Uncomment them and then continue.\n",
"\n",
"### Running tensorboard locally\n",
"\n",
"TensorBoard is a visualization tool from the machine learning library TensorFlow which is used by the EncoderMap package. During the dimensionality reduction step, when the neural network autoencoder is trained, several readings are saved in a TensorBoard format. All output files are saved to the path defined in `parameters.main_path`. Navigate to this location in a shell and start TensorBoard. Change the paramter Tensorboard to `True` to make Encodermap log to Tensorboard.\n",
"\n",
"In case you run this tutorial in the provided Docker container you can open a new console inside the container by typing the following command in a new system shell.\n",
"```shell\n",
"docker exec -it emap bash\n",
"```\n",
"Navigate to the location where all the runs are saved. e.g.:\n",
"```shell\n",
"cd notebooks_easy/runs/asp7/\n",
"```\n",
"Start TensorBoard in this directory with:\n",
"```shell\n",
"tensorboard --logdir .\n",
"```\n",
"\n",
"You should now be able to open TensorBoard in your webbrowser on port 6006. \n",
"`0.0.0.0:6006` or `127.0.0.1:6006`\n",
"\n",
"In the SCALARS tab of TensorBoard you should see among other values the overall cost and different contributions to the cost. The two most important contributions are `auto_cost` and `distance_cost`. `auto_cost` indicates differences between the inputs and outputs of the autoencoder. `distance_cost` is the part of the cost function which compares pairwise distances in the input space and the low-dimensional (latent) space.\n",
"\n",
"**Fixing Reloading issues**\n",
"Using Tensorboard we often encountered some issues while training multiple models and writing mutliple runs to Tensorboard's logdir. Reloading the data and event refreshing the web page did not display the data of the current run. We needed to kill tensorboard and restart it in order to see the new data. This issue was fixed by setting `reload_multifile` `True`.\n",
"\n",
"```bash\n",
"tensorboard --logdir . --reload_multifile True\n",
"```"
]
},
{
"cell_type": "markdown",
"id": "dbd7f3c8-2502-4c18-a40e-78cbf138909d",
"metadata": {},
"source": [
"**When you're on Goole Colab, you can load the Tensorboard extension with:**"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "7b5bcabb",
"metadata": {
"execution": {
"iopub.execute_input": "2024-12-29T12:56:19.288031Z",
"iopub.status.busy": "2024-12-29T12:56:19.287899Z",
"iopub.status.idle": "2024-12-29T12:56:19.307782Z",
"shell.execute_reply": "2024-12-29T12:56:19.307194Z"
}
},
"outputs": [],
"source": [
"# %load_ext tensorboard\n",
"# %tensorboard --logdir ."
]
},
{
"cell_type": "markdown",
"id": "abe6f9ac-3f9f-4a63-b3da-eeb95ae97602",
"metadata": {},
"source": [
"### Sublcassing EncoderMap's `EncoderMapBaseCallback`\n",
"\n",
"The easiest way to implement and log a new variable to TensorBorard is by subclassing EncoderMap's `EncodeMapBaseCallback` from the `callbacks` submodule."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "e188df5c-47a3-4bca-86d8-cb8911d31fea",
"metadata": {
"execution": {
"iopub.execute_input": "2024-12-29T12:56:19.309556Z",
"iopub.status.busy": "2024-12-29T12:56:19.309428Z",
"iopub.status.idle": "2024-12-29T12:56:19.346237Z",
"shell.execute_reply": "2024-12-29T12:56:19.345709Z"
}
},
"outputs": [],
"source": [
"?em.callbacks.EncoderMapBaseCallback"
]
},
{
"cell_type": "markdown",
"id": "563c9b43",
"metadata": {},
"source": [
"As per the docstring of the `EncoderMapBaseCallback` class, we create the `LearningRateLogger` class and implement a piece of code in the `on_summary_step` method."
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "3c34f6e7",
"metadata": {
"execution": {
"iopub.execute_input": "2024-12-29T12:56:19.348257Z",
"iopub.status.busy": "2024-12-29T12:56:19.348126Z",
"iopub.status.idle": "2024-12-29T12:56:19.368466Z",
"shell.execute_reply": "2024-12-29T12:56:19.367959Z"
}
},
"outputs": [],
"source": [
"class LearningRateLogger(em.callbacks.EncoderMapBaseCallback):\n",
" def on_summary_step(self, step, logs=None):\n",
" with tf.name_scope(\"Learning Rate\"):\n",
" tf.summary.scalar('current learning rate', self.model.optimizer.lr, step=step)"
]
},
{
"cell_type": "markdown",
"id": "9f53d462",
"metadata": {},
"source": [
"We can now create an `EncoderMap` class and add our new callback with the `add_callback` method."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "9bf99bcc",
"metadata": {
"execution": {
"iopub.execute_input": "2024-12-29T12:56:19.370301Z",
"iopub.status.busy": "2024-12-29T12:56:19.370189Z",
"iopub.status.idle": "2024-12-29T12:56:19.589619Z",
"shell.execute_reply": "2024-12-29T12:56:19.588583Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Output files are saved to runs/lr_scheduler/run0 as defined in 'main_path' in the parameters.\n",
"Saved a text-summary of the model and an image in runs/lr_scheduler/run0, as specified in 'main_path' in the parameters.\n"
]
}
],
"source": [
"df = pd.read_csv('asp7.csv')\n",
"dihedrals = df.iloc[:,:-1].values.astype(np.float32)\n",
"cluster_ids = df.iloc[:,-1].values\n",
"\n",
"parameters = em.Parameters(\n",
"tensorboard=True,\n",
"periodicity=2*np.pi,\n",
"main_path=em.misc.run_path('runs/lr_scheduler'),\n",
"n_steps=100,\n",
"summary_step=5\n",
")\n",
"\n",
"# create an instance of EncoderMap\n",
"e_map = em.EncoderMap(parameters, dihedrals)\n",
"\n",
"# Add an instance of the new Callback\n",
"e_map.add_callback(LearningRateLogger)"
]
},
{
"cell_type": "markdown",
"id": "e214e7f1",
"metadata": {},
"source": [
"We train the Model."
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "96569684",
"metadata": {
"execution": {
"iopub.execute_input": "2024-12-29T12:56:19.591820Z",
"iopub.status.busy": "2024-12-29T12:56:19.591621Z",
"iopub.status.idle": "2024-12-29T12:56:23.469511Z",
"shell.execute_reply": "2024-12-29T12:56:23.469112Z"
}
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"\r",
"100%|█████████████████████████| 100/100 [00:03<00:00, 26.30it/s, Loss after step 100=31.4]"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Saving the model to runs/lr_scheduler/run0/saved_model_2024-12-29T13:56:23+01:00.keras. Use `em.EncoderMap.from_checkpoint('runs/lr_scheduler/run0')` to load the most recent model, or `em.EncoderMap.from_checkpoint('runs/lr_scheduler/run0/saved_model_2024-12-29T13:56:23+01:00.keras')` to load the model with specific weights..\n",
"This model has a subclassed encoder, which can be loaded independently. Use `tf.keras.load_model('runs/lr_scheduler/run0/saved_model_2024-12-29T13:56:23+01:00_encoder.keras')` to load only this model.\n",
"This model has a subclassed decoder, which can be loaded independently. Use `tf.keras.load_model('runs/lr_scheduler/run0/saved_model_2024-12-29T13:56:23+01:00_decoder.keras')` to load only this model.\n"
]
}
],
"source": [
"history = e_map.train()"
]
},
{
"cell_type": "markdown",
"id": "c5688138",
"metadata": {},
"source": [
"And now, we can see our current leanring rate in TensorBoard\n",
"\n",
"\n",
"\n",
"A constant learning rate of 0.001"
]
},
{
"cell_type": "markdown",
"id": "598068c0",
"metadata": {},
"source": [
"\n",
"\n",
"## Write a learning rate scheduler\n",
"\n",
"We can write a learning rate scheduler either by providing intervals of training steps and the associated learning rate:\n",
"\n",
"```python\n",
"def lr_schedule(step):\n",
" \"\"\"\n",
" Returns a custom learning rate that decreases as steps progress.\n",
" \"\"\"\n",
" learning_rate = 0.2\n",
" if step > 10:\n",
" learning_rate = 0.02\n",
" if step > 20:\n",
" learning_rate = 0.01\n",
" if step > 50:\n",
" learning_rate = 0.005\n",
"```\n",
"\n",
"Or by using a function that gives us a learning rate:\n",
"\n",
"```python\n",
"def scheduler(step, lr=1, n_steps=1000):\n",
" \"\"\"\n",
" Returns a custom learning rate that decreases based on an exp function as steps progress.\n",
" \"\"\"\n",
" if step < 10:\n",
" return lr\n",
" else:\n",
" return lr * tf.math.exp(-step / n_steps)\n",
"```\n",
"\n",
"Below, is an example combining both:"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "1a37f62f",
"metadata": {
"execution": {
"iopub.execute_input": "2024-12-29T12:56:23.471558Z",
"iopub.status.busy": "2024-12-29T12:56:23.471416Z",
"iopub.status.idle": "2024-12-29T12:56:23.494937Z",
"shell.execute_reply": "2024-12-29T12:56:23.494439Z"
}
},
"outputs": [],
"source": [
"def scheduler(step, lr=1):\n",
" \"\"\"\n",
" Returns a custom learning rate that decreases based on an exp function as steps progress.\n",
" \"\"\"\n",
" if step < 10:\n",
" return lr\n",
" else:\n",
" return lr * tf.math.exp(-0.1)"
]
},
{
"cell_type": "markdown",
"id": "a848aa1d",
"metadata": {},
"source": [
"This scheduler function can simply be provided to the builtin `keras.callbacks.LearningRateScheduler` callback."
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "332faaec",
"metadata": {
"execution": {
"iopub.execute_input": "2024-12-29T12:56:23.496615Z",
"iopub.status.busy": "2024-12-29T12:56:23.496491Z",
"iopub.status.idle": "2024-12-29T12:56:23.517579Z",
"shell.execute_reply": "2024-12-29T12:56:23.516848Z"
}
},
"outputs": [],
"source": [
"callback = tf.keras.callbacks.LearningRateScheduler(scheduler)"
]
},
{
"cell_type": "markdown",
"id": "e5f22dc5",
"metadata": {},
"source": [
"And appended to the list of `callbacks` in the EncoderMap class."
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "9087259e",
"metadata": {
"execution": {
"iopub.execute_input": "2024-12-29T12:56:23.519613Z",
"iopub.status.busy": "2024-12-29T12:56:23.519498Z",
"iopub.status.idle": "2024-12-29T12:56:23.688602Z",
"shell.execute_reply": "2024-12-29T12:56:23.688124Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Output files are saved to runs/lr_scheduler/run1 as defined in 'main_path' in the parameters.\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Saved a text-summary of the model and an image in runs/lr_scheduler/run1, as specified in 'main_path' in the parameters.\n"
]
}
],
"source": [
"parameters = em.Parameters(\n",
"tensorboard=True,\n",
"periodicity=2*np.pi,\n",
"main_path=em.misc.run_path('runs/lr_scheduler'),\n",
"n_steps=50,\n",
"summary_step=1\n",
")\n",
"\n",
"e_map = em.EncoderMap(parameters, dihedrals)\n",
"e_map.add_callback(LearningRateLogger)\n",
"e_map.add_callback(callback)"
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "6b026be4",
"metadata": {
"execution": {
"iopub.execute_input": "2024-12-29T12:56:23.690845Z",
"iopub.status.busy": "2024-12-29T12:56:23.690702Z",
"iopub.status.idle": "2024-12-29T12:56:26.958840Z",
"shell.execute_reply": "2024-12-29T12:56:26.958236Z"
}
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"\r",
"100%|████████████████████████████| 50/50 [00:03<00:00, 15.64it/s, Loss after step 50=38.3]"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Saving the model to runs/lr_scheduler/run1/saved_model_2024-12-29T13:56:26+01:00.keras. Use `em.EncoderMap.from_checkpoint('runs/lr_scheduler/run1')` to load the most recent model, or `em.EncoderMap.from_checkpoint('runs/lr_scheduler/run1/saved_model_2024-12-29T13:56:26+01:00.keras')` to load the model with specific weights..\n",
"This model has a subclassed encoder, which can be loaded independently. Use `tf.keras.load_model('runs/lr_scheduler/run1/saved_model_2024-12-29T13:56:26+01:00_encoder.keras')` to load only this model.\n",
"This model has a subclassed decoder, which can be loaded independently. Use `tf.keras.load_model('runs/lr_scheduler/run1/saved_model_2024-12-29T13:56:26+01:00_decoder.keras')` to load only this model.\n"
]
}
],
"source": [
"history = e_map.train()"
]
},
{
"cell_type": "markdown",
"id": "7973533f",
"metadata": {},
"source": [
"Here's what Tensorboard should look like:\n",
"\n",
"
"
]
},
{
"cell_type": "markdown",
"id": "2372c499-3c60-420c-9e3f-c0f67a08e268",
"metadata": {},
"source": [
"And here's the learning rate plotted from the history."
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "669a5b1b-4ea2-4652-a039-0ce23381ef4a",
"metadata": {
"execution": {
"iopub.execute_input": "2024-12-29T12:56:26.960835Z",
"iopub.status.busy": "2024-12-29T12:56:26.960632Z",
"iopub.status.idle": "2024-12-29T12:56:27.345400Z",
"shell.execute_reply": "2024-12-29T12:56:27.344833Z"
}
},
"outputs": [
{
"data": {
"application/vnd.plotly.v1+json": {
"config": {
"plotlyServerURL": "https://plot.ly"
},
"data": [
{
"hovertemplate": "variable=0
index=%{x}
value=%{y}