Time-Series Analysis of Typhoon Meteorological Data with Python and TensorFlow Deep Learning

Time-Series Analysis of Meteorological Data

Data

The typhoon data come from the "CMA-STI Best Track Dataset for Tropical Cyclones" published on the China Typhoon Web. Each record gives a tropical cyclone's latitude and longitude, intensity, central pressure, and other fields at 6-hour intervals. Before classifying the typhoons, the data are preprocessed: all typhoon records are standardized and anomalous records are removed. A dynamic time warping (DTW) algorithm then computes the pairwise similarity of the typhoons in the dataset; the similarities are sorted in descending order and a threshold is applied to classify the typhoons, from which a subset is selected as the training set.
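The DTW similarity computation itself is not part of the notebook below. For reference, here is a minimal sketch of the classic dynamic-programming recurrence, assuming each typhoon track is an (N, 2) NumPy array of latitude/longitude points; the function name and the pointwise Euclidean cost are illustrative choices, not taken from the original code:

import numpy as np

def dtw_distance(a, b):
    # D[i, j] = cost of the best warping path aligning a[:i] with b[:j]
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])  # pointwise Euclidean distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

A smaller dtw_distance means two tracks are more similar, so sorting pairs by this value and thresholding yields the classes described above.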

Building the Neural Network

The neural network is built in the PyCharm editor, using the TensorFlow framework. Between the input layer and the output layer sit the hidden layers, whose neurons are called hidden units; most hidden units differ mainly in their activation function. The main activation functions are:

① The sigmoid function, f(x) = 1 / (1 + e^(-x)). Sigmoid is the most widely used class of activation functions; it has an exponential shape and is, in physical terms, the closest to a biological neuron. Its (0, 1) output can also be interpreted as a probability or used to normalize inputs. Sigmoid has its own drawbacks, the most obvious being saturation: its derivative tends to 0 on both sides, lim_{x→±∞} f'(x) = 0, a property called soft saturation. During the backward pass, the gradient a sigmoid unit propagates downward contains a factor of f'(x), so once the input falls into the saturation region, f'(x) approaches 0 and the gradient passed to the lower layers becomes very small. The network parameters then can hardly be trained effectively; this phenomenon is known as the vanishing gradient. In general, a sigmoid network exhibits vanishing gradients within about 5 layers. In addition, the outputs of sigmoid are all greater than 0, so the output is not zero-mean; this is called the offset phenomenon, and it means each layer receives the non-zero-mean output of the previous layer as its input.

② The tanh function, f(x) = (1 - e^(-2x)) / (1 + e^(-2x)). Tanh is also a very common activation function. Compared with sigmoid, its output mean is 0, so it converges faster than sigmoid and needs fewer iterations. However, tanh is also softly saturating and therefore likewise suffers from vanishing gradients.

③ The rectified linear unit, ReLU (Rectified Linear Unit), f(x) = max{0, x}. ReLU keeps the gradient from decaying when x > 0, which alleviates the vanishing-gradient problem and allows deep networks to be trained directly in a supervised fashion, without relying on unsupervised layer-wise pretraining. However, as training proceeds, some inputs fall into the hard saturation region (x < 0), so the corresponding weights can no longer be updated; this phenomenon is known as "dying neurons".
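The three activation functions can be written down directly; this small NumPy sketch simply restates the formulas above for comparison (the sample points in x are arbitrary):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))      # output in (0, 1), saturates on both sides

def tanh(x):
    return (1.0 - np.exp(-2.0 * x)) / (1.0 + np.exp(-2.0 * x))  # zero-mean output

def relu(x):
    return np.maximum(0.0, x)            # gradient 1 for x > 0, hard zero for x < 0

x = np.linspace(-5.0, 5.0, 5)
print(sigmoid(x), tanh(x), relu(x), sep='\n')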

Model Training

Training on the data may lead to overfitting. Its two main causes are too little data and an overly complex model, and several methods can mitigate it. ① Early stopping: after running the optimizer for some number of iterations, stop once the validation error on the validation set no longer improves. ② Loss functions divide into empirical-risk and structural-risk losses; a structural-risk loss is the empirical loss plus a regularization term expressing model complexity, with the regularizer usually chosen as L1 or L2. A structural-risk loss also effectively prevents overfitting. ③ Dropout prevents overfitting by modifying the number of active hidden-layer neurons, i.e., by modifying the deep network itself; it is widely used in fully connected networks. During training, the parameters are tuned continually and a suitable loss function is chosen; the model with the smallest loss on the validation set is selected as the final test model.
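As a reference, here is a minimal Keras sketch combining the three techniques on a toy fully connected model; the layer width, L2 factor, dropout rate, and patience value are illustrative assumptions, not settings from the original experiment:

from tensorflow import keras
from tensorflow.keras import layers, regularizers

model = keras.Sequential([
    layers.Dense(64, activation='relu', input_shape=(3,),
                 kernel_regularizer=regularizers.l2(1e-4)),  # (2) L2 structural-risk term
    layers.Dropout(0.5),                                     # (3) dropout
    layers.Dense(2),
])
model.compile(loss='mse', optimizer='adam', metrics=['mae'])
early_stop = keras.callbacks.EarlyStopping(                  # (1) early stopping
    monitor='val_loss', patience=10, restore_best_weights=True)
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=100, callbacks=[early_stop])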

Evaluation

Compare and evaluate four network architectures, RNN, CNN, LSTM, and GRU, in terms of prediction accuracy and model complexity.
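A sketch of how such a comparison could be set up, assuming the same (None, 3) → (None, 2) sequence task as the notebook below; the unit counts mirror the LSTM model used there, and a causal Conv1D stands in for the CNN (both choices are assumptions):

from tensorflow import keras
from tensorflow.keras import layers

candidates = {
    'RNN':  layers.SimpleRNN(2, return_sequences=True, input_shape=(None, 3)),
    'LSTM': layers.LSTM(2, return_sequences=True, input_shape=(None, 3)),
    'GRU':  layers.GRU(2, return_sequences=True, input_shape=(None, 3)),
    'CNN':  layers.Conv1D(2, kernel_size=3, padding='causal', input_shape=(None, 3)),
}
for name, layer in candidates.items():
    m = keras.Sequential([layer])
    m.compile(loss='mse', optimizer='adam', metrics=['mae', 'mse'])
    print(name, m.count_params())  # complexity: trainable parameter count
    # m.fit(train_ds, validation_data=val_ds, epochs=100)  # accuracy comparison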

Code

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

import io
import pandas as pd
from pathlib import Path
import numpy as np
tf.__version__
'2.1.0'




gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        # Currently, memory growth needs to be the same across GPUs
        for gpu in gpus:
            tf.config.experimental.set_memory_growth(gpu, True)
        logical_gpus = tf.config.experimental.list_logical_devices('GPU')
        print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
    except RuntimeError as e:
        # Memory growth must be set before GPUs have been initialized
        print(e)
8 Physical GPUs, 8 Logical GPUs



# !mv input_data.txt.txt input_data.txt
# !mv output_data.txt.txt output_data.txt
!cat output_data.txt | wc -l
90727



input_file = './input_data.txt'
output_file = './output_data.txt'
def rec_gen():
    """Yield one (inputs, targets) DataFrame pair per typhoon record."""
    x_lines = open(input_file).readlines()
    y_lines = open(output_file).readlines()
    # each typhoon record begins with a header line starting with '6666'
    tags = [line.startswith('6666') for line in x_lines]
    cumsums = np.cumsum(tags)
    max_cumsum = cumsums[-1]  # total number of records
    for i in range(max_cumsum):
        # line indexes belonging to the i-th record (header included)
        idxes = np.where(cumsums == (i + 1))[0]
        x_subline = x_lines[idxes[0]:idxes[-1] + 1]
        y_subline = y_lines[idxes[0]:idxes[-1] + 1]
        # get typhoon input data
        with io.StringIO(''.join(x_subline[1:])) as buf:
            df_x = pd.read_table(buf, sep=r'\s+', header=None,
                                 names=['x1', 'x2', 'x3'])
        # get typhoon output data
        with io.StringIO(''.join(y_subline[1:])) as buf:
            df_y = pd.read_table(buf, sep=r'\s+', header=None,
                                 names=['lat', 'lon'])
        # Assert no undefined data
        assert not df_x.isnull().values.any()
        assert not df_y.isnull().values.any()
        # Delete records of length zero
        if df_x.shape[0] == 0 or df_y.shape[0] == 0:
            continue
        x = df_x
        y = df_y / 10  # lat/lon are stored in units of 0.1 degree; convert to degrees
        yield x, y
xs = list()
ys = list()
for x, y in rec_gen():
    xs.append(x)
    ys.append(y)

xs = pd.concat(xs)
ys = pd.concat(ys)
recs = list(rec_gen())
len(recs)  # num of samples
2985




# NOTE: the hardcoded total 2987 does not match len(recs) == 2985 above,
# so the validation split below actually receives 596 samples rather than 598
train_size = int(2987 * 0.8)
test_size = 2987 - train_size
from sklearn import preprocessing
# standardize the targets (fit on all samples, so validation statistics leak in)
scalery = preprocessing.StandardScaler().fit(ys.values)
scalery.mean_
array([ 20.88949852, 134.39462959])




scalery.scale_
array([ 9.37136146, 16.44389784])




def rec_gen_2():
    """Same records as rec_gen(), but with standardized targets."""
    for idx, (x, y) in enumerate(rec_gen()):
        if x.shape[0] == 0:  # defensive check; rec_gen() already skips empty records
            print(idx)
        y = scalery.transform(y)
        yield x, y
dataset = tf.data.Dataset.from_generator(rec_gen_2, 
                                         (tf.float32, tf.float32),
                                         (tf.TensorShape([None, 3]), tf.TensorShape([None, 2])))
batch_size = 1
steps_per_epoch = train_size // batch_size

train_ds = dataset.take(train_size).cache()
val_ds = dataset.skip(train_size).take(test_size).cache()
train_ds = train_ds.shuffle(train_size).batch(batch_size)
val_ds = val_ds.batch(batch_size)
optimizer = keras.optimizers.Adam()
model = keras.Sequential([layers.LSTM(2, input_shape=(None, 3), return_sequences=True)])
model.compile(loss='mse', optimizer=optimizer, metrics=['mae', 'mse'])
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm (LSTM)                  (None, None, 2)           48        
=================================================================
Total params: 48
Trainable params: 48
Non-trainable params: 0
_________________________________________________________________



history = model.fit(train_ds, epochs=100, validation_data=val_ds)
Epoch 1/100
2389/2389 [==============================] - 52s 22ms/step - loss: 0.9816 - mae: 0.8068 - mse: 1.0419 - val_loss: 0.9228 - val_mae: 0.7661 - val_mse: 0.9829
Epoch 2/100
2389/2389 [==============================] - 19s 8ms/step - loss: 0.9440 - mae: 0.7870 - mse: 1.0004 - val_loss: 0.9097 - val_mae: 0.7584 - val_mse: 0.9705
Epoch 3/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.9224 - mae: 0.7738 - mse: 0.9785 - val_loss: 0.8956 - val_mae: 0.7513 - val_mse: 0.9570
Epoch 4/100
2389/2389 [==============================] - 19s 8ms/step - loss: 0.9022 - mae: 0.7656 - mse: 0.9545 - val_loss: 0.8712 - val_mae: 0.7445 - val_mse: 0.9302
Epoch 5/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.8603 - mae: 0.7496 - mse: 0.8970 - val_loss: 0.8382 - val_mae: 0.7290 - val_mse: 0.8866
Epoch 6/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.8208 - mae: 0.7311 - mse: 0.8407 - val_loss: 0.8309 - val_mae: 0.7158 - val_mse: 0.8672
Epoch 7/100
2389/2389 [==============================] - 19s 8ms/step - loss: 0.7968 - mae: 0.7174 - mse: 0.8075 - val_loss: 0.8111 - val_mae: 0.7141 - val_mse: 0.8514
Epoch 8/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7865 - mae: 0.7143 - mse: 0.7958 - val_loss: 0.8208 - val_mae: 0.7107 - val_mse: 0.8514
Epoch 9/100
2389/2389 [==============================] - 19s 8ms/step - loss: 0.7815 - mae: 0.7110 - mse: 0.7891 - val_loss: 0.8072 - val_mae: 0.7038 - val_mse: 0.8411
Epoch 10/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7774 - mae: 0.7080 - mse: 0.7831 - val_loss: 0.8026 - val_mae: 0.7024 - val_mse: 0.8375
Epoch 11/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7742 - mae: 0.7064 - mse: 0.7803 - val_loss: 0.7965 - val_mae: 0.6990 - val_mse: 0.8335
Epoch 12/100
2389/2389 [==============================] - 19s 8ms/step - loss: 0.7714 - mae: 0.7053 - mse: 0.7764 - val_loss: 0.7941 - val_mae: 0.6974 - val_mse: 0.8298
Epoch 13/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7700 - mae: 0.7040 - mse: 0.7747 - val_loss: 0.8067 - val_mae: 0.7010 - val_mse: 0.8350
Epoch 14/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7668 - mae: 0.7018 - mse: 0.7702 - val_loss: 0.8033 - val_mae: 0.7005 - val_mse: 0.8311
Epoch 15/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7659 - mae: 0.7017 - mse: 0.7691 - val_loss: 0.7955 - val_mae: 0.6970 - val_mse: 0.8260
Epoch 16/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7651 - mae: 0.7005 - mse: 0.7687 - val_loss: 0.7875 - val_mae: 0.6936 - val_mse: 0.8193
Epoch 17/100
2389/2389 [==============================] - 18s 7ms/step - loss: 0.7627 - mae: 0.6992 - mse: 0.7651 - val_loss: 0.7896 - val_mae: 0.6929 - val_mse: 0.8202
Epoch 18/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7620 - mae: 0.6993 - mse: 0.7656 - val_loss: 0.7790 - val_mae: 0.6903 - val_mse: 0.8193
Epoch 19/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7617 - mae: 0.6989 - mse: 0.7644 - val_loss: 0.7998 - val_mae: 0.6959 - val_mse: 0.8234
Epoch 20/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7601 - mae: 0.6980 - mse: 0.7612 - val_loss: 0.7840 - val_mae: 0.6902 - val_mse: 0.8137
Epoch 21/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7593 - mae: 0.6971 - mse: 0.7612 - val_loss: 0.7749 - val_mae: 0.6898 - val_mse: 0.8140
Epoch 22/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7583 - mae: 0.6963 - mse: 0.7592 - val_loss: 0.7862 - val_mae: 0.6895 - val_mse: 0.8142
Epoch 23/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7576 - mae: 0.6965 - mse: 0.7587 - val_loss: 0.7813 - val_mae: 0.6889 - val_mse: 0.8130
Epoch 24/100
2389/2389 [==============================] - 18s 7ms/step - loss: 0.7589 - mae: 0.6970 - mse: 0.7607 - val_loss: 0.7918 - val_mae: 0.6919 - val_mse: 0.8165
Epoch 25/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7562 - mae: 0.6960 - mse: 0.7574 - val_loss: 0.7814 - val_mae: 0.6883 - val_mse: 0.8123
Epoch 26/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7566 - mae: 0.6955 - mse: 0.7569 - val_loss: 0.7790 - val_mae: 0.6884 - val_mse: 0.8090
Epoch 27/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7555 - mae: 0.6948 - mse: 0.7558 - val_loss: 0.7780 - val_mae: 0.6877 - val_mse: 0.8081
Epoch 28/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7553 - mae: 0.6948 - mse: 0.7548 - val_loss: 0.7783 - val_mae: 0.6874 - val_mse: 0.8097
Epoch 29/100
2389/2389 [==============================] - 19s 8ms/step - loss: 0.7549 - mae: 0.6943 - mse: 0.7553 - val_loss: 0.7938 - val_mae: 0.6940 - val_mse: 0.8168
Epoch 30/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7557 - mae: 0.6932 - mse: 0.7552 - val_loss: 0.7729 - val_mae: 0.6885 - val_mse: 0.8085
Epoch 31/100
2389/2389 [==============================] - 19s 8ms/step - loss: 0.7546 - mae: 0.6947 - mse: 0.7558 - val_loss: 0.7780 - val_mae: 0.6869 - val_mse: 0.8047
Epoch 32/100
2389/2389 [==============================] - 19s 8ms/step - loss: 0.7529 - mae: 0.6928 - mse: 0.7523 - val_loss: 0.7717 - val_mae: 0.6865 - val_mse: 0.8056
Epoch 33/100
2389/2389 [==============================] - 19s 8ms/step - loss: 0.7536 - mae: 0.6929 - mse: 0.7537 - val_loss: 0.7761 - val_mae: 0.6901 - val_mse: 0.8080
Epoch 34/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7526 - mae: 0.6932 - mse: 0.7524 - val_loss: 0.7712 - val_mae: 0.6865 - val_mse: 0.8048
Epoch 35/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7521 - mae: 0.6926 - mse: 0.7522 - val_loss: 0.7807 - val_mae: 0.6879 - val_mse: 0.8058
Epoch 36/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7530 - mae: 0.6926 - mse: 0.7526 - val_loss: 0.7746 - val_mae: 0.6862 - val_mse: 0.8055
Epoch 37/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7522 - mae: 0.6930 - mse: 0.7527 - val_loss: 0.7773 - val_mae: 0.6872 - val_mse: 0.8054
Epoch 38/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7519 - mae: 0.6925 - mse: 0.7523 - val_loss: 0.7848 - val_mae: 0.6888 - val_mse: 0.8078
Epoch 39/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7519 - mae: 0.6930 - mse: 0.7515 - val_loss: 0.7803 - val_mae: 0.6872 - val_mse: 0.8042
Epoch 40/100
2389/2389 [==============================] - 19s 8ms/step - loss: 0.7521 - mae: 0.6932 - mse: 0.7521 - val_loss: 0.7777 - val_mae: 0.6864 - val_mse: 0.8041
Epoch 41/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7508 - mae: 0.6923 - mse: 0.7505 - val_loss: 0.7899 - val_mae: 0.6883 - val_mse: 0.8098
Epoch 42/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7513 - mae: 0.6921 - mse: 0.7515 - val_loss: 0.7711 - val_mae: 0.6856 - val_mse: 0.8010
Epoch 43/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7505 - mae: 0.6919 - mse: 0.7502 - val_loss: 0.7727 - val_mae: 0.6868 - val_mse: 0.8026
Epoch 44/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7506 - mae: 0.6926 - mse: 0.7510 - val_loss: 0.7733 - val_mae: 0.6852 - val_mse: 0.8015
Epoch 45/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7499 - mae: 0.6907 - mse: 0.7493 - val_loss: 0.7712 - val_mae: 0.6858 - val_mse: 0.8013
Epoch 46/100
2389/2389 [==============================] - 19s 8ms/step - loss: 0.7499 - mae: 0.6917 - mse: 0.7503 - val_loss: 0.7704 - val_mae: 0.6859 - val_mse: 0.8008
Epoch 47/100
2389/2389 [==============================] - 19s 8ms/step - loss: 0.7513 - mae: 0.6922 - mse: 0.7515 - val_loss: 0.7803 - val_mae: 0.6870 - val_mse: 0.8029
Epoch 48/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7498 - mae: 0.6915 - mse: 0.7498 - val_loss: 0.7712 - val_mae: 0.6860 - val_mse: 0.8008
Epoch 49/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7497 - mae: 0.6916 - mse: 0.7493 - val_loss: 0.7674 - val_mae: 0.6837 - val_mse: 0.8008
Epoch 50/100
2389/2389 [==============================] - 19s 8ms/step - loss: 0.7488 - mae: 0.6913 - mse: 0.7485 - val_loss: 0.7809 - val_mae: 0.6854 - val_mse: 0.8027
Epoch 51/100
2389/2389 [==============================] - 19s 8ms/step - loss: 0.7499 - mae: 0.6909 - mse: 0.7501 - val_loss: 0.7706 - val_mae: 0.6848 - val_mse: 0.8005
Epoch 52/100
2389/2389 [==============================] - 19s 8ms/step - loss: 0.7489 - mae: 0.6903 - mse: 0.7486 - val_loss: 0.7752 - val_mae: 0.6865 - val_mse: 0.8015
Epoch 53/100
2389/2389 [==============================] - 19s 8ms/step - loss: 0.7493 - mae: 0.6909 - mse: 0.7485 - val_loss: 0.7758 - val_mae: 0.6858 - val_mse: 0.8016
Epoch 54/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7486 - mae: 0.6907 - mse: 0.7481 - val_loss: 0.7709 - val_mae: 0.6842 - val_mse: 0.7998
Epoch 55/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7494 - mae: 0.6911 - mse: 0.7492 - val_loss: 0.7754 - val_mae: 0.6864 - val_mse: 0.8021
Epoch 56/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7499 - mae: 0.6907 - mse: 0.7487 - val_loss: 0.7774 - val_mae: 0.6864 - val_mse: 0.8028
Epoch 57/100
2389/2389 [==============================] - 18s 7ms/step - loss: 0.7492 - mae: 0.6903 - mse: 0.7482 - val_loss: 0.7785 - val_mae: 0.6860 - val_mse: 0.8022
Epoch 58/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7466 - mae: 0.6904 - mse: 0.7468 - val_loss: 0.7758 - val_mae: 0.6856 - val_mse: 0.7999
Epoch 59/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7481 - mae: 0.6908 - mse: 0.7474 - val_loss: 0.7773 - val_mae: 0.6850 - val_mse: 0.8017
Epoch 60/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7471 - mae: 0.6896 - mse: 0.7467 - val_loss: 0.7737 - val_mae: 0.6848 - val_mse: 0.7975
Epoch 61/100
2389/2389 [==============================] - 19s 8ms/step - loss: 0.7477 - mae: 0.6903 - mse: 0.7474 - val_loss: 0.7685 - val_mae: 0.6826 - val_mse: 0.7976
Epoch 62/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7475 - mae: 0.6895 - mse: 0.7469 - val_loss: 0.7664 - val_mae: 0.6832 - val_mse: 0.7979
Epoch 63/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7475 - mae: 0.6899 - mse: 0.7464 - val_loss: 0.7771 - val_mae: 0.6858 - val_mse: 0.7998
Epoch 64/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7472 - mae: 0.6894 - mse: 0.7466 - val_loss: 0.7719 - val_mae: 0.6863 - val_mse: 0.8014
Epoch 65/100
2389/2389 [==============================] - 19s 8ms/step - loss: 0.7460 - mae: 0.6898 - mse: 0.7449 - val_loss: 0.7668 - val_mae: 0.6823 - val_mse: 0.7996
Epoch 66/100
2389/2389 [==============================] - 19s 8ms/step - loss: 0.7474 - mae: 0.6893 - mse: 0.7468 - val_loss: 0.7740 - val_mae: 0.6862 - val_mse: 0.8003
Epoch 67/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7471 - mae: 0.6896 - mse: 0.7461 - val_loss: 0.7691 - val_mae: 0.6834 - val_mse: 0.7961
Epoch 68/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7467 - mae: 0.6894 - mse: 0.7465 - val_loss: 0.7692 - val_mae: 0.6851 - val_mse: 0.7991
Epoch 69/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7469 - mae: 0.6897 - mse: 0.7464 - val_loss: 0.7777 - val_mae: 0.6867 - val_mse: 0.8005
Epoch 70/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7469 - mae: 0.6902 - mse: 0.7466 - val_loss: 0.7761 - val_mae: 0.6838 - val_mse: 0.7988
Epoch 71/100
2389/2389 [==============================] - 19s 8ms/step - loss: 0.7468 - mae: 0.6886 - mse: 0.7462 - val_loss: 0.7714 - val_mae: 0.6844 - val_mse: 0.7962
Epoch 72/100
2389/2389 [==============================] - 19s 8ms/step - loss: 0.7468 - mae: 0.6895 - mse: 0.7466 - val_loss: 0.7688 - val_mae: 0.6829 - val_mse: 0.7956
Epoch 73/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7463 - mae: 0.6886 - mse: 0.7450 - val_loss: 0.7744 - val_mae: 0.6845 - val_mse: 0.7974
Epoch 74/100
2389/2389 [==============================] - 18s 7ms/step - loss: 0.7459 - mae: 0.6896 - mse: 0.7462 - val_loss: 0.7615 - val_mae: 0.6830 - val_mse: 0.7979
Epoch 75/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7464 - mae: 0.6892 - mse: 0.7463 - val_loss: 0.7728 - val_mae: 0.6850 - val_mse: 0.7970
Epoch 76/100
2389/2389 [==============================] - 18s 7ms/step - loss: 0.7464 - mae: 0.6892 - mse: 0.7456 - val_loss: 0.7659 - val_mae: 0.6820 - val_mse: 0.7948
Epoch 77/100
2389/2389 [==============================] - 18s 7ms/step - loss: 0.7461 - mae: 0.6893 - mse: 0.7460 - val_loss: 0.7720 - val_mae: 0.6826 - val_mse: 0.7960
Epoch 78/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7461 - mae: 0.6881 - mse: 0.7444 - val_loss: 0.7676 - val_mae: 0.6819 - val_mse: 0.7963
Epoch 79/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7457 - mae: 0.6892 - mse: 0.7461 - val_loss: 0.7700 - val_mae: 0.6837 - val_mse: 0.7961
Epoch 80/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7461 - mae: 0.6888 - mse: 0.7445 - val_loss: 0.7629 - val_mae: 0.6812 - val_mse: 0.7963
Epoch 81/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7457 - mae: 0.6886 - mse: 0.7453 - val_loss: 0.7797 - val_mae: 0.6866 - val_mse: 0.8010
Epoch 82/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7473 - mae: 0.6897 - mse: 0.7475 - val_loss: 0.7844 - val_mae: 0.6877 - val_mse: 0.8038
Epoch 83/100
2389/2389 [==============================] - 18s 7ms/step - loss: 0.7452 - mae: 0.6886 - mse: 0.7450 - val_loss: 0.7882 - val_mae: 0.6888 - val_mse: 0.8061
Epoch 84/100
2389/2389 [==============================] - 18s 7ms/step - loss: 0.7461 - mae: 0.6887 - mse: 0.7450 - val_loss: 0.7841 - val_mae: 0.6874 - val_mse: 0.8039
Epoch 85/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7453 - mae: 0.6892 - mse: 0.7455 - val_loss: 0.7761 - val_mae: 0.6839 - val_mse: 0.7979
Epoch 86/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7450 - mae: 0.6884 - mse: 0.7445 - val_loss: 0.7671 - val_mae: 0.6818 - val_mse: 0.7945
Epoch 87/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7479 - mae: 0.6897 - mse: 0.7476 - val_loss: 0.7671 - val_mae: 0.6818 - val_mse: 0.7955
Epoch 88/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7449 - mae: 0.6881 - mse: 0.7440 - val_loss: 0.7701 - val_mae: 0.6837 - val_mse: 0.7991
Epoch 89/100
2389/2389 [==============================] - 19s 8ms/step - loss: 0.7451 - mae: 0.6894 - mse: 0.7453 - val_loss: 0.7782 - val_mae: 0.6855 - val_mse: 0.8008
Epoch 90/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7443 - mae: 0.6878 - mse: 0.7439 - val_loss: 0.7687 - val_mae: 0.6828 - val_mse: 0.7960
Epoch 91/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7446 - mae: 0.6889 - mse: 0.7450 - val_loss: 0.7765 - val_mae: 0.6858 - val_mse: 0.7998
Epoch 92/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7446 - mae: 0.6883 - mse: 0.7439 - val_loss: 0.7693 - val_mae: 0.6826 - val_mse: 0.7959
Epoch 93/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7527 - mae: 0.6893 - mse: 0.7481 - val_loss: 0.7745 - val_mae: 0.6850 - val_mse: 0.7987
Epoch 94/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7436 - mae: 0.6880 - mse: 0.7423 - val_loss: 0.7739 - val_mae: 0.6848 - val_mse: 0.7985
Epoch 95/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7437 - mae: 0.6868 - mse: 0.7415 - val_loss: 0.7675 - val_mae: 0.6830 - val_mse: 0.7973
Epoch 96/100
2389/2389 [==============================] - 18s 7ms/step - loss: 0.7446 - mae: 0.6879 - mse: 0.7432 - val_loss: 0.7702 - val_mae: 0.6827 - val_mse: 0.7976
Epoch 97/100
2389/2389 [==============================] - 18s 7ms/step - loss: 0.7436 - mae: 0.6870 - mse: 0.7422 - val_loss: 0.7782 - val_mae: 0.6876 - val_mse: 0.8019
Epoch 98/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7450 - mae: 0.6889 - mse: 0.7450 - val_loss: 0.7713 - val_mae: 0.6842 - val_mse: 0.7995
Epoch 99/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7443 - mae: 0.6880 - mse: 0.7436 - val_loss: 0.7782 - val_mae: 0.6851 - val_mse: 0.8004
Epoch 100/100
2389/2389 [==============================] - 18s 8ms/step - loss: 0.7441 - mae: 0.6877 - mse: 0.7436 - val_loss: 0.7747 - val_mae: 0.6845 - val_mse: 0.7982



import tensorflow_docs as tfdocs
import tensorflow_docs.modeling
import tensorflow_docs.plots
# plot the training/validation MSE curves across the 100 epochs
plotter = tfdocs.plots.HistoryPlotter(metric='mse', smoothing_std=0)
histories = dict()
histories['epoch100'] = history
plotter.plot(histories)