Compression (Pruning)
!apt-get -qq install -y graphviz && pip install pydot
!pip install -U matplotlib
!pip install git+https://github.com/fastmachinelearning/hls4ml.git@main#egg=hls4ml[profiling]
!pip install qkeras==0.9.0
COLAB = True
!pip install wget
import wget
import os.path

# Download the helper scripts if they are not already present (needed on Colab)
if not os.path.exists("callbacks.py"):
    url = "https://raw.githubusercontent.com/jmduarte/iaifi-summer-school/main/book/callbacks.py"
    callbacksFile = wget.download(url)
if not os.path.exists("plotting.py"):
    urlPlot = "https://raw.githubusercontent.com/jmduarte/iaifi-summer-school/main/book/plotting.py"
    plotFile = wget.download(urlPlot)
if COLAB:
    if not os.path.exists("./iaifi-summer-school"):
        !git clone https://github.com/jmduarte/iaifi-summer-school.git
Load the dataset (if you are restarting from this point)
from tensorflow.keras.utils import to_categorical
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder, StandardScaler
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
seed = 0
np.random.seed(seed)
import tensorflow as tf
tf.random.set_seed(seed)
# import os
# os.environ['PATH'] = '/opt/Xilinx/Vivado/2019.2/bin:' + os.environ['PATH']
# For this tutorial we won't actually be running Vivado, so these lines are commented out,
# but if you want to run on an FPGA, simply uncomment them.
if not COLAB:
    # For a local pull
    X_train_val = np.load("X_train_val.npy")
    X_test = np.load("X_test.npy")
    y_train_val = np.load("y_train_val.npy")
    y_test = np.load("y_test.npy")
    classes = np.load("classes.npy", allow_pickle=True)
if COLAB:
    # For Colab
    X_train_val = np.load("iaifi-summer-school/book/X_train_val.npy")
    X_test = np.load("iaifi-summer-school/book/X_test.npy")
    y_train_val = np.load("iaifi-summer-school/book/y_train_val.npy")
    y_test = np.load("iaifi-summer-school/book/y_test.npy")
    classes = np.load("iaifi-summer-school/book/classes.npy", allow_pickle=True)
Construct a new model
We’ll use the same architecture as before: three hidden layers with 64, 32, and 32 neurons, each followed by a ReLU activation. Then we add an output layer with 5 neurons (one per class) and finish with a softmax activation.
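For reference, the parameter count of this fully connected architecture can be worked out by hand (weights plus biases for each Dense layer). This quick arithmetic sketch is illustrative and not part of the original notebook:

```python
# Parameter count for the 16 -> 64 -> 32 -> 32 -> 5 architecture:
# each Dense layer has n_in * n_out weights plus n_out biases.
layer_sizes = [16, 64, 32, 32, 5]
params = sum(n_in * n_out + n_out for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))
print(params)  # 4389
```

At 75% sparsity, roughly three quarters of those weights will be forced to zero by the pruning schedule below.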
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, BatchNormalization
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.regularizers import l1
from callbacks import all_callbacks
model = Sequential()
model.add(
    Dense(
        64,
        input_shape=(16,),
        name="fc1",
        kernel_initializer="lecun_uniform",
        kernel_regularizer=l1(0.0001),
    )
)
model.add(Activation(activation="relu", name="relu1"))
model.add(
    Dense(
        32,
        name="fc2",
        kernel_initializer="lecun_uniform",
        kernel_regularizer=l1(0.0001),
    )
)
model.add(Activation(activation="relu", name="relu2"))
model.add(
    Dense(
        32,
        name="fc3",
        kernel_initializer="lecun_uniform",
        kernel_regularizer=l1(0.0001),
    )
)
model.add(Activation(activation="relu", name="relu3"))
model.add(
    Dense(
        5,
        name="output",
        kernel_initializer="lecun_uniform",
        kernel_regularizer=l1(0.0001),
    )
)
model.add(Activation(activation="softmax", name="softmax"))
Train sparse
This time we’ll use the TensorFlow Model Optimization toolkit’s sparsity API to train a sparse model, forcing many weights to zero. Here the target sparsity is 75%: pruning begins at step 2000 and the pruning mask is updated every 100 steps.
from tensorflow_model_optimization.python.core.sparsity.keras import (
    prune,
    pruning_callbacks,
    pruning_schedule,
)
from tensorflow_model_optimization.sparsity.keras import strip_pruning

pruning_params = {
    "pruning_schedule": pruning_schedule.ConstantSparsity(
        0.75, begin_step=2000, frequency=100
    )
}
model = prune.prune_low_magnitude(model, **pruning_params)
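`prune_low_magnitude` wraps each layer so that, during training, the smallest-magnitude weights are progressively masked to zero. The end state a 75% `ConstantSparsity` target converges to can be sketched in plain NumPy (illustrative only, not the actual tfmot implementation):

```python
import numpy as np

def prune_to_sparsity(weights, sparsity=0.75):
    """Zero the smallest-magnitude entries so `sparsity` of them become 0."""
    flat = np.abs(weights).flatten()
    k = int(np.floor(sparsity * flat.size))  # how many weights to zero
    if k == 0:
        return weights.copy()
    threshold = np.sort(flat)[k - 1]  # magnitude of the k-th smallest weight
    return np.where(np.abs(weights) > threshold, weights, 0.0)

rng = np.random.default_rng(42)
w = rng.normal(size=(16, 64))  # same shape as the 'fc1' kernel above
w_pruned = prune_to_sparsity(w, 0.75)
print(1.0 - np.count_nonzero(w_pruned) / w_pruned.size)  # 0.75
```

The real schedule applies this masking gradually (here, starting at step 2000 and re-evaluating the mask every 100 steps), which lets the surviving weights adapt as their neighbours are removed.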
Train the model#
We’ll use the same settings as the previous model: Adam optimizer with categorical crossentropy loss.
The callbacks will decay the learning rate and save the model into the directory ‘model_3’.
The model isn’t very complex, so training should take only a few minutes, even on a CPU.
If you’ve restarted the notebook kernel after training once, set train = False
to load the trained model rather than training again.
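`all_callbacks` is a helper shipped with this tutorial; the learning-rate decay it configures behaves like Keras’s `ReduceLROnPlateau`. A simplified pure-Python sketch of that rule (the function name is hypothetical, and the cooldown is ignored):

```python
def schedule_lr(val_losses, lr=1e-4, factor=0.5, patience=10,
                min_delta=1e-6, min_lr=1e-7):
    """Halve lr whenever val_loss fails to improve for `patience` epochs."""
    best = float("inf")
    wait = 0
    for loss in val_losses:
        if loss < best - min_delta:  # improvement: reset the plateau counter
            best = loss
            wait = 0
        else:
            wait += 1
            if wait >= patience:     # plateau long enough: decay, floor at min_lr
                lr = max(lr * factor, min_lr)
                wait = 0
    return lr

# One improving epoch followed by eleven stagnant ones triggers a single halving
print(schedule_lr([1.0] * 12))  # 5e-05
```

With `stop_patience=1000` the early-stopping callback effectively never fires here, so training runs the full 30 epochs unless you interrupt it.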
train = True
if train:
    adam = Adam(learning_rate=0.0001)
    model.compile(
        optimizer=adam, loss=["categorical_crossentropy"], metrics=["accuracy"]
    )
    callbacks = all_callbacks(
        stop_patience=1000,
        lr_factor=0.5,
        lr_patience=10,
        lr_epsilon=0.000001,
        lr_cooldown=2,
        lr_minimum=0.0000001,
        outputDir="model_3",
    )
    callbacks.callbacks.append(pruning_callbacks.UpdatePruningStep())
    model.fit(
        X_train_val,
        y_train_val,
        batch_size=1024,
        epochs=30,
        validation_split=0.25,
        shuffle=True,
        callbacks=callbacks.callbacks,
    )
    # Save the model again but with the pruning 'stripped' to use the regular layer types
    model = strip_pruning(model)
    model.save("model_3/KERAS_check_best_model.h5")
else:
    from tensorflow.keras.models import load_model

    model = load_model("model_3/KERAS_check_best_model.h5")
WARNING:tensorflow:`epsilon` argument is deprecated and will be removed, use `min_delta` instead.
WARNING:tensorflow:`period` argument is deprecated. Please use `save_freq` to specify the frequency in number of batches seen.
Epoch 1/30
WARNING:tensorflow:Callback method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0025s vs `on_train_batch_end` time: 0.0086s). Check your callbacks.
***callbacks***
saving losses to model_3/losses.log
Epoch 1: val_loss improved from inf to 1.11182, saving model to model_3/KERAS_check_best_model.h5
Epoch 1: val_loss improved from inf to 1.11182, saving model to model_3/KERAS_check_best_model_weights.h5
Epoch 1: saving model to model_3/KERAS_check_model_last.h5
Epoch 1: saving model to model_3/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 5s 4ms/step - loss: 1.3117 - accuracy: 0.5322 - val_loss: 1.1118 - val_accuracy: 0.6318 - lr: 1.0000e-04
Epoch 2/30
***callbacks***
saving losses to model_3/losses.log
Epoch 2: val_loss improved from 1.11182 to 1.00579, saving model to model_3/KERAS_check_best_model.h5
Epoch 2: val_loss improved from 1.11182 to 1.00579, saving model to model_3/KERAS_check_best_model_weights.h5
Epoch 2: saving model to model_3/KERAS_check_model_last.h5
Epoch 2: saving model to model_3/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 2s 4ms/step - loss: 1.0499 - accuracy: 0.6617 - val_loss: 1.0058 - val_accuracy: 0.6809 - lr: 1.0000e-04
Epoch 3/30
***callbacks***
saving losses to model_3/losses.log
Epoch 3: val_loss improved from 1.00579 to 0.94006, saving model to model_3/KERAS_check_best_model.h5
Epoch 3: val_loss improved from 1.00579 to 0.94006, saving model to model_3/KERAS_check_best_model_weights.h5
Epoch 3: saving model to model_3/KERAS_check_model_last.h5
Epoch 3: saving model to model_3/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 2s 3ms/step - loss: 0.9685 - accuracy: 0.6949 - val_loss: 0.9401 - val_accuracy: 0.7069 - lr: 1.0000e-04
Epoch 4/30
***callbacks***
saving losses to model_3/losses.log
Epoch 4: val_loss improved from 0.94006 to 0.89407, saving model to model_3/KERAS_check_best_model.h5
Epoch 4: val_loss improved from 0.94006 to 0.89407, saving model to model_3/KERAS_check_best_model_weights.h5
Epoch 4: saving model to model_3/KERAS_check_model_last.h5
Epoch 4: saving model to model_3/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 2s 3ms/step - loss: 0.9120 - accuracy: 0.7151 - val_loss: 0.8941 - val_accuracy: 0.7209 - lr: 1.0000e-04
Epoch 5/30
***callbacks***
saving losses to model_3/losses.log
Epoch 5: val_loss did not improve from 0.89407
Epoch 5: val_loss did not improve from 0.89407
Epoch 5: saving model to model_3/KERAS_check_model_last.h5
Epoch 5: saving model to model_3/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 2s 3ms/step - loss: 1.1147 - accuracy: 0.5858 - val_loss: 1.0326 - val_accuracy: 0.6773 - lr: 1.0000e-04
Epoch 6/30
***callbacks***
saving losses to model_3/losses.log
Epoch 6: val_loss did not improve from 0.89407
Epoch 6: val_loss did not improve from 0.89407
Epoch 6: saving model to model_3/KERAS_check_model_last.h5
Epoch 6: saving model to model_3/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 2s 3ms/step - loss: 0.9808 - accuracy: 0.6959 - val_loss: 0.9487 - val_accuracy: 0.7020 - lr: 1.0000e-04
Epoch 7/30
***callbacks***
saving losses to model_3/losses.log
Epoch 7: val_loss did not improve from 0.89407
Epoch 7: val_loss did not improve from 0.89407
Epoch 7: saving model to model_3/KERAS_check_model_last.h5
Epoch 7: saving model to model_3/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 2s 4ms/step - loss: 0.9246 - accuracy: 0.7055 - val_loss: 0.9103 - val_accuracy: 0.7080 - lr: 1.0000e-04
Epoch 8/30
***callbacks***
saving losses to model_3/losses.log
Epoch 8: val_loss improved from 0.89407 to 0.88553, saving model to model_3/KERAS_check_best_model.h5
Epoch 8: val_loss improved from 0.89407 to 0.88553, saving model to model_3/KERAS_check_best_model_weights.h5
Epoch 8: saving model to model_3/KERAS_check_model_last.h5
Epoch 8: saving model to model_3/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 2s 3ms/step - loss: 0.8940 - accuracy: 0.7115 - val_loss: 0.8855 - val_accuracy: 0.7130 - lr: 1.0000e-04
Epoch 9/30
***callbacks***
saving losses to model_3/losses.log
Epoch 9: val_loss improved from 0.88553 to 0.86738, saving model to model_3/KERAS_check_best_model.h5
Epoch 9: val_loss improved from 0.88553 to 0.86738, saving model to model_3/KERAS_check_best_model_weights.h5
Epoch 9: saving model to model_3/KERAS_check_model_last.h5
Epoch 9: saving model to model_3/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 2s 4ms/step - loss: 0.8728 - accuracy: 0.7164 - val_loss: 0.8674 - val_accuracy: 0.7178 - lr: 1.0000e-04
Epoch 10/30
***callbacks***
saving losses to model_3/losses.log
Epoch 10: val_loss improved from 0.86738 to 0.85348, saving model to model_3/KERAS_check_best_model.h5
Epoch 10: val_loss improved from 0.86738 to 0.85348, saving model to model_3/KERAS_check_best_model_weights.h5
Epoch 10: saving model to model_3/KERAS_check_model_last.h5
Epoch 10: saving model to model_3/KERAS_check_model_last_weights.h5
Epoch 10: saving model to model_3/KERAS_check_model_epoch10.h5
***callbacks end***
487/487 [==============================] - 2s 4ms/step - loss: 0.8569 - accuracy: 0.7203 - val_loss: 0.8535 - val_accuracy: 0.7213 - lr: 1.0000e-04
Epoch 11/30
***callbacks***
saving losses to model_3/losses.log
Epoch 11: val_loss improved from 0.85348 to 0.84250, saving model to model_3/KERAS_check_best_model.h5
Epoch 11: val_loss improved from 0.85348 to 0.84250, saving model to model_3/KERAS_check_best_model_weights.h5
Epoch 11: saving model to model_3/KERAS_check_model_last.h5
Epoch 11: saving model to model_3/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 2s 3ms/step - loss: 0.8445 - accuracy: 0.7233 - val_loss: 0.8425 - val_accuracy: 0.7232 - lr: 1.0000e-04
Epoch 12/30
***callbacks***
saving losses to model_3/losses.log
Epoch 12: val_loss improved from 0.84250 to 0.83362, saving model to model_3/KERAS_check_best_model.h5
Epoch 12: val_loss improved from 0.84250 to 0.83362, saving model to model_3/KERAS_check_best_model_weights.h5
Epoch 12: saving model to model_3/KERAS_check_model_last.h5
Epoch 12: saving model to model_3/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 2s 4ms/step - loss: 0.8344 - accuracy: 0.7254 - val_loss: 0.8336 - val_accuracy: 0.7262 - lr: 1.0000e-04
Epoch 13/30
***callbacks***
saving losses to model_3/losses.log
Epoch 13: val_loss improved from 0.83362 to 0.82621, saving model to model_3/KERAS_check_best_model.h5
Epoch 13: val_loss improved from 0.83362 to 0.82621, saving model to model_3/KERAS_check_best_model_weights.h5
Epoch 13: saving model to model_3/KERAS_check_model_last.h5
Epoch 13: saving model to model_3/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 2s 3ms/step - loss: 0.8262 - accuracy: 0.7272 - val_loss: 0.8262 - val_accuracy: 0.7274 - lr: 1.0000e-04
Epoch 14/30
***callbacks***
saving losses to model_3/losses.log
Epoch 14: val_loss improved from 0.82621 to 0.81981, saving model to model_3/KERAS_check_best_model.h5
Epoch 14: val_loss improved from 0.82621 to 0.81981, saving model to model_3/KERAS_check_best_model_weights.h5
Epoch 14: saving model to model_3/KERAS_check_model_last.h5
Epoch 14: saving model to model_3/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 2s 3ms/step - loss: 0.8193 - accuracy: 0.7287 - val_loss: 0.8198 - val_accuracy: 0.7286 - lr: 1.0000e-04
Epoch 15/30
***callbacks***
saving losses to model_3/losses.log
Epoch 15: val_loss improved from 0.81981 to 0.81431, saving model to model_3/KERAS_check_best_model.h5
Epoch 15: val_loss improved from 0.81981 to 0.81431, saving model to model_3/KERAS_check_best_model_weights.h5
Epoch 15: saving model to model_3/KERAS_check_model_last.h5
Epoch 15: saving model to model_3/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 2s 3ms/step - loss: 0.8134 - accuracy: 0.7299 - val_loss: 0.8143 - val_accuracy: 0.7300 - lr: 1.0000e-04
Epoch 16/30
452/487 [==========================>...] - ETA: 0s - loss: 0.8078 - accuracy: 0.7310
---------------------------------------------------------------------------
KeyboardInterrupt Traceback (most recent call last)
/tmp/ipykernel_6015/3673671120.py in <module>
22 validation_split=0.25,
23 shuffle=True,
---> 24 callbacks=callbacks.callbacks,
25 )
26 # Save the model again but with the pruning 'stripped' to use the regular layer types
KeyboardInterrupt:
Check sparsity#
Make a quick check that the model was indeed trained sparse. We'll make a histogram of the weights of the first layer and hopefully observe a large peak in the bin containing zero. Note the logarithmic y-axis.
w = model.layers[0].weights[0].numpy()
h, b = np.histogram(w, bins=100)
plt.figure(figsize=(7, 7))
plt.bar(b[:-1], h, width=b[1] - b[0])
plt.semilogy()
print("Fraction of zeros = {}".format(np.sum(w == 0) / np.size(w)))
Fraction of zeros = 0.75
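The same check generalizes to any weight matrix. Below is a minimal sketch of a reusable sparsity helper; the `sparsity` function and the toy matrix are illustrative, not part of the tutorial code:

```python
import numpy as np

def sparsity(w):
    """Fraction of entries that are exactly zero."""
    w = np.asarray(w)
    return np.count_nonzero(w == 0) / w.size

# Toy 4x4 matrix with 12 of 16 entries zero,
# mimicking the 75% target sparsity used in this tutorial
w = np.diag([0.5, -0.2, 0.1, 0.3])
print(sparsity(w))  # -> 0.75
```

Applied to `model.layers[0].weights[0].numpy()`, this reproduces the number printed above.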
Compare this to the first model:
from tensorflow.keras.models import load_model
if not COLAB:
    model_orig = load_model("model_1/KERAS_check_best_model.h5")  # locally
else:
    model_orig = load_model("iaifi-summer-school/book/model_1/KERAS_check_best_model.h5")  # for Colab
w_orig = model_orig.layers[0].weights[0].numpy().flatten()
w = model.layers[0].weights[0].numpy().flatten()
plt.figure(figsize=(7, 7))
_, bins, _ = plt.hist(w_orig, bins=100, label="Original", histtype="step")
plt.hist(w, bins=bins, label="Pruned", histtype="step")
plt.semilogy()
plt.legend()
<matplotlib.legend.Legend at 0x14cbabfa0>
Check performance#
How does this 75% sparse model compare against the unpruned model? Let's report the accuracy and make a ROC curve. The unpruned model is shown with solid lines, the pruned model with dashed lines.
import plotting
import matplotlib.pyplot as plt
from sklearn.metrics import accuracy_score
from tensorflow.keras.models import load_model
if not COLAB:
    model_ref = load_model("model_1/KERAS_check_best_model.h5")  # locally
else:
    model_ref = load_model("iaifi-summer-school/book/model_1/KERAS_check_best_model.h5")  # for Colab
y_ref = model_ref.predict(X_test)
y_prune = model.predict(X_test)
print(
    "Accuracy unpruned: {}".format(
        accuracy_score(np.argmax(y_test, axis=1), np.argmax(y_ref, axis=1))
    )
)
print(
    "Accuracy pruned: {}".format(
        accuracy_score(np.argmax(y_test, axis=1), np.argmax(y_prune, axis=1))
    )
)
fig, ax = plt.subplots(figsize=(9, 9))
_ = plotting.makeRoc(y_test, y_ref, classes)
plt.gca().set_prop_cycle(None) # reset the colors
_ = plotting.makeRoc(y_test, y_prune, classes, linestyle="--")
from matplotlib.lines import Line2D
lines = [Line2D([0], [0], ls="-"), Line2D([0], [0], ls="--")]
from matplotlib.legend import Legend
leg = Legend(ax, lines, labels=["unpruned", "pruned"], loc="lower right", frameon=False)
ax.add_artist(leg)
5188/5188 [==============================] - 4s 698us/step
5188/5188 [==============================] - 4s 728us/step
Accuracy unpruned: 0.7516506024096385
Accuracy pruned: 0.7429879518072289
<matplotlib.legend.Legend at 0x14dd01c70>
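Pruning and shrinking attack the parameter budget in different ways, and a rough count makes the comparison concrete. The sketch below assumes the original architecture from earlier in the tutorial (hidden layers of 64, 32 and 32 neurons, four times the reduced model) and counts weight-matrix entries only, treating the 75% sparsity as uniform across layers; the actual savings depend on which layers the pruning schedule covers.

```python
def dense_weights(n_in, n_out):
    # Weight-matrix entries only (biases are typically not pruned)
    return n_in * n_out

# Original model: 16 inputs -> 64 -> 32 -> 32 -> 5 outputs (assumed architecture)
orig = (
    dense_weights(16, 64)
    + dense_weights(64, 32)
    + dense_weights(32, 32)
    + dense_weights(32, 5)
)

# 75% sparse version: only a quarter of the weights survive
pruned_nonzero = int(orig * 0.25)

# Reduced model: 16 -> 16 -> 8 -> 8 -> 5
small = (
    dense_weights(16, 16)
    + dense_weights(16, 8)
    + dense_weights(8, 8)
    + dense_weights(8, 5)
)

print(orig, pruned_nonzero, small)  # -> 4256 1064 488
```

Both routes leave a similar order of magnitude of surviving multiplications, which is what makes the accuracy comparison interesting: the pruned model keeps the larger model's structure while spending a comparable compute budget.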
Reduced size model#
What if, instead of pruning our model, we simply shrink it? Let's now train a model where the hidden layers are a quarter of the size they are in the original model: 3 hidden layers with 16, then 8, then 8 neurons. Each layer will use ReLU activation.
Add an output layer with 5 neurons (one for each class), then finish with a softmax activation.
model_small = Sequential()
model_small.add(
    Dense(
        16,
        input_shape=(16,),
        name="fc1",
        kernel_initializer="lecun_uniform",
        kernel_regularizer=l1(0.0001),
    )
)
model_small.add(Activation(activation="relu", name="relu1"))
model_small.add(
    Dense(8, name="fc2", kernel_initializer="lecun_uniform", kernel_regularizer=l1(0.0001))
)
model_small.add(Activation(activation="relu", name="relu2"))
model_small.add(
    Dense(8, name="fc3", kernel_initializer="lecun_uniform", kernel_regularizer=l1(0.0001))
)
model_small.add(Activation(activation="relu", name="relu3"))
model_small.add(
    Dense(
        5,
        name="output",
        kernel_initializer="lecun_uniform",
        kernel_regularizer=l1(0.0001),
    )
)
model_small.add(Activation(activation="softmax", name="softmax"))
train = True
if train:
    adam = Adam(lr=0.0001)
    model_small.compile(
        optimizer=adam, loss=["categorical_crossentropy"], metrics=["accuracy"]
    )
    callbacks = all_callbacks(
        stop_patience=1000,
        lr_factor=0.5,
        lr_patience=10,
        lr_epsilon=0.000001,
        lr_cooldown=2,
        lr_minimum=0.0000001,
        outputDir="model_1_half",
    )
    model_small.fit(
        X_train_val,
        y_train_val,
        batch_size=1024,
        epochs=30,
        validation_split=0.25,
        shuffle=True,
        callbacks=callbacks.callbacks,
    )
    model_small.save("model_1_small/KERAS_check_best_model.h5")
else:
    from tensorflow.keras.models import load_model

    model_small = load_model("model_1_small/KERAS_check_best_model.h5")
WARNING:tensorflow:`epsilon` argument is deprecated and will be removed, use `min_delta` instead.
WARNING:tensorflow:`period` argument is deprecated. Please use `save_freq` to specify the frequency in number of batches seen.
Epoch 1/30
/Users/wmccorma/miniconda3/envs/ml-iaifi/lib/python3.9/site-packages/keras/optimizers/optimizer_v2/adam.py:110: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
super(Adam, self).__init__(name, **kwargs)
1/487 [..............................] - ETA: 3:45 - loss: 1.6479 - accuracy: 0.1084
WARNING:tensorflow:Callback method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0013s vs `on_train_batch_end` time: 0.0028s). Check your callbacks.
452/487 [==========================>...] - ETA: 0s - loss: 1.5647 - accuracy: 0.2630
***callbacks***
saving losses to model_1_half/losses.log
Epoch 1: val_loss improved from inf to 1.45849, saving model to model_1_half/KERAS_check_best_model.h5
Epoch 1: val_loss improved from inf to 1.45849, saving model to model_1_half/KERAS_check_best_model_weights.h5
Epoch 1: saving model to model_1_half/KERAS_check_model_last.h5
Epoch 1: saving model to model_1_half/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 2s 2ms/step - loss: 1.5577 - accuracy: 0.2705 - val_loss: 1.4585 - val_accuracy: 0.3769 - lr: 1.0000e-04
Epoch 2/30
477/487 [============================>.] - ETA: 0s - loss: 1.3490 - accuracy: 0.4994
***callbacks***
saving losses to model_1_half/losses.log
Epoch 2: val_loss improved from 1.45849 to 1.24289, saving model to model_1_half/KERAS_check_best_model.h5
Epoch 2: val_loss improved from 1.45849 to 1.24289, saving model to model_1_half/KERAS_check_best_model_weights.h5
Epoch 2: saving model to model_1_half/KERAS_check_model_last.h5
Epoch 2: saving model to model_1_half/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 1.3470 - accuracy: 0.5006 - val_loss: 1.2429 - val_accuracy: 0.5633 - lr: 1.0000e-04
Epoch 3/30
474/487 [============================>.] - ETA: 0s - loss: 1.1766 - accuracy: 0.5798
***callbacks***
saving losses to model_1_half/losses.log
Epoch 3: val_loss improved from 1.24289 to 1.12312, saving model to model_1_half/KERAS_check_best_model.h5
Epoch 3: val_loss improved from 1.24289 to 1.12312, saving model to model_1_half/KERAS_check_best_model_weights.h5
Epoch 3: saving model to model_1_half/KERAS_check_model_last.h5
Epoch 3: saving model to model_1_half/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 1.1753 - accuracy: 0.5802 - val_loss: 1.1231 - val_accuracy: 0.5967 - lr: 1.0000e-04
Epoch 4/30
463/487 [===========================>..] - ETA: 0s - loss: 1.0858 - accuracy: 0.6139
***callbacks***
saving losses to model_1_half/losses.log
Epoch 4: val_loss improved from 1.12312 to 1.05513, saving model to model_1_half/KERAS_check_best_model.h5
Epoch 4: val_loss improved from 1.12312 to 1.05513, saving model to model_1_half/KERAS_check_best_model_weights.h5
Epoch 4: saving model to model_1_half/KERAS_check_model_last.h5
Epoch 4: saving model to model_1_half/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 1.0845 - accuracy: 0.6145 - val_loss: 1.0551 - val_accuracy: 0.6293 - lr: 1.0000e-04
Epoch 5/30
461/487 [===========================>..] - ETA: 0s - loss: 1.0314 - accuracy: 0.6338
***callbacks***
saving losses to model_1_half/losses.log
Epoch 5: val_loss improved from 1.05513 to 1.01500, saving model to model_1_half/KERAS_check_best_model.h5
Epoch 5: val_loss improved from 1.05513 to 1.01500, saving model to model_1_half/KERAS_check_best_model_weights.h5
Epoch 5: saving model to model_1_half/KERAS_check_model_last.h5
Epoch 5: saving model to model_1_half/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 1.0305 - accuracy: 0.6341 - val_loss: 1.0150 - val_accuracy: 0.6405 - lr: 1.0000e-04
Epoch 6/30
457/487 [===========================>..] - ETA: 0s - loss: 0.9997 - accuracy: 0.6436
***callbacks***
saving losses to model_1_half/losses.log
Epoch 6: val_loss improved from 1.01500 to 0.99025, saving model to model_1_half/KERAS_check_best_model.h5
Epoch 6: val_loss improved from 1.01500 to 0.99025, saving model to model_1_half/KERAS_check_best_model_weights.h5
Epoch 6: saving model to model_1_half/KERAS_check_model_last.h5
Epoch 6: saving model to model_1_half/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 0.9990 - accuracy: 0.6438 - val_loss: 0.9903 - val_accuracy: 0.6487 - lr: 1.0000e-04
Epoch 7/30
476/487 [============================>.] - ETA: 0s - loss: 0.9776 - accuracy: 0.6544
***callbacks***
saving losses to model_1_half/losses.log
Epoch 7: val_loss improved from 0.99025 to 0.97078, saving model to model_1_half/KERAS_check_best_model.h5
Epoch 7: val_loss improved from 0.99025 to 0.97078, saving model to model_1_half/KERAS_check_best_model_weights.h5
Epoch 7: saving model to model_1_half/KERAS_check_model_last.h5
Epoch 7: saving model to model_1_half/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 0.9773 - accuracy: 0.6546 - val_loss: 0.9708 - val_accuracy: 0.6602 - lr: 1.0000e-04
Epoch 8/30
463/487 [===========================>..] - ETA: 0s - loss: 0.9600 - accuracy: 0.6656
***callbacks***
saving losses to model_1_half/losses.log
Epoch 8: val_loss improved from 0.97078 to 0.95339, saving model to model_1_half/KERAS_check_best_model.h5
Epoch 8: val_loss improved from 0.97078 to 0.95339, saving model to model_1_half/KERAS_check_best_model_weights.h5
Epoch 8: saving model to model_1_half/KERAS_check_model_last.h5
Epoch 8: saving model to model_1_half/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 0.9590 - accuracy: 0.6661 - val_loss: 0.9534 - val_accuracy: 0.6717 - lr: 1.0000e-04
Epoch 9/30
475/487 [============================>.] - ETA: 0s - loss: 0.9421 - accuracy: 0.6766
***callbacks***
saving losses to model_1_half/losses.log
Epoch 9: val_loss improved from 0.95339 to 0.93640, saving model to model_1_half/KERAS_check_best_model.h5
Epoch 9: val_loss improved from 0.95339 to 0.93640, saving model to model_1_half/KERAS_check_best_model_weights.h5
Epoch 9: saving model to model_1_half/KERAS_check_model_last.h5
Epoch 9: saving model to model_1_half/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 0.9418 - accuracy: 0.6768 - val_loss: 0.9364 - val_accuracy: 0.6825 - lr: 1.0000e-04
Epoch 10/30
483/487 [============================>.] - ETA: 0s - loss: 0.9249 - accuracy: 0.6877
***callbacks***
saving losses to model_1_half/losses.log
Epoch 10: val_loss improved from 0.93640 to 0.91934, saving model to model_1_half/KERAS_check_best_model.h5
Epoch 10: val_loss improved from 0.93640 to 0.91934, saving model to model_1_half/KERAS_check_best_model_weights.h5
Epoch 10: saving model to model_1_half/KERAS_check_model_last.h5
Epoch 10: saving model to model_1_half/KERAS_check_model_last_weights.h5
Epoch 10: saving model to model_1_half/KERAS_check_model_epoch10.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 0.9248 - accuracy: 0.6878 - val_loss: 0.9193 - val_accuracy: 0.6925 - lr: 1.0000e-04
Epoch 11/30
483/487 [============================>.] - ETA: 0s - loss: 0.9077 - accuracy: 0.6974
***callbacks***
saving losses to model_1_half/losses.log
Epoch 11: val_loss improved from 0.91934 to 0.90247, saving model to model_1_half/KERAS_check_best_model.h5
Epoch 11: val_loss improved from 0.91934 to 0.90247, saving model to model_1_half/KERAS_check_best_model_weights.h5
Epoch 11: saving model to model_1_half/KERAS_check_model_last.h5
Epoch 11: saving model to model_1_half/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 0.9076 - accuracy: 0.6974 - val_loss: 0.9025 - val_accuracy: 0.7007 - lr: 1.0000e-04
Epoch 12/30
466/487 [===========================>..] - ETA: 0s - loss: 0.8913 - accuracy: 0.7049
***callbacks***
saving losses to model_1_half/losses.log
Epoch 12: val_loss improved from 0.90247 to 0.88672, saving model to model_1_half/KERAS_check_best_model.h5
Epoch 12: val_loss improved from 0.90247 to 0.88672, saving model to model_1_half/KERAS_check_best_model_weights.h5
Epoch 12: saving model to model_1_half/KERAS_check_model_last.h5
Epoch 12: saving model to model_1_half/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 0.8910 - accuracy: 0.7050 - val_loss: 0.8867 - val_accuracy: 0.7075 - lr: 1.0000e-04
Epoch 13/30
452/487 [==========================>...] - ETA: 0s - loss: 0.8765 - accuracy: 0.7105
***callbacks***
saving losses to model_1_half/losses.log
Epoch 13: val_loss improved from 0.88672 to 0.87253, saving model to model_1_half/KERAS_check_best_model.h5
Epoch 13: val_loss improved from 0.88672 to 0.87253, saving model to model_1_half/KERAS_check_best_model_weights.h5
Epoch 13: saving model to model_1_half/KERAS_check_model_last.h5
Epoch 13: saving model to model_1_half/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 0.8759 - accuracy: 0.7106 - val_loss: 0.8725 - val_accuracy: 0.7124 - lr: 1.0000e-04
Epoch 14/30
487/487 [==============================] - ETA: 0s - loss: 0.8627 - accuracy: 0.7151
***callbacks***
saving losses to model_1_half/losses.log
Epoch 14: val_loss improved from 0.87253 to 0.86002, saving model to model_1_half/KERAS_check_best_model.h5
Epoch 14: val_loss improved from 0.87253 to 0.86002, saving model to model_1_half/KERAS_check_best_model_weights.h5
Epoch 14: saving model to model_1_half/KERAS_check_model_last.h5
Epoch 14: saving model to model_1_half/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 0.8627 - accuracy: 0.7151 - val_loss: 0.8600 - val_accuracy: 0.7161 - lr: 1.0000e-04
Epoch 15/30
477/487 [============================>.] - ETA: 0s - loss: 0.8515 - accuracy: 0.7178
***callbacks***
saving losses to model_1_half/losses.log
Epoch 15: val_loss improved from 0.86002 to 0.84929, saving model to model_1_half/KERAS_check_best_model.h5
Epoch 15: val_loss improved from 0.86002 to 0.84929, saving model to model_1_half/KERAS_check_best_model_weights.h5
Epoch 15: saving model to model_1_half/KERAS_check_model_last.h5
Epoch 15: saving model to model_1_half/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 0.8512 - accuracy: 0.7179 - val_loss: 0.8493 - val_accuracy: 0.7192 - lr: 1.0000e-04
Epoch 16/30
485/487 [============================>.] - ETA: 0s - loss: 0.8412 - accuracy: 0.7204
***callbacks***
saving losses to model_1_half/losses.log
Epoch 16: val_loss improved from 0.84929 to 0.83998, saving model to model_1_half/KERAS_check_best_model.h5
Epoch 16: val_loss improved from 0.84929 to 0.83998, saving model to model_1_half/KERAS_check_best_model_weights.h5
Epoch 16: saving model to model_1_half/KERAS_check_model_last.h5
Epoch 16: saving model to model_1_half/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 0.8412 - accuracy: 0.7204 - val_loss: 0.8400 - val_accuracy: 0.7214 - lr: 1.0000e-04
Epoch 17/30
467/487 [===========================>..] - ETA: 0s - loss: 0.8329 - accuracy: 0.7224
***callbacks***
saving losses to model_1_half/losses.log
Epoch 17: val_loss improved from 0.83998 to 0.83208, saving model to model_1_half/KERAS_check_best_model.h5
Epoch 17: val_loss improved from 0.83998 to 0.83208, saving model to model_1_half/KERAS_check_best_model_weights.h5
Epoch 17: saving model to model_1_half/KERAS_check_model_last.h5
Epoch 17: saving model to model_1_half/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 0.8326 - accuracy: 0.7224 - val_loss: 0.8321 - val_accuracy: 0.7233 - lr: 1.0000e-04
Epoch 18/30
452/487 [==========================>...] - ETA: 0s - loss: 0.8253 - accuracy: 0.7236
***callbacks***
saving losses to model_1_half/losses.log
Epoch 18: val_loss improved from 0.83208 to 0.82520, saving model to model_1_half/KERAS_check_best_model.h5
Epoch 18: val_loss improved from 0.83208 to 0.82520, saving model to model_1_half/KERAS_check_best_model_weights.h5
Epoch 18: saving model to model_1_half/KERAS_check_model_last.h5
Epoch 18: saving model to model_1_half/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 0.8253 - accuracy: 0.7238 - val_loss: 0.8252 - val_accuracy: 0.7242 - lr: 1.0000e-04
Epoch 19/30
479/487 [============================>.] - ETA: 0s - loss: 0.8193 - accuracy: 0.7249
***callbacks***
saving losses to model_1_half/losses.log
Epoch 19: val_loss improved from 0.82520 to 0.81960, saving model to model_1_half/KERAS_check_best_model.h5
Epoch 19: val_loss improved from 0.82520 to 0.81960, saving model to model_1_half/KERAS_check_best_model_weights.h5
Epoch 19: saving model to model_1_half/KERAS_check_model_last.h5
Epoch 19: saving model to model_1_half/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 0.8189 - accuracy: 0.7251 - val_loss: 0.8196 - val_accuracy: 0.7251 - lr: 1.0000e-04
Epoch 20/30
481/487 [============================>.] - ETA: 0s - loss: 0.8134 - accuracy: 0.7260
***callbacks***
saving losses to model_1_half/losses.log
Epoch 20: val_loss improved from 0.81960 to 0.81440, saving model to model_1_half/KERAS_check_best_model.h5
Epoch 20: val_loss improved from 0.81960 to 0.81440, saving model to model_1_half/KERAS_check_best_model_weights.h5
Epoch 20: saving model to model_1_half/KERAS_check_model_last.h5
Epoch 20: saving model to model_1_half/KERAS_check_model_last_weights.h5
Epoch 20: saving model to model_1_half/KERAS_check_model_epoch20.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 0.8134 - accuracy: 0.7260 - val_loss: 0.8144 - val_accuracy: 0.7264 - lr: 1.0000e-04
Epoch 21/30
480/487 [============================>.] - ETA: 0s - loss: 0.8087 - accuracy: 0.7268
***callbacks***
saving losses to model_1_half/losses.log
Epoch 21: val_loss improved from 0.81440 to 0.80992, saving model to model_1_half/KERAS_check_best_model.h5
Epoch 21: val_loss improved from 0.81440 to 0.80992, saving model to model_1_half/KERAS_check_best_model_weights.h5
Epoch 21: saving model to model_1_half/KERAS_check_model_last.h5
Epoch 21: saving model to model_1_half/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 0.8086 - accuracy: 0.7268 - val_loss: 0.8099 - val_accuracy: 0.7269 - lr: 1.0000e-04
Epoch 22/30
467/487 [===========================>..] - ETA: 0s - loss: 0.8047 - accuracy: 0.7274
***callbacks***
saving losses to model_1_half/losses.log
Epoch 22: val_loss improved from 0.80992 to 0.80614, saving model to model_1_half/KERAS_check_best_model.h5
Epoch 22: val_loss improved from 0.80992 to 0.80614, saving model to model_1_half/KERAS_check_best_model_weights.h5
Epoch 22: saving model to model_1_half/KERAS_check_model_last.h5
Epoch 22: saving model to model_1_half/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 0.8043 - accuracy: 0.7275 - val_loss: 0.8061 - val_accuracy: 0.7276 - lr: 1.0000e-04
Epoch 23/30
475/487 [============================>.] - ETA: 0s - loss: 0.8009 - accuracy: 0.7281
***callbacks***
saving losses to model_1_half/losses.log
Epoch 23: val_loss improved from 0.80614 to 0.80257, saving model to model_1_half/KERAS_check_best_model.h5
Epoch 23: val_loss improved from 0.80614 to 0.80257, saving model to model_1_half/KERAS_check_best_model_weights.h5
Epoch 23: saving model to model_1_half/KERAS_check_model_last.h5
Epoch 23: saving model to model_1_half/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 0.8005 - accuracy: 0.7282 - val_loss: 0.8026 - val_accuracy: 0.7282 - lr: 1.0000e-04
Epoch 24/30
467/487 [===========================>..] - ETA: 0s - loss: 0.7967 - accuracy: 0.7290
***callbacks***
saving losses to model_1_half/losses.log
Epoch 24: val_loss improved from 0.80257 to 0.79940, saving model to model_1_half/KERAS_check_best_model.h5
Epoch 24: val_loss improved from 0.80257 to 0.79940, saving model to model_1_half/KERAS_check_best_model_weights.h5
Epoch 24: saving model to model_1_half/KERAS_check_model_last.h5
Epoch 24: saving model to model_1_half/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 0.7971 - accuracy: 0.7288 - val_loss: 0.7994 - val_accuracy: 0.7288 - lr: 1.0000e-04
Epoch 25/30
448/487 [==========================>...] - ETA: 0s - loss: 0.7943 - accuracy: 0.7291
***callbacks***
saving losses to model_1_half/losses.log
Epoch 25: val_loss improved from 0.79940 to 0.79659, saving model to model_1_half/KERAS_check_best_model.h5
Epoch 25: val_loss improved from 0.79940 to 0.79659, saving model to model_1_half/KERAS_check_best_model_weights.h5
Epoch 25: saving model to model_1_half/KERAS_check_model_last.h5
Epoch 25: saving model to model_1_half/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 0.7940 - accuracy: 0.7294 - val_loss: 0.7966 - val_accuracy: 0.7290 - lr: 1.0000e-04
Epoch 26/30
473/487 [============================>.] - ETA: 0s - loss: 0.7908 - accuracy: 0.7301
***callbacks***
saving losses to model_1_half/losses.log
Epoch 26: val_loss improved from 0.79659 to 0.79385, saving model to model_1_half/KERAS_check_best_model.h5
Epoch 26: val_loss improved from 0.79659 to 0.79385, saving model to model_1_half/KERAS_check_best_model_weights.h5
Epoch 26: saving model to model_1_half/KERAS_check_model_last.h5
Epoch 26: saving model to model_1_half/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 0.7911 - accuracy: 0.7299 - val_loss: 0.7939 - val_accuracy: 0.7298 - lr: 1.0000e-04
Epoch 27/30
481/487 [============================>.] - ETA: 0s - loss: 0.7885 - accuracy: 0.7305
***callbacks***
saving losses to model_1_half/losses.log
Epoch 27: val_loss improved from 0.79385 to 0.79150, saving model to model_1_half/KERAS_check_best_model.h5
Epoch 27: val_loss improved from 0.79385 to 0.79150, saving model to model_1_half/KERAS_check_best_model_weights.h5
Epoch 27: saving model to model_1_half/KERAS_check_model_last.h5
Epoch 27: saving model to model_1_half/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 0.7885 - accuracy: 0.7305 - val_loss: 0.7915 - val_accuracy: 0.7304 - lr: 1.0000e-04
Epoch 28/30
480/487 [============================>.] - ETA: 0s - loss: 0.7859 - accuracy: 0.7310
***callbacks***
saving losses to model_1_half/losses.log
Epoch 28: val_loss improved from 0.79150 to 0.78935, saving model to model_1_half/KERAS_check_best_model.h5
Epoch 28: val_loss improved from 0.79150 to 0.78935, saving model to model_1_half/KERAS_check_best_model_weights.h5
Epoch 28: saving model to model_1_half/KERAS_check_model_last.h5
Epoch 28: saving model to model_1_half/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 0.7860 - accuracy: 0.7309 - val_loss: 0.7894 - val_accuracy: 0.7306 - lr: 1.0000e-04
Epoch 29/30
468/487 [===========================>..] - ETA: 0s - loss: 0.7836 - accuracy: 0.7315
***callbacks***
saving losses to model_1_half/losses.log
Epoch 29: val_loss improved from 0.78935 to 0.78712, saving model to model_1_half/KERAS_check_best_model.h5
Epoch 29: val_loss improved from 0.78935 to 0.78712, saving model to model_1_half/KERAS_check_best_model_weights.h5
Epoch 29: saving model to model_1_half/KERAS_check_model_last.h5
Epoch 29: saving model to model_1_half/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 0.7837 - accuracy: 0.7315 - val_loss: 0.7871 - val_accuracy: 0.7313 - lr: 1.0000e-04
Epoch 30/30
472/487 [============================>.] - ETA: 0s - loss: 0.7819 - accuracy: 0.7318
***callbacks***
saving losses to model_1_half/losses.log
Epoch 30: val_loss improved from 0.78712 to 0.78498, saving model to model_1_half/KERAS_check_best_model.h5
Epoch 30: val_loss improved from 0.78712 to 0.78498, saving model to model_1_half/KERAS_check_best_model_weights.h5
Epoch 30: saving model to model_1_half/KERAS_check_model_last.h5
Epoch 30: saving model to model_1_half/KERAS_check_model_last_weights.h5
Epoch 30: saving model to model_1_half/KERAS_check_model_epoch30.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 0.7815 - accuracy: 0.7319 - val_loss: 0.7850 - val_accuracy: 0.7318 - lr: 1.0000e-04
How does this small model compare in terms of performance?
import plotting
import matplotlib.pyplot as plt
from sklearn.metrics import accuracy_score
from tensorflow.keras.models import load_model
if not COLAB:
model_ref = load_model("model_1/KERAS_check_best_model.h5")
else:
model_ref = load_model("iaifi-summer-school/book/model_1/KERAS_check_best_model.h5")
y_ref = model_ref.predict(X_test)
y_prune = model.predict(X_test)
y_small = model_small.predict(X_test)
print(
"Accuracy unpruned: {}".format(
accuracy_score(np.argmax(y_test, axis=1), np.argmax(y_ref, axis=1))
)
)
print(
"Accuracy pruned: {}".format(
accuracy_score(np.argmax(y_test, axis=1), np.argmax(y_prune, axis=1))
)
)
print(
"Accuracy small: {}".format(
accuracy_score(np.argmax(y_test, axis=1), np.argmax(y_small, axis=1))
)
)
fig, ax = plt.subplots(figsize=(9, 9))
_ = plotting.makeRoc(y_test, y_ref, classes)
plt.gca().set_prop_cycle(None) # reset the colors
_ = plotting.makeRoc(y_test, y_prune, classes, linestyle="--")
plt.gca().set_prop_cycle(None) # reset the colors
_ = plotting.makeRoc(y_test, y_small, classes, linestyle=":")
from matplotlib.lines import Line2D
lines = [Line2D([0], [0], ls="-"), Line2D([0], [0], ls="--"), Line2D([0], [0], ls=":")]
from matplotlib.legend import Legend
leg = Legend(
ax, lines, labels=["unpruned", "pruned", "small"], loc="lower right", frameon=False
)
ax.add_artist(leg)
5188/5188 [==============================] - 3s 617us/step
5188/5188 [==============================] - 3s 593us/step
5188/5188 [==============================] - 3s 614us/step
Accuracy unpruned: 0.7516506024096385
Accuracy pruned: 0.7429879518072289
Accuracy small: 0.7301265060240963
<matplotlib.legend.Legend at 0x14dd18340>
This looks quite good. Can we go further? Let’s try a sparsity of 95%.
from tensorflow_model_optimization.python.core.sparsity.keras import (
prune,
pruning_callbacks,
pruning_schedule,
)
from tensorflow_model_optimization.sparsity.keras import strip_pruning
pruning_params = {
"pruning_schedule": pruning_schedule.ConstantSparsity(
0.95, begin_step=2000, frequency=100
)
}
model = prune.prune_low_magnitude(model, **pruning_params)
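Before training, it can help to see when this schedule will actually fire. Below is a minimal sketch of the `ConstantSparsity` logic in plain Python (an assumed paraphrase of its documented behavior, not the tfmot implementation): pruning starts at `begin_step` and is reapplied every `frequency` steps, always with the same target sparsity.

```python
def constant_sparsity(step, target=0.95, begin_step=2000, frequency=100):
    """Sketch of tfmot's ConstantSparsity schedule: return
    (should_prune, sparsity) for a given optimizer step."""
    should_prune = step >= begin_step and (step - begin_step) % frequency == 0
    return should_prune, target

# With 487 batches per epoch, step 2000 falls early in epoch 5,
# so the first four epochs train densely before any weights are zeroed.
for step in (0, 1999, 2000, 2050, 2100):
    print(step, constant_sparsity(step))
```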
Train the model#
We’ll use the same settings as the model for part 1: Adam optimizer with categorical crossentropy loss.
The callbacks will decay the learning rate and save the model into the directory ‘model_4’
The model isn’t very complex, so this should just take a few minutes even on the CPU.
If you’ve restarted the notebook kernel after training once, set train = False
to load the trained model rather than training again.
train = True
if train:
adam = Adam(lr=0.0001)
model.compile(
optimizer=adam, loss=["categorical_crossentropy"], metrics=["accuracy"]
)
callbacks = all_callbacks(
stop_patience=1000,
lr_factor=0.5,
lr_patience=10,
lr_epsilon=0.000001,
lr_cooldown=2,
lr_minimum=0.0000001,
outputDir="model_4",
)
callbacks.callbacks.append(pruning_callbacks.UpdatePruningStep())
model.fit(
X_train_val,
y_train_val,
batch_size=1024,
epochs=30,
validation_split=0.25,
shuffle=True,
callbacks=callbacks.callbacks,
)
# Save the model again but with the pruning 'stripped' to use the regular layer types
model = strip_pruning(model)
model.save("model_4/KERAS_check_best_model.h5")
else:
from tensorflow.keras.models import load_model
model = load_model("model_4/KERAS_check_best_model.h5")
WARNING:tensorflow:`epsilon` argument is deprecated and will be removed, use `min_delta` instead.
WARNING:tensorflow:`period` argument is deprecated. Please use `save_freq` to specify the frequency in number of batches seen.
Epoch 1/30
/Users/wmccorma/miniconda3/envs/ml-iaifi/lib/python3.9/site-packages/keras/optimizers/optimizer_v2/adam.py:110: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
super(Adam, self).__init__(name, **kwargs)
1/487 [..............................] - ETA: 14:42 - loss: 0.7493 - accuracy: 0.7510WARNING:tensorflow:Callback method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0019s vs `on_train_batch_end` time: 0.0058s). Check your callbacks.
464/487 [===========================>..] - ETA: 0s - loss: 0.7543 - accuracy: 0.7457
***callbacks***
saving losses to model_4/losses.log
Epoch 1: val_loss improved from inf to 0.75662, saving model to model_4/KERAS_check_best_model.h5
Epoch 1: val_loss improved from inf to 0.75662, saving model to model_4/KERAS_check_best_model_weights.h5
Epoch 1: saving model to model_4/KERAS_check_model_last.h5
Epoch 1: saving model to model_4/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 3s 3ms/step - loss: 0.7540 - accuracy: 0.7458 - val_loss: 0.7566 - val_accuracy: 0.7457 - lr: 1.0000e-04
Epoch 2/30
480/487 [============================>.] - ETA: 0s - loss: 0.7504 - accuracy: 0.7468
***callbacks***
saving losses to model_4/losses.log
Epoch 2: val_loss improved from 0.75662 to 0.75323, saving model to model_4/KERAS_check_best_model.h5
Epoch 2: val_loss improved from 0.75662 to 0.75323, saving model to model_4/KERAS_check_best_model_weights.h5
Epoch 2: saving model to model_4/KERAS_check_model_last.h5
Epoch 2: saving model to model_4/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 3ms/step - loss: 0.7504 - accuracy: 0.7468 - val_loss: 0.7532 - val_accuracy: 0.7470 - lr: 1.0000e-04
Epoch 3/30
482/487 [============================>.] - ETA: 0s - loss: 0.7473 - accuracy: 0.7477
***callbacks***
saving losses to model_4/losses.log
Epoch 3: val_loss improved from 0.75323 to 0.75066, saving model to model_4/KERAS_check_best_model.h5
Epoch 3: val_loss improved from 0.75323 to 0.75066, saving model to model_4/KERAS_check_best_model_weights.h5
Epoch 3: saving model to model_4/KERAS_check_model_last.h5
Epoch 3: saving model to model_4/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 3ms/step - loss: 0.7473 - accuracy: 0.7478 - val_loss: 0.7507 - val_accuracy: 0.7477 - lr: 1.0000e-04
Epoch 4/30
483/487 [============================>.] - ETA: 0s - loss: 0.7445 - accuracy: 0.7488
***callbacks***
saving losses to model_4/losses.log
Epoch 4: val_loss improved from 0.75066 to 0.74793, saving model to model_4/KERAS_check_best_model.h5
Epoch 4: val_loss improved from 0.75066 to 0.74793, saving model to model_4/KERAS_check_best_model_weights.h5
Epoch 4: saving model to model_4/KERAS_check_model_last.h5
Epoch 4: saving model to model_4/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 3ms/step - loss: 0.7446 - accuracy: 0.7488 - val_loss: 0.7479 - val_accuracy: 0.7488 - lr: 1.0000e-04
Epoch 5/30
466/487 [===========================>..] - ETA: 0s - loss: 1.2871 - accuracy: 0.5409
***callbacks***
saving losses to model_4/losses.log
Epoch 5: val_loss did not improve from 0.74793
Epoch 5: val_loss did not improve from 0.74793
Epoch 5: saving model to model_4/KERAS_check_model_last.h5
Epoch 5: saving model to model_4/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 3ms/step - loss: 1.2867 - accuracy: 0.5414 - val_loss: 1.2841 - val_accuracy: 0.5455 - lr: 1.0000e-04
Epoch 6/30
482/487 [============================>.] - ETA: 0s - loss: 1.2397 - accuracy: 0.5549
***callbacks***
saving losses to model_4/losses.log
Epoch 6: val_loss did not improve from 0.74793
Epoch 6: val_loss did not improve from 0.74793
Epoch 6: saving model to model_4/KERAS_check_model_last.h5
Epoch 6: saving model to model_4/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 3ms/step - loss: 1.2394 - accuracy: 0.5548 - val_loss: 1.2036 - val_accuracy: 0.5566 - lr: 1.0000e-04
Epoch 7/30
476/487 [============================>.] - ETA: 0s - loss: 1.1740 - accuracy: 0.5626
***callbacks***
saving losses to model_4/losses.log
Epoch 7: val_loss did not improve from 0.74793
Epoch 7: val_loss did not improve from 0.74793
Epoch 7: saving model to model_4/KERAS_check_model_last.h5
Epoch 7: saving model to model_4/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 3ms/step - loss: 1.1736 - accuracy: 0.5627 - val_loss: 1.1523 - val_accuracy: 0.5637 - lr: 1.0000e-04
Epoch 8/30
463/487 [===========================>..] - ETA: 0s - loss: 1.1347 - accuracy: 0.5688
***callbacks***
saving losses to model_4/losses.log
Epoch 8: val_loss did not improve from 0.74793
Epoch 8: val_loss did not improve from 0.74793
Epoch 8: saving model to model_4/KERAS_check_model_last.h5
Epoch 8: saving model to model_4/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 1.1334 - accuracy: 0.5692 - val_loss: 1.1204 - val_accuracy: 0.5690 - lr: 1.0000e-04
Epoch 9/30
472/487 [============================>.] - ETA: 0s - loss: 1.1062 - accuracy: 0.5742
***callbacks***
saving losses to model_4/losses.log
Epoch 9: val_loss did not improve from 0.74793
Epoch 9: val_loss did not improve from 0.74793
Epoch 9: saving model to model_4/KERAS_check_model_last.h5
Epoch 9: saving model to model_4/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 1.1057 - accuracy: 0.5744 - val_loss: 1.0965 - val_accuracy: 0.5737 - lr: 1.0000e-04
Epoch 10/30
462/487 [===========================>..] - ETA: 0s - loss: 1.0842 - accuracy: 0.5787
***callbacks***
saving losses to model_4/losses.log
Epoch 10: val_loss did not improve from 0.74793
Epoch 10: val_loss did not improve from 0.74793
Epoch 10: saving model to model_4/KERAS_check_model_last.h5
Epoch 10: saving model to model_4/KERAS_check_model_last_weights.h5
Epoch 10: saving model to model_4/KERAS_check_model_epoch10.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 1.0844 - accuracy: 0.5783 - val_loss: 1.0777 - val_accuracy: 0.5768 - lr: 1.0000e-04
Epoch 11/30
465/487 [===========================>..] - ETA: 0s - loss: 1.0678 - accuracy: 0.5814
***callbacks***
saving losses to model_4/losses.log
Epoch 11: val_loss did not improve from 0.74793
Epoch 11: val_loss did not improve from 0.74793
Epoch 11: saving model to model_4/KERAS_check_model_last.h5
Epoch 11: saving model to model_4/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 1.0674 - accuracy: 0.5813 - val_loss: 1.0623 - val_accuracy: 0.5795 - lr: 1.0000e-04
Epoch 12/30
475/487 [============================>.] - ETA: 0s - loss: 1.0530 - accuracy: 0.5838
***callbacks***
saving losses to model_4/losses.log
Epoch 12: val_loss did not improve from 0.74793
Epoch 12: val_loss did not improve from 0.74793
Epoch 12: saving model to model_4/KERAS_check_model_last.h5
Epoch 12: saving model to model_4/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 1.0531 - accuracy: 0.5838 - val_loss: 1.0490 - val_accuracy: 0.5823 - lr: 1.0000e-04
Epoch 13/30
482/487 [============================>.] - ETA: 0s - loss: 1.0407 - accuracy: 0.5859
***callbacks***
saving losses to model_4/losses.log
Epoch 13: val_loss did not improve from 0.74793
Epoch 13: val_loss did not improve from 0.74793
Epoch 13: saving model to model_4/KERAS_check_model_last.h5
Epoch 13: saving model to model_4/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 3ms/step - loss: 1.0405 - accuracy: 0.5859 - val_loss: 1.0369 - val_accuracy: 0.5842 - lr: 1.0000e-04
Epoch 14/30
484/487 [============================>.] - ETA: 0s - loss: 1.0285 - accuracy: 0.5877
***callbacks***
saving losses to model_4/losses.log
Epoch 14: val_loss did not improve from 0.74793
Epoch 14: val_loss did not improve from 0.74793
Epoch 14: saving model to model_4/KERAS_check_model_last.h5
Epoch 14: saving model to model_4/KERAS_check_model_last_weights.h5
Epoch 14: ReduceLROnPlateau reducing learning rate to 4.999999873689376e-05.
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 1.0285 - accuracy: 0.5877 - val_loss: 1.0255 - val_accuracy: 0.5863 - lr: 1.0000e-04
Epoch 15/30
476/487 [============================>.] - ETA: 0s - loss: 1.0209 - accuracy: 0.5889
***callbacks***
saving losses to model_4/losses.log
Epoch 15: val_loss did not improve from 0.74793
Epoch 15: val_loss did not improve from 0.74793
Epoch 15: saving model to model_4/KERAS_check_model_last.h5
Epoch 15: saving model to model_4/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 3ms/step - loss: 1.0205 - accuracy: 0.5890 - val_loss: 1.0205 - val_accuracy: 0.5870 - lr: 5.0000e-05
Epoch 16/30
484/487 [============================>.] - ETA: 0s - loss: 1.0156 - accuracy: 0.5898
***callbacks***
saving losses to model_4/losses.log
Epoch 16: val_loss did not improve from 0.74793
Epoch 16: val_loss did not improve from 0.74793
Epoch 16: saving model to model_4/KERAS_check_model_last.h5
Epoch 16: saving model to model_4/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 3ms/step - loss: 1.0157 - accuracy: 0.5898 - val_loss: 1.0157 - val_accuracy: 0.5878 - lr: 5.0000e-05
Epoch 17/30
461/487 [===========================>..] - ETA: 0s - loss: 1.0111 - accuracy: 0.5904
***callbacks***
saving losses to model_4/losses.log
Epoch 17: val_loss did not improve from 0.74793
Epoch 17: val_loss did not improve from 0.74793
Epoch 17: saving model to model_4/KERAS_check_model_last.h5
Epoch 17: saving model to model_4/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 1.0110 - accuracy: 0.5905 - val_loss: 1.0111 - val_accuracy: 0.5886 - lr: 5.0000e-05
Epoch 18/30
485/487 [============================>.] - ETA: 0s - loss: 1.0063 - accuracy: 0.5912
***callbacks***
saving losses to model_4/losses.log
Epoch 18: val_loss did not improve from 0.74793
Epoch 18: val_loss did not improve from 0.74793
Epoch 18: saving model to model_4/KERAS_check_model_last.h5
Epoch 18: saving model to model_4/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 2s 3ms/step - loss: 1.0064 - accuracy: 0.5911 - val_loss: 1.0067 - val_accuracy: 0.5892 - lr: 5.0000e-05
Epoch 19/30
471/487 [============================>.] - ETA: 0s - loss: 1.0021 - accuracy: 0.5917
***callbacks***
saving losses to model_4/losses.log
Epoch 19: val_loss did not improve from 0.74793
Epoch 19: val_loss did not improve from 0.74793
Epoch 19: saving model to model_4/KERAS_check_model_last.h5
Epoch 19: saving model to model_4/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 3ms/step - loss: 1.0021 - accuracy: 0.5916 - val_loss: 1.0024 - val_accuracy: 0.5898 - lr: 5.0000e-05
Epoch 20/30
471/487 [============================>.] - ETA: 0s - loss: 0.9980 - accuracy: 0.5922
***callbacks***
saving losses to model_4/losses.log
Epoch 20: val_loss did not improve from 0.74793
Epoch 20: val_loss did not improve from 0.74793
Epoch 20: saving model to model_4/KERAS_check_model_last.h5
Epoch 20: saving model to model_4/KERAS_check_model_last_weights.h5
Epoch 20: saving model to model_4/KERAS_check_model_epoch20.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 0.9978 - accuracy: 0.5922 - val_loss: 0.9982 - val_accuracy: 0.5902 - lr: 5.0000e-05
Epoch 21/30
461/487 [===========================>..] - ETA: 0s - loss: 0.9937 - accuracy: 0.5926
***callbacks***
saving losses to model_4/losses.log
Epoch 21: val_loss did not improve from 0.74793
Epoch 21: val_loss did not improve from 0.74793
Epoch 21: saving model to model_4/KERAS_check_model_last.h5
Epoch 21: saving model to model_4/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 0.9936 - accuracy: 0.5928 - val_loss: 0.9940 - val_accuracy: 0.5905 - lr: 5.0000e-05
Epoch 22/30
465/487 [===========================>..] - ETA: 0s - loss: 0.9899 - accuracy: 0.5929
***callbacks***
saving losses to model_4/losses.log
Epoch 22: val_loss did not improve from 0.74793
Epoch 22: val_loss did not improve from 0.74793
Epoch 22: saving model to model_4/KERAS_check_model_last.h5
Epoch 22: saving model to model_4/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 0.9895 - accuracy: 0.5931 - val_loss: 0.9899 - val_accuracy: 0.5910 - lr: 5.0000e-05
Epoch 23/30
468/487 [===========================>..] - ETA: 0s - loss: 0.9860 - accuracy: 0.5934
***callbacks***
saving losses to model_4/losses.log
Epoch 23: val_loss did not improve from 0.74793
Epoch 23: val_loss did not improve from 0.74793
Epoch 23: saving model to model_4/KERAS_check_model_last.h5
Epoch 23: saving model to model_4/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 0.9854 - accuracy: 0.5936 - val_loss: 0.9858 - val_accuracy: 0.5915 - lr: 5.0000e-05
Epoch 24/30
481/487 [============================>.] - ETA: 0s - loss: 0.9813 - accuracy: 0.5941
***callbacks***
saving losses to model_4/losses.log
Epoch 24: val_loss did not improve from 0.74793
Epoch 24: val_loss did not improve from 0.74793
Epoch 24: saving model to model_4/KERAS_check_model_last.h5
Epoch 24: saving model to model_4/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 0.9813 - accuracy: 0.5940 - val_loss: 0.9817 - val_accuracy: 0.5920 - lr: 5.0000e-05
Epoch 25/30
468/487 [===========================>..] - ETA: 0s - loss: 0.9774 - accuracy: 0.5942
***callbacks***
saving losses to model_4/losses.log
Epoch 25: val_loss did not improve from 0.74793
Epoch 25: val_loss did not improve from 0.74793
Epoch 25: saving model to model_4/KERAS_check_model_last.h5
Epoch 25: saving model to model_4/KERAS_check_model_last_weights.h5
Epoch 25: ReduceLROnPlateau reducing learning rate to 2.499999936844688e-05.
***callbacks end***
487/487 [==============================] - 1s 3ms/step - loss: 0.9771 - accuracy: 0.5944 - val_loss: 0.9775 - val_accuracy: 0.5923 - lr: 5.0000e-05
Epoch 26/30
485/487 [============================>.] - ETA: 0s - loss: 0.9737 - accuracy: 0.5946
***callbacks***
saving losses to model_4/losses.log
Epoch 26: val_loss did not improve from 0.74793
Epoch 26: val_loss did not improve from 0.74793
Epoch 26: saving model to model_4/KERAS_check_model_last.h5
Epoch 26: saving model to model_4/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 0.9738 - accuracy: 0.5945 - val_loss: 0.9753 - val_accuracy: 0.5925 - lr: 2.5000e-05
Epoch 27/30
468/487 [===========================>..] - ETA: 0s - loss: 0.9718 - accuracy: 0.5944
***callbacks***
saving losses to model_4/losses.log
Epoch 27: val_loss did not improve from 0.74793
Epoch 27: val_loss did not improve from 0.74793
Epoch 27: saving model to model_4/KERAS_check_model_last.h5
Epoch 27: saving model to model_4/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 0.9716 - accuracy: 0.5946 - val_loss: 0.9730 - val_accuracy: 0.5927 - lr: 2.5000e-05
Epoch 28/30
480/487 [============================>.] - ETA: 0s - loss: 0.9693 - accuracy: 0.5949
***callbacks***
saving losses to model_4/losses.log
Epoch 28: val_loss did not improve from 0.74793
Epoch 28: val_loss did not improve from 0.74793
Epoch 28: saving model to model_4/KERAS_check_model_last.h5
Epoch 28: saving model to model_4/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 3ms/step - loss: 0.9694 - accuracy: 0.5949 - val_loss: 0.9707 - val_accuracy: 0.5928 - lr: 2.5000e-05
Epoch 29/30
464/487 [===========================>..] - ETA: 0s - loss: 0.9672 - accuracy: 0.5951
***callbacks***
saving losses to model_4/losses.log
Epoch 29: val_loss did not improve from 0.74793
Epoch 29: val_loss did not improve from 0.74793
Epoch 29: saving model to model_4/KERAS_check_model_last.h5
Epoch 29: saving model to model_4/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 0.9670 - accuracy: 0.5952 - val_loss: 0.9684 - val_accuracy: 0.5934 - lr: 2.5000e-05
Epoch 30/30
468/487 [===========================>..] - ETA: 0s - loss: 0.9646 - accuracy: 0.5958
***callbacks***
saving losses to model_4/losses.log
Epoch 30: val_loss did not improve from 0.74793
Epoch 30: val_loss did not improve from 0.74793
Epoch 30: saving model to model_4/KERAS_check_model_last.h5
Epoch 30: saving model to model_4/KERAS_check_model_last_weights.h5
Epoch 30: saving model to model_4/KERAS_check_model_epoch30.h5
***callbacks end***
487/487 [==============================] - 1s 3ms/step - loss: 0.9646 - accuracy: 0.5958 - val_loss: 0.9659 - val_accuracy: 0.5941 - lr: 2.5000e-05
WARNING:tensorflow:Compiled the loaded model, but the compiled metrics have yet to be built. `model.compile_metrics` will be empty until you train or evaluate the model.
Check sparsity#
w = model.layers[0].weights[0].numpy()  # kernel of the first (pruned) dense layer
h, b = np.histogram(w, bins=100)
plt.figure(figsize=(7, 7))
plt.bar(b[:-1], h, width=b[1] - b[0])
plt.semilogy()
# note: this prints the *fraction* of zero weights
print("% of zeros = {}".format(np.sum(w == 0) / np.size(w)))
% of zeros = 0.9501953125
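The check above looks only at the first layer. As a sketch (not part of the tutorial), the same zero-counting idea extends to a per-layer report; the helper name `layer_sparsity` and the toy weight arrays below are illustrative assumptions, standing in for `layer.get_weights()` on a real Keras model.

```python
import numpy as np

# Hypothetical helper: fraction of exactly-zero weights for each named
# weight array (stand-ins for the kernels of a pruned Keras model).
def layer_sparsity(weight_arrays):
    """Return {name: fraction_of_zeros} for a dict of weight arrays."""
    return {name: float(np.sum(w == 0) / w.size) for name, w in weight_arrays.items()}

# Toy example: one layer pruned to ~95% sparsity, one left dense.
rng = np.random.default_rng(0)
dense = rng.normal(size=(64, 32))
sparse = dense.copy()
flat = sparse.ravel()  # view into `sparse`, so zeroing entries here zeroes the array
flat[rng.choice(flat.size, size=int(0.95 * flat.size), replace=False)] = 0.0

report = layer_sparsity({"fc1": sparse, "fc2": dense})
print(report)
```

On a real model you would build `weight_arrays` from `{l.name: l.get_weights()[0] for l in model.layers if l.get_weights()}`.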
Check performance#
How does this 95% sparse model compare against the other models?
import plotting
import matplotlib.pyplot as plt
from sklearn.metrics import accuracy_score
from tensorflow.keras.models import load_model
if not COLAB:
    model_ref = load_model("model_1/KERAS_check_best_model.h5")
    model_prune75 = load_model("model_2/KERAS_check_best_model.h5")
else:
    model_ref = load_model("iaifi-summer-school/book/model_1/KERAS_check_best_model.h5")
    model_prune75 = load_model("iaifi-summer-school/book/model_2/KERAS_check_best_model.h5")
y_ref = model_ref.predict(X_test)
y_prune75 = model_prune75.predict(X_test)
y_prune95 = model.predict(X_test)
print(
    "Accuracy unpruned: {}".format(
        accuracy_score(np.argmax(y_test, axis=1), np.argmax(y_ref, axis=1))
    )
)
print(
    "Accuracy pruned (75%): {}".format(
        accuracy_score(np.argmax(y_test, axis=1), np.argmax(y_prune75, axis=1))
    )
)
print(
    "Accuracy pruned (95%): {}".format(
        accuracy_score(np.argmax(y_test, axis=1), np.argmax(y_prune95, axis=1))
    )
)
fig, ax = plt.subplots(figsize=(9, 9))
_ = plotting.makeRoc(y_test, y_ref, classes)
plt.gca().set_prop_cycle(None) # reset the colors
_ = plotting.makeRoc(y_test, y_prune75, classes, linestyle="--")
plt.gca().set_prop_cycle(None) # reset the colors
_ = plotting.makeRoc(y_test, y_prune95, classes, linestyle=":")
from matplotlib.lines import Line2D
lines = [Line2D([0], [0], ls="-"), Line2D([0], [0], ls="--"), Line2D([0], [0], ls=":")]
from matplotlib.legend import Legend
leg = Legend(
    ax,
    lines,
    labels=["unpruned", "pruned (75%)", "pruned (95%)"],
    loc="lower right",
    frameon=False,
)
ax.add_artist(leg)
5188/5188 [==============================] - 3s 622us/step
5188/5188 [==============================] - 4s 691us/step
5188/5188 [==============================] - 4s 709us/step
Accuracy unpruned: 0.7516506024096385
Accuracy pruned (75%): 0.7493855421686747
Accuracy pruned (95%): 0.5956506024096385
<matplotlib.legend.Legend at 0x14dd49520>
OK, clearly 95% sparsity is too aggressive for this model (at least with this pruning scheme). For some classes the performance is not terrible, but overall the accuracy loss is substantial: roughly 15 percentage points below both the unpruned and the 75%-pruned models, which are nearly indistinguishable from each other.
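"Not terrible for some classes" can be made quantitative with a per-class ROC AUC. The sketch below is not from the tutorial: `per_class_auc` is a hypothetical helper, and the toy one-hot labels and noisy scores stand in for `y_test` and `y_prune95`.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical helper: one-vs-rest AUC per class, given one-hot labels
# and softmax-like scores (both shaped [n_samples, n_classes]).
def per_class_auc(y_true, y_score, classes):
    return {c: roc_auc_score(y_true[:, i], y_score[:, i]) for i, c in enumerate(classes)}

# Toy stand-in data: 3 classes, 300 samples.
rng = np.random.default_rng(1)
labels = rng.integers(0, 3, size=300)
y_true = np.eye(3)[labels]
# Scores correlated with the truth plus noise, so the classes are separable.
y_score = y_true + rng.normal(scale=0.8, size=y_true.shape)

aucs = per_class_auc(y_true, y_score, ["class_a", "class_b", "class_c"])
print({k: round(v, 3) for k, v in aucs.items()})
```

On the real outputs, `per_class_auc(y_test, y_prune95, classes)` would show which class curves in the ROC plot survive the 95% pruning and which collapse.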