Getting Started
First we need to install some packages and download some files for Colab.
!apt-get -qq install -y graphviz && pip install pydot
!pip install -U matplotlib
!pip install git+https://github.com/fastmachinelearning/hls4ml.git@main#egg=hls4ml[profiling]
!pip install qkeras==0.9.0
!pip install wget
import wget
import os.path
# Download the helper scripts (callbacks.py, plotting.py) if they are not already present (e.g. on Colab)
if not os.path.exists("callbacks.py"):
    url = "https://raw.githubusercontent.com/jmduarte/iaifi-summer-school/main/book/callbacks.py"
    callbacksFile = wget.download(url)
if not os.path.exists("plotting.py"):
    urlPlot = "https://raw.githubusercontent.com/jmduarte/iaifi-summer-school/main/book/plotting.py"
    plotFile = wget.download(urlPlot)
E: Could not open lock file /var/lib/dpkg/lock-frontend - open (13: Permission denied)
E: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), are you root?
Requirement already satisfied: matplotlib in /opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages (3.5.3)
Collecting hls4ml[profiling]
  Cloning https://github.com/fastmachinelearning/hls4ml.git (to revision main) to /tmp/pip-install-kkeze52q/hls4ml_22dde887014b4cdcb749e52575a9daaf
  Resolved https://github.com/fastmachinelearning/hls4ml.git to commit fe9d3e71b03e0422c7643027880310bd2cc02cb1
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: pyyaml in /opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages (from hls4ml[profiling]) (6.0)
...
Requirement already satisfied: qkeras==0.9.0 in /opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages (0.9.0)
...
Requirement already satisfied: wget in /opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages (3.2)
from tensorflow.keras.utils import to_categorical
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder, StandardScaler
import numpy as np
%matplotlib inline
seed = 0
np.random.seed(seed)
import tensorflow as tf
tf.random.set_seed(seed)
# import os
# os.environ['PATH'] = '/opt/Xilinx/Vivado/2019.2/bin:' + os.environ['PATH']
# For this tutorial we won't actually be running Vivado, so these lines are commented out,
# but if you want to run on an FPGA, simply uncomment them.
2022-11-08 17:45:20.355434: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-11-08 17:45:20.814099: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/hostedtoolcache/Python/3.7.15/x64/lib
2022-11-08 17:45:20.814121: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2022-11-08 17:45:20.865544: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2022-11-08 17:45:22.238400: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/hostedtoolcache/Python/3.7.15/x64/lib
2022-11-08 17:45:22.238508: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/hostedtoolcache/Python/3.7.15/x64/lib
2022-11-08 17:45:22.238518: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
Fetch the jet tagging dataset from OpenML
data = fetch_openml("hls4ml_lhc_jets_hlf")
X, y = data["data"], data["target"]
Let’s print some information about the dataset
Print the feature names and the dataset shape
print(data["feature_names"])
print(X.shape, y.shape)
print(X[:5])
print(y[:5])
['zlogz', 'c1_b0_mmdt', 'c1_b1_mmdt', 'c1_b2_mmdt', 'c2_b1_mmdt', 'c2_b2_mmdt', 'd2_b1_mmdt', 'd2_b2_mmdt', 'd2_a1_b1_mmdt', 'd2_a1_b2_mmdt', 'm2_b1_mmdt', 'm2_b2_mmdt', 'n2_b1_mmdt', 'n2_b2_mmdt', 'mass_mmdt', 'multiplicity']
(830000, 16) (830000,)
zlogz c1_b0_mmdt c1_b1_mmdt c1_b2_mmdt c2_b1_mmdt c2_b2_mmdt \
0 -2.935125 0.383155 0.005126 0.000084 0.009070 0.000179
1 -1.927335 0.270699 0.001585 0.000011 0.003232 0.000029
2 -3.112147 0.458171 0.097914 0.028588 0.124278 0.038487
3 -2.666515 0.437068 0.049122 0.007978 0.047477 0.004802
4 -2.484843 0.428981 0.041786 0.006110 0.023066 0.001123
d2_b1_mmdt d2_b2_mmdt d2_a1_b1_mmdt d2_a1_b2_mmdt m2_b1_mmdt \
0 1.769445 2.123898 1.769445 0.308185 0.135687
1 2.038834 2.563099 2.038834 0.211886 0.063729
2 1.269254 1.346238 1.269254 0.246488 0.115636
3 0.966505 0.601864 0.966505 0.160756 0.082196
4 0.552002 0.183821 0.552002 0.084338 0.048006
m2_b2_mmdt n2_b1_mmdt n2_b2_mmdt mass_mmdt multiplicity
0 0.083278 0.412136 0.299058 8.926882 75.0
1 0.036310 0.310217 0.226661 3.886512 31.0
2 0.079094 0.357559 0.289220 162.144669 61.0
3 0.033311 0.238871 0.094516 91.258934 39.0
4 0.014450 0.141906 0.036665 79.725777 35.0
0 g
1 w
2 t
3 z
4 w
Name: class, dtype: category
Categories (5, object): ['g', 'q', 'w', 'z', 't']
As you see above, the y target is an array of strings, e.g. ['g', 'w', ...]. These correspond to the different source particles of the jets. You will notice that, except for quark- ('q') and gluon-initiated ('g') jets, all other jets in the dataset have at least one 'prong'.
Let’s see what the jet variables look like
Many of these variables are energy correlation functions \(N\), \(M\), \(C\), and \(D\) (1305.0007, 1609.07483). The others are the jet mass (computed with the modified mass drop tagger) \(m_\textrm{mMDT}\), \(\sum z\log z\), where the sum runs over the particles in the jet and \(z\) is the fraction of jet momentum carried by a given particle, and the overall multiplicity of particles in the jet.
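Concretely, writing \(z_i\) for the fraction of the jet momentum carried by constituent \(i\), the zlogz observable is (in standard, slightly schematic, notation)

\[
\texttt{zlogz} = \sum_{i \in \mathrm{jet}} z_i \log z_i .
\]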
import matplotlib.pyplot as plt

fig, axs = plt.subplots(
    int(np.ceil(len(data["feature_names"]) / 3)),
    3,
    figsize=(8 * 3, 8 * len(data["feature_names"]) / 3),
)
ix = 0
for ax1 in axs:
    for ax in ax1:
        feat = data["feature_names"][ix]
        bins = np.linspace(np.min(X[:][feat]), np.max(X[:][feat]), 20)
        # overlay one histogram per jet class for this feature
        for c in ["g", "q", "w", "z", "t"]:
            ax.hist(X[y == c][feat], bins=bins, histtype="step", label=c, lw=2)
        ax.set_xlabel(feat)
        ax.legend()
        ix = ix + 1
        if ix >= len(data["feature_names"]):
            break
plt.show()
Because the y target is an array of strings, e.g. ['g', 'w', ...], we need to convert it to a “one-hot” encoding for the training.
Then, split the dataset into training and validation sets.
le = LabelEncoder()
y = le.fit_transform(y)
y = to_categorical(y, 5)
X_train_val, X_test, y_train_val, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
print(y[:5])
[[1. 0. 0. 0. 0.]
[0. 0. 0. 1. 0.]
[0. 0. 1. 0. 0.]
[0. 0. 0. 0. 1.]
[0. 0. 0. 1. 0.]]
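The columns of this one-hot matrix follow the class ordering chosen by LabelEncoder, which sorts the labels alphabetically. You can check the mapping directly:

print(le.classes_)  # e.g. ['g' 'q' 't' 'w' 'z']: column 0 is 'g', column 3 is 'w', etc.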
scaler = StandardScaler()
X_train_val = scaler.fit_transform(X_train_val)
X_test = scaler.transform(X_test)
We now save the datasets as raw numpy arrays so that we can restart later without re-downloading and re-converting the dataset.
np.save("X_train_val.npy", X_train_val)
np.save("X_test.npy", X_test)
np.save("y_train_val.npy", y_train_val)
np.save("y_test.npy", y_test)
np.save("classes.npy", le.classes_)
classes = le.classes_
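To restart from this point in a fresh session, the arrays can be loaded back, for example:

import numpy as np

# reload the saved training/test arrays and class labels
X_train_val = np.load("X_train_val.npy")
X_test = np.load("X_test.npy")
y_train_val = np.load("y_train_val.npy")
y_test = np.load("y_test.npy")
classes = np.load("classes.npy", allow_pickle=True)  # allow_pickle in case the labels were saved as Python objects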
Now construct a simple neural network
We’ll use 3 hidden layers with 64, then 32, then 32 neurons. Each layer will use relu activation.
Add an output layer with 5 neurons (one for each class), then finish with softmax activation.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, BatchNormalization
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.regularizers import l1
from callbacks import all_callbacks
model = Sequential()
model.add(Dense(64, input_shape=(16,), name="fc1", kernel_initializer="lecun_uniform"))
model.add(Activation(activation="relu", name="relu1"))
model.add(Dense(32, name="fc2", kernel_initializer="lecun_uniform"))
model.add(Activation(activation="relu", name="relu2"))
model.add(Dense(32, name="fc3", kernel_initializer="lecun_uniform"))
model.add(Activation(activation="relu", name="relu3"))
model.add(Dense(5, name="output", kernel_initializer="lecun_uniform"))
model.add(Activation(activation="softmax", name="softmax"))
Train the model
We’ll use the Adam optimizer with categorical crossentropy loss.
The callbacks will decay the learning rate and save the model into a directory ‘model_1’.
The model isn’t very complex, so this should take just a few minutes, even on a CPU.
If you’ve restarted the notebook kernel after training once, set train = False to load the trained model instead of retraining.
train = True
if train:
    adam = Adam(lr=0.0001)
    model.compile(
        optimizer=adam, loss=["categorical_crossentropy"], metrics=["accuracy"]
    )
    callbacks = all_callbacks(
        stop_patience=1000,
        lr_factor=0.5,
        lr_patience=10,
        lr_epsilon=0.000001,
        lr_cooldown=2,
        lr_minimum=0.0000001,
        outputDir="model_1",
    )
    model.fit(
        X_train_val,
        y_train_val,
        batch_size=1024,
        epochs=30,
        validation_split=0.25,
        shuffle=True,
        callbacks=callbacks.callbacks,
    )
else:
    from tensorflow.keras.models import load_model

    model = load_model("model_1/KERAS_check_best_model.h5")
WARNING:tensorflow:`epsilon` argument is deprecated and will be removed, use `min_delta` instead.
WARNING:tensorflow:`period` argument is deprecated. Please use `save_freq` to specify the frequency in number of batches seen.
Epoch 1/30
/Users/wmccorma/miniconda3/envs/ml-iaifi/lib/python3.9/site-packages/keras/optimizers/optimizer_v2/adam.py:110: UserWarning: The `lr` argument is deprecated, use `learning_rate` instead.
  super(Adam, self).__init__(name, **kwargs)
***callbacks***
saving losses to model_1/losses.log
Epoch 1: val_loss improved from inf to 1.06040, saving model to model_1/KERAS_check_best_model.h5
Epoch 1: val_loss improved from inf to 1.06040, saving model to model_1/KERAS_check_best_model_weights.h5
Epoch 1: saving model to model_1/KERAS_check_model_last.h5
Epoch 1: saving model to model_1/KERAS_check_model_last_weights.h5
***callbacks end***
487/487 [==============================] - 2s 2ms/step - loss: 1.2650 - accuracy: 0.5163 - val_loss: 1.0604 - val_accuracy: 0.6376 - lr: 1.0000e-04
Epoch 2/30
487/487 [==============================] - 1s 2ms/step - loss: 0.9899 - accuracy: 0.6678 - val_loss: 0.9414 - val_accuracy: 0.6907 - lr: 1.0000e-04
...
Epoch 30/30
***callbacks***
saving losses to model_1/losses.log
Epoch 30: val_loss improved from 0.69826 to 0.69691, saving model to model_1/KERAS_check_best_model.h5
Epoch 30: val_loss improved from 0.69826 to 0.69691, saving model to model_1/KERAS_check_best_model_weights.h5
Epoch 30: saving model to model_1/KERAS_check_model_last.h5
Epoch 30: saving model to model_1/KERAS_check_model_last_weights.h5
Epoch 30: saving model to model_1/KERAS_check_model_epoch30.h5
***callbacks end***
487/487 [==============================] - 1s 2ms/step - loss: 0.6923 - accuracy: 0.7535 - val_loss: 0.6969 - val_accuracy: 0.7526 - lr: 1.0000e-04
Check performance#
Check the accuracy and make a ROC curve.
import plotting
import matplotlib.pyplot as plt
from sklearn.metrics import accuracy_score
y_keras = model.predict(X_test)
print(
    "Accuracy: {}".format(
        accuracy_score(np.argmax(y_test, axis=1), np.argmax(y_keras, axis=1))
    )
)
plt.figure(figsize=(9, 9))
_ = plotting.makeRoc(y_test, y_keras, le.classes_)
5188/5188 [==============================] - 3s 575us/step
Accuracy: 0.7516506024096385
Convert the model to an hls4ml project#
Now we will go through the steps to convert the model we trained to an hls4ml project. With High Level Synthesis (HLS) tools this project can be synthesized into FPGA firmware. For this tutorial we will use hls4ml to explore the possibilities for quantized and pruned implementations of our neural network.
With a Vivado HLS installation, we could also synthesize the model and check metrics such as latency and FPGA resource usage.
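As a hedged sketch of that step (not run in this notebook), synthesis might look like the following; this assumes Vivado HLS is installed and on the PATH, and uses the hls_model object created in the next section.
# Not executed here: requires a local Vivado HLS installation, and uses the
# hls_model created below.
hls_model.build(csim=False)  # run C synthesis on the generated project
hls4ml.report.read_vivado_report("model_1/hls4ml_prj")  # latency & resource estimates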
Make an hls4ml config & model#
The hls4ml Neural Network inference library is controlled through a configuration dictionary.
In this example we’ll use the simplest variation. The part argument denotes the target FPGA for the project, but this will not matter for our purposes.
import hls4ml
config = hls4ml.utils.config_from_keras_model(model, granularity="model")
print("-----------------------------------")
print("Configuration")
plotting.print_dict(config)
print("-----------------------------------")
hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    output_dir="model_1/hls4ml_prj",
    part="xcu250-figd2104-2L-e",
)
Interpreting Sequential
Topology:
Layer name: fc1_input, layer type: Input
Layer name: fc1, layer type: Dense
-> Activation (linear), layer name: fc1
Layer name: relu1, layer type: Activation
Layer name: fc2, layer type: Dense
-> Activation (linear), layer name: fc2
Layer name: relu2, layer type: Activation
Layer name: fc3, layer type: Dense
-> Activation (linear), layer name: fc3
Layer name: relu3, layer type: Activation
Layer name: output, layer type: Dense
-> Activation (linear), layer name: output
Layer name: softmax, layer type: Activation
-----------------------------------
Configuration
Model
Precision: ap_fixed<16,6>
ReuseFactor: 1
Strategy: Latency
-----------------------------------
Interpreting Sequential
Topology:
Layer name: fc1_input, layer type: InputLayer, input shapes: [[None, 16]], output shape: [None, 16]
Layer name: fc1, layer type: Dense, input shapes: [[None, 16]], output shape: [None, 64]
Layer name: relu1, layer type: Activation, input shapes: [[None, 64]], output shape: [None, 64]
Layer name: fc2, layer type: Dense, input shapes: [[None, 64]], output shape: [None, 32]
Layer name: relu2, layer type: Activation, input shapes: [[None, 32]], output shape: [None, 32]
Layer name: fc3, layer type: Dense, input shapes: [[None, 32]], output shape: [None, 32]
Layer name: relu3, layer type: Activation, input shapes: [[None, 32]], output shape: [None, 32]
Layer name: output, layer type: Dense, input shapes: [[None, 32]], output shape: [None, 5]
Layer name: softmax, layer type: Softmax, input shapes: [[None, 5]], output shape: [None, 5]
Creating HLS model
Let’s visualise what we created. The model architecture is shown, annotated with the shapes and data types.
hls4ml.utils.plot_model(hls_model, show_shapes=True, show_precision=True, to_file=None)
Failed to import pydot. You must install pydot and graphviz for `pydotprint` to work.
Precision#
All the numbers we use in the hls4ml models will be in what is called fixed-point encoding. The traditional floating-point numbers you are likely more used to split their bits into a sign, an exponent, and a mantissa (the IEEE 754 scheme), which provides a wide range of possible values with fine granularity.
However, using this many bits can be excessive and, when running algorithms on FPGAs or similar devices, requires substantial overhead. Instead, a flexible fixed-point encoding scheme is typically used: the integer component and the fractional component of the number are separated, and a fixed number of bits is used to encode each part. The ap_fixed<width,integer> notation is specific to Vivado HLS (and hls4ml), but the concept of fixed-point encoding is general.
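To make the encoding concrete, here is a minimal numpy sketch (not part of hls4ml) that emulates ap_fixed<16,6> using round-to-nearest with saturation; note that Vivado HLS's ap_fixed defaults actually truncate and wrap unless other modes are specified, so this is illustrative only.
import numpy as np

def quantize_fixed(x, width=16, integer=6):
    """Emulate signed ap_fixed<width,integer> with round-to-nearest and
    saturation. `integer` counts the sign bit, as in Vivado HLS."""
    frac_bits = width - integer
    lsb = 2.0**-frac_bits                 # smallest representable step
    max_val = 2.0 ** (integer - 1) - lsb  # largest positive value
    min_val = -(2.0 ** (integer - 1))     # most negative value
    return np.clip(np.round(np.asarray(x) / lsb) * lsb, min_val, max_val)

print(quantize_fixed(3.14159))  # 3.1416015625: nearest multiple of 2**-10
print(quantize_fixed(100.0))    # 31.9990234375: saturated at the maximum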
Compile, predict#
Now that we have the hls4ml model, we need to check that its performance is still good. We compile the hls_model, then use hls_model.predict to run inference with the fixed-point model.
hls_model.compile()
X_test = np.ascontiguousarray(X_test)  # hls4ml's predict expects a C-contiguous array
y_hls = hls_model.predict(X_test)
Writing HLS project
Done
Compare#
That was easy! Now let’s see how the performance compares to Keras:
print(
    "Keras Accuracy: {}".format(
        accuracy_score(np.argmax(y_test, axis=1), np.argmax(y_keras, axis=1))
    )
)
print(
    "hls4ml Accuracy: {}".format(
        accuracy_score(np.argmax(y_test, axis=1), np.argmax(y_hls, axis=1))
    )
)
fig, ax = plt.subplots(figsize=(9, 9))
_ = plotting.makeRoc(y_test, y_keras, le.classes_)
plt.gca().set_prop_cycle(None) # reset the colors
_ = plotting.makeRoc(y_test, y_hls, le.classes_, linestyle="--")
from matplotlib.lines import Line2D
lines = [Line2D([0], [0], ls="-"), Line2D([0], [0], ls="--")]
from matplotlib.legend import Legend
leg = Legend(ax, lines, labels=["keras", "hls4ml"], loc="lower right", frameon=False)
ax.add_artist(leg)
Keras Accuracy: 0.7516506024096385
hls4ml Accuracy: 0.7513795180722892
<matplotlib.legend.Legend at 0x15ed9b850>
_, _, aucs_keras = plotting.rocData(y_test, y_keras, le.classes_)
_, _, aucs_hls = plotting.rocData(y_test, y_hls, le.classes_)
print("Keras: ", aucs_keras)
print("HLS: ", aucs_hls)
print("Ratio: ", {p: aucs_hls[p] / aucs_keras[p] for p in aucs_hls})
Keras: {'g': 0.9290347294457584, 'q': 0.8958873188224001, 't': 0.9558045087100817, 'w': 0.9483956644430941, 'z': 0.9404622540400235}
HLS: {'g': 0.9289642860154309, 'q': 0.8957588010999974, 't': 0.9557260108601827, 'w': 0.948361881379129, 'z': 0.9401852099273351}
Ratio: {'g': 0.9999241756760057, 'q': 0.9998565470012772, 't': 0.9999178724841914, 'w': 0.999964378723742, 'z': 0.999705417084526}
AUC information#
Now that we know how to extract the AUCs and compare the floating-point and fixed-point values, let's look at how we could determine a good choice for the number of bits to use for this model.
We will fix the number of integer bits to 6, scan the number of fractional bits, and compare the AUCs in each case. This requires re-compiling the hls4ml project for each fixed-point type we are curious about, so it can take a bit of time.
auc_ratios = {}
ib_opts = range(6, 7)   # integer bits: fixed at 6
fb_opts = range(4, 15)  # fractional bits: scan 4 through 14
for int_bits in ib_opts:
    for frac_bits in fb_opts:
        # total width = integer bits + fractional bits
        prec = "%i,%i" % (int_bits + frac_bits, int_bits)
        print("Precision: ", prec)
        config = hls4ml.utils.config_from_keras_model(
            model, granularity="model", default_precision="ap_fixed<%s>" % prec
        )
        hls_model = hls4ml.converters.convert_from_keras_model(
            model,
            hls_config=config,
            output_dir="model_1/hls4ml_prj",
            part="xcu250-figd2104-2L-e",
        )
        hls_model.compile()
        y_hls = hls_model.predict(X_test)
        _, _, aucs_hls = plotting.rocData(y_test, y_hls, le.classes_)
        auc_ratios[prec] = {p: aucs_hls[p] / aucs_keras[p] for p in aucs_hls}
print(auc_ratios)
Precision: 10,6
Interpreting Sequential
Topology:
Layer name: fc1_input, layer type: Input
Layer name: fc1, layer type: Dense
-> Activation (linear), layer name: fc1
Layer name: relu1, layer type: Activation
Layer name: fc2, layer type: Dense
-> Activation (linear), layer name: fc2
Layer name: relu2, layer type: Activation
Layer name: fc3, layer type: Dense
-> Activation (linear), layer name: fc3
Layer name: relu3, layer type: Activation
Layer name: output, layer type: Dense
-> Activation (linear), layer name: output
Layer name: softmax, layer type: Activation
Interpreting Sequential
Topology:
Layer name: fc1_input, layer type: InputLayer, input shapes: [[None, 16]], output shape: [None, 16]
Layer name: fc1, layer type: Dense, input shapes: [[None, 16]], output shape: [None, 64]
Layer name: relu1, layer type: Activation, input shapes: [[None, 64]], output shape: [None, 64]
Layer name: fc2, layer type: Dense, input shapes: [[None, 64]], output shape: [None, 32]
Layer name: relu2, layer type: Activation, input shapes: [[None, 32]], output shape: [None, 32]
Layer name: fc3, layer type: Dense, input shapes: [[None, 32]], output shape: [None, 32]
Layer name: relu3, layer type: Activation, input shapes: [[None, 32]], output shape: [None, 32]
Layer name: output, layer type: Dense, input shapes: [[None, 32]], output shape: [None, 5]
Layer name: softmax, layer type: Softmax, input shapes: [[None, 5]], output shape: [None, 5]
Creating HLS model
Writing HLS project
Done
Precision: 11,6 through Precision: 20,6 (the same model interpretation and HLS project output as above repeats for each precision)
{'10,6': {'g': 0.771805795725377, 'q': 0.7518189587833464, 't': 0.48055491677478596, 'w': 0.5652953800588209, 'z': 0.6338226315744492},
 '11,6': {'g': 0.9544743715698848, 'q': 0.9472393878038213, 't': 0.8570649780341788, 'w': 0.7984013999356896, 'z': 0.8247048190855363},
 '12,6': {'g': 0.9843146687931297, 'q': 0.9881839658154763, 't': 0.9790830739402799, 'w': 0.980827517114476, 'z': 0.9860770998204087},
 '13,6': {'g': 0.9963531888413152, 'q': 0.996832190229542, 't': 0.995404813988611, 'w': 0.9969004898087269, 'z': 0.9956333798202015},
 '14,6': {'g': 0.9991284446714196, 'q': 0.999044167359976, 't': 0.9987378388864323, 'w': 0.9994161176400947, 'z': 0.998407371567755},
 '15,6': {'g': 0.9997375222338856, 'q': 0.9997772963411847, 't': 0.9996536519095734, 'w': 0.999871888020202, 'z': 0.999411403491377},
 '16,6': {'g': 0.9999241756760057, 'q': 0.9998565470012772, 't': 0.9999178724841914, 'w': 0.999964378723742, 'z': 0.999705417084526},
 '17,6': {'g': 0.9998680711599471, 'q': 0.9999202324103631, 't': 1.0000219124706096, 'w': 0.9999333645420504, 'z': 0.999839658404064},
 '18,6': {'g': 0.9998614673986547, 'q': 0.9999058217044001, 't': 0.9999968245684372, 'w': 0.9999331542828399, 'z': 0.9998600660587773},
 '19,6': {'g': 0.9998524225781378, 'q': 0.9999117024254243, 't': 0.9999904889168183, 'w': 0.999910070808327, 'z': 0.9998448194817858},
 '20,6': {'g': 0.9998492477676588, 'q': 0.999891386913043, 't': 0.9999932841786254, 'w': 0.9999008688270283, 'z': 0.9998488172506399}}
auc_ratios_grid = {}
for c in le.classes_:
    auc_array = []
    for int_bits in ib_opts:
        int_array = []
        for frac_bits in fb_opts:
            int_array.append(auc_ratios["%i,%i" % (int_bits + frac_bits, int_bits)][c])
        auc_array.append(int_array)
    auc_ratios_grid[c] = np.array(auc_array)
avg = np.sum(np.array([auc_ratios_grid[c] for c in le.classes_]), axis=0) / len(
    le.classes_
)
auc_ratios_grid["Average"] = avg
print(auc_ratios_grid["w"])
print(auc_ratios_grid["Average"])
[[0.56529538 0.7984014 0.98082752 0.99690049 0.99941612 0.99987189
0.99996438 0.99993336 0.99993315 0.99991007 0.99990087]]
[[0.64065954 0.87637699 0.98369727 0.99622481 0.99894679 0.99969035
0.99987368 0.99991665 0.99991147 0.9999019 0.99989672]]
plt.figure(figsize=(8, 8))
for c in auc_ratios_grid:
    plt.plot(fb_opts, auc_ratios_grid[c][0], label=c, lw=2)
plt.ylabel("AUC (HLS) / AUC (Keras)")
plt.xlabel("Fractional Bits")
plt.legend()
<matplotlib.legend.Legend at 0x15dd3a550>
We can see that once the number of fractional bits is above 8, the fixed-point model recovers the full floating-point performance (as measured by the AUC). With fewer than 6 fractional bits, the performance of the fixed-point model degrades sharply.
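As a quick follow-up, one could read a working point straight off this scan. The sketch below reuses auc_ratios_grid and fb_opts from above and picks the smallest width whose average AUC ratio stays within a tolerance; the 0.999 threshold is an illustrative choice, not a standard.
# Sketch reusing auc_ratios_grid and fb_opts from the scan above
tolerance = 0.999
avg_ratios = auc_ratios_grid["Average"][0]  # single row: only int_bits=6 was scanned
for frac_bits, ratio in zip(fb_opts, avg_ratios):
    if ratio >= tolerance:
        print("Smallest sufficient precision: ap_fixed<%i,6>" % (6 + frac_bits))
        break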