# Tensorflow model server

## Setup server

Prepare:

```shell
apt install gnupg2
```

Add the repository and its signing key:

```shell
echo "deb [arch=amd64] http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" | tee /etc/apt/sources.list.d/tensorflow-serving.list && \
curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | apt-key add -
```

Install the package:

```shell
apt update
apt install tensorflow-model-server
```

## Setup models

```shell
mkdir -p /srv/tensorflow/workdir
mkdir -p /srv/tensorflow/models
```

Choose models from tfhub.dev and for each of them do:

```shell
# example: https://tfhub.dev/google/universal-sentence-encoder-multilingual/3
mkdir -p /srv/tensorflow/models/universal-sentence-encoder-multilingual/3
cd /srv/tensorflow/models/universal-sentence-encoder-multilingual/3
# -O gives the archive the name the following commands expect
wget -O universal-sentence-encoder-multilingual_3.tar.gz "https://tfhub.dev/google/universal-sentence-encoder-multilingual/3?tf-hub-format=compressed"
tar xvfz universal-sentence-encoder-multilingual_3.tar.gz
rm universal-sentence-encoder-multilingual_3.tar.gz
```
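
The archive should unpack into a SavedModel directly inside the numeric version directory. A quick sanity check (paths as used in the example above):

```shell
# the version directory itself must contain the SavedModel,
# i.e. saved_model.pb plus a variables/ (and typically an assets/) subdirectory
ls /srv/tensorflow/models/universal-sentence-encoder-multilingual/3
```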

Check:

```shell
tensorflow_model_server --rest_api_port=9000 --model_base_path="/srv/tensorflow/models/universal-sentence-encoder-multilingual/" --model_name=sentences
```

Config file `/srv/tensorflow/config`:

```
model_config_list: {
  config: {
    name: "sentences",
    base_path: "/srv/tensorflow/models/universal-sentence-encoder-multilingual",
    model_platform: "tensorflow"
    model_version_policy: {latest{}},
  },
  config: {
    ... (next model)
  },
}
```
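
The config can be tried out in the foreground before wiring it into systemd (same flags as in the service file below):

```shell
tensorflow_model_server --rest_api_port=9000 --model_config_file=/srv/tensorflow/config
```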

## Systemd integration

Create `/etc/systemd/system/tensorflow.service`:

```ini
[Unit]
Description=tensorflow model server
After=network.target auditd.service

[Service]
Type=simple
WorkingDirectory=/srv/tensorflow/workdir
ExecStart=/usr/bin/tensorflow_model_server --rest_api_port=9000 --model_config_file=/srv/tensorflow/config
KillMode=process
Restart=on-failure
RestartSec=30s

[Install]
WantedBy=multi-user.target
```

and run:

```shell
systemctl daemon-reload
systemctl enable tensorflow
systemctl start tensorflow
```
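
If the service does not come up, its logs can be inspected via the journal:

```shell
journalctl -u tensorflow -f
```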

Check:

```shell
http -j GET http://localhost:9000/v1/models/sentences
```

## Usage

Show model details:

```shell
http -j GET http://localhost:9000/v1/models/sentences/metadata
```
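
To obtain embeddings, POST to the model's `:predict` endpoint. A minimal sketch using httpie, assuming the model's default serving signature takes a batch of strings (the metadata call above shows the exact input names); the sample sentences are arbitrary:

```shell
# send two sentences; the response contains one embedding vector per sentence
http -j POST http://localhost:9000/v1/models/sentences:predict \
  instances:='["How is the weather today?", "Wie ist heute das Wetter?"]'
```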

## Docs

Datasets: