How to deploy a Flask ML model on Nginx with a Gunicorn gateway (Ubuntu 18.04) — Part I

Prachi More
FAUN — Developer Community 🐾
4 min read · Sep 9, 2019


This guide is not focused on the ML model code and its modifications, but on the application environment set-up, including the Gunicorn and Nginx configuration required to deploy and host the model.

Conda — an open-source package manager, environment manager, and distribution of the Python programming language

Flask — a web microframework for Python which will give us the model's API endpoint

Gunicorn — a Python WSGI application server

Nginx — web server & frontend reverse proxy

Model deployment stack
  1. Prep the Ubuntu server

Install the packages which are required to host the python virtual environment. This includes pip, the Python package manager, which will manage our Python components and the environment installables.

$ sudo apt update && sudo apt upgrade
$ python3 --version
Python 3.6.8

2. Install Miniconda

Download the latest installer for Linux from the Miniconda repository.

$ wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
--2019-08-23 09:47:51--  https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
Resolving repo.anaconda.com (repo.anaconda.com)… 104.16.130.3, 104.16.131.3, 2606:4700::6810:8303, …
Connecting to repo.anaconda.com (repo.anaconda.com)|104.16.130.3|:443… connected.
HTTP request sent, awaiting response… 200 OK
Length: 75257002 (72M) [application/x-sh]
Saving to: ‘Miniconda3-latest-Linux-x86_64.sh’
Miniconda3-latest-Linux-x86_64.sh 100%[==================================================================================>] 71.77M 223MB/s in 0.3s
2019-08-23 09:47:51 (223 MB/s) — ‘Miniconda3-latest-Linux-x86_64.sh’ saved [75257002/75257002]

Start the installation, accept the license terms, and set the installation location when prompted.

$ chmod +x Miniconda3-latest-Linux-x86_64.sh
$ ./Miniconda3-latest-Linux-x86_64.sh
Welcome to Miniconda3 4.7.10
In order to continue the installation process, please review the license
agreement.
Please, press ENTER to continue

Check the installation.

$ cd /home/ubuntu/miniconda3/
$ ./bin/conda init
$ ./bin/conda env list
# conda environments:
#
base * /home/ubuntu/miniconda3

3. Create the conda environment (a Python virtual environment) with all the packages required to host your ML model.

Here I have my environment YML file with packages like tensorflow, wheel, keras, etc. Make sure flask is in the list of packages.

Please note that this is only a sample environment file. It comes from another environment in my repo (note the Windows-only packages such as vc and wincertstore) and may not work on the latest installation due to package version incompatibilities.

name: TestEnv
channels:
  - defaults
dependencies:
  - certifi=2019.6.16=py36_1
  - pip=19.1.1=py36_0
  - python=3.6.5=h0c2934d_0
  - setuptools=41.0.1=py36_0
  - vc=14.1=h0510ff6_4
  - vs2015_runtime=14.15.26706=h3a45250_4
  - wheel=0.33.4=py36_0
  - wincertstore=0.2=py36h7fe50ca_0
  - pip:
    - absl-py==0.7.1
    - astor==0.8.0
    - awscli==1.15.59
    - click==7.0
    - cycler==0.10.0
    - decorator==4.4.0
    - flask==1.1.1
    - flask-cors==3.0.8
    - gast==0.2.2
    - google-pasta==0.1.7
    - grpcio==1.22.0
    - h5py==2.9.0
    - imageio==2.5.0
    - imutils==0.5.2
    - itsdangerous==1.1.0
    - jinja2==2.10.1
    - keras==2.1.6
    - keras-applications==1.0.8
    - keras-preprocessing==1.1.0
    - kiwisolver==1.1.0
    - markdown==3.1.1
    - markupsafe==1.1.1
    - matplotlib==3.1.1
    - networkx==2.3
    - numpy==1.16.4
    - opencv-python==4.1.0.25
    - pandas==0.23.0
    - pillow==6.1.0
    - protobuf==3.9.1
    - psycopg2-binary==2.8.3
    - pyparsing==2.4.2
    - python-dateutil==2.8.0
    - pytz==2019.2
    - pywavelets==1.0.3
    - pyyaml==5.1.2
    - scikit-image==0.15.0
    - scipy==1.3.1
    - six==1.12.0
    - tensorboard==1.12.2
    - tensorflow==1.12.0
    - termcolor==1.1.0
    - werkzeug==0.15.5
    - wrapt==1.11.2
prefix: /home/ubuntu/miniconda3
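
If a tightly pinned export like this fails to solve on a fresh Ubuntu box, it is easier to start from a pared-down file without build strings and let conda resolve current versions. This is only a sketch; the package list is illustrative and should mirror what your model actually imports:

```yaml
# Minimal Linux-friendly environment sketch (illustrative package list)
name: TestEnv
channels:
  - defaults
dependencies:
  - python=3.6
  - pip
  - pip:
    - flask
    - gunicorn
    - keras
    - tensorflow
```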

Create the environment.

$ conda env create -f TestEnv.yml
.
.
(base) $ conda env list
# conda environments:
#
base * /home/ubuntu/miniconda3
TestEnv /home/ubuntu/miniconda3/envs/TestEnv
(base) $ conda activate TestEnv
(TestEnv) $    # prompt changes to the new environment
(TestEnv) $ python --version
Python 3.6.3 :: Anaconda, Inc.

4. Install gunicorn

(TestEnv) ubuntu@ip-172-31-19-60:~/miniconda3$ pip install gunicorn
Collecting gunicorn
Downloading https://files.pythonhosted.org/packages/8c/da/b8dd8deb741bff556db53902d4706774c8e1e67265f69528c14c003644e6/gunicorn-19.9.0-py2.py3-none-any.whl (112kB)
|████████████████████████████████| 122kB 1.7MB/s
Installing collected packages: gunicorn
Successfully installed gunicorn-19.9.0

5. Set up the application (ML model)

Here I’m setting up a sample app.

(TestEnv) $ vi sample.py

The application code will import Flask and instantiate a Flask object.

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "<h1 style='color:blue'>Hello There!</h1>"

if __name__ == "__main__":
    app.run(host='0.0.0.0')

Test your application.

(TestEnv)$ python sample.py
* Serving Flask app "sample" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)

Open the application in the browser on port 5000 and check the response. Press Ctrl+C to quit the application once done.

http://your_server_ip:5000

Now that the Flask route is working fine, the next step is to set up the Gunicorn server.

Create a WSGI entrypoint connector, which will tell the Gunicorn server how to connect to the application backend.

(TestEnv)$ vi wsgi.py

Import the flask application instance in this connector file by adding this code snippet.

from sample import app

if __name__ == "__main__":
    app.run()
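
All Gunicorn needs from this file is a WSGI callable named app; Flask's application object happens to be one. The contract can be exercised with the standard library's wsgiref, with no Flask and no server involved. The callable below is a stand-in for illustration, not the article's app:

```python
from wsgiref.util import setup_testing_defaults

# A minimal WSGI application: any callable with this signature works.
# Gunicorn resolves "wsgi:app" to exactly such a callable.
def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/html")])
    return [b"<h1 style='color:blue'>Hello There!</h1>"]

# Call the application directly, the way a WSGI server would.
environ = {}
setup_testing_defaults(environ)  # fills in a plausible request environ
captured = {}

def start_response(status, headers):
    captured["status"] = status

body = b"".join(app(environ, start_response))
print(captured["status"], body.decode())  # 200 OK plus the HTML body
```

Because wsgi.py exposes app at module level, Gunicorn can import and serve it exactly like this toy server loop would.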

Now launch Gunicorn, passing it the module name and the name of the application object within that module, in the form ‘wsgi:app’.

(TestEnv) $ gunicorn --bind 0.0.0.0:5000 wsgi:app
[2019-09-09 12:19:50 +0000] [23047] [INFO] Starting gunicorn 19.9.0
[2019-09-09 12:19:50 +0000] [23047] [INFO] Listening at: http://0.0.0.0:5000 (23047)
[2019-09-09 12:19:50 +0000] [23047] [INFO] Using worker: sync
[2019-09-09 12:19:50 +0000] [23050] [INFO] Booting worker with pid: 23050

The application should now be served on the public interface on port 5000 via Gunicorn.

Launch the application once again in the browser and check.

http://your_server_ip:5000
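
For anything beyond a quick test, Gunicorn is usually driven from a config file rather than command-line flags. A minimal sketch, with illustrative values worth tuning for your model (ML inference workers are often slow and memory-hungry):

```python
# gunicorn.conf.py -- start with: gunicorn -c gunicorn.conf.py wsgi:app
import multiprocessing

bind = "0.0.0.0:5000"
workers = multiprocessing.cpu_count() * 2 + 1  # common starting point
timeout = 120  # ML inference can exceed the 30s default worker timeout
```

For large models, fewer workers than the CPU-count rule of thumb may be needed, since each worker loads its own copy of the model into memory.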

We’ll see the Nginx configuration in the subsequent part of the article.
