+ docker build

This commit is contained in:
nicobo 2020-12-20 15:46:57 +01:00
parent be6646c1ff
commit a741194deb
No known key found for this signature in database
GPG key ID: 2581E71C5FA5285F
5 changed files with 248 additions and 18 deletions

60
Dockerfile-alpine Normal file

@ -0,0 +1,60 @@
# This image is based on Alpine Linux to get a minimal memory footprint
# Some say that Alpine should not be used for Python (https://pythonspeed.com/articles/alpine-docker-python/) ;
# however, with some additional work, it may be the better option overall (https://nickjanetakis.com/blog/the-3-biggest-wins-when-using-alpine-as-a-base-docker-image)
# Currently this Dockerfile is not thoroughly tested...
# Finally, a better option might simply be to run the Python scripts without any setup/install step
# Requires Python >= 3.4.2
FROM python:3-alpine AS builder
# python:3-alpine misses gcc, ffi.h, ...
#
# GCC part :
# https://number1.co.za/alpine-python-docker-base-image-problem-with-gcc/
# https://wiki.alpinelinux.org/wiki/How_to_get_regular_stuff_working
#
# Python cryptography part :
# https://stackoverflow.com/questions/35736598/cannot-pip-install-cryptography-in-docker-alpine-linux-3-3-with-openssl-1-0-2g
# https://github.com/pyca/cryptography/blob/1340c00/docs/installation.rst#building-cryptography-on-linux
# Required to build
RUN apk add --no-cache build-base gcc abuild binutils cmake \
libressl-dev musl-dev libffi-dev
COPY requirements-runtime.txt .
RUN pip install --no-cache-dir --user -r requirements-runtime.txt
# It could be packaged (RUN python setup.py sdist bdist_wheel) to possibly
# improve size and speed ; probably as a multistage build
# And update the version from git using setuptools-scm
# But it requires a bit of work
#RUN python setup.py sdist bdist_wheel
FROM python:3-alpine
WORKDIR /usr/src/app
# Required at runtime
# libressl-dev : seems to be required for Python to locate modules, or for OMEMO ?
RUN apk add --no-cache libressl-dev
COPY --from=builder /root/.local /root/.local/
#COPY --from=builder /root/.cache /root/.cache/
#COPY --from=builder /usr/local/lib /usr/local/lib/
# https://www.docker.com/blog/containerized-python-development-part-1/
# update PATH environment variable
#ENV PATH=/root/.local/bin:$PATH
# TODO How to do it with one COPY ?
# Or it could be COPY . . with a proper .dockerignore
# Or build the context as a preliminary step
COPY nicobot nicobot/
# This script allows packaging several bots in the same image
# (they could arguably live in separate images, but they are so similar that packaging them together is much easier and avoids duplicating image layers)
# Otherwise the ENTRYPOINT would simply be [ "python" ]
# Made a separate COPY because it's a docker-specific layer
# (other layers don't need to be rebuilt if this one changes)
COPY docker-entrypoint.sh .
ENTRYPOINT [ "./docker-entrypoint.sh" ]

29
Dockerfile-debian Normal file

@ -0,0 +1,29 @@
# Requires Python >= 3.4.2
FROM python:3
RUN apt-get update && \
apt-get install -y cmake g++ make && \
rm -rf /var/lib/apt/lists/*
WORKDIR /usr/src/app
# TODO How to do it with one COPY ?
# Or it could be COPY . . with a proper .dockerignore
# Or build the context as a preliminary step
COPY nicobot nicobot/
COPY requirements-runtime.txt .
RUN pip install --no-cache-dir -r requirements-runtime.txt
# It could be packaged (RUN python setup.py sdist bdist_wheel) to possibly
# improve size and speed ; probably as a multistage build
# And update the version from git using setuptools-scm
# But it requires a bit of work
#RUN python setup.py sdist bdist_wheel
# This script allows packaging several bots in the same image
# (they could arguably live in separate images, but they are so similar that packaging them together is much easier and avoids duplicating image layers)
# Otherwise the ENTRYPOINT would simply be [ "python" ]
# Made a separate COPY because it's a docker-specific layer
# (other layers don't need to be rebuilt if this one changes)
COPY docker-entrypoint.sh .
ENTRYPOINT [ "./docker-entrypoint.sh" ]

29
Dockerfile-debian-slim Normal file

@ -0,0 +1,29 @@
# Requires Python >= 3.4.2
FROM python:3-slim
RUN apt-get update && \
apt-get install -y cmake g++ make && \
rm -rf /var/lib/apt/lists/*
WORKDIR /usr/src/app
# TODO How to do it with one COPY ?
# Or it could be COPY . . with a proper .dockerignore
# Or build the context as a preliminary step
COPY nicobot nicobot/
COPY requirements-runtime.txt .
RUN pip install --no-cache-dir -r requirements-runtime.txt
# It could be packaged (RUN python setup.py sdist bdist_wheel) to possibly
# improve size and speed ; probably as a multistage build
# And update the version from git using setuptools-scm
# But it requires a bit of work
#RUN python setup.py sdist bdist_wheel
# This script allows packaging several bots in the same image
# (they could arguably live in separate images, but they are so similar that packaging them together is much easier and avoids duplicating image layers)
# Otherwise the ENTRYPOINT would simply be [ "python" ]
# Made a separate COPY because it's a docker-specific layer
# (other layers don't need to be rebuilt if this one changes)
COPY docker-entrypoint.sh .
ENTRYPOINT [ "./docker-entrypoint.sh" ]

117
README.md

@ -17,17 +17,41 @@ This project features :
## Requirements & installation
Requires :
### Classic installation
A classic (virtual) machine installation requires :
- Python 3 (>= 3.4.2)
- [signal-cli](https://github.com/AsamK/signal-cli) (for the *Signal* backend)
- [signal-cli](https://github.com/AsamK/signal-cli) for the *Signal* backend (see [Using the Signal backend] below for requirements)
- For *transbot* : an IBM Cloud account ([free account ok](https://www.ibm.com/cloud/free))
Install Python dependencies with :
### Docker usage
pip3 install -r requirements.txt
There are several [Docker](https://docker.com) images available, with the following tags :
See below for *Signal* requirements.
- **debian** : if you have several images with the debian base, this may be the most efficient (as base layers will be shared with other images)
- **debian-slim** : if you want a smaller image and you don't run other Debian-based images (as it will not share as many layers as the `debian` tag above)
- **alpine** : this is the smallest image (<100MB) but it may be more prone to bugs than the Debian-based ones, as it is more complex to maintain
Since those bots are probably not going to be enterprise-level critical at any point, I suggest you use the _alpine_ image and switch to _debian_ or _debian-slim_ if you encounter performance issues or other problems.
Those images should be able to run on all CPU architectures supported by [the base images](https://hub.docker.com/_/python).
Sample command to run :
docker run --rm -it -v "$(pwd)/myconfdir:/etc/nicobot" nicobot:alpine transbot -C /etc/nicobot
### Installation from source
You can also install from source (you need _python3_ & _pip_) :
# Sample command to install python3 & pip3 on Debian ; update it according to your OS
sudo apt install python3 python3-pip
git clone https://github.com/nicolabs/nicobot.git
cd nicobot
pip3 install -r requirements-runtime.txt
Then simply follow the instructions below to configure & run it.
@ -44,34 +68,32 @@ The sample configuration shows how to make it translate any message containing "
### Quick start
1. Install prerequisites ; for Debian systems this will look like :
1. Install the package ; for Debian systems this will look like :
```
sudo apt install python3 python3-pip
git clone https://github.com/nicolabs/nicobot.git
cd nicobot
pip3 install -r requirements.txt
pip3 install nicobot
```
2. [Create a *Language Translator* service instance on IBM Cloud](https://cloud.ibm.com/catalog/services/language-translator) and [get the URL and API key from your console](https://cloud.ibm.com/resources?groups=resource-instance)
3. Fill them into `tests/transbot-sample-conf/config.yml` (`ibmcloud_url` and `ibmcloud_apikey`)
4. Run `python3 nicobot/transbot.py -C tests/transbot-sample-conf`
4. Run `transbot -C tests/transbot-sample-conf`
5. Input `Hello world` in the console : the bot will print a random translation of "Hello World"
6. Input `Bye nicobot` : the bot will terminate
If you want to send & receive messages through *Signal* instead of reading from the keyboard & printing to the console :
1. Install and configure `signal-cli` (see below for details)
2. Run `python3 nicobot/transbot.py -C tests/transbot-sample-conf -b signal -U '+33123456789' -r '+34987654321'` with `-U +33123456789` your *Signal* number and `-r +33987654321` the one of the person you want to make the bot chat with
2. Run `transbot -C tests/transbot-sample-conf -b signal -U '+33123456789' -r '+34987654321'` with `-U +33123456789` your *Signal* number and `-r +33987654321` the one of the person you want to make the bot chat with
See dedicated chapters below for more options...
### Main configuration options and files
Run `transbot.py -h` to get a description of all options.
Run `transbot -h` to get a description of all options.
Below are the most important configuration options for this bot (please also check the generic options below) :
- **--keyword** and **--keywords-file** will help you generate the list of keywords that will trigger the bot. To do this, run `transbot.py --keyword <a_keyword> --keyword <another_keyword> ...` a **first time** : this will download all known translations for these keywords and save them into a `keywords.json` file. Next time you run the bot, **don't** use the `--keyword` option : it will reuse this saved keywords list. You can use `--keywords-file` to change the default name.
- **--keyword** and **--keywords-file** will help you generate the list of keywords that will trigger the bot. To do this, run `transbot --keyword <a_keyword> --keyword <another_keyword> ...` a **first time** : this will download all known translations for these keywords and save them into a `keywords.json` file. Next time you run the bot, **don't** use the `--keyword` option : it will reuse this saved keywords list. You can use `--keywords-file` to change the default name.
- **--languages-file** : The first time the bot runs, it will download the list of supported languages into `languages.<locale>.json` and reuse it afterwards but you can give it a specific file with the set of languages you want. You can use `--locale` to set the desired locale.
- **--locale** will select the locale to use for default translations (with no target language specified) and as the default parsing language for keywords.
- **--ibmcloud-url** and **--ibmcloud-apikey** can be obtained from your IBM Cloud account ([create a Language Translator instance](https://cloud.ibm.com/apidocs/language-translator) then go to [the resource list](https://cloud.ibm.com/resources?groups=resource-instance))
@ -96,7 +118,7 @@ This JSON structure will have to be parsed in order to retrieve the answer and d
### Main configuration options
Run `askbot.py -h` to get a description of all options.
Run `askbot -h` to get a description of all options.
Below are the most important configuration options for this bot (please also check the generic options below) :
@ -113,7 +135,7 @@ The following command will :
- Wait for a maximum of 3 messages in answer and return
- Or return immediately if one message matches one of the given patterns labeled 'yes', 'no' or 'cancel'
python3 askbot.py -m "Do you like me ?" -p yes '(?i)\b(yes|ok)\b' -p no '(?i)\bno\b' -p cancel '(?i)\b(cancel|abort)\b' --max-count 3 -b signal -U '+33123456789' --recipient '+34987654321'
askbot -m "Do you like me ?" -p yes '(?i)\b(yes|ok)\b' -p no '(?i)\bno\b' -p cancel '(?i)\b(cancel|abort)\b' --max-count 3 -b signal -U '+33123456789' --recipient '+34987654321'
If the user *+34987654321* would reply "I don't know" then "Ok then : NO !", the output would be :
@ -226,7 +248,6 @@ Then you must [*register* or *link*](https://github.com/AsamK/signal-cli/blob/ma
Please see the [man page](https://github.com/AsamK/signal-cli/blob/master/man/signal-cli.1.adoc) for more details.
### Signal-specific options
- `--signal-username` selects the account to use to send and read message : it is a phone number in international format (e.g. `+33123456789`). In `config.yml`, make sure to put quotes around it to prevent YAML thinking it's an integer (because of the 'plus' sign). If missing, `--username` will be used.
@ -234,14 +255,74 @@ Please see the [man page](https://github.com/AsamK/signal-cli/blob/master/man/si
Sample command line to run the bot with Signal :
python3 nicobot/transbot.py -b signal -U +33612345678 -g "mABCDNVoEFGz0YeZM1234Q==" --ibmcloud-url https://api.eu-de.language-translator.watson.cloud.ibm.com/instances/a234567f-4321-abcd-efgh-1234abcd7890 --ibmcloud-apikey "f5sAznhrKQyvBFFaZbtF60m5tzLbqWhyALQawBg5TjRI"
transbot -b signal -U +33612345678 -g "mABCDNVoEFGz0YeZM1234Q==" --ibmcloud-url https://api.eu-de.language-translator.watson.cloud.ibm.com/instances/a234567f-4321-abcd-efgh-1234abcd7890 --ibmcloud-apikey "f5sAznhrKQyvBFFaZbtF60m5tzLbqWhyALQawBg5TjRI"
## Development
Install Python dependencies with :
pip3 install -r requirements-build.txt -r requirements-runtime.txt
To run unit tests :
python -m unittest discover -v -s tests
python3 -m unittest discover -v -s tests
To run directly from source (without packaging, e.g. for development) :
python3 -m nicobot.askbot
To build locally (more at [pypi.org](https://packaging.python.org/tutorials/packaging-projects/)) :
python3 setup.py sdist bdist_wheel
To upload to test.pypi.org :
# Defines the username and password (or '__token__' and an API token) ; alternatively the CLI `-u` and `-p` options or interactive input may be used (or even certificates, see `python3 -m twine upload --help`)
# Note : the variables must be exported so that twine can see them
export TWINE_USERNAME=__token__
export TWINE_PASSWORD=`pass pypi/test.pypi.org/api_token | head -1`
python3 -m twine upload --repository testpypi dist/*
To upload to PROD pypi.org :
python3 -m twine upload dist/*
Otherwise, it is automatically tested, built and uploaded to pypi.org by Travis CI on each push to GitHub.
### Docker build
There are several Dockerfiles, each targeting a specific use case (see [Docker usage](#docker-usage) above) :
`Dockerfile-debian` and `Dockerfile-debian-slim` are quite straightforward and very similar.
`Dockerfile-alpine` is a multi-stage build, because most of the Python dependencies need to be compiled first.
The first stage builds the libraries and the second stage just imports them, without all the build tools.
The result is a far smaller image.
There is no special requirement to build those images ; sample build & run commands :
docker build -t nicobot:alpine -f Dockerfile-alpine .
docker run --rm -it -v "$(pwd)/tests:/etc/nicobot" nicobot:debian-slim askbot -c /etc/nicobot/askbot-sample-conf/config.yml
The _multiarch_ compatibility is simply supported by [the base images](https://hub.docker.com/_/python) (no need to run `docker buildx`).
The images have all the bots inside, as they only differ from each other by one script.
The `docker-entrypoint.sh` script takes the name of the bot to invoke as its first argument, followed by the bot's own arguments.
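For illustration, that dispatching can be sketched in plain POSIX shell (`dispatch` is a hypothetical helper name ; the real script `exec`s the Python module directly instead of echoing the command) :

```shell
# Sketch of the dispatch performed by docker-entrypoint.sh : the first
# argument selects the bot, the remaining arguments are passed through.
dispatch() {
    bot="$1"
    case "$bot" in
        transbot|askbot)
            shift
            # The real entrypoint runs: exec python -m "nicobot.$bot" "$@"
            echo "python -m nicobot.$bot $*"
            ;;
        *)
            echo "Unknown bot : $bot" >&2
            return 1
            ;;
    esac
}

dispatch transbot -C /etc/nicobot   # prints : python -m nicobot.transbot -C /etc/nicobot
```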
### Versioning
The command-line option to display the scripts' version relies on _setuptools_scm_, which extracts it from the underlying git metadata.
This is convenient because one does not have to manually update the version (or forget to do it prior to a release).
Several options were possible ; the following one has been retained :
- Running `setup.py` creates / updates the version inside the `version.py` file
- The scripts simply load this module at runtime
This requires `setup.py` to be run before the version can be read, but :
- it requires neither _setuptools_ nor _git_ at runtime
- it frees us from shipping the `.git` directory at runtime ; this is especially useful to keep the docker images smaller
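To illustrate the idea only (this is an assumed approximation, not setuptools_scm's actual, more elaborate scheme), a version string can be derived from `git describe`-style output like this :

```shell
# Rough illustration of deriving a version string from `git describe` output.
version_from_describe() {
    d="$1"
    tag="${d%%-*}"                  # e.g. "v1.2.3"
    rest="${d#"$tag"}"              # e.g. "-4-gabc1234", or empty when exactly on a tag
    if [ -z "$rest" ]; then
        echo "${tag#v}"
    else
        n="${rest#-}"; n="${n%%-*}" # number of commits since the tag
        echo "${tag#v}.dev$n"
    fi
}

version_from_describe "v1.2.3"              # prints : 1.2.3
version_from_describe "v1.2.3-4-gabc1234"   # prints : 1.2.3.dev4
```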
## Resources

31
docker-entrypoint.sh Executable file

@ -0,0 +1,31 @@
#!/bin/sh
usage()
{
cat << EOF
Usage : $0 <bot name> [bot arguments...]
Available bots :
- askbot
- transbot
E.g. '$0 transbot -h' to get a more specific help for 'transbot'
EOF
}
# The command doesn't have to be repeated for each bot, but spelling it out is clearer
bot=$1
case $bot in
transbot)
shift
exec python -m "nicobot.$bot" "$@"
;;
askbot)
shift
exec python -m "nicobot.$bot" "$@"
;;
*)
usage
exit 1
;;
esac