# Devops notes for nicobot
## Basic development
Install Python dependencies (for both building and running) and generate `nicobot/version.py` with :

```
pip3 install -r requirements-build.txt -r requirements-runtime.txt
python3 setup.py build
```
To run unit tests :

```
python3 -m unittest discover -v -s tests
```
To run directly from source (without packaging) :

```
python3 -m nicobot.askbot [options...]
```
To build locally (more at pypi.org) :

```
rm -rf ./dist ; python3 setup.py build sdist bdist_wheel
```
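Optionally, the generated archives can be verified before uploading them (twine provides a `check` command for this) :

```
# Checks that the built distributions are well-formed and that their description will render on PyPI
python3 -m twine check dist/*
```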
## PyPi upload
To upload to test.pypi.org :

```
python3 -m twine upload --repository testpypi dist/*
```
To install the test package from test.pypi.org and check that it works :

```
# First create a virtual environment so as not to mess with the host system
python3 -m venv venv/pypi_test && source venv/pypi_test/bin/activate
# Then install dependencies using the regular pypi repo
pip3 install -r requirements-runtime.txt
# Finally install this package from the test repo
pip3 install -i https://test.pypi.org/simple/ --no-deps nicobot
# Do some tests
python -m nicobot.askbot -V
...
# Exit the virtual environment
deactivate
```
To upload to PROD pypi.org :

```
python3 -m twine upload dist/*
```
Both twine upload commands above will ask for a username and a password. To avoid this, you can set environment variables :

```
# Defines username and password (or '__token__' and API key)
export TWINE_USERNAME=__token__
# Example reading the token from a local 'passwordstore'
export TWINE_PASSWORD=`pass pypi/test.pypi.org/api_token`
```
Or store them in `~/.pypirc` (see the doc) :

```
[pypi]
username = __token__
password = <PyPI token>

[testpypi]
username = __token__
password = <TestPyPI token>
```
Or even use the CLI options `-u` and `-p`, or certificates... See `python3 -m twine upload --help` for details.
## Automation for PyPi
The above instructions allow building manually, but otherwise the project is automatically tested, built and uploaded to pypi.org by Travis CI on each push to GitHub (see `.travis.yml`).
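The CI job essentially chains the manual commands above ; a rough sketch of what it runs (the authoritative steps are in `.travis.yml`) :

```
# Approximate equivalent of the CI pipeline
pip3 install -r requirements-build.txt -r requirements-runtime.txt
python3 -m unittest discover -v -s tests
python3 setup.py build sdist bdist_wheel
python3 -m twine upload dist/*
```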
## Docker build
There are several Dockerfiles, each made for specific use cases (see README.md). They all have multiple stages.
`debian.Dockerfile` is quite straightforward : it builds using pip in one stage and copies the resulting wheels into the final one.
`debian-signal.Dockerfile` is more complex because it needs to address :

- including both Python and Java while keeping the image size small
- compiling native dependencies (both for signal-cli and qr)
- circumventing a number of bugs in multiarch building
`debian-alpine.Dockerfile` produces smaller images but may not be as portable as the debian ones and misses Signal support for now.
Note that the signal-cli backend needs a Java runtime environment, as well as Rust dependencies to support Signal's group V2. This approximately doubles the size of the images and almost cancels the advantage of alpine over debian...
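For instance, the size impact can be measured locally by building the plain and the Signal-enabled debian variants and comparing them (tag names are just examples) :

```
docker build -t nicolabs/nicobot:debian -f debian.Dockerfile .
docker build -t nicolabs/nicobot:debian-signal -f debian-signal.Dockerfile .
# Compare the resulting image sizes
docker images nicolabs/nicobot
```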
Those images are limited, on each OS (debian+glibc / alpine+musl), to the CPU architectures which :

- have base images (python, openjdk, rust)
- have wheels for the Python dependencies or are able to build them
- can build libzkgroup (native dependencies for signal)
- have the required packages to build
At the time of writing, support is dropped for :

- `linux/s390x` : lack of python:3 image (at least)
- `linux/riscv64` : lack of python:3 image (at least)
- Signal backend on `linux/arm*` for Alpine variants : lack of JRE binaries
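A quick way to check which architectures a given base image actually provides (here `python:3`, as an example) :

```
# Lists the platforms published in the python:3 manifest
docker buildx imagetools inspect python:3
```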
All images have all the bots inside (as they would otherwise only differ by one script from each other).
The `docker-entrypoint.sh` script takes the name of the bot to invoke as its first argument, then its own options and finally the bot's arguments.
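In other words the entrypoint is a thin dispatcher ; here is a simplified, hypothetical sketch of that logic (the real script also parses its own options) :

```
#!/bin/sh
# Simplified sketch : the first argument selects the bot (e.g. askbot, transbot),
# everything else is passed through to it
bot="$1"
shift
exec "$bot" "$@"
```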
Sample build command (single architecture) :

```
docker build -t nicolabs/nicobot:debian -f debian.Dockerfile .
```
Sample buildx command (multi-arch) :

```
docker buildx build --platform linux/amd64,linux/arm64,linux/386,linux/arm/v7 -t nicolabs/nicobot:debian -f debian.Dockerfile .
```
Then run with the provided sample configuration :

```
docker run --rm -it -v "$(pwd)/tests:/etc/nicobot" nicolabs/nicobot:debian askbot -c /etc/nicobot/askbot-sample-conf/config.yml
```
## Automation for Docker Hub
GitHub Actions are currently used (see `.github/workflows/dockerhub.yml`) to automatically build and push the images to Docker Hub, so they are available whenever commits are pushed to the master branch :
- A Github Action is triggered on each push to the central repo
- All images are built in order using caching (see .github/workflows/dockerhub.yml)
- Images are uploaded to Docker Hub
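Each build step roughly boils down to a `buildx` invocation with a registry cache and a push ; an approximate manual equivalent (the cache reference is hypothetical, the real steps are in the workflow file) :

```
docker buildx build \
    --platform linux/amd64,linux/arm64,linux/386,linux/arm/v7 \
    --cache-from type=registry,ref=nicolabs/nicobot:cache \
    --cache-to type=registry,ref=nicolabs/nicobot:cache,mode=max \
    -t nicolabs/nicobot:debian -f debian.Dockerfile \
    --push .
```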
## Docker build process overview
This is the view from the master branch on this repository. It emphasizes FROM and COPY relations between the images (base and stages).
## Why is no image available for arch X ?
The open issues labelled with docker should reference the reasons for missing arch / configuration.
## Docker image structure
Here are the main application files and directories from within the images :
```
📦 /
 ┣ 📂 root/
 ┃  ┗ 📂 .local/
 ┃     ┣ 📂 bin/ - - - - - - - - - - - - - - - - - -> shortcuts
 ┃     ┃  ┣ 📜 askbot
 ┃     ┃  ┣ 📜 transbot
 ┃     ┃  ┗ 📜 ...
 ┃     ┣ 📂 lib/pythonX.X/site-packages/ - - - - - -> Python packages (nicobot & dependencies)
 ┃     ┗ 📂 share/signal-cli/ - - - - - - - - - - - -> signal-cli configuration files
 ┗ 📂 usr/src/app/ - - - - - - - - - - - - - - - - -> app's working directory, default configuration files, ...
    ┣ 📂 .omemo/ - - - - - - - - - - - - - - - - - -> OMEMO keys (XMPP)
    ┣ 📜 docker-entrypoint.sh
    ┣ 📜 i18n.en.yml
    ┗ 📜 ...
```
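To peek at this layout inside a published image, the entrypoint can be overridden with a plain shell (the tag is just an example) :

```
docker run --rm --entrypoint /bin/sh nicolabs/nicobot:debian \
    -c 'ls /root/.local/bin /usr/src/app'
```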
## Versioning
The `--version` command-line option that displays the bots' version relies on setuptools_scm, which extracts it from the underlying git metadata.
This is convenient because the developer does not have to manually update the version (or forget to do it) ; however it requires either the version to be fixed inside a Python module or the `.git` directory to be present.
There were several options, among which the following one has been retained :

- Running `setup.py` creates / updates the version inside the `version.py` file
- The scripts then load this module at runtime
Since the `version.py` file is not saved into the project, `setup.py build` must be run before the version can be queried. In exchange :

- it does not require setuptools nor git at runtime
- it frees us from having the `.git` directory around at runtime ; this is especially useful to make the docker images smaller
Tip : `python3 setup.py --version` will print the guessed version.
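Putting it together, the typical flow to get a version at runtime without git around is :

```
# Generates / updates nicobot/version.py from the git metadata
python3 setup.py build
# The bots can now report their version without setuptools nor the .git directory
python3 -m nicobot.askbot --version
```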
## Building signal-cli
The signal backend (actually signal-cli) requires a Java runtime, which approximately doubles the image size. This led to building separate images (same repo but different tags), so that smaller images can be used when only the XMPP backend is needed.
## Resources

### IBM Cloud

### Signal

### Jabber
- Official XMPP libraries : https://xmpp.org/software/libraries.html
- OMEMO compatible clients : https://omemo.top/
- OMEMO official Python library : looks very immature
- Gajim, a Windows/MacOS/Linux XMPP client with OMEMO support : gajim.org | dev.gajim.org/gajim
- Conversations, an Android XMPP client with OMEMO support and paid hosting : https://conversations.im
### Python libraries
- xmpppy : this library is very easy to use but it does not allow easy access to thread or timestamp, and no OMEMO...
- github.com/horazont/aioxmpp : officially referenced library from xmpp.org, seems the most complete but lacks a practical introduction and does not provide OMEMO OOTB.
- slixmpp : seems like a cool library too and claims to require minimal dependencies ; plus it supports OMEMO so it's the winner. API doc.
### Dockerfile
- Best practices for writing Dockerfiles
- Docker development best practices
- DEBIAN_FRONTEND=noninteractive trick
- Dockerfile reference
### JRE + Python in Docker
- Docker hub - python images
- docker-library/openjdk - ubuntu java package has broken cacerts
- Openjdk Dockerfiles @ github
- phusion/baseimage-docker @ github - not used in the end, because not so portable
- Azul JDK - not used in the end because not better than openjdk
- rappdw/docker-java-python image - not used because only for amd64
- Use OpenJDK builds provided by jdk.java.net?
- How to install tzdata on a ubuntu docker image?
### Multiarch & native dependencies
- docker.com - Automatic platform ARGs in the global scope
- docker/buildx @ github
- Compiling 'cryptography' for Python
- signal-cli - Providing native lib for libsignal
- github.com/signalapp/zkgroup - Compiling on raspberry pi fails
- Multi-Platform Docker Builds (including cargo-specific cross-building)
- How to build ARMv6 and ARMv7 in the same manifest file. (Compatible tag for ARMv7, ARMv6, ARM64 and AMD64)
- The "dpkg-split: No such file or directory" bug
- The "Command '('lsb_release', '-a')' returned non-zero exit status 1" bug
- Binfmt / Installing emulators
- Cross-Compile for Raspberry Pi With Docker
### Python build & Python in Docker
- Packaging Python Projects
- What Are Python Wheels and Why Should You Care?
- Using Alpine can make Python Docker builds 50× slower
- pip install manual
- pip is showing error 'lsb_release -a' returned non-zero exit status 1