Compare commits

...

7 Commits

Author SHA1 Message Date
Ariel Rin
424246df26 Version Bump 3.7.1 2023-10-19 14:00:29 +10:00
Ariel Rin
563e2210ef Bump Django-ESI to >=5.0.0 2023-10-19 13:11:35 +10:00
Ariel Rin
02a1078005 Merge branch 'remove-thirdparty' into 'master'
Remove outdated supervisor configs - refer to docs

See merge request allianceauth/allianceauth!1533
2023-10-19 03:03:35 +00:00
Ariel Rin
30107de44e Merge branch 'docs-precommit' into 'master'
Add code-style docs

Closes #1379

See merge request allianceauth/allianceauth!1534
2023-10-19 03:03:14 +00:00
Ariel Rin
77a08cd218 Add code-style docs 2023-10-07 22:32:19 +10:00
Ariel Rin
e5a09027e5 Remove outdated supervisor configs - refer to docs 2023-10-07 21:53:03 +10:00
Ariel Rin
52b6c5d341 Rework default celery configuration and documentation 2023-10-07 19:31:53 +10:00
10 changed files with 100 additions and 133 deletions

View File

@@ -5,7 +5,7 @@ manage online service access.
# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
__version__ = '3.7.0'
__version__ = '3.7.1'
__title__ = 'Alliance Auth'
__url__ = 'https://gitlab.com/allianceauth/allianceauth'
NAME = f'{__title__} v{__version__}'

View File

@@ -1,7 +1,7 @@
PROTOCOL=https://
AUTH_SUBDOMAIN=%AUTH_SUBDOMAIN%
DOMAIN=%DOMAIN%
AA_DOCKER_TAG=registry.gitlab.com/allianceauth/allianceauth/auth:v3.7.0
AA_DOCKER_TAG=registry.gitlab.com/allianceauth/allianceauth/auth:v3.7.1
# Nginx Proxy Manager
PROXY_HTTP_PORT=80

View File

@@ -1,5 +1,5 @@
FROM python:3.9-slim
ARG AUTH_VERSION=v3.7.0
ARG AUTH_VERSION=v3.7.1
ARG AUTH_PACKAGE=allianceauth==${AUTH_VERSION}
ENV VIRTUAL_ENV=/opt/venv
ENV AUTH_USER=allianceauth

View File

@@ -0,0 +1,49 @@
# Code Style
## Pre-Commit
Alliance Auth is a team effort with developers of various skill levels and backgrounds. To avoid significant drift or formatting changes between developers, we use [pre-commit](https://pre-commit.com/) to apply a very minimal set of formatting checks to code contributed to the project.
Pre-commit is also very popular with our Community Apps, where the configuration may be significantly more opinionated or looser depending on the project.
To get started, `pip install pre-commit`, then `pre-commit install` to add the git hooks.
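For reference, these are the two commands, run from inside your development environment:
```bash
# Install the pre-commit tool
pip install pre-commit

# Install the git hooks defined in the repository's .pre-commit-config.yaml
pre-commit install
```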
Before any code is committed, pre-commit will check it for uniformity and correct it where possible:
```bash
check python ast.....................................(no files to check)Skipped
check yaml...........................................(no files to check)Skipped
check json...........................................(no files to check)Skipped
check toml...........................................(no files to check)Skipped
check xml............................................(no files to check)Skipped
check for merge conflicts............................(no files to check)Skipped
check for added large files..........................(no files to check)Skipped
detect private key...................................(no files to check)Skipped
check for case conflicts.............................(no files to check)Skipped
debug statements (python)............................(no files to check)Skipped
fix python encoding pragma...........................(no files to check)Skipped
fix utf-8 byte order marker..........................(no files to check)Skipped
mixed line ending....................................(no files to check)Skipped
trim trailing whitespace.............................(no files to check)Skipped
check that executables have shebangs.................(no files to check)Skipped
fix end of files.....................................(no files to check)Skipped
Check .editorconfig rules............................(no files to check)Skipped
django-upgrade.......................................(no files to check)Skipped
pyupgrade............................................(no files to check)Skipped
```
## Editorconfig
[Editorconfig](https://editorconfig.org/) is supported by most IDEs to streamline the most common editor disparities. While checked by our pre-commit file, using this in your IDE (either automatically or via a plugin) will minimize the corrections that may need to be made.
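As a purely illustrative sketch (not necessarily the project's actual rules), an `.editorconfig` file looks something like this:
```text
# Hypothetical example of an .editorconfig file; values are illustrative only
root = true

[*]
charset = utf-8
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true
indent_style = space
indent_size = 4
```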
## Doc Strings
We prefer either [PEP-287](https://peps.python.org/pep-0287/)/[reStructuredText](https://docutils.sourceforge.io/rst.html) or [Google](https://google.github.io/styleguide/pyguide.html#381-docstrings) Docstrings.
These can be used to automatically generate our Sphinx documentation in either format.
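For illustration, here is a hypothetical function documented in each of the two accepted styles (function and parameter names are made up for this example):
```python
# Hypothetical examples; function and parameter names are illustrative only.

# reStructuredText / PEP-287 style:
def get_character_name(character_id: int) -> str:
    """Return the name of an EVE character.

    :param character_id: The character's EVE Online ID.
    :return: The character's name.
    """


# Google style:
def get_corporation_ticker(corporation_id: int) -> str:
    """Return the ticker of an EVE corporation.

    Args:
        corporation_id: The corporation's EVE Online ID.

    Returns:
        The corporation's ticker.
    """
```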
## Best Practice
It is advisable to avoid wide formatting changes on code that is not being modified by an MR. Further to this, automated code formatting should be kept to a minimum when modifying sections of existing files.
If you are contributing whole modules or rewriting large sections of code, you may use any legible code formatting valid under Python.

View File

@@ -7,4 +7,5 @@ This section contains important information on how to develop Alliance Auth itse
:maxdepth: 1
documentation
code-style
```

View File

@@ -399,16 +399,10 @@ Update & install basic tools before installing further Python packages:
pip install -U pip setuptools wheel
```
You can install **Alliance Auth** with the following command. This will install AA and all its Python dependencies.
You can install **Alliance Auth** with the following command. This will install AA, AA's Python dependencies, superlance for memory monitoring, and Gunicorn as a WSGI server.
```bash
pip install allianceauth
```
You should also install Gunicorn now unless you want to use another WSGI server (see [Gunicorn](#gunicorn) for details):
```bash
pip install gunicorn
pip install allianceauth superlance gunicorn
```
#### Create Alliance Auth project

View File

@@ -28,43 +28,11 @@ command=/home/allianceserver/venv/auth/bin/celery -A myauth worker -l info
Celery workers often have memory leaks and will therefore grow in size over time. While the Alliance Auth team is working hard to ensure Auth is free of memory leaks, some may still be caused by bugs in different versions of libraries or community apps. It is therefore good practice to enable features that protect against potential memory leaks.
There are two ways to protect against memory leaks:
- Worker
- Supervisor
### Worker
Celery workers can be configured to automatically restart if they grow above a defined memory threshold. Restarts will be graceful, so current tasks will be allowed to complete before the restart happens.
To add protection against memory leaks, add the following to the command configuration of your worker in the `supervisor.conf` file. This sets the upper limit to 256 MB.
```text
--max-memory-per-child 262144
```
Full example:
```text
command=/home/allianceserver/venv/auth/bin/celery -A myauth worker --max-memory-per-child 262144
```
```eval_rst
.. hint::
The 256 MB limit is just an example and should be adjusted to your system configuration. We would suggest to not go below 128MB though, since new workers start with around 80 MB already. Also take into consideration that this value is per worker and that you properly have more than one worker running in your system (if your workers run as processes, which is the default).
The 256 MB limit is just an example and should be adjusted to your system configuration. We suggest not going below 128 MB though, since new workers already start at around 80 MB. Also take into consideration that this value is per worker and that you may have more than one worker running in your system.
```
```eval_rst
.. warning::
The ``max-memory-per-child`` parameter only works when workers run as processes (which is the default). It does not work for threads.
```
```eval_rst
.. note::
Alternatively, you can also limit the number of runs per worker until a restart is performed with the worker parameter ``max-tasks-per-child``. This can also protect against memory leaks if you set the threshold low enough. However, it is less precise than using ``max-memory-per-child``.
```
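For illustration, `max-tasks-per-child` is added to the worker command in the same way as `max-memory-per-child`; the threshold of 100 tasks below is just an example:
```text
command=/home/allianceserver/venv/auth/bin/celery -A myauth worker --max-tasks-per-child 100
```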
See also the [official Celery documentation](https://docs.celeryproject.org/en/stable/userguide/workers.html#max-memory-per-child-setting) for more information about these two worker parameters.
### Supervisor
@@ -78,35 +46,68 @@ To setup install superlance into your venv with:
pip install superlance
```
You can then add `memmon` to your `supervisor.conf`. Here is an example setup with a worker that runs with gevent:
You can then add `memmon` to your `supervisor.conf`:
```text
[eventlistener:memmon]
command=/home/allianceserver/venv/auth/bin/memmon -p worker=512MB
command=/home/allianceserver/venv/auth/bin/memmon -p worker=256MB
directory=/home/allianceserver/myauth
events=TICK_60
```
This setup will check the memory consumption of the program "worker" every 60 secs and automatically restart it if is goes above 512 MB. Note that it will use the stop signal configured in supervisor, which is `TERM` by default. `TERM` will cause a "warm shutdown" of your worker, so all currently running tasks are completed before the restart.
This setup will check the memory consumption of the program "worker" every 60 secs and automatically restart it if is goes above 256 MB. Note that it will use the stop signal configured in supervisor, which is `TERM` by default. `TERM` will cause a "warm shutdown" of your worker, so all currently running tasks are completed before the restart.
Again, the 512 MB is just an example and should be adjusted to fit your system configuration.
Again, the 256 MB is just an example and should be adjusted to fit your system configuration.
## Increasing task throughput
Celery tasks are designed to run concurrently, so one obvious way to increase task throughput is run more tasks in parallel.
Celery tasks are designed to run concurrently, so one obvious way to increase task throughput is to run more tasks in parallel. The default celery worker configuration allows either of the following options to be configured out of the box.
### Extra Worker Threads
The easiest way to increase throughput is to increase the `numprocs` parameter of the supervisor process. For example:
```text
[program:worker]
...
numprocs=2
process_name=%(program_name)s_%(process_num)02d
...
```
This number will be multiplied by your concurrency setting:
```
numprocs * concurrency = workers
```
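With `numprocs=2` and a concurrency of 4, for instance, supervisor would run 2 × 4 = 8 workers in total.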
Increasing this number requires a modification to the memmon settings, as each `numprocs` worker gets a unique name. For example, with `numprocs=3`:
```text
[eventlistener:memmon]
...
command=... -p worker_00=256MB -p worker_01=256MB -p worker_02=256MB
...
```
```eval_rst
.. hint::
You will want to experiment with different settings to find the optimal configuration. One way to generate task load and verify your configuration is to run a model update with the following command:
::
celery -A myauth call allianceauth.eveonline.tasks.run_model_update
```
### Concurrency
This can be achieved by setting the concurrency parameter of the celery worker to a higher number. For example:
```text
--concurrency=4
--concurrency=10
```
However, there is a catch: in the default configuration each worker will spawn as its own process. So increasing the number of workers will increase both CPU load and memory consumption in your system.
The recommended number of workers is one per core, which is what you get automatically with the default configuration. Going beyond that can quickly reduce your overall system performance, i.e. the response time for Alliance Auth or other apps running on the same system may take a hit while many tasks are running.
```eval_rst
.. hint::
The optimal number will hugely depend on your individual system configuration, and you may want to experiment with different settings to find it. One way to generate task load and verify your configuration is to run a model update with the following command:
@@ -117,43 +118,6 @@ The recommended number of workers is one per core, which is what you get automat
```
### Processes vs. Threads
A better way to increase concurrency without impacting overall system performance is to switch from processes to threads for celery workers. In general, celery workers perform better with processes when tasks are primarily CPU bound, and better with threads when tasks are primarily I/O bound.
Alliance Auth tasks are primarily I/O bound (most tasks are fetching data from ESI and/or updating the local database), so threads are clearly the better choice for Alliance Auth. However, there is a catch: Celery's out-of-the-box support for threads is limited, and additional packages and configuration are required to make it work. Nonetheless, the performance gain - especially on smaller systems - is significant, so it may well be worth the additional configuration complexity.
```eval_rst
.. warning::
One important feature that no longer works with threads is the worker parameter ``--max-memory-per-child`` that protects against memory leaks. But you can alternatively use supervisor_ to monitor and restart your workers.
```
See also [this guide](https://www.distributedpython.com/2018/10/26/celery-execution-pool/) for more information about how to configure the execution pool for workers.
### Setting up for threads
First, you need to install a threads package. Celery supports both gevent and eventlet. We will go with gevent, since it's newer and better supported. Should you encounter any issues with gevent, you may want to try eventlet.
To install gevent make sure you are in your venv and install the following:
```bash
pip install gevent
```
Next, we need to reconfigure the workers to use gevent threads. For that, add the following parameters to your worker config:
```text
--pool=gevent --concurrency=10
```
Full example:
```text
command=/home/allianceserver/venv/auth/bin/celery -A myauth worker --pool=gevent --concurrency=10
```
Make sure to restart supervisor to activate the changes.
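For example, assuming supervisor runs as a system service (the exact commands depend on your setup):
```bash
# Re-read the changed configuration and apply it, restarting any affected programs
sudo supervisorctl reread
sudo supervisorctl update

# Or simply restart everything supervisor manages
sudo supervisorctl restart all
```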
```eval_rst
.. hint::
The optimal number of concurrent workers will be different for every system, and we recommend experimenting with different figures to find the optimum for your system. Note that the example of 10 threads is conservative and should work even on smaller systems.

View File

@@ -37,7 +37,7 @@ dependencies = [
"celery>=5.2.0,<6",
"django-bootstrap-form",
"django-celery-beat>=2.3.0",
"django-esi>=4.0.1",
"django-esi>=5.0.0",
"django-redis>=5.2.0",
"django-registration>=3.3,<3.4",
"django-sortedm2m",

View File

@@ -1,13 +0,0 @@
[program:auth-mumble]
command=python authenticator.py
directory=/home/allianceserver/allianceauth/thirdparty/Mumble
user=allianceserver
numprocs=1
stdout_logfile=/home/allianceserver/allianceauth/log/authenticator.log
stderr_logfile=/home/allianceserver/allianceauth/log/authenticator.log
autostart=true
autorestart=true
startsecs=10
stopwaitsecs = 600
killasgroup=true
priority=500

View File

@@ -1,28 +0,0 @@
[program:celerybeat]
command=celery -A alliance_auth beat
directory=/home/allianceserver/allianceauth
user=allianceserver
stdout_logfile=/home/allianceserver/allianceauth/log/beat.log
stderr_logfile=/home/allianceserver/allianceauth/log/beat.log
autostart=true
autorestart=true
startsecs=10
priority=998
[program:celeryd]
command=celery -A alliance_auth worker
directory=/home/allianceserver/allianceauth
user=allianceserver
numprocs=1
stdout_logfile=/home/allianceserver/allianceauth/log/worker.log
stderr_logfile=/home/allianceserver/allianceauth/log/worker.log
autostart=true
autorestart=true
startsecs=10
stopwaitsecs = 600
killasgroup=true
priority=998
[group:auth]
programs=celerybeat,celeryd
priority=999