diff --git a/docs/_static/images/features/apps/corpstats.png b/docs/_static/images/features/apps/corpstats.png
new file mode 100644
index 00000000..c6de9f2c
Binary files /dev/null and b/docs/_static/images/features/apps/corpstats.png differ
diff --git a/docs/conf.py b/docs/conf.py
index 3e06966d..abe95fa6 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -101,7 +101,10 @@ html_theme = 'sphinx_rtd_theme'
 # further. For a list of options available for each theme, see the
 # documentation.
 #
-# html_theme_options = {}
+
+html_theme_options = {
+    'navigation_depth': 4,
+}
 
 # Add any paths that contain custom static files (such as style sheets) here,
 # relative to this directory. They are copied after the builtin static files,
diff --git a/docs/maintenance/customizing.md b/docs/customizing/index.md
similarity index 97%
rename from docs/maintenance/customizing.md
rename to docs/customizing/index.md
index 3197d99a..ef1d87b6 100644
--- a/docs/maintenance/customizing.md
+++ b/docs/customizing/index.md
@@ -4,7 +4,7 @@ It is possible to customize your **Alliance Auth** instance.
 
 ```eval_rst
 .. warning::
-    Keep in mind that you may need to update some of your customizations manually after new release (e.g. when replacing AA templates).
+    Keep in mind that you may need to update some of your customizations manually after new Auth releases (e.g. when replacing templates).
 ```
 
 ## Site name
diff --git a/docs/features/apps/corpstats.md b/docs/features/apps/corpstats.md
index 2818a5de..194cd359 100644
--- a/docs/features/apps/corpstats.md
+++ b/docs/features/apps/corpstats.md
@@ -2,7 +2,7 @@
 
 This module is used to check the registration status of Corp members and to determine character relationships, being mains or alts.
 
-![corpstats](https://i.imgur.com/9lZhf5g.png)
+![corpstats](/_static/images/features/apps/corpstats.png)
 
 ## Installation
diff --git a/docs/features/services/mumble.md b/docs/features/services/mumble.md
index 2b4081e3..05a22956 100644
--- a/docs/features/services/mumble.md
+++ b/docs/features/services/mumble.md
@@ -1,50 +1,93 @@
 # Mumble
 
-## Prepare Your Settings
-In your auth project's settings file, do the following:
- - Add `'allianceauth.services.modules.mumble',` to your `INSTALLED_APPS` list
- - Append the following to your local.py settings file:
-
-```python
-    # Mumble Configuration
-    MUMBLE_URL = ""
-```
-
-## Overview
 Mumble is a free voice chat server. While not as flashy as TeamSpeak, it has all the functionality and is easier to customize. And is better. I may be slightly biased.
 
-## Dependencies
-The mumble server package can be retrieved from a repository we need to add, mumble/release.
+```eval_rst
+.. note::
+    This guide assumes that you have installed Auth according to the official :doc:`/installation/allianceauth` guide under ``/home/allianceserver`` and that your Auth project is called ``myauth``. It also assumes that you have a service user called ``allianceserver``, which is used to run all Auth services under supervisor.
+```
 
-    apt-add-repository ppa:mumble/release
-    apt-get update
+```eval_rst
+.. note::
+    Like the official installation guide, this guide assumes you are performing all steps as the ``root`` user.
+```
 
-Now two packages need to be installed:
+```eval_rst
+.. warning::
+    This guide is currently for Ubuntu only.
+```
 
-    apt-get install python-software-properties mumble-server
+## Installation
 
-Download the appropriate authenticator release from [the authenticator repository](https://gitlab.com/allianceauth/mumble-authenticator) and install the python dependencies for it:
+### Installing Mumble Server
 
-    pip install -r requirements.txt
+The mumble server package can be retrieved from a repository, which we need to add:
 
-## Configuring Mumble
-Mumble ships with a configuration file that needs customization. By default it’s located at /etc/mumble-server.ini. Open it with your favourite text editor:
+```bash
+apt-add-repository ppa:mumble/release
+```
 
-    nano /etc/mumble-server.ini
+```bash
+apt-get update
+```
 
-REQUIRED: To enable the ICE authenticator, edit the following:
+Now three packages need to be installed:
 
- - `icesecretwrite=MY_CLEVER_PASSWORD`, obviously choosing a secure password
- - ensure the line containing `Ice="tcp -h 127.0.0.1 -p 6502"` is uncommented
+```bash
+apt-get install python-software-properties mumble-server libqt5sql5-mysql
+```
 
-By default mumble operates on SQLite which is fine, but slower than a dedicated MySQL server. To customize the database, edit the following:
+### Installing Mumble Authenticator
 
- - uncomment the database line, and change it to `database=alliance_mumble`
- - `dbDriver=QMYSQL`
- - `dbUsername=allianceserver` or whatever you called the Alliance Auth MySQL user
- - `dbPassword=` that user’s password
- - `dbPort=3306`
- - `dbPrefix=murmur_`
+Next, we need to download the latest authenticator release from the [authenticator repository](https://gitlab.com/allianceauth/mumble-authenticator):
+
+```bash
+git clone https://gitlab.com/allianceauth/mumble-authenticator /home/allianceserver/mumble-authenticator
+```
+
+We will now install the authenticator into your Auth virtual environment. Please make sure to activate it first:
+
+```bash
+source /home/allianceserver/venv/auth/bin/activate
+```
+
+Install the python dependencies for the mumble authenticator. Note that this process can take a couple of minutes to complete.
+
+```bash
+pip install -r requirements.txt
+```
+
+## Configuring Mumble Server
+
+The mumble server needs its own database. Open an SQL shell with `mysql -u root -p` and execute the following SQL commands to create it:
+
+```sql
+CREATE DATABASE alliance_mumble CHARACTER SET utf8mb4;
+```
+
+```sql
+GRANT ALL PRIVILEGES ON alliance_mumble.* TO 'allianceserver'@'localhost';
+```
+
+Mumble ships with a configuration file that needs customization. By default it’s located at `/etc/mumble-server.ini`. Open it with your favorite text editor:
+
+```bash
+nano /etc/mumble-server.ini
+```
+
+We need to enable the ICE authenticator. Edit the following:
+
+- `icesecretwrite=MY_CLEVER_PASSWORD`, obviously choosing a secure password
+- ensure the line containing `Ice="tcp -h 127.0.0.1 -p 6502"` is uncommented
+
+We also want to enable Mumble to use the previously created MySQL / MariaDB database. Edit the following (a sketch of the finished settings is shown further below):
+
+- uncomment the database line, and change it to `database=alliance_mumble`
+- `dbDriver=QMYSQL`
+- `dbUsername=allianceserver` or whatever you called the Alliance Auth MySQL user
+- `dbPassword=` that user’s password
+- `dbPort=3306`
+- `dbPrefix=murmur_`
 
 To name your root channel, uncomment and set `registerName=` to whatever cool name you want
 
@@ -52,55 +95,123 @@ Save and close the file.
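+For reference, here is a sketch of how the edited lines in `/etc/mumble-server.ini` might look when you are done. The values are examples only; use your own password, database credentials and channel name:
+
+```ini
+; excerpt of /etc/mumble-server.ini with example values
+icesecretwrite=MY_CLEVER_PASSWORD
+ice="tcp -h 127.0.0.1 -p 6502"
+database=alliance_mumble
+dbDriver=QMYSQL
+dbUsername=allianceserver
+dbPassword=YOUR_DB_PASSWORD
+dbPort=3306
+dbPrefix=murmur_
+registerName=My Alliance
+```
+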
 To get Mumble superuser account credentials, run the following:
 
-    dpkg-reconfigure mumble-server
+```bash
+dpkg-reconfigure mumble-server
+```
 
-Set the password to something you’ll remember and write it down. This is needed to manage ACLs.
+Set the password to something you’ll remember and write it down. This is your superuser password, which you will later need to manage ACLs.
 
 Now restart the server to see the changes reflected.
 
-    service mumble-server restart
+```bash
+service mumble-server restart
+```
 
 That’s it! Your server is ready to be connected to at example.com:64738
 
-## Configuring the Authenticator
+## Configuring Mumble Authenticator
 
 The ICE authenticator lives in the mumble-authenticator repository, cd to the directory where you cloned it. Make a copy of the default config:
 
-    cp authenticator.ini.example authenticator.ini
+```bash
+cp authenticator.ini.example authenticator.ini
+```
 
 Edit `authenticator.ini` and change these values:
 
- - `[database]`
-   - `user = ` your allianceserver MySQL user
-   - `password = ` your allianceserver MySQL user's password
- - `[ice]`
-   - `secret = ` the `icewritesecret` password set earlier
+- `[database]`
+  - `user =` your allianceserver MySQL user
+  - `password =` your allianceserver MySQL user's password
+- `[ice]`
+  - `secret =` the `icewritesecret` password set earlier
 
-Test your configuration by starting it: `python authenticator.py`
+Test your configuration by starting it:
 
-## Running the Authenticator
+```bash
+python /home/allianceserver/mumble-authenticator/authenticator.py
+```
+
+And finally, ensure the allianceserver user has read/write permissions to the mumble authenticator files before proceeding:
+
+```bash
+chown -R allianceserver:allianceserver /home/allianceserver/mumble-authenticator
+```
 
 The authenticator needs to be running 24/7 to validate users on Mumble. This can be achieved by adding a section to your auth project's supervisor config file like the following example:
 
-```
+```text
 [program:authenticator]
-command=/path/to/venv/bin/python authenticator.py
-directory=/path/to/authenticator/directory/
+command=/home/allianceserver/venv/auth/bin/python authenticator.py
+directory=/home/allianceserver/mumble-authenticator
 user=allianceserver
-stdout_logfile=/path/to/authenticator/directory/authenticator.log
-stderr_logfile=/path/to/authenticator/directory/authenticator.log
+stdout_logfile=/home/allianceserver/myauth/log/authenticator.log
+stderr_logfile=/home/allianceserver/myauth/log/authenticator.log
 autostart=true
 autorestart=true
 startsecs=10
-priority=998
+priority=996
 ```
+In addition, we recommend adding the authenticator to Auth's restart group in your supervisor config. For that, add it to the group's program line as shown in the following example:
 
-Note that groups will only be created on Mumble automatically when a user joins who is in the group.
+```text
+[group:myauth]
+programs=beat,worker,gunicorn,authenticator
+priority=999
+```
 
-## Prepare Auth
-In your project's settings file, set `MUMBLE_URL` to the public address of your mumble server. Do not include any leading `http://` or `mumble://`.
+To enable the changes in your supervisor configuration, you need to restart the supervisor process itself. Before doing that, shut down the currently running Auth programs gracefully:
-Run migrations and restart Gunicorn and Celery to complete setup.
+```bash
+supervisorctl stop myauth:
+systemctl restart supervisor
+```
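+
+Once supervisor is back up, you can verify that the authenticator is running; assuming the group config shown above, it should appear as `myauth:authenticator` in the program list:
+
+```bash
+supervisorctl status
+```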
+
+## Configuring Auth
+
+In your auth project's settings file (`myauth/settings/local.py`), do the following:
+
+- Add `'allianceauth.services.modules.mumble',` to your `INSTALLED_APPS` list
+- Set `MUMBLE_URL` to the public address of your mumble server. Do not include any leading `http://` or `mumble://`.
+
+Example config:
+
+```python
+# Installed apps
+INSTALLED_APPS += [
+    # ...
+    'allianceauth.services.modules.mumble',
+    # ...
+]
+
+# Mumble Configuration
+MUMBLE_URL = "mumble.example.com"
+```
+
+Finally, run migrations and restart your supervisor to complete the setup:
+
+```bash
+python /home/allianceserver/myauth/manage.py migrate
+```
+
+```bash
+supervisorctl restart myauth:
+```
+
+## Permissions on Auth
+
+To enable the mumble service for users on Auth, you need to give them the `access_mumble` permission. This permission is often added to the `Member` state.
+
+```eval_rst
+.. note::
+    Note that groups will only be created on Mumble automatically when a user joins who is in the group.
+```
+
+## ACL configuration
+
+On a freshly installed mumble server, only your superuser has the right to configure ACLs and create channels. The credentials for logging in with your superuser are:
+
+- user: `SuperUser`
+- password: *what you defined when configuring your mumble server*
diff --git a/docs/index.md b/docs/index.md
index ffd58f64..56f6587d 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -15,6 +15,7 @@ Welcome to the official documentation for **Alliance Auth**!
 
    installation/index
    features/index
    maintenance/index
-   development/index
    support/index
+   customizing/index
+   development/index
 ```
diff --git a/docs/installation/allianceauth.md b/docs/installation/allianceauth.md
index 889c137d..c7ed8ead 100644
--- a/docs/installation/allianceauth.md
+++ b/docs/installation/allianceauth.md
@@ -9,7 +9,7 @@ This document describes how to install **Alliance Auth** from scratch.
 
 ```eval_rst
 .. note::
-    There are additional installation steps for activating services and apps that come with **Alliance Auth**. Please see the page for the respective service or apps in chapter **Features** for details.
+    There are additional installation steps for activating services and apps that come with **Alliance Auth**. Please see the page for the respective service or app in the :doc:`/features/index` chapter for details.
 ```
 
 ## Dependencies
diff --git a/docs/installation/upgrade_python.md b/docs/installation/upgrade_python.md
index c45bb330..c47191bf 100644
--- a/docs/installation/upgrade_python.md
+++ b/docs/installation/upgrade_python.md
@@ -97,10 +97,10 @@ If you unsure which apps you have installed from repos check `INSTALLED_APPS` in
 
 pip list
 ```
 
-Some AA installations might still be running an older version of django_celery_beat. We would recommend to upgrade to the current version before doing the Python update:
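+The next step recommends upgrading django-celery-beat, so you may first want to check which version is currently installed (assuming your venv is active):
+
+```bash
+pip show django-celery-beat
+```
+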
+Some AA installations might still be running an older version of django-celery-beat. We would recommend upgrading to the current version before doing the Python update:
 
 ```bash
-pip install -U django_celery_beat
+pip install -U 'django-celery-beat<2.00'
 ```
 
 ```bash
diff --git a/docs/maintenance/index.md b/docs/maintenance/index.md
index d014a9b8..7b99b061 100644
--- a/docs/maintenance/index.md
+++ b/docs/maintenance/index.md
@@ -1,4 +1,4 @@
-# Maintenance & Customizing
+# Maintenance
 
 In the maintenance chapter you find details about where important log files are found, how you can customize your AA installation and how to solve common issues.
 
@@ -7,8 +7,7 @@ In the maintenance chapter you find details about where important log files are
    :maxdepth: 1
 
    apps
-   customizing
   project
   troubleshooting
-
+   tuning/index
 ```
diff --git a/docs/maintenance/tuning/celery.md b/docs/maintenance/tuning/celery.md
new file mode 100644
index 00000000..c5631d55
--- /dev/null
+++ b/docs/maintenance/tuning/celery.md
@@ -0,0 +1,160 @@
+# Celery
+
+```eval_rst
+.. hint::
+    Most tunings will require a change to your supervisor configuration in your ``supervisor.conf`` file. Note that you need to restart the supervisor daemon in order for any changes to take effect. And before restarting the daemon, you may want to make sure your supervisor programs stop gracefully (Ubuntu):
+
+    ::
+
+        supervisorctl stop myauth:
+        systemctl restart supervisor
+```
+
+## Task Logging
+
+By default task logging is deactivated. Enabling task logging allows you to monitor what tasks are doing in addition to getting all warnings and error messages. To enable info logging for tasks, add the following to the command configuration of your worker in the `supervisor.conf` file:
+
+```text
+-l info
+```
+
+Full example:
+
+```text
+command=/home/allianceserver/venv/auth/bin/celery -A myauth worker -l info
+```
+
+## Protection against memory leaks
+
+Celery workers often have memory leaks and will therefore grow in size over time. While the Alliance Auth team is working hard to ensure Auth is free of memory leaks, some may still be caused by bugs in different versions of libraries or community apps. It is therefore good practice to enable features that protect against potential memory leaks.
+
+There are two ways to protect against memory leaks:
+
+- Worker
+- Supervisor
+
+### Worker
+
+Celery workers can be configured to automatically restart if they grow above a defined memory threshold. Restarts will be graceful, so current tasks will be allowed to complete before the restart happens.
+
+To add protection against memory leaks, add the following to the command configuration of your worker in the `supervisor.conf` file. This sets the upper limit to 256 MB.
+
+```text
+--max-memory-per-child 262144
+```
+
+Full example:
+
+```text
+command=/home/allianceserver/venv/auth/bin/celery -A myauth worker --max-memory-per-child 262144
+```
+
+```eval_rst
+.. hint::
+    The 256 MB limit is just an example and should be adjusted to your system configuration. We would suggest not going below 128 MB though, since new workers already start with around 80 MB. Also take into consideration that this value is per worker and that you probably have more than one worker running on your system (if your workers run as processes, which is the default).
+```
+
+```eval_rst
+.. warning::
+    The ``max-memory-per-child`` parameter only works when workers run as processes (which is the default). It does not work for threads.
+```
+
+```eval_rst
+.. note::
+    Alternatively, you can limit the number of tasks a worker may execute before it is restarted with the worker parameter ``max-tasks-per-child``. This can also protect against memory leaks if you set the threshold low enough. However, it is less precise than using ``max-memory-per-child``.
+```
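+
+A full example with this alternative might look like the following sketch; the value of 1000 tasks is purely illustrative and should be tuned to your system:
+
+```text
+command=/home/allianceserver/venv/auth/bin/celery -A myauth worker --max-tasks-per-child 1000
+```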
+
+See also the [official Celery documentation](https://docs.celeryproject.org/en/stable/userguide/workers.html#max-memory-per-child-setting) for more information about these two worker parameters.
+
+### Supervisor
+
+It is also possible to configure your supervisor to monitor and automatically restart programs that exceed a memory threshold.
+
+This is not a built-in feature and requires the 3rd party extension [superlance](https://superlance.readthedocs.io/en/latest/), which includes a set of plugin utilities for supervisor. The one that watches memory consumption is [memmon](https://superlance.readthedocs.io/en/latest/memmon.html).
+
+To set this up, install superlance into your venv with:
+
+```bash
+pip install superlance
+```
+
+You can then add `memmon` to your `supervisor.conf`. Here is an example setup with a worker that runs with gevent:
+
+```text
+[eventlistener:memmon]
+command=/home/allianceserver/venv/auth/bin/memmon -p worker=512MB
+directory=/home/allianceserver/myauth
+events=TICK_60
+```
+
+This setup will check the memory consumption of the program "worker" every 60 seconds and automatically restart it if it goes above 512 MB. Note that it will use the stop signal configured in supervisor, which is `TERM` by default. `TERM` will cause a "warm shutdown" of your worker, so all currently running tasks are completed before the restart.
+
+Again, the 512 MB is just an example and should be adjusted to fit your system configuration.
+
+## Increasing task throughput
+
+Celery tasks are designed to run concurrently, so one obvious way to increase task throughput is to run more tasks in parallel.
+
+### Concurrency
+
+This can be achieved by setting the concurrency parameter of the celery worker to a higher number. For example:
+
+```text
+--concurrency=4
+```
+
+However, there is a catch: In the default configuration each worker will spawn as its own process. So increasing the number of workers will increase both CPU load and memory consumption in your system.
+
+The recommended number of workers is one per core, which is what you get automatically with the default configuration. Going beyond that can quickly reduce your overall system performance, i.e. the response time for Alliance Auth or other apps running on the same system may take a hit while many tasks are running.
+
+```eval_rst
+.. hint::
+    The optimal number will hugely depend on your individual system configuration and you may want to experiment with different settings to find the optimum. One way to generate task load and verify your configuration is to run a model update with the following command:
+
+    ::
+
+        celery -A myauth call allianceauth.eveonline.tasks.run_model_update
+
+```
+
+### Processes vs. Threads
+
+A better way to increase concurrency without impacting overall system performance is to switch from processes to threads for celery workers. In general, celery workers perform better with processes when tasks are primarily CPU bound. And they perform better with threads when tasks are primarily I/O bound.
+
+Alliance Auth tasks are primarily I/O bound (most tasks are fetching data from ESI and/or updating the local database), so threads are clearly the better choice for Alliance Auth. However, there is a catch: Celery's out-of-the-box support for threads is limited, and additional packages and configuration are required to make it work. Nonetheless, the performance gain - especially on smaller systems - is significant, so it may well be worth the additional configuration complexity.
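+
+Before switching, you can check how your workers are currently configured by querying their stats; the output includes a pool section showing, among other things, the configured concurrency:
+
+```bash
+celery -A myauth inspect stats
+```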
+
+```eval_rst
+.. warning::
+    One important feature that no longer works with threads is the worker parameter ``--max-memory-per-child`` that protects against memory leaks. But you can alternatively use supervisor_ to monitor and restart your workers.
+```
+
+See also [this guide](https://www.distributedpython.com/2018/10/26/celery-execution-pool/) for more information about how to configure the execution pool for workers.
+
+### Setting up for threads
+
+First, you need to install a threads package. Celery supports both gevent and eventlet. We will go with gevent, since it's newer and better supported. Should you encounter any issues with gevent, you may want to try eventlet.
+
+To install gevent, make sure you are in your venv and install the following:
+
+```bash
+pip install gevent
+```
+
+Next, we need to reconfigure the workers to use gevent threads. For that, add the following parameters to your worker config:
+
+```text
+--pool=gevent --concurrency=10
+```
+
+Full example:
+
+```text
+command=/home/allianceserver/venv/auth/bin/celery -A myauth worker --pool=gevent --concurrency=10
+```
+
+Make sure to restart supervisor to activate the changes.
+
+```eval_rst
+.. hint::
+    The optimal number of concurrent workers will be different for every system, and we recommend experimenting with different figures to find the optimum for your system. Note that the example of 10 threads is conservative and should work even on smaller systems.
+```
diff --git a/docs/maintenance/tuning/gunicorn.md b/docs/maintenance/tuning/gunicorn.md
new file mode 100644
index 00000000..8cdef965
--- /dev/null
+++ b/docs/maintenance/tuning/gunicorn.md
@@ -0,0 +1,13 @@
+# Gunicorn
+
+## Number of workers
+
+The default installation will have 3 workers configured for Gunicorn. This will be fine on most systems, but if your system has more than one core, you might want to increase the number of workers to get better response times. Note that more workers will also need more RAM though.
+
+The number you set this to will depend on your own server environment, how many visitors you have etc. Gunicorn suggests `(2 x $num_cores) + 1` for the number of workers. So, for example, if you have 2 cores, you want 2 x 2 + 1 = 5 workers. See [here](https://docs.gunicorn.org/en/stable/design.html#how-many-workers) for the official discussion on this topic.
+
+For example, to get 5 workers, change the setting to `--workers=5` in your `supervisor.conf` file and then reload supervisor with the following command to activate the change (Ubuntu):
+
+```bash
+systemctl restart supervisor
+```
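+
+For orientation, here is a sketch of how the Gunicorn program section in `supervisor.conf` might look with 5 workers. The paths assume the standard installation guide, and any other flags on your command line may differ:
+
+```text
+[program:gunicorn]
+user=allianceserver
+directory=/home/allianceserver/myauth
+command=/home/allianceserver/venv/auth/bin/gunicorn myauth.wsgi --workers=5 --timeout 120
+```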
diff --git a/docs/maintenance/tuning/index.md b/docs/maintenance/tuning/index.md
new file mode 100644
index 00000000..f9055749
--- /dev/null
+++ b/docs/maintenance/tuning/index.md
@@ -0,0 +1,16 @@
+# Tuning
+
+The official installation guide will install a stable version of Alliance Auth that will work fine for most cases. However, there are a lot of levers that can be used to optimize a system. For example, some installations may be short on RAM and want to reduce the total memory footprint, even though that may reduce system performance. Others are fine with further increasing the memory footprint to get better system performance.
+
+```eval_rst
+.. warning::
+    Tuning usually has benefits and costs and should only be performed by experienced Linux administrators who understand the impact of tuning decisions on their system.
+```
+
+```eval_rst
+.. toctree::
+    :maxdepth: 1
+
+    gunicorn
+    celery
+```