[FIX] Grammar and spelling

This commit is contained in:
Peter Pfeufer
2023-12-17 20:07:14 +01:00
parent 8aeb061635
commit 6e3219fd1b
52 changed files with 422 additions and 424 deletions

View File

@@ -11,7 +11,7 @@ Your auth project is just a regular Django project - you can add in [other Djang
## Removing Apps
The following instructions will explain how you can remove an app properly fom your Alliance Auth installation.
The following instructions will explain how you can remove an app properly from your Alliance Auth installation.
:::{note}
We recommend following these instructions to avoid dangling foreign keys or orphaned Python packages on your system, which might cause conflicts with other apps down the road.
@@ -29,9 +29,9 @@ Let's first try the automatic approach by running the following command:
python manage.py migrate appname zero
```
If that worked you'll get a confirmation message.
If that works, you'll get a confirmation message.
If that did not work and you got error messages, you will need to remove the tables manually. This is pretty common btw, because many apps use sophisticated table setups, which can not be removed automatically by Django.
If that did not work and you got error messages, you will need to remove the tables manually. This is pretty common, by the way, because many apps use sophisticated table setups, which cannot be removed automatically by Django.
#### Manual table removal
@@ -48,7 +48,7 @@ sudo mysql -u root
use alliance_auth;
```
Next disable foreign key check. This makes it much easier to drop tables in any order.
Next, disable the foreign key check. This makes it much easier to drop tables in any order.
```shell
SET FOREIGN_KEY_CHECKS=0;
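-- A hedged sketch of the manual step: list the app's tables, then drop them
-- ("appname_" is a placeholder; Django usually prefixes tables with the app label)
SHOW TABLES LIKE 'appname_%';
DROP TABLE appname_examplemodel;
-- Once all of the app's tables are gone, re-enable foreign key checks
SET FOREIGN_KEY_CHECKS=1;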
@@ -77,7 +77,7 @@ exit;
### Step 2 - Remove the app from Alliance Auth
Once the tables have been removed, you you can remove the app from Alliance Auth. This is done by removing the applabel from the `INSTALLED_APPS` list in your local settings file.
Once the tables have been removed, you can remove the app from Alliance Auth. This is done by removing the applabel from the `INSTALLED_APPS` list in your local settings file.
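For illustration, a minimal sketch of that edit in your local settings file (`appname` and `some_other_app` are placeholder labels):

```python
# myauth/settings/local.py (sketch)
INSTALLED_APPS += [
    "some_other_app",
    # "appname",  # <- delete this entry to remove the app
]
```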
### Step 3 - Remove the Python package
@@ -101,4 +101,4 @@ python manage.py remove_stale_contenttypes --include-stale-apps
This inbuilt Django command will step through each contenttype and offer to delete it, displaying what exactly this will cascade to delete. Pay attention and ensure you understand exactly what is being removed before answering `yes`.
This should only cleanup uninstalled apps, deprecated permissions within apps should be cleaned up using Data Migrations by each responsible application.
This should only clean up uninstalled apps; deprecated permissions within apps should be cleaned up using Data Migrations by each responsible application.

View File

@@ -1,6 +1,6 @@
# Maintenance
In the maintenance chapter you find details about where important log files are found, how you can customize your AA installation and how to solve common issues.
In the maintenance chapter, you will find details about where important log files are found, how you can customize your AA installation, and how to solve common issues.
:::{toctree}
:maxdepth: 1

View File

@@ -1,6 +1,6 @@
# Folder structure
When installing Alliance Auth you are instructed to run the `allianceauth start` command which generates a folder containing your auth project. This auth project is based off Alliance Auth but can be customized how you wish.
When installing Alliance Auth, you are instructed to run the `allianceauth start` command, which generates a folder containing your auth project. This auth project is based on Alliance Auth but can be customized as you wish.
## The myauth folder
@@ -26,9 +26,9 @@ And finally the settings folder.
Within the settings folder live two settings files: `base.py` and `local.py`
The base settings file contains everything needed to run Alliance Auth. It handles configuration of Django and Celery, defines logging, and many other Django-required settings. This file should not be edited. While updating Alliance Auth you may be instructed to update the base settings file - this is achieved through the `allianceauth update` command which overwrites the existing base settings file.
The base settings file contains everything needed to run Alliance Auth. It handles configuration of Django and Celery, defines logging, and many other Django-required settings. This file should not be edited. While updating Alliance Auth, you may be instructed to update the base settings file - this is achieved through the `allianceauth update` command which overwrites the existing base settings file.
The local settings file is referred to as "your auth project's settings file" and you are instructed to edit it during the install process. You can add any additional settings required by other apps to this file. Upon creation the first line is `from .base import *` meaning all settings defined in the base settings file are loaded. You can override any base setting by simply redefining it in your local settings file.
The local settings file is referred to as "your auth project's settings file", and you are instructed to edit it during the installation process. You can add any additional settings required by other apps to this file. Upon creation, the first line is `from .base import *`, meaning all settings defined in the base settings file are loaded. You can override any base setting by simply redefining it in your local settings file.
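For example, a minimal `local.py` sketch (`DEBUG` and `TIME_ZONE` are standard Django settings, used here purely as illustrations):

```python
# myauth/settings/local.py
from .base import *  # load every setting defined in the base settings file

# Override any base setting by redefining it:
DEBUG = False
TIME_ZONE = "UTC"
```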
## Log Files
@@ -39,4 +39,4 @@ Your auth project comes with four log file definitions by default. These are cre
- `beat.log` contains logging messages from the background task scheduler. This is of limited use unless the scheduler isn't starting.
- `gunicorn.log` contains logging messages from Gunicorn workers. This contains all web-sourced messages found in `allianceauth.log` as well as runtime errors from the workers themselves.
When asking for assistance with your auth project be sure to first read the logs, and share any relevant entries.
When asking for assistance with your auth project, be sure to first read the logs and share any relevant entries.
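A quick way to do that from your auth project folder (a sketch; `worker.log` is assumed to be the fourth default log file):

```shell
tail -n 50 myauth/log/allianceauth.log
grep -i error myauth/log/worker.log
```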

View File

@@ -2,13 +2,13 @@
## Logging
In its default configuration your auth project logs INFO and above messages to myauth/log/allianceauth.log. If you're encountering issues it's a good idea to view DEBUG messages as these greatly assist the troubleshooting process. These are printed to the console with manually starting the webserver via `python manage.py runserver`.
In its default configuration, your auth project logs INFO and higher messages to myauth/log/allianceauth.log. If you're encountering issues, it's a good idea to view DEBUG messages, as these greatly assist the troubleshooting process. These are printed to the console when manually starting the webserver via `python manage.py runserver`.
To record DEBUG messages in the log file, alter a setting in your auth project's settings file: `LOGGING['handlers']['log_file']['level'] = 'DEBUG'`. After restarting gunicorn and celery your log file will record all logging messages.
To record DEBUG messages in the log file, alter a setting in your auth project's settings file: `LOGGING['handlers']['log_file']['level'] = 'DEBUG'`. After restarting gunicorn and celery, your log file will record all logging messages.
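With the standard supervisor setup, that restart would be (the `myauth:` group name is assumed from the default install):

```shell
supervisorctl restart myauth:
```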
## Common Problems
### I'm getting an error 500 trying to connect to the website on a new install
### I'm getting error 500 when trying to connect to the website on a new installation
*Great.* Error 500 is the generic message given by your web server when *anything* breaks. The actual error message is hidden in one of your auth project's log files. Read them to identify it.
@@ -39,7 +39,7 @@ This usually indicates an issue with your email settings. Ensure these are corre
### No images are available to users accessing the website
This is likely due to a permissions mismatch. Check the setup guide for your web server. Additionally ensure the user who owns `/var/www/myauth/static` is the same user as running your webserver, as this can be non-standard.
This is likely due to a permission mismatch. Check the setup guide for your web server. Additionally ensure the user who owns `/var/www/myauth/static` is the same user as running your webserver, as this can be non-standard.
### Unable to execute 'gunicorn myauth.wsgi' or ImportError: No module named 'myauth.wsgi'
@@ -55,4 +55,4 @@ Specified key was too long; max key length is 767 bytes
This error will occur if you are trying to use MariaDB prior to 10.2.x, which is not compatible with Alliance Auth.
Install a never Maria DB version to fix this issue another DBMS supported by Django 2.2.
Install a newer MariaDB version or another DBMS supported by Django 2.2 to fix this issue.
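You can check which version you are currently running with:

```shell
mysql --version
```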

View File

@@ -1,7 +1,7 @@
# Celery
:::{hint}
Most tunings will require a change to your supervisor configuration in your `supervisor.conf` file. Note that you need to restart the supervisor daemon in order for any changes to take effect. And before restarting the daemon you may want to make sure your supervisors stop gracefully:(Ubuntu):
Most tunings will require a change to your supervisor configuration in your `supervisor.conf` file. Note that you need to restart the supervisor daemon for any changes to take effect. And before restarting the daemon, you may want to make sure your supervisors stop gracefully (Ubuntu):
```bash
supervisorctl stop myauth:
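# then, once your changes are in place, restart the daemon (Ubuntu):
systemctl restart supervisor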
@@ -12,7 +12,7 @@ systemctl supervisor restart
## Task Logging
By default task logging is deactivated. Enabling task logging allows you to monitor what tasks are doing in addition to getting all warnings and error messages. To enable info logging for tasks add the following to the command configuration of your worker in the `supervisor.conf` file:
By default, task logging is deactivated. Enabling task logging allows you to monitor what tasks are doing in addition to getting all warnings and error messages. To enable info logging for tasks, add the following to the command configuration of your worker in the `supervisor.conf` file:
```ini
-l info
@@ -26,7 +26,7 @@ command=/home/allianceserver/venv/auth/bin/celery -A myauth worker -l info
## Protection against memory leaks
Celery workers often have memory leaks and will therefore grow in size over time. While the Alliance Auth team is working hard to ensure Auth is free of memory leaks some may still be cause by bugs in different versions of libraries or community apps. It is therefore good practice to enable features that protect against potential memory leaks.
Celery workers often have memory leaks and will therefore grow in size over time. While the Alliance Auth team is working hard to ensure Auth is free of memory leaks, some may still be caused by bugs in different versions of libraries or community apps. It is therefore good practice to enable features that protect against potential memory leaks.
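One such feature is Celery's own per-worker memory cap. A hedged sketch of the worker command in `supervisor.conf` (`--max-memory-per-child` takes a value in KiB, so 256 MB ≈ 262144):

```ini
command=/home/allianceserver/venv/auth/bin/celery -A myauth worker --max-memory-per-child=262144 -l info
```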
:::{hint}
The 256 MB limit is just an example and should be adjusted to your system configuration. We would suggest not going below 128 MB though, since new workers already start with around 80 MB. Also, take into consideration that this value is per worker and that you may have more than one worker running on your system.
@@ -36,9 +36,9 @@ The 256 MB limit is just an example and should be adjusted to your system config
It is also possible to configure your supervisor to monitor and automatically restart programs that exceed a memory threshold.
This is not a built in feature and requires the 3rd party extension [superlance](https://superlance.readthedocs.io/en/latest/), which includes a set of plugin utilities for supervisor. The one that watches memory consumption is [memmon](https://superlance.readthedocs.io/en/latest/memmon.html).
This is not a built-in feature and requires the 3rd party extension [superlance](https://superlance.readthedocs.io/en/latest/), which includes a set of plugin utilities for supervisor. The one that watches memory consumption is [memmon](https://superlance.readthedocs.io/en/latest/memmon.html).
To setup install superlance into your venv with:
To install superlance into your venv, run:
```shell
pip install superlance
@@ -53,13 +53,13 @@ directory=/home/allianceserver/myauth
events=TICK_60
```
This setup will check the memory consumption of the program "worker" every 60 secs and automatically restart it if is goes above 256 MB. Note that it will use the stop signal configured in supervisor, which is `TERM` by default. `TERM` will cause a "warm shutdown" of your worker, so all currently running tasks are completed before the restart.
This setup will check the memory consumption of the program "worker" every 60 seconds and automatically restart it if it goes above 256 MB. Note that it will use the stop signal configured in supervisor, which is `TERM` by default. `TERM` will cause a "warm shutdown" of your worker, so all currently running tasks are completed before the restart.
Again, the 256 MB is just an example and should be adjusted to fit your system configuration.
## Increasing task throughput
Celery tasks are designed to run concurrently, so one obvious way to increase task throughput is run more tasks in parallel. The default celery worker configuration will allow either of these options to be configured out of the box.
Celery tasks are designed to run concurrently, so one obvious way to increase task throughput is to run more tasks in parallel. The default celery worker configuration will allow either of these options to be configured out of the box.
### Extra Worker Threads
@@ -73,13 +73,13 @@ process_name=%(program_name)s_%(process_num)02d
...
```
This number will be multiplied by your concurrency setting,
This number will be multiplied by your concurrency setting. For example:
```text
numprocs * concurrency = workers
```
increasing this number will require a modification to the memmon settings as each `numproc` worker will get a unique name for example with `numproc=3`
Increasing this number will require a modification to the memmon settings, as each `numproc` worker will get a unique name. For example, with `numproc=3`:
```ini
[eventlistener:memmon]
@@ -89,7 +89,7 @@ command=... -p worker_00=256MB -p worker_01=256MB -p worker_02=256MB
```
:::{hint}
You will want to experiment with different settings to find the optimal. One way to generate task load and verify your configuration is to run a model update with the following command:
You will want to experiment with different settings to find the optimal configuration. One way to generate some task load and verify your configuration is to run a model update with the following command:
```bash
celery -A myauth call allianceauth.eveonline.tasks.run_model_update
@@ -106,7 +106,7 @@ This can be achieved by the setting the concurrency parameter of the celery work
```
:::{hint}
The optimal number will hugely depend on your individual system configuration and you may want to experiment with different settings to find the optimal. One way to generate task load and verify your configuration is to run a model update with the following command:
The optimal number will hugely depend on your individual system configuration, and you may want to experiment with different settings to find it. One way to generate some task load and verify your configuration is to run a model update with the following command:
```bash
celery -A myauth call allianceauth.eveonline.tasks.run_model_update
@@ -115,5 +115,5 @@ celery -A myauth call allianceauth.eveonline.tasks.run_model_update
:::
:::{hint}
The optimal number of concurrent workers will be different for every system and we recommend experimenting with different figures to find the optimal for your system. Note, that the example of 10 threads is conservative and should work even with smaller systems.
The optimal number of concurrent workers will be different for every system, and we recommend experimenting with different figures to find the best fit for your system. Note that the example of 10 threads is conservative and should work even with smaller systems.
:::

View File

@@ -2,11 +2,11 @@
## Number of workers
The default installation will have 3 workers configured for Gunicorn. This will be fine on most system, but if your system as more than one core than you might want to increase the number of workers to get better response times. Note that more workers will also need more RAM though.
The default installation will have 3 workers configured for Gunicorn. This will be fine on most systems, but if your system has more than one core, you might want to increase the number of workers to get better response times. Note that more workers will also need more RAM, though.
The number you set this to will depend on your own server environment, how many visitors you have etc. Gunicorn suggests `(2 x $num_cores) + 1` for the number of workers. So for example if you have 2 cores you want 2 x 2 + 1 = 5 workers. See [here](https://docs.gunicorn.org/en/stable/design.html#how-many-workers) for the official discussion on this topic.
The number you set this to will depend on your own server environment, how many visitors you have, etc. Gunicorn suggests `(2 x $num_cores) + 1` for the number of workers. So for example, if you have 2 cores, you want 2 x 2 + 1 = 5 workers. See [here](https://docs.gunicorn.org/en/stable/design.html#how-many-workers) for the official discussion on this topic.
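A quick way to compute that suggestion on the box itself:

```shell
# suggested gunicorn workers = (2 x cores) + 1
echo $(( 2 * $(nproc) + 1 ))
```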
For example to get 5 workers change the setting `--workers=5` in your `supervisor.conf` file and then reload the supervisor with the following command to activate the change (Ubuntu):
For example, to get 5 workers, change the setting to `--workers=5` in your `supervisor.conf` file and then reload the supervisor with the following command to activate the change (Ubuntu):
```shell
systemctl restart supervisor

View File

@@ -1,9 +1,9 @@
# Tuning
The official installation guide will install a stable version of Alliance Auth that will work fine for most cases. However, there are a lot of levels that can be used to optimize a system. For example some installations may we short on RAM and want to reduce the total memory footprint, even though that may reduce system performance. Others are fine with further increasing the memory footprint to get better system performance.
The official installation guide will install a stable version of Alliance Auth that will work fine for most cases. However, there are a lot of levers that can be used to optimize a system. For example, some installations may be short on RAM and want to reduce the total memory footprint, even though that may reduce system performance. Others are fine with further increasing the memory footprint to get better system performance.
:::{warning}
Tuning usually has benefits and costs and should only be performed by experienced Linux administrators who understand the impact of tuning decisions on to their system.
Tuning usually has benefits and costs and should only be performed by experienced Linux administrators who understand the impact of tuning decisions on their system.
:::
:::{toctree}

View File

@@ -6,9 +6,9 @@ Newer versions of python can focus heavily on performance improvements, some mor
As a general rule, Python 3.9 and Python 3.11 both had a strong focus on performance improvements. Python 3.12 is looking promising but has yet to have widespread testing, adoption and deployment. A simple comparison is available at [speed.python.org](https://speed.python.org/comparison/?exe=12%2BL%2B3.11%2C12%2BL%2B3.12%2C12%2BL%2B3.10%2C12%2BL%2B3.9%2C12%2BL%2B3.8&ben=746&env=1&hor=false&bas=none&chart=normal+bars).
[Djangobench](https://github.com/django/djangobench/tree/master) is source of synthetic benchmarks and a useful tool for running comparisons. Below are some examples to inform your investigations.
[Djangobench](https://github.com/django/djangobench/tree/master) is a source of synthetic benchmarks and a useful tool for running comparisons. Below are some examples to inform your investigations.
Keep in mind while a 1.2x faster result is significant, it's only one step of the process, Celery, SQL, Redis and many other factors will influence the end result and this _python_ speed improvement will not translate 1:1 into real world performance.
Keep in mind that while a 1.2x faster result is significant, it's only one step of the process; Celery, SQL, Redis, and many other factors will influence the end result, and this _python_ speed improvement will not translate 1:1 into real-world performance.
### Django 4.0.10

View File

@@ -2,15 +2,15 @@
SQL Tuning is usually the realm of experienced Database Admins, as it can be full of missteps leading to worse performance. It is _extremely_ important that you take it slowly: make one change at a time, with dedicated research, and test before and after.
Before you start down this path its best to update [MariaDB](https://mariadb.org/download/?t=repo-config) / MySQL. Performance Schemas, some default tuning and other general performance improvements are only available on new versions. You must also allow your server to run for 24 hours at least to gather accurate data.
Before you start down this path, it's best to update [MariaDB](https://mariadb.org/download/?t=repo-config) / MySQL. Performance Schemas, some default tuning, and other general performance improvements are only available on newer versions. You must also allow your server to run for at least 24 hours to gather accurate data.
## MySQLTuner
[MySQLTuner](https://github.com/major/MySQLTuner-perl) is a Perl script that will analyze inbuilt metrics and spit out recommendations.
[MySQLTuner](https://github.com/major/MySQLTuner-perl) is a Perl script that will analyze inbuilt metrics and provide recommendations.
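A typical run looks like this (the raw-file URL is an assumption; fetch the script however you prefer from the linked repository):

```shell
wget https://raw.githubusercontent.com/major/MySQLTuner-perl/master/mysqltuner.pl
perl mysqltuner.pl
```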
### [performance_schema](https://mariadb.com/kb/en/performance-schema-system-variables/#performance_schema)
This should be ON for 24 hours before applying any recommendations, then can be turned _OFF_ to save Memory while its not needed.
This should be ON for 24 hours before applying any recommendations, then can be turned _OFF_ to save memory while it's not needed.
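To switch it on, set the variable in your MariaDB server config (a sketch; the config file location varies by distribution):

```ini
[mysqld]
performance_schema = ON
```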
```bash
-------- Performance schema ------------------------------------------------------------------------
@@ -30,7 +30,7 @@ Note you must use 127.0.0.1 for localhost connections, and all entries in GRANT
### [table_definition_cache](https://mariadb.com/kb/en/server-system-variables/#table_definition_cache)
Will usually need to be expanded on installs with many extensions
This usually needs to be expanded on installations with many extensions.
Most installs should cache all their tables, but if your hit rate is still quite high, you may have a lot of rarely used tables that you don't need to waste memory caching.
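A hedged sketch of raising it in the server config, mirroring the tuner's own suggestion of autosizing:

```ini
[mysqld]
table_definition_cache = -1   # autosizing, where supported
```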
@@ -49,9 +49,9 @@ table_definition_cache (400) > 567 or -1 (autosizing if supported)
### [innodb_buffer_pool_size](https://mariadb.com/kb/en/innodb-system-variables/#innodb_buffer_pool_size)
This is in short, the amount of memory assigned to store data for faster reads. If you are memory starved you should not increase this variable regardless of the suggestions of this tool. Pushing SQL cache to pagefile will not result in faster queries.
This is, in short, the amount of memory assigned to store data for faster reads. If you are memory starved, you should not increase this variable regardless of the suggestions of this tool. Pushing SQL cache to pagefile will not result in faster queries.
If you are not memory starved, you can wind this up to the amount of total data you have to store it all in memory. This would be a significant performance increase for larger installs on dedicated hardware with memory to spare.
If you are not memory starved, you can wind this up to the total size of your data to store it all in memory. This would be a significant performance increase for larger installations on dedicated hardware with memory to spare.
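As a sketch, sized against the 651.6M data size reported in the tuner output below (rounded up; adjust to your own data size and free memory):

```ini
[mysqld]
innodb_buffer_pool_size = 1G
```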
```bash
[!!] InnoDB buffer pool / data size: 128.0M / 651.6M
@@ -67,7 +67,7 @@ If you are not memory starved, you can wind this up to the amount of total data
[innodb_log_file_size](https://mariadb.com/kb/en/innodb-system-variables/#innodb_log_file_size) This is your _write_ log, used to redo any commits in the event of a crash. MySQLTuner recommends this be 1/4 of your innodb_buffer_pool / read buffer. I would not lower this past the default size.
[innodb_log_buffer_size](https://mariadb.com/kb/en/innodb-system-variables/#innodb_log_buffer_size) This is the memory buffer for the write log. Larger transactions will benefit from a larger setting.
[innodb_log_buffer_size](https://mariadb.com/kb/en/innodb-system-variables/#innodb_log_buffer_size) This is the memory buffer for the "write" log. Larger transactions will benefit from a larger setting.
```bash
[!!] Ratio InnoDB log file size / InnoDB Buffer pool size (75%): 96.0M \* 1 / 128.0M should be equal to 25%
@@ -79,7 +79,7 @@ If you are not memory starved, you can wind this up to the amount of total data
### [innodb_file_per_table](https://mariadb.com/kb/en/innodb-system-variables/#innodb_file_per_table)
This is not for performance, but for file system utilization and ease of use. While off all tables are stored in a single monolith file, as opposed to individual files. This is deprecated and set to ON in MariaDB 11.x
This is not for performance, but for file system utilization and ease of use. While off, all tables are stored in a single monolith file, as opposed to individual files. This is deprecated and set to ON in MariaDB 11.x
```bash
[OK] InnoDB File per table is activated
@@ -87,7 +87,7 @@ This is not for performance, but for file system utilization and ease of use. Wh
### [join_buffer_size](https://mariadb.com/kb/en/server-system-variables/#join_buffer_size)
It is always better to optimize a table with indexes, if you have valuable performance data and analysis please reach out to either the Alliance Auth or Community dev responsible for the data that could benefit from indexes. MySQLTuner will likely recommend increasing this number for as long as there are any queries that could benefit, regardless of their resulting performance impact.
It is always better to optimize a table with indexes. If you have valuable performance data and analysis, please reach out to either the Alliance Auth or Community dev responsible for the data that could benefit from indexes. MySQLTuner will likely recommend increasing this number for as long as there are any queries that could benefit, regardless of their resulting performance impact.
Also keep in mind this is _per thread_: with a potential 200 connections, 256 KB * 200 = 50 MB, so scaling this setting out too far can result in more memory use than expected.
@@ -121,7 +121,7 @@ tmp_table_size and max_heap_table_size should be increased together.
Index buffer for MyISAM tables. If you store no or very little data in MyISAM tables, you may reclaim some memory here.
In this example we still have some MyISAM tables, you may have none
In this example, we still have some MyISAM tables. You may have none.
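The MyISAM index buffer is controlled by `key_buffer_size`; a hedged example (the value is illustrative only):

```ini
[mysqld]
key_buffer_size = 16M   # keep this small if you hold little or no MyISAM data
```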
```bash
[--] General MyIsam metrics:
@@ -152,7 +152,7 @@ Index and data buffer for Aria tables, If you use no or very little data in Aria
## Swappiness
Swappiness is not an SQL variable but part of your system kernel. Swappiness controls how much free memory a server "likes" to have at any given time, and how frequently it shifts data to swapfile in order to free up memory. Desktop operating systems will have this value set quite high, whereas servers are less aggressive with their swapfile.
Swappiness is not an SQL variable but part of your system kernel. Swappiness controls how much free memory a server "likes" to have at any given time, and how frequently it shifts data to swapfile to free up memory. Desktop operating systems will have this value set quite high, whereas servers are less aggressive with their swapfile.
Database workloads especially benefit from having their caches stay in memory, so values under 10 are recommended for a dedicated database server. 10 is a good compromise for a mixed-use server with adequate memory.
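To apply a value of 10 at runtime and persist it across reboots (standard sysctl usage):

```shell
sudo sysctl vm.swappiness=10
echo "vm.swappiness=10" | sudo tee -a /etc/sysctl.conf
```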
@@ -193,7 +193,7 @@ vm.swappiness <= 10 (echo 10 > /proc/sys/vm/swappiness) or vm.swappiness=10 in /
## Max Asynchronous IO
Unless you are still operating on spinning rust (Hard Disk Drives), or an IO limited VPS, you can likely increase this value. Database workloads appreciate the additional scaling.
Unless you are still operating on spinning rust (Hard Disk Drives), or an IO-limited VPS, you can likely increase this value. Database workloads appreciate the additional scaling.
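The kernel knob in question is `fs.aio-max-nr` (an assumption based on MySQLTuner's kernel tuning report; the value below is a common suggestion, not an official one):

```shell
sudo sysctl fs.aio-max-nr=1048576
```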
```bash
[--] Information about kernel tuning: