Commit 66e9378d authored by Ziirish's avatar Ziirish

doc: rework for #311

parent c7fb8a0f
Pipeline #1670 failed with stages
in 8 minutes and 55 seconds
......@@ -114,6 +114,8 @@ follow:
# storage backend for session and cache
# may be either 'default' or 'redis'
storage = default
# redis server to connect to
redis = localhost:6379
# session database to use
# may also be a backend url like: redis://localhost:6379/0
# if set to 'redis', the backend url defaults to:
......@@ -128,8 +130,6 @@ follow:
# where <redis_host> is the host part, and <redis_port> is the port part of
# the above "redis" setting
cache = default
# redis server to connect to
redis = localhost:6379
# whether to use celery or not
# may also be a broker url like: redis://localhost:6379/0
# if set to "true", the broker url defaults to:
......@@ -137,11 +137,6 @@ follow:
# where <redis_host> is the host part, and <redis_port> is the port part of
# the above "redis" setting
celery = false
# database url to store some persistent data
# none or a connect string supported by SQLAlchemy:
# http://docs.sqlalchemy.org/en/latest/core/engines.html#database-urls
# example: sqlite:////var/lib/burpui/store.db
database = none
# whether to rate limit the API or not
# may also be a redis url like: redis://localhost:6379/0
# if set to "true" or "redis" or "default", the url defaults to:
......@@ -153,6 +148,11 @@ follow:
# limiter ratio
# see https://flask-limiter.readthedocs.io/en/stable/#ratelimit-string
ratio = 60/minute
# database url to store some persistent data
# none or a connect string supported by SQLAlchemy:
# http://docs.sqlalchemy.org/en/latest/core/engines.html#database-urls
# example: sqlite:////var/lib/burpui/store.db
database = none
# you can change the prefix if you are behind a reverse-proxy under a custom
# root path. For example: /burpui
# You can also configure your reverse-proxy to announce the prefix through the
......@@ -170,6 +170,23 @@ follow:
proxy_fix_args = "{'x_for': {num_proxies}, 'x_host': {num_proxies}, 'x_prefix': {num_proxies}}"
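The defaulting rules spelled out in the comments above can be sketched in a few lines of Python. This helper is purely illustrative and is not part of Burp-UI; it only mirrors the documented mapping (session → db 0, cache → db 1, celery → db 2, limiter → db 3):

```python
# Illustrative only: derive the effective redis URL for a [Production]
# option, following the defaulting rules described in the comments above.
# (limiter additionally accepts "default" as a trigger; omitted for brevity)
DEFAULT_DB = {"session": 0, "cache": 1, "celery": 2, "limiter": 3}

def default_backend_url(option, value, redis="localhost:6379"):
    if value.startswith("redis://"):
        return value                    # an explicit backend url wins
    if value in ("redis", "true"):
        host, _, port = redis.partition(":")
        return "redis://%s:%s/%d" % (host, port or "6379", DEFAULT_DB[option])
    return None                         # feature disabled / default storage

print(default_backend_url("session", "redis"))  # redis://localhost:6379/0
print(default_backend_url("celery", "true"))    # redis://localhost:6379/2
```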
- *storage*: What storage engine should be used for sessions, cache, etc. Can
only be one of: ``default`` or ``redis``.
- *redis*: redis server to use.
- *session*: redis database to use, by default (if set to ``redis``) we use
database **0** on the server provided in *redis*.
- *cache*: redis database to use, by default (if set to ``redis``) we use
database **1** on the server provided in *redis*.
- *celery*: redis database to use as broker and message queue for Celery, by
default (if set to ``true``) we use database **2** on the server provided in
*redis*. You can also set it to ``false`` to disable Celery support.
- *limiter*: redis database to use, by default (if set to ``redis``) we use
database **3** on the server provided in *redis*.
- *ratio*: Limiter ratio. See `Limiter <https://flask-limiter.readthedocs.io/en/stable/#ratelimit-string>`_
documentation for details.
- *database*: Enable SQL persistent storage. Can be ``none`` (to disable SQL)
or any valid `SQLAlchemy <http://docs.sqlalchemy.org/en/latest/core/engines.html#database-urls>`_
connect string.
- *prefix*: You can host `Burp-UI`_ behind a sub-root path. See the `gunicorn
<gunicorn.html#sub-root-path>`__ page for details.
- *num_proxies*: This is useful only if you host `Burp-UI`_ behind a
......@@ -867,7 +884,7 @@ Now you can add *basic audit* specific options:
.. note::
The *basic* audit backend inherits the global application logger, so you may
see *duplicate* log entries depending on both loggers' debug levels.
.. _Burp: http://burp.grke.org/
......
......@@ -65,7 +65,16 @@ The architecture is described below:
+---------------------+
Requirements
Installation
------------
There is a dedicated PyPI package, ``burp-ui-monitor``, that you can install
with ``pip install burp-ui-monitor`` if you want the bare minimum to use
alongside the `bui-agent`_.
Alternatively, the `bui-monitor` command is also part of the full ``burp-ui``
installation.
Presentation
------------
The monitor pool is powered by asyncio through trio.
......@@ -174,6 +183,283 @@ My feeling is: the more CPU cores you have, the more performance improvement
you'll notice over the `burp2`_ backend, because we let the kernel handle I/O
parallelization with the `parallel`_ backend and `bui-monitor`_.
I also ran similar tests in a *production* environment with more than 100
clients; here are the results:
::
# Tests against the *parallel* backend with 16 processes in the pool
➜ ~ ab -A user:password -H "X-No-Cache:True" -n 100 -c 10 https://backup1.example.org/api/client/stats/client1
This is ApacheBench, Version 2.3 <$Revision: 1807734 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking backup1.example.org (be patient).....done
Server Software: nginx
Server Hostname: backup1.example.org
Server Port: 443
SSL/TLS Protocol: TLSv1.2,ECDHE-RSA-AES256-GCM-SHA384,4096,256
TLS Server Name: backup1.example.org
Document Path: /api/client/stats/client1
Document Length: 2713 bytes
Concurrency Level: 10
Time taken for tests: 18.832 seconds
Complete requests: 100
Failed requests: 0
Total transferred: 313100 bytes
HTML transferred: 271300 bytes
Requests per second: 5.31 [#/sec] (mean)
Time per request: 1883.233 [ms] (mean)
Time per request: 188.323 [ms] (mean, across all concurrent requests)
Transfer rate: 16.24 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 9 16 13.0 12 72
Processing: 75 1862 3347.6 222 13963
Waiting: 75 1862 3347.6 222 13963
Total: 86 1878 3358.2 237 14009
Percentage of the requests served within a certain time (ms)
50% 237
66% 679
75% 2355
80% 2930
90% 8556
95% 11619
98% 11878
99% 14009
100% 14009 (longest request)
# Tests against gunicorn+gevent with the plain *burp2* backend
➜ ~ ab -A user:password -H "X-No-Cache:True" -n 100 -c 10 https://backup1.example.org/api/client/stats/client1
This is ApacheBench, Version 2.3 <$Revision: 1807734 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking backup1.example.org (be patient).....done
Server Software: nginx
Server Hostname: backup1.example.org
Server Port: 443
SSL/TLS Protocol: TLSv1.2,ECDHE-RSA-AES256-GCM-SHA384,4096,256
TLS Server Name: backup1.example.org
Document Path: /api/client/stats/client1
Document Length: 2713 bytes
Concurrency Level: 10
Time taken for tests: 54.601 seconds
Complete requests: 100
Failed requests: 0
Total transferred: 313100 bytes
HTML transferred: 271300 bytes
Requests per second: 1.83 [#/sec] (mean)
Time per request: 5460.086 [ms] (mean)
Time per request: 546.009 [ms] (mean, across all concurrent requests)
Transfer rate: 5.60 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 9 18 11.1 13 52
Processing: 27 5357 4021.1 4380 18894
Waiting: 27 5357 4021.0 4380 18894
Total: 40 5375 4024.5 4402 18940
Percentage of the requests served within a certain time (ms)
50% 4402
66% 6048
75% 7412
80% 8114
90% 11077
95% 12767
98% 18916
99% 18940
100% 18940 (longest request)
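The headline numbers in these reports are simple ratios of the totals. As a quick sanity check, here they are recomputed for the first (*parallel*) run above, using Python as a calculator:

```python
# First run above: 100 requests, concurrency 10, parallel backend
requests, concurrency = 100, 10
seconds, total_bytes = 18.832, 313100

rps = requests / seconds                  # "Requests per second"
per_request = seconds / requests * 1000   # mean across all concurrent requests
per_request_c = per_request * concurrency # mean as seen by a single client
kbytes_sec = total_bytes / 1024 / seconds # "Transfer rate"

print(round(rps, 2), round(per_request, 1), round(kbytes_sec, 2))
# 5.31 188.3 16.24
```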
What's interesting with the *parallel* backend is that it can handle even more
requests with low overhead, as you can see here:
::
➜ ~ ab -A user:password -H "X-No-Cache:True" -n 500 -c 10 https://backup1.example.org/api/client/stats/client1
This is ApacheBench, Version 2.3 <$Revision: 1807734 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking backup1.example.org (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Finished 500 requests
Server Software: nginx
Server Hostname: backup1.example.org
Server Port: 443
SSL/TLS Protocol: TLSv1.2,ECDHE-RSA-AES256-GCM-SHA384,4096,256
TLS Server Name: backup1.example.org
Document Path: /api/client/stats/client1
Document Length: 2713 bytes
Concurrency Level: 10
Time taken for tests: 28.073 seconds
Complete requests: 500
Failed requests: 0
Total transferred: 1565500 bytes
HTML transferred: 1356500 bytes
Requests per second: 17.81 [#/sec] (mean)
Time per request: 561.454 [ms] (mean)
Time per request: 56.145 [ms] (mean, across all concurrent requests)
Transfer rate: 54.46 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 8 15 8.8 13 72
Processing: 101 546 856.5 209 3589
Waiting: 101 546 856.5 209 3589
Total: 114 561 860.3 223 3661
Percentage of the requests served within a certain time (ms)
50% 223
66% 241
75% 264
80% 298
90% 2221
95% 2963
98% 3316
99% 3585
100% 3661 (longest request)
➜ ~ ab -A user:password -H "X-No-Cache:True" -n 1000 -c 10 https://backup1.example.org/api/client/stats/client1
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking backup1.example.org (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests
Server Software: nginx
Server Hostname: backup1.example.org
Server Port: 443
SSL/TLS Protocol: TLSv1/SSLv3,ECDHE-RSA-AES256-GCM-SHA384,4096,256
Document Path: /api/client/stats/client1
Document Length: 2708 bytes
Concurrency Level: 10
Time taken for tests: 69.908 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Total transferred: 3126000 bytes
HTML transferred: 2708000 bytes
Requests per second: 14.30 [#/sec] (mean)
Time per request: 699.081 [ms] (mean)
Time per request: 69.908 [ms] (mean, across all concurrent requests)
Transfer rate: 43.67 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 8 12 5.1 10 65
Processing: 77 687 1070.7 245 5122
Waiting: 77 687 1070.7 245 5122
Total: 86 698 1072.4 256 5149
Percentage of the requests served within a certain time (ms)
50% 256
66% 290
75% 329
80% 367
90% 2938
95% 3408
98% 3827
99% 4693
100% 5149 (longest request)
In comparison, this is the result for 500 requests against gunicorn+gevent:
::
➜ ~ ab -A user:password -H "X-No-Cache:True" -n 500 -c 10 https://backup1.example.org/api/client/stats/client1
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking backup1.example.org (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Finished 500 requests
Server Software: nginx
Server Hostname: backup1.example.org
Server Port: 443
SSL/TLS Protocol: TLSv1/SSLv3,ECDHE-RSA-AES256-GCM-SHA384,4096,256
Document Path: /api/client/stats/client1
Document Length: 2708 bytes
Concurrency Level: 10
Time taken for tests: 232.800 seconds
Complete requests: 500
Failed requests: 0
Write errors: 0
Total transferred: 1563000 bytes
HTML transferred: 1354000 bytes
Requests per second: 2.15 [#/sec] (mean)
Time per request: 4655.994 [ms] (mean)
Time per request: 465.599 [ms] (mean, across all concurrent requests)
Transfer rate: 6.56 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 8 14 10.3 10 69
Processing: 25 4628 3601.4 4219 28806
Waiting: 25 4627 3601.4 4219 28806
Total: 34 4642 3602.4 4233 28815
Percentage of the requests served within a certain time (ms)
50% 4233
66% 5306
75% 6131
80% 6505
90% 8856
95% 10798
98% 14538
99% 18397
100% 28815 (longest request)
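Comparing the two 500-request runs (the *parallel* backend at 28.073 s vs gunicorn+gevent with the plain *burp2* backend at 232.800 s), the speedup works out to:

```python
# Total wall-clock time for 500 requests, taken from the two reports above
parallel_s = 28.073   # bui-monitor + parallel backend
gevent_s = 232.800    # gunicorn+gevent + plain burp2 backend

print(round(gevent_s / parallel_s, 1))  # 8.3 -> roughly 8x faster
```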
In conclusion, if several users are using burp-ui, you will probably notice a
nice performance improvement when using the new bui-monitor with the parallel
backend.
Service
-------
......
......@@ -17,6 +17,23 @@ application. I chose `Redis`_ so you will need a working `Redis`_ server
(Basically you just need to run ``apt-get install redis-server`` on Debian-based
distributions)
Configure `Burp-UI`_ to enable `Celery`_ support by setting both the ``redis``
and ``celery`` options of the ``[Production]`` section. Example:
::
[Production]
# redis server to connect to
redis = localhost:6379
# whether to use celery or not
# may also be a broker url like: redis://localhost:6379/0
# if set to "true", the broker url defaults to:
# redis://<redis_host>:<redis_port>/2
# where <redis_host> is the host part, and <redis_port> is the port part of
# the above "redis" setting
celery = true
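Following the defaulting rule quoted in the comments above, with ``redis = localhost:6379`` and ``celery = true`` the broker URL resolves as follows:

```python
# "redis" setting from the [Production] section above
host, _, port = "localhost:6379".partition(":")
broker = "redis://%s:%s/2" % (host, port)  # celery defaults to database 2
print(broker)  # redis://localhost:6379/2
```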
Runner
------
......
......@@ -7,17 +7,48 @@ online restoration) may take some time and thus may block any further requests.
With `Gunicorn`_, you have several workers that can process the requests, so
you can handle more users.
You need to install ``gunicorn``:
::
pip install "burp-ui[gunicorn]"
Gunicorn is an application server that will work similarly to php-fpm and the
like. That is, it will fork several processes to handle the load.
Due to this, you may need to enable advanced `Burp-UI`_ features so those
processes can talk to each other (and share resources).
As an illustration, here are the settings that can be changed in the configuration:
::
[Production]
# storage backend for session and cache
# may be either 'default' or 'redis'
storage = redis
# redis server to connect to
redis = localhost:6379
# session database to use
# may also be a backend url like: redis://localhost:6379/0
# if set to 'redis', the backend url defaults to:
# redis://<redis_host>:<redis_port>/0
# where <redis_host> is the host part, and <redis_port> is the port part of
# the above "redis" setting
session = redis
# cache database to use
# may also be a backend url like: redis://localhost:6379/0
# if set to 'redis', the backend url defaults to:
# redis://<redis_host>:<redis_port>/1
# where <redis_host> is the host part, and <redis_port> is the port part of
# the above "redis" setting
cache = redis
You will then be able to launch `Burp-UI`_ this way:
::
gunicorn -w 4 'burpui:create_app(conf="/path/to/burpui.cfg")'
.. note:: If you decide to use gunicorn AND the embedded websocket server,
......@@ -68,7 +99,7 @@ Usage example:
Daemon
------
If you wish to run `Burp-UI`_ as a daemon process, the recommended way is to use
`Gunicorn`_.
Requirements
......@@ -166,34 +197,11 @@ Finally you can restart your ``burp-server``.
adapt the paths.
Debian-style
^^^^^^^^^^^^
When installing the *gunicorn* package on Debian, there is a handler script that
is able to start several instances of `Gunicorn`_ as daemons.
All you need to do is install the *gunicorn* package and add a
configuration file in */etc/gunicorn.d/*.
There is a sample configuration file available
`here <https://git.ziirish.me/ziirish/burp-ui/blob/master/contrib/gunicorn.d/burp-ui>`__.
::
# install the gunicorn package
apt-get install gunicorn
# copy the gunicorn sample configuration
cp /usr/local/share/burpui/contrib/gunicorn.d/burp-ui /etc/gunicorn.d/
# now restart gunicorn
service gunicorn restart
Systemd
^^^^^^^
You will have to create your own service. We can do this for systemd, for
example:
::
......
......@@ -59,6 +59,11 @@ burp-server for some features.
free to contribute for other distributions!
.. note::
On RedHat/CentOS you'll have to replace every call to ``pip`` with ``pip3``.
This also applies to Debian releases prior to Buster.
Python
^^^^^^
......@@ -71,13 +76,20 @@ compilation errors with one of these version, feel free to report them.
Libraries
^^^^^^^^^
Some libraries are required to be able to compile ``pyOpenSSL``:
::
apt-get install libffi-dev libssl-dev python-dev python-pip
On RedHat/CentOS the requirements should be:
::
yum install gcc python36-devel openssl-devel
LDAP
^^^^
......
......@@ -88,6 +88,20 @@ You will also need some extra requirements:
pip install --upgrade "burp-ui[sql]"
Configure `Burp-UI`_ to enable SQL support by editing the ``database``
option of the ``[Production]`` section. Example:
::
[Production]
# ...
# database url to store some persistent data
# none or a connect string supported by SQLAlchemy:
# http://docs.sqlalchemy.org/en/latest/core/engines.html#database-urls
# example: sqlite:////var/lib/burpui/store.db
database = sqlite:////var/lib/burpui/store.db
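A detail worth knowing about such SQLAlchemy sqlite URLs: three slashes introduce a relative path, four an absolute one, so the example above points at ``/var/lib/burpui/store.db``:

```python
# The sqlite:// form used above: "sqlite:///" followed by an absolute path
url = "sqlite:////var/lib/burpui/store.db"
path = url[len("sqlite:///"):]  # strip the scheme, keep the leading slash
print(path)  # /var/lib/burpui/store.db
```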
Then you just have to run the following command to have your database setup:
::
......@@ -218,17 +232,25 @@ The available options are:
Setup burp client for burp-ui.
Options:
-b, --burp-conf-cli TEXT Burp client configuration file
-s, --burp-conf-serv TEXT Burp server configuration file
-c, --client TEXT Name of the burp client that will be used by
Burp-UI (defaults to "bui")
-h, --host TEXT Address of the status server (defaults to
"::1")
-r, --redis TEXT Redis URL to connect to
-d, --database TEXT Database to connect to for persistent
storage
-p, --plugins TEXT Plugins location
-m, --monitor TEXT bui-monitor configuration file
-C, --concurrency INTEGER Number of concurrent requests addressed to
the monitor
-P, --pool-size INTEGER Number of burp-client processes to spawn in
the monitor
-B, --backend [burp2|parallel] Switch to another backend
-n, --dry Dry mode. Do not edit the files but display
changes
--help Show this message and exit.
The script needs the `Burp`_ configuration files to be readable **AND**
......
......@@ -13,15 +13,21 @@ For a complete list of changes, you may refer to the
v0.7.0
------
- **Breaking** - You now need python 3.6 or above. `Burp-UI`_ won't work on
python 2.7 anymore.
- **Breaking** - The **new** `parallel <advanced_usage.html#parallel>`__ backend
only works with the
`sync <http://docs.gunicorn.org/en/stable/design.html#sync-workers>`_ gunicorn
worker. TL;DR: **don't** use the ``-k gevent`` flag when starting gunicorn if
you use the ``parallel`` backend.
- **Breaking** - The *single* and *version* options within the ``[Global]``
section have been removed in favor of a new unified *backend* option. See the
`Backends <advanced_usage.html#backends>`__ section of the documentation for
details.
- **Breaking** - There was a bug when using burp-server >= 2.1.10 where
timestamps were wrongly computed on the global clients view, because
timestamps have offsets since burp-server 2.1.10. The new behaviour is to
assume every timestamp has an offset whenever we detect your current
......@@ -32,21 +38,31 @@ v0.7.0
The drawback of enabling ``deep_inspection`` is that it requires some extra
work that may slow down burp-ui.
- **Breaking** - The authentication backend sections have been renamed with the
``:AUTH`` suffix (so ``BASIC`` becomes ``BASIC:AUTH``, etc.).
Please make sure you rename those sections accordingly so you won't be locked
out.
- **Breaking** - The ``bui-agent`` will now exit when its *system* requirements
are not met at startup time (that is: the burp-server must be up and running
and the burp-client used by burp-ui must be able to reach the burp-server).
A new timeout has been added though in order for ``bui-agent`` to wait for the
burp-server to be ready.
- **Breaking** - The ``prefix`` option has been moved from the ``[Global]``
configuration section to the ``[Production]`` one for consistency with the new
*Production* options introduced.
- **Breaking** - The database schema evolved between *v0.6.0* and *v0.7.0*. In
order to apply these modifications, you **MUST** run the
``bui-manage db upgrade`` command before restarting your `Burp-UI`_
application (if you are using celery, you must restart it too).
- **New** - `bui-monitor <buimonitor.html>`__ is a distributed pool of burp
client processes. Its purpose is to centralize every request to the burp
server in a single place, with the ability to process hundreds of requests
asynchronously.
v0.6.0
------
......@@ -73,11 +89,16 @@ v0.6.0
you'll have to manually upgrade/migrate your data `following this
documentation <https://github.com/tianon/docker-postgres-upgrade>`_.
- **Breaking** - The database schema evolved between *v0.5.0* and *v0.6.0*. In
order to apply these modifications, you **MUST** run the
``bui-manage db upgrade`` command before restarting your `Burp-UI`_
application (if you are using celery, you must restart it too).
- **Breaking** - The ``docker-compose.yml`` file now uses the ``version: '2'``
format.
- **Breaking** - The old config file format with colons (:) as separator is no
longer supported.
- **New** - Plugin system to enhance ACL and Authentication backends. See the
`Plugins <plugins.html>`__ documentation for details.
......