For the English description, see below.
Ansible playbook for installing BBB on the LFB machines.
ansible-playbook -i hosts bbb-install.yml --ask-vault-pass
installs all machines at once, while
ansible-playbook -i "bbb.q-gym.de," bbb-install.yml --ask-vault-pass
installs just a single host.
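For the first variant, a minimal hosts inventory could look like the following sketch (the group name bbbserver and the host names are assumptions, not taken from the playbooks):
[bbbserver]
bbb1.meinedomain.dom
bbb2.meinedomain.dom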
The playbook bbb-without-install-script.yml
runs all roles except the actual BBB installation script. It can be used to adjust the environment around an already installed BBB, e.g. after modifying apply-config.sh.
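The invocation follows the same pattern as above, e.g. for a single host:
ansible-playbook -i "bbb.q-gym.de," bbb-without-install-script.yml --ask-vault-pass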
If you do not know the password for the Ansible vault, you have to enter your own values directly in the variable block of the playbooks:
scriptoptlemail: "{{ vault_scriptoptlemail }}"
scriptoptsturnsrv: "{{ vault_scriptoptsturnsrv }}"
scriptoptsturnpw: "{{ vault_scriptoptsturnpw }}"
then becomes, for example:
scriptoptlemail: "webmaster@meinedomain.dom"
scriptoptsturnsrv: "turn.meinedomain.dom"
scriptoptsturnpw: "xxggrree55"
and the line
vars_files: vault
has to be commented out.
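Taken together, the affected part of a play then looks roughly like this (the surrounding structure is an assumption; only the shown keys come from this README):
- hosts: all
  # vars_files:
  #   - vault
  vars:
    scriptoptlemail: "webmaster@meinedomain.dom"
    scriptoptsturnsrv: "turn.meinedomain.dom"
    scriptoptsturnpw: "xxggrree55"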
To create a local Greenlight admin user, change into the greenlight directory (cd greenlight) and run there:
docker exec greenlight-v2 bundle exec rake user:create["Lokaler Admin","admin@bbb.local","SUPERGEHEIMESPASSWORT","admin"]
The BBB API secret can be displayed with:
bbb-conf --secret
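Its output resembles the following (placeholder values); the URL and the secret are also exactly what the Moodle plugin asks for:
    URL: https://bbb.q-gym.de/bigbluebutton/
    Secret: 1234abcd5678efgh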
If you only want to use the Moodle plugin to access BBB, no Greenlight users are needed at all.
The host and domain name no longer have to be set as a variable; they are derived from the inventory hostname.
ansible-playbook -i "bbb.q-gym.de," bbb-install.yml
should thus automagically do everything right for the host bbb.q-gym.de.
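A sketch of how this derivation can be expressed in Jinja2 (the variable names are illustrative, not necessarily the ones used in the playbooks):
bbb_fqdn: "{{ inventory_hostname }}"                       # bbb.q-gym.de
bbb_hostname: "{{ inventory_hostname.split('.')[0] }}"     # bbb
bbb_domain: "{{ inventory_hostname.split('.', 1)[1] }}"    # q-gym.de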
Should run on Debian derivatives (tested on Debian Buster). Prerequisite: a freshly installed Debian/Ubuntu with a DNS entry.
The secret in the playbook has to be adjusted; it can be generated with openssl rand -hex 16.
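For example (turn_secret is an assumed variable name; use whatever bbb-coturn.yml actually expects):
openssl rand -hex 16
# prints e.g. 9f86d081884c7d659a2feaa0c55ad015; paste that into the playbook:
turn_secret: "9f86d081884c7d659a2feaa0c55ad015"
The playbook is then invoked with: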
ansible-playbook -i "turn.q-gym.de," bbb-coturn.yml
Uses the roles
======================================================================
The Ansible playbooks provided here were developed mostly in April 2020, during the corona pandemic, to provide online teaching and conference tools for all schools in Baden-Württemberg, south-west Germany. They are a work in progress, but work fine as far as we can tell today, and are used to prepare a total of several hundred BigBlueButton servers (BBBs) on dozens of powerful (32-core/64-thread) machines.
Our setup is as follows:
To make the most efficient use of the hardware at hand, given the limitation/recommendation to run BBB on Ubuntu 16.04, we set up BBB in containers, which are in turn run and managed by systemd-nspawn. The host system is Debian Buster, and no problems running the BBB containers with the Debian stable kernel have been observed. This lightweight setup provides very good sharing of hardware resources and, hopefully, sufficiently good response times for the real-time A/V application, even under heavy load.
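Day-to-day, the containers can be handled with the standard machinectl tooling, e.g.:
machinectl list          # show the running containers
machinectl start bbb000  # start a single BBB container
machinectl shell bbb000  # open a shell inside it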
Right now, we run 28 BBB containers on a single machine (64 threads), which might be a bit too much under-provisioning. The best ratio of threads/cores per BBB is still under investigation.
When preparing the initial Ubuntu 16.04 container, no special modifications were applied. All customization on top of the straightforward setup described in the BBB documentation can be found in the playbook bbbcontainerhosts.yml, especially in roles/bbbcontainer/tasks/ubuntu-container.yml.
The initial container can be archived with
machinectl export-tar bbb000 bbb000-$(date +%Y%m%d).tar.xz
and provided on roll-out via
vault_container_image: "https://PROVIDE.CONTAINER.TLD/image/bbb000.tar.xz"
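On a new host, such an archive can be imported again, e.g. with:
machinectl pull-tar --verify=no https://PROVIDE.CONTAINER.TLD/image/bbb000.tar.xz bbb000
which is, roughly, what providing vault_container_image on roll-out amounts to.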
However, by default the first container is debootstrapped; all further containers are then cloned from that initial image.
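Done by hand, these two steps correspond roughly to the following (suite and paths are assumptions matching the Ubuntu 16.04 base):
# create the template container (Ubuntu 16.04 "xenial")
debootstrap xenial /var/lib/machines/bbb000 http://archive.ubuntu.com/ubuntu/
# clone further BBBs from the template
machinectl clone bbb000 bbb001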
In addition to the BBB containers, every host provides a containerized STUN/TURN server (coturn) which is used by all BBBs of the associated host. The setup is straightforward, based on a debootstrapped Debian Buster.
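A minimal turnserver.conf for such a container could look like this sketch (illustrative values; the playbook generates the real configuration):
# /etc/turnserver.conf (sketch)
listening-ip=172.93.28.162
realm=turn.q-gym.de
use-auth-secret
static-auth-secret=9f86d081884c7d659a2feaa0c55ad015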
We use a single NIC of the host with several IP addresses: the IP address of the host itself as well as all IP addresses of the containers. All container configuration is calculated from the subnet provided at install time for every machine. In the Ansible inventory hosts file, we provide for example:
[containerhost]
HOST.DOMAIN.TLD vault_guest_network="172.93.28.160/28"
With this set, the playbook assigns the first usable subnet address (172.93.28.161) to the bridge virbr0, the second (172.93.28.162) to the TURN server (a minimal Debian Buster with coturn, see above), and then all further addresses to BBBs, as long as they are resolvable in the DNS (cf. bbbcontainerhosts.yml).
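In Ansible, this address arithmetic can be expressed with the ipaddr filter (a sketch; it requires the netaddr Python library, and the variable names are illustrative):
bridge_ip: "{{ vault_guest_network | ipaddr(1) | ipaddr('address') }}"   # 172.93.28.161
turn_ip: "{{ vault_guest_network | ipaddr(2) | ipaddr('address') }}"     # 172.93.28.162
# the n-th BBB (n starting at 1) then gets 172.93.28.(162+n):
# "{{ vault_guest_network | ipaddr(2 + n) | ipaddr('address') }}"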
It is possible to limit the list of BBBs by defining max_num_bbbs as the maximum number of BBBs (if available in the DNS). For example, use --extra-vars="max_num_bbbs=5" to limit the list to the first 5 BBBs.
On roll-out, we need the server with a minimal Debian Buster installed and SSH public-key authentication. In addition, the subnet information (vault_guest_network=…) needs to be provided. Furthermore, all DNS entries need to be ready for the BBBs. After that, the host carrying the STUN/TURN server and a bunch of BBBs is ready after running the following command twice:
ansible-playbook -u root -i hosts --vault-password-file vault.pwd --limit HOSTS2INSTALL rollout-master.yml
In the first run, the initial container template is debootstrapped. A second call of the above command clones all the other BBBs from the template (which should, of course, have been tested thoroughly beforehand).
To remove all BBBs of a host from the load balancer pool, use the master playbook with the --tags=bbb_disable option. Add them back to the pool with --tags=bbb_enable.
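Combined with the roll-out command above, this looks like:
ansible-playbook -u root -i hosts --vault-password-file vault.pwd --limit HOSTS2INSTALL rollout-master.yml --tags=bbb_disable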
To run only the set of checks on the BBB containers, use the --tags=bbb_check option.
To upgrade the BBBs and the TURN server, use --tags=bbb_upgrade. Use --tags=debcont_upgrade to only upgrade and restart the TURN server.
We use several monitoring systems to optimize and further develop the setup. We are happy to provide further information if needed and of course appreciate recommendations and better ideas.