Self-hosting the administration panel
Availability
The administration panel is only available in the SaaS version of WorkAdventure. For most users, we recommend the SaaS version, as it is the easiest and most reliable way to use WorkAdventure: we provide a high-quality service with automatic updates and backups, and we take care of the infrastructure for you. Exceptionally, for some specific use cases (large corporate or institutional clients with a strong need for privacy and a large number of users), we can provide a self-hosted version of the administration panel.
If you are interested in the self-hosted version of the administration panel, please contact us at [email protected].
Architecture
This deployment is split into two Helm charts that run on the same Kubernetes cluster.
Helm chart 1: Administration panel and WorkAdventure
This chart hosts the administration panel, the members website, and the WorkAdventure services exposed through your ingress controller. It also deploys the database used by the administration panel.
Helm chart 2: Matrix Synapse
This chart hosts Matrix Synapse for in-world messaging, along with the components it needs to persist data.
External services
The following components run outside the Kubernetes cluster:
- An S3 compatible object storage used for administration panel uploads, database backups, map storage when S3 backed mode is enabled, and optionally Synapse backups
- A TURN service used for media connectivity when direct connectivity is not possible
- A LiveKit server handling group video conferencing, deployed either on a dedicated server or on a dedicated cluster of servers
Requirements
Kubernetes cluster
To install the administration panel, you will need a Kubernetes cluster.
WorkAdventure sizing
For WorkAdventure (with the admin panel), plan the Kubernetes node capacity as follows:
- A baseline of about 4 CPU and 8 GB of RAM for WorkAdventure itself
- An additional 1 CPU and 1 GB of RAM per 100 concurrent users you plan to have
- An additional 0.5 CPU and 2 GB of RAM per OpenAI/Azure-powered bot you plan to host in your maps
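For example, an instance sized for 300 concurrent users and 2 bots needs roughly 4 + 3 × 1 + 2 × 0.5 = 8 CPU and 8 + 3 × 1 + 2 × 2 = 15 GB of RAM.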
Note: if you install Livekit inside the same Kubernetes cluster, you will need additional capacity (see below).
WorkAdventure will deploy PVCs for its own database and for the Matrix Synapse database, which stores messages. The size is usually moderate: plan 20-30 GB of disk space to get started. You might need to add more space in the future if your users rely heavily on Matrix messages, especially if they share images or attachments.
S3 compatible object storage
You will also need an S3 compatible object storage, used for:
- Storing uploaded files from the administration panel (Wokas, custom logos, etc)
- Storing automated database backups (when the backup feature is enabled)
- Storing maps in the map storage service if you choose the S3 backed mode instead of a PVC
TURN service
For ICE connectivity, you must provide a TURN service:
- Either a dedicated server running Coturn
- Or a Cloudflare TURN account
If you deploy your own Coturn server(s), they will be used to relay the audio/video streams. They mostly need a lot of bandwidth on a stable internet connection, a large number of open UDP/TCP ports, and plenty of CPU.
The TURN service is only used when no peer-to-peer connection is possible between users. This usually happens on restricted networks. In a typical environment, with visitors from various places, you can expect that about 15% of users will need their streams routed through the TURN server. Your mileage may of course vary. If all your users are on a restricted network, you will need to route 100% of the streams.
To size the Coturn server appropriately, you need to estimate the number of video streams that can flow in and out at any given time in bubbles or meetings of 4 people or fewer (with 5 people or more, the streams go through Livekit).
A one-way video stream uses about 1 Mbit/s of network bandwidth.
Example:
Assuming you have 90 users, each talking in a bubble containing 3 users (so 30 bubbles with 3 people inside).
Each bubble needs 6 one-way video streams (each of the 3 users sends a stream to and receives a stream from each of the 2 other users, so 3 × 2 = 6 streams per bubble).
The total number of video streams is therefore 6 × 30 = 180.
If all streams go through Coturn, you need at least 180 Mbit/s.
If only 15% of the streams go through Coturn, you need 180 × 0.15 = 27 Mbit/s.
This document does not detail the setup of the TURN server. You can find more information about setting up a Coturn server in the Coturn documentation.
LiveKit deployment and sizing
For video conferencing (5 people or more), the streams will go through a Livekit server:
- Either inside your Kubernetes cluster
- Or on a dedicated server, or a small cluster of servers
The WorkAdventure Helm chart described in this documentation does not install a Livekit server for you. You need to install it separately. All Livekit deployment options are detailed in the Livekit documentation. Since WorkAdventure itself is deployed with a Helm chart, the Livekit Helm chart is usually the easiest way to deploy Livekit for WorkAdventure. However, be aware that the Livekit Helm chart can only be installed if your Kubernetes nodes have public IP addresses and if you can deploy containers that can access the "host" network. If you are not able to do so, you will need to deploy Livekit using another method.
LiveKit sizing depends heavily on your video usage and the size of your largest rooms, but CPU and bandwidth are the main constraints, and production setups are recommended on 10 Gbps networking or faster.
You can consult the Livekit Benchmark article to size your server appropriately.
Livekit will be by far the most resource-intensive component of your WorkAdventure deployment. Make sure to size it appropriately.
Domain names
You will need the following domain names to host the administration panel:
- A domain name for the administration panel (example admin.example.com)
- A domain name for the members website (example member.example.com)
- A domain name for PHPMyAdmin to troubleshoot the database, optional (example phpmyadmin.example.com)
- A domain name for Matrix Synapse (example matrix.example.com)
In addition, you will need domain names for WorkAdventure itself:
- Three domain names for WorkAdventure (example play.example.com, pusher.example.com, roomapi.example.com)
- A domain name for the icon fetcher service (example icon.example.com)
- A wildcard domain name for the map storage service (example *.mapstorage.example.com) or at least one subdomain per world
- A domain name for the uploader service (example uploader.example.com)
Finally, you will need domain names for the external components that you will install along WorkAdventure:
- A domain name for the Livekit server (example livekit.example.com)
- A domain name for the Jitsi Meet server, optional (example jitsi.example.com)
- A domain name for the TURN server, if you deploy your own Coturn server (example turn.example.com)
Ingress controller and Gateway API
Your Kubernetes cluster must have an ingress controller installed to route HTTP(S) traffic to the administration panel and to WorkAdventure. In preparation for future releases, we recommend installing an ingress controller that supports the Gateway API, such as Traefik 3+.
Installing the administration panel
We provide a Helm chart to install the administration panel on a Kubernetes cluster.
The Helm chart is available at https://charts.workadventu.re.
The very first step is to create a public/private key pair for Laravel Passport and save it in a Kubernetes secret.
openssl genrsa -out private.key 2048
openssl rsa -in private.key -pubout -out public.key
# Create a namespace and put the keys in a secret
kubectl create namespace [your-namespace]
kubectl -n [your-namespace] create secret generic passport-keys \
--from-file=private.key=private.key \
--from-file=public.key=public.key
Then, you can configure the values.yaml file with the correct values for your installation.
Here is an example of a minimal values.yaml file:
domainName: "example.com"
admin:
imageCredentials:
# These credentials will be provided to you by WorkAdventure
username: "your-docker-registry-username"
password: "your-docker-registry-password"
ingress:
tls: true
# Here, put whatever annotations you need to configure the ingress (and the certificate)
annotationsPath:
cert-manager.io/cluster-issuer: letsencrypt-prod
Once the values.yaml file is configured, you can install the administration panel with the following commands:
helm repo add workadventure-charts https://charts.workadventu.re
helm upgrade --install -n [your-namespace] -f values.yaml workadventure-admin workadventure-charts/workadventure-admin
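If you had already added the repository, run helm repo update first to fetch the latest chart version. Once the release is deployed, you can check that the pods start correctly:
kubectl -n [your-namespace] get pods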
Setting up database credentials
By default, the Helm chart will set up a MySQL database and generate random credentials for the "root" user, and for the "workadventure" user.
If you want to provide your own credentials, you can do so by adding the following section to your values.yaml file:
# Root password for the MySQL database
# If empty, will be autogenerated
dbRootPassword: ""
# Database password of the "workadventure" user
# If empty, will be autogenerated
dbPassword: ""
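If you provide your own credentials, any sufficiently random strings will do. For example, you can generate them with:
openssl rand -base64 24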
Configuring the database
It is advised to configure the disk space for the database PVC, along with the resource requests, to make sure the database pod is scheduled on a node with enough resources.
Here is an example of a values.yaml file to configure the database:
mysql:
primary:
persistence:
size: 10Gi
resources:
requests:
memory: 3Gi
cpu: 1
Configuring the database backup
The administration panel comes with a backup system that can be configured to automatically back up the database to an S3 compatible storage.
Here is an example of a values.yaml file to configure the backup system:
backup:
enabled: false
bucket: null
fileName: workadventure-mysql-backup
endpointUrl: null
accessKeyId: null
secretAccessKey: null
schedule: "0 4 * * *"
count: 30
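For example, to enable a daily backup at 4 AM to an S3 compatible bucket, keeping the last 30 dumps (the bucket, endpoint and credentials below are placeholders):
backup:
  enabled: true
  bucket: "my-workadventure-backups"
  fileName: workadventure-mysql-backup
  endpointUrl: "https://s3.example.com"
  accessKeyId: "<access key>"
  secretAccessKey: "<secret key>"
  schedule: "0 4 * * *"
  count: 30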
Adding PHPMyAdmin to the administration panel
In order to debug and maintain the database of the administration panel, you may want to install PHPMyAdmin.
Here is an example of a values.yaml file to install PHPMyAdmin:
phpmyadmin:
enabled: true
ingress:
enabled: true
hostname: phpmyadmin.example.com
tls: true
annotations:
# Here, put whatever annotations you need to configure the ingress (and the certificate)
# Your mileage may vary depending on your Kubernetes cluster and your ingress controller
cert-manager.io/cluster-issuer: letsencrypt-prod
The complete list of available values can be read in Bitnami's PhpMyAdmin chart documentation.
Setting up OpenID credentials
The administration panel uses OpenID to authenticate users from WorkAdventure and from Synapse. By default, it will create 2 OpenID client credentials with the appropriate secrets.
Because it can be tedious to retrieve those secrets, we advise you to provide them in the values.yaml file:
admin:
secretEnv:
"PASSPORT_PRIVATE_KEY": "a 40 characters long secret alphanumeric string"
"PASSPORT_CLIENT_WORKADVENTURE_SECRET": "another 40 characters long secret alphanumeric string"
Setting up the SMTP server
The admin dashboard sends emails to users for various reasons (password reset, sending invitations, ...).
You can configure the SMTP server to use in the values.yaml file:
admin:
env:
MAIL_MAILER: smtp
MAIL_HOST: <your SMTP server domain name>
MAIL_PORT: <your SMTP server port>
MAIL_USERNAME: <user name>
MAIL_ENCRYPTION: <encryption method>
MAIL_FROM_ADDRESS: "[email protected]"
MAIL_FROM_NAME: "No reply"
secretEnv:
MAIL_PASSWORD: <password>
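For example, for a typical provider using STARTTLS on port 587 (all values below are placeholders):
admin:
  env:
    MAIL_MAILER: smtp
    MAIL_HOST: smtp.example.com
    MAIL_PORT: 587
    MAIL_USERNAME: no-reply@example.com
    MAIL_ENCRYPTION: tls
    MAIL_FROM_ADDRESS: "no-reply@example.com"
    MAIL_FROM_NAME: "No reply"
  secretEnv:
    MAIL_PASSWORD: "<password>"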
Configuring uploaded files storage
The administration panel will need to store uploaded files (like Wokas, custom logos, ...). Those files will be stored in an S3 compatible storage.
Here is an example of a values.yaml file to configure the uploaded files storage:
admin:
env:
AWS_ENDPOINT: "<URL to the S3 compatible storage>"
AWS_BUCKET: "<name of the bucket>"
AWS_DEFAULT_REGION: "<region of the bucket>"
secretEnv:
AWS_ACCESS_KEY_ID: "<access key>"
AWS_SECRET_ACCESS_KEY: "<secret key>"
WorkAdventure configuration
By default, the admin chart will install a WorkAdventure instance with the default configuration.
You should customize the configuration of the WorkAdventure instance by providing a specific workadventure entry in the values.yaml file.
workadventure:
enabled: true
domainName: "example.com"
ejabberdDomain: "ejabberd.example.com"
ingress:
tls: true
annotationsRoot:
cert-manager.io/cluster-issuer: letsencrypt-prod
Map storage multi domain setup
The map-storage container needs to listen on all subdomains of the "map-storage" domain. This can be tricky to set up as it means you will need a wildcard ingress, with the associated certificates.
There are two solutions to this problem:
- use a wildcard ingress and a wildcard certificate for the map-storage domain
- or manually create ingresses for every subdomain of the map-storage domain you use (this will require modifying the configuration each time you create a new world)
Wildcard configuration
If you go with the first solution, you will need a Traefik ingress controller and a wildcard certificate for the map-storage domain.
Turn on the enableWildcardMapstorageDomains option in the values.yaml file:
# Whether to enable or not wildcard map storage domains. This requires a Traefik ingress controller to work
enableWildcardMapstorageDomains: true
# Suffix of the map storage domain. Note: there is one map-storage domain per world
# Defaults to ".map-storage.[domainName]"
publicMapStorageDomainSuffix: ""
workadventure:
mapstorage:
# Let's disable the default ingress
ingress:
enabled: false
This will create a custom IngressRoute for Traefik to handle the wildcard domain.
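Wildcard certificates cannot be issued through HTTP-01 challenges, so you will typically need a cert-manager ClusterIssuer configured with a DNS-01 solver. As a minimal sketch (assuming cert-manager is installed and a DNS-01-capable ClusterIssuer named letsencrypt-dns exists; the resource and secret names are placeholders), the wildcard certificate could be requested like this:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: map-storage-wildcard
  namespace: [your-namespace]
spec:
  secretName: map-storage-wildcard-tls
  issuerRef:
    name: letsencrypt-dns
    kind: ClusterIssuer
  dnsNames:
    - "*.map-storage.example.com"
You can then reference the resulting TLS secret from your Traefik configuration (for example a default TLSStore), depending on how TLS is terminated in your cluster.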
Manual configuration
If you go with the second solution, you should declare the domain names in the values.yaml file:
workadventure:
mapstorage:
ingress:
# Each time you create a new world, you need to add the domain name here
domainNames:
- my-world-1.map-storage.example.com
- my-other-world-2.map-storage.example.com
- ...
Only use this solution if you cannot obtain a wildcard certificate for your domain. It will require manual intervention each time you create a new world.
Setting up specific annotations for WorkAdventure ingress
The WorkAdventure chart does not provide HTTPS certificates out of the box but instead relies on third party software like cert-manager to provide them.
You can configure the ingress with specific annotations to use your own certificate:
workadventure:
ingress:
tls: true
annotationsRoot:
# If you are using nginx + cert-manager, you can use the following annotations
cert-manager.io/cluster-issuer: letsencrypt-prod
# If you are using Traefik, you can use the following annotations
traefik.ingress.kubernetes.io/router.entrypoints: websecure
traefik.ingress.kubernetes.io/router.tls.certresolver: letsencrypt
Configuring Livekit
Livekit is the SFU video conferencing solution that WorkAdventure uses to provide video conferencing capabilities when there are more than 4 users in a room or discussion bubble.
To plug WorkAdventure to your Livekit instance, you need to provide the following configuration in the values.yaml file:
admin:
env:
# The full URL to your Livekit server (with the https protocol)
LIVEKIT_HOST: "https://<your Livekit server domain>"
secretEnv:
# The API key and secret to connect to Livekit
LIVEKIT_API_KEY: "<some api key>"
LIVEKIT_API_SECRET: "<some api secret>"
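The LiveKit API key and secret are arbitrary strings; the only requirement is that the same pair is configured on your LiveKit server (typically in the keys section of its configuration). For example, you can generate a random pair with:
openssl rand -hex 16   # use as LIVEKIT_API_KEY
openssl rand -hex 32   # use as LIVEKIT_API_SECRET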
Configuring Jitsi Meet (optional)
In past releases of WorkAdventure, Jitsi Meet was the main video conferencing solution. It is being superseded by Livekit, which provides a more integrated experience. If you have a Jitsi Meet instance already set up, you can still use it with WorkAdventure.
To plug WorkAdventure to your Jitsi instance, you need to provide the following configuration in the values.yaml file:
workadventure:
commonEnv:
# The full URL to your Jitsi server (with the protocol)
JITSI_URL: "https://<your Jitsi server domain>"
# An ISS value for JWT tokens. Can be anything
JITSI_ISS: "meetworkadventure"
commonSecretEnv:
# The key should match the one in the Jitsi Helm Chart (prosody.extraEnvs.JWT_APP_SECRET)
SECRET_JITSI_KEY: "<some secret key>"
Configuring ICE servers (STUN/TURN)
When the administration panel is deployed, any STUN/TURN environment variables set directly on the play or back containers
are ignored and superseded by the ICE server configuration defined in the admin panel. The admin panel centrally manages
all ICE server credentials and configuration for WorkAdventure instances.
The administration panel supports two types of ICE server configurations:
Option 1: Default (Coturn or custom TURN servers)
To use your own TURN server (such as Coturn), configure the following in the admin panel's values.yaml file:
admin:
env:
# Specify the type of TURN server configuration
TURN_SERVER_TYPE: "default"
# The default STUN server to use (can be multiple, comma-separated)
STUN_SERVER: "stun:stun.l.google.com:19302"
# The TURN server to use (can be multiple, comma-separated)
TURN_SERVER: "turn:[server-ip]:3478,turn:[server-ip]:3478?transport=tcp"
secretEnv:
# The TURN authentication secret used to generate time-limited credentials
TURN_STATIC_AUTH_SECRET: "<some secret key>"
The admin panel will automatically generate time-limited credentials using HMAC-SHA1 authentication with the provided secret.
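On the Coturn side, the same secret must be configured so that the time-limited credentials generated by the admin panel are accepted. A minimal excerpt of turnserver.conf might look like this (the realm is a placeholder; adapt it and the listening addresses to your setup):
# Enable the time-limited credentials mechanism (TURN REST API)
use-auth-secret
# Must match TURN_STATIC_AUTH_SECRET in the admin panel configuration
static-auth-secret=<some secret key>
# Realm advertised by the TURN server
realm=turn.example.com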
Option 2: Cloudflare TURN
To use Cloudflare's managed TURN service, configure the following in the admin panel's values.yaml file:
admin:
env:
# Specify Cloudflare as the TURN server type
TURN_SERVER_TYPE: "cloudflare"
secretEnv:
# Your Cloudflare TURN key
# Get this from: https://dash.cloudflare.com/?to=/:account/calls/turn
CLOUDFLARE_TURN_KEY_ID: "<your-cloudflare-key-id>"
CLOUDFLARE_TURN_KEY_API_TOKEN: "<your-cloudflare-api-token>"
The admin panel will automatically request temporary credentials from Cloudflare's TURN API with a 4-hour TTL.
For more information on Cloudflare TURN, see: Cloudflare TURN Documentation
Configuring the map-storage storage service
The map-storage service can store the maps in a PVC or in an S3 compatible storage.
Here is an example of a values.yaml file to configure the map-storage service with a PVC:
workadventure:
mapstorage:
persistence:
enabled: true
size: "1Gi"
Here is an example of a values.yaml file to configure the map-storage service with a S3 compatible storage:
workadventure:
mapstorage:
env:
AWS_URL: "<URL to the S3 compatible storage (if not AWS)>"
AWS_BUCKET: "<name of the bucket>"
AWS_DEFAULT_REGION: "<region of the bucket>"
secretEnv:
AWS_ACCESS_KEY_ID: "<access key>"
AWS_SECRET_ACCESS_KEY: "<secret key>"
Matrix Synapse configuration
In order to install the Matrix Synapse server, we provide a separate Helm chart that can be installed along with the administration panel.
Note that you don't have to use our Matrix Synapse chart, you can use your own Matrix Synapse server if you want.
The chart we provide is based on Ananace's Matrix Synapse chart and additionally provides a backup system and the automatic creation of a user dedicated to WorkAdventure.
The first step is to generate a secret containing the Matrix Synapse signing key. Be sure to save this signing key in a secure place. If you lose it, federation will stop working, as the other Matrix servers cache this signing key.
Getting a valid signing key is the hard part. A Docker installation of Matrix Synapse will generate one for you.
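For example (a sketch using the official matrixdotorg/synapse image; the key will be written to data/matrix.example.com.signing.key, and this is the path to pass as path_to_your_signing_key in the command below):
docker run -it --rm \
  -v $(pwd)/data:/data \
  -e SYNAPSE_SERVER_NAME=matrix.example.com \
  -e SYNAPSE_REPORT_STATS=no \
  matrixdotorg/synapse:latest generate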
kubectl -n [your-namespace] create secret generic matrix-synapse-signing-key \
--from-file=signing.key=path_to_your_signing_key
Next, you need to provide the values.yaml file with the correct values for your installation.
You MUST at least provide the domain name of the Matrix Synapse server in the values.yaml file.
Also, you MUST configure the OpenID endpoint in the values.yaml file to point to the administration panel.
synapse:
# The domain name for the Synapse server
serverName: "matrix.example.com"
# The public Matrix server name, this will be used for any public URLs
# in config as well as for client API links in the ingress.
publicServerName: "matrix.example.com"
ingress:
tls:
- secretName: ingress-secret-admin
# Don't forget to set the domain name here to generate a SSL certificate for it.
hosts:
- matrix.example.com
extraConfig:
oidc_providers:
- idp_id: workadventure-admin
idp_name: WorkAdventure
issuer: "http://member.[YOUR-WA-DOMAIN]"
skip_verification: true
client_id: "2"
client_secret: "[YOUR SECRET]"
scopes: ["openid", "email", "matrix", "profile"]
user_mapping_provider:
config:
# matrix_local_part is provided by the special "matrix" scope
localpart_template: "{{ user.matrix_local_part }}"
display_name_template: "{{ user.name }}"
email_template: "{{ user.email }}"
sso:
client_whitelist:
- https://pusher.[YOUR-WA-DOMAIN]/
- https://play.[YOUR-WA-DOMAIN]/
Note: replace [YOUR SECRET] with the Synapse client secret.
In order to get it, you can look in your admin values.yaml file for the admin.secretEnv.PASSPORT_CLIENT_SYNAPSE_SECRET key.
If you did not set this secret, one was generated for you. You can read it using this command:
kubectl -n [your-namespace] get secrets secret-env-admin -o jsonpath='{.data.PASSPORT_CLIENT_SYNAPSE_SECRET}' | base64 -d
When everything is set up, you can proceed to install the Matrix Synapse server with the following command:
helm upgrade --install -n [your-namespace] -f values.yaml workadventure-synapse workadventure-charts/workadventure-synapse
Automatically creating a Matrix user
WorkAdventure needs a dedicated user on the Matrix Synapse server to create rooms and invite users. The Synapse chart can automatically create this user for you.
createMatrixUser: true
matrixUser: admin
matrixPassword: [TO BE FILLED]
Configuring the Matrix Synapse server in WorkAdventure
In order to connect WorkAdventure to the Matrix Synapse server, you need to provide the following configuration in the values.yaml file of the WorkAdventure chart:
workadventure:
commonEnv:
MATRIX_API_URI: "http://matrix:8008/"
MATRIX_PUBLIC_URI: "https://[YOUR-SYNAPSE-DOMAIN]/"
MATRIX_DOMAIN: '[YOUR-SYNAPSE-DOMAIN]'
MATRIX_ADMIN_USER: '[YOUR-SYNAPSE-USER]'
MATRIX_ADMIN_PASSWORD: '[YOUR-SYNAPSE-PASSWORD]'
In the example above:
- replace [YOUR-SYNAPSE-DOMAIN] with the value of synapse.serverName in the values.yaml file of the Synapse chart
- replace [YOUR-SYNAPSE-USER] with the value of matrixUser in the values.yaml file of the Synapse chart
- replace [YOUR-SYNAPSE-PASSWORD] with the value of matrixPassword in the values.yaml file of the Synapse chart
Configuring the Matrix Synapse backup
You can configure the Matrix Synapse backup system to automatically back up the database to an S3 compatible storage.
backup:
enable: true
#schedule: "* * * * *"
keepDays: 7
s3:
accessKeyId: "[TO BE FILLED]"
bucket: "[TO BE FILLED]"
# If you are using a S3-compatible endpoint, you can set the URL here
# endpointUrl: "[TO BE FILLED]"
region: "[TO BE FILLED]"
secretAccessKey: "[TO BE FILLED]"