In this configuration, Workbench is installed on a single Linux server and enables:
Access to RStudio, Jupyter Notebook, JupyterLab and VS Code development IDEs
Multiple concurrent sessions per user
Use of multiple versions of R and Python
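A minimal sketch of how multiple R versions and a Jupyter executable might be registered on the server (the paths, version numbers, and Python installation below are assumptions for illustration; confirm file locations and options against the admin guide for your Workbench version):

```ini
# /etc/rstudio/r-versions
# Each line points at an R installation; Workbench offers these versions
# when users start a session. Paths are examples only.
/opt/R/4.3.2
/opt/R/4.2.3

# /etc/rstudio/jupyter.conf
# Points Workbench at the Jupyter executable used for Jupyter Notebook and
# JupyterLab sessions (this path is a placeholder).
jupyter-exe=/opt/python/3.11.5/bin/jupyter
```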
Using Workbench as a cluster
In this configuration, Workbench is installed on two or more Linux servers and enables:
Load balancing to provide additional computational resources to end users
High availability to provide redundancy
Access to RStudio, Jupyter Notebook, JupyterLab and VS Code development IDEs
Multiple concurrent sessions per user
Use of multiple versions of R and Python
Requirements to support this architecture:
Users’ home directories must be stored on an external shared file server (typically an NFS server)
Session metadata must be stored on an external PostgreSQL database server
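A minimal sketch of these two cluster requirements as they might appear in the Workbench configuration (host names, database names, and credentials are placeholders; confirm the exact options against the admin guide for your version):

```ini
# /etc/rstudio/database.conf
# Shared metadata store used by every Workbench node in the cluster.
provider=postgresql
connection-uri=postgresql://rstudio@postgres.example.com:5432/rstudio?sslmode=allow
password=replace-with-db-password

# /etc/rstudio/load-balancer
# Enables load balancing; "sessions" balances on the number of active sessions.
balancer=sessions
```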
The example diagrams below show cluster architectures with and without an external load balancer.
External Load Balancer
Single Node Routing
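As a hypothetical illustration of the external load balancer variant, an NGINX front end placed in front of two Workbench nodes might look like the sketch below. The host names, port (8787 is Workbench's default), and the session-affinity approach are assumptions; follow the Workbench admin guide for the required proxy settings in your environment.

```nginx
# /etc/nginx/conf.d/workbench.conf -- hypothetical external load balancer
upstream workbench {
    ip_hash;                               # keep a user pinned to one node
    server workbench-1.example.com:8787;
    server workbench-2.example.com:8787;
}

server {
    listen 80;
    server_name workbench.example.com;

    location / {
        proxy_pass http://workbench;
        # WebSocket upgrades are required for interactive sessions
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_read_timeout 20d;
    }
}
```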
Using Workbench with an external resource manager
In this configuration, Workbench is installed on one or more Linux servers and is configured with Launcher and a Kubernetes or Slurm cluster backend.
Launcher allows you to run sessions and background jobs on external cluster resource managers through backend-specific plugins. In addition to Kubernetes and Slurm, Launcher can be extended to work with other cluster resource managers using the Launcher SDK. AWS SageMaker and Altair Grid Engine are two examples where a third party used the SDK to develop a Launcher plugin for its cluster resource manager.
This enables:
Users to run sessions and jobs on external compute cluster(s)
Optional replicas for high availability
Access to RStudio, Jupyter Notebook, JupyterLab and VS Code development IDEs
Multiple concurrent sessions per user
Use of multiple versions of R and Python
Requirements to support this architecture:
Users’ home directories must be stored on an external shared file server (typically an NFS server)
It is strongly recommended that session metadata be stored on an external PostgreSQL database server
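A minimal sketch of the Launcher wiring for a Kubernetes backend, assuming the standard configuration file locations (option names are quoted from memory of the Workbench and Launcher admin guides; treat them as illustrative and confirm against the documentation for your version):

```ini
# /etc/rstudio/launcher.conf
# Launcher itself, plus one [cluster] section per backend.
[server]
address=localhost
port=5559
server-user=rstudio-server
admin-group=rstudio-server

[cluster]
name=Kubernetes
type=Kubernetes

# /etc/rstudio/rserver.conf (excerpt)
# Tells Workbench to start sessions through Launcher.
launcher-sessions-enabled=1
launcher-address=127.0.0.1
launcher-port=5559
launcher-default-cluster=Kubernetes
```

The selected backend plugin also needs its own configuration file, typically /etc/rstudio/launcher.kubernetes.conf or /etc/rstudio/launcher.slurm.conf, with the cluster endpoint and credentials.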
Using Workbench entirely in Kubernetes
In this configuration, Workbench is installed entirely inside a Kubernetes cluster and enables:
User sessions and jobs to run in isolated pods, potentially from different base images
Management of the entire installation in Kubernetes with tools such as Helm
Optional replicas for high availability
Access to RStudio, Jupyter Notebook, JupyterLab and VS Code development IDEs
Multiple concurrent sessions per user
Use of multiple versions of R and Python
Requirements to support this architecture:
Users’ home directories must be stored on an external shared file server (typically an NFS server)
Session metadata must be stored on an external PostgreSQL database server
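A minimal sketch of such an installation with Helm, assuming Posit's public chart repository (the release name, namespace, and contents of values.yaml are placeholders):

```bash
# Add Posit's Helm repository and install the Workbench chart.
helm repo add rstudio https://helm.rstudio.com
helm repo update

# values.yaml would configure the shared-storage volume for home directories
# and the external PostgreSQL connection, per the chart's documented values.
helm install workbench rstudio/rstudio-workbench \
  --namespace workbench --create-namespace \
  --values values.yaml
```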
In this configuration, Workbench is installed on one or more Linux servers, is configured with Launcher and a Slurm cluster backend, and enables:
Users to run sessions and submit jobs via the Slurm Launcher against a Slurm cluster with an arbitrary number of compute nodes of a given type
Optional replicas for high availability
Access to RStudio, Jupyter Notebook, JupyterLab and VS Code development IDEs
Multiple concurrent sessions per user
Use of multiple versions of R and Python
Requirements to support this architecture:
Users’ home directories must be stored on a shared file system (typically an NFS server); shared storage typically includes /home, /scratch, data folders, and session containers
Session components must be accessible from the Slurm compute nodes (either installed locally or mounted), or sessions can be run in Singularity containers
Users must exist on both the Workbench servers and the Slurm cluster nodes, for example by pointing both to the same authentication provider
An external PostgreSQL database server is required when using multiple Workbench servers
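A minimal sketch of the Slurm-specific pieces, assuming NFS-backed shared storage and the standard Launcher configuration file (server names, export paths, and mount options are placeholders):

```ini
# /etc/fstab on both the Workbench servers and the Slurm compute nodes
# (NFS server name and export paths are assumptions for this sketch)
nfs.example.com:/export/home     /home     nfs  defaults,_netdev  0 0
nfs.example.com:/export/scratch  /scratch  nfs  defaults,_netdev  0 0

# /etc/rstudio/launcher.conf (excerpt) -- the Slurm backend definition
[cluster]
name=Slurm
type=Slurm
```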