This repository contains the code for the dux CLI, which is used to set up, deploy, and operate containerized Tunnel server deployments. It is written in Go.
Execute go install to compile and install the CLI, then execute dux to see the output.
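For example, a minimal build-from-source sketch (assuming a Go toolchain is installed and you are in the repository root; the -h flag simply lists the available commands):
$ go install
$ dux -h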
Dux can be installed on Linux, Mac OS and Windows.
Dux for Linux (RHEL) can be installed/updated using the package managers - yum or dnf.
Note: If you had installed the Beta version 2.0.0.3 earlier, please perform the following steps to clean up before proceeding with the installation:
# Remove dux.repo
sudo rm /etc/yum.repos.d/dux.repo
# Clean the package manager cache
sudo yum clean all
or
sudo dnf clean all
Create dux.repo and install
$ cat << EOF | sudo tee /etc/yum.repos.d/dux.repo
[dux]
name=Workspace ONE Tunnel CLI
baseurl=https://packages.omnissa.com/ws1-tunnel/dux
enabled=1
gpgcheck=0
EOF
# If using yum:
$ sudo yum install -y dux
# If using dnf:
$ sudo dnf install -y dux
After installation, the following directories will be created:
/opt/omnissa/dux/
/opt/omnissa/dux/images/
/opt/omnissa/dux/logs/
$ cd /opt/omnissa/
$ ls -ltr
drwxr-xr-x. 4 root root 32 Feb 16 17:59 dux
$ cd dux/
$ ls -ltr
total 0
drwxr-xr-x. 2 root root 6 Feb 16 13:27 logs
drwxr-xr-x. 2 root root 6 Feb 16 13:27 images
$ which dux
/usr/bin/dux
To update dux using yum/dnf, use one of the following commands:
$ sudo yum update dux
or
$ sudo dnf update dux
Download the dux rpm matching the architecture of the host where dux will be executed (x86_64 for AMD/Intel or aarch64 for ARM/Apple M1).
For example, if the OS is Linux and the architecture is amd64, the rpm dux-<version>-1.x86_64.rpm should be downloaded.
The URLs to download the rpms for Dux 2.3.0 are as follows:
For AMD64/Intel: https://packages.omnissa.com/ws1-tunnel/dux/2.3.0.405/dux-2.3.0.405-1.x86_64.rpm
For ARM64/Apple Silicon: https://packages.omnissa.com/ws1-tunnel/dux/2.3.0.405/dux-2.3.0.405-1.aarch64.rpm
# Download the rpm (with wget or via a browser) and install manually
$ wget <url to download>
$ sudo rpm -i dux-2.3.0.405-1.x86_64.rpm
# After dux is installed:
$ cd /opt/omnissa/
$ ls -ltr
drwxr-xr-x. 4 root root 32 Feb 16 17:59 dux
$ cd dux/
$ ls -ltr
total 0
drwxr-xr-x. 2 root root 6 Feb 16 13:27 logs
drwxr-xr-x. 2 root root 6 Feb 16 13:27 images
$ which dux
/usr/bin/dux
$ dux version
Workspace ONE Tunnel CLI (dux)
2.3.0.405
Note: If you had an older version of the Dux package installed via rpm, please remove that package before installing the new version:
sudo rpm -e <package_name>
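For example, a short sketch to find the installed package name and remove it (the grep pattern assumes the package name contains "dux"; substitute the name reported on your system):
$ rpm -qa | grep -i dux
$ sudo rpm -e <package_name>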
Dux can be installed on Mac OS using the package manager - brew.
$ brew tap wsonetunnel/tunnel
$ brew install dux
After installation, the following directories will be created, depending on whether you are using a Mac on Intel (AMD64) or a Mac on Apple Silicon (ARM):
For Mac OS on Intel/AMD64:
/usr/local/var/opt/omnissa/dux/
/usr/local/var/opt/omnissa/dux/images
/usr/local/var/opt/omnissa/dux/logs
For Mac OS on Apple Silicon/ARM64:
/opt/homebrew/var/opt/omnissa/dux/
/opt/homebrew/var/opt/omnissa/dux/images/
/opt/homebrew/var/opt/omnissa/dux/logs/
# For example
$ cd /usr/local/var/opt/omnissa/dux/
$ ls -ltr
total 0
drwxr-xr-x. 2 root root 6 Feb 16 13:27 logs
drwxr-xr-x. 2 root root 6 Feb 16 13:27 images
$ which dux
/usr/local/bin/dux
The default path where dux looks for Tunnel server images is the platform-specific images directory listed above.
The logs from dux runs (dux.log, tunnel snap, and vpnserver logs) are stored under the platform-specific logs directory listed above.
To update the dux version on Mac OS if you had installed an older version earlier:
brew update
brew upgrade dux
To install Dux for Windows, download the MSI installer from the URL matching the architecture of the host:
For AMD64/Intel: https://packages.omnissa.com/ws1-tunnel/dux/2.3.0.405/dux-windows-amd64.msi
For ARM64/Apple Silicon: https://packages.omnissa.com/ws1-tunnel/dux/2.3.0.405/dux-windows-arm64.msi
After installation, the following directories will be created under the selected installation directory (default: C:\Program Files\Omnissa\Dux):
<INSTALL_DIR>\images
<INSTALL_DIR>\logs
If Dux is installed in a protected directory (e.g., C:\Program Files), you must run PowerShell or Command Prompt as an administrator to execute Dux commands.
Get the version of the Dux CLI deployed:
$ dux version
Workspace ONE Tunnel CLI (dux)
2.3.0.405
$ dux -h
CLI to deploy and manage Tunnel server containers
Usage:
dux [command]
Available Commands:
deploy Deploy Tunnel server containers
destroy Destroy the Tunnel server containers on the given hosts
exec-shell Open interactive shell with Tunnel server container
extportscan Check for open ports in the external NIC of the Tunnel server container hosts
help Help about any command
init Create a manifest file for configuring Tunnel server details for deployment and management
log-override Override log level in one or more Tunnel server containers deployed
logs Get logs from the Tunnel server containers deployed
report Fetch vpnreport of a Tunnel server container
restart Restart the Tunnel server container on given hosts
status Get the status of the Tunnel server containers deployed
stop Stop the Tunnel server containers on given hosts
version Get the version of dux
Flags:
-h, --help Print help information
-v, --verbose Show verbose logs
Use "dux [command] --help" for more information about a command.
On the host/VM where dux is to be installed and run:
a. AlmaLinux, CentOS/RHEL, macOS or Windows
b. SSH (Secure Shell) should be installed and running on your local Mac/Linux/Windows machine. You can verify this by opening a terminal and typing ssh followed by pressing Enter. If SSH is installed, you will see a list of available options. If not, please install SSH using the appropriate package manager for your system.
c. SSH keys need to be set up for users to log in to the remote VMs (where tunnel server is to be deployed) securely without passwords. If you haven't set up SSH keys before, follow these steps:
i. Generate SSH keys using a command such as: ssh-keygen -t rsa
ii. Follow the prompts to create the SSH keys. By default on Linux and Mac OS, they will be saved as ~/.ssh/id_rsa (private key) and ~/.ssh/id_rsa.pub (public key) if RSA is chosen. On Windows, the default path for the keys is C:\Users\<user_name>\.ssh
iii. Copy the contents of the public key (id_rsa.pub) to the remote VM's ~/.ssh/authorized_keys file. You can use the ssh-copy-id command for this purpose: ssh-copy-id username@remote_host
Note that on Windows, PowerShell and Command Prompt do not include ssh-copy-id. Users must manually copy the public key using scp or sftp, or append it to authorized_keys by hand (a manual-copy sketch is shown below).
Please note that if you are using Git Bash or WSL on Windows, ssh-copy-id might be available.
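For example, a minimal manual-copy sketch (username, remote_host, and the key path are placeholders; this assumes an RSA key at the default location and works from Linux, Mac OS, Git Bash, or WSL):
$ cat ~/.ssh/id_rsa.pub | ssh username@remote_host "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"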
d. If you wish to enable ssh_host_key_check in the ts_manifest.yml, ensure that the known_hosts file exists on your local machine.
This file is used to store information about host keys for SSH connections. If it doesn't exist, it will be created automatically when you connect to a remote host for the first time or when running dux commands.
If the known_hosts file does not exist, follow these steps to create it (an alternative using ssh-keyscan is shown after these steps):
- SSH to the remote host using the command: ssh username@remote_host
- You will be prompted to confirm the authenticity of the host. Type yes and press Enter to continue.
- After successful authentication, the host key will be added to the known_hosts file on your local machine.
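Alternatively, a minimal sketch to pre-populate known_hosts without an interactive prompt (remote_host is a placeholder; review the collected key before trusting it):
$ ssh-keyscan remote_host >> ~/.ssh/known_hosts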
e. The Tunnel server Docker image bundle (tar.gz) has to be downloaded and must be available on the host from which the CLI will be run.
f. Ensure the image is available in the default path. This step is applicable only after dux is installed and the folder structure is created.
For dux on Linux:
Use /opt/omnissa/dux/images/
For dux on macOS
Use /usr/local/var/opt/omnissa/dux/images for macOS on Intel/AMD64
Use /opt/homebrew/var/opt/omnissa/dux/images/ for macOS on Apple Silicon/ARM64
For dux on Windows
Use <INSTALL_DIR>\images - where INSTALL_DIR is the directory chosen by the user during installation of dux
The dux tool will use the container image from the above folder path by default. A custom folder may be used; ensure the full directory path is provided in the manifest file if using a custom folder.
On the Linux VMs/hosts where the Tunnel server container needs to be deployed:
a. SSH has to be enabled in the remote host/VM and SSH server daemon sshd has to be running in the remote host.
For example, on AlmaLinux, check if sshd is running with the command: systemctl status sshd
b. Docker, Podman, or podman-docker has to be installed and running on the Linux host/VM. Snap Docker (on Ubuntu) is currently not supported.
Please ensure Docker CE, Podman, or podman-docker is installed. A quick remote verification sketch follows this item.
i. Dux commands perform docker operations using the 'docker' command. If Podman is installed on the VM, a symbolic link from podman to /usr/bin/docker is required to ensure compatibility.
The dry-run command (dux deploy -d) automatically creates this symlink if needed. Alternatively, the user can manually create it using the following command: `sudo ln -sf $(which podman) /usr/bin/docker`
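For example, a hedged sketch to verify the container runtime on a remote host before deployment (username and remote_host are placeholders; the second command assumes the Docker service is managed by systemd):
$ ssh username@remote_host "docker --version"
$ ssh username@remote_host "sudo systemctl is-active docker"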
c. Ensure that the user can run sudo without a password on the remote VM/host where the Tunnel server container needs to be deployed,
i.e., in the sudoers file on the remote host, add an entry to grant passwordless access to your desired user.
To allow a user to execute commands with sudo privileges without entering a password on the remote VM, follow these steps:
i. SSH to the remote VM as a user with administrative privileges.
ii. Edit the sudoers file using the command: sudo visudo
iii. Add the following line to the end of the file to grant sudo privileges without a password prompt (replace username with the actual username of the user):
username ALL=(ALL) NOPASSWD: ALL
iv. Save and exit the sudoers file. A quick verification sketch follows these steps.
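For example, a minimal check that passwordless sudo works for the configured user (username and remote_host are placeholders; the echo text is arbitrary):
$ ssh username@remote_host "sudo -n true && echo 'passwordless sudo OK'"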
d. Connectivity to UEM Console API.
e. Connectivity to AWCM.
f. If running cascade mode, Front-end to Back-end connection (direct or load-balanced) is required.
Deployment of Tunnel server in Dux VM
To deploy Tunnel server in the same Linux VM as Dux, and to use dux commands locally without the need to use ssh:
The dux init command creates a sample manifest file for configuring the Tunnel server for deployment.
Note that this command needs to be run only the first time after you install/update dux.
The command creates a sample manifest (ts_manifest.yml) and the performance tuning script (perf_tune.sh) under the platform-specific dux directory by default. If you wish to create the files in a different path, run "dux init <some_path>"; in that case, please specify the manifest path with the -m option in the other commands.
The script perf_tune.sh contains commands to configure remote host settings for enhanced performance.
For example, in Linux VM where dux is installed:
$ dux init
Deployment manifest initialized successfully
$ cd /opt/omnissa/dux
abc@abc dux $ ls -ltr
total 16
drwxr-xr-x 3 abc xyz 96 Feb 16 18:12 images
-rw-r--r-- 1 abc xyz 643 Feb 16 18:15 perf_tune.sh
-rw-r--r-- 1 abc xyz 2335 Feb 19 11:11 ts_manifest.yml
drwxr-xr-x 9 abc xyz 288 Feb 19 11:11 logs
Edit the generated ts_manifest.yml in an editor of your choice.
Please refer to the section "Points to be noted while editing ts_manifest.yml" under Troubleshooting section.
Here are a few parameters:
# Enter the filename of the image to deploy below.
# This must match against the tunnel server image filename from the default directory (refer to the note below) or the absolute path.
# example: 29-2023.06.14-22e04910.tar.gz or /home/admin/29-2023.06.14-22e04910.tar.gz
# Note: The default directory where the images are recommended to be present is:
# - for linux: /opt/omnissa/dux/images
# - for Mac OS on Intel/AMD64: /usr/local/var/opt/omnissa/dux/images/
# - for Mac OS on Apple Silicon/ARM64: /opt/homebrew/var/opt/omnissa/dux/images/
# - for Windows: <path of dux installation directory>/images
# Copy the bundle to the default images directory
# eg. in Linux: cp ~/Downloads/23.12.14-2023.12.12-95068395.tar.gz /opt/omnissa/dux/images/
$ ls -ltr /opt/omnissa/dux/images
total 735112
-rw-r--r--@ 1 abc xyz 376374902 Feb 16 18:12 23.12.14-2023.12.12-95068395.tar.gz
#image_name in manifest
image_name: 23.12.14-2023.12.12-95068395.tar.gz
If all hosts have common authentication credentials, you may want to use the ssh_login_credentials parameter.
However, if you want to use a different set of credentials for a host, the host_info parameter can be used. Refer to the hosts sub-section below.
For authentication, provide the ssh user name and ssh key path below.
Please ensure an SSH key is created and copied to the remote VMs. Refer to https://linuxhint.com/generate-ssh-keys-on-linux/
If all hosts use an SSH port other than 22, uncomment the ssh_port parameter and enter the port number. If not provided, the default value of 22 will be used.
For example:
ssh_login_credentials:
ssh_user: root
# Input the path of ssh key - e.g /home/admin/id_rsa
ssh_key_path: "/home/admin/id_rsa"
## Optional: Input the ssh port. Default value - 22
#ssh_port:
SSH (Secure Shell) host key checking is a crucial security measure that helps verify the authenticity of a remote server before establishing a connection. When a client connects to a server for the first time, SSH presents the server's host key to the client. The client then checks this key against its list of known host keys to ensure it matches.
If the host key presented by the server matches an entry in the client's known_hosts file, the connection proceeds without interruption. However, if there's no match, SSH prompts the user to confirm the authenticity of the server by displaying the key fingerprint. This fingerprint serves as a unique identifier for the server's key.
The purpose of SSH host key checking is to prevent man-in-the-middle attacks, where an attacker intercepts communication between the client and server, posing as the legitimate server. By verifying the host key, SSH ensures that the client is connecting to the intended server and not a malicious entity.
By default the option to check host keys of remote VMs is enabled and the user will be prompted. If you do not wish to receive the prompts, ssh_host_key_check can be set to 0 to disable the check.
# SSH Host key check - verify the identity of the remote host
# By default this is enabled and the user will be prompted to confirm the fingerprint of the public key of the remote host.
# If disabled, dux will connect similar to the ssh option StrictHostKeyChecking=no and UserKnownHostsFile=/dev/null
# 1 - enable host key checking
# 0 - disable host key checking
ssh_host_key_check: 1
Fill in the IP address of the host where the Tunnel server container needs to be deployed and the server_role (Basic, cascade-FE, or cascade-BE).
If Tunnel server has to be deployed in Basic mode, the value of server_role is 0.
If Tunnel server has to be deployed as FrontEnd server, the value of server_role is 1.
If Tunnel server has to be deployed as Backend server, the value of server_role is 2.
For example:
hosts:
# Enter IP address of the host below
- address: 1.2.3.4
# The deployment role for the server.
# 0 - basic mode
# 1 - cascade mode - frontend
# 2 - cascade mode - backend
server_role: 1
If both ssh_key_path and ssh_password are provided, ssh_key_path is preferred. Note that, for security reasons, giving password information in the manifest is not recommended, but it is still provided as an option.
The values can also be passed as environment variables.
If all hosts have common ssh credential info, you may use the global parameter: ssh_login_credentials mentioned in the section above.
If both host_info and ssh_login_credentials are given, the credentials under host_info are preferred.
If the host uses an SSH port other than 22, uncomment the ssh_port parameter and enter the port number. If not provided, the default value of 22 will be used.
For example:
host_info:
ssh_user: admin
## Input the path of ssh key - e.g /home/admin/id_rsa
ssh_key_path: /home/admin/id_rsa
## For security reasons, the ssh_password is not recommended.
#ssh_password:
## Input the ssh port. Default value - 22
#ssh_port:
To use the Unique IP per device connection feature, supported from Tunnel server version 24.10 onwards, the subnet_range parameter can be defined per host. Note that this parameter should be added only if the deployment role of the tunnel server is basic or backend.
For example:
## If the deployment role for the server is basic/backend, please enter the CIDR for the IP range for devices corresponding to this tunnel server deployment
##
## Example: subnet_range: 192.168.4.0/23
subnet_range: 192.168.8.0/23
If the customer needs to use a 2 NIC configuration - where one NIC is needed for unauthenticated traffic (External NIC) and another NIC for authenticated back-end traffic (Internal IP/host address), the provisioning needs to be done on the VM where tunnel container is to be deployed.
Dux will be running on a host in the Internal network, hence would be using the Internal IP for communication. The external IP can be given in the manifest using the parameter multi_nic_external_ip.
From Dux, the following security assessments would be done during deploy dry-run / deploy command to ensure security is not compromised if external IP is configured:
Verify SSH is Not Listening on External IP - To ensure SSH is not accessible via the external IP on your machine, SSH should be configured to only listen on specific interfaces (internal IP, or 127.0.0.1 or any IPs explicitly configured) and not on all interfaces - 0.0.0.0 or external IP
Port Scan for External IP : Ensure port 22 (the default SSH Port) is not open in the External IP
To specify the external IP, uncomment the multi_nic_external_ip parameter:
# If multiple NICs are used in the host machine where tunnel server is deployed,
# please specify the details of external IP of NIC2
multi_nic_external_ip:
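For example, with a hypothetical external address (203.0.113.10 is an illustrative value only; use the actual IP assigned to NIC2):
multi_nic_external_ip: 203.0.113.10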
If the remote host has to be tuned with performance parameters, set perf_tune to 1 (the default). This will execute perf_tune.sh on the remote host.
If you do not want to modify the system configuration on the remote host, set perf_tune to 0.
For example:
# Tune performance parameters/system configuration in remote host to support larger number of connections
# 1 - execute perf_tune.sh in the remote host
# 0 - do not modify system configuration in remote host
perf_tune: 1
If host entries need to be specified on the remote host (e.g., if the outbound proxy is not resolvable via DNS in the remote network), specify the host names and IP addresses in this section.
# Add entries to the container hosts file to manually link FQDN to IP address
# Format:
## - host_name:
## ip_address:
host_entries:
- host_name: example.com
ip_address: 1.2.3.4
The details of the UEM profile, such as the UEM URL, Group ID / Tunnel configuration ID, and the user name for the OG, need to be entered in this section.
If tunnel_config_id is left blank, the organization Group ID is used to fetch the configuration.
Note that the tunnel_config_id parameter is supported only if the UEM console supports the multi-tunnel configuration feature, which is available from UEM Console version 23.06 onwards. If you are using an older UEM console version, please use the group_id field.
For example:
uem:
# The Workspace ONE UEM API server URL. The destination URL must contain the protocol and hostname or IP address
# Example: https://load-balancer.example.com
url: https://wns-1.ssdevrd.com
# Omnissa Tunnel Configuration ID configured in the Workspace ONE UEM Console.
# This field is supported only if the UEM console supports multi-tunnel configuration feature (from UEM Console version 23.06 onwards).
# If left blank, default configuration from the specified organization group will be fetched.
tunnel_config_id: 27bff2e3-4c81-4c1a-a955-7de6b44c75be
# The organization group ID in Workspace ONE UEM Console where Tunnel is configured.
group_id:
admin:
# The username to authenticate with the Workspace ONE UEM API server.
username: uem_user1
Once the manifest is updated, the deploy command can be run to deploy the Tunnel server container on the hosts specified.
$ dux deploy -h
Deploy Tunnel server containers
Usage:
dux deploy [flags]
Flags:
-d, --dry-run Check if manifest is good to deploy
-p, --ip stringArray Address specified in the manifest file where Tunnel server needs to be deployed
-m, --manifest-file string Custom manifest file path (default "/opt/omnissa/dux/ts_manifest.yml")
-n, --node-number stringArray Number of the node as listed in manifest where Tunnel server needs to be deployed
-q, --q Quiet mode: interactive ssh password prompts are disabled
-u, --uem-password string Password to authenticate with the Workspace ONE UEM API server
-y, --y Auto accept all prompts
Global Flags:
-h, --help Print help information
-v, --verbose Show verbose logs
To check whether the manifest is syntactically correct, run "dux deploy --dry-run" (or -d).
The following checks are done when the dux deploy --dry-run command is run:
$ dux deploy -d
Manifest file syntax validation is successful
Verifying Tunnel Server image location given in manifest: /opt/omnissa/dux/images/24.06.1.190-2024.08.29-8cc49b8.tar.gz
Verifying deployment prerequisites on 192.168.99.185
Verifying docker is installed and running on 192.168.99.185
Checking open SSH connections to confirm SSH is configured securely
Warning: SSH is listening on all interfaces (0.0.0.0)
Warning: SSH is listening on all interfaces (::)
Please act on the warning messages to ensure security is not compromised
Checking for availablility of sufficient free disk space
Host 192.168.99.185 is good to deploy
Manifest file and hosts are good to deploy!
#In case of error in the manifest, for example, if tunnel_config_id was not filled up, you may get an error like below:
$ dux deploy -d
Manifest verification failed error="Incorrect data in manifest: the group_id field is required if tunnel_config_id is not populated"
For security requirements, when dux commands are executed, host key verification is done during the SSH handshake on first connection. If the host is unknown, a prompt is displayed to check the fingerprint of the host's key. If the user confirms the host key is correct, the host is added to known hosts.
If the fingerprint of the host changes, the user is prompted again to ensure there is no intruder attack.
The authenticity of host '192.168.99.185:22' can't be established.
Fingerprint of the host's key:SHA256:AOy8f1sChEM7xLJyYP190vjjVxDLYI9ORDaKZCNKzzE
Do you want to continue connecting? (yes/no):
Note: The auto-accept (-y) option is disabled for SSH host key checking, ensuring the user can review the fingerprint before adding the host to known_hosts for the first time.
This command deploys the Tunnel server containers on the hosts in the order listed in the manifest, and as per the UEM configuration defined. The image is copied to the remote host, which may take a few minutes depending on network connectivity.
Note:
#Sample run
$ dux deploy
Enter UEM password:
The perf_tune option has been enabled in the manifest. The perf_tune script will modify the Tunnel server host machines to provide recommended performance settings. Do you want to run the perf_tune script? (y/n): y
Checking open SSH connections to confirm SSH is configured securely
Warning: SSH is listening on all interfaces (0.0.0.0)
Warning: SSH is listening on all interfaces (::)
There are some warnings in the security assessment. Do you wish to continue with deployment (y/n): y
Preparing for Tunnel server container deployment on 10.87.132.186
Copying Tunnel server container image to remote. Please wait..
Progress 100% |██████████████████████████████████████████████████████████████████████████████████████| (376/376 MB, 7.5 MB/s)
Deploying new Tunnel server container on 10.87.132.186
Tunnel server container ID: 5a3cfd0f45741379e0a61e8c4847eebeb75043e94d2b74d5c1cf97015cfa0fdf
Fetching the deployment status. Please wait. This may take some time
Node number(n): 2 Version: 24.10.el9.411 CPU: 8.02% Memory: 1221.984 MB Devices: 0 Cascade: back-end Status: Running
Deployment is up!
Deploy command has completed on 10.87.132.186
#### To use a different manifest
To use a manifest from a different path, the -m flag can be used. If not specified, ts_manifest.yml from the default path for the platform (e.g., /opt/omnissa/dux/ts_manifest.yml on Linux, as shown in the command help) is used.
#For eg.
$ dux deploy -m ~/Downloads/ts_manifest_xyz.yml
#### To deploy tunnel server container in specific/few remote hosts
To deploy Tunnel server containers on specific remote hosts, specify them by IP or node number (as per the order in the manifest):
#For eg.
$ dux deploy -n 1 -n 3
$ dux deploy -p 1.2.3.4
# To give the UEM password as a command line option:
$ dux deploy -u <uem_password>
Once the deployment of containers is successful, other commands can be used to check status of deployment, fetch logs, run vpnreport on container, and perform operations like stop, restart, and even destroy the deployments.
#status help
$ dux status -h
Get the status of the Tunnel server containers deployed
Usage:
dux status [flags]
Flags:
-p, --ip stringArray Address specified in the manifest file where Tunnel server container is deployed
-j, --json Get status of Tunnel server containers in json format
-m, --manifest-file string Custom manifest file path (default "/opt/omnissa/dux/ts_manifest.yml")
-n, --node-number stringArray Number of the node as listed in manifest
-q, --q Quiet mode: interactive ssh password prompts are disabled
Global Flags:
-h, --help Print help information
-v, --verbose Show verbose logs
#To get the status of all deployments - sample run. In this case, one of the deployments is not in the Running state, hence its status is shown as Not Deployed
$ dux status
Status of Tunnel Server containers deployed
1. Host: 10.87.132.186 Node number(n): 1 Version: 23.12.ph4.14 CPU: 6.97% Memory: 1137.129 MB Devices: 0 Cascade: off Status: Running
2. Host: 192.168.99.180 Status: Not Deployed
#Get status of a host/hosts by IP
#Multiple ips can be specified too. eg. dux status -p 1.2.3.4 -p 1.2.3.5
$ dux status -p 10.87.132.186
Status of Tunnel Server containers deployed
1. Host: 10.87.132.186 Node number(n): 1 Version: 23.12.ph4.14 CPU: 0.00% Memory: 1157.281 MB Devices: 0 Cascade: off Status: Running
#Get status of a specific host with node-number as listed in the manifest file
#Multiple node numbers can be given too: eg. dux status -n 1 -n 3
$ dux status -n 1
Status of Tunnel Server containers deployed
1. Host: 10.87.132.186 Node number(n): 1 Version: 23.12.ph4.14 CPU: 0.00% Memory: 1157.281 MB Devices: 0 Cascade: off Status: Running
#Get status in a json string format for processing (a jq example follows this output)
$ dux status -j
{
"command": "status",
"values": [
{
"CPU": "0",
"Cascade": "off",
"Devices": "0",
"Host": "10.87.132.186",
"Memory": "1157.2812",
"Node number": "1",
"Status": "Running",
"Version": "23.12.ph4.14"
},
{
"CPU": "Unknown",
"Cascade": "Unknown",
"Devices": "Unknown",
"Host": "192.168.99.180",
"Memory": "Unknown",
"Node number": "2",
"Status": "Not Deployed",
"Version": "Unknown"
}
]
}
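For example, a hedged sketch of extracting just the host and status fields from the JSON output (assuming the jq utility is installed; the field names match the sample above), which for this sample would print something like:
$ dux status -j | jq -r '.values[] | "\(.Host): \(.Status)"'
10.87.132.186: Running
192.168.99.180: Not Deployed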
Vpnreport can be fetched from the deployed container(s).
#report help
$ dux report -h
Fetch vpnreport of a Tunnel server container
Usage:
dux report [flags]
Flags:
-p, --ip stringArray Address specified in the manifest file where Tunnel server container is deployed
-j, --json Get vpnreport in json format
-m, --manifest-file string Custom manifest file path (default "/opt/omnissa/dux/ts_manifest.yml")
-n, --node-number stringArray Number of the node as listed in manifest
-q, --q Quiet mode: interactive ssh password prompts are disabled
-r, --rows string Range of rows from vpnreport to be printed for the nodes: e.g if rows 10-20 need to be printed, specify option as -r 10-20
Global Flags:
-h, --help Print help information
-v, --verbose Show verbose logs
#To get vpnreports of all containers, give the command "dux report"
# Sample run
$ dux report
Vpnreport of Tunnel Server containers deployed
1. Host: 10.87.132.186
Tunnel Version: 23.12.ph4.14
Console Version: 23.10.0.0
Operating System: Omnissa Photon OS/Linux
MultiTunnel Config: Tunnel Config
# of Devices: 0 Peak: 2
A: 0 iOS: 0 Mac: 0 Win: 0 Lnx: 0 Others: 0 SDK: 0
# of Connections: 0 Peak: 0
# of Traffic Rules: 1 Enabled: Yes
# of Proxies: 0 Up: 0 Down: 0
API Connectivity: Up Last Resp: 200 OK
AWCM Connectivity: Up Last Resp: 200 OK
API via Traf Rules: No
Cascade Mode: Off Reverse Connect: No
KCD Proxy Support: No Config Locked: No
TLS Port Sharing: No Deployment Mode: QA
FIPS Mode: No NSX Mode: No
ZTNA DTR: Yes ZTNA PDTR: No
# of ZTNA DTR: 0 # of ZTNA PDTR: 0
Appliance Mode: No Container Mode: Yes
MFA: Off JWT: No
Service Status: Up
Log Lvl: Debug
SOCKS Downstream: 0.000 Kbps
SOCKS Upstream: 0.000 Kbps
NAT Downstream: 0.000 Kbps
NAT Upstream: 0.000 Kbps
Total Downstream: 0.000 Kbps
Total Upstream: 0.000 Kbps
CPU 1: 0.00%% CPU 2: 2.97%%
Average CPU: 1.06 %%
Memory Virtual: 1157.281 MB
Memory Resident: 77.879 MB
Memory Share: 15.461 MB
Certificate Expiry Info
Server cert: Sep 25,2025
API cert: Sep 20,2042
Client cert: Sep 20,2042
API Last Sync: 2024-02-19 12:50:04
AWCM Last Sync: 2024-02-19 12:56:55
Up Time: 3d 0h 8m 22s
# of Allowlisted Devices: 1
# of Devices Since Start: 0
Using DTLS: 0
# of Device Failures
Closed Handshake: 0
Failed Handshake: 104
Rejected due to DDoS Protection: 0
Blocked due to ZTNA Policy: 0
Blocked by Admin: 0
Unable to Connect to BackEnd: 0
Device Not in Allowlist: 0
Device Non-Compliant: 0
Device Non-Managed: 0
Outbound Traffic Since Start
# of Successful Connections: 0
# of Failed Connections: 0
# of Blocked by ZTNA: 0
# Using Proxy: 0
# Not Using Proxy: 0
# of Flows by Device Type
iOS: 0
Android: 0
Windows: 0
MacOS: 0
Linux: 0
Others: 0
# of Flows from SDK Bundled App: 0
# of Flows by Protocol
TCP: 0
UDP: 0
Connected UDP: 0
Connectionless UDP: 0
Per Device UDP Limit: 1321
Popular Apps TCP UDP PKT
1. 0 0 0
2. 0 0 0
3. 0 0 0
4. 0 0 0
5. 0 0 0
6. 0 0 0
7. 0 0 0
8. 0 0 0
Devices with Most Traffic TCP UDP PKT
1. 0 0 0
2. 0 0 0
3. 0 0 0
4. 0 0 0
5. 0 0 0
6. 0 0 0
7. 0 0 0
8. 0 0 0
Top Destinations TCP UDP PKT
1. 0 0 0
2. 0 0 0
3. 0 0 0
4. 0 0 0
5. 0 0 0
6. 0 0 0
7. 0 0 0
8. 0 0 0
2. Host: 192.168.99.180 Status: Not Deployed
# To filter a few row numbers from the output, give the -r/--rows option
$ dux report -r 6-10
Displaying rows:6 7 8 9 10
Vpnreport of Tunnel Server containers deployed
1. Host: 10.87.132.186
A: 0 iOS: 0 Mac: 0 Win: 0 Lnx: 0 Others: 0 SDK: 0
# of Connections: 0 Peak: 0
# of Traffic Rules: 1 Enabled: Yes
# of Proxies: 0 Up: 0 Down: 0
API Connectivity: Up Last Resp: 200 OK
2. Host: 192.168.99.180 Status: Not Deployed
# To fetch row numbers 1,6, and 10 from all hosts
$ dux report -r 1,6,10
Displaying rows:1 6 10
Vpnreport of Tunnel Server containers deployed
1. Host: 10.87.132.186
Tunnel Version: 23.12.ph4.14
A: 0 iOS: 0 Mac: 0 Win: 0 Lnx: 0 Others: 0 SDK: 0
API Connectivity: Up Last Resp: 200 OK
2. Host: 192.168.99.180
Tunnel Version: 23.12.ph4.14
A: 0 iOS: 0 Mac: 0 Win: 0 Lnx: 0 Others: 0 SDK: 0
API Connectivity: Up Last Resp: 204 No Content
## To get vpnreport output as json format for processing use -j option
# For example:
$ dux report -j
## To get vpnreport of a node-number or ip use -n or -p option as the other commands
# For example
$ dux report -n 1
$ dux report -p 10.87.142.143
Fetch tunnel_snap from the deployed containers. If the container deployment is down, the docker logs of the container are fetched.
Note that the logs are stored in the logs directory based on the platform:
For Linux: /opt/omnissa/dux/logs/
For Mac OS on Intel (AMD64): /usr/local/var/opt/omnissa/dux/logs
For Mac OS on Apple Silicon (ARM64): /opt/homebrew/var/opt/omnissa/dux/logs
The option -f can be used to continuously view the docker logs output of a deployed Tunnel server container until Ctrl-C is pressed. (A sketch for inspecting a downloaded log bundle appears after the examples below.)
# logs help
$ dux logs -h
Get logs from the Tunnel server containers deployed
Usage:
dux logs [flags]
Flags:
-f, --follow Follow/View logs of a Tunnel server container specified by node-number (-n) or ip (-p) option
-p, --ip stringArray Address specified in the manifest file where Tunnel server container is deployed
-m, --manifest-file string Custom manifest file path (default "/opt/omnissa/dux/ts_manifest.yml")
-n, --node-number stringArray Number of the node as listed in manifest
-q, --q Quiet mode: interactive ssh password prompts are disabled
Global Flags:
-h, --help Print help information
-v, --verbose Show verbose logs
#Get logs of all containers deployed as per manifest
#Sample run - Note if the deployment is not up, the container logs are fetched.
$ dux logs
Retrieve vpnserv logs from 10.87.132.186
Copy log bundle from Remote to Local machine..
Logs from 10.87.132.186 downloaded at: /opt/omnissa/dux/logs/tunnel_snap.10.87.132.186_20240219184004.tar.gz
Retrieve vpnserv logs from 192.168.99.180
Copy log bundle from Remote to Local machine..
Logs from 192.168.99.180 downloaded at: /opt/omnissa/dux/logs/tunnel_snap.192.168.99.180_20240219184036.tar.gz
#Logs from a single deployment or multiple deployments can be obtained by specifying IP / node-number in the order as per the manifest
#eg.
# dux logs -p 1.2.3.4 -p 1.2.3.5
# dux logs -n 2 -n 4
#To continuously view/follow the run logs of container , give -f option for the specific node/host ip
# dux logs -n 1 -f
#Press Ctrl-C to stop viewing
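To inspect a downloaded log bundle, a minimal sketch (the filename is taken from the sample run above; adjust the path for your platform and timestamp):
$ mkdir -p /tmp/tunnel_snap
$ tar -xzf /opt/omnissa/dux/logs/tunnel_snap.10.87.132.186_20240219184004.tar.gz -C /tmp/tunnel_snap
$ ls /tmp/tunnel_snap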
In case a Tunnel server container needs to be stopped for some reason, the dux stop command can be given.
# stop help
$ dux stop -h
Stop Tunnel server containers on the given host(s) from the manifest file. To restart the containers again, you may use dux restart command.
Usage:
dux stop [flags]
Flags:
-p, --ip stringArray Address specified in the manifest file where Tunnel server container is deployed
-m, --manifest-file string Custom manifest file path (default "/opt/omnissa/dux/ts_manifest.yml")
-n, --node-number stringArray Number of the node as listed in manifest
-q, --q Quiet mode: interactive ssh password prompts are disabled
-y, --y Auto accept all prompts
Global Flags:
-h, --help Print help information
-v, --verbose Show verbose logs
# Stop all deployments
$ dux stop
Are you sure you want to stop the Tunnel Server containers deployed in all the hosts given in the manifest?
ip : 10.87.132.186 node-number : 1
ip : 10.87.132.197 node-number : 2
Please confirm (y/n): y
Tunnel server container was successfully stopped on 10.87.132.186
Tunnel server container was successfully stopped on 10.87.132.197
#Deployment of tunnel server containers can be stopped by specifying IPs or node-number in the order as per the manifest
#eg.
# dux stop -p 1.2.3.4
# dux stop -n 4
#To auto accept all prompts for y/n , -y option can be given
#eg.
# dux stop -y
Stopped containers can be restarted with the "dux restart" command.
#restart help
$ dux restart -h
Restart the Tunnel server container on given hosts
Usage:
dux restart [flags]
Flags:
-p, --ip stringArray Address specified in the manifest file where Tunnel server container is deployed
-m, --manifest-file string Custom manifest file path (default "/opt/omnissa/dux/ts_manifest.yml")
-n, --node-number stringArray Number of the node as listed in manifest
-q, --q Quiet mode: interactive ssh password prompts are disabled
-y, --y Auto accept all prompts
Global Flags:
-h, --help Print help information
-v, --verbose Show verbose logs
#Restart all deployments
#Sample run
$ dux restart
Are you sure you want to restart the Tunnel Server containers deployed in all the hosts given in the manifest?
ip : 10.87.132.186 node-number : 1
ip : 10.87.132.197 node-number : 2
Please confirm (y/n): y
Tunnel server container was successfully restarted on 10.87.132.186
Tunnel server container was successfully restarted on 10.87.132.197
#Deployment of tunnel server containers can be restarted by specifying IPs or node-number in the order as per the manifest
#eg.
# dux restart -p 1.2.3.4
# dux restart -n 4
#To auto accept all prompts for y/n , -y option can be given
#eg.
# dux restart -y
To stop a container and remove the loaded container image from the remote Tunnel server host, the "dux destroy" command can be used.
#destroy command help
$ dux destroy -h
Destroy the Tunnel server containers on the given hosts
Usage:
dux destroy [flags]
Flags:
-p, --ip stringArray Address specified in the manifest file where Tunnel server deployment needs to be destroyed
-m, --manifest-file string Custom manifest file path (default "/opt/omnissa/dux/ts_manifest.yml")
-n, --node-number stringArray Number of the node as listed in manifest where Tunnel server deployment needs to be destroyed
-q, --q Quiet mode: interactive ssh password prompts are disabled
-y, --y Auto accept all prompts
Global Flags:
-h, --help Print help information
-v, --verbose Show verbose logs
#Destroy all deployments
#Sample run
$ dux destroy
Are you sure you want to destroy the Tunnel Server containers deployed in all the hosts given in the manifest?
ip : 10.87.132.186 node-number : 1
ip : 10.87.132.197 node-number : 2
Please confirm (y/n): y
Tunnel server container was successfully destroyed on 10.87.132.186
Tunnel server container was successfully destroyed on 10.87.132.197
#Deployment of tunnel server containers can be destroyed by specifying IPs or node-number in the order as per the manifest
#eg.
# dux destroy -p 1.2.3.4
# dux destroy -n 4
#To auto accept all prompts for y/n , -y option can be given
#eg.
# dux destroy -y
To change the log level in a Tunnel server container, use the log-override command.
Note that this feature to override the log level set in the UEM console is supported in Tunnel server 23.12 onwards.
$ dux log-override -h
Override log level in one or more Tunnel server containers deployed
Usage:
dux log-override [flags]
Flags:
-c, --clear Restore log level to default value set by UEM Console
-d, --duration int Duration in minutes for the log level override; -1 to set log level indefinitely (default 30)
-p, --ip stringArray Address specified in the manifest file where Tunnel server container is deployed
-l, --log-level int Desired log level to be set (1-Error, 2-Warn, 3-Info, 4-Debug)
-m, --manifest-file string Custom manifest file path (default "/opt/omnissa/dux/ts_manifest.yml")
-n, --node-number stringArray Number of the node as listed in manifest
-q, --q Quiet mode: interactive ssh password prompts are disabled
-y, --y Auto accept all prompts
Global Flags:
-h, --help Print help information
-v, --verbose Show verbose logs
# Some examples below
# To set log-level to Debug (4) for 10 mins
$ dux log-override -n 1 -l 4 -d 10
Are you sure you want to set log level in the Tunnel Server containers deployed in the following hosts to "Debug" for 10 mins?
ip : 10.87.132.186 node-number : 1
Please confirm (y/n): y
Log-level was succesfully set to "Debug" in Tunnel server container on 10.87.132.186
# To set log-level to Debug (4) (default 30 mins)
$ dux log-override -n 1 -l 4
Are you sure you want to set log level in the Tunnel Server containers deployed in the following hosts to "Debug" for 30 mins?
ip : 10.87.132.186 node-number : 1
Please confirm (y/n): y
Log-level was succesfully set to "Debug" in Tunnel server container on 10.87.132.186
# To set log-level to Debug (4) for indefinite time with auto-accept
$ dux log-override -n 1 -l 4 -d -1 -y
Log-level was succesfully set to "Debug" in Tunnel server container on 10.87.132.186
# To clear log-level set / restore to default value set by UEM Console - for all hosts
$ dux log-override -c
Are you sure you want to clear log override on all the Tunnel Server containers deployed? (y/n): y
Restored log level to default value set by UEM Console on 10.87.132.186
Restored log level to default value set by UEM Console on 10.87.132.197
If you wish to get verbose logs for any command, use the -v or --verbose option.
For example:
$ dux deploy -v
If you wish to start a shell in the Tunnel server container, you can use the 'exec-shell' command:
$ dux exec-shell -h
Open interactive shell with Tunnel server container
Usage:
dux exec-shell [flags]
Flags:
-p, --ip stringArray Address specified in the manifest file where Tunnel server container is deployed
-m, --manifest-file string Custom manifest file path (default "/opt/omnissa/dux/ts_manifest.yml")
-n, --node-number stringArray Number of the node as listed in manifest
-q, --q Quiet mode: interactive ssh password prompts are disabled
Global Flags:
-h, --help Print help information
-v, --verbose Show verbose logs
$ dux exec-shell -n 1
Starting the interactive shell with container 10.87.132.186 vpnserver
[root@centos81 vpnd]# pwd
pwd
/opt/omnissa/tunnel/vpnd
[root@centos81 vpnd]# ls
ls
awcm.ca
awcm.crt
awcm.key
ca.pem
client_config.conf
dh2048.pem
entrypoint
gmon.out
ipv6_history.json
report.conf
server.conf
traffic_rules.xml
tunnel_snap.tar.gz
unique_ip_history.json
vpn.crt
vpn.key
vpnreport
vpnserv
If the user needs to use a 2 NIC configuration - where one NIC is needed for unauthenticated traffic (External NIC) and another NIC for authenticated back-end traffic (Internal IP/host address) - the 'extportscan' command can be used to check for the following security issues on the external NIC/IP of the Tunnel server host:
Verify SSH is not listening on the external IP - To ensure SSH is not accessible via the external IP on your machine, SSH should be configured to only listen on specific interfaces (the internal IP, 127.0.0.1, or any IPs explicitly configured) and not on all interfaces (0.0.0.0) or the external IP.
Port scan of the external IP: Ensure port 22 (the default SSH port) is not open on the external IP. The command also lists all open ports for the user to check for any security loopholes.
Note that the external IP can be given in the multi_nic_external_ip parameter in the manifest file.
$ dux extportscan --help
Check for open ports in the external NIC of the Tunnel server container hosts
Usage:
dux extportscan [flags]
Flags:
-p, --ip stringArray Address specified in the manifest file where Tunnel server container is deployed
-m, --manifest-file string Custom manifest file path (default "/usr/local/var/opt/omnissa/dux/ts_manifest.yml")
-n, --node-number stringArray Number of the node as listed in manifest
-y, --y Auto accept all prompts
Global Flags:
-h, --help Print help information
-v, --verbose Show verbose logs
$ dux extportscan
Are you sure you want to check for open ports on all nodes? (y/n): y
Checking open SSH connections to confirm SSH is configured securely
SSH is securely enabled only on the internal IP.
Scanning for open ports on the external IP
No open ports found.
Info: 65535 closed TCP ports detected.
For any installation issues, please refer to the package manager instructions (yum/dnf/brew) for the specific error.
For example, if you are using dnf and encounter issues installing dux, check whether the cache is updated. Try "dnf makecache" to update the metadata cache.
Run the dux init command the first time after installing or updating dux. This generates ts_manifest.yml and perf_tune.sh, both required for container deployment. If you've updated dux to the latest version, consider saving the old ts_manifest.yml and manually updating the new one to match your required configuration (a backup sketch is shown below).
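For example, a minimal sketch for backing up the existing manifest before re-running dux init (the path shown is the Linux default; adjust for your platform):
$ cp /opt/omnissa/dux/ts_manifest.yml /opt/omnissa/dux/ts_manifest.yml.bak
$ dux init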
Do a dry-run before deploying to ensure there are no issues with the manifest, and to ensure that the deployment pre-requisites are met.
dux deploy -d
The auto-accept (-y) option is disabled for SSH host key checking, ensuring the user can review the fingerprint before adding the host to known_hosts for the first time.
If dux fails to connect to the remote host with the error "error while creating client config: authentication failure: ssh: handshake failed: host key verification failed. auto accept cancelled for security", try running the command without the -y option. Alternatively, you can run the dry-run command dux deploy -d to bypass this issue.
For any issues, please check the dux.log file under the logs directory for the platform - e.g., /opt/omnissa/dux/logs/dux.log on Linux.
If Docker is installed via snap on the Linux system, you may encounter permission issues during deployment of Tunnel server containers.
Ubuntu commonly uses snap to install packages.
If Snap Docker is used, it is recommended to uninstall it and install Docker as described in https://docs.docker.com/engine/install/ubuntu/ .
sudo snap remove docker --purge
sudo reboot
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin
Note: Please check whether there are other containers running in your VM with Snap Docker. While technically possible, running both Snap Docker and Docker CE on the same Ubuntu system is generally not recommended due to the potential for conflicts and complexity with respect to port usage, networking, system resource usage, etc.
Please ensure the conditions in the Prerequisites section are met.
In Windows, if dux is installed under system directories like C:\Program Files, Windows PowerShell/Command Prompt should be run as administrator for dux commands to work. Please check for errors related to access issues.
To avoid errors while editing the ts_manifest.yml, please refer to the following guide:
1. Open the YAML File: open ts_manifest.yml in a text editor of your choice.
2. Understand YAML Syntax: each entry consists of a key, followed by a colon (:) and then the corresponding value, e.g. image_name: TunnelContainer_23.12.1.7.tar.gz
3. Make Changes: edit only the values you need. Do not remove the comment lines (starting with #) in the YAML file, as they provide context or explanations about specific entries.
4. Save Changes: save the file with Ctrl + S (Linux) or Cmd + S (Mac). Keep the .yml extension to maintain its YAML format.
5. Validate Changes (Optional): confirm the file is still valid YAML; the dry-run example shown below also validates the manifest syntax.
6. Backup (Optional but Recommended): keep a copy of the original ts_manifest.yml before making changes.
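For example, after editing, the manifest can be validated with the built-in dry-run described earlier in this guide (the -m path is the Linux default; adjust for your platform):
$ dux deploy -d -m /opt/omnissa/dux/ts_manifest.yml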
Sample ts_manifest.yml file for reference
# Version number for the Tunnel server deployment manifest. This is auto generated and should not be altered.
version: "2.3.0.405"
#Workspace ONE UEM Information
uem:
# The Workspace ONE UEM API server URL. The destination URL must contain the protocol and hostname or IP address
# Example: https://load-balancer.example.com
url: https://example-uem-api.com
# Omnissa Tunnel Configuration ID configured in the Workspace ONE UEM Console.
# This field is supported only if the UEM console supports multi-tunnel configuration feature (from UEM Console version 23.06 onwards).
# If left blank, default configuration from the specified organization group will be fetched.
tunnel_config_id:
# The organization group ID in Workspace ONE UEM Console where Tunnel is configured.
group_id: og1
admin:
# The username to authenticate with the Workspace ONE UEM API server.
username: uemuser1
#Tunnel Server Image Information
tunnel_server:
# Enter the filename of the image or the repo path to deploy below.
# File: This must match against the tunnel server image filename from the default directory (refer to the note below) or the absolute path.
# example: 29-2023.06.14-22e04910.tar.gz or /home/admin/29-2023.06.14-22e04910.tar.gz
# Note: The default directory where the images are recommended to be present is:
# - for linux: /opt/omnissa/dux/images
# - for Mac OS on Intel/AMD64: /usr/local/var/opt/omnissa/dux/images/
# - for Mac OS on Apple Silicon/ARM64: /opt/homebrew/var/opt/omnissa/dux/images/
# - for Windows: <path of dux installation directory>/images
# Repository: Repository path of the image with the tag can be given as well:
# For example: your-local-repo.com/23.12.1/tunnel-server:23.12.1.7-2024.04.02-95a22406
image_name: TunnelContainer_24.06.tar.gz
#Container Host(s) Authentication Information
# If all hosts have common authentication credentials, you may want to use the parameter - 'ssh_login_credentials'
# For authentication, provide the ssh user name and ssh key path below
# If all hosts use a different SSH port other than 22, uncomment the `ssh_port` parameter and enter the port number.
# If not provided, default value of 22 will be used.
ssh_login_credentials:
ssh_user: user1
# Input the path of ssh key - e.g /home/admin/id_rsa
ssh_key_path: /home/user1/.ssh/id_rsa
## Optional: Input the ssh port. Default value - 22
#ssh_port:
# SSH Host key check - verify the identity of the remote host
# By default this is enabled and the user will be prompted to confirm the fingerprint of the public key of the remote host.
# If disabled, dux will connect similar to the ssh option StrictHostKeyChecking=no and UserKnownHostsFile=/dev/null
# 1 - enable host key checking
# 0 - disable host key checking
ssh_host_key_check: 1
#Tunnel Server Host(s) Information
# Input docker host information for Tunnel server container deployment. Add an entry for each host.
hosts:
# Enter IP address of the host below
- address: 10.232.18.232
# The deployment role for the server.
# 0 - basic mode
# 1 - cascade mode - frontend
# 2 - cascade mode - backend
server_role: 0
#####################################################################
## THE FOLLOWING ARE OPTIONAL PARAMETERS FOR TUNNEL SERVER DEPLOYMENT.
## PLEASE EDIT THEM AS PER YOUR REQUIREMENTS
######################################################################
## For information specific to this host, uncomment 'host_info' and the parameters under it as needed.
## For authentication info specific to this host, uncomment the 'ssh_user' and 'ssh_key_path/ssh_password' as needed.
## If both 'ssh_key_path' and 'ssh_password' are provided, 'ssh_key_path' is preferred.
## The values can also be passed as environment variables.
## If all hosts have common ssh credential info, you may use the global parameter: 'ssh_login_credentials'
## If both 'host_info' and 'ssh_login_credentials' are given, the credentials under 'host_info' are preferred.
## SSH Port information
## If the host uses a different SSH port other than 22, uncomment the `ssh_port` parameter and enter the port number.
## If not provided, the default value of 22 will be used.
#host_info:
#ssh_user:
## Input the path of ssh key - e.g /home/admin/id_rsa
#ssh_key_path:
## For security reasons, the ssh_password is not recommended.
#ssh_password:
## Input the ssh port. Default value - 22
#ssh_port:
## Define Subnet range for Unique IP per device connection (Please note this feature is supported from Tunnel server version 24.10 onwards)
## If the deployment role for the server is basic/backend, please enter the CIDR for the IP range for devices corresponding to this tunnel server deployment
##
## Example: subnet_range: 192.168.4.0/23
subnet_range: 192.168.4.0/23
# If external NIC is configured in the host machine where tunnel server is deployed,
# please specify the details of external IP of NIC2
#multi_nic_external_ip:
- address: 10.232.18.233
server_role: 0
host_info:
ssh_user: user2
## Input the path of ssh key - e.g /home/admin/id_rsa
#ssh_key_path:
## For security reasons, the ssh_password is not recommended.
ssh_password: abc123
## Input the ssh port. Default value - 22
#ssh_port:
subnet_range: 192.168.8.0/23
# If external NIC is configured in the host machine where tunnel server is deployed,
# please specify the details of external IP of NIC2
#multi_nic_external_ip:
# Tune performance parameters/system configuration in remote host to support larger number of connections
# 1 - execute perf_tune.sh in the remote host
# 0 - do not modify system configuration in remote host
perf_tune: 1
# Add entries to the container hosts file to manually link FQDN to IP address
# Format:
## - host_name:
## ip_address:
host_entries: