# Appendix B: Container Deployment
# Kubernetes: Quickstart & Files
This section covers deployment of the Kloudless appliance on Kubernetes.
It covers more than the simple configuration needed for early testing, but does not necessarily cover all of the configuration options we would recommend for acceptance testing and production.
# Provisioning: Cluster Providers
In order to deploy on Kubernetes, you'll first need a cluster and kubectl
configured with credentials to manage it. Keep in mind the minimum
requirements for
hosting the Kloudless appliance.
For initial testing, local deployment on a minikube node or cluster may be easiest. This choice allows validating Kubernetes configuration with the exception of managed load balancers and their SSL/TLS termination.
For Google Kubernetes Engine (GKE), provisioning via the dashboard is likely easiest. Credentials for kubectl can be obtained with the gcloud utility as detailed by Google.
For Amazon Elastic Kubernetes Service (EKS), you can provision a cluster with eksctl or the AWS Management Console. Credentials can be obtained with eksctl utils write-kubeconfig --name [cluster-name].
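For GKE, for example, fetching credentials and verifying cluster access looks something like the following sketch; the cluster and zone names are placeholders for your own values:
gcloud container clusters get-credentials [cluster-name] --zone [zone]
kubectl get nodes
Running kubectl get nodes is a quick way to confirm the credentials work before proceeding.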
Other providers, including Microsoft's Azure Kubernetes Service (AKS), are also available; contact Kloudless Support for assistance.
# Provider-Specific Notes: Private Networking
We recommend routing Redis and database traffic on private subnets. EKS provisions private subnets for cluster networking by default, and optionally public subnets. However, GKE requires more configuration.
# Google Kubernetes Engine (GKE)
To configure private networking in GKE, first find the private IP addresses for your Postgres database and GCP Memorystore server. We'll clone and use k8s-custom-iptables to route traffic privately from the Kloudless appliance to these services, per the recommendation of Google Cloud's Redis/Memorystore docs.
Using iptables in a privileged container, it deploys NAT rules to route traffic for these private IP addresses.
Clone and deploy k8s-custom-iptables. Deploying the iptables rules will look something like the following, with the IP addresses for your services substituted in place.
TARGETS="10.2.3.4/32 10.5.6.7/32" ./install.sh
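A minimal end-to-end sketch follows; the repository location is an assumption (use the source referenced by the Memorystore documentation), and the IP addresses are placeholders for your own services:
git clone https://github.com/bowei/k8s-custom-iptables.git
cd k8s-custom-iptables
TARGETS="10.2.3.4/32 10.5.6.7/32" ./install.sh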
# Provider-Specific Notes: Ingress
Configuring traffic ingress routes is straightforward on GKE and EKS. Create a Service with type LoadBalancer, and ensure it has appropriate selector and port configuration. See the Service example section for a usable Service declaration.
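After applying the Service, a quick way to confirm that the provider has provisioned a load balancer is to watch for an external address; the Service name below matches the example later in this appendix:
kubectl get svc kloudless-api -w
Once the EXTERNAL-IP (or hostname, on EKS) is populated, point your DNS record at it.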
# Setup: Credentials, Configuration & External Services
# Credentials: Kloudless Private Docker Registry
To obtain a container image, visit our enterprise downloads page. With your registered email and Developer Meta Token, you can authorize your cluster to pull container images with the following command:
kubectl create secret docker-registry kloudless-registry-key \
--docker-server=docker.kloudless.com \
--docker-username=[developer-registered-email] \
--docker-password=[developer-meta-token]
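To confirm the pull secret exists in the cluster (the example Deployment later in this appendix references it via imagePullSecrets):
kubectl get secret kloudless-registry-key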
The latest release version of Kloudless Enterprise can be determined from the Release Notes, or by querying the repository directly for available versions with the following command.
curl -u [developer-email]:[meta-bearer-token] https://docker.kloudless.com/v2/{prod,k8s}/tags/list | jq .
# Configuration: Customization & Distribution
To generate an application configuration file for your deployment, see the Configuration Guide. To deploy the application with your configuration, store the file as a ConfigMap named "kloudless-cm", per the default deployment template; it will be mounted at the appropriate path, /data/kloudless.yml, in each container.
You can use ke_config_skel.sh to generate a configuration file template before instantiating it for your deployment.
# External Services: Postgres (& Redis)
Before attempting a permanent deployment, have an external Postgres database ready. Using an external database is recommended for almost all deployments, to simplify management of data and deployment of Kloudless appliance instances.
For clustering multiple deployed containers, we require a Redis server for sharing state. Instances will coordinate automatically once they connect to Redis.
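As a sketch, the corresponding sections of kloudless.yml look something like the following; the hostnames and password are placeholders, and the keys mirror those used in the Docker Compose example later in this appendix:
db:
  host: [postgres-private-ip-or-hostname]
  port: 5432
  name: 'kloudless'
  user: 'kloudless'
  password: '[db-password]'
redis: redis://[redis-private-ip-or-hostname]:6379/0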
# Resources Lifecycle: (ConfigMap, Service, Deployment)
Run kubectl api-resources and note, for your convenience, the short names for ConfigMap (cm), Service (svc), and Deployment (deploy).
Action | Command |
---|---|
Pod login | kubectl exec -it kloudless-[rs-id]-[pod-id] ke_shell |
Re-deploy after ConfigMap update | kubectl scale rs kloudless-[rs-id] --replicas=0 |
Logs from Pods | kubectl logs [-f] [pod [container]] |
Events (deployment and errors) | kubectl get events [-o [wide|yaml]] [-w] [--sort-by='.metadata.creationTimestamp'] |
# ConfigMap (cm)
Action | Command |
---|---|
Create | kubectl create configmap kloudless-cm --from-file=./kloudless.yml |
View | kubectl get cm kloudless-cm [-o yaml] or kubectl describe cm kloudless-cm (includes metadata) |
Edit* | kubectl edit cm kloudless-cm |
Delete | kubectl delete cm kloudless-cm |
*Edit: back up the live resource before editing it in place, or prefer editing local source files and applying them again, which overwrites the resource by name.
# Service (svc)
Action | Command |
---|---|
Create | kubectl apply -f ./service.yml |
View | kubectl get svc [kloudless] [-o yaml] or kubectl describe svc kloudless (includes metadata) |
Edit* | kubectl edit svc kloudless |
Delete | kubectl delete svc kloudless |
*Edit: back up the live resource before editing it in place, or prefer editing local source files and applying them again, which overwrites the resource by name.
# Deployment (deploy)
Deployments manage ReplicaSets (rs), which in turn manage Pods (pod), the unit of deployment that runs the Kloudless container.
Action | Command |
---|---|
Create | kubectl apply -f ./deployment.yml |
View | kubectl get deploy [kloudless] [-o yaml] or kubectl describe deploy kloudless (includes metadata) |
Edit* | kubectl edit deploy kloudless |
Delete | kubectl delete deploy kloudless |
*Edit: back up the live resource before editing it in place, or prefer editing local source files and applying them again, which overwrites the resource by name.
# Usage & Troubleshooting
# Opening a Shell
ke_shell starts a shell with environment variables initialized to administer the Kloudless appliance.
kubectl exec -it [kloudless-deployment-rsId-podId] ke_shell
# Troubleshooting
Check that the license key has been downloaded. It should be present at /data/license.key within the container. Also, see the Monitoring Initialization section.
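For example, you can check for the key from outside the container; the Pod name is a placeholder, per the Pod login command above:
kubectl exec kloudless-[rs-id]-[pod-id] -- ls -l /data/license.key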
# K8s ConfigMap: Setup & Example File
- Generate a configuration file template with the ke_config_skel.sh script, and fill in the necessary values. The script creates unique private keys to protect your data at rest. You have the only copy of the keys; you are responsible for keeping backups and keeping the keys secret.
./ke_config_skel.sh > my-kloudless.yml
- Create a ConfigMap in your Kubernetes cluster with the following command.
kubectl create cm kloudless-cm \
--from-file=kloudless.yml=my-kloudless.yml \
[-o yaml] [--dry-run]
- Ensure the ConfigMap contains the file data under the key "kloudless.yml" in order to match the provided Deployment file (a quick check is shown below).
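For example, the following prints the stored file so you can confirm both the key name and its contents; the backslash escapes the dot in the key name for jsonpath:
kubectl get cm kloudless-cm -o jsonpath='{.data.kloudless\.yml}' | head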
# Clarification & Pitfalls
Ultimately it matters that the ConfigMap data is available to the application at the path /data/kloudless.yml.
# ./deploy.yml - relevant slice of hierarchy
spec:
  template:
    spec:
      volumes:
        - name: kloudless-config-volume
          configMap:
            name: kloudless-cm
            items:
              - key: kloudless.yml
                path: kloudless.yml
      containers:
        - name: kenterprise
          image: docker.kloudless.com/k8s:1.29.x
          volumeMounts:
            - name: kloudless-config-volume
              mountPath: /data/kloudless.yml
              subPath: kloudless.yml
Role of each highlighted line:
- spec.template.spec.volumes[k].name: volume name exportable to containers
- spec.template.spec.volumes[k].configMap.name: name of the ConfigMap to draw on
- spec.template.spec.containers[i].volumeMounts[j].name: name of the volume, as provided by the pod, to mount into the container
- spec.template.spec.containers[i].volumeMounts[j].mountPath: path to mount the volume
- spec.template.spec.containers[i].volumeMounts[j].subPath: option to place a single file at the mount path instead of mounting over (and masking) the whole directory
# K8s Service: Setup & Example File
Configuring external access requires the following items to be set up appropriately.
- The kloudless.yml configuration file should be set with the external hostname of the appliance.
- A load balancer should be provisioned by the Kubernetes Service.
- A DNS CNAME or A record should point to the load balancer.
- Kubernetes labels should match across the following:
  - Service spec: spec.selector
  - Deployment spec: spec.template.metadata.labels, spec.selector.matchLabels, metadata.labels
For SSL/TLS, each provider supports terminating TLS on its own load balancers, via the LoadBalancer Service type or an Ingress. You could also terminate on the appliance itself, or on another Ingress, e.g. nginx. The Service, which provisions a load balancer, should be deployed before configuring DNS. You can update the Service without deleting it, but if you delete the Service your Kubernetes provider will likely provision a different load balancer, requiring you to update DNS.
Decide whether you'll terminate TLS on a load-balancer Service or Ingress in front of the appliance, or on the appliance itself. Then open the appropriate ports and configure the Kloudless appliance for your chosen SSL/TLS termination in kloudless.yml.
# ./k8s-service.yml - Example for External TLS Termination
apiVersion: v1
kind: Service
metadata:
  name: kloudless-api
  labels:
    app: kloudless
spec:
  type: LoadBalancer
  selector:
    app: kloudless
  ports:
    - name: api-http
      port: 443
      targetPort: 80
      protocol: TCP
    - name: dev-http
      port: 8443
      targetPort: 8080
      protocol: TCP
Pair this Service with the following section in the Kloudless appliance configuration file.
# ./kloudless.yml - TLS Configuration Section, for External TLS Termination
...
ssl:
  is_configured: true
  local: false
...
For pass-through, on-appliance TLS termination, use the following Service configuration instead.
# ./k8s-service.yml - Pass-Through, On-Appliance TLS Termination
apiVersion: v1
kind: Service
metadata:
  name: kloudless-api
  labels:
    app: kloudless
spec:
  type: LoadBalancer
  selector:
    app: kloudless
  ports:
    - name: api-http
      port: 80
      targetPort: 80
      protocol: TCP
    - name: dev-http
      port: 8080
      targetPort: 8080
      protocol: TCP
    - name: api-https
      port: 443
      targetPort: 443
      protocol: TCP
    - name: dev-https
      port: 8443
      targetPort: 8443
      protocol: TCP
# Labels
For more about Labels, see the Good Practices with Labels, Recommended Labels, and Introduction to Labels in the Kubernetes docs.
# K8s Deployment: Setup & Example File
The Deployment specifies the container to run, the configuration file to mount as a volume, and the label applied to Pods so that the Service can route traffic to them.
Note:
- For the containers[0].image version, see the enterprise downloads page for the latest available version.
- The containers[0].env section includes Resource Scaling Overrides. Both env containerName keys (under valueFrom.resourceFieldRef) must match the Kloudless container's containers[0].name.
- The container's port names need to match the Service port names. Choose either the http ports or the https ports to open, corresponding to whether a network load balancer is in use in front of the application. See the network services or TLS configuration sections for more information.
- The kloudless-config-volume mount mounts a single file. It does not mount (over) the whole directory, where generated config files are accessed.
# ./deploy.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kloudless-deployment
  labels:
    app: kloudless
spec:
  replicas: 2
  selector:
    matchLabels:
      app: kloudless
  template:
    metadata:
      labels:
        app: kloudless
    spec:
      imagePullSecrets:
        - name: kloudless-registry-key
      volumes:
        - name: kloudless-config-volume
          configMap:
            name: kloudless-cm
            items:
              - key: kloudless.yml
                path: kloudless.yml
        - name: run
          emptyDir:
            medium: "Memory"
            sizeLimit: "100Mi"
        - name: run-lock
          emptyDir:
            medium: "Memory"
            sizeLimit: "100Mi"
        - name: journal
          emptyDir:
            medium: "Memory"
            sizeLimit: "100Mi"
      containers:
        - name: kenterprise
          image: docker.kloudless.com/prod:1.32.x
          imagePullPolicy: Always
          args: ["--ulimit nofile=1024000:1024000 --sysctl net.ipv4.ip_local_port_range='1025 65535'"]
          command: ["/bin/entrypoint"]
          volumeMounts:
            - name: kloudless-config-volume
              mountPath: /data/kloudless.yml
              subPath: kloudless.yml
            - name: run
              mountPath: /run
            - name: run-lock
              mountPath: /run/lock
            - name: journal
              mountPath: /var/lib/journal
          resources:
            limits:
              memory: "16G"
            requests:
              memory: "4.0G"
          env:
            - name: SALT_NUM_CPUS
              valueFrom:
                resourceFieldRef:
                  containerName: kenterprise
                  resource: limits.cpu
            - name: SALT_MEM_TOTAL
              valueFrom:
                resourceFieldRef:
                  containerName: kenterprise
                  divisor: 1Mi
                  resource: limits.memory
          ports:
            - name: api-http
              containerPort: 80
            - name: dev-http
              containerPort: 8080
            - name: api-https
              containerPort: 443
            - name: dev-https
              containerPort: 8443
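A minimal sketch of applying the Deployment and watching it come up, using the resource name and label from the file above:
kubectl apply -f ./deploy.yml
kubectl rollout status deploy/kloudless-deployment
kubectl get pods -l app=kloudless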
# Resource Scaling Overrides
Kloudless scales internal process and memory usage based on available
resources. In some cases, the values reported as available by the operating
system may differ significantly from the resources allocated for use by the
container. To prevent issues that can be caused by inappropriate scaling, the
detected limits can be overridden with SALT_NUM_CPUS
and SALT_MEM_TOTAL
(in
Megabytes). This is included in the default k8s configuration provided above as
well.
For Kubernetes, we recommend setting these values to the container's limits for CPU and memory, as shown below and in the sample file above.
# ./deployment.yml
...
      containers:
        - name: kenterprise
          image: docker.kloudless.com/k8s:1.23.45
          env:
            - name: SALT_NUM_CPUS
              valueFrom:
                resourceFieldRef:
                  containerName: kenterprise
                  resource: limits.cpu
            - name: SALT_MEM_TOTAL
              valueFrom:
                resourceFieldRef:
                  containerName: kenterprise
                  divisor: 1Mi
                  resource: limits.memory
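To confirm the overrides are in effect, you can inspect the environment of a running container; the Pod name is a placeholder:
kubectl exec kloudless-[rs-id]-[pod-id] -- env | grep SALT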
# Docker: Quickstart
# Obtaining the Container
The following command can be used to fetch our Docker container image into your local image store so that you can get started:
docker pull docker.kloudless.com/[release]:[version]
per the enterprise downloads page, or list the available versions with:
curl -u [developer-email]:[meta-bearer-token] https://docker.kloudless.com/v2/prod/tags/list | jq .
Reach out to the Kloudless Support Team if you have any questions.
# Host Configuration
For improved network performance, the following kernel setting is recommended to handle large numbers of network connections (it should be persisted with the relevant configuration for your host operating system):
net.ipv4.tcp_tw_reuse=1
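For example, on most Linux hosts the setting can be applied immediately and persisted via a sysctl.d drop-in (the file name below is arbitrary):
sudo sysctl -w net.ipv4.tcp_tw_reuse=1
echo 'net.ipv4.tcp_tw_reuse=1' | sudo tee /etc/sysctl.d/99-kloudless.conf
sudo sysctl --system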
# Minimal Configuration
Generate a configuration file template with this script, and fill in the necessary values. The script creates unique private keys to protect your data at rest. You have the only copy of the keys. Keep backups and keep the keys secret.
./ke_config_skel.sh > kloudless.yml
Customize the resulting YAML file for your deployment. It will later be supplied to the container either as an environment variable or on a mounted volume.
Your license code can be found on the Enterprise License page.
# ./kloudless.yml - Example Instantiation
hostname: my-kloudless.company.com
license_code: XyzEnterpriseLicenseCodeXyz
...
# db: # for persisting data outside the instance [host,port,user,password,name]
# redis: # for clustering Kloudless instances
# Running the Container
The volume mounted to /data will be used as the "data disk" to store configuration, system logs, and the local database (if used).
# Linux
Once loaded, start a local Kloudless Enterprise instance with the following command:
docker run -d \
--name kenterprise \
--env KLOUDLESS_CONFIG="$(cat ./kloudless.yml)" \
--tmpfs /run --tmpfs /run/lock --tmpfs /var/lib/journal \
-p 80:80 -p 8080:8080 -p 443:443 -p 8443:8443 -p 22:22 \
--ulimit nofile=1024000:1024000 \
--sysctl net.ipv4.ip_local_port_range='1025 65535' \
--volume /host/dir/path:/data \
docker.kloudless.com/prod:1.32.x \
docker_entry
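To confirm the container started and to follow its startup output:
docker ps --filter name=kenterprise
docker logs -f kenterprise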
# Windows
On a Windows 2016 machine, a named volume must be used instead:
docker volume create --name ke-data
docker run -d \
--name kenterprise \
--env KLOUDLESS_CONFIG="$(cat ./kloudless.yml)" \
--tmpfs /run --tmpfs /run/lock --tmpfs /var/lib/journal \
-p 80:80 -p 8080:8080 -p 443:443 -p 8443:8443 -p 22:22 \
--ulimit nofile=1024000:1024000 \
--sysctl net.ipv4.ip_local_port_range='1025 65535' \
--volume ke-data:/data \
docker.kloudless.com/prod:1.32.x \
docker_entry
Be sure to enable sharing of the Drive under Shared Drives in Docker’s Settings. In addition, Hyper-V must be enabled on the host.
# Accessing the Administrative Console
Once the container has started, it is possible to enter a shell within the container with the following command:
docker exec -ti [container_id] ke_shell
From here it is possible to run the required management and configuration scripts.
# Interactive Reconfiguration
The commands above will start a standalone appliance that uses its own local database and broker, which is sufficient for demo or development purposes. To configure the license key, copy it into the data volume directory, via the shared directory if necessary. Once copied, run the following command to obtain a shell in the container's environment:
docker exec -ti [container_id] ke_shell
Once in the shell set the license key as follows:
sudo ke_manage_license_key set /data/lk-filename
After that finishes, visit http://localhost:8080 to see the developer portal.
# Considerations on Red Hat (RHEL)
When running the container on Red Hat-based systems, it is important to make sure that the underlying file system is configured correctly. According to the Red Hat documentation, XFS is the only file system that can be used as the base layer for the OverlayFS used by the Docker container's root file system. The XFS file system must be created with the -n ftype=1 option. Since this is still under heavy development by Red Hat, there are steps which must be taken to ensure that a sufficiently recent kernel is in use; see the Docker Issues section for more information about how to debug the errors you might see.
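A quick check of an existing file system, assuming Docker's storage lives on the mount point shown here as /var/lib/docker (substitute the actual mount point for your host):
xfs_info /var/lib/docker | grep ftype
The output should include ftype=1; a value of ftype=0 means the file system must be recreated before OverlayFS will work reliably.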
# Docker Compose: Example File
Here is an example of a Docker compose file:
# ./docker-compose.yml
version: '3.8'
services:
  redis:
    image: "redis:latest"
  db:
    image: "postgres:10"
    environment:
      - POSTGRES_PASSWORD
      - POSTGRES_USER=kloudless
      - POSTGRES_DB=kloudless
    restart: always
    volumes:
      - target: /var/lib/postgresql/data
        type: volume
        source: database
  kloudless:
    image: "docker.kloudless.com/prod:${KLOUDLESS_VERSION:-1.32.9}"
    command: "/usr/local/bin/docker_entry"
    depends_on:
      - redis
      - db
    ports:
      - "8080:8080"
      - "8443:8443"
      - "80:80"
      - "443:443"
    tmpfs:
      - /run
      - /run/lock
      - /var/lib/journal
    volumes:
      - type: volume
        target: /data
        source: data
    environment:
      SALT_NUM_CPUS: ${NUM_CPUS:-2}
      SALT_MEM_TOTAL: ${MEM_TOTAL:-7458}
      KLOUDLESS_CONFIG: |
        redis: redis://redis:6379/0
        db:
          host: db
          port: 5432
          name: 'kloudless'
          user: 'kloudless'
          password: ${POSTGRES_PASSWORD}
        hostname: ${KLOUDLESS_HOSTNAME}
        license_code: ${KLOUDLESS_LICENSE}
        ${KLOUDLESS_EXTRA_CONFIG}
volumes:
  data:
  database:
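A sketch of bringing the stack up with this file; the variable names come from the Compose file above, and the values shown are placeholders:
export POSTGRES_PASSWORD='[db-password]'
export KLOUDLESS_HOSTNAME='my-kloudless.company.com'
export KLOUDLESS_LICENSE='[enterprise-license-code]'
docker compose up -d
docker compose logs -f kloudless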
# Monitoring Initialization
By default, the container will send its logs to standard out, so they can be monitored with the standard Docker tooling. It is also possible to monitor the initial configuration process of the container from within the container via the following command:
sudo tail -f /var/log/journal/ke_init.service.log
Logs for the whole appliance can be monitored through /var/log/syslog.
If the license key is already configured, completion of initialization can also be checked by querying the /health endpoint of the container:
curl http://localhost/health
A successful response will look something like the following:
{"celery": {"status": "ok", "local": {"queues": {"celery": 0, "celery-bg": 0},
"tasks": {"bg": "ok", "fg": "ok"}}, "remote": {"queues": {"celery": 0,
"celery-bg": 0}, "tasks": {"bg": "ok", "fg": "ok"}}, "elapsed":
0.42250490188598633}, "api": {"status": "ok"}, "db": {"status": "ok", "elapsed":
0.0}}