Deploying the appliance

OVA (VMWare and VirtualBox)

Obtaining the OVA

The OVA files can be obtained by contacting the Kloudless Support Team.

Importing the OVA

During the initial import of the Virtual Appliance, it is important to configure an additional virtual disk, which will be used to store local configuration, logs, and the local database (if used). Please refer to the Minimum Specifications section for details on how much CPU, memory, and disk to allocate. It is important that the primary disk in the appliance not be modified; otherwise, the instance may fail to boot or upgrade properly.
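As a minimal sketch for VirtualBox (the VM name, storage controller name, and 50 GB disk size below are assumptions; adjust them for your environment), the import and data disk attachment might look like:

VBoxManage import kloudless-enterprise.ova
VBoxManage createmedium disk --filename kloudless-data.vdi --size 51200
VBoxManage storageattach "Kloudless Enterprise" --storagectl "SATA" \
    --port 1 --device 0 --type hdd --medium kloudless-data.vdi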

Network Configuration

The appliance requires DHCP in order to configure its network interfaces, DNS servers, etc. Please ensure that this is available before booting the appliance.

Firewall Rules

By default, the appliance does not have any firewall in place. Access to the appliance should be managed by an external firewall or by manually configuring iptables on the appliance (note that iptables rules will not be persisted across system upgrades by default). For information on which rules to apply, see the Network Services section.
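As an illustrative sketch only (the authoritative port list is in the Network Services section; the trusted subnet below is an assumption), iptables rules restricting access might look like:

# Allow established connections and loopback traffic.
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -i lo -j ACCEPT
# Allow SSH and the application ports from a trusted subnet only.
iptables -A INPUT -p tcp -m multiport --dports 22,80,443,8080,8443 -s 10.0.0.0/8 -j ACCEPT
# Drop all other inbound traffic.
iptables -P INPUT DROP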

Accessing the Administrative Console

You should be able to log in on the console using the default ubuntu user with the password ubuntu. Once logged in on the OVA console, you should immediately change the password of the ubuntu user using the passwd command. You should also disable password authentication for SSH by setting PasswordAuthentication to no in the /etc/ssh/sshd_config file and running sudo service ssh reload.
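For example, these hardening steps might look like the following (the sed invocation assumes the stock sshd_config layout):

passwd                                   # change the ubuntu user's password
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo service ssh reload                  # apply the SSH configuration change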

Amazon Web Services (AWS)

Creating an Instance from an AMI

Provide Kloudless Support with the AWS Account Number to share the Kloudless Enterprise AMI with. Once shared, you should be able to launch an instance of that AMI. When creating an instance, make sure to choose an instance type that fulfills at least the Minimum Hardware Requirements (e.g. t2.large). For all disks, we recommend using EBS with gp2 SSDs.

While creating the instance, attach a separate EBS drive to use as the “data disk” to persist data on. A 50 GB EBS drive is sufficient.

Networking

When deploying the Kloudless Appliance, it is important that it is reachable from wherever your application will be running. Security Groups should be used to isolate the Kloudless Appliance from sources that do not require access. The services exposed on the appliance are described in the Network Services section; take particular care that there is no unauthorized access to the Developer Portal and administrative consoles.

Accessing the Administrative Console

While creating the instance, an SSH key should have been configured. That key can be used to access the administrative console over SSH as follows:

ssh -i yourkey.pem ubuntu@instance_ip

Kubernetes

Obtaining the container

Kloudless Support can provide you with a link to our private Docker registry and the most current version of the application.
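Pulling the image will look something like the following; the registry hostname below is a placeholder, and the actual URL and credentials come from Kloudless Support:

docker login registry.example.com
docker pull registry.example.com/kloudless-enterprise:1.23.0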

Private networking

We recommend routing Redis and database traffic over private subnets.

Google Kubernetes Engine (GKE)

To configure private networking in GKE, first find the private IP addresses of your Postgres database and GCP Memorystore server. We'll clone and use k8s-custom-iptables to route traffic privately from the Kloudless appliance to these services, per the recommendation of Google Cloud's Redis/Memorystore docs.

The k8s-custom-iptables tool runs iptables in a privileged container and deploys NAT rules to route traffic for these private IPs.

Clone and deploy k8s-custom-iptables, then run its install script with the IP addresses for your services substituted in place, as shown below.
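A sketch of the full sequence, assuming the repository lives at GoogleCloudPlatform/k8s-custom-iptables on GitHub and using example IP addresses:

git clone https://github.com/GoogleCloudPlatform/k8s-custom-iptables.git
cd k8s-custom-iptables
TARGETS="10.2.3.4/32 10.5.6.7/32" ./install.sh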

Ingress

GKE

On GKE, a Service of type LoadBalancer exposes the appliance's API externally:

apiVersion: v1
kind: Service
metadata:
  ...
spec:
  type: LoadBalancer    # provisions an external GCP load balancer
  ports:
  - name: api-http
    port: 80            # port exposed on the load balancer
    targetPort: 80      # container port receiving the traffic
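As a usage sketch, assuming the manifest above is saved as service.yaml:

kubectl apply -f service.yaml
kubectl get service    # the EXTERNAL-IP column shows the load balancer address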

Configuration

To generate an application config file for your deployment, see the Configuration Guide. To deploy the application with your configuration, store the file as a ConfigMap named "kloudless-cm" (per the default deployment template), which will be mounted at the appropriate path, /data/kloudless.yml, in each container.

You can use ke_config_skel.sh to generate a config file template before instantiating it for your deployment.

This config file contains the only copies of your application's encryption keys for protecting data at rest. Back it up.

  • Create:
    • kubectl create configmap kloudless-cm --from-file=./kloudless.yml
  • View:
    • kubectl get cm kloudless-cm [-o yaml], or
    • kubectl describe cm kloudless-cm (includes metadata)
  • Edit (back up the ConfigMap first, or prefer editing the file locally and re-creating it; see the example after this list):
    • kubectl edit cm kloudless-cm
  • Delete:
    • kubectl delete cm kloudless-cm
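For example, a minimal backup-and-recreate flow before editing (the backup file name is an arbitrary choice):

kubectl get cm kloudless-cm -o yaml > kloudless-cm-backup.yaml
kubectl delete cm kloudless-cm
kubectl create configmap kloudless-cm --from-file=./kloudless.yml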

Docker

Obtaining the Container

The Kloudless Enterprise Docker container can be obtained by contacting the Kloudless Support Team.

Importing the Container

The container is packaged as an archive that can be imported into the Docker daemon with the following command. The image tag it loads is referenced later when running the container:

docker load -i kloudless-enterprise-1.23.0-1475696067.tar

Host Configuration

For improved network performance, the following kernel settings are recommended to handle large numbers of network connections (these should be persisted using the relevant configuration mechanism for your host operating system):

net.ipv4.tcp_tw_reuse=1
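For example, to apply the setting immediately and persist it across reboots (the file name under /etc/sysctl.d is an arbitrary choice):

sudo sysctl -w net.ipv4.tcp_tw_reuse=1
echo 'net.ipv4.tcp_tw_reuse=1' | sudo tee /etc/sysctl.d/99-kloudless.conf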

Running the Container

The following command will start the container with the relevant application ports exposed on the Docker host.

docker run --cap-add SYS_PTRACE -v data_dir:/data \
    -v optional_logging_dir:/data/log \
    --ulimit nofile=1024000:1024000 \
    --sysctl net.ipv4.ip_local_port_range='1025 65535' \
    -p 80:80 -p 8080:8080 -p 443:443 -p 8443:8443 -p 22:22 \
    -d kloudless-enterprise:1.23.0 docker_entry

The container uses Upstart to manage internal services and requires the SYS_PTRACE capability. The directory specified in data_dir will be used as the "data disk" to store configuration, system logs, and the local database (if used).

Only include the optional_logging_dir volume if it is outside of data_dir. Otherwise, the logs will be stored in the log directory on the configured data volume (/data/log within the container). In a multi-node deployment, the configuration files and license key can be shared between containers via a networked filesystem. If this configuration is used, it is important to allocate a unique log directory for each running container to avoid conflicts.

Configuration described in the next section can be applied when starting the container using the KLOUDLESS_CONFIG environment variable. The variable will be used to populate the local configuration in /data/kloudless.yml, which configures the external hostname, database, etc. For example:

KLOUDLESS_CONFIG='hostname: "api.example.com"' docker run ...

Extra Considerations

When running the container on Red Hat based systems, it is important to make sure that the underlying file system is configured correctly. According to the Red Hat documentation, XFS is the only file system that can be used as the base layer for the OverlayFS backing the Docker container's root file system. The XFS file system must be created with the -n ftype=1 option. Since this support is still under heavy development by Red Hat, steps must be taken to ensure that a sufficiently recent kernel is in use; see the Docker Issues section for more information about how to debug the errors you might see.
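For example, to verify that an existing file system was created with ftype=1, and to create a compliant one (the path and device name below are assumptions):

xfs_info /var/lib/docker | grep ftype    # look for ftype=1 in the output
mkfs.xfs -n ftype=1 /dev/sdX             # replace /dev/sdX with the target device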

Accessing the Administrative Console

Once the container has started, it is possible to enter a shell within the container with the following command:

docker exec -ti container_id ke_shell

From here it is possible to run the required management and configuration scripts.

Monitoring Initialization

The initial configuration process of the container can be monitored from within the container via the following command:

sudo tail -f /var/log{,.old}/upstart/ke_init.log

If the license key is already configured, completion of initialization can also be checked by querying the /health endpoint of the container:

curl http://localhost/health

A successful response will look something like the following:

{
  "celery": {
    "status": "ok",
    "local": {
      "queues": {"celery": 0, "celery-bg": 0},
      "tasks": {"bg": "ok", "fg": "ok"}
    },
    "remote": {
      "queues": {"celery": 0, "celery-bg": 0},
      "tasks": {"bg": "ok", "fg": "ok"}
    },
    "elapsed": 0.42250490188598633
  },
  "api": {"status": "ok"},
  "db": {"status": "ok", "elapsed": 0.0}
}