1. Introduction

The CaaS platform is the link between FirstSpirit and the customer’s end application. The REST Interface receives information from FirstSpirit and updates it in the internal persistence layer of the CaaS platform. The customer’s end application updates its data by sending requests to the REST Interface.

The CaaS platform includes the following components, which are available as docker containers:

REST Interface (caas-rest-api)

The REST Interface is used both for transferring and retrieving data to and from the repository. For this purpose it provides a REST endpoint that can be used by the CaaS Admin Interface, the FirstSpirit Server or other services.

Security Proxy (caas-rest-api-security)

Connections to the REST Interface are tunnelled through the Security Proxy for authentication and authorization.

In CaaS version 2.10 and earlier, this functionality was part of the REST Interface.

CaaS repository (caas-mongo)

The CaaS repository is not accessible from the Internet and can be only accessed within the platform via the REST Interface. It serves as a storage for all project data and internal configuration.

CaaS Admin Interface (caas-admin-webapp)

The CaaS Admin Interface enables the management of the information transferred to CaaS and provides a simple, web-based administration interface. To do this, it communicates with the repository via the REST Interface and is accessible via HTTP(S).

2. Technical requirements

The CaaS platform must be operated with Kubernetes.

If you are not in a position to operate, configure, and monitor the cluster infrastructure and to analyze and resolve its operating problems, we strongly advise against on-premises operation and refer you to our SaaS offering.

Since the CaaS platform is delivered as a Helm artifact, the Helm client must be available.

It is important that Helm is installed in a secure manner. For more information, refer to the Helm Installation Guide.
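
A quick way to check, for example, that the Helm client is available and which version is installed:

checking the Helm client
helm version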

For system requirements, please consult the technical data sheet of the CaaS platform.

3. Installation and configuration

The setup of the CaaS platform for operation with Kubernetes is done using Helm charts. These are part of the delivery and already contain all necessary components.

The following subchapters describe the necessary installation and configuration steps.

3.1. Import of the images

The first step in setting up the CaaS platform for operation with Kubernetes is the import of the images into your central Docker registry (e.g. Artifactory). The images are contained in the file caas-docker-images-2.11.34.zip in the delivery.

The credentials for cluster access to the repository must be known.

The steps necessary for the import can be found in the documentation of the registry you are using.
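
As a rough sketch, assuming the zip file contains the images as Docker archives and that docker.company.com/e-spirit is your target registry (the archive and image names below are placeholders), the import could look like this:

sample import of an image (sketch)
unzip caas-docker-images-2.11.34.zip
docker load -i caas-rest-api.tar
docker tag caas-rest-api:2.11.34 docker.company.com/e-spirit/caas-rest-api:2.11.34
docker push docker.company.com/e-spirit/caas-rest-api:2.11.34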

3.2. Configuration of the Helm-Chart

After the import of the images, the Helm chart must be configured. The chart is part of the delivery and contained in the file caas-2.11.34.tgz. A default configuration of the chart is already provided in the values.yaml file. All parameters specified in this values.yaml can be overridden with specific values in a manually created custom-values.yaml file.

3.2.1. Authentication

All authentication settings for communication with or within the CaaS platform are specified in the credentials block of the custom-values.yaml. Here you will find user names and default passwords as well as the CaaS Master API Key. It is strongly recommended to adjust the default passwords and the CaaS Master API Key.

All selected passwords must be alphanumeric. Otherwise, problems will occur in connection with CaaS.

The CaaS Master API Key is automatically created during the installation of CaaS and thus allows the direct use of the REST Interface.
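
The following sketch merely illustrates how such values could be overridden in a custom-values.yaml; the key names shown here are placeholders, the authoritative names are those defined in the values.yaml of your delivery:

sample credentials override in a custom-values.yaml (sketch)
credentials:
   # placeholder key names - consult the delivered values.yaml for the actual schema
   adminPassword: "NewAlphanumericPassword123"
   masterApiKey: "<new UUID>"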

3.2.2. CaaS repository (caas-mongo)

The configuration of the repository includes two parameters:

storageClass

Of the parameters that can be overridden from the values.yaml file, mongo.persistentVolume.storageClass is the most relevant one.

For performance reasons, we recommend that the underlying MongoDB filesystem is provisioned with XFS.

clusterKey

A default value for the authentication key of the Mongo cluster is included in the delivery. The key can be defined in the parameter credentials.clusterKey. For productive operation, it is strongly recommended to create a new key with the following command:

openssl rand -base64 756

This value may only be changed during the initial installation. If it is changed at a later time, this can lead to permanent unavailability of the database, which can only be repaired manually.
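
A minimal sketch of how these two parameters might be set in a custom-values.yaml; the storage class name is a placeholder for a class available in your cluster, and the clusterKey value stands for the output of the openssl command above:

sample repository configuration in a custom-values.yaml (sketch)
mongo:
   persistentVolume:
      storageClass: "fast-xfs"

credentials:
   clusterKey: "<output of openssl rand -base64 756>"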

3.2.3. Docker Registry

To configure the Docker registry to be used, the parameters imageRegistry and imageCredentials must be adjusted.

sample configuration in a custom-values.yaml
imageRegistry: docker.company.com/e-spirit

imageCredentials:
   username: "username"
   password: "special_password"
   registry: docker.company.com
   enabled: true

3.2.4. Ingress Configurations

Ingress definitions control the incoming traffic to the respective component and are not created by default. The parameters adminWebapp.ingress.enabled and restApi.ingress.enabled enable the Ingress configuration for the CaaS Admin Interface and the REST Interface, respectively.

The Ingress definitions of the Helm chart assume that the NGINX Ingress Controller is used, since they rely on annotations specific to this implementation. If you are using a different implementation, you must adapt the annotations of the Ingress definitions in your custom-values.yaml file accordingly.

ingress creation in a custom-values.yaml
adminWebapp:
   ingress:
      enabled: true
      hosts:
         - caas-webapp.company.com

restApi:
   ingress:
      enabled: true
      hosts:
         - caas.company.com

If these configuration options are not sufficient for your specific use case, the Ingress can also be created independently. In this case, the corresponding parameter must be set to enabled: false. The following code example serves as a starting point for such a definition.

Ingress definition for the REST Interface
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
   name: caas
spec:
   rules:
      - host: caas-rest-api.mydomain.com
        http:
           paths:
              - backend:
                   serviceName: caas-rest-api
                   servicePort: 80

3.2.5. Preview CaaS

The Preview CaaS is used for the preview of unreleased content while avoiding mixing with the release state. If the functionality of a Preview CaaS is desired, it can be realized in two ways:

  • a separate instance of the CaaS platform is set up, which is operated independently of the instance for released content

  • for the CaaS platform the Preview CaaS Ingress is activated in the Helm charts

In the following, this chapter deals with the implementation using Preview CaaS Ingress.

Configuration

The Helm chart allows the configuration of an additional Ingress definition to map the functionality of the Preview CaaS without having to deploy a second CaaS platform. For this purpose, the same parameters are configured as are available for the REST Interface when defining the Ingress:

restApi:
  ingressPreview:
    enabled: false
    # you can activate usage of the cert-manager for this ingress to automatically issue SSL certificates see below (certManager:) for more details
    certManager:
      enabled: false
    #if you want to manage tls on your own, you can add the relevant data here. See https://kubernetes.io/docs/concepts/services-networking/ingress/#tls for more info.
    #tls:
    #  - secretName: caas-rest-api-preview-tls
    #    hosts:
    #      - caas-preview
    hosts:
      - caas-preview
    annotations:
      kubernetes.io/ingress.class: "nginx"
      nginx.ingress.kubernetes.io/rewrite-target: /
      nginx.ingress.kubernetes.io/proxy-body-size: "100m"
      #We're setting cors defaults as described in https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#enable-cors here.
      #If you want to customize cors settings, please take a look at the link and add your custom entries here.
      nginx.ingress.kubernetes.io/enable-cors: "true"
      #nginx.org/hsts: false
      #ingress.kubernetes.io/ssl-redirect: false
    configurationSnippet: |-
      set $caas_protocol 'https';
      if ($https = "") {
        set $caas_protocol 'http';
      }
      if ($request_uri ~ ^/([^/]+)(/.*)?$) {
        return 308 $caas_protocol://{{ index .Values.restApi.ingress.hosts 0 -}}/$1Preview$2;
      }
      return 308 $caas_protocol://{{ index .Values.restApi.ingress.hosts 0 -}}/;

The feature is activated with the parameter ingressPreview.enabled. In addition, the parameter ingressPreview.hosts specifies the desired URL at which the Preview CaaS can be reached.

If SSL is already terminated before the Preview CaaS Ingress, the parameter configurationSnippet may need to be adjusted to your network configuration.

Functionality

The Preview CaaS Ingress redirects all requests addressed to the Preview CaaS to the regular CaaS platform via HTTP redirect (response code 308) and simultaneously performs URL rewriting at the CaaS database level. In these cases, the database name is supplemented by the suffix Preview.

Please make sure that your HTTP client supports and enables the settings follow redirects and redirect authentication. Otherwise, HTTP redirects will not be performed or will be performed without authorization headers.

Example: Requests from a FirstSpirit project with the name MyProject, which run over the Preview CaaS Ingress, are therefore redirected to the CaaS database MyProjectPreview.

All other requests from this project, which are served by the regular REST Interface Ingress, are not affected. This prevents conflicts between released and unreleased content from the same project.

Data conflicts between FirstSpirit projects may occur if a project name ends in Preview and a second project with activated Preview CaaS has the same name as the first project, but without the Preview suffix.

Example:

  • Project 1: MyProjectPreview

    • The name of the CaaS database becomes MyProjectPreview for released content (conflict)

  • Project 2: MyProject

    • The name of the CaaS database becomes MyProject for released content

    • The name of the CaaS database becomes MyProjectPreview for unreleased content (conflict)

3.3. Installation of the Helm-Chart

After its configuration, the Helm chart has to be installed into the Kubernetes cluster. The installation is done with the following commands, which must be executed in the directory of the Helm chart.

Installation of the chart
kubectl create namespace caas
helm install RELEASE_NAME . --namespace=caas --values /path/to/custom-values.yaml

The name of the release can be chosen freely.

If the namespace is to have a different name, you must replace the specifications within the commands accordingly.

If an already existing namespace is to be used, the creation is omitted and the desired namespace must be specified within the installation command.

Since the container images first have to be downloaded from the configured image registry, the installation can take several minutes. However, the CaaS platform should ideally be operational within five minutes.

The status of each component can be obtained with the following command:

kubectl get pods --namespace=caas

Once all components have the status Running, the installation is complete.

NAME                                 READY     STATUS        RESTARTS   AGE
caas-admin-webapp-1055845989-0s4pg   1/1       Running       0          5m
caas-mongo-0                         2/2       Running       0          4m
caas-mongo-1                         2/2       Running       0          3m
caas-mongo-2                         2/2       Running       0          1m
caas-rest-api-1851714254-13cvn       2/2       Running       0          5m
caas-rest-api-1851714254-xs6c0       2/2       Running       0          4m

The Security Proxy and REST Interface components run in the same pod.

3.4. TLS

The communication of the CaaS platform to the outside world is not encrypted by default. If it is to be protected by TLS, there are two configuration options:

Using an officially signed certificate

To use an officially signed certificate, a TLS secret is required, which must be generated first. It must contain the key tls.key and the certificate tls.crt.

The steps necessary to generate the TLS secret are described in the Kubernetes Ingress Documentation.
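
A TLS secret of this kind can be created from an existing key and certificate file with kubectl; the secret name caas-tls used here is an arbitrary placeholder:

creation of a TLS secret (sketch)
kubectl create secret tls caas-tls --key tls.key --cert tls.crt --namespace=caas

Assuming the regular Ingress accepts the same tls block as shown for the Preview Ingress in chapter 3.2.5, the secret could then be referenced in the custom-values.yaml:

referencing the TLS secret in a custom-values.yaml (sketch)
restApi:
   ingress:
      enabled: true
      tls:
         - secretName: caas-tls
           hosts:
              - caas.company.com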

Automated certificate management

As an alternative to using an officially signed certificate, certificate administration can be automated using the Cert-Manager. It must be installed within the cluster and takes over the generation, distribution, and renewal of all required certificates. The configuration of the Cert-Manager allows, for example, the use and automatic renewal of Let’s Encrypt certificates.

The necessary steps for installation are explained in the Cert-Manager-Documentation.

3.5. Scaling

In order to be able to quickly process the information transferred to CaaS, the CaaS platform must ensure optimal load distribution at all times. For this reason, the REST Interface and the Mongo database are scalable and already configured to deploy at least three instances at a time for failover. This minimum number of instances is mandatory, especially for the Mongo cluster.

REST Interface

The REST Interface is scaled with the help of a Horizontal Pod Autoscaler. It is activated and configured in the custom-values.yaml file, which overrides the default values defined in the values.yaml file.

The Security Proxy is operated in the same pod as the REST Interface (so-called "sidecar"), so that scaling the REST Interface also scales the Security Proxy at the same time.

default configuration of the REST Interface
restApi:
  horizontalPodAutoscaler:
    enabled: false
    minReplicas: 3
    maxReplicas: 9
    targetCPUUtilizationPercentage: 50

The Horizontal Pod Autoscaler allows the REST Interface to be scaled up or down depending on the current CPU load. The parameter targetCPUUtilizationPercentage specifies the percentage value above which scaling takes place. The parameters minReplicas and maxReplicas define the minimum and maximum number of possible REST Interface instances.

The threshold value for the CPU load should be chosen with care:
If too low a percentage is selected, the REST Interface scales up too early in the case of increasing load. If too high a percentage is selected, the REST Interface will not scale fast enough as the load increases.

A wrong configuration can therefore endanger the stability of the system.

The official Kubernetes Horizontal Pod Autoscaler documentation and the examples it contains provide further information on the use of a Horizontal Pod Autoscaler.
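
To activate the Horizontal Pod Autoscaler, for example, an entry like the following in the custom-values.yaml is sufficient; the values shown merely repeat the defaults and should be adjusted to your load profile:

activation of the Horizontal Pod Autoscaler in a custom-values.yaml
restApi:
  horizontalPodAutoscaler:
    enabled: true
    minReplicas: 3
    maxReplicas: 9
    targetCPUUtilizationPercentage: 50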

Mongo database

Unlike the REST Interface, the Mongo database can only be scaled manually; it cannot be scaled automatically using a Horizontal Pod Autoscaler.

Scaling the Mongo database is done using the replicas parameter. This parameter must be entered in the custom-values.yaml file to override the default value defined in the values.yaml file.

At least three instances are required to run the Mongo cluster; otherwise, no Primary node is available and the database is not writable. If the number of available instances falls below 50% of the configured instances, no new Primary node can be elected. A Primary node, however, is essential for the functionality of the REST Interface.

The chapter Consider Fault Tolerance of the MongoDB documentation describes how many nodes may fail before the election of a new Primary node becomes impossible. The information contained in the documentation must be taken into account when scaling the installation.

Further information on scaling and replicating the Mongo database is available in the chapters Replica Set Deployment Architectures and Replica Set Elections.

definition of the replica parameter
mongo:
  replicas: 3

Scaling down the Mongo database is not possible without direct intervention and requires a manual reduction of the Mongo database’s replica set. The MongoDB documentation describes the necessary steps.

Such intervention increases the risk of failure and is therefore not recommended.

Applying the configuration

After changing the configuration of the REST Interface or the Mongo database, the updated custom-values.yaml file must be applied with the following command.

upgrade command
helm upgrade -i RELEASE_NAME path/to/caas-<VERSIONNUMBER>.tgz --values /path/to/custom-values.yaml

The release name can be determined with the command helm list --all-namespaces.

3.6. Monitoring

The CaaS platform is based on a microservice architecture and therefore consists of different components. In order to be able to monitor their status properly at any time and to react quickly in the event of an error, integration into a cluster-wide monitoring system is essential for operation with Kubernetes.

The CaaS platform is already preconfigured for monitoring with Prometheus Operator, since this scenario is widely used in the Kubernetes environment. It includes Prometheus ServiceMonitors for collecting metrics, Prometheus Alerts for notification in case of problems and predefined Grafana dashboards for visualizing the metrics.

3.6.1. Requirements

It is essential to set up monitoring and log persistence for the Kubernetes cluster. Without these prerequisites, there are hardly any analysis possibilities in case of a failure and Technical Support lacks important information.

Metrics

To install the Prometheus Operator, please use the official Helm chart so that cluster monitoring can be set up based on it. For further information, please refer to the corresponding documentation.

If you are not running a Prometheus Operator, you must turn off the Prometheus ServiceMonitors and Prometheus Alerts.

Logging

With the use of Kubernetes, it is possible to provide various containers or services in an automated and scalable way. To ensure that the logs in such a dynamic environment remain available even after an instance has been terminated, an infrastructure must be integrated that persists the logs beforehand.

We therefore recommend the use of a central logging system, such as the Elastic Stack. The Elastic or ELK Stack is a collection of open-source projects that help to persist, search, and analyze log data in real time.

Here too, you can use an existing Helm-Chart for the installation.

3.6.2. Prometheus ServiceMonitors

The deployment of the ServiceMonitors provided by the CaaS platform for the REST Interface and the Mongo database is controlled via the custom-values.yaml file of the Helm chart.

Access to the metrics of the REST Interface is secured by HTTP Basic Auth, and access to the metrics of the MongoDB by a corresponding MongoDB user. The respective access data is contained in the credentials block of the values.yaml file of the Helm chart.

Please adjust the credentials in your custom-values.yaml file for security reasons.

Typically, Prometheus is configured to consider only ServiceMonitors with specific labels. The labels can therefore be configured in the custom-values.yaml file and are valid for all ServiceMonitors of the CaaS Helm chart. Furthermore, the parameter scrapeInterval allows a definition of the frequency with which the respective metrics are retrieved.

monitoring:
  prometheus:
    # Prometheus service monitors will be created for enabled metrics. Each Prometheus
    # instance has a configured serviceMonitorSelector property, to be able to control
    # the set of matching service monitors. To allow defining matching labels for CaaS
    # service monitors, the labels can be configured below and will be added to each
    # generated service monitor instance.
    metrics:
      serviceMonitorLabels:
        release: "prometheus-operator"
      mongo:
        enabled: true
        scrapeInterval: "30s"
      caas:
        enabled: true
        scrapeInterval: "30s"

The MongoDB metrics are provided via a sidecar container and retrieved with the help of a separate database user. You can configure the database user in the credentials block of the custom-values.yaml. The sidecar container is delivered with the following default configuration:

mongo:
  metrics:
    image: mongodb-exporter:0.6.1
    port: 9216
    path: "/metrics"
    socketTimeout: 3s
    syncTimeout: 1m

3.6.3. Prometheus Alerts

The deployment of the alerts provided by the CaaS platform is controlled via the custom-values.yaml file of the Helm chart.

Prometheus is typically configured to consider only alerts with specific labels. The labels can therefore be configured in the custom-values.yaml file and apply to all alerts in the CaaS Helm chart:

monitoring:
  prometheus:
    alerts:
      prometheusRuleLabels:
        app: "prometheus-operator"
        release: "prometheus-operator"
      caas:
        enabled: true

3.6.4. Grafana Dashboards

The deployment of the Grafana dashboards provided by the CaaS platform is controlled via the custom-values.yaml file of the Helm chart.

Typically, the Grafana Sidecar Container is configured to consider only configmaps with specific labels and in a defined namespace. The labels of the configmap and the namespace in which it is deployed can therefore be configured in the custom-values.yaml file:

monitoring:
  grafana:
    dashboards:
      enabled: true
      configmapNamespace: ""
      configMapLabels: {}

4. Development Environment

While Kubernetes forms the basis of all productive installations, Docker or Minikube can also be used for development scenarios.

The operation of Minikube on Windows is currently marked as experimental and not stable in every situation.

This chapter describes the implementation with Docker.

For the local development environment, the CaaS delivery contains the following two zip files:

  • caas-docker-images-2.11.34.zip

  • caas-docker-configuration-2.11.34.zip

These include the required Docker images and the required configuration of the Docker Compose stack.

First import the Docker images into your local Docker registry. Depending on your operating system, run either the install.cmd or install.sh script to do this. The scripts are contained in the zip file for the Docker images.

After performing the configurations described in the following paragraphs, the Docker Compose configuration can be checked with the docker-compose config command and started with docker-compose up.

Additional parameters for controlling the CaaS platform via Docker Compose can be found in the Docker Compose documentation.

Resource Limits

If you want to adjust the resource limits stored in the docker-compose.yml file, please note that the values of the Java options Xms and Xmx must be lower than the values of the parameters mem_reservation and mem_limit. How much lower the values should be set depends on the concrete load scenario. A difference of at least 76 MiB is recommended for all components.
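
As an orientation, the relationship between the limits and the Java options could look like the following excerpt; the service name and the environment variable used to pass the Java options are assumptions and must be matched against the delivered docker-compose.yml:

excerpt of a docker-compose.yml (sketch)
services:
  caas-rest-api:                       # service name is an assumption
    mem_reservation: 512m
    mem_limit: 512m
    environment:
      # Xms/Xmx must stay below mem_reservation/mem_limit (here: 512 MiB - 76 MiB = 436 MiB)
      JAVA_OPTS: "-Xms436m -Xmx436m"   # variable name is an assumption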

Authentication

The Configuration directory included in the delivery contains the configuration file caas-docker.env. It contains all security-relevant authentication data that is shared across the containers.

All passwords selected in this file must be alphanumeric. Otherwise, problems will occur in connection with the REST Interface.

CaaS Admin Interface

The CaaS Admin Interface has the configuration file env.js, which is contained in the directory Configuration of the delivery. The file contains various parameters that make up the URL of the REST Interface:

configuration file env.js
(function (window) {
  window.__caas_config = window.__caas_config || {};
  window.__caas_config.host = 'http://CHANGE_ME';
  window.__caas_config.port = '8080';
  window.__caas_config.path = '/';
}(this));

It is mandatory to adjust the parameter window.__caas_config.host, which defines the hostname at which the REST Interface can be reached. Please note that the entered host must be accessible for all users of the CaaS Admin Interface. For local development, localhost should therefore be used.

5. REST Interface

5.1. Storage of the content

Using the REST Interface, all content can be managed via HTTP. It is stored in CaaS in so-called collections, which are subordinate to databases. The following three-part URL scheme applies:

http://Servername:Port/Database/Collection/Document

Binary content (media) is an exception in that it is stored in so-called buckets. The associated collections always end with the suffix .files.

http://Servername:Port/Database/MediaCollection.files/Media

5.2. Authentication

Each request to the REST Interface must contain an HTTP header in the form Authorization: apikey="<UUID>". The value of apikey is expected to be a UUID known to the CaaS platform as an API Key.

The API Keys control access rights to projects and can be defined in the CaaS Admin Interface. A request that has no header or an incorrect header will therefore be rejected.
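
A request with the required header could, for example, look like this with curl; the host, database, collection, and the API Key itself are placeholders:

sample request with API Key (sketch)
curl -H 'Authorization: apikey="<UUID>"' "http://Servername:Port/Database/Collection"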

5.3. HAL format

The interface returns all results in HAL format. This means that they are not simply raw data, as would be the case with traditional, unstructured JSON content.

The HAL format offers the advantage of simple but powerful structuring. In addition to the required content, the results contain additional meta-information on the structure of this content.

Example

{ "_size": 5,
   "_total_pages": 1,
   "_returned": 3,
   "_embedded": { CONTENT }
}

In this example a filtered query was sent. Without knowing the exact content, its structure can be read directly from the meta information. At this point, the REST Interface returns three results from a set of five documents corresponding to the filter criteria and displays them on a single page.

If the requested element is a medium, the URL only returns its metadata. The HAL format contains corresponding links that refer to the URL with the actual binary content of the medium. For further information please refer to the documentation.

5.4. Use of filters

Filters are always used when documents are not to be determined by their ID but by their content. In this way, both single and multiple documents can be retrieved.

For example, the query of all English language documents from the products collection has the following structure:

http://Servername:Port/Database/products?filter={fs_language: "EN"}

Beyond this example, further filter options exist. For more information, see the query documentation.
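
Since the filter is passed as a URL parameter, it must be URL-encoded when sent from a client. With curl, for example, this can be delegated to the --data-urlencode option (host, database, and API Key are again placeholders):

sample filter query (sketch)
curl -G -H 'Authorization: apikey="<UUID>"' \
  --data-urlencode 'filter={fs_language: "EN"}' \
  "http://Servername:Port/Database/products"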

6. CaaS Admin Interface

The CaaS Admin Interface is used for the administration of the transferred content and offers a simple, web-based administration interface for this purpose. It is divided into the areas Projects, Browser and API Keys (see figure CaaS Admin Interface).

The first time the application is launched, the user must authenticate himself with the access data selected during installation. After a valid entry, the user is automatically redirected to the administration interface. A renewed authentication is only necessary after an explicit logoff or after the end of a session.

Figure 1. CaaS Admin Interface

6.1. Projects

Within the area Projects a list of all identified projects is displayed (see figure projects). Next to the project name there is a red button for each entry in the list, with which the user can delete the respective project. The same button is also displayed before each collection. The overview of the collections of the project appears when clicking on the project name. In addition to the name of the collection, each entry contains information about the number of documents it contains.

Neither a deleted project nor a deleted collection can be restored via the CaaS Admin Interface. The user must therefore confirm the deletion beforehand.

Figure 2. projects

6.2. Browser

The browser provides an overview of all projects transferred to the CaaS platform, their collections and the documents contained in them. It also allows direct requests to the REST Interface.

Figure 3. overview of the browser

After clicking on a project, a list of the associated collections appears. After selecting a collection, a table of documents appears (see the figure overview of the browser). The table, sorted alphabetically by DOCUMENT ID, shows only some metadata of each document, namely the DOCUMENT ID, the REVISION ID, the LAST DOCUMENT CHANGES, the FIRSTSPIRIT OBJECT TYPE, and the GENERATION DATE.

The complete document (the JSON) becomes visible when clicking on a line.

Figure 4. JSON of a document

If a collection contains a large number of documents, the results are split over several pages.

6.2.1. Sending own queries

You can create your own query directly using the Query text field.

Figure 5. query field

The structure of the query is as follows:

Projectname/Collection?filter={key: "value"}

A query which lists all English documents of a collection content within the project MithrasEnergy would look like this:

MithrasEnergy/content?filter={fs_language: "EN"}

This example query can also be output via the Show Example link in the CaaS Admin Interface.

Figure 6. Selected project with collection

If a custom query is desired on the currently selected collection, the current path can be taken over using the Apply XY to query link (see figure query field). You then only need to add the appropriate parameters.

An overview of the possible filters can be found on the following page: https://restheart.org/learn/query-documents/#filtering

Parameters for paging are automatically attached - even when you make your own queries. Attaching them yourself will result in an incorrect query to the REST Interface.

6.3. API Keys

For each project, API Keys can be defined, which allow access to the project via the REST Interface. By default, an API Key has no rights. These must therefore first be set accordingly to ensure access (see figure API Keys).

API Keys are cached in the system. Changes to the rights of existing API Keys may therefore only take effect after a short delay.

Figure 7. API Keys

The API Keys are listed in a four-column table:

API Key

In addition to the technical key (a UUID), the column contains the name assigned during creation and an optional description. Both the name and the description can be changed later by inline editing. The key is used to grant access to the selected projects. In addition, a red button is displayed in front of each key, which allows the deletion of the respective key.

PROJECT

For each API Key, this column contains a list of all projects for which it has rights. Further projects can be added with the + Add Project button below such a list. In contrast, the red button in front of each project name removes the respective project from the selection and thus withdraws the API Key’s access.

READ / WRITE

The two columns indicate whether the API Key has read or write access to the respective project. Using the sliders, the rights can be set or removed independently.

These and other functions are described in more detail in the following subchapters.

6.3.1. Filter the displayed API Keys

With a high number of API Keys and projects, not all API Keys or projects may be of interest, which unnecessarily clutters the overview. To filter the API Keys overview, there are therefore two input components above the list: Filter Projects and Show unused API Keys.

When a project is selected in Filter Projects, only API Keys with rights to that project are displayed in the overview. The other API Keys are hidden.

By default, all entries in Filter Projects are selected, so the overview will show all API Keys that have rights on at least one project. This also means that API Keys that do not have rights on any project are hidden by default. For this reason there is a button Show unused API Keys, which can be used to show or hide these keys.

With the entry All Projects you can display the API Keys that have access rights to all projects, rather than only to individual projects. The entry explicitly does not correspond to displaying all projects.

6.3.2. Create a new API Key

A special dialog exists for creating a new API Key (see figure dialog for adding a new API Key). This dialog is opened by clicking the + Add API Key button.

Figure 8. dialog for adding a new API Key

Within the dialog, a name for the API Key to be created must be defined first. In addition, an optional description can be added. The key can be assigned to all projects or to individual projects by simply clicking on them. The selected elements are highlighted in green and can be deselected with a further click. If the process is to be aborted and all entries discarded, this can be done with the Cancel button. Otherwise, the Save button saves the selection made and generates a new API Key.

Another dialog informs the user about the successful creation of the new API Key, which is automatically sorted into the overview (see figure confirmation dialog).

Figure 9. confirmation dialog

API Keys without project assignment are hidden in the overview list by default. If no selection was made for the new API Key when it was created, it is therefore not visible in the overview. However, the Show unused API Keys button allows it to be shown.

6.3.3. Granting of rights

An API Key initially has no rights, but only a project assignment. This means that the sliders in the columns Read and Write on the Overview initially have the status OFF. The sliders can be used to set or remove the rights independently.

If an API Key is to be given access to further projects, they can be added via a selection dialog. The dialog is opened with the + Add Project button and only lists the projects which are not yet assigned to the API Key.

API Keys without project assignment are hidden in the overview list by default. However, the Show unused API Keys button allows them to be shown.

It is possible to assign an API Key to all projects and, at the same time, to specific projects (see figure assignment of rights). In this case, please note that the permissions are not additive. This means that the permissions set for a specific project always override the general definition.

Example

In the following illustration, an API Key has read access to all projects. For the project MithrasEnergy, however, it only has write permission. Thus, the key has only write access to the MithrasEnergy project, while it can only read all other projects.

Figure 10. assignment of rights

6.3.4. Withdrawal of rights

On the Overview, a list of the projects assigned to each API Key is displayed. Each of these projects also has two sliders (see figure rights of an API Key). They define whether the API Key has read or write access to the respective project. The two permissions can be set or removed independently. Switching a slider to the status OFF deprives the key of the corresponding right for the associated project.

Figure 11. rights of an API Key

In addition, it is possible to revoke from an API Key not only a single right but the entire access to a project. To do this, the project must be removed from the project list associated with the key by pressing the red button visible in front of each project name.

If a project was accidentally removed from the API Key’s project list, it can be easily added again by pressing the + Add Project button.

Figure 12. add projects to an API Key

A delete button is also displayed before each API Key. Deletion removes the key from the overview. It loses its validity and thus any access to the projects previously assigned to it.

The deletion of an API Key cannot be undone. It cannot be restored afterwards.

7. Metrics

Metrics are used for monitoring and error analysis of CaaS components during operation and can be accessed via HTTP endpoints. If metrics are available in Prometheus format, corresponding ServiceMonitors are generated for this purpose, see also Prometheus ServiceMonitors.

7.1. REST Interface

Healthcheck

The Healthcheck endpoint provides information about the functionality of the corresponding component in the form of a JSON document. This status is calculated from several checks. If all checks are successful, the JSON response has the HTTP status 200. As soon as at least one check has the value false, the response has HTTP status 500.

The query is made using the URL: http://REST-HOST:PORT/_logic/healthcheck

The functionality of the REST Interface depends on the accessibility of the MongoDB cluster as well as on the existence of a primary node. If the cluster does not have a primary node, it is not possible to perform write operations on the MongoDB.
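
A simple way to check the status including the HTTP status code is, for example (whether the endpoint additionally requires an Authorization header depends on your security configuration):

healthcheck query (sketch)
curl -i "http://REST-HOST:PORT/_logic/healthcheck"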

HTTP Metrics

Metrics for HTTP requests and responses of the REST Interface can be retrieved as a JSON document or in Prometheus format at the following URL: http://REST-HOST:PORT/_metrics

Further information is available in the RESTHeart-Documentation.

7.2. MongoDB

The metrics of the MongoDB are provided by a sidecar container. This container accesses the MongoDB metrics with a separate database user and provides them via HTTP.

The metrics can be accessed at the following URL: http://MONGODB-HOST:METRICS-PORT/metrics.

Please note that the MongoDB metrics are delivered via a separate port. This port is not accessible from outside the cluster and therefore not protected by authentication.

8. Maintenance

The transfer of data to CaaS can only work if the individual components work properly. If faults occur or an update is necessary, all CaaS components must therefore be considered. The following subchapters describe the necessary steps of an error analysis in case of a malfunction and the execution of a backup or update.

8.1. Error analysis

CaaS is a distributed system and is based on the interaction of different components. Each of these components can potentially generate errors. Therefore, if a failure occurs while using CaaS, it can have several causes. The basic analysis steps for determining the causes of faults are explained below.

Status of the components

The status of each component of the CaaS platform can be checked using the kubectl get pods --namespace=<namespace> command. If the status of an instance differs from running or ready, it is recommended to start debugging at this point and check the associated log files.

If there are problems with the Mongo database, check whether a Primary node exists. If the number of available instances falls below 50% of the configured instances, no new Primary node can be elected. A Primary node, however, is essential for the functionality of the REST Interface. The absence of a Primary node means that the pods of the REST Interface no longer have the status ready and are therefore unreachable.

The chapter Consider Fault Tolerance of the MongoDB documentation describes how many nodes may fail before the election of a new Primary node becomes impossible, and how to avoid this situation.

Analysis of the logs

In case of problems, the log files are a good starting point for analysis. They offer the possibility to trace all processes on the systems. In this way, any errors and warnings become apparent.

Current log files of the CaaS components can be viewed using kubectl --namespace=<namespace> logs <pod>, but only contain events that occurred within the lifetime of the current instance. To be able to analyze the log files after a crash or restart of an instance, we recommend setting up a central logging system.

The log files can only be viewed for the currently running container. For this reason, it is necessary to set up a persistent storage to access the log files of already finished or newly started containers.
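
As long as no central logging system is in place, the logs of the previous container instance can at least be retrieved after a restart with the --previous flag, for example:

retrieving logs of the previous container instance
kubectl --namespace=<namespace> logs <pod> --previous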

8.2. Backup

The architecture of CaaS consists of different, independent components that generate and process different information. If there is a need for data backup, this must therefore be done depending on the respective component.

A backup of the information stored in CaaS must be performed using the standard mechanisms of the Mongo database. This can either be done by creating a copy of the underlying files or by using mongodump.
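
A dump created with mongodump directly in one of the Mongo pods could, as a sketch, look as follows; the pod name is taken from the sample output in chapter 3.3, and the user, password, and archive name are placeholders that must match your credentials configuration:

backup with mongodump (sketch)
kubectl --namespace=caas exec caas-mongo-0 -- \
  mongodump --username <mongo-user> --password <mongo-password> \
  --authenticationDatabase admin --gzip --archive > caas-backup.archive.gz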

8.3. Update

Operating the CaaS platform with Helm in Kubernetes makes it possible to update to a new version without the need for a new installation.

Before updating the Mongo database, a Backup is strongly recommended.

The helm list --all-namespaces command first returns a list of all already installed Helm charts. This list contains both the version and the namespace of the corresponding release.

sample list of installed releases
$ helm list --all-namespaces
NAME            NAMESPACE    REVISION  UPDATED             STATUS    CHART        APP VERSION
firstinstance   integration  1         2019-12-11 15:51..  DEPLOYED  caas-2.10.4  caas-2.10.4
secondinstance  staging      1         2019-12-12 09:31..  DEPLOYED  caas-2.10.4  caas-2.10.4

To update a release, the following steps must be carried out one after the other:

Transfer the settings

To avoid losing the previous settings, it is necessary to have the custom-values.yaml file with which the initial installation of the Helm chart was carried out.

Adoption of further adjustments

If there are adjustments to files (e.g. in the config directory), these must also be adopted.

Update

After performing the previous steps, the update can be started. It replaces the existing installation with the new version without any downtime. To do this, execute the following command, which starts the process:

helm upgrade RELEASE_NAME caas-2.11.34.tgz --values /path/to/custom-values.yaml

9. Help

The Technical Support of e-Spirit AG provides expert technical support covering any topic related to the FirstSpirit™ product to customers and partners. You can find more help on relevant topics in our community.

10. Disclaimer

This document is provided for information purposes only. e-Spirit may change the contents hereof without notice. This document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. e-Spirit specifically disclaims any liability with respect to this document and no contractual obligations are formed either directly or indirectly by this document. The technologies, functionality, services, and processes described herein are subject to change without notice.