1. Introduction
The CaaS platform is the link between FirstSpirit and the customer's end application. The REST Interface receives information and stores it in the internal persistence layer of the CaaS platform. The customer's end application retrieves and updates this data via requests to the REST Interface.
The CaaS platform includes the following components, which are available as docker containers:
REST Interface (caas-rest-api)
The REST Interface is used both for transferring and retrieving data to and from the repository. For this purpose it provides a REST endpoint that can be used by any service. It also supports authentication and authorization.
Between CaaS version 2.11 and 2.13 (inclusive), the authentication and authorization functionality was provided by a separate Security Proxy.
CaaS repository (caas-mongo)
The CaaS repository is not accessible from the Internet and can be only accessed within the platform via the REST Interface. It serves as a storage for all project data and internal configuration.
Deprecated: CaaS Admin Interface (caas-admin-webapp)
The CaaS Admin Interface enables the management of the information transferred to CaaS and provides a simple, web-based administration interface. To do this, it communicates with the repository via the REST Interface and is accessible via HTTP(S).
Deprecation notice: The CaaS Admin Interface has been deprecated since version 7. We strongly recommend using a REST client of your choice instead. The CaaS Admin Interface will be removed in December 2021.
2. Technical requirements
The operation of the CaaS platform requires Kubernetes.
If you are not able to operate, configure, and monitor the cluster infrastructure and to analyze and resolve operating problems accordingly, we strongly advise against on-premises operation and refer you to our SaaS offering.
Since the CaaS platform is delivered as a Helm artifact, the Helm client must be available.
It is important that Helm is installed in a secure manner. For more information, refer to the Helm Installation Guide.
For system requirements, please consult the technical data sheet of the CaaS platform.
3. Installation and configuration
The setup of the CaaS platform for operation with Kubernetes is done by using Helm-Charts. These are part of the delivery and already contain all necessary components.
The following subchapters describe the necessary installation and configuration steps.
3.1. Import of the images
The first step in setting up the CaaS platform for operation with Kubernetes is the import of the images into your central Docker registry (e.g. Artifactory). The images are contained in the file caas-docker-images-9.0.1.zip in the delivery.
The credentials for cluster access to the repository must be known.
The steps necessary for the import can be found in the documentation of the registry you are using.
3.2. Configuration of the Helm chart
After the import of the images, the Helm chart must be configured. The chart is part of the delivery and contained in the file caas-9.0.1.tgz. A default configuration is already provided in the values.yaml file. All parameters specified in this values.yaml can be overridden with specific values in a manually created custom-values.yaml file.
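As an illustrative sketch, a minimal custom-values.yaml overriding a single default value could look as follows (the storage class name is a placeholder and depends on your cluster):

```yaml
# custom-values.yaml (sketch): only the parameters to override are listed;
# all other parameters keep their defaults from values.yaml.
mongo:
  persistentVolume:
    # "fast-ssd" is a placeholder; use a storage class available in your cluster
    storageClass: fast-ssd
```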
3.2.1. Authentication
All authentication settings for the communication with or within the CaaS platform are specified in the credentials block of the custom-values.yaml file. Here you will find user names and default passwords as well as the CaaS Master API Key. It is strongly recommended to change the default passwords and the CaaS Master API Key.
All selected passwords must be alphanumeric. Otherwise, problems will occur in connection with CaaS.
The CaaS Master API Key is automatically created during the installation of CaaS and thus allows the direct use of the REST Interface.
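As a sketch, the corresponding credentials block in the custom-values.yaml might look like this. The parameters webAdminUser and webAdminPassword are documented in chapter 5.2.1; the masterApiKey parameter name and all values are illustrative assumptions, so consult the values.yaml of your delivery for the authoritative parameter names:

```yaml
credentials:
  # Admin user for HTTP Basic Authentication (see chapter 5.2.1)
  webAdminUser: admin
  webAdminPassword: "ChangeMe123"   # alphanumeric only, see the note above
  # The parameter name for the CaaS Master API Key is an assumption;
  # check your values.yaml for the exact key.
  masterApiKey: "0000aaaa1111bbbb"
```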
3.2.2. CaaS repository (caas-mongo)
The configuration of the repository includes two parameters:
- storageClass: The possibility of overriding parameters from the values.yaml file mainly affects the parameter mongo.persistentVolume.storageClass.
For performance reasons, we recommend that the underlying MongoDB filesystem is provisioned with XFS.
- clusterKey: A default configuration is delivered for the authentication key of the Mongo cluster. The key can be defined in the parameter credentials.clusterKey. It is strongly recommended to create a new key for productive operation with the following command:
openssl rand -base64 756
This value may only be changed during the initial installation. Changing it at a later time can lead to permanent unavailability of the database, which can only be repaired manually.
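The generated key is a Base64 encoding of 756 random bytes. A quick sketch to generate it and verify its length on the command line:

```shell
# Generate a new Mongo cluster key; openssl encodes 756 random bytes
# as Base64 (756 / 3 * 4 = 1008 characters, wrapped across several lines).
KEY=$(openssl rand -base64 756 | tr -d '\n')
# Print the length of the unwrapped key
echo "${#KEY}"
```

The resulting value can then be set as credentials.clusterKey in the custom-values.yaml.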
3.2.3. Docker Registry
To configure the Docker registry to be used, the parameters imageRegistry and imageCredentials must be adjusted.
imageRegistry: docker.company.com/e-spirit
imageCredentials:
username: "username"
password: "special_password"
registry: docker.company.com
enabled: true
3.2.4. Ingress Configurations
Ingress definitions control the incoming traffic to the respective component and are not created by default. The parameters restApi.ingress.enabled and adminWebapp.ingress.enabled enable the Ingress configuration for the REST Interface and the CaaS Admin Interface.
The Ingress definitions of the Helm chart assume that the NGINX Ingress Controller is used, since annotations of this concrete implementation are applied. If you are using a different implementation, you must adapt the annotations of the Ingress definitions accordingly.
adminWebapp:
ingress:
enabled: true
hosts:
- caas-webapp.company.com
restApi:
ingress:
enabled: true
hosts:
- caas.company.com
If the setting options are not sufficient for your specific application, the Ingress can also be created independently. In this case the corresponding parameter must be set to enabled: false. The following code example provides an orientation for such a definition.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    name: caas
spec:
  rules:
  - host: caas-rest-api.mydomain.com
    http:
      paths:
      - backend:
          serviceName: caas-rest-api
          servicePort: 80
3.3. Installation of the Helm-Chart
After the configuration, the Helm chart has to be installed into the Kubernetes cluster. The installation is done with the following commands, which must be executed in the directory of the Helm chart.
kubectl create namespace caas
helm install RELEASE_NAME . --namespace=caas --values /path/to/custom-values.yaml
The name of the release can be chosen freely.
If the namespace is to have a different name, replace the specifications within the commands accordingly.
If an already existing namespace is to be used, its creation is omitted and the desired namespace must be specified within the installation command.
Since the containers first have to be downloaded from the image registry used, the installation can take several minutes. Ideally, however, it should not take more than five minutes before the CaaS platform is operational.
The status of each component can be obtained with the following command:
kubectl get pods --namespace=caas
Once all components have the status Running, the installation is complete.
NAME READY STATUS RESTARTS AGE
caas-admin-webapp-1055845989-0s4pg 1/1 Running 0 5m
caas-mongo-0 2/2 Running 0 4m
caas-mongo-1 2/2 Running 0 3m
caas-mongo-2 2/2 Running 0 1m
caas-rest-api-1851714254-13cvn 1/1 Running 0 5m
caas-rest-api-1851714254-xs6c0 1/1 Running 0 4m
3.4. TLS
The communication of the CaaS platform to the outside world is not encrypted by default. If it is to be protected by TLS, there are two configuration options:
- Using an officially signed certificate: To use an officially signed certificate, a TLS secret is required, which must be generated first. It must contain the key tls.key and the certificate tls.crt. The steps necessary to generate the TLS secret are described in the Kubernetes Ingress documentation.
- Automated certificate management: As an alternative to an officially signed certificate, certificate administration can be automated with the Cert-Manager. It must be installed within the cluster and takes over the generation, distribution, and renewal of all required certificates. The configuration of the Cert-Manager allows, for example, the use and automatic renewal of Let's Encrypt certificates. The necessary installation steps are explained in the Cert-Manager documentation.
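If you create the Ingress yourself (see chapter 3.2.4), the TLS secret (generated manually or by the Cert-Manager) is referenced via the standard Kubernetes tls field. A sketch; the secret name caas-tls is an assumption:

```yaml
# Excerpt of a self-managed Ingress with TLS (standard Kubernetes fields);
# the secret name "caas-tls" is a placeholder.
spec:
  tls:
  - hosts:
    - caas.company.com
    secretName: caas-tls
  rules:
  - host: caas.company.com
    http:
      paths:
      - backend:
          serviceName: caas-rest-api
          servicePort: 80
```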
3.5. Scaling
In order to be able to quickly process the information transferred to CaaS, the CaaS platform must ensure optimal load distribution at all times. For this reason, the REST Interface and the Mongo database are scalable and already configured to deploy at least three instances at a time for failover. This minimum number of instances is mandatory, especially for the Mongo cluster.
REST Interface
The REST Interface is scaled with the help of a Horizontal Pod Autoscaler. Its activation and configuration must be done in the custom-values.yaml file to override the default values defined in the values.yaml file.
restApi:
horizontalPodAutoscaler:
enabled: false
minReplicas: 3
maxReplicas: 9
targetCPUUtilizationPercentage: 50
The Horizontal Pod Autoscaler scales the REST Interface down or up depending on the current CPU load. The parameter targetCPUUtilizationPercentage specifies the percentage value at which scaling takes place, while the parameters minReplicas and maxReplicas define the minimum and maximum number of REST Interface instances.
The threshold value for the CPU load should be chosen with care: a wrong configuration can endanger the stability of the system. The official Kubernetes Horizontal Pod Autoscaler documentation and the examples listed in it contain further information on the use of a Horizontal Pod Autoscaler.
Mongo database
Unlike the REST Interface, the Mongo database can only be scaled manually; it cannot be scaled automatically using a Horizontal Pod Autoscaler.
The Mongo database is scaled using the replicas parameter. This parameter must be entered in the custom-values.yaml file to override the default value defined in the values.yaml file.
At least three instances are required to run the Mongo cluster; otherwise fault tolerance is not guaranteed. The chapter Consider Fault Tolerance of the MongoDB documentation describes how many nodes may fail before the election of a new primary node becomes impossible. Further information on scaling and replicating the Mongo database is available in the chapters Replica Set Deployment Architectures and Replica Set Elections.
mongo:
replicas: 3
Downscaling the Mongo database is not possible without direct intervention and requires a manual reduction of the replica set of the Mongo database. The MongoDB documentation describes the necessary steps. Such an intervention increases the risk of failure and is therefore not recommended.
Applying the configuration
After configuration changes for the REST Interface or the Mongo database, the updated custom-values.yaml file must be applied with the following command.
helm upgrade -i RELEASE_NAME path/to/caas-<VERSIONNUMBER>.tgz --values /path/to/custom-values.yaml
The release name can be determined with the helm list command.
3.6. Monitoring
The CaaS platform is a microservice architecture and therefore consists of different components. In order to be able to monitor its status properly at any time and to react quickly in the event of an error, integration into a cluster-wide monitoring system is absolutely essential for operation with Kubernetes.
The CaaS platform is already preconfigured for monitoring with Prometheus Operator, since this scenario is widely used in the Kubernetes environment. It includes Prometheus ServiceMonitors for collecting metrics, Prometheus Alerts for notification in case of problems and predefined Grafana dashboards for visualizing the metrics.
3.6.1. Requirements
It is essential to set up monitoring and log persistence for the Kubernetes cluster. Without these prerequisites, there are hardly any analysis possibilities in case of a failure and Technical Support lacks important information.
- Metrics: To install the Prometheus Operator, please use the official Helm chart, so that cluster monitoring can be set up based on it. For further information, please refer to the corresponding documentation. If you are not running a Prometheus Operator, you must disable the Prometheus ServiceMonitors and Prometheus Alerts.
- Logging: With Kubernetes it is possible to provide various containers or services in an automated and scalable way. To ensure that logs remain available in such a dynamic environment even after an instance has been terminated, an infrastructure must be integrated that persists them beforehand. We therefore recommend the use of a central logging system, such as the Elastic Stack. The Elastic (or ELK) Stack is a collection of open source projects that help to persist, search, and analyze log data in real time. Here too, you can use an existing Helm chart for the installation.
3.6.2. Prometheus ServiceMonitors
The deployment of the ServiceMonitors provided by the CaaS platform for the REST Interface and the Mongo database is controlled via the custom-values.yaml file of the Helm chart.
Access to the metrics of the REST Interface is secured by HTTP Basic Auth, and access to the metrics of the MongoDB by a corresponding MongoDB user. The respective access data is contained in the credentials block of the custom-values.yaml file. Please adjust the default credentials in your custom-values.yaml.
Typically, Prometheus is configured to consider only ServiceMonitors with specific labels. The labels can therefore be configured in the custom-values.yaml file and are valid for all ServiceMonitors of the CaaS Helm chart. Furthermore, the parameter scrapeInterval allows defining the frequency with which the respective metrics are retrieved.
monitoring:
prometheus:
# Prometheus service monitors will be created for enabled metrics. Each Prometheus
# instance has a configured serviceMonitorSelector property, to be able to control
# the set of matching service monitors. To allow defining matching labels for CaaS
# service monitors, the labels can be configured below and will be added to each
# generated service monitor instance.
metrics:
serviceMonitorLabels:
release: "prometheus-operator"
mongo:
enabled: true
scrapeInterval: "30s"
caas:
enabled: true
scrapeInterval: "30s"
The MongoDB metrics are provided via a sidecar container and retrieved by a separate database user. You can configure this database user in the credentials block of the custom-values.yaml file. The sidecar container is delivered with the following standard configuration:
mongo:
metrics:
image: mongodb-exporter:0.11.0
syncTimeout: 1m
3.6.3. Prometheus Alerts
The deployment of the alerts provided by the CaaS platform is controlled via the custom-values.yaml file of the Helm chart.
Prometheus is typically configured to consider only alerts with specific labels. The labels can therefore be configured in the custom-values.yaml file and apply to all alerts of the CaaS Helm chart:
monitoring:
prometheus:
alerts:
prometheusRuleLabels:
app: "prometheus-operator"
release: "prometheus-operator"
caas:
enabled: true
3.6.4. Grafana Dashboards
The deployment of the Grafana dashboards provided by the CaaS platform is controlled via the custom-values.yaml file of the Helm chart.
Typically, the Grafana sidecar container is configured to consider only configmaps with specific labels and in a defined namespace. The labels of the configmap and the namespace in which it is deployed can therefore be configured in the custom-values.yaml file:
monitoring:
grafana:
dashboards:
enabled: true
configmapNamespace: ""
configMapLabels: {}
4. Development Environment
Kubernetes and Helm form the basis of all CaaS platform installations. For development environments we recommend installing the CaaS platform into a separate namespace on your production cluster or on a similarly configured cluster. We do not recommend using local CaaS platform instances, even for development.
The documentation regarding the deprecated Docker Compose stack was removed with version 7.1.0. However, it is still available in the documentation of the previous versions up to and including version 7.0.0.
If you need a local environment on developer machines, you have to create a local Kubernetes cluster. One of the following projects may be used to achieve this:
This list does not claim to be exhaustive. Rather, it is intended to give some examples which we know generally work, without us permanently using these projects ourselves.
Each of these projects can be used to manage Kubernetes clusters locally. However, we’re not able to give you support for any of these specific projects. The CaaS platform uses only standard Helm and Kubernetes features and is thus independent of any particular Kubernetes distribution.
Please be sure to configure the following features correctly when using a local Kubernetes cluster:
- Kubernetes Image Pull Secrets to resolve the Docker images from your local or company Docker registry
- disabling the monitoring features in custom-values.yaml, or installing the needed prerequisites
- tweaking the host system's DNS settings to be able to work with Kubernetes Ingress resources, or using local port forwards into the cluster
5. REST Interface
5.1. Storage of the content
Using the REST Interface, all content can be managed via HTTP. It is stored in CaaS in so-called collections, which in turn are subordinate to databases. The following three-part URL scheme applies:
http://Servername:Port/Database/Collection/Document
Binary content (media) is an exception in that it is stored in so-called buckets. The associated collections always end with a corresponding suffix.
5.2. Authentication
Each request to the REST Interface must be authenticated, otherwise it will be rejected. The various authentication options are explained below.
5.2.1. Authentication as admin user
Authorization of the admin user is done using HTTP Basic Authentication with the configured credentials. The admin user is intended for administrative tasks, such as the administration of API Keys. All other operations should be authenticated using API Keys.
API Keys control access rights to projects and can only be managed by the admin user. See the Management of API Keys section for details.
The credentials of the admin user are defined in the parameters credentials.webAdminUser and credentials.webAdminPassword of the Helm chart.
Details can be found in chapter Authentication.
5.2.2. Authentication with API Keys
Each request to the REST Interface must contain an HTTP header of the form Authorization: apikey="<key>". The value of key is expected to be the value of the key attribute of the corresponding API Key.
See the Validation of API Keys section below for more information.
5.2.3. Authentication with security token
It is possible to generate a short-lived (up to 24 hours) security token for an API Key. The token carries the same permissions as the API Key for which it was generated. There are two ways to generate and use these tokens:
Query Parameter
A GET request authenticated with an API Key to the /_logic/securetoken?tenant=<db> endpoint generates a security token. Such a token can be issued only for one specific database, regardless of whether the API Key has permissions on multiple databases. The parameter &ttl=<lifetime in seconds> is optional. The JSON response contains the security token.
Each request to the REST Interface can optionally be authenticated using the query parameter ?securetoken=<token>.
Cookie
A GET request authenticated with an API Key to the /_logic/securetokencookie?tenant=<db> endpoint generates a security token cookie. Such a cookie can be issued only for one specific database, regardless of whether the API Key has permissions on multiple databases. The parameter &ttl=<lifetime in seconds> is optional. The response includes a set-cookie header with the security token.
All requests that include this cookie get automatically authenticated.
5.3. Management of API Keys
API Keys, like all other resources in CaaS, can be managed via REST endpoints. It is important to distinguish the two levels at which API Keys can be managed: globally, or locally per database. Global and local API Keys differ in their scope of validity.
When using an API Key for authentication, the CaaS platform always searches the local API Keys first. If no matching API Key is found, the global API Keys are evaluated afterwards.
5.3.1. Global API Keys
Global API Keys are valid across databases and are managed in the apikeys collection of the caas_admin database. Unlike local API Keys, they allow permissions to be defined for multiple or even all databases.
5.3.2. Local API Keys
Local API Keys are defined per database and are managed accordingly in the apikeys collection of the respective database. Unlike global API Keys, local API Keys can only define permissions for resources within the same database.
5.3.3. Authorization Model
The authorization of an API Key is performed using its url attribute, whose value is checked against the URL path of the request. Global and local API Keys differ here as well: global API Keys are checked against the entire path of the request, while local API Keys are only checked against the part of the path after the database.
The following example illustrates this procedure:
| authorization in API Key | type of API Key | request URL path | allowed |
|---|---|---|---|
| / | global | / | yes |
| | | /project/ | yes |
| | | /project/content/ | yes |
| | | /other-project/ | yes |
| | | /other-project/content/ | yes |
| /project/ | global | / | no |
| | | /project/ | yes |
| | | /project/content/ | yes |
| | | /other-project/ | no |
| | | /other-project/content/ | no |
| / | local in 'project' | / | no |
| | | /project/ | yes |
| | | /project/content/ | yes |
| | | /other-project/ | no |
| | | /other-project/content/ | no |
| /content/ | local in 'project' | / | no |
| | | /project/ | no |
| | | /project/content/ | yes |
| | | /other-project/ | no |
| | | /other-project/content/ | no |
5.3.4. REST endpoints
The following endpoints are available for managing API Keys:
Since managing API Keys is considered an administrative task, both read and write access are reserved for the admin user. To issue queries, please use a REST client of your choice.
- GET /<database>/apikeys
- POST /<database>/apikeys (note: the parameters _id and key are mandatory and must have identical values)
- PUT /<database>/apikeys/{id} (note: the parameter key must have the same value as the {id} in the URL)
- DELETE /<database>/apikeys/{id}
The database to be used depends on the type of the API Key.
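As an illustration, a minimal local API Key document might look as follows. Only _id, key, and url are taken from this documentation; the exact structure is defined by the JSON schema, which can be queried at /<database>/_schemas/apikeys (see the Validation of API Keys section), and the values below are placeholders:

```json
{
  "_id": "f3a9e2d1-0000-0000-0000-000000000000",
  "key": "f3a9e2d1-0000-0000-0000-000000000000",
  "url": "/content/"
}
```

Note that _id and key must be identical, and that the url attribute of a local API Key is checked only against the part of the request path after the database.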
5.3.5. Validation of API Keys
Each API Key is validated against a stored JSON schema when created and updated. The JSON schema secures the basic structure of API Keys and can be queried at /<database>/_schemas/apikeys.
Further validations ensure that no two API Keys can be created with the same key. Likewise, an API Key must not contain a URL more than once.
If an API Key does not satisfy these requirements, the corresponding request is rejected with HTTP status 400.
If the JSON schema has not been successfully stored in the database before, requests are answered with HTTP status 500.
5.4. Managing content
5.4.1. HAL format
The interface returns all results in HAL format. This means they are not simply raw data, as with traditionally unstructured content in JSON format.
The HAL format offers the advantage of simple but powerful structuring. In addition to the required content, the results contain additional meta-information on the structure of this content.
Example
{
  "_size": 5,
  "_total_pages": 1,
  "_returned": 3,
  "_embedded": { CONTENT }
}
In this example a filtered query was sent. Without knowing the exact content, its structure can be read directly from the meta information: the REST Interface returns three results from a set of five documents matching the filter criteria and displays them on a single page.
If the requested element is a medium, the URL only returns its metadata. The HAL format contains corresponding links that refer to the URL with the actual binary content of the medium. For further information please refer to the documentation.
5.4.2. Page size of queries
The results of the REST Interface are always delivered paginated. To control the page size and the requested page, the HTTP query parameters pagesize and page can be used for GET requests. For more information, see the RESTHeart documentation.
5.4.3. Use of filters
Filters are used whenever documents are to be determined not by their ID but by their content. In this way, both single and multiple documents can be retrieved.
For example, the query of all English language documents from the products collection has the following structure:
http://Servername:Port/Database/products?filter={fs_language: "EN"}
Beyond this example there are further filter possibilities. For more information, see the query documentation.
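Note that the filter value must be URL-encoded when the request is actually sent. A minimal shell sketch that assembles such a request URL (host and database names are the placeholders from the scheme above; the encoding only covers the characters occurring in this particular filter):

```shell
# Build the filter query URL from the example above.
BASE="http://Servername:Port/Database/products"
FILTER='{fs_language: "EN"}'
# Percent-encode the characters used in this filter: { } " and space
ENCODED=$(printf '%s' "$FILTER" | sed -e 's/{/%7B/g' -e 's/}/%7D/g' -e 's/"/%22/g' -e 's/ /%20/g')
echo "${BASE}?filter=${ENCODED}"
```

The printed URL can then be requested with any HTTP client, authenticated as described in the Authentication chapter.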
5.5. Indexes for efficient query execution
The runtime of filter queries can increase as the number of documents in a collection grows. If it exceeds a certain value, the query is answered by the REST Interface with HTTP status 408. More efficient execution can be achieved by creating an index on the attributes used in the affected filter queries.
For detailed information on database indexes, please refer to the MongoDB documentation.
5.5.1. Predefined indexes
If you use CaaS Connect, predefined indexes that support some frequently used filter queries are already created. The exact definitions can be found at http://Servername:Port/Database/Collection/_indexes/.
5.5.2. Customer-specific indexes
If the predefined indexes do not cover your use cases and you observe long response times or even request timeouts, you can create your own indexes. The REST Interface can be used to manage the desired indexes; the procedure is described in the RESTHeart documentation.
Please only create the indexes you need.
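As an orientation, an index definition body typically names the indexed attributes and their sort order. The attribute fs_language is taken from the filter example above; the exact request format (endpoint and further options) is described in the RESTHeart documentation, so treat this fragment as a sketch:

```json
{
  "keys": { "fs_language": 1 }
}
```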
5.6. Push notifications (change streams)
It is often convenient to be notified about changes in the CaaS platform. For this purpose the CaaS platform offers change streams. This feature allows a websocket connection to be established to the CaaS platform, through which events about the various changes are published.
Change streams are created by putting a definition in the metadata of a collection. If you use CaaS Connect, a number of predefined change streams are already created for you. You also have the option to define your own change streams.
The format of the events corresponds to standard MongoDB events.
When working with websockets, we recommend taking into account connection failures that may occur.
You can find an example of using change streams in the browser in the appendix.
6. Metrics
Metrics are used for monitoring and error analysis of CaaS components during operation and can be accessed via HTTP endpoints. If metrics are available in Prometheus format, corresponding ServiceMonitors are generated for this purpose, see also Prometheus ServiceMonitors.
6.1. REST Interface
Healthcheck
The Healthcheck endpoint provides information about the functionality of the corresponding component in the form of a JSON document. This status is calculated from several checks. If all checks are successful, the JSON response has the HTTP status 200. As soon as at least one check has the value false, the response has HTTP status 500.
The query is made using the URL: http://REST-HOST:PORT/_logic/healthcheck
The functionality of the REST Interface depends on the accessibility of the MongoDB cluster as well as on the existence of a primary node. If the cluster does not have a primary node, write operations on the MongoDB are not possible.
HTTP Metrics
Metrics for HTTP requests and responses of the REST Interface can be retrieved as a JSON document or in Prometheus format at the following URL: http://REST-HOST:PORT/_metrics
Further information is available in the RESTHeart documentation.
6.2. MongoDB
The metrics of the MongoDB are provided by a sidecar container. This container accesses the MongoDB metrics with a separate database user and provides them via HTTP.
The metrics can be accessed at the following URL: http://MONGODB-HOST:METRICS-PORT/metrics
Please note that the MongoDB metrics are delivered via a separate port. This port is not accessible from outside the cluster and therefore not protected by authentication.
7. Maintenance
The transfer of data to CaaS can only work if the individual components work properly. If faults occur or an update is necessary, all CaaS components must therefore be considered. The following subchapters describe the necessary steps for error analysis in case of a malfunction and for performing a backup or update.
7.1. Error analysis
CaaS is a distributed system and is based on the interaction of different components. Each of these components can potentially generate errors. Therefore, if a failure occurs while using CaaS, it can have several causes. The basic analysis steps for determining the causes of faults are explained below.
- Status of the components: The status of each component of the CaaS platform can be checked using the kubectl get pods --namespace=<namespace> command. If the status of an instance differs from Running or Ready, it is recommended to start debugging at this point and check the associated log files.
If there are problems with the Mongo database, check whether a primary node exists. The chapter Consider Fault Tolerance of the MongoDB documentation describes how to avoid such situations and how many nodes may fail before a new primary node can no longer be determined.
- Analysis of the logs: In case of problems, the log files are a good starting point for analysis. They make it possible to trace all processes on the systems; in this way, any errors and warnings become apparent. Current log files of the CaaS components can be viewed using kubectl --namespace=<namespace> logs <pod>, but they only contain events that occurred within the lifetime of the current instance. To be able to analyze the log files after a crash or restart of an instance, we recommend setting up a central logging system.
The log files can only be viewed for the currently running container. For this reason, it is necessary to set up a persistent storage to access the log files of already finished or newly started containers.
7.2. Backup
The architecture of CaaS consists of different, independent components that generate and process different information. If there is a need for data backup, this must therefore be done depending on the respective component.
A backup of the information stored in CaaS must be performed using the standard mechanisms of the Mongo database. It can either be done by creating a copy of the underlying files or by using mongodump.
7.3. Update
Operating the CaaS platform with Helm in Kubernetes provides the possibility of updating to the new version without the need for a new installation.
Before updating the Mongo database, a backup is strongly recommended.
The helm list --all-namespaces command first returns a list of all installed Helm charts. This list contains both the version and the namespace of the corresponding release.
$ helm list --all-namespaces
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
firstinstance integration 1 2019-12-11 15:51.. DEPLOYED caas-2.10.4 caas-2.10.4
secondinstance staging 1 2019-12-12 09:31.. DEPLOYED caas-2.10.4 caas-2.10.4
To update a release, the following steps must be carried out one after the other:
- Transfer the settings: To avoid losing the previous settings, the custom-values.yaml file with which the initial installation of the Helm chart was carried out is required.
- Adoption of further adjustments: If there are adjustments to files (e.g. in the config directory), these must also be adopted.
- Update: After performing the previous steps, the update can be started. It replaces the existing installation with the new version without downtime. To do this, execute the following command, which starts the process:
helm upgrade RELEASE_NAME caas-9.0.1.tgz --values /path/to/custom-values.yaml
8. Appendix
8.1. Examples
<script type="module">
import PersistentWebSocket from 'https://cdn.jsdelivr.net/npm/pws@5/dist/index.esm.min.js';
// Replace this with your API key (needs read access for the preview collection)
const apiKey = "your-api-key";
// Replace this with your preview collection url (if not known copy from CaaS Connect Project App)
// e.g. "https://caas-host/my-tenant-id/f948bb48-4f6b-4a8a-b521-338c9d352f2b.preview.content"
const previewCollectionUrl = new URL("your-preview-collection-url");
const pathSegments = previewCollectionUrl.pathname.split("/");
if (pathSegments.length !== 3) {
throw new Error(`The format of the provided url '${previewCollectionUrl}' is incorrect and should only contain two path segments`);
}
(async function(){
// Retrieving temporary auth token
const token = await fetch(new URL(`_logic/securetoken?tenant=${pathSegments[1]}`, previewCollectionUrl.origin).href, {
headers: {'Authorization': `apikey="${apiKey}"`}
}).then((response) => response.json()).then((token) => token.securetoken).catch(console.error);
// Establishing WebSocket connection to the change stream "crud"
// ("crud" is the default change stream that the CaaS Connect module provides)
const wsUrl = `wss://${previewCollectionUrl.host + previewCollectionUrl.pathname}`
+ `/_streams/crud?securetoken=${token}`;
const pws = new PersistentWebSocket(wsUrl, { pingTimeout: 60000 });
// Handling change events
pws.onmessage = event => {
const {
documentKey: {_id: documentId},
operationType: changeType,
} = JSON.parse(event.data);
console.log(`Received event for '${documentId}' with change type '${changeType}'`);
}
})();
</script>
9. Help
The Technical Support of e-Spirit AG provides expert technical support covering any topic related to the FirstSpirit™ product. You can find more help concerning relevant topics in our community.
10. Disclaimer
This document is provided for information purposes only. e-Spirit may change the contents hereof without notice. This document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. e-Spirit specifically disclaims any liability with respect to this document and no contractual obligations are formed either directly or indirectly by this document. The technologies, functionality, services, and processes described herein are subject to change without notice.