capito digital

On Premises solution for your needs

With an On Premises installation, you can host capito digital on your own servers.

On Premises example - capito digital
1. Full access

All capito digital services are available On Premises. You have full access to capito digital.

2. Simple deployment, simple updates

Our CI/CD process is designed to make On Premises installation and software updates as easy as possible.

3. Full control

With an On Premises installation, you can choose where the software is hosted. You have full control over your data.

All functions at a glance

Simplification API
capito digital simplifies texts fully automatically into 3 easy-to-understand language levels.

Analysis API
capito digital checks texts for comprehensibility and shows which simplifications can be made.

Suggestions API
For difficult words, capito digital suggests easy-to-understand alternatives.

Lexicon API
Easily understandable explanations of terms can be requested directly.

Any questions?

For more information and answers to frequently asked questions about our On Premises solutions, click here.

Are you interested?

Get in touch.

Martin Gollegger
Product manager

Frequently asked questions

Here you will find the most frequently asked questions about our On Premises solutions.

What are the requirements for an On Premises installation?

You need a Kubernetes cluster with at least 2 nodes. At least one node must have a CUDA-compatible GPU with more than 16 GB of vRAM. If two GPU nodes are used, the second node must also have at least 16 GB of vRAM. If a CPU node is also used, at least 16 GB of memory is strongly recommended for it.

Do the NLP services need to be hosted on a GPU node?

For best performance, both NLP services should be hosted on CUDA-compatible GPU nodes. The analysis service can run on either a CPU or a GPU node; the simplification service must run on a GPU node. If the simplification service is to run on a CPU node instead, this must be coordinated with us in advance, as code changes are required.

What are the requirements for the API?

The API service has low system requirements and can run either on the same nodes or on other smaller nodes.

The system requirements at a glance (a sample Kubernetes resource sketch follows below):

Analyzer:
Type: CPU or NVIDIA GPU
Memory: > 16 GB
Storage: > 50 GB

Simplifier:
Type: NVIDIA GPU
Memory: > 16 GB vRAM
Storage: > 400 GB

API service:
Type: CPU
Memory: > 2 GB
Storage: > 20 GB

Database Access Layer:
Type: CPU
Memory: > 2 GB
Storage: > 20 GB
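
As a rough illustration, the Simplifier figures above might translate into a Kubernetes resource section like the following. This is a minimal sketch; the values mirror the table and are not the exact defaults from our Helm Charts.

```yaml
# Illustrative resource section for the Simplifier deployment;
# values mirror the table above, not the exact chart defaults.
resources:
  requests:
    memory: "16Gi"        # host RAM; the > 16 GB vRAM comes with the GPU itself
  limits:
    memory: "16Gi"
    nvidia.com/gpu: 1     # one CUDA-compatible GPU (NVIDIA device plugin required)
# The > 400 GB of storage for the NLP models would typically be provided
# via a PersistentVolume rather than through this resource section.
```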

Why does the Simplifier service require so much storage?

The node for the Simplifier service requires a lot of disk space because the NLP models used to perform the simplifications have to be stored on it. We are working on optimizing this.

How does the deployment process work?

In general, the services' CI/CD process consists of two main steps (sketched below):

1. A Docker image is built and pushed to a container registry.

2. The Kubernetes cluster is updated using Helm Charts.
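
Sketched schematically, for example in GitLab CI syntax, the two steps could look like this; job names, variables, and chart paths are placeholders, not our actual pipeline configuration.

```yaml
# Schematic two-step CI/CD pipeline (GitLab CI syntax); all names and
# paths are placeholders, not capito's actual configuration.
stages:
  - build
  - deploy

build-image:
  stage: build
  script:
    # Step 1: build the Docker image and push it to a container registry
    - docker build -t $CI_REGISTRY_IMAGE/api:$CI_COMMIT_SHORT_SHA .
    - docker push $CI_REGISTRY_IMAGE/api:$CI_COMMIT_SHORT_SHA

update-cluster:
  stage: deploy
  script:
    # Step 2: update the Kubernetes cluster from the Helm Chart
    - helm upgrade --install capito-api ./charts/api --set image.tag=$CI_COMMIT_SHORT_SHA
```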

How does Docker image integration work?

Since our Docker images are stored in our organization's private DockerHub repository, they are not publicly available. However, finished images can be pushed to your own Docker registry. To keep the registry credentials secret, a secret management tool integrates the credentials into our deployment process without anyone, including us, seeing them.
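
In Kubernetes, such credentials are typically stored as an image pull secret (created, for example, with kubectl create secret docker-registry) that the pod spec then references. A minimal sketch; the secret name and image path are hypothetical:

```yaml
# Minimal sketch of referencing private-registry credentials;
# the secret name "capito-registry" and the image path are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: capito-api
spec:
  imagePullSecrets:
    - name: capito-registry
  containers:
    - name: api
      image: registry.example.com/capito/api:1.0.0
```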

How does cluster integration work?

We provide Helm Charts for the deployment of our customer solutions. Each service (LPP, API and database) is deployed into its own namespace. The Helm Charts do not contain any sensitive data and are delivered over a secure channel.
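
With this layout, each service lands in its own namespace; a minimal sketch, with namespace names that are illustrative rather than the actual chart values:

```yaml
# One namespace per service (names are illustrative)
apiVersion: v1
kind: Namespace
metadata:
  name: capito-lpp
---
apiVersion: v1
kind: Namespace
metadata:
  name: capito-api
---
apiVersion: v1
kind: Namespace
metadata:
  name: capito-db
```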

How do you determine which deployments should run on which nodes?

To control which pods are scheduled onto which nodes, the deployments can be given node selectors or tolerations. For scheduling to work correctly, the nodes must be provided with the matching labels or taints.
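
A minimal sketch of such scheduling constraints in a pod spec; the label key and the taint shown are illustrative assumptions, not our actual chart values:

```yaml
# Excerpt from a pod spec with illustrative scheduling constraints;
# the label key/value and the taint are hypothetical examples.
spec:
  nodeSelector:
    capito.digital/node-type: gpu   # pod is only scheduled onto nodes with this label
  tolerations:
    - key: nvidia.com/gpu           # pod tolerates a matching node taint
      operator: Exists
      effect: NoSchedule
```

The matching label would be applied with, for example, kubectl label nodes <node-name> capito.digital/node-type=gpu.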

How are the applications accessed?

All services are interconnected, but can only be accessed via the API service. Ingress and gateway configurations can be defined and managed by the customer.
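
For example, a standard Kubernetes Ingress in front of the API service could look like the following; host, namespace, and service names are placeholders:

```yaml
# Minimal Ingress sketch exposing only the API service;
# host, namespace, and service name are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: capito-api
  namespace: capito-api
spec:
  rules:
    - host: capito.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: capito-api
                port:
                  number: 80
```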