Limits

This page covers the practical limits and operating guardrails you should account for when planning how to use Kupe Cloud.

These are not hidden internals. They are the constraints that most often affect cluster design, automation, and day-to-day operations.

Each cluster is created with explicit CPU, memory, and storage limits.

Plan around that model:

  • size the cluster for current workload needs
  • keep headroom for rollouts, retries, and incident response
  • review usage before resizing rather than waiting for pressure to show up in production

If a workload needs more room, update the cluster limits instead of treating capacity as open-ended.
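The sizing guidance above can be sketched as a small helper that derives limits from observed usage plus explicit headroom. The 30% headroom factor and the resource field names are illustrative assumptions, not platform defaults:

```python
# Derive cluster limit requests from observed usage plus headroom for
# rollouts, retries, and incident response.
# NOTE: the headroom factor and resource names are illustrative only.

HEADROOM = 1.3  # keep ~30% free capacity above steady-state usage


def sized_limits(usage: dict[str, float]) -> dict[str, float]:
    """Return per-resource limits sized from current usage with headroom."""
    return {resource: round(value * HEADROOM, 1) for resource, value in usage.items()}


current = {"cpu_cores": 8.0, "memory_gib": 24.0, "storage_gib": 200.0}
print(sized_limits(current))
```

Reviewing the output of a calculation like this before resizing keeps capacity changes deliberate rather than reactive.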

CLI access is intentionally bounded:

  • OIDC kubeconfig uses an 8-hour access token
  • token kubeconfig uses a fixed lifetime chosen at generation time

Choose the shortest workable lifetime for automation and rotate long-lived credentials regularly.

Secret limits come from both Kubernetes and platform guardrails:

  • secret values are limited to 1 MB per key
  • each tenant can create up to 1,000 secrets

If you have large configuration payloads, split them into smaller secrets or move bulk configuration into normal files and config objects rather than treating secrets as a general storage layer.
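Splitting a large payload can be as simple as chunking it below the per-key budget. This is a minimal sketch; the chunk-naming scheme and exact byte budget are illustrative assumptions:

```python
# Split a payload that exceeds the 1 MB per-key limit into multiple
# smaller secret values.
# NOTE: the naming scheme and byte budget here are illustrative assumptions.

MAX_VALUE_BYTES = 1_000_000  # stay under the 1 MB per-key limit


def chunk_secret(name: str, payload: bytes,
                 max_bytes: int = MAX_VALUE_BYTES) -> dict[str, bytes]:
    """Return {key: value} pairs, each value under the per-key size limit."""
    return {
        f"{name}-part-{i}": payload[offset:offset + max_bytes]
        for i, offset in enumerate(range(0, len(payload), max_bytes))
    }


parts = chunk_secret("bundle", b"x" * 2_500_000)
print(sorted(parts))  # ['bundle-part-0', 'bundle-part-1', 'bundle-part-2']
```

Remember that each chunk counts toward the 1,000-secret tenant quota, which is another reason to keep bulk configuration out of secrets entirely.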

Some limits are about time rather than hard object counts:

  • DNS changes are not instant
  • certificate issuance depends on DNS propagation
  • GitOps reconciliation is continuous, but not instantaneous

When a route or domain does not become ready immediately, check propagation and reconciliation before assuming the configuration is wrong.
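In automation, that advice translates to polling with backoff before treating a resource as misconfigured. A minimal sketch, where `check_ready` stands in for whatever readiness probe your tooling exposes:

```python
# Poll readiness with exponential backoff instead of failing immediately,
# since DNS propagation and GitOps reconciliation take time.
# NOTE: `check_ready`, the attempt count, and the delays are placeholders.
import time
from typing import Callable


def wait_until_ready(check_ready: Callable[[], bool],
                     attempts: int = 6,
                     base_delay: float = 5.0) -> bool:
    """Retry a readiness check with backoff; True once it passes."""
    for attempt in range(attempts):
        if check_ready():
            return True
        time.sleep(base_delay * 2 ** attempt)  # 5s, 10s, 20s, ...
    return False  # only now treat the configuration as suspect
```

Only when a loop like this exhausts its attempts is it worth auditing the route or DNS configuration itself.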

Cluster lifecycle operations are controlled workflows, not background details.

  • create, upgrade, and delete actions should be treated as explicit operational events
  • upgrades should be planned and validated like any other platform change
  • delete operations are destructive and should follow a confirmation process

The safest pattern is to treat lifecycle phases such as Provisioning, Running, and Upgrading as operating signals for your team and automation.
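Treating phases as signals can be made concrete by gating automation on the current phase. The phase names come from the paragraph above; which actions each phase permits is an illustrative policy, not documented platform behavior:

```python
# Gate automation on cluster lifecycle phase.
# NOTE: the phase names come from the page above; the allowed-action policy
# is an illustrative assumption, not platform behavior.

SAFE_ACTIONS: dict[str, set[str]] = {
    "Provisioning": set(),                      # wait; nothing is safe yet
    "Running": {"deploy", "scale", "upgrade"},  # normal operations window
    "Upgrading": set(),                         # hold changes until done
}


def action_allowed(phase: str, action: str) -> bool:
    """True only when the current phase permits the requested action."""
    return action in SAFE_ACTIONS.get(phase, set())


print(action_allowed("Running", "deploy"))    # True
print(action_allowed("Upgrading", "deploy"))  # False
```

Unknown phases deliberately permit nothing, which keeps destructive actions like delete behind an explicit confirmation step rather than a default.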

Not every path in the platform is equally mature for every use case.

  • shared-node clusters are the standard self-service path today
  • dedicated node placement is not the normal default workflow yet
  • some workflows are easier through the console, while others are better through the API or Terraform

Design your process around the supported path, then add more advanced automation once the core operating flow is stable.