
Installation & configuration

This guide covers installing the Asset Tokenization Kit into Kubernetes environments using the included Helm charts.

Overview

The deployment architecture consists of:

  • Blockchain network (Hyperledger Besu validators and RPC nodes)
  • RPC gateway (ERPC for load balancing and caching)
  • Indexing layer (TheGraph node and Blockscout explorer)
  • Application layer (DApp frontend, Portal IAM, Hasura GraphQL)
  • Support services (PostgreSQL, Redis, MinIO, NGINX Ingress)
  • Observability stack (Grafana, Loki, VictoriaMetrics, Tempo)

Helm chart structure

The Asset Tokenization Kit uses an umbrella chart architecture located in kit/charts/atk/:

Chart dependencies

The main chart (kit/charts/atk/Chart.yaml) orchestrates 11 dependent subcharts:

dependencies:
  - name: support # Infrastructure (NGINX, Redis, PostgreSQL, MinIO)
  - name: observability # Metrics, logs, traces (Grafana, Loki, VictoriaMetrics)
  - name: network # Blockchain network (Besu nodes)
  - name: erpc # RPC gateway with caching
  - name: ipfs # IPFS cluster for distributed storage
  - name: blockscout # Blockchain explorer
  - name: graph-node # TheGraph indexing protocol
  - name: portal # Identity and access management
  - name: hasura # GraphQL engine for database
  - name: txsigner # Transaction signing service
  - name: dapp # Frontend application

Each subchart can be enabled/disabled via the enabled flag in values.yaml.
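For quick experiments, the same switches can also be toggled on the command line instead of in a values file; the release name and namespace below match the ones used later in this guide:

```shell
# Install with optional subcharts disabled, without editing values.yaml
helm install atk . \
  --namespace atk \
  --create-namespace \
  --set ipfs.enabled=false \
  --set observability.enabled=false
```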

Directory layout

kit/charts/atk/
├── Chart.yaml              # Chart metadata and dependencies
│   https://github.com/settlemint/asset-tokenization-kit/blob/main/kit/charts/atk/Chart.yaml
├── values.yaml             # Default configuration values
│   https://github.com/settlemint/asset-tokenization-kit/blob/main/kit/charts/atk/values.yaml
├── values-openshift.yaml   # OpenShift-specific overrides
├── templates/              # Kubernetes resource templates
│   ├── _helpers.tpl        # Template helpers
│   ├── _common-helpers.tpl # Shared helper functions
│   └── image-pull-secrets.yaml
└── charts/                 # Subchart definitions
    https://github.com/settlemint/asset-tokenization-kit/tree/main/kit/charts/atk/charts

Prerequisites

Before deploying, ensure you have:

  1. Kubernetes cluster (v1.27+)

    • Minimum 8 CPU cores, 32GB RAM for full deployment
    • Storage provisioner with dynamic PVC support
    • LoadBalancer or Ingress controller support
  2. Required tools:

    • kubectl (v1.27+)
    • helm (v3.13+)
    • bun (for chart documentation generation)
  3. Container registry access:

    • GitHub Container Registry (ghcr.io)
    • Docker Hub (docker.io)
    • Kubernetes Registry (registry.k8s.io)
  4. DNS configuration:

    • Wildcard DNS or individual records for each service hostname
    • Default hostnames use .k8s.orb.local (customize for your environment)
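For the DNS prerequisite, a single wildcard record pointed at the ingress controller's external address covers every service hostname. A sketch in BIND zone syntax, with a placeholder domain and IP:

```
; Wildcard record for all service hostnames (203.0.113.10 is a placeholder --
; substitute your ingress controller's external address)
*.atk.example.com.   300   IN   A   203.0.113.10
```

For local clusters without DNS, equivalent per-hostname entries in /etc/hosts work as well.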

Installation steps

1. Build chart dependencies

Navigate to the charts directory and update dependencies:

cd kit/charts/atk
helm dependency update

This downloads all subchart dependencies into the charts/ directory.
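You can confirm that every dependency resolved before proceeding; helm dependency list should report a status of ok for all 11 subcharts:

```shell
# Verify resolved subchart versions and downloaded archives
helm dependency list
ls charts/
```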

2. Configure values

Create a custom values file for your environment. Start by copying the default:

cp values.yaml values-production.yaml

3. Customize configuration

Edit values-production.yaml to match your environment:

Update hostnames

Replace all .k8s.orb.local hostnames with your domain:

# Local development: use .k8s.orb.local (default) or .localhost
dapp:
  ingress:
    hosts:
      - host: dapp.k8s.orb.local

erpc:
  ingress:
    hostname: rpc.k8s.orb.local

blockscout:
  blockscout:
    ingress:
      hostname: explorer.k8s.orb.local

# Staging environment with subdomain
dapp:
  ingress:
    hosts:
      - host: dapp-staging.example.com

erpc:
  ingress:
    hostname: rpc-staging.example.com

blockscout:
  blockscout:
    ingress:
      hostname: explorer-staging.example.com

graph-node:
  ingress:
    hostname: graph-staging.example.com

hasura:
  ingress:
    hostName: hasura-staging.example.com

portal:
  ingress:
    hostname: portal-staging.example.com

observability:
  grafana:
    ingress:
      hosts:
        - grafana-staging.example.com

# Production with custom domain
dapp:
  ingress:
    hosts:
      - host: dapp.example.com

erpc:
  ingress:
    hostname: rpc.example.com

blockscout:
  blockscout:
    ingress:
      hostname: explorer.example.com

graph-node:
  ingress:
    hostname: graph.example.com

hasura:
  ingress:
    hostName: hasura.example.com

portal:
  ingress:
    hostname: portal.example.com

observability:
  grafana:
    ingress:
      hosts:
        - grafana.example.com

Update authentication URLs

The DApp requires the BETTER_AUTH_URL to match the ingress hostname:

dapp:
  secretEnv:
    BETTER_AUTH_URL: "https://dapp.example.com"

Update database and storage passwords

CRITICAL: Change all default passwords before production deployment:

global:
  datastores:
    default:
      redis:
        password: "YOUR_REDIS_PASSWORD"
      postgresql:
        password: "YOUR_PG_PASSWORD"

    portal:
      postgresql:
        password: "YOUR_PORTAL_DB_PASSWORD"

    txsigner:
      postgresql:
        password: "YOUR_TXSIGNER_DB_PASSWORD"

    graphNode:
      postgresql:
        password: "YOUR_GRAPH_DB_PASSWORD"

    blockscout:
      postgresql:
        password: "YOUR_BLOCKSCOUT_DB_PASSWORD"

    hasura:
      postgresql:
        password: "YOUR_HASURA_DB_PASSWORD"

# Redis auth
support:
  redis:
    auth:
      password: "YOUR_REDIS_PASSWORD"

# Grafana admin credentials
observability:
  grafana:
    adminUser: admin
    adminPassword: "YOUR_GRAFANA_PASSWORD"

Update transaction signer mnemonic

CRITICAL: Generate a new mnemonic for production:

txsigner:
  config:
    mnemonic: "YOUR_PRODUCTION_MNEMONIC_HERE"
    derivationPath: "m/44'/60'/0'/0/0"

Use a secure mnemonic generator or BIP39 tool. Never commit this to version control.
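One cautious workflow is to generate the raw entropy offline and convert it to words with an audited BIP39 implementation; the snippet below produces the entropy only, deliberately leaving the word conversion to a dedicated tool:

```shell
# 256 bits of entropy (64 hex characters) -> basis for a 24-word BIP39 mnemonic.
# Convert with an audited, offline BIP39 tool; never paste it into a web page.
ENTROPY=$(openssl rand -hex 32)
echo "${#ENTROPY}"   # prints 64
```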

Configure resource limits

Adjust resource requests and limits based on your cluster capacity:

# Minimal resources for local testing
network:
  network-nodes:
    validatorReplicaCount: 1
    rpcReplicaCount: 1
    resources:
      requests:
        cpu: "60m"
        memory: "512Mi"
      limits:
        cpu: "360m"
        memory: "1024Mi"

dapp:
  resources:
    requests:
      cpu: "100m"
      memory: "512Mi"
    limits:
      cpu: "500m"
      memory: "1024Mi"
# Moderate resources for testing
network:
  network-nodes:
    validatorReplicaCount: 2
    rpcReplicaCount: 2
    resources:
      requests:
        cpu: "200m"
        memory: "1024Mi"
      limits:
        cpu: "1000m"
        memory: "2048Mi"

dapp:
  resources:
    requests:
      cpu: "250m"
      memory: "1024Mi"
    limits:
      cpu: "2000m"
      memory: "2048Mi"
# Full resources for production workloads
network:
  network-nodes:
    validatorReplicaCount: 4
    rpcReplicaCount: 3
    resources:
      requests:
        cpu: "500m"
        memory: "2048Mi"
      limits:
        cpu: "2000m"
        memory: "4096Mi"

dapp:
  resources:
    requests:
      cpu: "500m"
      memory: "2048Mi"
    limits:
      cpu: "4000m"
      memory: "4096Mi"

See the Resource summary section for default allocations.

Configure storage sizes

Adjust persistent volume sizes for your data retention requirements:

# Blockchain data
network:
  network-nodes:
    persistence:
      size: 100Gi  # Scale based on expected chain growth

# Metrics retention
observability:
  victoria-metrics-single:
    server:
      persistentVolume:
        size: 50Gi

# Log retention
observability:
  loki:
    singleBinary:
      persistence:
        size: 50Gi

4. Deploy the chart

Install the chart with your custom values:

helm install atk . \
  --namespace atk \
  --create-namespace \
  --values values-production.yaml \
  --timeout 20m

Or upgrade an existing deployment:

helm upgrade atk . \
  --namespace atk \
  --values values-production.yaml \
  --timeout 20m

5. Verify deployment

Check pod status:

kubectl get pods -n atk

All pods should reach Running or Completed state. The deployment includes:

  • Init jobs: network-bootstrapper (generates genesis file)
  • StatefulSets: Besu nodes, Graph Node, PostgreSQL, Redis
  • Deployments: DApp, Portal, Hasura, ERPC, Blockscout
  • DaemonSets: Node exporters, log collectors
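To block until the long-running workloads are healthy, kubectl wait can be scoped to Running pods so that completed init Job pods (which never become Ready) are skipped; the timeout value is illustrative:

```shell
# Wait until every Running pod in the namespace reports Ready
kubectl wait pods --all \
  --namespace atk \
  --for=condition=Ready \
  --field-selector=status.phase=Running \
  --timeout=15m
```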

Check service endpoints:

kubectl get ingress -n atk

Verify each ingress has an external address assigned.

6. Access applications

Once deployed, access the services via configured hostnames:

  • DApp: https://dapp.example.com
  • Blockchain Explorer: https://explorer.example.com
  • Grafana Dashboards: https://grafana.example.com
  • Hasura Console: https://hasura.example.com/console
  • RPC Endpoint: https://rpc.example.com
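A quick smoke test from outside the cluster, using the production hostnames from this guide (substitute your own); anything other than a 200 or a 30x status warrants a look at the pod logs:

```shell
# Print the HTTP status for each public endpoint (-k tolerates self-signed certs)
for host in dapp.example.com explorer.example.com grafana.example.com \
            hasura.example.com rpc.example.com; do
  code=$(curl -ks -o /dev/null -w '%{http_code}' "https://$host")
  echo "$host -> $code"
done
```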

Configuration reference

Global settings

All subcharts inherit global configuration:

global:
  # Blockchain network identity
  chainId: "53771311147"
  chainName: "ATK"

  # Labels applied to all resources
  labels:
    environment: production
    team: platform

  # Centralized datastore configuration
  datastores:
    # Shared default settings
    default:
      redis:
        host: "redis"
        port: 6379
        username: "default"
        password: "atk"
      postgresql:
        host: "postgresql"
        port: 5432
        username: "postgres"
        password: "atk"

    # Service-specific overrides
    portal:
      postgresql:
        database: "portal"
        username: "portal"
      redis:
        db: 4

    hasura:
      redis:
        cacheDb: 2
        rateLimitDb: 3

Network configuration

Control blockchain network topology:

network:
  enabled: true

  network-bootstrapper:
    settings:
      validators: 1 # Number of validator identities to generate

  network-nodes:
    validatorReplicaCount: 1 # Validator pods (consensus)
    rpcReplicaCount: 1 # RPC pods (queries)

    persistence:
      size: 20Gi # Per-node storage

    resources:
      requests:
        cpu: "60m"
        memory: "512Mi"
      limits:
        cpu: "360m"
        memory: "1024Mi"

ERPC gateway configuration

Configure RPC load balancing and caching:

erpc:
  enabled: true

  ingress:
    enabled: true
    ingressClassName: "atk-nginx"
    hostname: rpc.k8s.orb.local

  resources:
    requests:
      cpu: "60m"
      memory: "256Mi"
    limits:
      cpu: "360m"
      memory: "512Mi"

ERPC caches responses in Redis databases configured in global.datastores.erpc.
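The Redis database indexes for ERPC follow the same pattern as the hasura override shown under Global settings; the key names below are illustrative only, so confirm them against your chart version's values.yaml:

```yaml
# Illustrative sketch -- verify key names against your chart's values.yaml
global:
  datastores:
    erpc:
      redis:
        cacheDb: 0
        sharedStateDb: 1
```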

DApp configuration

Frontend application settings:

dapp:
  enabled: true

  image:
    repository: ghcr.io/settlemint/asset-tokenization-kit
    # tag: defaults to chart appVersion

  ingress:
    enabled: true
    hosts:
      - host: dapp.k8s.orb.local
        paths:
          - path: /
            pathType: ImplementationSpecific

  # Environment variables (stored as secrets)
  secretEnv:
    BETTER_AUTH_URL: "https://dapp.k8s.orb.local"
    SETTLEMINT_BLOCKSCOUT_UI_ENDPOINT: "https://explorer.k8s.orb.local/"
    SETTLEMINT_MINIO_ENDPOINT: "http://minio:9000"
    SETTLEMINT_MINIO_ACCESS_KEY: "console"
    SETTLEMINT_MINIO_SECRET_KEY: "console123"

  resources:
    requests:
      cpu: "100m"
      memory: "1024Mi"
    limits:
      cpu: "3000m"
      memory: "2048Mi"

Selective component deployment

Disable components not required for your environment:

# Minimal deployment (no observability, no IPFS)
observability:
  enabled: false

ipfs:
  enabled: false

# Keep core services
network:
  enabled: true
erpc:
  enabled: true
graph-node:
  enabled: true
hasura:
  enabled: true
dapp:
  enabled: true
support:
  enabled: true

Resource summary

Default resource allocations with full stack enabled:

| Component | Replicas | Request CPU | Limit CPU | Request memory | Limit memory | Storage |
|---|---|---|---|---|---|---|
| network.network-nodes | 2 | 60m (total 120m) | 360m (total 720m) | 512Mi (total 1024Mi) | 1024Mi (total 2048Mi) | 20Gi (total 40Gi) |
| erpc | 1 | 60m | 360m | 256Mi | 512Mi | - |
| blockscout.blockscout | 1 | 100m | 600m | 640Mi | 1280Mi | - |
| blockscout.frontend | 1 | 60m | 360m | 320Mi | 640Mi | - |
| graph-node | 1 | 60m | 360m | 512Mi | 1024Mi | - |
| hasura | 1 | 80m | 480m | 384Mi | 768Mi | - |
| portal | 1 | 60m | 360m | 256Mi | 512Mi | - |
| txsigner | 1 | 60m | 360m | 192Mi | 384Mi | - |
| dapp | 1 | 100m | 3000m | 1024Mi | 2048Mi | - |
| support.ingress-nginx | 1 | 120m | 720m | 256Mi | 512Mi | - |
| support.redis | 1 | 40m | 240m | 64Mi | 128Mi | 1Gi |
| support.postgresql | 1 | 80m | 480m | 256Mi | 512Mi | 8Gi |
| support.minio | 1 | 50m | 300m | 256Mi | 512Mi | - |
| observability.grafana | 1 | 60m | 360m | 256Mi | 512Mi | - |
| observability.loki | 1 | 200m | 1200m | 512Mi | 1024Mi | 10Gi |
| observability.victoria-metrics-single | 1 | 60m | 360m | 256Mi | 512Mi | 10Gi |
| observability.alloy | 1 | 120m | 720m | 512Mi | 1024Mi | - |
| Totals | - | 1.43 cores | 11.0 cores | 6.8Gi | 13.6Gi | 69Gi |

The request totals represent the minimum cluster capacity required to schedule the full stack; the limits define the burst ceiling. Add overhead for system components (kube-system, DNS, etc.).

See also

  • Production operations - Production best practices, monitoring, and troubleshooting
  • Testing and QA - Running tests against deployed environments
  • Developer FAQ - Common deployment issues