Deploy Asset Tokenization Kit on ADI Foundation testnet with AWS EKS

Complete deployment guide for running the Asset Tokenization Kit on ADI Foundation's Layer 2 testnet using AWS EKS Auto Mode. This guide covers smart contract deployment, Kubernetes infrastructure setup, networking configuration, and production-ready observability.

About ADI Foundation

ADI Foundation is an Abu Dhabi-based blockchain initiative founded by Sirius International, the digital arm of IHC (a $240B UAE holding company). ADI Chain is a modular, high-performance Layer 2 blockchain secured by Ethereum, built on ZKsync's Atlas and Airbender technology stacks.

Mission and vision

ADI Foundation's goal is to bring one billion people onchain by 2030, primarily targeting emerging markets across the Middle East, Africa, and Asia. These regions represent approximately 75% of the world's population but have been largely excluded from blockchain innovation due to infrastructure gaps and compliance challenges.

Technical foundation

ADI Chain is an EVM-compatible Layer 2 that executes transactions off-chain, verifies them via zero-knowledge validity proofs, and finalizes on Ethereum. This architecture combines high throughput with Ethereum-grade security while dramatically reducing transaction costs.

Key capabilities:

  • Modular Layer 3 domains: Entities can operate compliance-optimized L3 chains segmented by jurisdiction, sector, or policy while staying connected to ADI Chain L2
  • Built-in compliance: Unlike most blockchain networks, ADI Chain provides L3 chains with native compliance capabilities meeting regulatory requirements for government and institutional use
  • ZKsync infrastructure: Leverages ZKsync's proven technology for zero-knowledge proofs and rollup execution
  • Cross-border interoperability: Designed for interconnected institutional and government L3s enabling transparent, trackable cross-border flows with same-day settlement

Target use cases

ADI Chain empowers institutions and governments across multiple verticals:

  • Fintech: Stablecoins, cross-border remittances, decentralized financial networks
  • Tokenized RWAs: Digital securities, bonds, funds, real estate
  • Government services: Digital identity, healthcare data, logistics
  • Supply chain: Transparent tracking and compliance across borders

For developers building on ADI Chain, refer to the ADI Chain Documentation and Developer Quickstart.

Deployment overview

This guide walks through a complete production deployment of the Asset Tokenization Kit on ADI Foundation's testnet. The process involves three major phases:

  1. Smart contract deployment: Deploy ATK smart contracts to ADI testnet using Hardhat Ignition
  2. Infrastructure provisioning: Create and configure AWS EKS Auto Mode cluster with networking and load balancing
  3. Service orchestration: Deploy ATK Helm charts with contracts, subgraph, and application services

Prerequisites

Before starting this deployment, ensure you have:

  • ATK license and Harbor credentials: Valid SettleMint ATK license with Harbor registry credentials (username and password). Contact [email protected] if you haven't received your credentials.
  • AWS account: Active account with permissions to create EKS clusters, VPCs, subnets, and IAM resources
  • AWS CLI: Installed and configured with appropriate credentials (aws sts get-caller-identity should succeed)
  • kubectl: Kubernetes CLI tool (version 1.28+)
  • Helm: Kubernetes package manager (version 3.14+)
  • Bun: JavaScript runtime and package manager (version 1.1+)
  • ADI testnet private key: Funded wallet private key for contract deployment (obtain test ADI tokens from ADI faucet)
  • Domain access: Ability to configure DNS records for your deployment domain (e.g., Cloudflare, Route 53)
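
Before starting, a quick sanity check confirms the tooling and the ADI testnet RPC are reachable. The snippet below is a minimal sketch: it assumes the tools are already on your PATH and uses the public RPC endpoint configured later in this guide (chain ID 36900, which is 0x9024 in hex).

# Verify CLI tooling is installed and authenticated
aws sts get-caller-identity
kubectl version --client
helm version --short
bun --version

# Verify the ADI testnet RPC responds with the expected chain ID (0x9024 = 36900)
curl -s https://rpc.ab.testnet.adifoundation.ai/ \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"eth_chainId","params":[]}'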

Smart contract deployment

Get source code

Clone the Asset Tokenization Kit repository and install dependencies:

git clone https://github.com/settlemint/asset-tokenization-kit.git
cd asset-tokenization-kit
bun install

Generate contract artifacts and ABIs:

bun artifacts

This command compiles Solidity contracts, generates TypeScript types, and creates ABI files needed for dApp and subgraph integration.

Configure Hardhat for ADI testnet

Add ADI testnet configuration to kit/contracts/hardhat.config.ts:

const config: HardhatUserConfig = {
  // ... existing configuration ...
  networks: {
    // ... existing networks ...
    aditestnet: {
      url: "https://rpc.ab.testnet.adifoundation.ai/",
      gasPrice: "auto",
      gasMultiplier: 1.15, // 15% buffer for testnet variability
      ignition: {
        explorerUrl: "https://explorer.ab.testnet.adifoundation.ai/",
      },
      accounts: [process.env.ADI_TESTNET_PRIVATE_KEY || ""],
    },
  },
  ignition: {
    requiredConfirmations: 1, // Single confirmation for testnet
  },
};

Configuration details:

  • gasPrice: "auto": Hardhat automatically determines appropriate gas price from network conditions
  • gasMultiplier: 1.15: 15% buffer prevents out-of-gas failures during network congestion
  • requiredConfirmations: 1: A single confirmation is sufficient for testnet deployments; increase this for production networks
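
To see what gasPrice: "auto" resolves to at deployment time, you can query the current gas price directly from the RPC. A small sketch against the public endpoint from the configuration above; the result is a hex-encoded value in wei.

# Query the current gas price reported by the ADI testnet RPC (hex-encoded wei)
GAS_HEX=$(curl -s https://rpc.ab.testnet.adifoundation.ai/ \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"eth_gasPrice","params":[]}' | jq -r '.result')

# Convert to decimal (printf accepts the 0x prefix)
printf 'Current gas price: %d wei\n' "$GAS_HEX"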

Deploy implementation contracts

Set your funded private key as an environment variable and deploy contracts:

export ADI_TESTNET_PRIVATE_KEY="0x..."
bunx hardhat --network aditestnet ignition deploy ignition/modules/main.ts

Hardhat Ignition orchestrates the deployment of all ATK smart contracts in the correct dependency order. The deployment includes:

  • ATKSystemFactory: Main factory contract for creating asset systems
  • Token implementations: Bond, Equity, Fund, Stablecoin, Deposit, and Real Estate token templates
  • Compliance modules: Identity verification, country/address allow/block lists, supply limits, time locks, transfer approvals
  • Addon factories: Airdrop, vesting, yield schedule, and cross-chain settlement (XVP) factories

Deployment output (example):

Deploying ATK System Contracts to aditestnet...

SystemFactoryModule#ATKSystemFactory - 0xbd6bFfcaD96244E46999E3b1EB1C2fE2f347be29
TokenBondModule#TokenBondImplementation - 0x4331D1cb699b1C71Aca139C26D29969D562Bd093
TokenEquityModule#TokenEquityImplementation - 0x5BfA71729A60D686dA086712c29EEd992E3a9E9e
...

Locate deployed contract addresses

Hardhat Ignition stores all deployed contract addresses in a JSON file within the deployment artifacts directory. This file serves as the deployment manifest containing every contract address from your deployment.

Deployment artifacts location:

kit/contracts/ignition/deployments/chain-<chainId>/deployed_addresses.json

For ADI testnet (chain ID 36900), the file is located at:

kit/contracts/ignition/deployments/chain-36900/deployed_addresses.json

Example deployed_addresses.json structure:

{
  "SystemFactoryModule#ATKSystemFactory": "0xbd6bFfcaD96244E46999E3b1EB1C2fE2f347be29",
  "ComplianceModule#SmartIdentityVerification": "0x727312A34e941C295Ec4A8bD9ccb59c57d7644Af",
  "ComplianceModule#CountryAllowList": "0x1069C2b2247885E893810977a248e0f1BC06E473",
  "TokenBondModule#BondFactoryImplementation": "0x296b67b9da894C967447668E2f8fD95340b0Aa44",
  "TokenBondModule#TokenBondImplementation": "0x4331D1cb699b1C71Aca139C26D29969D562Bd093",
  ...
}

What is the Ignition deployment folder?

Hardhat Ignition creates a deployment folder for each network (identified by chain ID) to track deployment state and addresses. This folder contains:

  • deployed_addresses.json: Complete mapping of module names to deployed contract addresses
  • journal.jsonl: Deployment execution log with timestamps and transaction hashes
  • Build artifacts: Compiled contract metadata and deployment parameters

These artifacts enable:

  • Reproducible deployments: Ignition tracks which contracts are already deployed, preventing accidental re-deployments
  • Upgrade management: When upgrading proxy implementations, Ignition knows which addresses to update
  • Deployment verification: You can verify contract addresses against the deployment log
  • Configuration management: Export addresses directly to application configuration

Extract addresses for Helm configuration:

# View all deployed addresses
cat kit/contracts/ignition/deployments/chain-36900/deployed_addresses.json

# Extract specific address using jq
jq -r '."SystemFactoryModule#ATKSystemFactory"' \
  kit/contracts/ignition/deployments/chain-36900/deployed_addresses.json

You'll use these addresses to populate the dapp.secretEnv section of the Helm chart configuration (detailed in the Install ATK Helm chart section below).

Find deployment block number:

The deployment block number is not stored in deployed_addresses.json. Search for your ATKSystemFactory address in the ADI testnet explorer to find the block number where it was deployed. You'll need this for subgraph configuration.
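
If you prefer the command line to the explorer, one option is to copy the factory deployment's transaction hash from journal.jsonl and ask the RPC for its receipt. A sketch under that assumption; the transaction hash below is a placeholder you fill in yourself.

# Transaction hash of the ATKSystemFactory deployment (copy it from journal.jsonl)
TX_HASH="0x<factory-deployment-tx-hash>"

# The receipt's blockNumber field is the subgraph startBlock (returned hex-encoded)
BLOCK_HEX=$(curl -s https://rpc.ab.testnet.adifoundation.ai/ \
  -H 'Content-Type: application/json' \
  -d "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"eth_getTransactionReceipt\",\"params\":[\"$TX_HASH\"]}" \
  | jq -r '.result.blockNumber')

printf 'Deployment block: %d\n' "$BLOCK_HEX"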

Infrastructure provisioning

Create EKS Auto Mode cluster

AWS EKS Auto Mode provides simplified Kubernetes cluster management with automatic node provisioning, scaling, and patching. Create a cluster using the AWS Console, CLI, or Infrastructure-as-Code tools.

AWS CLI example (simplified):

aws eks create-cluster \
  --name adi-demo \
  --region me-south-1 \
  --role-arn arn:aws:iam::<account-id>:role/<eks-cluster-role> \
  --resources-vpc-config subnetIds=<subnet-1>,<subnet-2>,<subnet-3> \
  --compute-config enabled=true \
  --kubernetes-network-config elasticLoadBalancing={enabled=true} \
  --storage-config blockStorage={enabled=true}

Note that create-cluster also requires an existing cluster IAM role and subnet IDs (shown as placeholders above); if you enable the built-in node pools, the compute config additionally takes node pool names and a node IAM role.

Key EKS Auto Mode features:

  • Automatic node scaling: Provisions compute capacity based on pod requests without managing node groups
  • Built-in storage: EBS CSI driver automatically installed for persistent volumes
  • Integrated load balancing: ALB controller prerequisites configured by default

For production deployments, use Pulumi or Terraform to define infrastructure as code. Refer to ATK Pulumi examples for reusable templates.

Configure AWS credentials

Set AWS credentials as environment variables so subsequent commands authenticate correctly:

export AWS_ACCESS_KEY_ID="AKIA..."
export AWS_SECRET_ACCESS_KEY="..."
export AWS_DEFAULT_REGION="me-south-1"

For production systems, use IAM roles attached to your CI/CD environment or workstation rather than long-lived access keys.

Obtain kubeconfig

Configure kubectl to communicate with your EKS cluster:

aws eks update-kubeconfig \
  --name adi-demo \
  --region me-south-1

Verify cluster connectivity:

kubectl cluster-info
kubectl get nodes

Configure private subnet networking

EKS Auto Mode creates private subnets for worker nodes. These subnets require NAT gateway routes for outbound internet access (pulling container images, reaching external APIs).

Check existing routes:

# Get VPC and subnet IDs
VPC_ID=$(aws eks describe-cluster --name adi-demo --region me-south-1 \
  --query 'cluster.resourcesVpcConfig.vpcId' --output text)

PRIVATE_SUBNETS=$(aws ec2 describe-subnets --region me-south-1 \
  --filters "Name=vpc-id,Values=$VPC_ID" "Name=tag:Name,Values=*private*" \
  --query 'Subnets[*].SubnetId' --output text)

# Inspect route tables for private subnets
for subnet in $PRIVATE_SUBNETS; do
  echo "Routes for subnet $subnet:"
  aws ec2 describe-route-tables --region me-south-1 \
    --filters "Name=association.subnet-id,Values=$subnet" \
    --query 'RouteTables[*].Routes'
done

If route tables lack 0.0.0.0/0 routes to a NAT gateway, create them:

# List available NAT gateways
aws ec2 describe-nat-gateways --region me-south-1 \
  --filter "Name=vpc-id,Values=$VPC_ID" "Name=state,Values=available" \
  --query 'NatGateways[*].[NatGatewayId,SubnetId]' --output table

# Add NAT gateway route to each private subnet route table
for subnet in $PRIVATE_SUBNETS; do
  RTB_ID=$(aws ec2 describe-route-tables --region me-south-1 \
    --filters "Name=association.subnet-id,Values=$subnet" \
    --query 'RouteTables[0].RouteTableId' --output text)

  aws ec2 create-route --region me-south-1 \
    --route-table-id $RTB_ID \
    --destination-cidr-block 0.0.0.0/0 \
    --nat-gateway-id <nat-gateway-id>
done

Replace <nat-gateway-id> with an appropriate NAT gateway from the output above. Each private subnet can share a single NAT gateway, or use dedicated gateways for higher availability.
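
To confirm the routes actually give workloads outbound access, you can run a throwaway pod and hit the ADI RPC endpoint from inside the cluster. A minimal sketch, assuming the public curlimages/curl image; note that pulling the image itself requires working egress, so a pod stuck in ImagePullBackOff is also a useful signal.

# Run a one-off pod that curls the ADI RPC; any HTTP status code printed means egress works
kubectl run nat-check --rm -it --restart=Never \
  --image=curlimages/curl:8.5.0 --command -- \
  sh -c 'curl -sS -o /dev/null -w "ADI RPC reachable, HTTP %{http_code}\n" https://rpc.ab.testnet.adifoundation.ai/'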

Install AWS Load Balancer Controller

The AWS Load Balancer Controller provisions Application Load Balancers automatically based on Kubernetes Ingress resources. ATK uses ALBs to expose dApp, Grafana, and MinIO services with SSL termination and path-based routing.

Tag subnets for load balancer discovery:

Note: The following steps require the VPC_ID variable to be set. If you are running these commands in a new shell session or skipped previous steps, set it with:

# Ensure VPC_ID is set
VPC_ID=$(aws eks describe-cluster --name adi-demo --region me-south-1 \
  --query 'cluster.resourcesVpcConfig.vpcId' --output text)

if [ -z "$VPC_ID" ]; then
  echo "Error: VPC_ID is not set. Cannot retrieve VPC information."
  exit 1
fi

# Tag public subnets for external load balancers
PUBLIC_SUBNETS=$(aws ec2 describe-subnets --region me-south-1 \
  --filters "Name=vpc-id,Values=$VPC_ID" "Name=tag:Name,Values=*public*" \
  --query 'Subnets[*].SubnetId' --output text)

for subnet in $PUBLIC_SUBNETS; do
  aws ec2 delete-tags --region me-south-1 --resources $subnet \
    --tags Key=kubernetes.io/cluster/adi-demo 2>/dev/null || true

  aws ec2 create-tags --region me-south-1 --resources $subnet \
    --tags \
      Key=kubernetes.io/cluster/adi-demo,Value=shared \
      Key=kubernetes.io/role/elb,Value=1
done

# Tag private subnets for internal load balancers
PRIVATE_SUBNETS=$(aws ec2 describe-subnets --region me-south-1 \
  --filters "Name=vpc-id,Values=$VPC_ID" "Name=tag:Name,Values=*private*" \
  --query 'Subnets[*].SubnetId' --output text)

for subnet in $PRIVATE_SUBNETS; do
  aws ec2 delete-tags --region me-south-1 --resources $subnet \
    --tags Key=kubernetes.io/cluster/adi-demo 2>/dev/null || true

  aws ec2 create-tags --region me-south-1 --resources $subnet \
    --tags \
      Key=kubernetes.io/cluster/adi-demo,Value=shared \
      Key=kubernetes.io/role/internal-elb,Value=1
done

Create IAM policy and role:

AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
OIDC_PROVIDER=$(aws eks describe-cluster --name adi-demo --region me-south-1 \
  --query "cluster.identity.oidc.issuer" --output text | sed -e "s/^https:\/\///")

# Download and create IAM policy for ALB controller
curl -o /tmp/iam-policy.json \
  https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.11.0/docs/install/iam_policy.json

aws iam create-policy \
  --policy-name AWSLoadBalancerControllerIAMPolicy \
  --policy-document file:///tmp/iam-policy.json \
  2>/dev/null || echo "Policy already exists"

# Create OIDC provider for service account authentication
aws iam create-open-id-connect-provider \
  --url https://${OIDC_PROVIDER} \
  --client-id-list sts.amazonaws.com \
  2>/dev/null || echo "OIDC provider already exists"

# Create trust policy for service account role
cat > /tmp/trust-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_PROVIDER}:aud": "sts.amazonaws.com",
          "${OIDC_PROVIDER}:sub": "system:serviceaccount:kube-system:aws-load-balancer-controller"
        }
      }
    }
  ]
}
EOF

# Create IAM role for controller service account
aws iam create-role \
  --role-name AmazonEKSLoadBalancerControllerRole \
  --assume-role-policy-document file:///tmp/trust-policy.json \
  2>/dev/null || echo "Role already exists"

# Attach policy to role
aws iam attach-role-policy \
  --role-name AmazonEKSLoadBalancerControllerRole \
  --policy-arn arn:aws:iam::${AWS_ACCOUNT_ID}:policy/AWSLoadBalancerControllerIAMPolicy

Deploy controller with Helm:

# Create Kubernetes service account with IAM role annotation
kubectl create serviceaccount aws-load-balancer-controller -n kube-system \
  --dry-run=client -o yaml | kubectl apply -f -

kubectl annotate serviceaccount aws-load-balancer-controller -n kube-system \
  eks.amazonaws.com/role-arn=arn:aws:iam::${AWS_ACCOUNT_ID}:role/AmazonEKSLoadBalancerControllerRole \
  --overwrite

# Install controller via Helm
helm repo add eks https://aws.github.io/eks-charts
helm repo update

helm upgrade --install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=adi-demo \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller \
  --set region=me-south-1 \
  --set vpcId=$VPC_ID \
  --wait

Verify controller is running:

kubectl get deployment -n kube-system aws-load-balancer-controller
kubectl get pods -n kube-system -l app.kubernetes.io/name=aws-load-balancer-controller

# Should show 2/2 READY replicas
kubectl wait --for=condition=available --timeout=300s \
  deployment/aws-load-balancer-controller -n kube-system

Verify subnet tags:

aws ec2 describe-subnets --region me-south-1 \
  --filters "Name=vpc-id,Values=$VPC_ID" \
  --query 'Subnets[*].[SubnetId,Tags[?Key==`Name`].Value|[0],Tags[?Key==`kubernetes.io/role/elb`].Value|[0],Tags[?Key==`kubernetes.io/cluster/adi-demo`].Value|[0]]' \
  --output table

Look for kubernetes.io/role/elb=1 on public subnets and kubernetes.io/role/internal-elb=1 on private subnets.

Service configuration

Configure subgraph indexing

The Graph Node subgraph indexes blockchain events emitted by ATK smart contracts, providing efficient historical data queries for the dApp. Update the subgraph manifest with your deployed contract addresses.

Update subgraph manifest (kit/subgraph/subgraph.yaml):

dataSources:
  - kind: ethereum
    name: ATKSystemFactory
    network: settlemint
    source:
      address: "0xbd6bFfcaD96244E46999E3b1EB1C2fE2f347be29" # Your factory address
      abi: ATKSystemFactory
      startBlock: 74703 # Block where factory was deployed

The startBlock tells Graph Node where to begin indexing. Using the deployment block (rather than 0) dramatically reduces initial sync time.

Update factory address in compilation script (kit/subgraph/tools/compile.sh):

#!/bin/bash
# ... existing script content ...

# Add your network's factory address
export FACTORY_ADDRESS_ADITESTNET="0xbd6bFfcaD96244E46999E3b1EB1C2fE2f347be29"

Compile subgraph:

cd kit/subgraph
bun run compile

This generates subgraph.yaml from the template, validates the manifest schema, and produces the final build artifacts. The command outputs an IPFS hash representing the compiled subgraph:

Subgraph compiled: QmfNAxKTYGdnvpgTYvhNeZ4DpdSKEy4dLjbxorHiTVXAhG

Note this hash—you'll use it to configure the Graph Node deployment.

Create subgraph ConfigMap

Graph Node needs the subgraph name and IPFS hash in a ConfigMap:

kubectl create namespace atk --dry-run=client -o yaml | kubectl apply -f -

kubectl create configmap besu-subgraph -n atk \
  --from-literal=SUBGRAPH=kit:QmfNAxKTYGdnvpgTYvhNeZ4DpdSKEy4dLjbxorHiTVXAhG \
  --dry-run=client -o yaml | kubectl apply -f -

Replace QmfNAxKTYGdnvpgTYvhNeZ4DpdSKEy4dLjbxorHiTVXAhG with your actual subgraph hash from the compilation step.
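
A quick read-back confirms the ConfigMap carries the expected kit:<hash> value before Graph Node consumes it:

# Print the subgraph name and IPFS hash stored in the ConfigMap
kubectl get configmap besu-subgraph -n atk -o jsonpath='{.data.SUBGRAPH}{"\n"}'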

Create ABI ConfigMaps

The Portal (admin UI) requires contract ABIs to construct transaction data. The repository includes a helper script that creates ConfigMaps from generated ABIs.

Run the script from repository root:

./kit/charts/tools/create-abi-configmaps.sh

Or from the charts directory:

cd kit/charts
./tools/create-abi-configmaps.sh

What it does:

  • Reads JSON ABI files from kit/contracts/.generated/portal/
  • Creates a ConfigMap for each ABI (e.g., abi-atktoken, abi-compliance)
  • Annotates ConfigMaps with settlemint.com/artifact=abi for discovery
  • Uses lowercase naming convention for consistency

The script source is available at kit/charts/tools/create-abi-configmaps.sh.

Verify ConfigMaps were created:

kubectl get configmaps -n atk -l settlemint.com/artifact=abi

Create storage class for EKS Auto Mode

EKS Auto Mode requires a specific storage class configuration that targets Auto Mode compute. Create the storage class:

cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: auto-ebs-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
allowedTopologies:
- matchLabelExpressions:
  - key: eks.amazonaws.com/compute-type
    values:
    - auto
provisioner: ebs.csi.eks.amazonaws.com
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  type: gp3
  encrypted: "true"
EOF

Key parameters:

  • provisioner: ebs.csi.eks.amazonaws.com: Uses EKS-managed EBS CSI driver (not the legacy kubernetes.io/aws-ebs provisioner)
  • allowedTopologies: Restricts volume provisioning to Auto Mode nodes only
  • volumeBindingMode: WaitForFirstConsumer: Delays volume creation until a pod is scheduled, ensuring volume and pod are in the same availability zone
  • parameters.type: gp3: General Purpose SSD v3, balanced performance and cost
  • parameters.encrypted: "true": Encrypts volumes at rest using default AWS KMS key

Create Let's Encrypt ClusterIssuer

Cert-manager automatically provisions SSL certificates from Let's Encrypt for Ingress resources. Create a production ClusterIssuer:

cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [email protected]  # Replace with your actual email address
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: atk-nginx
EOF

Replace [email protected] with your email address. Let's Encrypt sends certificate expiration notifications to this address (though cert-manager auto-renews certificates 30 days before expiry).

HTTP-01 challenge: Proves domain ownership by serving a specific file at http://<your-domain>/.well-known/acme-challenge/. The ALB ingress controller automatically routes ACME challenges to cert-manager.
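
Once applied, the issuer registers an ACME account with Let's Encrypt and should report Ready within a few seconds. A quick check, assuming cert-manager (and its CRDs) is already installed in the cluster:

# The ClusterIssuer should report Ready once its ACME account is registered
kubectl get clusterissuer letsencrypt-prod
kubectl wait --for=condition=Ready clusterissuer/letsencrypt-prod --timeout=120s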

Helm chart installation

Configure Harbor registry access

ATK container images are distributed through SettleMint's private Harbor registry. When you purchase an ATK license, you receive Harbor credentials that grant access to production-ready, security-scanned container images.

Why Harbor credentials are required:

  • Private components: Many ATK services (dApp, Portal, custom subgraph indexers) are proprietary and only available through Harbor
  • Security scanning: All images undergo vulnerability scanning and compliance checks before distribution
  • Version control: Licensed users receive access to specific tested versions with support guarantees
  • Production readiness: Harbor images include optimizations and configurations for production deployments

Your Harbor credentials will be provided by SettleMint as part of your license agreement. Without valid credentials, the deployment will fail when Kubernetes attempts to pull private images.

If you haven't received your credentials, contact SettleMint support at [email protected] or your account representative.
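
Before wiring the credentials into Helm values, you can verify them with a registry login. A sketch, assuming Docker is available locally; the HARBOR_USERNAME and HARBOR_PASSWORD variables are placeholders for the credentials you received.

# Verify Harbor credentials before using them in the Helm install
export HARBOR_USERNAME="your-username"
export HARBOR_PASSWORD="your-access-token"

echo "$HARBOR_PASSWORD" | docker login harbor.settlemint.com \
  --username "$HARBOR_USERNAME" --password-stdin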

Update Helm charts for Harbor registry

Before deploying, update all Helm chart values files to use Harbor-proxied container images. The package:harbor script automatically prefixes all image repositories with the Harbor registry path:

cd kit/charts
bun run package:harbor

What this script does:

  • Scans all values.yaml and values-*.yaml files in kit/charts/atk/
  • Prefixes image registries with harbor.settlemint.com/:
    • ghcr.io/settlemint/atk-dapp → harbor.settlemint.com/ghcr.io/settlemint/atk-dapp
    • docker.io/postgres → harbor.settlemint.com/docker.io/postgres
    • graphprotocol/graph-node → harbor.settlemint.com/docker.io/graphprotocol/graph-node
  • Preserves existing Harbor prefixes (script is idempotent)
  • Handles explicit registries (ghcr.io, docker.io, quay.io, registry.k8s.io) and implicit Docker Hub images

Example transformation:

# Before
dapp:
  image:
    registry: ghcr.io
    repository: settlemint/atk-dapp
    tag: 2.0.0-main.131241

# After
dapp:
  image:
    registry: harbor.settlemint.com/ghcr.io
    repository: settlemint/atk-dapp
    tag: 2.0.0-main.131241

Verification:

# Check that registries were updated
grep -r "registry:" kit/charts/atk/values.yaml | head -5

All registry values should now be prefixed with harbor.settlemint.com/.

The script source is available at kit/charts/tools/package-harbor.ts.

Install ATK Helm chart

The ATK Helm chart orchestrates all services: blockchain node (optional), subgraph indexer, Hasura GraphQL engine, dApp, MinIO object storage, Grafana observability, and NGINX ingress controller.

Before installing, prepare the following:

  1. Contract addresses: From Hardhat Ignition deployment artifacts at kit/contracts/ignition/deployments/chain-36900/deployed_addresses.json (see Locate deployed contract addresses)
  2. Harbor credentials: Your licensed username and password for harbor.settlemint.com
  3. Domain name: Your base domain for ATK services (the example below uses example.com—replace with your actual domain like yourcompany.com)
  4. MinIO credentials: Generate strong random credentials for object storage access (see Generate secure credentials below)

Domain naming convention: ATK exposes the following services publicly via ingress:

  • Main dApp: atk.example.com
  • Grafana dashboards: grafana.atk.example.com
  • MinIO S3 API: minio.atk.example.com (for object storage operations)

Note: The MinIO web console (administration UI) is not exposed externally for security. Only the S3-compatible API is publicly accessible, which the dApp uses for storing asset metadata, documents, and images. To access the MinIO console for administration, use kubectl port-forward.

Internal services (Graph Node, Hasura, ERPC) are only accessible within the Kubernetes cluster.

Replace all instances of example.com in the Helm values below with your actual domain before installation.

Generate secure credentials

CRITICAL SECURITY: Never use default or weak credentials in production deployments. Generate strong random credentials for all services.

MinIO credentials (S3 object storage):

# Generate secure random credentials
MINIO_ACCESS_KEY=$(openssl rand -base64 20 | tr -d '/+=' | head -c 20)
MINIO_SECRET_KEY=$(openssl rand -base64 32 | tr -d '/+=' | head -c 32)

# Save credentials securely (use a password manager in production)
echo "MinIO Access Key: $MINIO_ACCESS_KEY"
echo "MinIO Secret Key: $MINIO_SECRET_KEY"

# Export for use in Helm values
export MINIO_ACCESS_KEY
export MINIO_SECRET_KEY

IMPORTANT: Store these credentials securely in a password manager or secret management system (e.g., AWS Secrets Manager, HashiCorp Vault, 1Password). You'll need them for:

  • Helm chart installation (see SETTLEMINT_MINIO_ACCESS_KEY and SETTLEMINT_MINIO_SECRET_KEY below)
  • MinIO console access via port-forward for administration
  • Troubleshooting object storage issues
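
Since this deployment already runs on AWS, one option is to park the generated values in AWS Secrets Manager right away. A sketch; the secret name atk/minio-credentials is just an example.

# Store the generated MinIO credentials (example secret name)
aws secretsmanager create-secret \
  --name atk/minio-credentials \
  --region me-south-1 \
  --secret-string "{\"accessKey\":\"${MINIO_ACCESS_KEY}\",\"secretKey\":\"${MINIO_SECRET_KEY}\"}"

# Retrieve them later when needed
aws secretsmanager get-secret-value \
  --secret-id atk/minio-credentials \
  --region me-south-1 \
  --query SecretString --output text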

Create namespace and install:

kubectl create namespace atk --dry-run=client -o yaml | kubectl apply -f -

helm upgrade --install atk ./kit/charts/atk -n atk --create-namespace --timeout 15m -f - <<EOF
global:
  chainId: "36900"
  imagePullCredentials:
    registries:
      harbor:
        enabled: true
        registry: harbor.settlemint.com
        username: your-username
        password: your-access-token

observability:
  metrics-server:
    enabled: false  # EKS includes metrics-server by default
  grafana:
    global:
      imagePullSecrets:
        - name: image-pull-secret-harbor
        - name: image-pull-secret-ghcr
        - name: image-pull-secret-docker
    ingress:
      enabled: true
      ingressClassName: atk-nginx
      annotations:
        cert-manager.io/cluster-issuer: "letsencrypt-prod"
      hosts:
        - grafana.atk.example.com  # Replace example.com with your domain
      tls:
        - secretName: grafana-atk-tls
          hosts:
            - grafana.atk.example.com

network:
  enabled: false  # Disable local Besu node, use ADI RPC

blockscout:
  enabled: false  # ADI provides block explorer

erpc:
  ingress:
    enabled: false  # Internal service only
  config:
    projects:
      - id: aditestnet
        networks:
          - architecture: evm
            evm:
              integrity:
                enforceHighestBlock: true
                enforceGetLogsBlockRange: true
            directiveDefaults:
              retryEmpty: true
            failsafe:
              - matchMethod: "eth_getLogs"
                timeout:
                  duration: 45s
                retry:
                  maxAttempts: 3
                  delay: 500ms
                  backoffFactor: 2
                  backoffMaxDelay: 10s
                  jitter: 300ms
                hedge:
                  quantile: 0.9
                  minDelay: 200ms
                  maxDelay: 4s
                  maxCount: 1
              - matchMethod: "trace_*|debug_*|arbtrace_*"
                timeout:
                  duration: 90s
                retry:
                  maxAttempts: 1
              - matchMethod: "eth_getBlock*|eth_getTransaction*"
                timeout:
                  duration: 6s
                retry:
                  maxAttempts: 2
                  delay: 200ms
                  backoffFactor: 1.5
                  backoffMaxDelay: 3s
                  jitter: 150ms
              - matchMethod: "*"
                matchFinality:
                  - unfinalized
                  - realtime
                timeout:
                  duration: 4s
                retry:
                  maxAttempts: 2
                  delay: 150ms
                  jitter: 150ms
                hedge:
                  delay: 250ms
                  maxCount: 1
              - matchMethod: "*"
                matchFinality:
                  - finalized
                timeout:
                  duration: 20s
                retry:
                  maxAttempts: 4
                  delay: 400ms
                  backoffFactor: 1.8
                  backoffMaxDelay: 8s
                  jitter: 250ms
              - matchMethod: "*"
                timeout:
                  duration: 12s
                retry:
                  maxAttempts: 3
                  delay: 300ms
                  backoffFactor: 1.4
                  backoffMaxDelay: 5s
                  jitter: 200ms
                hedge:
                  quantile: 0.95
                  minDelay: 120ms
                  maxDelay: 2s
                  maxCount: 2
        upstreams:
          - id: public-rpc
            endpoint: https://rpc.ab.testnet.adifoundation.ai/
            evm: {}
            failsafe:
              - matchMethod: "*"
                circuitBreaker:
                  failureThresholdCount: 40
                  failureThresholdCapacity: 80
                  halfOpenAfter: 120s
                  successThresholdCount: 3
                  successThresholdCapacity: 10
  initContainer:
    tcpCheck:
      enabled: false  # ADI RPC always available

graph-node:
  configTemplate: |
    [store]
      [store.primary]
      connection = "postgresql://\${PRIMARY_SUBGRAPH_DATA_PGUSER}:\${PRIMARY_SUBGRAPH_DATA_PGPASSWORD}@\${PRIMARY_SUBGRAPH_DATA_PGHOST}:\${PRIMARY_SUBGRAPH_DATA_PGPORT}/\${PRIMARY_SUBGRAPH_DATA_PGDATABASE}"
      pool_size = 10
      weight = 1

    [chains]
    ingestor = "graph-node-index-0"
      [chains.settlemint]
      shard = "primary"
      protocol = "ethereum"

      [[chains.settlemint.provider]]
        label = "erpc"
        [chains.settlemint.provider.details]
          features = ["archive", "traces"]
          type = "web3"
          url = "http://erpc:4000/aditestnet/evm/36900"

    [deployment]
      [[deployment.rule]]
      shards = ["primary"]
      indexers = ["graph-node-index-0"]
  ingress:
    enabled: false  # Internal service only

hasura:
  ingress:
    enabled: false  # Internal service only

portal:
  ingress:
    enabled: false  # Future feature
  config:
    network:
      nodeRpcUrl: "http://txsigner:3000/aditestnet/evm/36900"

dapp:
  image:
    tag: 2.0.0-main.131241
  ingress:
    enabled: true
    annotations:
      cert-manager.io/cluster-issuer: "letsencrypt-prod"
    hosts:
      - host: atk.example.com  # Replace example.com with your domain
        paths:
          - path: /
            pathType: ImplementationSpecific
    tls:
      - secretName: atk-tls
        hosts:
          - atk.example.com
  secretEnv:
    # All contract addresses below are from your Hardhat Ignition deployment.
    # Find them in: kit/contracts/ignition/deployments/chain-36900/deployed_addresses.json
    #
    # Mapping from deployed_addresses.json to environment variables:
    # - SystemFactoryModule#ATKSystemFactory → SETTLEMINT_SYSTEM_REGISTRY_ADDRESS
    # - ComplianceModule#* → SETTLEMINT_COMPLIANCE_MODULE_*
    # - TokenBondModule#BondFactoryImplementation → SETTLEMINT_TOKEN_FACTORY_BOND_FACTORY_IMPLEMENTATION
    # - TokenBondModule#TokenBondImplementation → SETTLEMINT_TOKEN_BOND_IMPLEMENTATION
    # (and similarly for Equity, Fund, Stablecoin, Deposit, Real Estate)
    # - AddonModule#* → SETTLEMINT_ADDON_*

    SETTLEMINT_SYSTEM_REGISTRY_ADDRESS: "0xbd6bFfcaD96244E46999E3b1EB1C2fE2f347be29"

    # Compliance Module Implementations
    SETTLEMINT_COMPLIANCE_MODULE_SMART_IDENTITY_VERIFICATION: "0x727312A34e941C295Ec4A8bD9ccb59c57d7644Af"
    SETTLEMINT_COMPLIANCE_MODULE_COUNTRY_ALLOW_LIST: "0x1069C2b2247885E893810977a248e0f1BC06E473"
    SETTLEMINT_COMPLIANCE_MODULE_COUNTRY_BLOCK_LIST: "0xe9B77Ef159e5f8C77c69AaA4fe80A899F3dDC6c9"
    SETTLEMINT_COMPLIANCE_MODULE_ADDRESS_BLOCK_LIST: "0x1880aF06209ACbd2D126E488b776307E0A268d0f"
    SETTLEMINT_COMPLIANCE_MODULE_IDENTITY_BLOCK_LIST: "0xa3cA239c50E487d63b31a0aD7F92886308723351"
    SETTLEMINT_COMPLIANCE_MODULE_IDENTITY_ALLOW_LIST: "0xf0324cC3343308375b96E386291220BE3894149e"
    SETTLEMINT_COMPLIANCE_MODULE_TOKEN_SUPPLY_LIMIT: "0x17FFF0732337c6884825DdF2db752136A4507770"
    SETTLEMINT_COMPLIANCE_MODULE_INVESTOR_COUNT: "0x4367A6d2Febb8695cE6457dc0Eb26B15D9c6aF42"
    SETTLEMINT_COMPLIANCE_MODULE_TIME_LOCK: "0x49F58a630e4572EFDA149338E36Cf46685A61503"
    SETTLEMINT_COMPLIANCE_MODULE_TRANSFER_APPROVAL: "0xfa0Fde2909eC467f0eaF300B8Ea23Ba0dF33fB0C"

    # Token Factory Implementations - Bond
    SETTLEMINT_TOKEN_FACTORY_BOND_FACTORY_IMPLEMENTATION: "0x296b67b9da894C967447668E2f8fD95340b0Aa44"
    SETTLEMINT_TOKEN_BOND_IMPLEMENTATION: "0x4331D1cb699b1C71Aca139C26D29969D562Bd093"

    # Token Factory Implementations - Equity
    SETTLEMINT_TOKEN_FACTORY_EQUITY_FACTORY_IMPLEMENTATION: "0x8823Ce8d147FbD04F8b07bF03F6Fbe358A781499"
    SETTLEMINT_TOKEN_EQUITY_IMPLEMENTATION: "0x5BfA71729A60D686dA086712c29EEd992E3a9E9e"

    # Token Factory Implementations - Fund
    SETTLEMINT_TOKEN_FACTORY_FUND_FACTORY_IMPLEMENTATION: "0x79d69AaFA6Ac8C5801FF6AcBbb32Daaf2FD28D1E"
    SETTLEMINT_TOKEN_FUND_IMPLEMENTATION: "0xd6b9BBcE4C7c045Fb6d6c5E4df493A50452a8ace"

    # Token Factory Implementations - Stablecoin
    SETTLEMINT_TOKEN_FACTORY_STABLECOIN_FACTORY_IMPLEMENTATION: "0xe5F6868971aC796110E074A3e85d7aa33239AA9e"
    SETTLEMINT_TOKEN_STABLECOIN_IMPLEMENTATION: "0x5c854C1ECB13fB9Ff537733b3645dC1C13D8C0F7"

    # Token Factory Implementations - Deposit
    SETTLEMINT_TOKEN_FACTORY_DEPOSIT_FACTORY_IMPLEMENTATION: "0xf730D4F3eE405107997C6DabF4aA69f8Fa97dc63"
    SETTLEMINT_TOKEN_DEPOSIT_IMPLEMENTATION: "0x6b6c22F1679cb2ad02c304692ce2eB75a75BC181"

    # Token Factory Implementations - Real Estate
    SETTLEMINT_TOKEN_FACTORY_REAL_ESTATE_FACTORY_IMPLEMENTATION: "0xDbbDA021e9bB146f3e4B1Cb73c864d8c7Adb7660"
    SETTLEMINT_TOKEN_REAL_ESTATE_IMPLEMENTATION: "0x0eB4399057c1BbA530FeC39E87e4BC6C68C181a9"

    # System Addon Factory Implementations
    SETTLEMINT_ADDON_AIRDROPS_PUSH_AIRDROP_FACTORY: "0xf96BB1e650e4Df836d25Bc4353735d8E78F757fc"
    SETTLEMINT_ADDON_AIRDROPS_VESTING_AIRDROP_FACTORY: "0x88B28900E46677632758971953a81071981092cE"
    SETTLEMINT_ADDON_YIELD_FIXED_YIELD_SCHEDULE_FACTORY: "0xA19e242A189981bB0134251e48d9aF1D334d50dE"
    SETTLEMINT_ADDON_XVP_XVP_SETTLEMENT_FACTORY: "0x3912C5436E6ae9037BfF8D88ad0c2a14DFf03A65"

    BETTER_AUTH_URL: "https://atk.example.com"  # Replace with your domain
    SETTLEMINT_BLOCKSCOUT_UI_ENDPOINT: "https://explorer.ab.testnet.adifoundation.ai/"
    SETTLEMINT_THEGRAPH_SUBGRAPHS_ENDPOINTS: '["http://graph-node-query:8000/subgraphs/name/kit"]'
    SETTLEMINT_MINIO_ENDPOINT: "https://minio.atk.example.com"  # Replace with your domain
    SETTLEMINT_MINIO_ACCESS_KEY: "<your-minio-access-key>"  # Use generated credentials
    SETTLEMINT_MINIO_SECRET_KEY: "<your-minio-secret-key>"  # Use generated credentials

support:
  minio:
    ingress:
      enabled: true
      ingressClassName: atk-nginx
      annotations:
        cert-manager.io/cluster-issuer: "letsencrypt-prod"
      hosts:
        - minio.atk.example.com  # Replace example.com with your domain
      tls:
        - secretName: minio-atk-tls
          hosts:
            - minio.atk.example.com
  ingress-nginx:
    controller:
      service:
        annotations:
          service.beta.kubernetes.io/aws-load-balancer-type: "external"
          service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
          service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
          service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
EOF

Configuration highlights:

  • global.chainId: ADI testnet chain ID (36900)
  • imagePullCredentials: Harbor registry authentication for private image access
  • network.enabled: false: Disables local Besu node since we're using ADI's public RPC
  • blockscout.enabled: false: ADI Foundation provides block explorer, no need to deploy our own
  • erpc.config: Configures RPC gateway with retry, hedging, and circuit breaker policies for reliability
  • graph-node.configTemplate: Connects subgraph indexer to ADI RPC via ERPC proxy
  • dapp.secretEnv: Contract addresses from kit/contracts/ignition/deployments/chain-36900/deployed_addresses.json, ADI block explorer URL, MinIO S3 API endpoint with secure generated credentials
  • support.minio.ingress.enabled: true: Exposes MinIO S3 API publicly for dApp object storage operations (web console remains internal-only)
  • support.ingress-nginx.controller.service.annotations: Provisions Network Load Balancer for ingress traffic

SECURITY WARNING: Replace all placeholder credentials (<your-*>) with your actual generated secure credentials before deploying. Never commit credentials to version control. Use environment variables, sealed secrets, or external secret managers for production deployments.

Extract contract addresses from deployment

Rather than manually copying addresses, you can extract them programmatically from the Ignition deployment manifest:

# Set the deployment file path
DEPLOYMENT_FILE="kit/contracts/ignition/deployments/chain-36900/deployed_addresses.json"

# Example: Extract system factory address
jq -r '."SystemFactoryModule#ATKSystemFactory"' $DEPLOYMENT_FILE

# Example: Extract all compliance module addresses
jq -r 'to_entries[] | select(.key | startswith("ComplianceModule#")) | "\(.key): \(.value)"' $DEPLOYMENT_FILE

# Example: Extract all token implementation addresses
jq -r 'to_entries[] | select(.key | contains("TokenImplementation")) | "\(.key): \(.value)"' $DEPLOYMENT_FILE

Helper script to generate Helm values (optional):

#!/bin/bash
# Extract addresses and format as YAML for Helm

DEPLOYMENT_FILE="kit/contracts/ignition/deployments/chain-36900/deployed_addresses.json"

echo "  secretEnv:"
echo "    SETTLEMINT_SYSTEM_REGISTRY_ADDRESS: $(jq -r '."SystemFactoryModule#ATKSystemFactory"' $DEPLOYMENT_FILE)"
echo ""
echo "    # Compliance Module Implementations"
echo "    SETTLEMINT_COMPLIANCE_MODULE_SMART_IDENTITY_VERIFICATION: \"$(jq -r '."ComplianceModule#SmartIdentityVerification"' $DEPLOYMENT_FILE)\""
echo "    SETTLEMINT_COMPLIANCE_MODULE_COUNTRY_ALLOW_LIST: \"$(jq -r '."ComplianceModule#CountryAllowList"' $DEPLOYMENT_FILE)\""
echo "    SETTLEMINT_COMPLIANCE_MODULE_COUNTRY_BLOCK_LIST: \"$(jq -r '."ComplianceModule#CountryBlockList"' $DEPLOYMENT_FILE)\""
# ... add other addresses as needed

Save this script as extract-addresses.sh, make it executable with chmod +x extract-addresses.sh, and run it to generate properly formatted Helm values you can paste directly into your configuration.

The installation takes 10-15 minutes as Kubernetes pulls container images, provisions persistent volumes, and waits for services to become healthy.

Monitor installation progress:

# Watch all pod status
kubectl get pods -n atk -w

# Check for events indicating issues
kubectl get events -n atk --sort-by='.lastTimestamp' | tail -20

# View logs for a specific service
kubectl logs -n atk deployment/dapp -f

Post-deployment verification

Check LoadBalancer provisioning

The AWS Load Balancer Controller provisions a Network Load Balancer for the NGINX ingress service. This can take 2-3 minutes:

# Wait for external IP/hostname assignment
kubectl get svc -n atk ingress-nginx-controller -w

# Get LoadBalancer DNS name
LB_DNS=$(kubectl get svc -n atk ingress-nginx-controller \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')

echo "LoadBalancer DNS: $LB_DNS"

# Resolve to IP addresses (NLB typically has multiple IPs)
nslookup $LB_DNS

Note the IP addresses returned by nslookup—you'll configure DNS records to point to these IPs.

Configure DNS records in Cloudflare

ATK exposes multiple services through subdomains on your chosen base domain. This guide uses example.com as a placeholder—replace it with your actual domain throughout the configuration.

Example domain substitution:

If your domain is mycompany.io, replace:

  • atk.example.com → atk.mycompany.io
  • grafana.atk.example.com → grafana.atk.mycompany.io
  • minio.atk.example.com → minio.atk.mycompany.io

Required DNS records:

  Subdomain      Full Domain                  Purpose
  atk            atk.example.com              Main dApp UI
  grafana.atk    grafana.atk.example.com      Observability dashboards
  minio.atk      minio.atk.example.com        MinIO S3 API (object storage)

Note: The MinIO S3 API is exposed publicly for the dApp to store and retrieve asset metadata, documents, and images. The MinIO web console (administration UI) remains internal-only for security.

You can use either individual A records or a wildcard record for subdomain coverage.

Option 1: Individual A records

Create separate A records for each service. This provides explicit control and better visibility in DNS management.

In Cloudflare dashboard:

  1. Navigate to your domain's DNS management page
  2. Click Add record for each service:
  Type    Name           Content             Proxy status    TTL
  A       atk            <LoadBalancer-IP>   DNS only        Auto
  A       grafana.atk    <LoadBalancer-IP>   DNS only        Auto
  A       minio.atk      <LoadBalancer-IP>   DNS only        Auto

Using Cloudflare API (faster for multiple records):

# Set your Cloudflare credentials
export CF_API_TOKEN="your-cloudflare-api-token"
export CF_ZONE_ID="your-zone-id"  # Find in domain overview

# Get LoadBalancer IP
LB_IP=$(dig +short $LB_DNS | head -n 1)

# Create DNS records
for SUBDOMAIN in "atk" "grafana.atk" "minio.atk"; do
  curl -X POST "https://api.cloudflare.com/client/v4/zones/${CF_ZONE_ID}/dns_records" \
    -H "Authorization: Bearer ${CF_API_TOKEN}" \
    -H "Content-Type: application/json" \
    --data "{
      \"type\": \"A\",
      \"name\": \"${SUBDOMAIN}\",
      \"content\": \"${LB_IP}\",
      \"ttl\": 1,
      \"proxied\": false
    }"
done

Option 2: Wildcard A record

Create a single wildcard record to cover all ATK subdomains. This is simpler but less explicit.

In Cloudflare dashboard:

  Type    Name     Content             Proxy status    TTL
  A       *.atk    <LoadBalancer-IP>   DNS only        Auto

Using Cloudflare API:

LB_IP=$(dig +short $LB_DNS | head -n 1)

curl -X POST "https://api.cloudflare.com/client/v4/zones/${CF_ZONE_ID}/dns_records" \
  -H "Authorization: Bearer ${CF_API_TOKEN}" \
  -H "Content-Type: application/json" \
  --data "{
    \"type\": \"A\",
    \"name\": \"*.atk\",
    \"content\": \"${LB_IP}\",
    \"ttl\": 1,
    \"proxied\": false
  }"

Important DNS configuration notes

DNS-only mode (grey cloud) is required initially:

  • Ensures requests go directly to your NLB without Cloudflare's proxy
  • Allows cert-manager to complete ACME HTTP-01 challenges for SSL certificate validation
  • Let's Encrypt challenge server must reach your Kubernetes ingress directly

After certificates are provisioned (check with kubectl get certificate -n atk):

  • You can enable Cloudflare proxy (orange cloud) for DDoS protection and CDN benefits
  • Enable proxy individually per record or keep DNS-only for direct access
  • Note: Enabling proxy will route traffic through Cloudflare's edge network, which may impact latency for regions far from Cloudflare POPs

Verify DNS propagation:

# Check DNS resolution (replace example.com with your domain)
dig atk.example.com +short
dig grafana.atk.example.com +short
dig minio.atk.example.com +short

# All should return the LoadBalancer IP
# If using wildcard, test a subdomain:
dig random.atk.example.com +short

DNS propagation typically takes 1-5 minutes with Cloudflare's fast network.

Alternative: AWS Route 53

For AWS-native DNS management, use Route 53 with alias records:

# Get hosted zone ID
HOSTED_ZONE_ID=$(aws route53 list-hosted-zones-by-name \
  --dns-name example.com \
  --query 'HostedZones[0].Id' --output text)

# Get NLB hosted zone ID (region-specific)
NLB_HOSTED_ZONE_ID=$(aws elbv2 describe-load-balancers \
  --query "LoadBalancers[?DNSName=='${LB_DNS}'].CanonicalHostedZoneId" \
  --output text)

# Create alias record for dApp
aws route53 change-resource-record-sets \
  --hosted-zone-id $HOSTED_ZONE_ID \
  --change-batch "{
    \"Changes\": [{
      \"Action\": \"CREATE\",
      \"ResourceRecordSet\": {
        \"Name\": \"atk.example.com\",
        \"Type\": \"A\",
        \"AliasTarget\": {
          \"HostedZoneId\": \"${NLB_HOSTED_ZONE_ID}\",
          \"DNSName\": \"${LB_DNS}\",
          \"EvaluateTargetHealth\": true
        }
      }
    }]
  }"

# Repeat for grafana.atk.example.com and minio.atk.example.com

Route 53 alias records automatically update if AWS changes the NLB's IP addresses, unlike Cloudflare A records which require manual updates.

Test ingress routing

Verify the NGINX ingress controller correctly routes traffic based on Host header (replace example.com with your actual domain):

# Test dApp endpoint (should return HTML)
curl -H "Host: atk.example.com" http://$LB_DNS

# Test Grafana endpoint (should return HTML or redirect)
curl -H "Host: grafana.atk.example.com" http://$LB_DNS

# Test MinIO S3 API endpoint (should return XML error for unauthenticated request)
curl -H "Host: minio.atk.example.com" http://$LB_DNS

# Check all ingress resources exist
kubectl get ingress -n atk

Expected ingress resources:

  • dapp: Routes atk.example.com to dApp service
  • grafana: Routes grafana.atk.example.com to Grafana dashboard
  • minio: Routes minio.atk.example.com to MinIO S3 API (port 9000)

Monitor certificate issuance

Cert-manager automatically requests certificates from Let's Encrypt. This process takes 1-2 minutes per certificate:

# Watch certificate status
kubectl get certificate -n atk -w

# Check certificate details (use actual certificate name: atk-tls, grafana-atk-tls, or minio-atk-tls)
kubectl describe certificate -n atk atk-tls

# View cert-manager logs if issues occur
kubectl logs -n cert-manager deployment/cert-manager -f

Certificate troubleshooting:

  • Challenge failing: Ensure DNS records point to correct LoadBalancer IP
  • Rate limit errors: Let's Encrypt has rate limits (50 certificates per registered domain per week). Use staging issuer for testing.
  • Invalid configuration: Check ClusterIssuer email address and solver configuration
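
When a certificate stays not-ready, cert-manager's intermediate resources usually explain why. A short diagnostic sketch, following the chain from CertificateRequest to Order to Challenge:

# Inspect the intermediate ACME resources for the failing certificate
kubectl get certificaterequests,orders,challenges -n atk

# Describe the pending challenge to see the ACME server's reason for failure
kubectl describe challenges -n atk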

Validate deployment health

Check all pods are running without restarts:

# All pods should show READY (e.g., 1/1, 2/2)
kubectl get pods -n atk

# Check for recent errors or warnings
kubectl get events -n atk --sort-by='.lastTimestamp' | grep -i error

# View ingress controller logs for routing issues
kubectl logs -n atk -l app.kubernetes.io/name=ingress-nginx --tail=50 -f

All ATK services should show Running status with ready containers:

Kubernetes pods running successfully

Common issues:

  • Graph Node not syncing: Check graph-node-index-0 logs for RPC connection errors
  • dApp CrashLoopBackOff: Verify all SETTLEMINT_* environment variables are set correctly
  • MinIO access denied: Check MinIO credentials in Helm values match dApp configuration
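
The commands below map to the three failure modes above. They assume the workload names used earlier in this guide (the graph-node-index-0 indexer pod and the dapp deployment):

# Graph Node: look for RPC/provider errors in the indexer logs
kubectl logs -n atk graph-node-index-0 --tail=200 | grep -iE 'error|provider'

# dApp: confirm the SETTLEMINT_* variables actually reached the container
kubectl exec -n atk deploy/dapp -- env | grep '^SETTLEMINT_' | sort

# MinIO: recent events often surface access-denied or credential mismatches
kubectl get events -n atk --sort-by='.lastTimestamp' | grep -i minio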

Access deployed services

Once DNS propagates and certificates are issued, access your deployment (replace example.com with your actual domain):

Public services:

  • dApp UI: https://atk.example.com

ATK dApp home page

  • Grafana dashboards: https://grafana.atk.example.com (default credentials: admin / check Helm chart output for generated password)
  • MinIO S3 API: https://minio.atk.example.com (S3-compatible object storage for dApp document/image storage; use your generated credentials from the Generate secure credentials section)
  • ADI testnet block explorer: https://explorer.ab.testnet.adifoundation.ai/ (public ADI Foundation service)

ADI testnet block explorer

Internal services (not exposed via ingress):

  • MinIO web console: Administration UI at http://minio:9001 (internal only for security)
  • Graph Node: Subgraph queries at http://graph-node-query:8000
  • Hasura GraphQL: Database queries at http://hasura:8080
  • ERPC gateway: RPC proxying at http://erpc:4000

Access internal services for administration:

Use kubectl port-forward to securely access internal-only services:

# Access MinIO web console for administration
kubectl port-forward -n atk svc/minio 9001:9001

# Then open http://localhost:9001 in your browser
# Use the MinIO credentials you generated in the "Generate secure credentials" section

# Access Graph Node admin interface
kubectl port-forward -n atk svc/graph-node-query 8000:8000

# Access Hasura console
kubectl port-forward -n atk svc/hasura 8080:8080

First-time setup:

  1. Navigate to dApp UI and complete organization onboarding
  2. Configure identity registry and compliance modules for your use case
  3. Verify subgraph sync status in Grafana (Graph Node dashboard)
  4. Create test assets using the dApp token creation wizard

Asset Designer:

Use the visual asset designer to configure token parameters, compliance rules, and lifecycle settings:

Asset designer interface for creating tokenized assets

Activity Dashboard:

Monitor all platform activity including asset creation, transfers, compliance checks, and lifecycle events:

Activity dashboard showing platform operations

Monitor with observability stack

ATK includes comprehensive Grafana dashboards for all components. Access Grafana and explore pre-configured dashboards:

  • System Overview: Resource utilization, pod health, persistent volume usage
  • Graph Node: Subgraph sync progress, query performance, indexing errors
  • PostgreSQL: Database connections, query latency, table sizes
  • NGINX Ingress: HTTP request rates, response times, error rates
  • MinIO: Object storage metrics, bucket sizes, API latency

These dashboards provide real-time visibility into system health and performance, enabling proactive issue detection before users are impacted.

Security best practices

Credential management

Never use default credentials in production. This guide uses placeholders that MUST be replaced with secure, randomly generated values:

What to secure:

  • MinIO access key and secret key (object storage)
  • Grafana admin password (observability dashboards)
  • PostgreSQL passwords (database access)
  • Any API keys or tokens for external services

How to generate secure credentials:

# Generate a strong random password (32 characters)
openssl rand -base64 32 | tr -d '/+='

# Generate an access key (20 characters, alphanumeric)
openssl rand -base64 20 | tr -d '/+=' | head -c 20

# Alternative: use pwgen if available
pwgen -s 32 1

How to store credentials securely:

  1. Development/Testing: Use environment variables or .env.local files (gitignored)
  2. Production: Use a secret management solution:
    • AWS: AWS Secrets Manager or Systems Manager Parameter Store
    • Kubernetes: Sealed Secrets or External Secrets Operator
    • Enterprise: HashiCorp Vault, 1Password, or CyberArk

Kubernetes Sealed Secrets example:

# Install Sealed Secrets controller
kubectl apply -f https://github.com/bitnami-labs/sealed-secrets/releases/download/v0.24.0/controller.yaml

# Create secret and seal it
kubectl create secret generic minio-credentials \
  --from-literal=accessKey=$MINIO_ACCESS_KEY \
  --from-literal=secretKey=$MINIO_SECRET_KEY \
  --dry-run=client -o yaml | \
  kubeseal -o yaml > sealed-minio-credentials.yaml

# Commit sealed secret to git (safe - encrypted)
git add sealed-minio-credentials.yaml

Network security

Minimize public exposure:

  • Only dApp, Grafana, and MinIO S3 API are publicly accessible
  • Internal services (Graph Node, Hasura, MinIO console) require port-forwarding
  • Use VPN or bastion hosts for administrative access in production

Enable Cloudflare proxy (after certificate provisioning):

  • DDoS protection and rate limiting
  • Web Application Firewall (WAF) rules
  • Bot detection and mitigation

Implement network policies:

# Example: Restrict MinIO access to dApp pods only
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: minio-access-policy
  namespace: atk
spec:
  podSelector:
    matchLabels:
      app: minio
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: dapp
    ports:
    - protocol: TCP
      port: 9000
EOF

Certificate management

Let's Encrypt rate limits:

  • 50 certificates per registered domain per week
  • 5 duplicate certificates per week
  • Use staging issuer for testing: cert-manager.io/cluster-issuer: "letsencrypt-staging"
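
The guide creates only the production issuer; a staging counterpart is useful while you iterate on DNS and ingress configuration. A sketch mirroring the production ClusterIssuer above, pointed at Let's Encrypt's staging ACME endpoint (staging certificates are not browser-trusted but do not count against production rate limits):

cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: <your-email-address>  # Same contact email as the production issuer
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
      - http01:
          ingress:
            class: atk-nginx
EOF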

Monitor certificate expiration:

# Check certificate expiry dates
kubectl get certificates -n atk -o custom-columns=\
NAME:.metadata.name,\
READY:.status.conditions[0].status,\
SECRET:.spec.secretName,\
EXPIRY:.status.notAfter

Regular security updates

Keep dependencies updated:

# Update Helm chart dependencies
helm repo update
helm search repo atk --versions | head -5

# Check for security advisories
kubectl get vulnerabilityreports -n atk

Enable automatic security patches for EKS Auto Mode (enabled by default).

Next steps

After successful deployment:

  • Configure compliance: Set up identity verification providers, define country restrictions, configure token supply limits. See Compliance configuration guide.
  • Create test assets: Issue bonds, equities, or stablecoins to validate the full asset lifecycle. Refer to Asset creation tutorial.
  • Integrate with existing systems: Connect ATK APIs to your back-office, CRM, or accounting systems. Review API integration guide.
  • Plan for production: Review Production operations guide for backup strategies, disaster recovery, and monitoring best practices.
  • Explore ADI ecosystem: Learn about ADI Chain's roadmap, governance model, and developer incentives at ADI Foundation.