Deploy Asset Tokenization Kit on ADI Foundation testnet with AWS EKS
Complete deployment guide for running the Asset Tokenization Kit on ADI Foundation's Layer 2 testnet using AWS EKS Auto Mode. This guide covers smart contract deployment, Kubernetes infrastructure setup, networking configuration, and production-ready observability.
ADI Foundation is an Abu Dhabi-based blockchain
initiative founded by Sirius International, the digital arm of IHC (a $240B UAE
holding company). ADI Chain is a modular, high-performance Layer 2 blockchain
secured by Ethereum, built on ZKsync's Atlas and Airbender technology stacks.
ADI Foundation's goal is to bring one billion people onchain by 2030, primarily
targeting emerging markets across the Middle East, Africa, and Asia. These
regions represent approximately 75% of the world's population but have been
largely excluded from blockchain innovation due to infrastructure gaps and
compliance challenges.
ADI Chain is an EVM-compatible Layer 2 that executes transactions off-chain,
verifies them via zero-knowledge validity proofs, and finalizes on Ethereum. This
architecture combines high throughput with Ethereum-grade security while
dramatically reducing transaction costs.
Key capabilities:
Modular Layer 3 domains: Entities can operate compliance-optimized L3
chains segmented by jurisdiction, sector, or policy while staying connected to
ADI Chain L2
Built-in compliance: Unlike most blockchain networks, ADI Chain provides
L3 chains with native compliance capabilities meeting regulatory requirements
for government and institutional use
ZKsync infrastructure: Leverages ZKsync's proven technology for
zero-knowledge proofs and rollup execution
Cross-border interoperability: Designed for interconnected institutional
and government L3s enabling transparent, trackable cross-border flows with
same-day settlement
This guide walks through a complete production deployment of the Asset
Tokenization Kit on ADI Foundation's testnet. The process involves three major
phases:
Smart contract deployment: Deploy ATK smart contracts to ADI testnet using
Hardhat Ignition
Infrastructure provisioning: Create and configure AWS EKS Auto Mode
cluster with networking and load balancing
Service orchestration: Deploy ATK Helm charts with contracts, subgraph,
and application services
ATK license and Harbor credentials: Valid SettleMint ATK license with Harbor
registry credentials (username and password). Contact
[email protected] if you haven't received your credentials.
AWS account: Active account with permissions to create EKS clusters, VPCs,
subnets, and IAM resources
AWS CLI: Installed and configured with appropriate credentials (aws sts get-caller-identity should succeed; see the verification commands after this list)
kubectl: Kubernetes CLI tool (version 1.28+)
Helm: Kubernetes package manager (version 3.14+)
Bun: JavaScript runtime and package manager (version 1.1+)
ADI testnet private key: Funded wallet private key for contract deployment
(obtain test ADI tokens from
ADI faucet)
Domain access: Ability to configure DNS records for your deployment domain
(e.g., Cloudflare, Route 53)
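A quick sanity check that the tooling is in place and AWS credentials work (standard version commands for each tool):

```bash
# Verify CLI tooling and AWS credentials before starting
aws sts get-caller-identity
kubectl version --client
helm version
bun --version
```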
Hardhat Ignition stores all deployed contract addresses in a JSON file within the
deployment artifacts directory. This file serves as the deployment manifest
containing every contract address from your deployment.
Hardhat Ignition creates a deployment folder for each network (identified by chain
ID) to track deployment state and addresses. This folder contains:
deployed_addresses.json: Complete mapping of module names to deployed
contract addresses
journal.jsonl: Deployment execution log with timestamps and transaction
hashes
Build artifacts: Compiled contract metadata and deployment parameters
These artifacts enable:
Reproducible deployments: Ignition tracks which contracts are already
deployed, preventing accidental re-deployments
Upgrade management: When upgrading proxy implementations, Ignition knows
which addresses to update
Deployment verification: You can verify contract addresses against the
deployment log
Configuration management: Export addresses directly to application
configuration
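For orientation, the manifest is a flat JSON object mapping Module#Contract identifiers to addresses (the addresses below are placeholders for illustration):

```json
{
  "SystemFactoryModule#ATKSystemFactory": "0x0000000000000000000000000000000000000001",
  "ComplianceModule#CountryAllowList": "0x0000000000000000000000000000000000000002"
}
```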
Extract addresses for Helm configuration:
```bash
# View all deployed addresses
cat kit/contracts/ignition/deployments/chain-36900/deployed_addresses.json

# Extract specific address using jq
jq -r '."SystemFactoryModule#ATKSystemFactory"' \
  kit/contracts/ignition/deployments/chain-36900/deployed_addresses.json
```
You'll use these addresses to populate the dapp.secretEnv section of the Helm
chart configuration (detailed in the Install ATK Helm chart
section below).
Find deployment block number:
The deployment block number is not stored in deployed_addresses.json. Search for
your ATKSystemFactory address in the
ADI testnet explorer to find the
block number where it was deployed. You'll need this for subgraph configuration.
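If you'd rather not search the explorer by hand, the block number can also be recovered from the Ignition journal plus a JSON-RPC call. A sketch, assuming a funded RPC endpoint (the RPC URL is a placeholder, and the journal's exact schema varies across Ignition versions):

```bash
# Recover the deployment block from the Ignition journal (sketch; schema varies by version)
JOURNAL="kit/contracts/ignition/deployments/chain-36900/journal.jsonl"
RPC_URL="<adi-testnet-rpc-url>"   # placeholder: use the official ADI testnet RPC endpoint

# First 32-byte hash recorded in the journal (the initial deployment transaction)
TX_HASH=$(grep -oE '0x[0-9a-fA-F]{64}' "$JOURNAL" | head -n 1)

# Fetch the receipt and convert the hex block number to decimal
BLOCK_HEX=$(curl -s -X POST "$RPC_URL" -H 'Content-Type: application/json' \
  -d "{\"jsonrpc\":\"2.0\",\"method\":\"eth_getTransactionReceipt\",\"params\":[\"$TX_HASH\"],\"id\":1}" \
  | jq -r '.result.blockNumber')
echo "Deployment block: $((BLOCK_HEX))"
```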
AWS EKS Auto Mode provides simplified Kubernetes cluster management with
automatic node provisioning, scaling, and patching. Create a cluster using the
AWS Console, CLI, or Infrastructure-as-Code tools.
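One convenient option is eksctl. A minimal sketch, assuming a recent eksctl release (the --enable-auto-mode flag is not available in older versions); the cluster name and region match the commands used later in this guide:

```bash
# Create an EKS Auto Mode cluster (sketch; verify flag support in your eksctl version)
eksctl create cluster \
  --name adi-demo \
  --region me-south-1 \
  --enable-auto-mode
```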
EKS Auto Mode creates private subnets for worker nodes. These subnets require
NAT gateway routes for outbound internet access (pulling container images,
reaching external APIs).
Check existing routes:
```bash
# Get VPC and subnet IDs
VPC_ID=$(aws eks describe-cluster --name adi-demo --region me-south-1 \
  --query 'cluster.resourcesVpcConfig.vpcId' --output text)
PRIVATE_SUBNETS=$(aws ec2 describe-subnets --region me-south-1 \
  --filters "Name=vpc-id,Values=$VPC_ID" "Name=tag:Name,Values=*private*" \
  --query 'Subnets[*].SubnetId' --output text)

# Inspect route tables for private subnets
for subnet in $PRIVATE_SUBNETS; do
  echo "Routes for subnet $subnet:"
  aws ec2 describe-route-tables --region me-south-1 \
    --filters "Name=association.subnet-id,Values=$subnet" \
    --query 'RouteTables[*].Routes'
done
```
If route tables lack 0.0.0.0/0 routes to a NAT gateway, create them:
```bash
# List available NAT gateways
aws ec2 describe-nat-gateways --region me-south-1 \
  --filter "Name=vpc-id,Values=$VPC_ID" "Name=state,Values=available" \
  --query 'NatGateways[*].[NatGatewayId,SubnetId]' --output table

# Add NAT gateway route to each private subnet route table
for subnet in $PRIVATE_SUBNETS; do
  RTB_ID=$(aws ec2 describe-route-tables --region me-south-1 \
    --filters "Name=association.subnet-id,Values=$subnet" \
    --query 'RouteTables[0].RouteTableId' --output text)
  aws ec2 create-route --region me-south-1 \
    --route-table-id $RTB_ID \
    --destination-cidr-block 0.0.0.0/0 \
    --nat-gateway-id <nat-gateway-id>
done
```
Replace <nat-gateway-id> with an appropriate NAT gateway from the output above.
All private subnets can share a single NAT gateway, or you can use a dedicated
gateway per Availability Zone for higher availability.
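To confirm the routes work, you can run a throwaway pod and check outbound connectivity (curlimages/curl is a convenient public image for this; checkip.amazonaws.com returns your public egress IP):

```bash
# One-off pod that fetches its public egress IP; success proves NAT routing works
kubectl run nettest --rm -it --restart=Never --image=curlimages/curl -- \
  curl -s https://checkip.amazonaws.com
```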
The AWS Load Balancer Controller provisions AWS load balancers automatically from
Kubernetes resources: Application Load Balancers for Ingress resources and Network
Load Balancers for LoadBalancer Services. In this deployment it provisions the
Network Load Balancer that fronts the NGINX ingress controller, which exposes the
dApp, Grafana, and MinIO services with SSL termination and host-based routing.
Tag subnets for load balancer discovery:
Note: The following steps require the VPC_ID variable to be set. If you are running these commands in a new shell session or skipped previous steps, set it with:
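```bash
# Same lookup as in the NAT gateway section above
VPC_ID=$(aws eks describe-cluster --name adi-demo --region me-south-1 \
  --query 'cluster.resourcesVpcConfig.vpcId' --output text)
```

With the VPC ID set, apply the discovery tags the AWS Load Balancer Controller uses to select subnets (a sketch; substitute your actual public and private subnet IDs):

```bash
# Public subnets: eligible for internet-facing load balancers
aws ec2 create-tags --region me-south-1 \
  --resources <public-subnet-ids> \
  --tags Key=kubernetes.io/role/elb,Value=1

# Private subnets: eligible for internal load balancers
aws ec2 create-tags --region me-south-1 \
  --resources <private-subnet-ids> \
  --tags Key=kubernetes.io/role/internal-elb,Value=1
```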
The Graph Node subgraph indexes blockchain events emitted by ATK smart contracts,
providing efficient historical data queries for the dApp. Update the subgraph
manifest with your deployed contract addresses.
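The build itself typically runs through the kit's package scripts. A sketch of a Graph CLI-style flow (the kit/subgraph path and script names are assumptions; check the kit's package.json for the real scripts):

```bash
# Sketch of the subgraph codegen/build flow (path and script names assumed)
cd kit/subgraph
bun run codegen   # generate AssemblyScript types from the manifest
bun run build     # compile the subgraph and upload the build to IPFS
```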
This generates subgraph.yaml from the template, validates the manifest schema,
and produces the final build artifacts. The command outputs an IPFS hash
representing the compiled subgraph:
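```
Build completed: QmExampleSubgraphHash   (illustrative output; your hash will differ)
```

Note the hash for later reference when deploying the subgraph to Graph Node.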
The Portal (admin UI) requires contract ABIs to construct transaction data. The
repository includes a helper script that creates ConfigMaps from generated ABIs.
Run the script from repository root:
```bash
./kit/charts/tools/create-abi-configmaps.sh
```
Or from the charts directory:
```bash
cd kit/charts
./tools/create-abi-configmaps.sh
```
What it does:
Reads JSON ABI files from kit/contracts/.generated/portal/
Creates a ConfigMap for each ABI (e.g., abi-atktoken, abi-compliance)
Annotates ConfigMaps with settlemint.com/artifact=abi for discovery
Replace [email protected] with your email address. Let's Encrypt sends
certificate expiration notifications to this address (though cert-manager
auto-renews certificates 30 days before expiry).
HTTP-01 challenge: Proves domain ownership by serving a specific token at
http://<your-domain>/.well-known/acme-challenge/. The ingress controller
automatically routes these ACME challenge requests to cert-manager's solver pods.
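For reference, the issuer described above typically looks like this. A minimal sketch against cert-manager's v1 API; the issuer name and ingress class are assumptions, so align them with the annotations on your ingress resources:

```bash
# Minimal ClusterIssuer sketch (cert-manager v1 API); adjust name and class to your setup
kubectl apply -f - <<'EOF'
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: <your-email>            # expiry notifications go here
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          ingress:
            class: nginx           # must match your ingress controller's class
EOF
```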
ATK container images are distributed through SettleMint's private Harbor registry.
When you purchase an ATK license, you receive Harbor credentials that grant
access to production-ready, security-scanned container images.
Why Harbor credentials are required:
Private components: Many ATK services (dApp, Portal, custom subgraph
indexers) are proprietary and only available through Harbor
Security scanning: All images undergo vulnerability scanning and compliance
checks before distribution
Version control: Licensed users receive access to specific tested versions
with support guarantees
Production readiness: Harbor images include optimizations and
configurations for production deployments
Your Harbor credentials will be provided by SettleMint as part of your license
agreement. Without valid credentials, the deployment will fail when Kubernetes
attempts to pull private images.
If you haven't received your credentials, contact SettleMint support at
[email protected] or your account representative.
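A quick way to confirm the credentials work before installing anything (assumes Docker is available on your workstation):

```bash
# Verify Harbor access; a successful login confirms your license credentials work
docker login harbor.settlemint.com -u <your-harbor-username>
# (prompts for your Harbor password)
```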
Before deploying, update all Helm chart values files to use Harbor-proxied
container images. The package:harbor script automatically prefixes all image
repositories with the Harbor registry path:
```bash
cd kit/charts
bun run package:harbor
```
What this script does:
Scans all values.yaml and values-*.yaml files in kit/charts/atk/
Prefixes image registries with harbor.settlemint.com/ so that every image pull is served through the Harbor proxy
The ATK Helm chart orchestrates all services: blockchain node (optional),
subgraph indexer, Hasura GraphQL engine, dApp, MinIO object storage, Grafana
observability, and NGINX ingress controller.
Before installing, prepare the following:
Contract addresses: From Hardhat Ignition deployment artifacts at
kit/contracts/ignition/deployments/chain-36900/deployed_addresses.json (see
Locate deployed contract addresses)
Harbor credentials: Your licensed username and password for
harbor.settlemint.com
Domain name: Your base domain for ATK services (the example below uses
example.com—replace with your actual domain like yourcompany.com)
MinIO credentials: Generate strong random credentials for object storage
access (see Generate secure credentials below)
Domain naming convention: ATK exposes the following services publicly via
ingress (example.com is a placeholder for your base domain):
atk.example.com: dApp UI
grafana.atk.example.com: Grafana observability dashboards
minio.atk.example.com: MinIO S3 API
Note: The MinIO web console (administration UI) is not exposed externally for
security. Only the S3-compatible API is publicly accessible, which the dApp uses
for storing asset metadata, documents, and images. To access the MinIO console
for administration, use kubectl port-forward.
Internal services (Graph Node, Hasura, ERPC) are only accessible within the
Kubernetes cluster.
Replace all instances of example.com in the Helm values below with your actual
domain before installation.
CRITICAL SECURITY: Never use default or weak credentials in production
deployments. Generate strong random credentials for all services.
MinIO credentials (S3 object storage):
```bash
# Generate secure random credentials
MINIO_ACCESS_KEY=$(openssl rand -base64 20 | tr -d '/+=' | head -c 20)
MINIO_SECRET_KEY=$(openssl rand -base64 32 | tr -d '/+=' | head -c 32)

# Save credentials securely (use a password manager in production)
echo "MinIO Access Key: $MINIO_ACCESS_KEY"
echo "MinIO Secret Key: $MINIO_SECRET_KEY"

# Export for use in Helm values
export MINIO_ACCESS_KEY
export MINIO_SECRET_KEY
```
IMPORTANT: Store these credentials securely in a password manager or secret
management system (e.g., AWS Secrets Manager, HashiCorp Vault, 1Password). You'll
need them for:
Helm chart installation (see SETTLEMINT_MINIO_ACCESS_KEY and
SETTLEMINT_MINIO_SECRET_KEY below)
MinIO console access via port-forward for administration
imagePullCredentials: Harbor registry authentication for private image access
network.enabled: false: Disables local Besu node since we're using ADI's
public RPC
blockscout.enabled: false: ADI Foundation provides block explorer, no need to
deploy our own
erpc.config: Configures RPC gateway with retry, hedging, and circuit breaker
policies for reliability
graph-node.configTemplate: Connects subgraph indexer to ADI RPC via ERPC
proxy
dapp.secretEnv: Contract addresses from
kit/contracts/ignition/deployments/chain-36900/deployed_addresses.json, ADI
block explorer URL, MinIO S3 API endpoint with secure generated credentials
support.minio.ingress.enabled: true: Exposes MinIO S3 API publicly for dApp
object storage operations (web console remains internal-only)
support.ingress-nginx.controller.service.annotations: Provisions a Network Load
Balancer for ingress traffic (these keys come together in the skeleton sketch after this list)
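Pulling those pieces together, a skeletal values override might look like this. A sketch only: key nesting beyond the paths listed above is assumed, so treat the chart's own values.yaml in kit/charts/atk/ as the authoritative schema:

```bash
# Skeleton values override (sketch); verify every key against the chart's values.yaml
cat > atk-values.yaml <<'EOF'
imagePullCredentials:                  # Harbor registry authentication (structure assumed)
  username: <your-harbor-username>
  password: <your-harbor-password>
network:
  enabled: false                       # no local Besu node; ADI public RPC is used
blockscout:
  enabled: false                       # ADI Foundation already hosts a block explorer
dapp:
  secretEnv:
    SETTLEMINT_SYSTEM_REGISTRY_ADDRESS: "<from deployed_addresses.json>"
    SETTLEMINT_MINIO_ACCESS_KEY: "<your-minio-access-key>"
    SETTLEMINT_MINIO_SECRET_KEY: "<your-minio-secret-key>"
support:
  minio:
    ingress:
      enabled: true                    # expose the S3 API; web console stays internal
EOF
```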
SECURITY WARNING: Replace all placeholder credentials (<your-*>) with your
actual generated secure credentials before deploying. Never commit credentials to
version control. Use environment variables, sealed secrets, or external secret
managers for production deployments.
Rather than manually copying addresses, you can extract them programmatically from
the Ignition deployment manifest:
```bash
# Set the deployment file path
DEPLOYMENT_FILE="kit/contracts/ignition/deployments/chain-36900/deployed_addresses.json"

# Example: Extract system factory address
jq -r '."SystemFactoryModule#ATKSystemFactory"' $DEPLOYMENT_FILE

# Example: Extract all compliance module addresses
jq -r 'to_entries[] | select(.key | startswith("ComplianceModule#")) | "\(.key): \(.value)"' $DEPLOYMENT_FILE

# Example: Extract all token implementation addresses
jq -r 'to_entries[] | select(.key | contains("TokenImplementation")) | "\(.key): \(.value)"' $DEPLOYMENT_FILE
```
Helper script to generate Helm values (optional):
```bash
#!/bin/bash
# Extract addresses and format as YAML for Helm
DEPLOYMENT_FILE="kit/contracts/ignition/deployments/chain-36900/deployed_addresses.json"

echo "  secretEnv:"
echo "    SETTLEMINT_SYSTEM_REGISTRY_ADDRESS: $(jq -r '."SystemFactoryModule#ATKSystemFactory"' $DEPLOYMENT_FILE)"
echo ""
echo "    # Compliance Module Implementations"
echo "    SETTLEMINT_COMPLIANCE_MODULE_SMART_IDENTITY_VERIFICATION: \"$(jq -r '."ComplianceModule#SmartIdentityVerification"' $DEPLOYMENT_FILE)\""
echo "    SETTLEMINT_COMPLIANCE_MODULE_COUNTRY_ALLOW_LIST: \"$(jq -r '."ComplianceModule#CountryAllowList"' $DEPLOYMENT_FILE)\""
echo "    SETTLEMINT_COMPLIANCE_MODULE_COUNTRY_BLOCK_LIST: \"$(jq -r '."ComplianceModule#CountryBlockList"' $DEPLOYMENT_FILE)\""
# ... add other addresses as needed
```
Save this script as extract-addresses.sh, make it executable with
chmod +x extract-addresses.sh, and run it to generate properly formatted Helm
values you can paste directly into your configuration.
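With the values file assembled, the chart can be installed. A sketch (release name, namespace, and chart path are assumptions; the atk namespace matches the kubectl commands below):

```bash
# Install the ATK umbrella chart into the atk namespace (sketch; verify the chart path)
helm install atk ./kit/charts/atk \
  --namespace atk \
  --create-namespace \
  --values atk-values.yaml \
  --timeout 20m
```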
The installation takes 10-15 minutes as Kubernetes pulls container images,
provisions persistent volumes, and waits for services to become healthy.
Monitor installation progress:
```bash
# Watch all pod status
kubectl get pods -n atk -w

# Check for events indicating issues
kubectl get events -n atk --sort-by='.lastTimestamp' | tail -20

# View logs for a specific service
kubectl logs -n atk deployment/dapp -f
```
The AWS Load Balancer Controller provisions a Network Load Balancer for the NGINX
ingress service. This can take 2-3 minutes:
```bash
# Wait for external IP/hostname assignment
kubectl get svc -n atk ingress-nginx-controller -w

# Get LoadBalancer DNS name
LB_DNS=$(kubectl get svc -n atk ingress-nginx-controller \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo "LoadBalancer DNS: $LB_DNS"

# Resolve to IP addresses (NLB typically has multiple IPs)
nslookup $LB_DNS
```
Note the IP addresses returned by nslookup—you'll configure DNS records to
point to these IPs.
ATK exposes multiple services through subdomains on your chosen base domain. This
guide uses example.com as a placeholder—replace it with your actual domain
throughout the configuration.
Note: The MinIO S3 API is exposed publicly for the dApp to store and retrieve
asset metadata, documents, and images. The MinIO web console (administration UI)
remains internal-only for security.
You can use either individual A records or a wildcard record for subdomain
coverage.
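Illustratively, the records look like this in zone-file form (the IPs are placeholders for the NLB addresses you noted from nslookup):

```
atk.example.com.          A    <nlb-ip>
grafana.atk.example.com.  A    <nlb-ip>
minio.atk.example.com.    A    <nlb-ip>

; alternatively, one wildcard record covers all ATK subdomains
*.atk.example.com.        A    <nlb-ip>
```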
Create the records as DNS-only (Cloudflare's grey cloud) initially. This:
Ensures requests go directly to your NLB without Cloudflare's proxy
Allows cert-manager to complete ACME HTTP-01 challenges for SSL certificate
validation (Let's Encrypt's challenge server must reach your Kubernetes ingress
directly)
After certificates are provisioned (check with kubectl get certificate -n atk):
You can enable Cloudflare proxy (orange cloud) for DDoS protection and CDN
benefits
Enable proxy individually per record or keep DNS-only for direct access
Note: Enabling proxy will route traffic through Cloudflare's edge network,
which may impact latency for regions far from Cloudflare POPs
Verify DNS propagation:
```bash
# Check DNS resolution (replace example.com with your domain)
dig atk.example.com +short
dig grafana.atk.example.com +short
dig minio.atk.example.com +short
# All should return the LoadBalancer IP

# If using wildcard, test a subdomain:
dig random.atk.example.com +short
```
DNS propagation typically takes 1-5 minutes with Cloudflare's fast network.
Verify the NGINX ingress controller correctly routes traffic based on Host
header (replace example.com with your actual domain):
```bash
# Test dApp endpoint (should return HTML)
curl -H "Host: atk.example.com" http://$LB_DNS

# Test Grafana endpoint (should return HTML or redirect)
curl -H "Host: grafana.atk.example.com" http://$LB_DNS

# Test MinIO S3 API endpoint (should return XML error for unauthenticated request)
curl -H "Host: minio.atk.example.com" http://$LB_DNS

# Check all ingress resources exist
kubectl get ingress -n atk
```
Expected ingress resources:
dapp: Routes atk.example.com to dApp service
grafana: Routes grafana.atk.example.com to Grafana dashboard
minio: Routes minio.atk.example.com to MinIO S3 API (port 9000)
Public services (exposed via ingress):
dApp: https://atk.example.com (asset tokenization application)
Grafana: https://grafana.atk.example.com (observability dashboards)
MinIO S3 API: https://minio.atk.example.com (S3-compatible object storage
for dApp document/image storage; use your generated credentials from the
Generate secure credentials section)
ADI testnet block explorer: https://explorer.ab.testnet.adifoundation.ai/
(public ADI Foundation service)
Internal services (not exposed via ingress):
MinIO web console: Administration UI at http://minio:9001 (internal only
for security)
Graph Node: Subgraph queries at http://graph-node-query:8000
Hasura GraphQL: Database queries at http://hasura:8080
ERPC gateway: RPC proxying at http://erpc:4000
Access internal services for administration:
Use kubectl port-forward to securely access internal-only services:
```bash
# Access MinIO web console for administration
kubectl port-forward -n atk svc/minio 9001:9001
# Then open http://localhost:9001 in your browser
# Use the MinIO credentials you generated in the "Generate secure credentials" section

# Access Graph Node admin interface
kubectl port-forward -n atk svc/graph-node-query 8000:8000

# Access Hasura console
kubectl port-forward -n atk svc/hasura 8080:8080
```
First-time setup:
Navigate to dApp UI and complete organization onboarding
Configure identity registry and compliance modules for your use case
Verify subgraph sync status in Grafana (Graph Node dashboard)
Create test assets using the dApp token creation wizard
Asset Designer:
Use the visual asset designer to configure token parameters, compliance rules, and
lifecycle settings.
Activity Dashboard:
Monitor all platform activity including asset creation, transfers, compliance
checks, and lifecycle events.
Configure compliance: Set up identity verification providers, define
country restrictions, configure token supply limits. See
Compliance configuration guide.
Create test assets: Issue bonds, equities, or stablecoins to validate the
full asset lifecycle. Refer to
Asset creation tutorial.
Integrate with existing systems: Connect ATK APIs to your back-office,
CRM, or accounting systems. Review
API integration guide.
Plan for production: Review
Production operations guide
for backup strategies, disaster recovery, and monitoring best practices.
Explore ADI ecosystem: Learn about ADI Chain's roadmap, governance model,
and developer incentives at ADI Foundation.