
Setting up your development environment

Build a complete local development stack for the Asset Tokenization Kit. This guide walks you through the architecture, dependencies, and observability tools that power your local blockchain network, database, and dApp.

The Asset Tokenization Kit runs as a distributed system with multiple services working together: a local blockchain network (Anvil), PostgreSQL database, TheGraph indexing node, and the TanStack Start web application. Understanding this architecture helps you diagnose issues, extend functionality, and leverage the built-in observability stack effectively.

Developer onboarding flow

The following flowchart visualizes the complete setup process with key decision points for troubleshooting:

[Flowchart: developer onboarding flow, from prerequisites through verification, with troubleshooting branches at key decision points]

This diagram shows the complete onboarding path from prerequisites through verification. Each decision point includes troubleshooting guidance to help resolve common setup issues quickly.

Understanding the local architecture

Your development environment mirrors production deployments at a smaller scale. This approach ensures that code behaving correctly locally will work in Kubernetes-orchestrated production environments without surprises.

Why this matters: Traditional blockchain development often uses simplified local setups that diverge from production, leading to "works on my machine" problems. ATK's architecture-parity approach catches integration issues early, before they reach testing or production environments.

The stack consists of three layers:

  1. Blockchain layer: Anvil (Foundry's local node) provides rapid block production and deterministic addresses for testing. Pre-deployed contracts via genesis file eliminate setup friction.
  2. Data layer: PostgreSQL stores user accounts, session data, and cached blockchain state. TheGraph indexes on-chain events for efficient querying. Redis handles session management and caching.
  3. Application layer: TanStack Start serves the React-based dApp with server-side rendering, API routes via ORPC, and type-safe routing.

Observability integration: The Helm charts include Prometheus metrics, Grafana dashboards, and Loki logging for production deployments. While Docker Compose doesn't include these by default, you can enable them by uncommenting the observability services in docker-compose.yml. This gives you real-time visibility into transaction latency, gas consumption, database query performance, and API response times.
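
Each layer is also reachable programmatically with libraries the kit already uses. The following is a minimal sketch, assuming the default local ports, credentials, and subgraph name ("atk") described later in this guide, and assuming the viem, drizzle-orm, and postgres packages resolve in the workspace:

import { createPublicClient, http } from "viem";
import { drizzle } from "drizzle-orm/postgres-js";
import { sql } from "drizzle-orm";
import postgres from "postgres";

// Blockchain layer: Anvil speaks standard JSON-RPC on port 8545.
const chain = createPublicClient({ transport: http("http://localhost:8545") });
console.log("chain id:", await chain.getChainId());
console.log("latest block:", await chain.getBlockNumber());

// Data layer (PostgreSQL): default local credentials are postgres/postgres.
const db = drizzle(postgres("postgres://postgres:postgres@localhost:5432/postgres"));
console.log(await db.execute(sql`select now()`));

// Data layer (TheGraph): the query endpoint serves the "atk" subgraph on port 8000.
const res = await fetch("http://localhost:8000/subgraphs/name/atk/graphql", {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify({ query: "{ _meta { block { number } } }" }),
});
console.log("indexed up to block:", (await res.json()).data?._meta?.block?.number);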

Prerequisites

Your system needs these tools before starting. Version mismatches are the leading cause of cryptic build failures, so verify each one carefully.

Required:

  • Bun (version 1.2.19): JavaScript runtime and package manager
    • Install: curl -fsSL https://bun.sh/install | bash
    • Verify: bun --version
    • Why Bun? It's 2-4x faster than npm/yarn for installs and provides a unified runtime for development and testing. The workspace protocol requires Bun's monorepo features.
  • Docker (version 20.10+): Container runtime for local services
    • Install: Docker Desktop
    • Verify: docker --version and docker compose version
    • Allocate at least 8 GB RAM in Docker Desktop settings (see Preferences → Resources). Insufficient memory causes silent failures in TheGraph indexing and Anvil block production.
  • Node.js (version 22+): Required by some build tools (Hardhat, Vite)
    • Install via nvm: nvm install 22
    • Verify: node --version
    • Why both Node and Bun? Bun handles package management and runtime, but some native modules (like Hardhat plugins) still depend on Node's C++ bindings.

Optional but recommended:

  • Git (version 2.30+): Version control
  • VS Code: Editor with recommended extensions (see .vscode/extensions.json) for Solidity syntax, ESLint integration, and GraphQL autocomplete
  • SettleMint CLI (version 2.6.2): Blockchain deployment and management tools (already included as dev dependency)

System requirements:

  • Disk space: Minimum 10 GB free (Docker volumes, node modules, and build artifacts consume 6-8 GB)
  • Memory: At least 8 GB RAM (16 GB recommended for running full test suite with TheGraph indexing)
  • OS: macOS, Linux, or WSL2 on Windows (Docker Desktop required for macOS/Windows)

Clone the repository

Clone the repository and examine its structure to understand the monorepo layout:

git clone https://github.com/settlemint/asset-tokenization-kit.git
cd asset-tokenization-kit

The repository uses Turborepo to orchestrate builds, tests, and development tasks across multiple packages. Each package declares its dependencies and outputs, enabling Turbo to parallelize builds and cache results aggressively.

Workspace structure:

kit/
├── contracts/   # Solidity contracts, Foundry tests, Hardhat scripts
├── dapp/       # TanStack Start web app, ORPC API, Drizzle ORM schemas
├── subgraph/   # TheGraph mappings, GraphQL schema, indexing logic
├── charts/     # Helm deployment manifests for Kubernetes
└── e2e/        # Playwright end-to-end tests covering full user flows

Workspace dependencies: The dApp depends on contract ABIs and TypeScript types generated from Solidity code. The subgraph depends on contract ABIs and event signatures. E2E tests depend on the dApp being built and running. Turbo handles these dependencies automatically via dependsOn declarations in turbo.json, ensuring correct build order and incremental builds.

Why monorepo? Keeping contracts, frontend, and indexing in one repository ensures version alignment. When you modify a contract, TypeScript types update automatically across the dApp and subgraph. This prevents the "mismatched ABI" runtime errors common in multi-repo setups.

Install dependencies

Install all workspace dependencies in a single operation:

bun install

This command performs several critical operations:

  1. Resolves dependencies for all packages while respecting workspace protocol (workspace:* links in package.json)
  2. Links internal packages so the dApp can import from @packages/zod and @packages/config
  3. Runs post-install scripts to set up Git hooks (via Lefthook), generate initial types, and validate environment

Expected completion time: 1-2 minutes on first run with cold cache. Bun's lockfile (bun.lock) ensures deterministic installs across machines.

Troubleshooting: If installation fails with "version mismatch" errors, verify your Bun version matches package.json (1.2.19) exactly. Lock file conflicts require rm bun.lock && bun install to regenerate. Check that Docker is running if you see native module build failures—some dependencies expect Docker for integration tests during install.

Generate artifacts

ATK requires pre-generated artifacts before starting services. These artifacts bridge the gap between compiled smart contracts and the runtime environment, enabling type-safe contract interactions and optimized blockchain initialization.

Artifacts generated:

  • Genesis file (kit/contracts/.generated/genesis.json): Pre-deploys core contracts (SMART Protocol, identity registries, token factories) to deterministic addresses in the local blockchain. This eliminates the need for manual deployment scripts and ensures consistent addresses across development machines.
  • Contract ABIs (kit/contracts/.generated/portal/): JSON ABI files for every contract, plus TypeScript type definitions generated by Viem. The dApp and subgraph import these for type-safe contract calls.
  • Subgraph manifest (kit/subgraph/.generated/subgraph.yaml): Injects contract addresses and ABIs into TheGraph's configuration, along with IPFS hash for deployment verification.
  • Database schema (kit/dapp/drizzle/): Drizzle migrations reflecting your schema definitions in kit/dapp/src/lib/db/schemas/.

Generate all artifacts now:

bun run artifacts

What happens under the hood:

  1. Compile (turbo run compile): Foundry compiles Solidity contracts to bytecode and ABI. Hardhat runs in parallel for additional type generation.
  2. Codegen (turbo run codegen): Viem generates TypeScript types from ABIs. TheGraph CLI generates AssemblyScript types from GraphQL schema. Drizzle introspects PostgreSQL schema.
  3. Artifacts (turbo run artifacts): Custom scripts assemble genesis file, copy ABIs to shared directories, compute subgraph IPFS hashes.

Expected output:

• Packages in scope: contracts, dapp, subgraph
• Running compile in 3 packages
• Running codegen in 3 packages
• Running artifacts in 3 packages
✓ Genesis file written to kit/contracts/.generated/genesis.json
✓ Portal ABIs written to kit/contracts/.generated/portal/
✓ Subgraph hash: QmXyz...

When to regenerate: Re-run bun run artifacts whenever you:

  • Modify smart contracts in kit/contracts/contracts/
  • Change database schemas in kit/dapp/src/lib/db/schemas/
  • Update subgraph mappings in kit/subgraph/src/
  • Pull changes from Git that touch any of the above

Why this approach? Pre-generating artifacts catches compilation errors before runtime and enables Turbo to cache build outputs. If contract compilation succeeds, you know the artifacts are valid before starting Docker services. This fail-fast approach prevents the frustrating cycle of starting services, discovering broken ABIs, stopping services, and rebuilding.
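
To see why the generated ABIs matter day to day, here is a minimal sketch of a type-safe contract read with Viem. The import path, contract name, address, and function name are illustrative assumptions; check kit/contracts/.generated/portal/ and the genesis file for the real values:

import { createPublicClient, http } from "viem";
// Hypothetical import path: the generated portal directory contains one ABI module per contract.
import { tokenFactoryAbi } from "../../contracts/.generated/portal/token-factory";

const client = createPublicClient({ transport: http("http://localhost:8545") });

// Because the ABI is a typed constant, `functionName` and the return type are
// checked at compile time; a typo fails `tsc` instead of failing at runtime.
const name = await client.readContract({
  address: "0x0000000000000000000000000000000000000000", // replace with the genesis address from artifacts
  abi: tokenFactoryAbi,
  functionName: "name", // hypothetical view function
});
console.log(name);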

Configure environment variables

The Docker Compose stack includes sensible defaults for local development, eliminating the need for manual .env configuration in most cases. However, understanding these defaults helps when troubleshooting port conflicts or customizing your setup.

Default configuration (no .env required):

Docker Compose uses these values from docker-compose.yml:

  • Chain ID: 1337 (standard for local Ethereum networks)
  • PostgreSQL: Port 5432, user postgres, password postgres, database postgres
  • Redis: Port 6379, password shared
  • Anvil RPC: Port 8545 (standard Ethereum RPC port)
  • Hasura GraphQL: Port 8080, admin secret hasura
  • TheGraph Node: Port 8000 (GraphQL API), 8001 (WebSocket), 8020 (JSON-RPC admin)
  • MinIO (S3 storage): Port 9000 (API), 9001 (web console), credentials minio / miniominio
  • Blockscout: Port 4000 (API), 4001 (frontend)

When to create .env.local:

Create a .env.local file in the project root if you need to override defaults. Common scenarios:

  1. Port conflicts: Another service on your machine uses port 8545 or 5432
  2. Custom credentials: You want different database passwords for security
  3. Network customization: Testing multi-chain scenarios with different chain IDs

Example .env.local for custom ports:

# Override default ports to avoid conflicts
ANVIL_PORT=8546
POSTGRES_PORT=5433
HASURA_PORT=8081
BLOCKSCOUT_FRONTEND_PORT=4002

# Keep other defaults
CHAIN_ID=1337
POSTGRES_PASSWORD=postgres

Important security note: Never commit .env.local to version control—it's already in .gitignore. Use .env.example as a template when sharing configuration patterns with team members.

Observability configuration: To enable local monitoring dashboards, add these variables to .env.local:

# Enable Prometheus metrics collection
ENABLE_METRICS=true

# Enable Grafana dashboards (requires uncommenting services in docker-compose.yml)
GRAFANA_PORT=3001
GRAFANA_ADMIN_PASSWORD=admin

Then uncomment the prometheus, grafana, and loki services in docker-compose.yml. Access Grafana at http://localhost:3001 to view transaction throughput, gas consumption trends, and database query latency dashboards.

Start Docker services

Launch the complete development stack with a single command:

bun run dev:up

What this command does:

  1. Starts Docker Compose services defined in docker-compose.yml
  2. Creates named volumes for persistent data storage (PostgreSQL data, Anvil chain state, MinIO buckets)
  3. Waits for health checks to pass on all services (typically 30-60 seconds on first run)
  4. Executes settlemint connect --instance local to configure CLI for local blockchain

Services started:

Service               Port   Purpose
anvil                 8545   Local Ethereum node (Foundry Anvil)
txsigner              8547   Transaction signing service
postgres              5432   PostgreSQL database
redis                 6379   Caching and session storage
hasura                8080   GraphQL API (not used in current version)
graph-node            8000   TheGraph indexing node
minio                 9000   S3-compatible object storage
minio-console         9001   MinIO web console
portal                7700   SettleMint blockchain portal
blockscout-backend    4000   Blockchain explorer API
blockscout-frontend   4001   Blockchain explorer UI
drizzle-gateway       4983   Drizzle Studio database gateway

Wait for services to be healthy:

docker compose ps

Expected output:

NAME                 STATUS                   PORTS
anvil                Up (healthy)             0.0.0.0:8545->8545/tcp
postgres             Up (healthy)             0.0.0.0:5432->5432/tcp
graph-node           Up (healthy)             0.0.0.0:8000->8000/tcp
...

All services should show (healthy) status. If any service shows (unhealthy) or Exit 1, use docker compose logs -f [service-name] to diagnose.

Common startup issues and fixes:

  • Port conflict errors: Another process is using required ports. Identify with lsof -i :8545,5432,8080 and either kill the process or change ports in .env.local.
  • Out of memory: Docker Desktop has insufficient memory allocation. Increase to 8+ GB in Docker Desktop → Settings → Resources → Memory.
  • Slow startup (>2 minutes): First run downloads images and initializes volumes. Subsequent starts should take 15-30 seconds. If slowness persists, check disk I/O with docker stats.
  • Anvil won't start: Genesis file may be missing or invalid. Re-run bun run artifacts to regenerate, then docker compose restart anvil.
  • TheGraph indexing errors: Check that contracts deployed successfully with docker compose logs graph-node | grep "error". Subgraph hash mismatch requires regenerating artifacts and restarting services.
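
If you want to script these checks (for example in CI or a pre-flight step), a small probe like the sketch below confirms that the key ports from the table above are answering. Ports are the Docker Compose defaults; adjust them if you override .env.local:

import { connect } from "node:net";

// TCP-level check: does anything accept a connection on this port?
function portOpen(port: number, host = "127.0.0.1", timeoutMs = 2000): Promise<boolean> {
  return new Promise((resolve) => {
    const socket = connect({ port, host });
    const done = (ok: boolean) => { socket.destroy(); resolve(ok); };
    socket.once("connect", () => done(true));
    socket.once("error", () => done(false));
    socket.setTimeout(timeoutMs, () => done(false));
  });
}

const services: Record<string, number> = {
  anvil: 8545,
  postgres: 5432,
  "graph-node": 8000,
  hasura: 8080,
  minio: 9000,
  "blockscout-frontend": 4001,
};

for (const [name, port] of Object.entries(services)) {
  console.log(`${name.padEnd(20)} :${port} ${(await portOpen(port)) ? "open" : "CLOSED"}`);
}

// JSON-RPC sanity check against Anvil: a valid chain id response means the node is up.
const rpc = await fetch("http://localhost:8545", {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "eth_chainId", params: [] }),
});
console.log("eth_chainId:", await rpc.json());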

Observability checkpoint: If you enabled Prometheus and Grafana in Step 4, verify they're accessible now:

  • Prometheus UI: http://localhost:9090 (check targets are "UP")
  • Grafana: http://localhost:3001 (login with admin/admin)
  • Default dashboards show transaction latency (target: <2s), gas consumption trends, and database query performance

Run the dApp

With all services healthy, start the web application:

cd kit/dapp
bun run dev

The dApp starts on http://localhost:3000 with several optimizations enabled:

  • Hot module replacement (HMR): Code changes appear instantly without page reload
  • React Server Components: Optimized initial page load with server-side rendering
  • TanStack Router (version 1.133): Type-safe routing with automatic code splitting
  • TanStack Query (version 5.90): Server state management with automatic caching and refetching
  • Vite (version 7.1): Sub-second rebuild times with native ESM support

Expected terminal output:

  ➜  Local:   http://localhost:3000/
  ➜  Network: http://192.168.1.x:3000/

  VITE version 7.1.12 ready in 1234 ms
  ➜ TanStack Router version 1.133.32

First-time setup in the dApp:

  1. Navigate to http://localhost:3000 in your browser
  2. Click "Sign In" → "Create Account" to register a test user
  3. Complete the onboarding wizard:
    • Set up identity (name, email)
    • Create or connect wallet (MetaMask, WalletConnect, or built-in wallet)
    • Complete KYC verification (dev mode uses instant approval)
  4. Explore the dashboard showing available asset classes and your portfolio

Development workflow tip: Keep the dApp terminal open to see real-time logs for API calls, database queries, and contract interactions. Error messages include file paths and line numbers for quick navigation in your editor.

HMR troubleshooting: If hot reload stops working, restart the dev server with Ctrl+C then bun run dev. Vite occasionally loses HMR connection after large file changes or Git branch switches.
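
If the TanStack stack is new to you, the sketch below shows the general shape of a file-based route that loads data with TanStack Query. The route path, endpoint, and query key are hypothetical; real routes live under kit/dapp/src and typically call ORPC procedures rather than raw fetch:

import { createFileRoute } from "@tanstack/react-router";
import { useQuery } from "@tanstack/react-query";

// Hypothetical fetcher; the real dApp goes through its ORPC client.
async function fetchTokens(): Promise<{ id: string; symbol: string }[]> {
  const res = await fetch("/api/tokens");
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}

// File-based route: the URL path is derived from the file's location under src/routes/.
export const Route = createFileRoute("/tokens")({
  component: TokensPage,
});

function TokensPage() {
  // TanStack Query caches the result and refetches it in the background.
  const { data, isPending, error } = useQuery({
    queryKey: ["tokens"],
    queryFn: fetchTokens,
  });

  if (isPending) return <p>Loading…</p>;
  if (error) return <p>Failed to load tokens</p>;
  return (
    <ul>
      {data?.map((t) => (
        <li key={t.id}>{t.symbol}</li>
      ))}
    </ul>
  );
}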

Verifying the complete setup

Run these verification checks to ensure all components communicate correctly. Each check validates a different integration point in the architecture.

Blockchain connectivity

The dApp should connect to Anvil automatically. Verify by checking the network indicator in the top-right corner:

  • Network: ATK (Chain ID: 1337)
  • Block height: Incrementing every 12 seconds (Anvil auto-mines blocks)
  • Connection status: Green indicator with "Connected"

If you see "Disconnected" or "Wrong Network", check that:

  1. Anvil is running (docker compose ps anvil shows "healthy")
  2. Port 8545 is accessible (curl http://localhost:8545 returns JSON-RPC response)
  3. Browser wallet (if using MetaMask) is configured for http://localhost:8545 with chain ID 1337

Observability insight: The transaction latency dashboard shows RPC call response times. Values over 500ms indicate Anvil performance issues—check Docker memory allocation.

Database connection

Run a database migration to verify PostgreSQL connectivity and schema generation:

cd kit/dapp
bun run db:generate
bun run db:migrate

Expected output:

✓ Migrations generated successfully
✓ Applied 3 migrations (users, sessions, portfolio)

What this verifies: Drizzle ORM can connect to PostgreSQL, read schema definitions from src/lib/db/schemas/, and apply changes. If migrations fail, check that the postgres container reports (healthy) in docker compose ps, that port 5432 (or your .env.local override) is reachable, and that the credentials match the defaults (postgres / postgres).

Database management UI:

Access Drizzle Studio for database inspection and management:

  • URL: http://localhost:4983

The Drizzle Gateway provides a web interface to browse tables, run queries, and inspect your database schema. Connect to your database using the connection string from your environment configuration.

Manual verification: Connect directly with psql:

docker compose exec postgres psql -U postgres -d postgres -c "\dt"

You should see tables like users, sessions, identity_claims, etc.
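
For orientation, the schema definitions Drizzle reads are plain TypeScript. Here is a minimal sketch of what a table definition under kit/dapp/src/lib/db/schemas/ might look like (the table and columns here are hypothetical, not the kit's actual schema):

import { pgTable, text, timestamp, uuid } from "drizzle-orm/pg-core";

// A hypothetical table definition; bun run db:generate diffs files like this
// against the previous snapshot and writes SQL migrations to kit/dapp/drizzle/.
export const users = pgTable("users", {
  id: uuid("id").primaryKey().defaultRandom(),
  email: text("email").notNull().unique(),
  walletAddress: text("wallet_address"),
  createdAt: timestamp("created_at").notNull().defaultNow(),
});

After editing a schema file, bun run db:generate produces the migration and bun run db:migrate applies it, exactly as in the verification step above.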

Subgraph indexing

TheGraph node should index blockchain events automatically. Verify by querying the GraphQL playground:

  • URL: http://localhost:8000/subgraphs/name/atk/graphql

Run this test query:

{
  tokens(first: 5) {
    id
    name
    symbol
    totalSupply
  }
}

Expected result (no tokens deployed yet):

{
  "data": {
    "tokens": []
  }
}

An empty array is correct—you haven't deployed tokens yet. The important part is receiving a valid GraphQL response, which proves TheGraph indexed the genesis block successfully.

If you see errors:

  • "subgraph not found": Graph node hasn't deployed the subgraph yet. Check logs: docker compose logs graph-node | grep "atk"
  • Network errors: Graph node isn't running or accessible. Verify: docker compose ps graph-node
  • Indexing failed: Subgraph manifest may be invalid. Regenerate with bun run artifacts and restart: docker compose restart graph-node

Advanced verification: Check indexing status with this query at http://localhost:8000/graphql:

{
  indexingStatusForCurrentVersion(subgraphName: "atk") {
    synced
    health
    chains {
      chainHeadBlock {
        number
      }
      latestBlock {
        number
      }
    }
  }
}

The latestBlock number should match chainHeadBlock (fully synced), and health should be "healthy".
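
If you automate your setup (for example before an E2E run), you can poll the same status query until the subgraph catches up. A minimal sketch, assuming the status endpoint shown above:

const STATUS_QUERY = `{
  indexingStatusForCurrentVersion(subgraphName: "atk") {
    synced
    health
  }
}`;

// Poll the graph-node status endpoint until the subgraph reports synced and healthy.
for (let attempt = 0; attempt < 60; attempt++) {
  const res = await fetch("http://localhost:8000/graphql", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ query: STATUS_QUERY }),
  });
  const status = (await res.json()).data?.indexingStatusForCurrentVersion;
  if (status?.synced && status.health === "healthy") {
    console.log("subgraph synced");
    break;
  }
  console.log(`waiting for subgraph… (health: ${status?.health ?? "unknown"})`);
  await new Promise((r) => setTimeout(r, 5000)); // retry every 5 seconds
}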

Observability insight: The subgraph indexing dashboard shows indexing lag (target: <5 blocks) and query latency (target: <100ms). High lag indicates resource constraints—increase Docker memory or optimize subgraph mappings.

Block explorer

Open Blockscout at http://localhost:4001 to verify blockchain data visibility.

Expected content:

  • Genesis block (block #0) with timestamp and state root
  • Pre-deployed system contracts (SMART Protocol, registries) at deterministic addresses
  • Contract details pages with source code and ABI (if Hardhat verification plugin ran during genesis)
  • Empty transaction history initially

Use cases for block explorer:

  1. Debug contract interactions: Find transaction hash in dApp logs, paste into Blockscout search to see revert reason and gas usage
  2. Verify contract deployments: Check that token contract addresses match expected values from artifacts
  3. Inspect token transfers: View ERC-3643 transfer events with compliance check results
  4. Monitor network activity: See block production rate and transaction throughput during load testing

If Blockscout shows "No blocks found" or won't load, check logs:

docker compose logs blockscout-backend | grep -i error

Common issues:

  • Database connection failed: blockscout-db isn't healthy
  • Indexer stuck: Restart with docker compose restart blockscout-backend
  • Frontend can't reach backend: Port 4000 not accessible from browser (firewall issue)

Generating test data

An empty system is hard to develop against. Populate the database and blockchain with sample data using integration tests:

cd kit/dapp
bun run test:integration

What this creates:

  • System administrator account with full permissions
  • 10 investor identities with verified KYC claims (name, country, accreditation status)
  • 5 sample bonds (corporate debt, varying maturities and coupon rates)
  • 3 equity tokens (different share classes with dividend rights)
  • 2 fund tokens (open-ended funds with NAV updates)
  • 50+ transactions (issuances, transfers, compliance checks, yield distributions)

Important caveat: Integration tests call resetDatabase() to ensure clean state, so running this command erases existing data. Only use when you need fresh test data or after bun run dev:reset.

Verification after data generation:

  • dApp dashboard shows portfolio with holdings
  • TheGraph query returns tokens: Run the tokens query from the "Subgraph indexing" check above
  • Blockscout displays transaction history
  • Database contains records: docker compose exec postgres psql -U postgres -d postgres -c "SELECT COUNT(*) FROM users;"

Observability insight: Watch the transaction throughput dashboard during test data generation to see peak load handling. Per-transaction processing time should stay under 100ms with 8 GB of Docker memory allocated.

Development workflow patterns

With the environment fully operational, understand these common workflows to maximize productivity.

Making contract changes

Smart contract changes require rebuilding artifacts and restarting the blockchain to deploy updated genesis state:

  1. Edit Solidity files in kit/contracts/contracts/ (e.g., add new compliance rule to TransferRestrictions.sol)
  2. Regenerate artifacts: Run bun run artifacts from project root
  3. Restart blockchain: bun run dev:reset (stops containers, deletes volumes, regenerates artifacts, starts fresh)
  4. Restart dApp: cd kit/dapp && bun run dev (the dApp auto-reloads when contract types change)

Why full reset? Anvil stores blockchain state in Docker volumes. Changing contracts without resetting creates an inconsistent state where old contract code runs at genesis addresses. The reset ensures your genesis file matches deployed code.

Faster iteration alternative: For rapid prototyping, deploy modified contracts to new addresses with Hardhat scripts instead of modifying genesis. This preserves existing data while testing new code. See kit/contracts/scripts/hardhat/deploy-*.ts for examples.
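
As a rough sketch of that pattern (the contract name and constructor arguments are placeholders, and this assumes the Hardhat ethers plugin; the kit's own scripts may use a different flow):

import { ethers } from "hardhat";

async function main() {
  // Deploy a modified contract to a fresh address instead of editing the genesis file.
  const Factory = await ethers.getContractFactory("MyModifiedToken"); // placeholder name
  const token = await Factory.deploy();                               // add constructor args as needed
  await token.waitForDeployment();
  console.log("deployed to:", await token.getAddress());
}

main().catch((err) => {
  console.error(err);
  process.exitCode = 1;
});

Run a script like this against the local node with npx hardhat run <script> --network localhost (or whatever network name the kit's Hardhat config uses).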

Making dApp changes

Frontend and backend code hot-reloads automatically without restarting services:

  1. Edit TypeScript/React files in kit/dapp/src/ (components, routes, API procedures)
  2. Changes appear instantly via Vite HMR (React state preserved when possible)
  3. Database schema changes require migration:
    cd kit/dapp
    bun run db:generate  # Generate migration from schema changes
    bun run db:migrate   # Apply migration to PostgreSQL

When to restart the dApp:

  • After pulling Git changes that modify dependencies
  • When environment variables change in .env.local
  • If HMR stops working (rare, usually after branch switches)

Debugging tip: Keep the dApp terminal visible. API errors, database query logs, and contract interaction traces appear here with clickable file links.

Making subgraph changes

TheGraph mappings require regenerating artifacts and restarting the indexer:

  1. Edit mappings in kit/subgraph/src/ (AssemblyScript event handlers)
  2. Update GraphQL schema in kit/subgraph/schema.graphql if adding new entities
  3. Regenerate artifacts: bun run artifacts (regenerates subgraph manifest with new IPFS hash)
  4. Restart services: bun run dev:reset (Graph node re-indexes from block 0 with new mappings)

Incremental testing alternative: Deploy updated subgraph to running Graph node without full reset:

cd kit/subgraph
bun run deploy:local

This preserves dApp state while updating indexing logic. Use during development, then run full dev:reset before committing to verify clean-slate behavior.

Observability tip: The subgraph indexing dashboard shows handler execution times. Slow handlers (>500ms) cause indexing lag—optimize by reducing contract calls in mappings or pre-computing values in smart contracts.
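
For orientation, a handler in kit/subgraph/src/ is an AssemblyScript function that graph-node calls for each matching event. A minimal sketch (the entity, event, and import paths are hypothetical, not the kit's actual schema):

import { Transfer } from "../generated/Token/Token";  // hypothetical generated event class
import { TransferEvent } from "../generated/schema";  // hypothetical entity from schema.graphql

export function handleTransfer(event: Transfer): void {
  // Entity IDs must be unique; transaction hash + log index is a common pattern.
  const id = event.transaction.hash.toHex() + "-" + event.logIndex.toString();
  const entity = new TransferEvent(id);
  entity.from = event.params.from;
  entity.to = event.params.to;
  entity.value = event.params.value;
  entity.blockNumber = event.block.number;
  entity.save();
}

Handlers run synchronously while indexing each block, which is why the tip above flags slow handlers: extra contract calls inside a mapping directly increase indexing lag.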

Clean restart procedure

When you need to completely reset the development environment (corrupted state, debugging strange issues, or starting a new feature branch):

# From project root
bun run dev:reset

What this does:

  1. Stops all Docker containers gracefully
  2. Removes containers and deletes named volumes (destroys all data: blockchain state, database records, MinIO files)
  3. Regenerates artifacts from current code (bun run artifacts)
  4. Starts fresh Docker services with health checks
  5. Waits for all services to report healthy

Completion time: 2-3 minutes (includes image pulls if Docker pruned cache, artifact generation, volume initialization)

When to use this:

  • Blockchain state corrupted (Anvil won't start)
  • Database migrations failed and manual rollback is complex
  • Testing clean installation experience
  • Switching between feature branches with incompatible schema changes
  • After pulling major upstream changes that modify genesis or database schema

Warning: This destroys all local data. If you have test accounts or deployed contracts you want to preserve, export them first:

# Export database
docker compose exec postgres pg_dump -U postgres postgres > backup.sql

# Export contract addresses
cat kit/contracts/.generated/genesis.json > genesis-backup.json

Restore after reset by importing the database dump and redeploying contracts with Hardhat scripts.

Observability note: After reset, observability dashboards reset to empty graphs. Generate test data (bun run test:integration) to populate metrics for dashboard development.

Expected results checklist

After completing this guide, verify these outcomes to confirm a working setup:

Running services (verify with docker compose ps):

  • ✅ Anvil producing blocks every 12 seconds
  • ✅ PostgreSQL accepting connections on port 5432
  • ✅ TheGraph node indexed genesis block
  • ✅ TanStack Start dApp serving on http://localhost:3000
  • ✅ Blockscout explorer showing genesis contracts
  • ✅ All services report (healthy) status

Functional verification:

  • ✅ dApp loads without console errors (warnings acceptable)
  • ✅ Network indicator shows "ATK (Chain ID: 1337)" with incrementing block height
  • ✅ Database migrations apply successfully (bun run db:migrate)
  • ✅ TheGraph GraphQL playground responds at http://localhost:8000
  • ✅ Blockscout displays genesis block and pre-deployed contracts
  • ✅ Test data generation creates users and tokens (bun run test:integration)

Development capabilities:

  • ✅ Contract changes trigger recompilation and type regeneration
  • ✅ Frontend code changes hot-reload without page refresh
  • ✅ Database schema changes apply via Drizzle migrations
  • ✅ Subgraph mappings update and re-index events
  • ✅ API procedures (ORPC routes) reload on file save

Observability (if enabled):

  • ✅ Prometheus scrapes metrics from services
  • ✅ Grafana dashboards display transaction latency and gas consumption
  • ✅ Loki aggregates logs from Docker containers

If any item fails, consult the troubleshooting section below or review the specific setup step. The most common issues are port conflicts, insufficient Docker memory, and version mismatches.

Next steps

Now that your environment is operational, explore these paths:

For developers:

  • Code structure — Understand the monorepo layout and module boundaries
  • Extending contracts — Add custom compliance rules or token features
  • API integration — Build external integrations with ORPC procedures

For product managers/QA:

  • Getting started — Deploy your first token and complete a transaction
  • Use cases — Explore bond, equity, and fund workflows

For DevOps:

  • Deployment guide — Deploy to Kubernetes with Helm charts
  • Observability setup — Configure Prometheus, Grafana, and alerting rules

Testing:

  • Run unit tests: bun run test (executes Vitest across all packages)
  • Run E2E tests: cd kit/e2e && bun run test:e2e:ui (Playwright browser tests)
  • Check code quality: bun run ci (format, lint, typecheck, build, test)

Troubleshooting common issues
