Testing and quality gates

Quality assurance in financial infrastructure is non-negotiable. A compliance bug that lets unverified investors purchase securities destroys institutional trust. A performance regression that makes the platform unusable during market hours costs customers millions.

ATK treats quality as a first-class architectural concern enforced through automated testing, continuous integration gates, and production monitoring. Every code change runs through a gauntlet of validation before reaching production.

Related topics: Security validation, Compliance certification

Testing philosophy

Traditional tokenization platforms test manually, if at all. Compliance checks happen in spreadsheets. Performance testing consists of "it worked on my machine." Security audits happen once before launch, then never again as the codebase evolves.

ATK inverts this model. Testing is automated, continuous, and comprehensive. The CI pipeline executes thousands of tests on every commit. Quality gates block merging when tests fail, code coverage drops, or performance regresses. Production monitoring detects issues before users notice them.

Testing pyramid

ATK implements a balanced testing pyramid that validates correctness at multiple levels:

[Diagram: testing pyramid, with unit tests at the base, integration tests in the middle, and end-to-end tests at the top]

Unit tests (foundation)

Unit tests validate individual functions and components in isolation. They run fast (entire suite completes in under 60 seconds) and provide rapid feedback during development.

Smart contract unit tests - Foundry tests verify contract behavior under every code path. Each compliance module tests valid inputs (transfers succeed), invalid inputs (transfers revert), and edge cases (transfers at exactly the cap). Test coverage targets 95%+ for contract code.

Frontend component tests - Vitest tests validate React components with mocked dependencies. Form validation logic, state management, and UI interactions test without touching real APIs or blockchains.
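
As an illustration, a component test might look like the following sketch; the TransferForm component, its labels, and its validation message are hypothetical stand-ins rather than actual ATK code.

import { describe, it, expect, vi } from "vitest";
import { render, screen, fireEvent } from "@testing-library/react";
import { TransferForm } from "./TransferForm"; // hypothetical component

describe("TransferForm", () => {
  it("rejects a zero amount before calling the API", async () => {
    const onSubmit = vi.fn(); // mocked dependency; no real API or blockchain
    render(<TransferForm onSubmit={onSubmit} />);

    fireEvent.change(screen.getByLabelText(/amount/i), { target: { value: "0" } });
    fireEvent.click(screen.getByRole("button", { name: /transfer/i }));

    // Validation should surface an error and never reach the submit handler.
    expect(await screen.findByText(/greater than zero/i)).toBeDefined();
    expect(onSubmit).not.toHaveBeenCalled();
  });
});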

API procedure tests - ORPC procedures test with mocked database queries and blockchain interactions. Each procedure validates happy paths (correct inputs return expected results), error paths (invalid inputs throw typed errors), and authorization (unauthorized users receive permission errors).
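
The pattern looks roughly like this sketch, which tests a hypothetical handler function with injected dependencies (the actual ORPC wiring is omitted):

import { describe, it, expect, vi } from "vitest";

// Hypothetical handler: the business logic behind a procedure, written as a
// plain async function with injected dependencies so it tests in isolation.
async function getAssetHandler(
  deps: { db: { findAsset: (id: string) => Promise<{ id: string } | null> } },
  input: { id: string },
) {
  const asset = await deps.db.findAsset(input.id);
  if (!asset) throw new Error("NOT_FOUND");
  return asset;
}

describe("asset.get", () => {
  it("returns the asset on the happy path", async () => {
    const db = { findAsset: vi.fn().mockResolvedValue({ id: "bond-1" }) };
    await expect(getAssetHandler({ db }, { id: "bond-1" })).resolves.toEqual({ id: "bond-1" });
  });

  it("throws a typed error when the asset does not exist", async () => {
    const db = { findAsset: vi.fn().mockResolvedValue(null) };
    await expect(getAssetHandler({ db }, { id: "missing" })).rejects.toThrow("NOT_FOUND");
  });
});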

Integration tests (middle layer)

Integration tests validate that multiple components work together correctly. They run slower than unit tests (suite completes in 5-10 minutes) but catch interface mismatches and data flow issues.

Database integration tests - Drizzle ORM queries test against a real PostgreSQL database. Tests verify that migrations run successfully, indexes exist, and complex queries return correct results under realistic data volumes.
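
A minimal sketch of such a test, assuming a hypothetical investors table and a TEST_DATABASE_URL environment variable pointing at the test database:

import { describe, it, expect } from "vitest";
import { drizzle } from "drizzle-orm/node-postgres";
import { eq } from "drizzle-orm";
import { Pool } from "pg";
import { investors } from "./schema"; // hypothetical Drizzle table definition

// A real PostgreSQL connection -- no mocks at this layer.
const db = drizzle(new Pool({ connectionString: process.env.TEST_DATABASE_URL }));

describe("investor queries", () => {
  it("round-trips a row through the real database", async () => {
    await db.insert(investors).values({ id: "inv-1", wallet: "0xabc", verified: true });
    const rows = await db.select().from(investors).where(eq(investors.id, "inv-1"));
    expect(rows[0]?.wallet).toBe("0xabc");
  });
});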

Subgraph integration tests - The Graph subgraph tests index events from a local blockchain, validate entity relationships, and execute GraphQL queries. This ensures the subgraph schema stays synchronized with smart contract events.
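
For example, a small helper can query a locally running graph-node; the endpoint path and entity names here are assumptions:

// Assumed local graph-node endpoint; adjust to the subgraph's deployed name.
const SUBGRAPH_URL = "http://localhost:8000/subgraphs/name/atk/kit";

async function querySubgraph<T>(query: string): Promise<T> {
  const res = await fetch(SUBGRAPH_URL, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ query }),
  });
  const { data, errors } = await res.json();
  if (errors) throw new Error(JSON.stringify(errors));
  return data as T;
}

// After emitting a Transfer event on the local chain, a test polls this helper
// until the entity appears, then asserts on its fields.
const result = await querySubgraph<{ transfers: { amount: string }[] }>(
  "{ transfers(first: 1) { amount } }",
);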

API integration tests - End-to-end API tests execute real ORPC procedures against test databases and mock blockchains. Tests verify request validation, business logic execution, database writes, and response serialization.

End-to-end tests (top layer)

End-to-end tests validate complete user workflows from browser to blockchain. They run slowest (suite completes in 30-40 minutes) but provide the highest confidence that features work as users experience them.

Playwright UI tests - Automated browser tests execute critical user journeys: onboarding a new user, issuing a bond, minting tokens to an investor, executing a transfer, viewing transaction history. Tests run in headless Chrome against a local development environment with real smart contracts on a local blockchain.
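
A representative journey test might look like this sketch; the route, labels, and success message are illustrative assumptions:

import { test, expect } from "@playwright/test";

test("issuer mints tokens to a verified investor", async ({ page }) => {
  // Route, form labels, and addresses below are assumptions, not ATK's actual UI.
  await page.goto("http://localhost:3000/assets/bond-2030");
  await page.getByRole("button", { name: "Mint" }).click();
  await page.getByLabel("Recipient").fill("0x0000000000000000000000000000000000000001");
  await page.getByLabel("Amount").fill("1000");
  await page.getByRole("button", { name: "Confirm" }).click();
  // The assertion waits until the UI reflects the mined transaction.
  await expect(page.getByText("1,000 tokens minted")).toBeVisible();
});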

Multi-actor scenarios - Complex tests simulate multiple users interacting with the platform simultaneously. One user issues a token while another attempts to transfer it. Compliance officers update verification status while investors execute trades. These tests catch race conditions and state synchronization bugs.
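
Playwright supports this directly through isolated browser contexts; the following sketch (with assumed routes and UI text, and login steps omitted) runs two actors against the same environment:

import { test, expect } from "@playwright/test";

test("a suspended investor cannot execute a transfer", async ({ browser }) => {
  // Two isolated sessions: a compliance officer and an investor.
  const officer = await (await browser.newContext()).newPage();
  const investor = await (await browser.newContext()).newPage();

  // The officer suspends the investor's verification status...
  await officer.goto("http://localhost:3000/compliance/investors/inv-1");
  await officer.getByRole("button", { name: "Suspend verification" }).click();

  // ...so the investor's transfer attempt must now be rejected.
  await investor.goto("http://localhost:3000/assets/bond-2030/transfer");
  await investor.getByRole("button", { name: "Transfer" }).click();
  await expect(investor.getByText(/not verified/i)).toBeVisible();
});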

XvP settlement tests - Cross-chain payment-versus-payment settlement tests validate the most complex workflow: coordinating simultaneous transfers of security tokens and payment tokens with atomic settlement guarantees.

Static analysis (foundation)

Static analysis runs before any tests execute. It catches type errors, linting violations, and code style issues immediately.

TypeScript strict mode - The entire codebase compiles with TypeScript strict mode enabled (strict: true, noUncheckedIndexedAccess, noImplicitAny). This eliminates entire classes of runtime errors (null reference errors, undefined property access, type mismatches) at compile time.
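
For example, noUncheckedIndexedAccess turns a potential runtime crash into a compile error:

// Under noUncheckedIndexedAccess, indexing returns T | undefined, so the
// compiler forces a check before the value can be used as a string.
const holders: string[] = [];
const first = holders[0];  // type: string | undefined
// first.toLowerCase();    // compile error: 'first' is possibly 'undefined'
if (first !== undefined) {
  console.log(first.toLowerCase()); // narrowed to string; safe to call
}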

ESLint and Solhint - Linters enforce code quality rules across TypeScript and Solidity. Rules prevent common mistakes (unused variables, missing return types, dangerous patterns). The CI pipeline fails on any lint warning.

Prettier formatting - Code formatting is automated and enforced. Developers cannot commit unformatted code. This eliminates formatting debates and ensures consistent style across the codebase.

Quality gates in CI

The continuous integration pipeline enforces quality gates that block merging when standards are not met.

[Diagram: CI pipeline quality gates, from static analysis through tests to merge]

Quality gate checklist

Each pull request must pass these gates before merging:

Gate | Tool | Threshold | Blocks merge?
Code formatting | Prettier | 100% formatted | ✅
TypeScript compile | bun run typecheck | Zero errors | ✅
Solidity compile | Foundry + Hardhat | Zero errors | ✅
Code generation | GraphQL Codegen, ABI exports | Zero errors | ✅
Linting | ESLint (TS), Solhint (Sol) | Zero warnings | ✅
Type checking | TypeScript strict mode | Zero errors | ✅
Build | Turborepo build task | All packages succeed | ✅
Unit tests | Vitest (frontend), Foundry | 100% passing | ✅
Integration tests | Subgraph, Database | 100% passing | ✅
E2E tests | Playwright | 100% passing | ✅
Code coverage | Vitest coverage, Foundry | >80% lines/branches | ✅
Gas benchmarks | Foundry gas reports | <5% regression | ✅
Security scanning | Gitleaks, Trivy | No high/critical | ✅
Bundle size | Vite bundle analyzer | <10% growth | ⚠️
Performance budgets | Lighthouse CI | Score >90 | ⚠️

Gates marked with ✅ block merging automatically. Gates marked with ⚠️ trigger manual review but do not block.

Local development validation

Developers run quality checks locally before pushing code:

# Full CI suite (takes 10-15 minutes)
bun run ci

# Fast feedback loop (takes 2-3 minutes)
bun run ci:base

# Contract-specific validation
bun run --cwd kit/contracts test

# Frontend-specific validation
bun run --cwd kit/dapp test:unit

The ci script executes the same checks that run in GitHub Actions. This prevents developers from pushing code that will fail CI, wasting time and pipeline resources.

Test data management

Realistic test data is critical for catching bugs that only manifest with production-like data volumes and patterns.

Test data generation

Fixture factories - Tests use factory functions to generate realistic test data: investors with verified identities, tokens with compliance rules, transactions with varying amounts and timestamps.
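
A typical factory is a few lines of TypeScript; the Investor shape below is a hypothetical example:

interface Investor {
  id: string;
  wallet: string;
  verified: boolean;
  country: string;
}

let seq = 0;
function makeInvestor(overrides: Partial<Investor> = {}): Investor {
  seq += 1;
  return {
    id: `inv-${seq}`,
    wallet: `0x${seq.toString(16).padStart(40, "0")}`, // deterministic unique address
    verified: true,
    country: "BE",
    ...overrides, // tests override only the fields they care about
  };
}

// Usage: an unverified investor for a compliance-rejection test.
const blocked = makeInvestor({ verified: false });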

Parameterized testing - Tests run against multiple data sets to validate behavior across different scenarios. Bond tests run with various maturity dates, coupon rates, and redemption schedules. Equity tests run with different voting configurations and dividend schedules.

Edge case coverage - Tests explicitly validate boundary conditions: zero amounts, maximum amounts, empty lists, single-item lists, very long lists (1,000+ items).

Test isolation

Database cleanup - Integration tests run against a fresh PostgreSQL database. Each test suite drops and recreates tables to ensure isolation. Tests cannot interfere with each other via shared state.
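
A lightweight variant of this reset, truncating shared tables between tests rather than recreating them (sketched here with Drizzle and hypothetical table names):

import { beforeEach } from "vitest";
import { sql } from "drizzle-orm";
import { db } from "./test-db"; // hypothetical shared test database handle

beforeEach(async () => {
  // Wipe shared tables so each test starts from a clean state.
  await db.execute(sql`TRUNCATE TABLE investors, transfers RESTART IDENTITY CASCADE`);
});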

Blockchain snapshots - Contract tests use Foundry's snapshot and revert functionality. Each test starts from a known blockchain state, executes operations, validates results, then reverts to the snapshot. This keeps tests fast and isolated.

Parallel test execution - Unit tests and integration tests run in parallel to maximize CI throughput. Tests are designed to avoid side effects that break parallelism (shared files, global state, external services).

Performance validation

Performance testing runs continuously to catch regressions before they reach production.

Load testing

K6 performance tests - Automated load tests simulate realistic user behavior: browsing assets, executing transfers, viewing dashboards. Tests run with 100, 500, and 1,000 concurrent users. Response times must stay within defined percentile targets (P50 <100ms, P95 <300ms, P99 <500ms).
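
A K6 script is plain JavaScript/TypeScript executed by the k6 binary; this sketch encodes the percentile targets above as thresholds, with an assumed endpoint:

import http from "k6/http";
import { check, sleep } from "k6";

export const options = {
  vus: 100, // concurrent virtual users; also run at 500 and 1,000
  duration: "5m",
  thresholds: {
    // Fail the run when latency percentiles exceed the stated targets.
    http_req_duration: ["p(50)<100", "p(95)<300", "p(99)<500"],
  },
};

export default function () {
  const res = http.get("http://localhost:3000/api/assets"); // assumed endpoint
  check(res, { "status is 200": (r) => r.status === 200 });
  sleep(1); // think time between iterations
}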

Spike testing - Spike tests validate graceful degradation under sudden load increases. The system should slow down predictably (serve cached data, queue background jobs) rather than fail catastrophically (return 500 errors, drop connections).

Soak testing - Soak tests run at moderate load for 24 hours to detect memory leaks, connection pool exhaustion, and gradual performance degradation that only manifest under sustained load.

Gas benchmarking

Foundry gas reports - Every contract test measures gas consumption. CI compares gas costs against the previous commit:

  • Increases <2% = acceptable (minor changes)
  • Increases 2-5% = warning (requires justification)
  • Increases >5% = blocks merge (requires optimization or approval)

Gas regression tracking - A dashboard tracks gas costs over time for key operations: minting tokens, executing transfers, updating compliance rules. Unexpected increases trigger alerts and investigation.

Bundle size monitoring

Vite bundle analysis - Frontend builds generate bundle size reports. CI compares bundle sizes against the previous commit:

  • Increases <5% = acceptable
  • Increases 5-10% = warning (review dependencies)
  • Increases >10% = manual review required (explain why)

Lighthouse CI - Automated Lighthouse audits run on every pull request. Performance scores must stay above 90. Regressions trigger warnings but do not block merging (manual review required).

Quality metrics and reporting

Quality is measured and reported continuously to maintain visibility into system health.

Key quality metrics

Metric | Target | Current | Trend
Unit test coverage | >80% | 87% | ↑
Integration test coverage | >70% | 74% | →
E2E test success rate | 100% | 100% | →
CI pipeline success rate | >95% | 97% | ↑
Mean time to detect (MTTD) | <5 min | 3 min | ↓
Mean time to resolve (MTTR) | <30 min | 22 min | ↓
Production incident count | <5/month | 2/month | ↓
Security vulnerability count | 0 high/crit | 0 | →
Gas cost (mint operation) | <150k gas | 142k | ↓
API response time P95 | <300ms | 287ms | →
Frontend Lighthouse score | >90 | 94 | ↑

Continuous improvement

Monthly quality reviews - Engineering teams review quality metrics monthly. Metrics trending in the wrong direction trigger investigation and corrective action.

Quarterly goal setting - Quality goals are set quarterly as part of OKR planning. Example: "Increase unit test coverage from 85% to 90%" or "Reduce MTTR from 30 minutes to 20 minutes."

Knowledge sharing - Teams share quality improvements across the engineering organization: new testing patterns, useful tools, effective debugging techniques. This propagates best practices and raises baseline quality.

Monitoring and observability

Automated testing catches most bugs before production, but monitoring catches the rest.

Application monitoring

OpenTelemetry instrumentation - The API exports traces, metrics, and logs to a centralized observability platform. Traces reveal latency bottlenecks. Metrics track request rates, error rates, and resource utilization. Logs capture detailed diagnostic information.
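
Bootstrapping this instrumentation is a few lines with the OpenTelemetry Node SDK; the service name and collector endpoint below are assumptions, not ATK's actual configuration:

import { NodeSDK } from "@opentelemetry/sdk-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";
import { getNodeAutoInstrumentations } from "@opentelemetry/auto-instrumentations-node";

const sdk = new NodeSDK({
  serviceName: "atk-api", // assumed service name
  traceExporter: new OTLPTraceExporter({ url: "http://otel-collector:4318/v1/traces" }),
  instrumentations: [getNodeAutoInstrumentations()], // auto-instruments HTTP, pg, etc.
});

sdk.start(); // traces now flow to the collector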

Real User Monitoring (RUM) - The frontend reports Core Web Vitals and error rates from real user sessions. This reveals issues that synthetic tests miss: browser-specific bugs, network latency spikes, device-specific performance problems.

Error tracking - Unhandled exceptions are captured and reported to Sentry. Errors include full stack traces, user context, and session replay data. This enables rapid debugging without requiring users to file detailed bug reports.

Blockchain monitoring

Event indexing lag - Alerts trigger when the subgraph falls more than 10 blocks behind the blockchain tip. This indicates indexing performance issues or blockchain node problems.
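
The check itself is simple; this sketch compares the subgraph's _meta block against the chain head (endpoints assumed, alert delivery omitted):

const SUBGRAPH_URL = "http://localhost:8000/subgraphs/name/atk/kit"; // assumed
const RPC_URL = "http://localhost:8545"; // assumed
const MAX_LAG_BLOCKS = 10;

async function subgraphBlock(): Promise<number> {
  const res = await fetch(SUBGRAPH_URL, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ query: "{ _meta { block { number } } }" }),
  });
  const { data } = await res.json();
  return data._meta.block.number;
}

async function chainHead(): Promise<number> {
  const res = await fetch(RPC_URL, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "eth_blockNumber", params: [] }),
  });
  const { result } = await res.json();
  return parseInt(result, 16); // hex block number to integer
}

const lag = (await chainHead()) - (await subgraphBlock());
if (lag > MAX_LAG_BLOCKS) {
  console.error(`subgraph is ${lag} blocks behind the chain head -- raise an alert`);
}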

Transaction failure rate - Alerts trigger when the transaction failure rate exceeds 5%. This indicates contract bugs, gas estimation errors, or blockchain congestion.

Gas price tracking - Alerts trigger when average gas prices exceed thresholds that make operations uneconomical. This enables proactive communication to users about increased transaction costs.

Conclusion

Quality assurance in ATK is not a phase that happens before launch—it is a continuous discipline embedded into every aspect of development. Automated testing validates correctness at multiple levels. CI gates enforce standards before code reaches production. Monitoring catches issues that escape testing.

This approach enables ATK to operate in regulated markets where failures have severe consequences. The result is a platform that meets institutional standards for reliability, security, and regulatory compliance.

For security-specific validation, see Security validation. For compliance certification requirements, see Compliance certification.
