Ardra automation, crystal clear
Ardra is the command center for airdrop farming on perpetual DEXs. Pilots launch venue-specific bots, track funding and slippage, and grow referral networks without leaving a single interface. These docs map every layer, from SIWS authentication through the referral economy, so teams can scale volume with confidence.
Smart scalping on Aster DEX with deterministic rollouts and heartbeat monitoring.
Direct rebates and network shares stack automatically for every pilot and referral.
Postgres backbone with ingest fallbacks so leaderboard jobs never silently fail.
Interconnected perp DEX orbit
Every venue links into Ardra's control plane. The roster below shows the active venues and how liquidity, custody, and referrals stay synchronized while pilots rotate strategies between them.
Aster
Latency-tuned scalper that anchors the stack.
Pacifica
Referral rotation with dual-wallet orchestration.
Backpack
Exchange-grade custody with wallet-native flows.
Hyperliquid
Onchain L1 venue with latency routing.
Avantis
Global leverage surface with zero-fee tiers.
ApeX
Exchange-native spreads aligned with ApeX tiers.
Product pillars
The experience is built around three non-negotiables: automation that genuinely models venue incentives, a referral engine stitched into every flow, and visible progression toward airdrops and partner rewards.
Volume-aligned automation
Bots are tuned for point farming on perpetual DEXs. Strategies are latency-aware, read funding, and rotate through curated symbol sets.
Network flywheel
Referral capture is native. Wallet logins, cookies, and Prisma relations keep every rebate, fee split, and invite mapped to a refCode.
Transparent progression
The leaderboard packages direct and network points. Operators unlock tiers with clear rules—10% direct rebate, 20% network share, up to 50% for top partners.
Automation stack
Ardra links local agents, Next.js APIs, and ingestion utilities. The breakdown below shows responsibilities per layer so teams know where to extend or plug in.
Client-side agents
Automation runs close to the user. The extension bundle and `.aster` workspace let pilots launch bots from their own machines, keeping custody local.
- Config files saved per user under `.aster/<userId>/config.json` via the Aster manager.
- Browser extension blueprints ensure deterministic execution windows and visual parity.
- Anime.js-driven UI states mirror command center status changes in real time.
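As a rough sketch of the per-user config layout described above (the `.aster/<userId>/config.json` path comes from these docs; the function bodies and the temp-directory base are illustrative assumptions):

```typescript
import { mkdirSync, writeFileSync, readFileSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Hypothetical sketch: persist a bot config under .aster/<userId>/config.json.
// BASE_DIR is a stand-in for the real workspace root.
const BASE_DIR = join(tmpdir(), ".aster");

function writeUserConfig(userId: string, config: Record<string, unknown>): string {
  const dir = join(BASE_DIR, userId);
  mkdirSync(dir, { recursive: true });
  const path = join(dir, "config.json");
  // Secrets stay on the local filesystem; nothing is proxied over the network.
  writeFileSync(path, JSON.stringify(config, null, 2), "utf8");
  return path;
}

function readUserConfig(userId: string): Record<string, unknown> {
  const path = join(BASE_DIR, userId, "config.json");
  return JSON.parse(readFileSync(path, "utf8"));
}
```

The point of the layout is that each pilot's secrets live in their own directory, so the orchestrator only ever needs a path, never the key material itself.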
Control plane APIs
Next.js App Router exposes REST hooks (`/api/aster/*`, `/api/users/*`) to manage bot lifecycles: start, stop, fetch logs, and sync wallet metadata.
- All routes gate on `auth()` with instant `401` responses for unauthenticated access.
- Process orchestration relies on Node's `child_process.spawn` with in-memory log buffers.
- Error paths resolve to JSON payloads so the UI can surface precise remedial actions.
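A minimal sketch of the gating pattern in the bullets above (the session shape and wrapper are illustrative, not the actual route handlers):

```typescript
// Illustrative session gate: unauthenticated requests resolve to an instant
// 401 JSON payload; authenticated ones fall through to the handler.
type Session = { userId: string } | null;

type ApiResult = { status: number; body: Record<string, unknown> };

function withAuth(
  session: Session,
  handler: (userId: string) => ApiResult
): ApiResult {
  if (!session) {
    // Error paths resolve to JSON so the UI can surface a precise remedy.
    return { status: 401, body: { error: "Unauthorized" } };
  }
  return handler(session.userId);
}
```

Centralizing the check keeps every route's unauthenticated behavior identical, which is what makes the instant-`401` guarantee cheap to uphold.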
Data orchestration
Leaderboard ingestion merges on-chain identifiers and spreadsheet uploads. Prisma is the single source of truth with Supabase Postgres underneath.
- Import scripts normalize venue wallet identifiers with referral ownership resolution.
- Saved uploads become immutable snapshots (`saveImportedLeaderboard`) for auditability.
- Fallback file stores (`lib/user-store.ts`) protect ingest pipelines when the DB is offline.
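The normalization step could be sketched like this (field names and the lowercase rule are assumptions; the real import scripts may differ):

```typescript
// Hypothetical sketch: map imported venue wallet identifiers to the refCode
// of the pilot who owns them, so volume lands on the right leaderboard row.
type ImportRow = { wallet: string; volume: number };
type Attributed = { refCode: string; wallet: string; volume: number };

function attributeRows(
  rows: ImportRow[],
  walletToRefCode: Map<string, string>
): { attributed: Attributed[]; orphans: ImportRow[] } {
  const attributed: Attributed[] = [];
  const orphans: ImportRow[] = [];
  for (const row of rows) {
    // Identifiers are normalized to lowercase before the ownership lookup.
    const wallet = row.wallet.toLowerCase();
    const refCode = walletToRefCode.get(wallet);
    if (refCode) attributed.push({ refCode, wallet, volume: row.volume });
    else orphans.push(row); // kept aside rather than silently dropped
  }
  return { attributed, orphans };
}
```

Keeping unmatched rows in a separate bucket is what lets the pipeline fail loudly instead of silently, matching the "never silently fail" goal above.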
Aster
Status: Live
- Python strategy orchestrated through `lib/aster-manager`.
- Smart scalping presets, funding awareness, and rotation guards.
- Runbook: store keys, write config, POST `/api/aster/start` to spawn the session.
Hyperliquid
Status: Staging
- Latency-optimized connector with point optimizer toggles.
- Launch-ready UI cards already published under `/bots/hyperliquid`.
- Queueing behind final handler tests for local custody assumptions.
Backpack
Status: Staging
- Wallet-exchange blend demands dual session storage.
- Config scaffolding mirrors Aster to reuse orchestration tooling.
- Compliance checks tracked alongside regulated venue rollout.
ApeX, Pacifica, Paradex, Avantis, StandX, Lighter, OUTKAST
Status: In design
- Each bot ships with venue-specific taglines and risk rails already visible on `/bots`.
- Playbooks include referral schema, liquidity guards, and margin heuristics.
- Ops team maintains parity so switching venues keeps the same control plane.
Command center
The command center is the pilot's bridge. It exposes API-backed controls and real-time UI states so operators can configure bots, launch sessions, and inspect telemetry without leaving the dashboard.
Configuration workspace
- Wallet inventory stored in `profileWallets` and exposed through `/api/users/wallets`.
- Strategy presets saved on disk via `writeUserConfig`, keeping API secrets off the cloud.
- Form state mirrors diffed configs so operators know what changed before relaunching.
Runtime observability
- Session heartbeat endpoints (`/api/aster/status`) report PID, uptime, and last error.
- Log buffers stream the final 1,000 lines per user so troubleshooting stays contextual.
- UI cards animate with Anime.js to highlight state changes the moment they land.
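The bounded log buffer described above can be sketched in a few lines (the class name and interface are assumptions):

```typescript
// Sketch of a per-user log buffer that retains only the most recent lines,
// so status endpoints return bounded, contextual output.
class LogBuffer {
  private lines: string[] = [];
  constructor(private readonly capacity = 1000) {}

  push(line: string): void {
    this.lines.push(line);
    // Drop the oldest entries once the cap is exceeded.
    if (this.lines.length > this.capacity) {
      this.lines.splice(0, this.lines.length - this.capacity);
    }
  }

  tail(): string[] {
    return [...this.lines];
  }
}
```

A fixed cap keeps memory per session constant no matter how chatty a bot gets, while still preserving the lines most likely to explain a failure.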
Operational safeguards
- Start requests validate config existence, blocking accidental launches without keys.
- Stop commands always fire, even if the process already exited, guaranteeing cleanup.
- Login logs recorded via Prisma provide a tamper-evident audit for every auth method.
Ecosystem visibility
- Leaderboard entries include referrer and referred codes for multi-hop attribution.
- Partner filters (soon) let DAOs slice data by community or campaign.
- Exports align with spreadsheet imports to reconcile incentives on both sides.
Referral & points economy
Ardra's economics reward execution and network growth simultaneously. The logic used on the marketing site is the same logic that powers the leaderboard and partner dashboards.
Every trade a pilot executes through Ardra returns 10% of the venue fees. Leaderboard rows expose `feesGenerated` so partners can reconcile externally.
Invites captured by `ReferralCapture` feed Prisma relations (`Referral` model). Points multiply by 0.20 on referred volume and store as `referralFees`.
Tier unlock logic lives in the leaderboard importer. Metadata reserves room for future multipliers so DAO votes can adjust shares without refactoring UI.
1. Referral codes originate from `ensureUserRefCode` and attach on login.
2. Volume imports reference wallet identifiers, reconcile them with owner refCodes, and compute point totals.
3. The leaderboard API serves aggregated rows with `points`, `feesGenerated`, and `referralPoints`, enabling instant UI updates and CSV exports.
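The arithmetic behind the shares (10% direct rebate, 20% on referred volume, as stated above) can be sketched as a pure function; the function and field names are illustrative, not the actual importer code:

```typescript
// Illustrative share math: direct rebate on a pilot's own fees plus a
// network share on fees generated by referred pilots.
const DIRECT_REBATE = 0.10;  // 10% of venue fees back to the pilot
const NETWORK_SHARE = 0.20;  // 20% share on referred volume

function computeShares(ownFees: number, referredFees: number) {
  const directRebate = ownFees * DIRECT_REBATE;
  const networkShare = referredFees * NETWORK_SHARE;
  return { directRebate, networkShare, total: directRebate + networkShare };
}
```

For example, $100 of own fees and $50 of referred fees yields a $10 direct rebate plus a $10 network share.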
Data & identity layer
Ardra uses Prisma as the abstraction over a Supabase-hosted Postgres cluster. Authentication runs through NextAuth with a Solana Sign-In workflow (SIWS); pilots verify ownership of their wallet and every session gets recorded with IP, user agent, and method metadata for auditability.
Wallet addresses persist in the `Wallet` table with chain metadata, while `Referral` relations bind referrers and referees. Production traffic stores Solana addresses exclusively, and SIWS nonces live in the `SiweNonce` table to block replay attacks. The application reuses the same Prisma client across requests (`lib/prisma.ts`) to avoid connection churn.
When the database is unreachable, the ingest layer falls back to JSON snapshots (`lib/user-store.ts`) so leaderboard updates never silently fail. Once connectivity returns, data merges back into Postgres without duplicating refCodes.
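A merge that avoids duplicating refCodes could look roughly like this (the row shape and precedence rule are assumptions; the doc only states that merges must not duplicate refCodes):

```typescript
// Hypothetical sketch: fold offline JSON snapshot rows back into the primary
// store without creating duplicate refCode entries. Rows already present in
// the primary store win; snapshot-only rows are appended.
type Row = { refCode: string; points: number };

function mergeSnapshot(primary: Row[], snapshot: Row[]): Row[] {
  const seen = new Map(primary.map((r) => [r.refCode, r]));
  for (const row of snapshot) {
    if (!seen.has(row.refCode)) seen.set(row.refCode, row);
  }
  return [...seen.values()];
}
```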
Key models
- User: canonical profile with optional username, referral code, and wallet JSON snapshot.
- Wallet: chain-specific address records with custody metadata.
- Referral: directed edges mapping referrer → referred for payouts.
- LoginLog: append-only audit trail capturing every authentication attempt.
- SiweNonce: expiring records that secure Solana signature flows.
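A nonce check along these lines would reject both replays and expired records (the shape of `SiweNonce` and the single-use rule here are assumptions drawn from the model descriptions above):

```typescript
// Illustrative replay guard: a nonce is valid only if it exists, has not
// been used, and has not passed its expiry timestamp. A successful check
// marks it used so a second presentation fails.
type SiweNonce = { value: string; expiresAt: number; used: boolean };

function consumeNonce(store: Map<string, SiweNonce>, value: string, now: number): boolean {
  const nonce = store.get(value);
  if (!nonce || nonce.used || nonce.expiresAt <= now) return false;
  nonce.used = true;
  return true;
}
```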
Security & observability
Custody posture
Bots never receive keys over the network. Operators input secrets locally, the server writes configs directly to disk, and processes read from that file system path. No API key is persisted in Supabase.
Audit events
`createLoginLog` records every authentication attempt with IP and user agent via Next headers. The data informs breach monitoring and helps partners diagnose suspicious access.
Error recovery
Spawned processes capture stderr prefixed with `[err]`, making triage easier. Status endpoints flip to `error` and show the last message so operators know whether retries are safe.
Rate & session control
NextAuth issues JWT sessions with short TTLs. Sensitive routes disable caching (`cache: "no-store"`) to eliminate stale responses. Referral cookies sanitize inputs and uppercase refCodes before storage.
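The cookie sanitization mentioned above could be sketched as follows; only "uppercase before storage" comes from these docs, so the character set and length cap are assumptions:

```typescript
// Hypothetical refCode sanitizer: trim, uppercase, strip anything outside
// A-Z/0-9, and cap the length before the value is stored in a cookie.
function sanitizeRefCode(raw: string, maxLen = 16): string | null {
  const cleaned = raw.trim().toUpperCase().replace(/[^A-Z0-9]/g, "");
  if (cleaned.length === 0) return null; // reject empty or junk input
  return cleaned.slice(0, maxLen);
}
```

Sanitizing at capture time means every downstream lookup (cookies, Prisma relations, imports) compares refCodes in one canonical form.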
Getting started
Ready to pilot Ardra? Follow the checklist below. Each item links back to the platform so you can jump straight in.
Connect your Solana wallet with the SIWS flow. `lib/auth.ts` issues the credential and persists every pilot with a generated refCode.
Share `https://www.ardra.xyz/?ref=YOURCODE`. The `ReferralCapture` provider stores the code in cookies/localStorage before sign-up.
Open the bot card (Aster today), input API keys, and persist. The server writes `.aster/<userId>/config.json` and confirms existence via `/api/aster/status`.
Press start or call `/api/aster/start`. Track runtime logs, funding alerts, and volume metrics from the command center panels.
Visit `/leaderboard` or the profile dashboard to monitor `points`, `referralPoints`, and fees in near real time.
FAQ
Why does Ardra insist on local custody for bots?
Running automation from the pilot's environment keeps API keys and signing devices under user control. The platform only orchestrates processes and never proxies keys through shared infrastructure.
How are referrals linked if someone signs up days later?
The referral cookie lives for 30 days. On sign-in, `attachReferralIfAny` in `lib/auth.ts` binds the new user to the stored refCode so downstream imports map volume correctly.
Can teams audit leaderboard updates?
Yes. Import jobs persist raw files, normalize identifiers, and emit snapshots saved through `saveImportedLeaderboard`. Operators can download CSVs for compliance trails.
What happens if a bot crashes mid-cycle?
Process exits flip the status to `error`, persist the last message, and surface the issue through `/api/aster/status`. The UI prompts for restart after the operator reviews logs.