Run Workflow
Each campaign follows a consistent execution pattern. The operator submits an objective, all four agents spin up simultaneously, and results stream back in real time. Here's what happens at each stage.
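As a rough mental model, a run can be thought of in terms of a few data shapes: the objective the operator submits, the findings each agent produces, and the combined result the operator reviews. The sketch below is illustrative only; these names and fields are assumptions, not the platform's actual API.

```ts
// Hypothetical shapes for a campaign run; names are illustrative, not the platform's API.
interface Objective {
  prompt: string;     // operator-supplied description of what to test
  submittedAt: Date;  // when the run was kicked off
}

interface Finding {
  agentId: string;    // which of the four agents produced this
  summary: string;    // the agent's conclusion
  evidence: string[]; // raw request/response excerpts backing the finding
}

interface CampaignResult {
  objective: Objective;
  findings: Finding[]; // one entry per agent that reached a conclusion
}
```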
Objective Intake
The operator describes what they want to test: a specific x402 endpoint, a class of vulnerabilities to probe, or a general reconnaissance task. The platform enriches this prompt with context: current date and time, available tools, and any conversation history from previous runs in the same session. Each agent receives an identical, fully-formed objective.
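A minimal sketch of that enrichment step is shown below. The field names, helper, and tool names are assumptions made for illustration; the point is that every agent is handed the same fully assembled object.

```ts
// Sketch of objective enrichment; field names and helpers are assumptions, not the real API.
interface EnrichedObjective {
  objective: string;        // operator's raw prompt
  timestamp: string;        // current date and time injected for context
  availableTools: string[]; // tool names the agent may invoke
  history: string[];        // prior conversation turns from this session
}

function enrichObjective(
  objective: string,
  tools: string[],
  history: string[],
): EnrichedObjective {
  return {
    objective,
    timestamp: new Date().toISOString(),
    availableTools: tools,
    history,
  };
}

// Every agent receives the same enriched object, so differences in behavior
// come from the agents themselves rather than from their inputs.
const shared = enrichObjective(
  "Probe the /quote x402 endpoint for replay weaknesses", // hypothetical objective
  ["scrape_docs", "http_request", "x402_pay"],            // hypothetical tool names
  [],
);
```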
Parallel Execution
All four agents launch simultaneously and begin working through their reasoning loops. Each agent decides independently which tools to invoke, what requests to make, and how to interpret responses. They stream their progress to individual terminals in real time, so operators can watch four different approaches unfold in parallel. Agents continue iterating until they reach a conclusion or hit their iteration limit.
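The sketch below shows one way this fan-out could look: each agent iterates independently up to a step limit, and all four run concurrently with no ordering between them. The `Agent` interface, `MAX_ITERATIONS` value, and logging are assumptions for illustration.

```ts
// Minimal sketch of the parallel loop: four independent agents, each iterating
// until it concludes or hits a step limit. Names and limits are illustrative only.
type StepResult = { done: boolean; note: string };

interface Agent {
  id: string;
  step: (objective: string, transcript: string[]) => Promise<StepResult>;
}

const MAX_ITERATIONS = 20; // assumed cap; the real limit is operator-configured

async function runAgent(agent: Agent, objective: string): Promise<string[]> {
  const transcript: string[] = [];
  for (let i = 0; i < MAX_ITERATIONS; i++) {
    const result = await agent.step(objective, transcript);
    transcript.push(result.note);             // progress line streamed to the agent's terminal
    console.log(`[${agent.id}] ${result.note}`);
    if (result.done) break;                   // agent reached a conclusion early
  }
  return transcript;
}

// All four agents launch at once; none waits on another.
async function runCampaign(agents: Agent[], objective: string): Promise<string[][]> {
  return Promise.all(agents.map((a) => runAgent(a, objective)));
}
```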
Tool Invocation
As agents work, they call out to the tool layer: scraping documentation, making HTTP requests, or executing x402 payments. Every tool call is logged with full request and response details. If an agent triggers a paid x402 endpoint, the payment is automatically constructed and executed within operator-defined budget limits. Failed requests are captured alongside successes, providing a complete audit trail of what was attempted.
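A hedged sketch of what a logged tool call with a budget guard might look like follows; the log shape, budget value, and `invokeTool` helper are assumptions, not the platform's implementation. The key behaviors from above are preserved: paid calls are refused once the budget is exhausted, and failures are written to the same audit trail as successes.

```ts
// Sketch of a logged tool invocation with a spending guard for paid x402 endpoints.
interface ToolCallLog {
  tool: string;
  request: unknown;
  response?: unknown;
  error?: string;
  costUsd: number;    // 0 for free tools, payment amount for x402 calls
  timestamp: string;
}

const auditTrail: ToolCallLog[] = [];
let remainingBudgetUsd = 5.0; // operator-defined limit (assumed value)

async function invokeTool(
  tool: string,
  request: unknown,
  execute: () => Promise<unknown>,
  costUsd = 0,
): Promise<unknown | undefined> {
  const timestamp = new Date().toISOString();
  if (costUsd > remainingBudgetUsd) {
    // Refuse paid calls that would exceed the budget, but still log the attempt.
    auditTrail.push({ tool, request, error: "budget exceeded", costUsd, timestamp });
    return undefined;
  }
  try {
    const response = await execute();
    remainingBudgetUsd -= costUsd;
    auditTrail.push({ tool, request, response, costUsd, timestamp });
    return response;
  } catch (err) {
    // Failed requests are captured alongside successes for a complete audit trail.
    auditTrail.push({ tool, request, error: String(err), costUsd, timestamp });
    return undefined;
  }
}
```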
Result Comparison
Once agents complete, the operator reviews findings across all four terminals. Consistent findings across multiple agents indicate high-confidence results. Divergent conclusions warrant deeper investigation since they often reveal genuine ambiguity in the target system or expose edge cases worth exploring. The platform preserves the full conversation history, allowing operators to continue the session with follow-up prompts that build on what was already discovered.
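One way to make that comparison concrete is to group normalized findings by how many agents reported them: claims shared by several agents are treated as higher confidence, while lone claims are flagged for follow-up. The sketch below is an assumption about how such grouping could work, not the platform's logic.

```ts
// Sketch of cross-agent comparison: shared claims read as high confidence,
// singletons as candidates for deeper investigation. Field names are assumptions.
interface Finding {
  agentId: string;
  claim: string; // normalized statement about the target, e.g. "replay accepted on /quote"
}

function compareFindings(findings: Finding[]) {
  const byClaim = new Map<string, string[]>();
  for (const f of findings) {
    const agents = byClaim.get(f.claim) ?? [];
    agents.push(f.agentId);
    byClaim.set(f.claim, agents);
  }
  const consistent = [...byClaim].filter(([, agents]) => agents.length > 1);
  const divergent = [...byClaim].filter(([, agents]) => agents.length === 1);
  return { consistent, divergent }; // divergent claims warrant deeper investigation
}
```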