Adversarial Coach OpenClaw Skill - ClawHub
Do you want your AI agent to automate Adversarial Coach workflows? This free skill from ClawHub helps with adversarial implementation-review tasks without building custom tools from scratch.
What this skill does
Adversarial implementation review based on Block's g3 dialectical autocoding research. Use when validating implementation completeness against requirements with fresh objectivity.
Install
npx clawhub@latest install adversarial-coach
Full SKILL.md
| name | description |
|---|---|
| adversarial-coach | Adversarial implementation review based on Block's g3 dialectical autocoding research. Use when validating implementation completeness against requirements with fresh objectivity. |
/coach - Adversarial Implementation Review
Usage
/coach [requirements-file]
- `/coach` - Infer requirements from context
- `/coach requirements.md` - Validate against a specific file
Coach-Player Loop
You orchestrate this dialectical loop between implementing agent (player) and reviewer (coach):
1. You (player) implement features
2. `/coach` invokes adversarial review with independent evaluation of compliance to requirements
3. Coach returns `IMPLEMENTATION_APPROVED` or specific fixes
4. Address feedback; loop until approved
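The loop above can be sketched as a small driver. This is a hypothetical illustration, not part of the skill: `run_player` and `run_coach` are stand-ins for invoking the implementing agent and the `/coach` review, and the round cap is an assumption to keep the loop bounded.

```python
APPROVAL_SIGNAL = "IMPLEMENTATION_APPROVED"
MAX_ROUNDS = 5  # assumed guard against an unbounded loop

def coach_player_loop(run_player, run_coach, requirements):
    """Run the dialectical loop: the player implements, the coach
    reviews, until the coach emits the approval signal or rounds
    run out. run_player/run_coach are caller-supplied stand-ins."""
    feedback = None
    for round_num in range(1, MAX_ROUNDS + 1):
        run_player(requirements, feedback)       # player implements or applies fixes
        verdict = run_coach(requirements)        # coach reviews independently
        if verdict.startswith(APPROVAL_SIGNAL):  # termination signal
            return True, round_num
        feedback = verdict                       # feed specific fixes back to the player
    return False, MAX_ROUNDS
```

The key design point is that the coach's verdict, not the player's self-report, decides termination.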
Review Process
Step 1: Identify Requirements
Check (in order):
1. Specified requirements file, or issue/ticket mentioned
2. `requirements.md`, `REQUIREMENTS.md`, `SPEC.md`, `TODO.md`
3. Conversation context; ask the user if nothing is found
Step 2: Adversarial Review
Review with fresh objectivity - discard prior knowledge, don't rationalize shortcuts.
| Check Category | Items |
|---|---|
| Requirements | Each item: implemented or missing with specific gap |
| Compilation | Compiles? Tests pass? Runs? |
| Common Gaps | Auth on endpoints, token refresh endpoint, HTTPS, bcrypt for passwords, error handling, input validation |
| Functional | Test actual flows (not just compilation), verify edge cases work |
| Test Coverage | Auth error cases (401/403), token expiry, invalid inputs, rate limits |
Step 3: Return Verdict
If approved (>95% complete):
IMPLEMENTATION_APPROVED
- [Requirement 1]: Verified
- [Requirement 2]: Verified
- Compilation: Success
- Tests: All passing
If fixes needed:
REQUIREMENTS COMPLIANCE:
- [Requirement]: Implemented
- [Requirement]: Missing - [specific gap]
IMMEDIATE ACTIONS NEEDED:
1. [Specific fix with file/line if known]
2. [Specific fix]
Key Principles
Rigorous but fair:
- Catch real gaps (security, logic, data flow), not style preferences
- Functionality over aesthetics; always flag security issues (auth, crypto, validation)
Concise:
- Bullets, not essays; specific issues, not vague concerns
- No file contents or verbose analysis in output
Fresh context is your superpower:
- Review as if you've never seen this code; validate against requirements, not intentions
Magic Signal
`IMPLEMENTATION_APPROVED` = termination signal
Only use when: all requirements met, code compiles/runs, tests pass, no significant gaps.
If in doubt, don't approve.
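Because the signal terminates the loop, an orchestrator should match it conservatively. The skill does not specify how strictly to match; this sketch assumes the signal must be the first non-empty line of the verdict, so a reply that merely mentions the phrase mid-text does not count as approval.

```python
APPROVAL_SIGNAL = "IMPLEMENTATION_APPROVED"

def is_approved(verdict: str) -> bool:
    """Approve only when the signal is the first non-empty line
    of the coach's verdict (a conservative-matching assumption)."""
    for line in verdict.splitlines():
        stripped = line.strip()
        if stripped:
            return stripped == APPROVAL_SIGNAL
    return False  # empty verdict: if in doubt, don't approve
```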
Minimal Example
User: /coach SPEC.md
Claude:
REQUIREMENTS COMPLIANCE:
- User registration endpoint (bcrypt hashing): Implemented
- Login with JWT tokens: Implemented
- Logout endpoint: Missing - not implemented
- Token refresh: Missing - no /auth/refresh endpoint
IMMEDIATE ACTIONS NEEDED:
1. Add POST /auth/logout endpoint
2. Add POST /auth/refresh for token renewal
3. Add tests for 401 on expired tokens
---
User: [fixes] /coach SPEC.md
Claude:
IMPLEMENTATION_APPROVED
- All auth endpoints verified (register, login, logout, refresh)
- 18 tests passing including auth error cases
Research
- Paper: Adversarial Cooperation in Code Synthesis
- Implementation: g3
- Key insight: Discard implementing agent's self-report; perform independent evaluation against requirements.