Mortdecai API — Pay With Data
A public inference API for the Mortdecai Minecraft AI model where users earn access by contributing training data.
Concept
Every AI model gets better with more data. Most APIs charge money. This one charges data.
Users submit training examples — prompt/response pairs verified against a real Minecraft server — and earn credits for inference calls. The model improves from every contribution, which improves the API for everyone.
How It Works
┌──────────┐ ┌──────────────┐ ┌─────────────┐
│ User │────▶│ API Gateway │────▶│ Mortdecai │
│ Client │◀────│ (credits, │◀────│ (Ollama) │
│ │ │ validation)│ │ │
└──────────┘ └──────┬───────┘ └─────────────┘
│
┌──────┴───────┐
│ Validation │
│ Server │
│ (RCON test) │
└──────────────┘
1. Get an API Key
Register and receive an API key with an initial credit balance.
2. Use Credits for Inference
curl -X POST https://api.mortdec.ai/v1/generate \
-H "Authorization: Bearer mk_your_api_key" \
-H "Content-Type: application/json" \
-d '{
"mode": "sudo",
"prompt": "give me a diamond sword with sharpness 5",
"player": "Steve",
"server_context": {"version": "1.21.x"}
}'
Response:
{
"commands": ["give Steve minecraft:diamond_sword[enchantments={sharpness:5}] 1"],
"reasoning": "Diamond sword with sharpness 5 enchantment in 1.21 component syntax",
"risk_level": 3,
"credits_remaining": 99
}
Each call costs 1 credit.
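The per-call charge could be sketched like this (a minimal in-memory version; names are hypothetical, and the real gateway would keep the ledger in SQLite):

```python
# Hypothetical in-memory credit ledger; the real gateway stores this in SQLite.
CREDITS = {"mk_your_api_key": 100}
COST_PER_CALL = 1

def charge_for_inference(api_key: str) -> int:
    """Deduct one credit for an inference call; raise if the balance is empty."""
    balance = CREDITS.get(api_key)
    if balance is None:
        raise KeyError("unknown API key")
    if balance < COST_PER_CALL:
        raise RuntimeError("insufficient credits; contribute data to earn more")
    CREDITS[api_key] = balance - COST_PER_CALL
    return CREDITS[api_key]  # becomes credits_remaining in the response
```

The returned balance is what the `/v1/generate` response echoes back as `credits_remaining`.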
3. Earn Credits by Contributing Data
curl -X POST https://api.mortdec.ai/v1/contribute \
-H "Authorization: Bearer mk_your_api_key" \
-H "Content-Type: application/json" \
-d '{
"prompt": "sudo give me a trident with riptide 3 and loyalty 3",
"expected_output": {
"commands": ["give Steve minecraft:trident[enchantments={riptide:3,loyalty:3}] 1"],
"reasoning": "Riptide and loyalty are mutually exclusive in vanilla Minecraft"
},
"notes": "This should fail — riptide and loyalty cannot coexist"
}'
Response:
{
"accepted": true,
"reason": "Correct! Riptide and loyalty are mutually exclusive. This is a valuable negative example.",
"credits_earned": 15,
"credits_remaining": 114,
"validation": {
"schema_valid": true,
"rcon_tested": true,
"rcon_result": "error — conflicting enchantments",
"duplicate": false,
"quality_score": 0.92
}
}
Accepted contributions earn 5-20 credits depending on quality:
| Quality | Credits | What makes it valuable |
|---|---|---|
| Basic | 5 | Valid format, correct command |
| Good | 10 | Uncommon prompt phrasing or edge case |
| Excellent | 15 | Negative example, error correction, or novel scenario |
| Exceptional | 20 | New command pattern, modded content, or bug discovery |
Rejected contributions earn 0 credits but don't cost anything.
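The tier table above could map a validated quality score to a payout like this (the 0.6/0.85/0.95 thresholds are illustrative assumptions; the source only defines the 5/10/15/20 tiers):

```python
def credits_for_quality(accepted: bool, quality_score: float) -> int:
    """Map a validated contribution's quality score to a credit award.
    Thresholds are illustrative; only the 5/10/15/20 tiers come from the spec."""
    if not accepted:
        return 0   # rejected contributions earn nothing but cost nothing
    if quality_score >= 0.95:
        return 20  # Exceptional: new pattern, modded content, bug discovery
    if quality_score >= 0.85:
        return 15  # Excellent: negative example, error correction, novel scenario
    if quality_score >= 0.6:
        return 10  # Good: uncommon phrasing or edge case
    return 5       # Basic: valid format, correct command
```

With these thresholds, the 0.92 quality score in the example response lands in the Excellent tier and earns 15 credits.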
Credit System
| Tier | Starting Credits | Monthly Refresh | Earn Rate |
|---|---|---|---|
| Free | 100 | 100 | 5-20 per accepted contribution |
| Contributor | 500 | Unlimited* | Same |
| Paid | Per purchase | N/A | Optional contributions |
*Contributor tier requires maintaining a positive contribution rate (at least 10 accepted submissions per month).
Credit Economics
- 1 inference call = 1 credit
- 1 accepted basic contribution = 5 credits (5 free inferences)
- 1 accepted excellent contribution = 15 credits (15 free inferences)
- Average user contributes 2-3 examples per session, earning 10-45 credits (basic through excellent)
The math works: heavy API users are also heavy contributors. The model improves proportionally to usage.
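Spelled out, a typical session more than pays for itself (the session shape below is an illustrative instance of the 2-3 example average):

```python
# One session: a user makes some inference calls and contributes two examples.
calls = 10                   # inference calls at 1 credit each
contributions = [5, 15]      # one basic and one excellent accepted contribution
earned = sum(contributions)  # credits earned this session
spent = calls * 1            # credits spent this session
net = earned - spent         # positive: the session funds future usage
```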
Validation Pipeline
Every contribution goes through automated validation before credits are awarded:
Contribution → Schema Check → Dedup → RCON Test → Quality Score → Accept/Reject
Schema Check
- Valid JSON format
- Required fields present (prompt, expected_output)
- Commands use correct Minecraft 1.21 syntax
- No forbidden commands (ban, op, stop, etc.)
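The schema check above could look like this sketch (the forbidden list is an assumption expanding the source's "ban, op, stop, etc."):

```python
FORBIDDEN = {"ban", "op", "deop", "stop"}  # illustrative; source says "etc."

def schema_check(contribution: dict) -> list[str]:
    """Return a list of schema problems; an empty list means the check passed."""
    problems = []
    for field in ("prompt", "expected_output"):
        if field not in contribution:
            problems.append(f"missing required field: {field}")
    commands = contribution.get("expected_output", {}).get("commands", [])
    for cmd in commands:
        parts = cmd.lstrip("/").split()
        root = parts[0].lower() if parts else ""
        if root in FORBIDDEN:
            problems.append(f"forbidden command: {root}")
    return problems
```

Version-specific syntax checking (1.21 item components) would sit on top of this, but the sketch covers the structural gate.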
Deduplication
- Compare against existing training dataset
- Fuzzy match on prompt similarity (>90% = duplicate)
- Same prompt from multiple users: first contributor gets full credit, subsequent get 2 credits for validation
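The fuzzy match could be sketched with the standard library's `difflib` (the 90% threshold is from the spec; the choice of matcher is an assumption):

```python
from difflib import SequenceMatcher

DUPLICATE_THRESHOLD = 0.90  # >90% similarity counts as a duplicate

def is_duplicate(new_prompt: str, existing_prompts: list[str]) -> bool:
    """Fuzzy-match a new prompt against prompts already in the dataset."""
    new_norm = new_prompt.strip().lower()
    return any(
        SequenceMatcher(None, new_norm, old.strip().lower()).ratio()
        > DUPLICATE_THRESHOLD
        for old in existing_prompts
    )
```

A production pipeline would likely use embedding similarity instead of character-level ratios, but the gating logic is the same.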
RCON Verification
- Commands executed on a sandboxed Minecraft server
- Success/failure recorded
- Error messages captured for error-correction training pairs
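A raw RCON response might be classified into a validation record like this (the error-marker strings are assumptions, not Paper's actual messages):

```python
def classify_rcon_result(output: str) -> dict:
    """Turn a raw RCON response into a validation record.
    The error markers are illustrative; real Paper error text varies."""
    markers = ("unknown or incomplete command", "incorrect argument", "error")
    failed = any(m in output.lower() for m in markers)
    return {
        "rcon_tested": True,
        "success": not failed,
        # Failures are kept verbatim so they can become error-correction pairs.
        "rcon_result": output if failed else "ok",
    }
```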
Quality Scoring
- Novelty: how different is this from existing data? (0-1)
- Correctness: did RCON succeed? (0-1)
- Complexity: multi-command, enchantments, execute chains score higher
- Category balance: underrepresented categories score higher
Abuse Prevention
| Attack | Defense |
|---|---|
| Garbage submissions | Schema validation + RCON testing rejects invalid data |
| Duplicate farming | Fuzzy dedup, diminishing returns on similar prompts |
| Bot accounts | Rate limit: max 50 contributions/hour per key |
| Prompt injection in training data | All contributions sandboxed, reviewed before merge |
| Adversarial training data | Anomaly detection: flag keys with >80% rejection rate |
| Credit farming without using API | Credits expire after 90 days |
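The 50-per-hour limit in the table could be enforced with a sliding window; a sketch with timestamps passed in explicitly for testability:

```python
from collections import defaultdict, deque

MAX_PER_HOUR = 50
WINDOW_SECONDS = 3600

_recent: dict[str, deque] = defaultdict(deque)

def allow_contribution(api_key: str, now: float) -> bool:
    """Sliding-window rate limit: at most 50 contributions per key per hour."""
    window = _recent[api_key]
    while window and now - window[0] >= WINDOW_SECONDS:
        window.popleft()  # drop timestamps older than one hour
    if len(window) >= MAX_PER_HOUR:
        return False
    window.append(now)
    return True
```

In production this state would live in the gateway's database rather than process memory so limits survive restarts.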
API Endpoints
Inference
POST /v1/generate
POST /v1/pray (God mode — dramatic responses)
POST /v1/translate (Sudo mode — command translation only)
Data Contribution
POST /v1/contribute Submit a training example
GET /v1/contribute/schema Get the expected format
GET /v1/contribute/needed Get categories where data is most needed
Account
GET /v1/account Credits, contribution stats, tier
GET /v1/account/history Recent inference and contribution history
POST /v1/account/register Create a new API key
Meta
GET /v1/model/info Current model version, training stats
GET /v1/model/categories What the model knows, with coverage scores
GET /v1/health API status
What Makes This Different
Most ML APIs treat users as consumers. This one treats them as collaborators.
- Users with niche servers contribute data the core team would never generate (modded commands, custom plugins, unusual configurations)
- Users who find bugs get rewarded for reporting them as negative examples
- Power users naturally generate the most valuable edge cases because they push the model hardest
- The model gets better for everyone with every contribution
It's a flywheel: better model → more users → more contributions → better model.
Implementation Status
Phase 1 (Current): Concept and API design
Phase 2: API gateway with credit system
Phase 3: Contribution validation pipeline
Phase 4: Public beta
The core model (Mortdecai v4) is running on a live Minecraft server. The API layer and contribution system are being designed.
Tech Stack
- Model: Mortdecai v4 (Qwen3.5-9B fine-tuned, QLoRA)
- Inference: Ollama on consumer GPUs
- API: FastAPI + SQLite for credits/keys
- Validation: Sandboxed Paper 1.21 server with RCON
- Training: Automated pipeline — contributions → validation → merge → retrain
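One pass of the contributions → validation → merge → retrain loop might be orchestrated like this (all names and the retrain threshold are hypothetical; the merge and fine-tune steps are stubbed):

```python
def retrain_cycle(pending: list[dict]) -> dict:
    """One pass of the contributions -> validation -> merge -> retrain loop.
    Merge and fine-tune are stubbed; names and threshold are hypothetical."""
    accepted = [c for c in pending if c.get("validated")]
    rejected = len(pending) - len(accepted)
    # In the real pipeline, accepted examples would be merged into the
    # training set and a QLoRA fine-tune kicked off once enough accumulate.
    return {"merged": len(accepted), "rejected": rejected,
            "retrain": len(accepted) >= 100}  # illustrative threshold
```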
Learn more at mortdec.ai