How to Build an AI Agent That Earns Money: Sell AI Skills for Bitcoin Micropayments
Your AI agent has skills worth paying for. Gate them behind L402, charge per request in sats, and let other agents — or humans — pay for the work. Real economics, working code, no hype.
Every guide about AI agents and money tells the same story: give your agent a wallet so it can pay for things. Spend, spend, spend.
This guide is about the other direction. Your agent has skills. Those skills are worth money. Someone — another agent, a developer, an automated pipeline — will pay for them if you make it easy enough. L402 makes it easy enough: gate a skill behind a Lightning invoice, charge per request, collect sats. The agent does the work. The protocol handles the money.
The economics are good. An LLM call costs a fraction of a cent. You charge a few sats — a penny or less. The margin is 80%+ with mid-tier models, higher with cheaper ones. And because Lightning settles in milliseconds with no minimum transaction, you can charge amounts that would be impossible with any traditional payment system.
What can an agent sell?
This is the real question. Not "how does L402 work" — we have a whole article on that. The question is: what services are worth building?
A good paid agent skill has three properties: an LLM is genuinely good at the task, the result is worth more than a penny to the buyer, and the request-response cycle is fast enough that paying per call makes sense.
Some ideas, roughly ordered by how interesting the economics get.
The straightforward ones
Summarization. Give it text, get a summary. The most obvious use case and still one of the best — LLMs are excellent at compression. Charge 10 sats for a page, 50 for a long document. Useful to agents that ingest large amounts of content and need the gist before deciding what to read in full.
Translation. Text in, translated text out. Charge by length — 10 sats per paragraph, 100 per page. The margins are enormous because translation is one of the cheapest LLM operations and one of the most valuable outputs. An agent building a multilingual product calls your translation endpoint hundreds of times without caring that each call costs a fraction of a cent.
Content moderation. Check text against a content policy. Return a verdict: safe, flagged, or rejected, with reasons. 5 sats per check. Low price, high volume. Every platform that accepts user-generated content needs this, and most would rather pay per check than build and maintain their own moderation pipeline.
The ones that get more interesting
Data extraction. Give it unstructured text — an email, a receipt, a legal filing, a product listing — and get back structured JSON. This is where LLMs quietly outperform every regex and rule-based system ever written. Charge 20–50 sats depending on complexity. An invoice processing pipeline that needs to extract line items from PDFs will call this thousands of times.
Code review. Accept a diff or a file, return feedback on bugs, style issues, security concerns. 100 sats per file, 500 per PR. Developers already pay for static analysis tools. An agent that gives feedback in natural language — explaining not just what's wrong but why — is worth more than a linter.
Research synthesis. Give it a question, get back a sourced brief. This is harder to build well (you need search integration, not just an LLM call), but the value per request is high — 50 to 200 sats. An agent planning a complex task can outsource the "find out about X" step to your research agent instead of building its own search pipeline.
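Extraction endpoints live or die on output validation: models occasionally wrap JSON in markdown fences, return prose, or drop a field. A minimal guard, sketched here with a hypothetical line-item shape (the `LineItem` fields are illustrative, not part of any spec):

```typescript
// Validate the model's raw output before it leaves the endpoint.
// The expected shape is hypothetical: invoice line items with a description and amount.
type LineItem = { description: string; amount: number };

function parseLineItems(raw: string): LineItem[] | null {
  // Models sometimes wrap JSON in markdown fences; strip them before parsing.
  const cleaned = raw.replace(/^```(?:json)?\s*/i, "").replace(/```\s*$/, "").trim();
  try {
    const parsed = JSON.parse(cleaned);
    if (!Array.isArray(parsed)) return null;
    const valid = parsed.every(
      (item) =>
        typeof item === "object" && item !== null &&
        typeof item.description === "string" &&
        typeof item.amount === "number"
    );
    return valid ? (parsed as LineItem[]) : null;
  } catch {
    return null; // Unparseable output: retry the model call rather than return garbage
  }
}
```

A failed parse is a signal to retry the model call before responding; the caller has already paid, so returning malformed JSON burns trust you can't easily rebuild.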
The creative ones
Competitive monitoring. Accept a URL, return a structured diff of what changed since last check. Product pages, pricing pages, job listings, terms of service. 100 sats per check. A business intelligence agent runs this weekly against a list of competitors and never misses a price change.
Contract clause review. Feed it a contract, get back flagged clauses — liability caps, auto-renewal terms, IP assignment, non-competes. 200–500 sats. Not a replacement for a lawyer, but a useful first pass that catches the obvious traps. Legal tech is expensive; a per-contract micropayment is not.
Accessibility descriptions. Give it an image, get back alt text that actually describes what's in the image. 10 sats. Low price, but the aggregate market is massive — every website, every CMS, every content pipeline needs this, and most skip it because writing alt text by hand is tedious.
Tone adjustment. Text in, rewritten text out — same meaning, different register. Formal to casual, technical to plain English, marketing copy to human language. 10 sats per paragraph. Sounds trivial until you realize how often agents and pipelines produce text that needs to sound different than how it came out.
Vulnerability scanning. Accept code, return a list of security issues with severity ratings and suggested fixes. 200 sats per file. More expensive to run (needs a capable model), but the value is high enough that 200 sats is still a bargain compared to commercial SAST tools.
The pattern: pick something an LLM does well, wrap it in an endpoint, set a price that's trivial compared to the value, and let volume do the work.
The math
Let's make this concrete with one example: a summarization endpoint. (All sat-to-dollar conversions below assume ~$100k BTC. Adjust if the price has moved.)
The cost side. Summarizing a 10,000-token document with GPT-4o-mini:
- Input: 10,000 tokens × $0.15/1M = $0.0015
- Output: 500 tokens × $0.60/1M = $0.0003
- Total LLM cost: $0.0018 per request — roughly 2 sats
The revenue side. Charge 10 sats per summary — about $0.01.
The margin. You keep about 8 sats out of every 10. That's 80% — and that's using a mid-tier model. Cheaper models push it higher.
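The same arithmetic as code, with the GPT-4o-mini rates quoted above and the ~$100k BTC assumption baked in:

```typescript
// Unit economics for one paid request, using the numbers from the text:
// GPT-4o-mini at $0.15 / $0.60 per 1M tokens, BTC at $100,000 (1 sat = $0.001).
const INPUT_PRICE_PER_M = 0.15;
const OUTPUT_PRICE_PER_M = 0.60;
const USD_PER_SAT = 100_000 / 100_000_000; // 0.001

function llmCostUsd(inputTokens: number, outputTokens: number): number {
  return (inputTokens * INPUT_PRICE_PER_M + outputTokens * OUTPUT_PRICE_PER_M) / 1_000_000;
}

function marginPercent(priceSats: number, inputTokens: number, outputTokens: number): number {
  const costSats = llmCostUsd(inputTokens, outputTokens) / USD_PER_SAT;
  return ((priceSats - costSats) / priceSats) * 100;
}

// A 10,000-token document with a 500-token summary, charged at 10 sats:
// cost is $0.0018 (1.8 sats), margin is 82%.
```

Swap in the rates for a cheaper model and the margin climbs accordingly.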
At scale:
| Daily requests | Daily revenue | Daily LLM cost | Daily profit | Monthly profit |
|---|---|---|---|---|
| 100 | 1,000 sats (~$1) | $0.18 | ~$0.82 | ~$25 |
| 1,000 | 10,000 sats (~$10) | $1.80 | ~$8.20 | ~$246 |
| 10,000 | 100,000 sats (~$100) | $18.00 | ~$82.00 | ~$2,460 |
Hosting: a $5/month VPS handles thousands of requests per day. The LLM API bill is the real cost, and it's dwarfed by revenue at any meaningful volume.
You can push margins higher with cheaper models. Gemini 2.0 Flash ($0.10/$0.40 per 1M tokens) or DeepSeek V3 ($0.14/$0.28) bring the LLM cost down to about $0.001 per request — roughly 1 sat. That puts you at 90% margin on a 10-sat charge. For simple tasks like summarization and extraction, the quality difference is negligible.
The honest caveat: 10,000 requests per day doesn't happen on day one. You need to build something good, make it discoverable, and earn traffic. The point isn't "passive income while you sleep" — it's that the economics work once the traffic shows up, and the infrastructure cost is close to zero while you wait.
Build it
A summarization agent that charges 10 sats per request. Express.js, L402 paywall, one LLM call.
```bash
npm install @lnbot/l402 openai express
```

```typescript
import express from "express";
import { l402, LnBot } from "@lnbot/l402";
import OpenAI from "openai";

const app = express();
const ln = new LnBot({ apiKey: process.env.LNBOT_API_KEY });
const openai = new OpenAI();

app.use(express.json());

// Every request to /api/summarize costs 10 sats
app.use("/api/summarize", l402.paywall(ln, {
  price: 10,
  description: "Text summarization",
}));

app.post("/api/summarize", async (req, res) => {
  const { text } = req.body;

  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      { role: "system", content: "Summarize the following text in 2-3 sentences." },
      { role: "user", content: text },
    ],
  });

  res.json({
    summary: completion.choices[0].message.content,
  });
});

app.listen(3000);
```

That's the whole thing. A client hits /api/summarize, gets a 402 with a Lightning invoice, pays 10 sats, retries with proof of payment, and gets a summary back. The paywall is one line of middleware. The LLM call is standard OpenAI SDK. Everything in between — invoice creation, payment verification, credential checking — is handled by the L402 middleware.
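Under the hood, the 402 response carries a `WWW-Authenticate` header with a credential (a macaroon, in the L402 spec) and a BOLT11 invoice; after paying, the client retries with `Authorization: L402 <credential>:<preimage>`. A sketch of the header handling a hand-rolled client would need — the middleware and client libraries do this for you:

```typescript
// Parse an L402 challenge header of the form:
//   WWW-Authenticate: L402 macaroon="<base64>", invoice="<bolt11>"
// Returns null if the header is not an L402 challenge.
function parseL402Challenge(header: string): { macaroon: string; invoice: string } | null {
  const match = header.match(/^L402\s+macaroon="([^"]+)",\s*invoice="([^"]+)"$/);
  if (!match) return null;
  return { macaroon: match[1], invoice: match[2] };
}

// After paying the invoice and obtaining the payment preimage,
// the retry carries both as proof of payment:
function l402AuthHeader(macaroon: string, preimage: string): string {
  return `L402 ${macaroon}:${preimage}`;
}
```

The preimage is what makes this work: it's only revealed by a successful Lightning payment, so presenting it alongside the credential proves the invoice was paid.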
Multiple skills, different prices
An agent doesn't need to sell just one thing. Add more endpoints at different price points:
app.use("/api/summarize", l402.paywall(ln, { price: 10 }));
app.use("/api/translate", l402.paywall(ln, { price: 10 }));
app.use("/api/extract", l402.paywall(ln, { price: 30 }));
app.use("/api/review", l402.paywall(ln, { price: 100 }));Or price dynamically based on input size:
app.use("/api/summarize", l402.paywall(ln, {
price: (req) => {
const length = req.body.text?.length ?? 0;
return Math.max(5, Math.ceil(length / 1000)); // 1 sat per 1K chars, min 5
},
}));A 500-character snippet costs 5 sats. A 10,000-character document costs 10. The price scales with the work.
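Because the pricing callback is plain code, the pricing curve is easy to check in isolation. The same formula, pulled out as a function:

```typescript
// 1 sat per 1,000 characters with a 5-sat floor: the same curve
// as the dynamic paywall callback, extracted for unit testing.
function priceForLength(chars: number): number {
  return Math.max(5, Math.ceil(chars / 1000));
}
```

The floor matters: without it, tiny inputs would generate invoices too small to be worth the round trip.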
Agents paying agents
Here's where it stops being a side project and starts being interesting infrastructure.
Your summarization agent is live, charging 10 sats per request. Meanwhile, someone else built a research agent — it searches the web, collects sources, and compiles briefs. That research agent produces long documents that its own users need digested. So it calls your summarization endpoint.
```typescript
import { l402, LnBot } from "@lnbot/l402";

// The research agent's wallet
const ln = new LnBot({ apiKey: "key_research_agent_..." });
const client = l402.client(ln, { maxPrice: 50, budgetSats: 10_000, budgetPeriod: "day" });

// Research agent calls your summarization agent
const response = await client.fetch("https://summarizer.example.com/api/summarize", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ text: longResearchDocument }),
});

const { summary } = await response.json();
```

The research agent hit a 402. Its L402 client paid the invoice — 10 sats from its own wallet — retried, got the summary, and moved on. No API key exchange, no signup, no contract. Two wallets, one Lightning payment, both sides are software.
Your agent earned 10 sats. The research agent got a summary it can include in its own output (which it charges for). Value flows through a chain of agents, each taking a cut for the work it does. A supply chain, except the workers are LLMs and the payments settle in milliseconds.
Making your agent discoverable
A paid endpoint nobody knows about earns nothing. A few ways agents find your service:
Publish a manifest. A /.well-known/l402.json or OpenAPI spec at the root of your API lets crawlers and agent frameworks discover your endpoints, their prices, and what they do. An agent browsing for services can read the spec and decide whether to pay.
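There's no standardized schema for such a manifest yet, so treat the following as an illustrative shape only — every field name here is an assumption, not a spec:

```json
{
  "name": "summarizer.example.com",
  "endpoints": [
    {
      "path": "/api/summarize",
      "method": "POST",
      "price_sats": 10,
      "description": "Text summarization, 2-3 sentence output"
    }
  ]
}
```

Whatever shape you choose, keep it machine-readable and keep the prices current: an agent deciding whether to pay will trust the manifest, not your marketing page.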
List it in agent directories. Directories like Satring index L402 services. Register your endpoints and they become searchable by any agent looking for capabilities to buy.
Expose it as MCP tools. If your agent's skills are exposed as an MCP server, any MCP-compatible agent (Claude Desktop, Cursor, custom frameworks) discovers them automatically. The L402 paywall still applies — discovery is free, usage costs sats.
Word of mouth still works. Post your API on dev forums, agent-building communities, Twitter. Developers building agents are actively looking for paid services their agents can call. If your endpoint is useful and cheap, they'll wire it in.
What this isn't
A disclaimer, because the internet is full of "passive income with AI" content that overpromises.
Building a paid agent is real. The economics work. The infrastructure exists today. But "deploy and forget" is a fantasy. You need to build something that produces genuinely useful output, keep the model updated, handle edge cases, monitor quality, and find users. It's a product, not a lottery ticket.
The advantage of the L402 model is that the payment infrastructure is zero-effort — no billing system, no invoicing, no Stripe dashboard, no subscription management. That frees you to focus entirely on the thing that actually matters: making the output good enough that someone will pay 10 sats for it.
Getting started
```bash
npm install @lnbot/l402 openai
```

Pick a skill. Set a price. Deploy. The pay-per-API-call guide covers the L402 middleware in depth — pricing strategies, dynamic pricing, time-based access. The agent payments guide covers the client side — giving your agent a wallet to spend from, budget controls, MCP integration.
SDKs for the server and client: TypeScript, Python, Go, Rust, C#. L402 middleware: Express.js, ASP.NET Core.
FAQ
- Can AI agents actually earn money autonomously?
- How much can an AI agent earn?
- What skills can an AI agent sell?
- How much does it cost to run an AI agent that earns money?
- Can agents pay other agents?
Related
Give Your AI Agent a Bitcoin Wallet: MCP, SDK, and L402 Payments for Autonomous Agents
Connect any AI agent to the Lightning Network in minutes. MCP server, SDK integration, and L402 payments — your agent can earn, spend, and pay for APIs autonomously.
Pay-Per-API-Call: Monetize Any API with Bitcoin Micropayments
Charge per request with Lightning Network micropayments. No signup, no API keys, no Stripe fees. Drop-in middleware for Express.js and ASP.NET Core.
L402: The Protocol That Lets Software Pay for Software
L402 connects HTTP's 402 Payment Required status to Lightning Network payments. How the protocol works, why macaroons replace API keys, and what this means for AI agents and paid APIs.
Wallet as a Service: Embedded Lightning Wallets for Platforms, Agents, and Developers
The WaaS market is dominated by EVM wallet SDKs built for humans with email addresses. Lightning-native wallet infrastructure works for software too — one API call, no nodes, no KYC, no gas fees.