Sovereign AI Platform: Technical Implementation Report

Executive Summary

This report provides production-ready implementation guidance for building sovereign AI inference platforms using Bitcoin's Lightning Network for payment gating, honest accounting with automated refunds, self-custodial agent wallets, accurate cost estimation, and pseudonymous identity. The technologies prioritized are LND (Lightning Network Daemon) for payment infrastructure, l402-python and aperture for protocol implementation, macaroons for cryptographic authorization, Nostr/Lightning pubkeys for identity, and GPU-time metering for cost attribution. All patterns are designed for edge deployment on single-VPS or home-server infrastructure without custodial dependencies.

1. L402 Protocol Implementation for AI Inference API Gating

1.1 Protocol Overview and Core Concepts

1.1.1 L402 as HTTP 402 Payment Required Standard

The L402 protocol reactivates the dormant HTTP 402 Payment Required status code to create a native internet payment layer for machine-to-machine transactions. Unlike traditional payment systems that require browser-based checkout flows, account registration, and API key management, L402 embeds payment directly into HTTP request/response cycles, enabling AI agents to procure compute resources with sub-second latency and no human intervention (Lightning Labs).

The protocol's architectural significance lies in its stateless verification model: servers need not maintain session databases for unpaid requests. Instead, cryptographic credentials (macaroons) encode all authorization parameters, allowing any server in a distributed system to verify payments without database lookups. This property is essential for horizontally scaled AI inference clusters where requests may be handled by different GPU nodes (Lightning Labs).

The economic advantages are substantial. Traditional payment processors charge 2-3% plus $0.30 per transaction, making micropayments below ~$0.50 impractical. Lightning Network fees routinely fall below $0.00001 per payment, enabling genuine pay-per-token pricing where users pay only for the compute they actually consume. This alignment eliminates the subsidy patterns of subscription models, where light users overpay to cover heavy users (dwellir.com).

1.1.2 Four-Step Payment Flow: Request → Challenge → Payment → Access

The L402 flow has been refined through production deployment at Lightning Labs and Fewsats into a deterministic, automatable sequence:

1. Request: the client requests a protected resource without credentials.
2. Challenge: the server responds 402 Payment Required with one or more offers, a payment context token, and a macaroon bound to a Lightning invoice.
3. Payment: the client pays the invoice and obtains the preimage as proof of payment.
4. Access: the client retries with an `Authorization: L402 <macaroon>:<preimage>` header and receives the resource.

The payment context token is the security anchor: a cryptographically random, time-bound identifier that binds all offers in a response to a single session. This prevents cross-session replay attacks in which an attacker applies payment proofs to different services or price points. Tokens typically expire in 5-15 minutes, matching Lightning invoice lifecycles (L402 Protocol).

The macaroon-invoice binding occurs through shared secret material. The macaroon's third-party caveat commits to the invoice's payment hash; satisfaction requires the preimage revealed only upon payment settlement. This creates atomic authorization: payment and credential are inseparable, eliminating double-spend risks without blockchain confirmation delays (Lightning Labs).
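A minimal sketch of this binding, assuming pymacaroons: the macaroon's identifier commits to the invoice's payment hash, and verification checks that SHA-256 of the presented preimage matches that hash before evaluating caveats. The caveat string format and root-key handling here are illustrative, not a library-defined convention; the FastAPI example in Section 1.3.1 references a `mint_job_macaroon` helper with a richer signature.

```python
# Sketch of macaroon-invoice binding with pymacaroons.
# Caveat formats and root-key handling are illustrative.
import hashlib
import time
from pymacaroons import Macaroon, Verifier

ROOT_KEY = b"server-secret-rotate-me"  # Hypothetical root key

def mint_bound_macaroon(payment_hash: str, max_tokens: int) -> Macaroon:
    """Mint a credential committed to one Lightning invoice."""
    m = Macaroon(location="api.inference.example.com",
                 identifier=payment_hash,  # Binds credential to the invoice
                 key=ROOT_KEY)
    m.add_first_party_caveat(f"max_tokens = {max_tokens}")
    m.add_first_party_caveat(f"expires = {int(time.time()) + 3600}")
    return m

def verify_bound_macaroon(m: Macaroon, preimage_hex: str, requested_tokens: int) -> bool:
    """Accept only if the preimage settles the bound invoice and caveats hold."""
    if hashlib.sha256(bytes.fromhex(preimage_hex)).hexdigest() != m.identifier:
        return False  # Preimage does not hash to the committed payment hash
    v = Verifier()
    v.satisfy_general(lambda c: c.startswith("max_tokens = ")
                      and requested_tokens <= int(c.split("= ")[1]))
    v.satisfy_general(lambda c: c.startswith("expires = ")
                      and time.time() < int(c.split("= ")[1]))
    try:
        return v.verify(m, ROOT_KEY)  # Raises on unmet caveats or bad signature
    except Exception:
        return False
```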
1.1.3 Relationship to Legacy LSAT Protocol

L402 evolved from LSAT (Lightning Service Authentication Token) through the bLIP (Bitcoin Lightning Improvement Proposal) standardization process. The renaming reflects protocol maturation beyond authentication to encompass general payment use cases, particularly AI agent commerce (Lightning Labs).

The key specification changes from LSAT to L402 are the renamed authorization scheme (`LSAT` → `L402`) and the `macaroon` response parameter becoming `token`.

Backwards compatibility is maintained: servers should accept both macaroon and token parameters. Existing LSAT implementations require only cosmetic updates to comply with current L402 specifications (Lightning Labs).
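A small sketch of that dual acceptance at the HTTP layer; the function name and return shape are illustrative:

```python
# Sketch of backwards-compatible credential parsing: accept both the
# legacy LSAT scheme and the current L402 scheme.
def parse_payment_header(authorization: str) -> tuple[str, str] | None:
    """Return (macaroon_b64, preimage_hex) from an L402 or LSAT header."""
    for scheme in ("L402 ", "LSAT "):
        if authorization.startswith(scheme):
            credential = authorization[len(scheme):]
            macaroon_b64, _, preimage_hex = credential.partition(":")
            if macaroon_b64 and preimage_hex:
                return macaroon_b64, preimage_hex
    return None
```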
1.2 Open-Source Libraries and SDKs

1.2.1 Python: l402-python (Fewsats) — Client and Server Patterns

The l402-python library provides the most AI-agent-focused implementation, with explicit support for automated payment flows and multi-backend flexibility. Installation via pip install l402 avoids dependency conflicts with PyTorch/TensorFlow through careful version pinning (Github).

Client Implementation Pattern:

```python
from l402.payment_clients import Client, CoinbaseProvider
from cdp import Wallet
import httpx
from http import HTTPStatus

# Initialize with configurable payment backend
wallet = Wallet.create()
client = Client(onchain_provider=CoinbaseProvider(wallet=wallet))

# Automated payment on 402 detection
response = httpx.get("https://api.inference.example.com/v1/generate")
if response.status_code == HTTPStatus.PAYMENT_REQUIRED:
    payment_result = client.pay(response.json())  # Full flow: select, pay, retry
    # Credential automatically cached for reuse
```

The Client.pay() method implements intelligent offer selection based on configured provider availability, price optimization, and payment method compatibility. For AI agents, this abstraction removes payment logic from task-specific code, letting them focus on inference orchestration (Github).
Server Implementation Pattern:

```python
from l402.payment_providers import PaymentServer, Offer
from fastapi import FastAPI
from fastapi.responses import JSONResponse
from uuid import uuid4
from cdp import Wallet

app = FastAPI()
wallet = Wallet.create()  # Same on-chain wallet as in the client example
payment_server = PaymentServer(
    payment_request_url="https://api.inference.example.com/l402/payment-request",
    onchain_provider=wallet,  # Or a Lightning provider
)

@app.get("/v1/generate")
def generate_endpoint():
    """Protected inference endpoint - returns 402 without valid credentials"""
    # Check for L402 authorization header...
    # If missing/invalid, return offers
    offers = [
        Offer(
            amount=500,  # USD cents = $5.00
            currency='USD',
            description='Claude-3-Opus inference, up to 4K tokens',
            offer_id=str(uuid4()),
            payment_methods=['lightning', 'onchain'],
            title='Premium Inference',
            type='one-time',
            # AI-specific extensions
            metadata={
                'model_id': 'claude-3-opus-20240229',
                'max_tokens': 4096,
                'estimated_latency_ms': 800,
                'refund_policy': 'pro_rata_unused'
            }
        )
    ]
    offers_response = payment_server.create_offers(offers)
    return JSONResponse(content=offers_response.model_dump(), status_code=402)
```

The two-phase design (offer discovery via /offers, payment initiation via /payment_request) enables edge caching of offers while keeping invoice generation, which requires Lightning node access, on origin servers. This topology minimizes latency for initial requests and reduces Lightning node load (Github).
1.2.2 Go: aperture (Lightning Labs) — Reverse Proxy and Middleware

aperture provides production-hardened, proxy-based L402 enforcement, operational since 2019 for Lightning Loop and other Lightning Labs services. Its architecture positions L402 authentication as an infrastructure layer rather than an application concern (Bitcoin Magazine).

Deployment configuration for AI inference:

```yaml
# aperture.yaml
lnd:
  host: localhost:10009
  macaroon_path: /lnd/data/chain/bitcoin/mainnet/invoices.macaroon  # Restricted!
  tls_path: /lnd/tls.cert

services:
  - name: inference-gpu-cluster
    host: localhost:8000
    path: /v1/generate
    price:
      base: 100  # millisatoshis
      rate: 10   # per token estimate
    constraints:
      max_tokens: 128000
      timeout_seconds: 120
```

Proxy pattern advantages:
- Zero application code changes: existing FastAPI/Flask/Node services gain L402 gating
- Centralized policy management: pricing, rate limits, and access controls live in one configuration
- Multi-backend consistency: uniform authentication across polyglot services

Proxy pattern disadvantages:
- Added latency: every request transits an extra proxy hop
- Connection complexity: streaming responses (SSE, WebSockets) require careful handling
- State management: invoice subscription state must be maintained or delegated

For latency-sensitive inference (<100 ms target), direct library integration outperforms proxy deployment. For multi-tenant platforms with diverse backend technologies, aperture provides operational simplicity worth the performance cost (Bitcoin Magazine).
1.2.3 TypeScript/JavaScript: lnget and Web Client Libraries

lnget provides a reference CLI and programmatic client implementation for Node.js and browser environments. Its fetch-compatible interface enables drop-in replacement of standard HTTP clients (Bitcoin Magazine).

```typescript
import { L402Client } from 'lnget';

const client = new L402Client({
  lnd: {
    host: 'localhost:10009',
    macaroonPath: './admin.macaroon',
    tlsCertPath: './tls.cert'
  }
});

// Automatic payment and retry
const response = await client.fetch('https://api.inference.example.com/v1/generate', {
  method: 'POST',
  body: JSON.stringify({ prompt: 'Explain quantum computing', max_tokens: 500 })
});
```
WebLN browser integration enables non-custodial, client-side payment without exposing node credentials:

```javascript
// Browser environment with WebLN-compatible wallet (Alby, Zeus, etc.)
if (window.webln) {
  await window.webln.enable();
  const response = await fetch('https://api.inference.example.com/v1/generate', {
    method: 'POST',
    body: JSON.stringify({ prompt: userInput })
  });

  if (response.status === 402) {
    const { payment_request } = await response.json();
    const result = await window.webln.sendPayment(payment_request);
    // Retry with preimage...
  }
}
```

The WebLN pattern is essential for browser-based AI applications (interactive demos, fine-tuning interfaces, no-code tools) where users control their own Lightning wallets without platform custody (Github).

1.2.4 Library Selection Criteria for Production Deployment

Recommendation: start with l402-python for Python-based inference services; evaluate aperture for multi-language deployments or when operational separation of authentication is required. Custom implementations are justified only for specialized macaroon attenuation patterns or non-standard payment flows unsupported by existing libraries.
1.3 Server-Side Implementation Patterns

1.3.1 FastAPI/Express Server Structure: /offers and /payment_request Endpoints

The standardized two-endpoint structure separates offer discovery (cacheable, lightweight) from payment initiation (stateful, Lightning-node-dependent).

Complete FastAPI implementation:

```python
import hashlib
import logging
import time
from datetime import datetime, timedelta
from typing import Optional
from uuid import uuid4

import lndgrpc
import pymacaroons  # Used by mint_job_macaroon (not shown)
from fastapi import FastAPI, Header, HTTPException
from fastapi.responses import JSONResponse, StreamingResponse
from pydantic import BaseModel

# Helpers referenced but omitted here: verify_l402_auth, execute_inference,
# estimate_input_tokens, calculate_price_usd_cents, usd_cents_to_satoshis,
# find_offer_by_id, mint_job_macaroon

logger = logging.getLogger(__name__)

app = FastAPI()
lnd = lndgrpc.LNDClient(
    "localhost:10009",
    macaroon_filepath="/lnd/data/chain/bitcoin/mainnet/invoices.macaroon",
    cert_filepath="/lnd/tls.cert"
)

# In-memory context store (production: Redis/distributed cache)
payment_contexts = {}

class JobParameters(BaseModel):
    model_id: str
    estimated_input_tokens: int
    estimated_output_tokens: int
    max_tokens: int
    temperature: float = 0.7
    refund_policy: str = "pro_rata_unused"

@app.get("/v1/generate")
async def generate(
    prompt: str,
    max_tokens: int = 500,
    authorization: Optional[str] = Header(None)
):
    """Main inference endpoint - L402 gated"""

    # Check for valid L402 authorization
    if authorization and authorization.startswith("L402 "):
        try:
            result = verify_l402_auth(authorization, prompt, max_tokens)
            if result.valid:
                return await execute_inference(prompt, max_tokens, result.job_params)
        except Exception as e:
            logger.warning(f"L402 verification failed: {e}")

    # No valid auth: return 402 with offers
    return await create_offers_response(prompt, max_tokens)

async def create_offers_response(prompt: str, max_tokens: int):
    """Generate L402 challenge with AI-specific offers"""

    # Estimate job parameters
    input_tokens = estimate_input_tokens(prompt)
    estimated_output = min(max_tokens, int(input_tokens * 0.75))  # Heuristic

    context_token = f"pct_{uuid4().hex}"
    payment_contexts[context_token] = {
        "created_at": time.time(),
        "prompt": prompt,  # For job reconstruction
        "estimated_input": input_tokens,
        "estimated_output": estimated_output,
        "max_tokens": max_tokens
    }

    offers = {
        "version": "0.2.2",
        "payment_request_url": "https://api.inference.example.com/l402/payment-request",
        "payment_context_token": context_token,
        "offers": [
            {
                "id": f"inf_{uuid4().hex[:8]}",
                "title": "Standard Inference",
                "description": f"Up to {max_tokens} tokens, ~{estimated_output} estimated output",
                "type": "one-time",
                "amount": calculate_price_usd_cents(input_tokens, estimated_output, "standard"),
                "currency": "USD",
                "payment_methods": ["lightning", "onchain"],
                "metadata": {
                    "model_id": "claude-3-sonnet-20240229",
                    "estimated_input_tokens": input_tokens,
                    "estimated_output_tokens": estimated_output,
                    "max_tokens": max_tokens,
                    "refund_policy": "pro_rata_unused"
                }
            },
            {
                "id": f"inf_{uuid4().hex[:8]}",
                "title": "Premium Inference",
                "description": f"Priority queue, up to {max_tokens} tokens",
                "type": "one-time",
                "amount": calculate_price_usd_cents(input_tokens, estimated_output, "premium"),
                "currency": "USD",
                "payment_methods": ["lightning"],
                "metadata": {
                    "model_id": "claude-3-opus-20240229",
                    "priority_queue": True,
                    "estimated_latency_ms": 300
                }
            }
        ],
        "terms_url": "https://inference.example.com/terms",
        "metadata": {
            "service_tier": "inference_api",
            "rate_limit": "100_requests_per_minute"
        }
    }

    return JSONResponse(content=offers, status_code=402)

@app.post("/l402/payment-request")
async def payment_request(request: dict):
    """Generate payment-specific details for selected offer"""

    context_token = request.get("payment_context_token")
    context = payment_contexts.get(context_token)
    if not context or time.time() - context["created_at"] > 600:
        raise HTTPException(status_code=400, detail="Invalid or expired context")

    offer_id = request.get("offer_id")
    payment_method = request.get("payment_method")

    # Retrieve selected offer details
    offer = find_offer_by_id(offer_id)  # Implementation omitted
    if not offer:
        raise HTTPException(status_code=400, detail="Offer not found")

    # Generate Lightning invoice
    if payment_method == "lightning":
        amount_sat = usd_cents_to_satoshis(offer["amount"])

        invoice = lnd.add_invoice(
            value=amount_sat,
            memo=f"Inference:{offer['metadata']['model_id']}:{context_token[:16]}",
            expiry=600,  # 10 minutes
        )

        # Mint macaroon with job-specific caveats
        macaroon = mint_job_macaroon(
            payment_hash=invoice.r_hash.hex(),
            context=context,
            offer=offer
        )

        return {
            "version": "0.2.2",
            "payment_request": {
                "lightning_invoice": invoice.payment_request,
            },
            "expires_at": (datetime.utcnow() + timedelta(minutes=10)).isoformat(),
            "macaroon": macaroon.serialize()
        }

    # On-chain payment handling...
```
1.3.2 HTTP 402 Response Format with Offer Array and Payment Context Token

The L402 response structure has been standardized through the bLIP process to enable interoperable client implementations (L402 Protocol). The offer structure carries AI-specific extensions in its metadata field, as shown in the /v1/generate example above (model ID, token estimates, refund policy).

The payment_context_token implementation requires cryptographic randomness (not uuid4 for security-critical contexts) and distributed expiration (Redis TTL, not local memory). Production deployments should bind the context to the client IP or TLS session for additional replay protection.
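A sketch of such a store, assuming redis-py; the key prefix, single-use semantics, and IP binding are design choices rather than protocol requirements:

```python
# Sketch of a hardened payment-context store, assuming redis-py.
# Class and key names are illustrative.
import json
import secrets
import redis

class ContextStore:
    def __init__(self, url: str = "redis://localhost:6379/0", ttl: int = 600):
        self.r = redis.Redis.from_url(url)
        self.ttl = ttl  # Matches invoice expiry

    def create(self, job_params: dict, client_ip: str) -> str:
        token = f"pct_{secrets.token_urlsafe(32)}"  # CSPRNG, not uuid4
        payload = {"job": job_params, "ip": client_ip}
        # SETEX gives an atomic write + expiration visible to all app servers
        self.r.setex(f"l402:ctx:{token}", self.ttl, json.dumps(payload))
        return token

    def consume(self, token: str, client_ip: str) -> dict | None:
        """Single-use lookup: bind to client IP and delete on first read."""
        raw = self.r.getdel(f"l402:ctx:{token}")  # Atomic get-and-delete (Redis >= 6.2)
        if raw is None:
            return None
        payload = json.loads(raw)
        if payload["ip"] != client_ip:
            return None  # Replay from a different origin
        return payload["job"]
```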
1.3.3 Lightning Invoice Generation via LND gRPC Integration

The critical AddInvoice parameters for AI inference are value (amount in satoshis), memo (job-identifying description, subject to the BOLT 11 limit), and expiry (matched to the payment context window). A production implementation with error handling:
```python
import grpc
import logging
import time

logger = logging.getLogger(__name__)

class LNDInvoiceManager:
    def __init__(self, lnd_client):
        self.lnd = lnd_client
        self.rate_cache = {}  # USD/SAT rate with TTL

    async def create_inference_invoice(
        self,
        amount_usd_cents: int,
        job_context: dict,
        expiry_seconds: int = 600
    ) -> dict:
        """
        Create Lightning invoice for AI inference job.

        Handles exchange rate conversion, memo construction,
        and liquidity validation.
        """
        # Get cached or fresh exchange rate
        sat_per_usd = await self.get_exchange_rate()
        amount_sat = int(amount_usd_cents * sat_per_usd / 100)

        # Validate we have inbound capacity to receive the payment
        channels = self.lnd.list_channels()
        total_inbound = sum(c.remote_balance for c in channels.channels)
        if total_inbound < amount_sat * 10:  # 10x safety margin
            logger.warning(f"Low inbound liquidity: {total_inbound} sat")
            # Trigger liquidity acquisition or reject large payments

        # Construct descriptive memo (BOLT 11 limit: 639 bytes)
        memo_parts = [
            "AI",
            job_context.get("model_id", "unknown")[:20],
            job_context.get("context_token", "unknown")[:16],
            str(int(time.time()))[-6:]  # Truncated timestamp
        ]
        memo = ":".join(memo_parts)[:639]

        try:
            response = self.lnd.add_invoice(
                value=amount_sat,
                memo=memo,
                expiry=expiry_seconds,
                # Optional: custom preimage for refund patterns
                # payment_preimage=os.urandom(32) if refund_enabled else None
            )

            return {
                "payment_request": response.payment_request,
                "payment_hash": response.r_hash.hex(),
                "add_index": response.add_index,  # For subscription tracking
                "amount_sat": amount_sat,
                "expiry": expiry_seconds
            }

        except grpc.RpcError as e:
            if e.code() == grpc.StatusCode.UNAVAILABLE:
                raise LNDConnectionError("Lightning node unreachable")
            elif e.code() == grpc.StatusCode.RESOURCE_EXHAUSTED:
                raise LNDCapacityError("Insufficient inbound liquidity")
            elif "invoice with payment hash already exists" in str(e):
                # Idempotency: return existing invoice
                existing = self.lnd.lookup_invoice(r_hash=extract_hash_from_error(e))
                return format_existing_invoice(existing)
            raise
```
1.3.4 Invoice Subscription Streams for Real-Time Settlement Detection

The SubscribeInvoices gRPC stream provides event-driven settlement detection with lower latency and resource consumption than polling (Bitcoin Magazine):

```python
import asyncio
from lndgrpc import invoicesrpc

class SettlementMonitor:
    def __init__(self, lnd_client, job_queue):
        self.lnd = lnd_client
        self.job_queue = job_queue  # Redis/RabbitMQ for job dispatch
        self.pending = {}  # payment_hash -> job_metadata
        self.last_add_index = 0  # Persisted recovery point

    async def start(self):
        """Begin monitoring with automatic reconnection."""
        while True:
            try:
                await self._monitor_stream()
```

... [OUTPUT TRUNCATED - 92466 chars omitted out of 142466 total] ...

...output token estimation error of 15-30% RMS for unconstrained generation, improving to 5-10% with max_tokens constraints.
4.5.2 Anthropic Claude Series Variance Patterns

Claude-specific factors:

4.5.3 Together.ai and Open-Source Model Comparisons

Open-source deployment variance:

4.6 Special Case Handling

4.6.1 Long-Context Window Cost Spikes

Context length cost models scale superlinearly once a request exceeds the model's efficient context length. Detection and pricing:
```python
def long_context_premium(input_tokens: int, model_id: str) -> float:
    """Apply premium for long-context requests."""

    specs = MODEL_SPECS[model_id]

    if input_tokens <= specs.efficient_context_length:
        return 1.0

    # Superlinear scaling
    excess = input_tokens - specs.efficient_context_length
    scaling_exponent = 1.5  # Empirical

    premium = 1 + (excess / specs.efficient_context_length) ** scaling_exponent * 0.5

    return min(premium, 5.0)  # Cap at 5x
```
4.6.2 Chain-of-Thought and Reasoning Model Overhead

Reasoning-specific estimation (a hedged sketch follows):
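Reasoning models bill hidden chain-of-thought tokens as output, so a conservative estimator scales the visible-output estimate by a per-model overhead multiplier. The multipliers below are placeholder assumptions, to be replaced by calibrated per-model measurements (Section 4.7.2):

```python
# Illustrative sketch: reasoning models emit hidden chain-of-thought
# tokens billed as output. Multipliers are placeholder assumptions.
REASONING_OVERHEAD = {
    "default": 1.0,        # Non-reasoning models
    "reasoning-low": 2.0,  # Assumed light reasoning effort
    "reasoning-high": 5.0, # Assumed heavy reasoning effort
}

def reasoning_output_estimate(visible_output_est: int, effort: str = "default") -> int:
    """Scale the visible-output estimate to cover hidden reasoning tokens."""
    return int(visible_output_est * REASONING_OVERHEAD.get(effort, 1.0))
```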
4.6.3 Tool-Use and Multi-Turn Conversation Escalation

Tool use cost model (a sketch implementing this summary follows the block below):

```
Base cost: input_tokens × rate_in + estimated_output × rate_out

Tool overhead:
- Per-call latency: 500ms-2s (billed as equivalent tokens)
- Result processing: result_tokens × rate_in
- Retry logic: 1.5x average for error recovery

Multi-turn:
- Accumulated context: sum of all prior tokens
- KV cache reuse: -20% effective cost (implementation dependent)
```
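A sketch translating the summary above into a function; the 1.5x retry factor and the -20% KV-cache discount are the document's figures, while the function name and parameters are illustrative:

```python
# Sketch of the tool-use/multi-turn cost model summarized above.
def tool_use_cost(
    input_tokens: int,
    estimated_output: int,
    rate_in: float,   # Price per input token
    rate_out: float,  # Price per output token
    tool_result_tokens: tuple[int, ...] = (),
    prior_turn_tokens: int = 0,
    kv_cache_reuse: bool = False,
) -> float:
    base = input_tokens * rate_in + estimated_output * rate_out

    # Tool overhead: results re-enter context at the input rate,
    # inflated 1.5x on average for retries/error recovery
    tool_cost = sum(t * rate_in for t in tool_result_tokens) * 1.5

    # Multi-turn: all prior tokens are re-billed as input each turn
    context_cost = prior_turn_tokens * rate_in
    if kv_cache_reuse:
        context_cost *= 0.8  # -20% effective cost (implementation dependent)

    return base + tool_cost + context_cost
```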
4.7 Concrete Formulas and Calibration Algorithms

4.7.1 Base Cost Formula: C = (T_in × R_in) + (T_est × R_out) + O

Complete pricing formula (a worked numeric example follows the definitions):

```
C_total = [
    (T_input × R_input) +
    (T_output_est × R_output × M_confidence) +
    (T_context_long × R_context_premium) +
    (N_tools × C_tool_overhead) +
    (G_gpu_seconds × R_gpu)
] × (1 + P_platform)

Where:
- T_input: Exact token count from tokenizer
- T_output_est: Estimated from heuristics (Section 4.1.2)
- M_confidence: Percentile multiplier (Section 4.4.1)
- R_context_premium: Long-context scaling (Section 4.6.1)
- C_tool_overhead: Fixed + variable per tool call
- G_gpu_seconds: Measured or estimated compute time
- P_platform: Platform margin (typically 20-50%)
```
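A worked instance of the formula with illustrative numbers (all rates are hypothetical): 2,000 input tokens, ~800 estimated output tokens, no long-context premium, one tool call, 30% platform margin.

```python
# Worked example of C_total; every rate below is a hypothetical value.
T_input, R_input = 2_000, 0.000003        # $3 per 1M input tokens
T_output_est, R_output = 800, 0.000015    # $15 per 1M output tokens
M_confidence = 1.4                        # 95th percentile multiplier
N_tools, C_tool_overhead = 1, 0.002       # $0.002 per tool call
G_gpu_seconds, R_gpu = 0.0, 0.0           # API-backed model: no direct GPU metering
P_platform = 0.30                         # 30% platform margin

C_total = (
    T_input * R_input
    + T_output_est * R_output * M_confidence
    + N_tools * C_tool_overhead
    + G_gpu_seconds * R_gpu
) * (1 + P_platform)

print(f"${C_total:.5f}")  # ≈ $0.03224
```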
4.7.2 Adaptive Calibration Algorithm with Error Correction

```python
import statistics

class AdaptiveCalibrator:
    """
    Online learning for cost estimation accuracy.
    """

    def __init__(self, model_id: str, initial_params: dict):
        self.model = model_id
        self.params = initial_params
        self.history = []  # Prediction/actual records for interval estimation

        # Per-feature weights for linear model
        self.weights = {
            'input_tokens': 0.0,
            'complexity_score': 0.0,
            'task_type_indicator': 0.0,
            'bias': initial_params.get('base_rate', 0.0)
        }

        # Learning rate schedule
        self.learning_rate = 0.01
        self.min_lr = 0.001
        self.lr_decay = 0.999

    def predict(self, features: dict) -> float:
        """Predict cost from features."""
        prediction = self.weights['bias']
        for feature, value in features.items():
            if feature in self.weights:
                prediction += self.weights[feature] * value

        return max(0, prediction)  # Non-negative

    def update(self, features: dict, actual_cost: float):
        """Online gradient descent update."""
        predicted = self.predict(features)
        error = actual_cost - predicted

        # Update weights
        for feature, value in features.items():
            if feature in self.weights:
                self.weights[feature] += self.learning_rate * error * value

        # Decay learning rate
        self.learning_rate = max(self.min_lr, self.learning_rate * self.lr_decay)

        # Track performance
        self.history.append({
            'predicted': predicted,
            'actual': actual_cost,
            'relative_error': error / actual_cost if actual_cost > 0 else 0
        })

    def get_confidence_interval(self, features: dict, confidence: float = 0.95) -> tuple:
        """Return prediction interval based on recent error distribution."""
        if len(self.history) < 100:
            # Insufficient data: use conservative default
            prediction = self.predict(features)
            return (prediction * 0.5, prediction * 2.0)

        recent_errors = [h['relative_error'] for h in self.history[-500:]]

        mean_error = statistics.mean(recent_errors)
        std_error = statistics.stdev(recent_errors) if len(recent_errors) > 1 else 0.5

        z = {0.90: 1.28, 0.95: 1.96, 0.99: 2.58}[confidence]

        prediction = self.predict(features)
        lower = prediction * (1 + mean_error - z * std_error)
        upper = prediction * (1 + mean_error + z * std_error)

        return (max(0, lower), upper)
```
4.7.3 Confidence Interval Calculation from Historical Variance

Statistical foundation:

For normally distributed relative errors (validated with a Kolmogorov-Smirnov test), the confidence interval for prediction ŷ with historical mean error μ_e and standard deviation σ_e is:

CI_95% = [ŷ × (1 + μ_e − 1.96σ_e), ŷ × (1 + μ_e + 1.96σ_e)]

For heavy-tailed distributions (common in AI inference due to outliers), use a t-distribution with degrees of freedom estimated from the data, or a non-parametric bootstrap:
```python
import random
from typing import List

def bootstrap_confidence_interval(
    predictions: List[float],
    actuals: List[float],
    new_prediction: float,
    n_bootstrap: int = 10000,
    confidence: float = 0.95
) -> tuple:
    """
    Non-parametric confidence interval via bootstrap.
    """
    # Historical relative errors
    errors = [(a - p) / p for p, a in zip(predictions, actuals) if p > 0]

    bootstrapped = []
    for _ in range(n_bootstrap):
        # Resample one historical error with replacement
        # and apply it to the new prediction
        adjusted = new_prediction * (1 + random.choice(errors))
        bootstrapped.append(adjusted)

    bootstrapped.sort()

    lower_idx = int(n_bootstrap * (1 - confidence) / 2)
    upper_idx = int(n_bootstrap * (1 + confidence) / 2)

    return (bootstrapped[lower_idx], bootstrapped[upper_idx])
```
5. Bitcoin-Native Identity Without KYC

5.1 Identity Anchor Mechanisms

5.1.1 Nostr npub as Primary Identifier: Key Generation, Derivation Paths

Nostr key architecture: a single BIP39 master seed derives the primary identity and unlinkable per-service keys.

Hierarchical derivation for service-specific keys:

```
Master seed (BIP39 mnemonic)
└── m/44'/1237'/0'/0/0 (Nostr primary identity)
└── m/44'/1237'/0'/0/{index} (Per-service derived keys)

Derivation path: m/44'/1237'/account'/change/index
- 44': BIP44 purpose
- 1237': Nostr coin type (registered)
- account': Service separation
- change: 0 for external, 1 for internal
- index: Key sequence
```
Implementation with the nostr library:

```python
import hashlib

import bip32  # Assumed BIP32 library exposing HDKey.from_seed/derive
from nostr.key import PrivateKey

class NostrIdentityManager:
    def __init__(self, master_seed: bytes):
        self.master = bip32.HDKey.from_seed(master_seed)

    def derive_service_key(
        self,
        service_name: str,
        index: int = 0
    ) -> PrivateKey:
        """
        Derive service-specific Nostr key.

        Deterministic: same service_name always produces same key.
        """
        # Hash service name to an account number in the hardened range
        account = int(hashlib.sha256(service_name.encode()).hexdigest()[:8], 16) % (2**31)

        path = f"m/44'/1237'/{account}'/0/{index}"
        derived = self.master.derive(path)

        return PrivateKey(derived.private_key)

    def get_npub(self, private_key: PrivateKey) -> str:
        """Bech32-encoded public identifier."""
        return private_key.public_key.bech32()
```
5.1.2 Lightning Node Public Key as Secondary/Verification Identity

Lightning identity properties:

Cross-identity linking:

```python
# Helpers not shown: verify_event_signature, decode_bolt11

class CrossIdentityVerifier:
    """
    Verify linkage between Nostr and Lightning identities.
    """

    async def verify_linkage(
        self,
        npub: str,
        node_pubkey: str,
        proof: dict
    ) -> bool:
        """
        Verify that Nostr identity controls Lightning node.

        Proof formats:
        1. Nostr event signed with node key
        2. Lightning invoice with Nostr pubkey in description
        3. Mutual attestation from trusted third party
        """
        if proof['type'] == 'nostr_event':
            # Verify event signed by node key
            event = proof['event']

            # Convert node pubkey to Nostr format
            derived_npub = self.lightning_to_nostr_pubkey(node_pubkey)

            # Verify signature
            return verify_event_signature(event, derived_npub)

        elif proof['type'] == 'invoice_attestation':
            # Verify invoice contains Nostr pubkey
            invoice = decode_bolt11(proof['invoice'])
            description = invoice.description

            # Parse description for embedded npub
            return f"npub:{npub}" in description or f"nostr:{npub}" in description

        return False

    def lightning_to_nostr_pubkey(self, node_pubkey: str) -> str:
        """
        Convert 33-byte Lightning pubkey to Nostr format.

        Note: Same curve, different encoding. Nostr uses
        32-byte x-only, Lightning uses compressed 33-byte.
        """
        pubkey_bytes = bytes.fromhex(node_pubkey)
        if pubkey_bytes[0] in (0x02, 0x03):
            # Compressed: drop the parity byte to get the x-only form
            x = pubkey_bytes[1:]
            return x.hex()

        return node_pubkey  # Already x-only
```
5.1.3 Cross-Anchor Linking and Verification Protocols

Linkage attestation Nostr event (NIP-39-style identity tags, kind 1984):

```json
{
  "kind": 1984,
  "pubkey": "<npub of attestor>",
  "tags": [
    ["i", "ln:<node_pubkey>", "<proof>"],
    ["i", "npub:<target_npub>", ""],
    ["claim", "These identities are controlled by the same entity"]
  ],
  "content": "",
  "sig": "<signature>"
}
```
5.2 Cryptographic Proof Systems

5.2.1 BIP-322 Generic Signed Message Format

BIP-322 generalizes the legacy signmessage scheme, which only supported P2PKH addresses, to arbitrary script types including Taproot (the p2tr script type used below). BIP-322 implementation for identity proof:
```python
import time
from typing import Optional

import bip322  # Assumed BIP-322 signing library

# Helpers not shown: pubkey_to_npub, extract_timestamp

def create_identity_proof(
    private_key: bytes,
    message: str,
    domain: str = "sovereign-ai.example.com"
) -> dict:
    """
    Create BIP-322 signed message for identity verification.

    Domain separation prevents cross-service replay.
    """
    # Domain-separated message
    full_message = f"{domain}:{message}"

    # Sign with proper script construction
    proof = bip322.sign_message(
        private_key,
        full_message,
        script_type='p2tr'  # Taproot for modern wallets
    )

    return {
        'message': full_message,
        'signature': proof.signature.hex(),
        'pubkey': proof.pubkey.hex(),
        'script_type': 'p2tr'
    }

def verify_identity_proof(
    proof: dict,
    expected_domain: str = "sovereign-ai.example.com",
    expected_npub: Optional[str] = None
) -> bool:
    """Verify BIP-322 proof and optional npub binding."""

    # Core BIP-322 verification
    valid = bip322.verify_message(
        proof['message'],
        proof['signature'],
        proof['pubkey']
    )

    if not valid:
        return False

    # Optional: verify pubkey matches claimed npub
    if expected_npub:
        derived_npub = pubkey_to_npub(proof['pubkey'])
        if derived_npub != expected_npub:
            return False

    # Check domain and timestamp freshness
    domain, payload = proof['message'].split(':', 1)
    if domain != expected_domain:
        return False

    # Verify timestamp within window
    timestamp = extract_timestamp(payload)
    if abs(time.time() - timestamp) > 300:  # 5 minute window
        return False

    return True
```
5.2.2 Nostr Event Signing for Authentication Challenges

Challenge-response authentication:

```python
import secrets
import time

class NostrAuthChallenge:
    def __init__(self, relay_url: str):
        self.relay = relay_url
        self.challenges = {}  # challenge_id -> {npub, expires}

    async def create_challenge(self, npub: str) -> str:
        """Create time-bound authentication challenge."""
        challenge_id = secrets.token_hex(16)
        challenge_text = f"Authenticate to sovereign-ai: {challenge_id}:{int(time.time())}"

        self.challenges[challenge_id] = {
            'npub': npub,
            'expires': time.time() + 300,
            'challenge_text': challenge_text
        }

        return challenge_text

    async def verify_response(self, challenge_id: str, signed_event: dict) -> bool:
        """Verify signed challenge response."""
        challenge = self.challenges.get(challenge_id)
        if not challenge or time.time() > challenge['expires']:
            return False

        # Verify event kind and content
        if signed_event['kind'] != 22242:  # NIP-42 client authentication
            return False

        if signed_event['content'] != challenge['challenge_text']:
            return False

        # Verify signature
        if not verify_event_signature(signed_event):
            return False

        # Verify pubkey matches claimed identity
        if signed_event['pubkey'] != challenge['npub']:
            return False

        # Success: consume challenge
        del self.challenges[challenge_id]

        return True
```
5.2.3 Lightning Invoice Signing for Payment-Linked Identity

Invoice-based identity attestation:

```python
import json
import secrets
import time

def create_payment_linked_identity(
    lnd_client,
    npub: str,
    amount_sat: int = 1000
) -> dict:
    """
    Create Lightning invoice that binds to Nostr identity.

    Payment of this invoice proves control of both
    Lightning node (via payment) and Nostr key (via description).
    """
    description = json.dumps({
        'type': 'identity_attestation',
        'npub': npub,
        'timestamp': int(time.time()),
        'service': 'sovereign-ai.example.com',
        'nonce': secrets.token_hex(8)
    })

    invoice = lnd_client.add_invoice(
        value=amount_sat,
        memo=description,
        expiry=3600
    )

    return {
        'payment_request': invoice.payment_request,
        'payment_hash': invoice.r_hash.hex(),
        'description': description,
        'verification': 'Pay to prove Lightning + Nostr control'
    }
```
5.3 Decentralized Reputation Construction

5.3.1 Payment History as Trust Graph: Successful Completion, Timeliness

Reputation dimensions from payment history (a sketch of representative dimensions follows):
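A sketch deriving the dimensions named in the section title (completion, timeliness) plus dispute rate and volume from a payment log; the record fields and function name are assumptions:

```python
# Illustrative sketch: reputation dimensions from payment records.
# Field names are assumptions, not a fixed schema.
from dataclasses import dataclass

@dataclass
class PaymentRecord:
    amount_sat: int
    settled: bool
    seconds_to_settle: float
    disputed: bool

def reputation_dimensions(history: list[PaymentRecord]) -> dict:
    settled = [p for p in history if p.settled]
    settle_times = sorted(p.seconds_to_settle for p in settled)
    return {
        "completion_rate": len(settled) / len(history) if history else 0.0,
        "median_settle_seconds": settle_times[len(settle_times) // 2] if settle_times else None,
        "dispute_rate": sum(p.disputed for p in history) / len(history) if history else 0.0,
        "total_volume_sat": sum(p.amount_sat for p in settled),
    }
```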
Privacy-preserving reputation:

```python
from typing import List

# Types/helpers not shown: Payment, hash_payment, MerkleTree,
# parse_claim, create_range_proof

class ZeroKnowledgeReputation:
    """
    Prove reputation without revealing transaction details.
    """

    def __init__(self, merkle_root: str):
        self.root = merkle_root  # Public commitment to history

    def create_proof(
        self,
        private_history: List[Payment],
        claim: dict  # e.g., {"total_paid_sat": ">1000000"}
    ) -> dict:
        """
        Create zk-proof of reputation claim.

        Uses range proofs for numeric claims,
        membership proofs for specific transactions.
        """
        # Build Merkle tree of hashed payments
        leaves = [hash_payment(p) for p in private_history]
        tree = MerkleTree(leaves)

        # Verify root matches public commitment
        assert tree.root == self.root

        # Create proof for specific claim
        if 'total_paid_sat' in claim:
            total = sum(p.amount_sat for p in private_history)
            operator, threshold = parse_claim(claim['total_paid_sat'])

            # Range proof: total satisfies operator vs threshold
            proof = create_range_proof(total, operator, threshold)

            return {
                'type': 'range_proof',
                'claim': claim,
                'proof': proof.serialize(),
                'merkle_root': self.root
            }

        # Membership proof: specific payment occurred
        # ...

    def verify_proof(self, proof: dict) -> bool:
        """Verify reputation proof without learning history."""
        # ZK verification logic
        pass
```
5.3.2 Lightning Channel-Based Web of Trust

Channel graph reputation: nodes inherit trust from channel proximity to known-good peers, weighted by channel capacity. Web of trust traversal:

```python
from typing import List

# Helpers not shown: get_channel_peers, get_channel_capacity

async def calculate_channel_reputation(
    node_pubkey: str,
    trusted_seeds: List[str],
    max_hops: int = 3
) -> float:
    """
    Calculate reputation based on proximity to trusted nodes.

    Exponentially decaying trust with hop distance.
    """
    # BFS from trusted seeds
    trust_scores = {seed: 1.0 for seed in trusted_seeds}
    current_frontier = set(trusted_seeds)

    for hop in range(1, max_hops + 1):
        next_frontier = set()

        for node in current_frontier:
            # Get channel peers
            peers = await get_channel_peers(node)

            for peer in peers:
                if peer in trust_scores:
                    continue  # Already scored

                # Per-hop decay; compounds to 0.5**hop along the chain
                decay = 0.5
                # Weight by channel capacity, normalized to 1M sat
                capacity = await get_channel_capacity(node, peer)
                weight = min(capacity / 1_000_000, 1.0)

                trust_scores[peer] = trust_scores[node] * decay * weight
                next_frontier.add(peer)

        current_frontier = next_frontier

    return trust_scores.get(node_pubkey, 0.0)
```
5.3.3 Mutual Endorsement Protocols and Attestation Chains

NIP-32 labeling for reputation (kind 1985 label event):

```json
{
  "kind": 1985,
  "pubkey": "<endorser_npub>",
  "tags": [
    ["L", "reputation"],
    ["l", "reliable-payer", "reputation"],
    ["p", "<subject_npub>", "wss://relay.example.com"],
    ["e", "<attestation_event_id>", "wss://relay.example.com"],
    ["expiration", "1704067200"]
  ],
  "content": "Consistently pays invoices within 60 seconds over 50+ transactions",
  "sig": "<signature>"
}
```
5.4 Know Your Counterparty (KYC) Alternatives

5.4.1 Stake-Based Reputation: Time-Locked Bonds, Channel Reserves

Stake mechanisms:

Time-locked bond implementation:
```python
# Helpers not shown: build_timelock_redeem, script_to_p2sh

class TimeLockedBond:
    def __init__(self, bitcoin_rpc, platform_pubkey: str):
        self.rpc = bitcoin_rpc
        self.platform_key = platform_pubkey

    async def create_bond(
        self,
        user_pubkey: str,
        amount_sat: int,
        lock_blocks: int  # e.g. ~1440 blocks ≈ 10 days at 10 min/block
    ) -> dict:
        """
        Create OP_CHECKLOCKTIMEVERIFY bond.

        User can reclaim after lock period, or
        forfeit to platform if misbehavior proven.
        """
        # Build P2SH script:
        # IF <platform_pubkey> CHECKSIG
        # ELSE <locktime> CHECKLOCKTIMEVERIFY DROP <user_pubkey> CHECKSIG
        # ENDIF
        redeem_script = build_timelock_redeem(
            user_pubkey=user_pubkey,
            platform_pubkey=self.platform_key,
            lock_blocks=lock_blocks
        )

        p2sh_address = script_to_p2sh(redeem_script)

        # User funds this address
        return {
            'bond_address': p2sh_address,
            'redeem_script': redeem_script.hex(),
            'lock_blocks': lock_blocks,
            'unlock_height': await self.rpc.getblockcount() + lock_blocks,
            'amount_required_sat': amount_sat
        }

    async def verify_bond_funded(self, bond_address: str, required_sat: int) -> dict:
        """Check bond address has required funding."""
        utxos = await self.rpc.listunspent(0, 9999999, [bond_address])

        total = sum(u['amount'] for u in utxos)
        confirmations = min(u['confirmations'] for u in utxos) if utxos else 0

        return {
            'funded': total >= required_sat,
            'amount_confirmed_sat': total,
            'confirmations': confirmations,
            'mature': confirmations >= 6
        }
```
5.4.2 Activity Proof: Sustained Payment History, Nostr Engagement

Activity metrics (a sketch of representative metrics follows):
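A sketch computing the two signals named in the section title (sustained payment history and Nostr engagement) over a rolling window; the window and thresholds are placeholder assumptions:

```python
# Illustrative sketch of activity-proof metrics. Thresholds are
# placeholder assumptions, not platform constants.
import time

def activity_proof(payment_timestamps: list[float],
                   nostr_event_timestamps: list[float],
                   window_days: int = 90) -> dict:
    cutoff = time.time() - window_days * 86400
    recent_payments = [t for t in payment_timestamps if t >= cutoff]
    recent_events = [t for t in nostr_event_timestamps if t >= cutoff]

    # Distinct active days distinguish sustained activity from bursts
    active_days = len({int(t // 86400) for t in recent_payments + recent_events})

    return {
        "payments_in_window": len(recent_payments),
        "nostr_events_in_window": len(recent_events),
        "active_days": active_days,
        "sustained": active_days >= window_days // 3,  # Assumed threshold
    }
```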
5.4.3 Social Graph Analysis: Common Connections, Clustering

Sybil resistance through graph structure:

```python
import statistics

import networkx as nx

# TRUSTED_SEEDS: module-level list of trusted identities

def analyze_sybil_resistance(
    new_identity: str,
    existing_graph: nx.Graph,
    min_mutual_connections: int = 3
) -> dict:
    """
    Evaluate new identity's organic integration into graph.

    High scores indicate genuine identity; low scores suggest
    sybil attack or isolated account.
    """
    # Mutual connections: neighbors of the new identity that are
    # themselves connected to at least one trusted seed
    mutuals = [
        n for n in existing_graph.neighbors(new_identity)
        if any(existing_graph.has_edge(n, s) for s in TRUSTED_SEEDS)
    ]

    # Clustering coefficient of ego network
    ego = nx.ego_graph(existing_graph, new_identity, radius=2)
    clustering = nx.average_clustering(ego)

    # Path length distribution to trusted seeds
    path_lengths = []
    for seed in TRUSTED_SEEDS:
        try:
            length = nx.shortest_path_length(existing_graph, new_identity, seed)
            path_lengths.append(length)
        except nx.NetworkXNoPath:
            pass

    score_components = {
        'mutual_connections': len(mutuals),
        'clustering_coefficient': clustering,
        'avg_path_to_seeds': statistics.mean(path_lengths) if path_lengths else float('inf'),
        'max_path_to_seeds': max(path_lengths) if path_lengths else float('inf')
    }

    # Composite score
    if len(mutuals) < min_mutual_connections:
        return {'status': 'rejected', 'reason': 'insufficient_mutuals', 'components': score_components}

    score = (
        len(mutuals) * 0.3 +
        clustering * 10 * 0.3 +
        (5 / (statistics.mean(path_lengths) + 1)) * 0.4
    )

    return {
        'status': 'accepted' if score > 0.5 else 'review',
        'score': score,
        'components': score_components
    }
```
5.5 DID Standard Integration

5.5.1 did:btc and did:ln Method Specifications

Proposed did:ln method:

```
did:ln:<method-specific-identifier>

Method-specific identifier: bech32-encoded Lightning node pubkey
Example: did:ln:ln1qgsyxj5r9dkgkg3z7vs9l8f9v2j4k8v3m9n4p5q6r7s8t9u0v1w2x3y4z5a6b7c8d9e0f
```

DID document resolution:

```json
{
  "@context": ["https://www.w3.org/ns/did/v1", "https://did-ln.example.com/v1"],
  "id": "did:ln:ln1qgsyxj5r9dkgkg3z7vs9l8f9v2j4k8v3m9n4p5q6r7s8t9u0v1w2x3y4z5a6b7c8d9e0f",
  "verificationMethod": [{
    "id": "did:ln:ln1qg...#key1",
    "type": "EcdsaSecp256k1VerificationKey2019",
    "controller": "did:ln:ln1qg...",
    "publicKeyHex": "02a1b2c3d4e5f6..."
  }],
  "authentication": ["did:ln:ln1qg...#key1"],
  "service": [{
    "id": "did:ln:ln1qg...#node",
    "type": "LightningNode",
    "serviceEndpoint": "https://1.2.3.4:9735"
  }, {
    "id": "did:ln:ln1qg...#nostr",
    "type": "NostrRelay",
    "serviceEndpoint": "wss://relay.example.com"
  }]
}
```
5.5.2 DID Document Resolution from Nostr/Lightning Anchors

Resolution process (sketched below for the did:ln method proposed above):
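A sketch, assuming the proposed did:ln method: decode the bech32 identifier to a node pubkey and construct a minimal DID document locally, matching the document shape shown in Section 5.5.1. The `bech32_decode_pubkey` helper is an assumption:

```python
# Sketch of did:ln resolution under the proposed method above.
def resolve_did_ln(did: str) -> dict:
    assert did.startswith("did:ln:")
    identifier = did[len("did:ln:"):]
    pubkey_hex = bech32_decode_pubkey(identifier)  # Assumed helper

    return {
        "@context": ["https://www.w3.org/ns/did/v1"],
        "id": did,
        "verificationMethod": [{
            "id": f"{did}#key1",
            "type": "EcdsaSecp256k1VerificationKey2019",
            "controller": did,
            "publicKeyHex": pubkey_hex,
        }],
        "authentication": [f"{did}#key1"],
    }
```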
5.5.3 Verifiable Credential Issuance and Verification

Platform-issued credentials:

```json
{
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  "type": ["VerifiableCredential", "AIInferenceReputation"],
  "issuer": "did:ln:ln1qg...platform-node",
  "issuanceDate": "2024-01-15T10:00:00Z",
  "credentialSubject": {
    "id": "did:nostr:npub1abc...",
    "reputationScore": 850,
    "totalPaidSat": 5000000,
    "successfulJobs": 150,
    "disputeRate": 0.0,
    "memberSince": "2023-06-01"
  },
  "proof": {
    "type": "EcdsaSecp256k1Signature2019",
    "created": "2024-01-15T10:00:00Z",
    "proofPurpose": "assertionMethod",
    "verificationMethod": "did:ln:ln1qg...#key1",
    "jws": "eyJhbGciOiJFUzI1Nksi..."
  }
}
```
5.6 Platform Reference Implementations

5.6.1 Stacker News: Lightning-Linked Reputation and Moderation

Stacker News patterns:

5.6.2 Nostr Marketplaces: NIP-15 and NIP-99 Merchant Identity

Merchant verification flow:

5.6.3 RoboSats: Pseudonymous Trade Reputation with Bonding

RoboSats mechanisms:

5.7 Sovereign AI Platform Integration

5.7.1 User Registration: Nostr Key or Lightning Node Pubkey

Registration flow:

```
┌─────────────┐      ┌─────────────┐      ┌─────────────┐
│    User     │─────►│  Platform   │─────►│   Nostr/    │
│  (Browser   │      │  Gateway    │      │  Lightning  │
│   or CLI)   │◄─────│             │◄─────│   Verify    │
└─────────────┘      └─────────────┘      └─────────────┘

1. User presents: npub OR node_pubkey OR both
2. Platform generates challenge (nonce + timestamp)
3. User signs challenge with presented key(s)
4. Platform verifies signature, creates account
5. Cross-linkage proof optional for enhanced reputation
```

Implementation:
```python
import hashlib
import json
import secrets
import time
from datetime import datetime

from cachetools import TTLCache

# Exceptions and helpers not shown: ChallengeExpiredError,
# VerificationFailedError, UnsupportedIdentityTypeError,
# _verify_nostr_proof, _verify_lightning_proof, _create_account, _initial_tier

class SovereignRegistration:
    def __init__(self):
        self.pending_challenges = TTLCache(maxsize=10000, ttl=300)

    async def initiate_registration(self, identity: dict) -> dict:
        """
        Start registration with Nostr or Lightning identity.
        """
        challenge = {
            'nonce': secrets.token_hex(16),
            'timestamp': int(time.time()),
            'expires': int(time.time()) + 300
        }

        challenge_id = hashlib.sha256(
            json.dumps(challenge, sort_keys=True).encode()
        ).hexdigest()[:16]

        self.pending_challenges[challenge_id] = {
            'challenge': challenge,
            'identity': identity,
            'attempts': 0
        }

        return {
            'challenge_id': challenge_id,
            'challenge_text': f"sovereign-ai:register:{challenge['nonce']}:{challenge['timestamp']}",
            'expires_at': challenge['expires']
        }

    async def complete_registration(
        self,
        challenge_id: str,
        proof: dict
    ) -> dict:
        """Verify proof and create account."""
        pending = self.pending_challenges.get(challenge_id)
        if not pending or time.time() > pending['challenge']['expires']:
            raise ChallengeExpiredError()

        # Verify based on identity type
        identity = pending['identity']

        if identity['type'] == 'nostr':
            verified = self._verify_nostr_proof(
                pending['challenge'],
                proof,
                identity['npub']
            )
            primary_id = identity['npub']

        elif identity['type'] == 'lightning':
            verified = self._verify_lightning_proof(
                pending['challenge'],
                proof,
                identity['node_pubkey']
            )
            primary_id = identity['node_pubkey']

        else:
            raise UnsupportedIdentityTypeError(identity['type'])

        if not verified:
            pending['attempts'] += 1
            if pending['attempts'] >= 3:
                del self.pending_challenges[challenge_id]
            raise VerificationFailedError()

        # Create account
        account = await self._create_account(
            primary_id=primary_id,
            linked_identities=proof.get('linked_identities', []),
            created_at=datetime.utcnow()
        )

        # Clean up
        del self.pending_challenges[challenge_id]

        return {
            'account_id': account.id,
            'primary_identity': primary_id,
            'reputation_tier': self._initial_tier(proof),
            'api_endpoint': f'https://api.inference.example.com/v1/{account.id}'
        }
```
5.7.2 Session Authentication: Challenge-Response Signing

Session management (sketched below):
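A sketch of session issuance after a successful challenge-response (Section 5.2.2): a short-lived opaque token mapped to the verified identity. Names and the TTL are assumptions:

```python
# Sketch of post-verification session management.
import secrets
import time

SESSIONS = {}  # token -> {identity, expires}; production: Redis

def issue_session(identity: str, ttl_seconds: int = 3600) -> str:
    """Issue a short-lived session token for a verified npub/node pubkey."""
    token = secrets.token_urlsafe(32)
    SESSIONS[token] = {"identity": identity, "expires": time.time() + ttl_seconds}
    return token

def authenticate(token: str) -> str | None:
    """Return the bound identity, or None if the session is missing/expired."""
    session = SESSIONS.get(token)
    if not session or time.time() > session["expires"]:
        SESSIONS.pop(token, None)
        return None
    return session["identity"]
```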
5.7.3 Reputation-Aware Pricing and Service Tiering

Dynamic pricing by reputation (an illustrative tier mapping follows):
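A sketch mapping a reputation score (such as the reputationScore in the credential of Section 5.5.3) to a price multiplier; the tier boundaries and discounts are placeholder assumptions:

```python
# Illustrative sketch only: tier boundaries and discounts are
# placeholder assumptions, not platform constants.
PRICING_TIERS = [
    # (min reputation score, price multiplier)
    (800, 0.80),   # Established: 20% discount
    (500, 0.90),
    (200, 1.00),   # Baseline
    (0,   1.25),   # New/unproven: risk premium
]

def price_multiplier(reputation_score: int) -> float:
    """Map a reputation score to a pricing multiplier."""
    for threshold, multiplier in PRICING_TIERS:
        if reputation_score >= threshold:
            return multiplier
    return PRICING_TIERS[-1][1]
```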
5.7.4 Dispute Resolution Without Centralized Arbitration

Decentralized dispute options:

Bonded arbitrator selection:
```python
import random
import time
from typing import List

# Types not shown: Dispute, Arbitrator

async def select_arbitrator(
    dispute: Dispute,
    arbitrator_pool: List[Arbitrator]
) -> Arbitrator:
    """
    Select arbitrator with relevant expertise and minimal conflict.

    Filters: stake > dispute amount, no prior relation to parties,
    availability, historical accuracy.
    """
    qualified = [
        a for a in arbitrator_pool
        if a.bonded_sat >= dispute.amount_sat * 2
        and a.user_id not in [dispute.plaintiff_id, dispute.defendant_id]
        and a.available
    ]

    # Score by historical accuracy in similar disputes
    scored = []
    for a in qualified:
        similar = [d for d in a.history if d.category == dispute.category]
        accuracy = sum(1 for d in similar if d.resolution_correct) / len(similar) if similar else 0.5

        # Penalize recent activity (avoid gaming)
        recency_penalty = 1.0
        if a.last_dispute and (time.time() - a.last_dispute) < 86400:
            recency_penalty = 0.5

        scored.append((a, accuracy * recency_penalty))

    scored.sort(key=lambda x: x[1], reverse=True)

    # Random selection from top 3 (prevent prediction)
    return random.choice([a for a, _ in scored[:3]])
```
Implementation Roadmap for Gitea Tickets

Priority 1: Core L402 Infrastructure
☐ Deploy LND node with Neutrino backend
☐ Implement /offers and /payment_request endpoints with l402-python
☐ Integrate macaroon minting with job parameter caveats
☐ Build invoice subscription settlement detection

Priority 2: Refund System
☐ Implement hybrid keysend/invoice refund engine
☐ Deploy job lifecycle state machine with persistence
☐ Configure dust thresholds and accumulation strategies
☐ Build abandoned refund reclamation workflow

Priority 3: Agent Wallet
☐ Deploy LND with minimal macaroon permissions
☐ Implement automated channel management (Loop Out, rebalancing)
☐ Integrate LNbits for sub-account isolation
☐ Configure spending anomaly detection and escalation

Priority 4: Cost Estimation
☐ Deploy token counters for all supported models
☐ Implement exponential decay calibrator with feedback
☐ Configure 95th percentile pricing with dynamic adjustment
☐ Build long-context and reasoning detection

Priority 5: Identity System
☐ Implement Nostr challenge-response authentication
☐ Deploy Lightning node pubkey verification
☐ Build reputation graph from payment history
☐ Integrate reputation-aware pricing tiers