Inference Request (User Job)

Any user, DAO, or application can initiate a verifiable AI task by submitting an inference request. This request is cryptographically tied to:
- A registered model,
- The user's input (hashed),
- And a HUB token payment.
Once submitted, the request enters the execution pipeline and becomes a provable, revenue-generating event within the system.
🔁 Workflow:
1. User selects a model from the on-chain registry.
2. Prepares input data (off-chain JSON, hashed on the client side).
3. Sends an `InferenceRequest` transaction with:
   - `model_hash`
   - `input_hash`
   - `payment_amount` (in HUB or a stablecoin)
4. Job is queued for executors.
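The client-side steps above can be sketched as follows. This is a minimal illustration, not the platform's SDK: the function names (`hash_input`, `build_request`) are hypothetical, and it assumes the input JSON is canonicalized (sorted keys, no whitespace) before hashing so that the same logical input always yields the same `input_hash`.

```python
import hashlib
import json

def hash_input(payload: dict) -> str:
    """SHA-256 of the canonically serialized input (hypothetical canonicalization)."""
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def build_request(model_hash: str, payload: dict, payment_amount: int) -> dict:
    """Assemble the fields of an InferenceRequest transaction (illustrative client)."""
    return {
        "model_hash": model_hash,          # from the on-chain registry
        "input_hash": hash_input(payload), # input itself stays off-chain
        "payment_amount": payment_amount,  # in HUB base units
    }
```

Canonical serialization matters here: if the client and the verifier serialize the same JSON differently, the hashes will not match.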
📦 On-Chain Struct:
```solidity
struct InferenceRequest {
    bytes32 input_hash;      // SHA-256 of user's input
    bytes32 model_hash;      // Tied to registered zk-model
    address user;            // Sender address
    uint256 payment_amount;  // In HUB
    uint64  timestamp;       // Unix time
}
```
📥 Input Data Example (Off-Chain):
```json
{
  "APR": 6.42,
  "stake_volume": 125000,
  "risk_profile": "moderate"
}
```
This input is not stored on-chain, but its hash is used to validate the zk-proof.
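The hash check can be sketched as below: given a revealed input and the `input_hash` stored on-chain, a verifier recomputes the digest and compares. The function name `matches_onchain_hash` and the canonical-JSON convention are assumptions for illustration; the actual zk-proof circuit binds to the hash in its own way.

```python
import hashlib
import json

def matches_onchain_hash(revealed_input: dict, onchain_input_hash: bytes) -> bool:
    """Recompute SHA-256 over the canonical JSON and compare to the stored hash."""
    canonical = json.dumps(revealed_input, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).digest() == onchain_input_hash
```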
🌍 RWA Integration:
Payment Tracking: Requests using RWA-tokenized models will trigger payout logic to token holders.
Subscription-Based Access: DAOs holding access tokens (NFTs) can submit requests without direct HUB payment (prepaid logic).
Auditable Revenue: Each request adds to the on-chain usage history of a model, powering dashboards, reputation, and financial reporting.
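The payout logic for RWA-tokenized models could look like the pro-rata split below. This is a sketch under stated assumptions, not the platform's distribution contract: the function name and the simple balance-proportional formula are illustrative, and integer division leaves rounding dust that a real contract must handle explicitly.

```python
def distribute_revenue(payment_amount: int, holdings: dict[str, int]) -> dict[str, int]:
    """Split one request's payment pro-rata across RWA token holders (illustrative).

    Integer division floors each share, so a few base units of dust may remain
    unallocated; a production contract would sweep or carry them forward.
    """
    total_supply = sum(holdings.values())
    return {
        holder: payment_amount * balance // total_supply
        for holder, balance in holdings.items()
    }
```

For example, a 100-HUB payment split over holders with balances 1 and 3 yields 25 and 75.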
Every inference request is a cryptographic contract — linking a user’s intent, a model’s logic, and an on-chain monetization outcome.