
Inference Request (User Job)


Any user, DAO, or application can initiate a verifiable AI task by submitting an inference request. Each request is cryptographically tied to:

  • a registered model,

  • the user's input (hashed), and

  • a HUB token payment.

Once submitted, the request enters the execution pipeline and becomes a provable, revenue-generating event within the system.


🔁 Workflow:

  1. The user selects a model from the on-chain registry.

  2. The client prepares the input data (off-chain JSON) and hashes it client-side.

  3. The user sends an InferenceRequest transaction with:

    • model_hash

    • input_hash

    • payment_amount (in HUB or stablecoin)

  4. The job is queued for executors.
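The client-side half of this workflow can be sketched as follows. This is an illustrative helper, not the actual SDK: the function name `build_request` is an assumption, and the fields simply mirror the on-chain struct shown below.

```python
import hashlib
import json
from dataclasses import dataclass

@dataclass
class InferenceRequest:
    input_hash: bytes      # SHA-256 of the user's input
    model_hash: bytes      # hash of the registered zk-model
    user: str              # sender address
    payment_amount: int    # in HUB (smallest unit)
    timestamp: int         # Unix time

def build_request(model_hash: bytes, input_data: dict,
                  user: str, payment_amount: int,
                  timestamp: int) -> InferenceRequest:
    # Hash the input off-chain. Canonical serialization (sorted keys,
    # no whitespace) keeps the hash reproducible across clients.
    payload = json.dumps(input_data, sort_keys=True,
                         separators=(",", ":")).encode()
    input_hash = hashlib.sha256(payload).digest()
    return InferenceRequest(input_hash, model_hash, user,
                            payment_amount, timestamp)
```

In a real client this object would be serialized and signed as a transaction; here it only illustrates which values the user commits to before submission.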


📦 On-Chain Struct:

struct InferenceRequest {
  bytes32 input_hash;     // SHA-256 of user’s input
  bytes32 model_hash;     // Tied to registered zk-model
  address user;           // Sender address
  uint256 payment_amount; // In HUB
  uint64 timestamp;       // Unix time
}

📥 Input Data Example (Off-Chain):

{
  "APR": 6.42,
  "stake_volume": 125000,
  "risk_profile": "moderate"
}

This input is not stored on-chain, but its hash is used to validate the zk-proof.
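Hashing the example input above might look like this in Python. The canonical serialization (sorted keys, no whitespace) is an assumption: any ambiguity in how the JSON is encoded would change the digest and cause the zk-proof check to fail.

```python
import hashlib
import json

# The off-chain input from the example above.
input_data = {
    "APR": 6.42,
    "stake_volume": 125000,
    "risk_profile": "moderate",
}

# Canonical encoding so client and verifier derive the same bytes.
payload = json.dumps(input_data, sort_keys=True,
                     separators=(",", ":")).encode()

# This 32-byte digest is what goes on-chain as input_hash.
input_hash = hashlib.sha256(payload).hexdigest()
```

Only `input_hash` ever touches the chain; the raw JSON stays with the user and the executor.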


🌍 RWA Integration:

  • Payment Tracking: Requests using RWA-tokenized models will trigger payout logic to token holders.

  • Subscription-Based Access: DAOs holding access tokens (NFTs) can submit requests without direct HUB payment (prepaid logic).

  • Auditable Revenue: Each request adds to the on-chain usage history of a model, powering dashboards, reputation, and financial reporting.
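The payout logic hinted at above could, for instance, split each payment pro rata across RWA token holders. The sketch below is hypothetical (the document does not specify the contract's distribution rules); `split_revenue` and its rounding behavior are assumptions.

```python
def split_revenue(payment_amount: int, balances: dict[str, int]) -> dict[str, int]:
    """Split a request's payment across token holders, pro rata by balance.

    Hypothetical sketch: integer division is used, so any rounding
    remainder ("dust") is simply left undistributed here.
    """
    total = sum(balances.values())
    return {
        holder: payment_amount * balance // total
        for holder, balance in balances.items()
    }

# Example: 1,000 HUB (smallest units) split 60/30/10 across three holders.
shares = split_revenue(1_000, {"0xA": 600, "0xB": 300, "0xC": 100})
# shares == {"0xA": 600, "0xB": 300, "0xC": 100}
```

A production contract would also have to decide who keeps the rounding dust and when payouts are claimable; those details are out of scope here.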

Every inference request is a cryptographic contract — linking a user’s intent, a model’s logic, and an on-chain monetization outcome.