Proof-of-Weights (PoWg)


Proof-of-Weights (PoWg) is a decentralized scoring framework used to evaluate and rank inference nodes in the Hubic network. Validators compute performance metrics for AI agents or inference nodes and must prove — via zero-knowledge proofs — that scoring was executed correctly.

This zk-verified scoring ensures a reliable reputation system, which is essential for both network health and RWA-linked models that generate income based on performance.

🔧 How It Works:

  1. Validator retrieves performance data for a model or inference node.

  2. Scoring function is executed locally, encapsulated in a zk-circuit.

  3. A zk-proof of execution is generated and submitted on-chain.

  4. Scores and proofs are recorded and validated via Ethereum smart contracts (a sketch of this flow follows below).
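
Below is a minimal, illustrative walk-through of these four steps in Python. The scoring rule, the proving interface, and the submission call are placeholders invented for this sketch; the real scoring logic is encoded in a zk-circuit and is not specified here.

```python
# Illustrative validator-side walk-through of the four steps above.
# compute_score, prove_scoring and the final submission call are hypothetical
# placeholders; the real scoring logic lives inside a zk-circuit.
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class PerformanceMetrics:
    node_id: str
    successful_inferences: int
    failed_inferences: int
    avg_latency_ms: float

def compute_score(m: PerformanceMetrics) -> float:
    """Toy scoring rule: success rate discounted by latency."""
    total = m.successful_inferences + m.failed_inferences
    if total == 0:
        return 0.0
    success_rate = m.successful_inferences / total
    latency_factor = 1.0 / (1.0 + m.avg_latency_ms / 1000.0)
    return round(success_rate * latency_factor, 6)

def prove_scoring(m: PerformanceMetrics, score: float) -> bytes:
    """Stand-in for zk-proof generation over the scoring circuit."""
    witness = json.dumps({"metrics": asdict(m), "score": score}, sort_keys=True)
    return hashlib.sha256(witness.encode()).digest()  # not a real zk-proof

# 1) retrieve metrics  2) score locally  3) generate proof  4) submit on-chain
metrics = PerformanceMetrics("node-42", 980, 20, 350.0)
score = compute_score(metrics)
proof = prove_scoring(metrics, score)
print(score, proof.hex()[:16])  # a submit_score(node_id, score, proof) call would follow
```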

🔐 Security Guarantees:

  • Prevents score manipulation or duplication across validators.

  • Validates that scoring was computed honestly using on-chain verifiable circuits.

  • Publishes score hashes and proofs to Ethereum for auditability and downstream automation (see the sketch below).
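
The sketch below simulates, in Python, how duplicate or replayed score submissions could be rejected and how a published score hash supports off-chain audits. The keying scheme, the field names, and the use of sha256 as a stand-in for an on-chain keccak256 hash are all assumptions made for illustration.

```python
# Simulated contract-side bookkeeping: one score per (validator, node, epoch),
# with a published hash for off-chain audit. sha256 stands in for keccak256.
import hashlib

submitted: dict[tuple[str, str, int], str] = {}

def score_hash(validator: str, node_id: str, epoch: int, score: float) -> str:
    payload = f"{validator}|{node_id}|{epoch}|{score}".encode()
    return hashlib.sha256(payload).hexdigest()

def record_score(validator: str, node_id: str, epoch: int, score: float) -> bool:
    key = (validator, node_id, epoch)
    if key in submitted:        # a second submission for the same epoch is rejected
        return False
    submitted[key] = score_hash(validator, node_id, epoch, score)
    return True

assert record_score("0xValidatorA", "node-42", 17, 0.93)
assert not record_score("0xValidatorA", "node-42", 17, 0.91)  # duplicate/replay blocked
```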

🔁 Flow:

Performance Metrics → Local Evaluation (zk-circuit) → zk-Proof + Scores → Ethereum Verifier → On-Chain Reputation Update
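
A small simulation of the last two stages of this flow, proof verification and the reputation update, is shown below. The proof check is a stub and the exponential-moving-average aggregation is an assumed rule, not one taken from the Hubic contracts.

```python
# Simulation of the verifier / reputation-update end of the flow above.
# verify_proof is a stub; a real deployment would call a zk verifier contract.
from collections import defaultdict

reputation: dict[str, float] = defaultdict(float)

def verify_proof(proof: bytes, score: float) -> bool:
    """Stub for the on-chain zk verifier; here it only checks the proof length."""
    return len(proof) == 32

def update_reputation(node_id: str, score: float, proof: bytes, alpha: float = 0.2) -> None:
    if not verify_proof(proof, score):
        raise ValueError("invalid proof: reputation not updated")
    # assumed aggregation rule: exponential moving average of verified scores
    reputation[node_id] = (1 - alpha) * reputation[node_id] + alpha * score

update_reputation("node-42", 0.93, bytes(32))
print(reputation["node-42"])  # 0.186 after the first verified score
```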

📊 Use Cases:

  • Node Reputation: Reliable scoring ensures high-performing nodes are prioritized for critical inference tasks.

  • Model Quality Tracking: Models with better usage metrics and inference success rates can receive more traffic and fees.

  • RWA Revenue Optimization: RWA-linked models or agents can be scored transparently, with zk-proven logic determining payouts to token holders based on usage and performance quality (see the payout sketch below).
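
As a rough illustration of the RWA payout case, the sketch below splits revenue across agents in proportion to usage weighted by zk-proven scores. The weighting formula, names, and figures are assumptions, not the actual Hubic payout logic.

```python
# Score-weighted revenue split for an RWA-linked model. The weighting
# (usage x zk-proven score) and the figures are illustrative assumptions.
def distribute_payout(total_revenue: float,
                      usage: dict[str, int],
                      scores: dict[str, float]) -> dict[str, float]:
    weights = {agent: usage[agent] * scores.get(agent, 0.0) for agent in usage}
    total_weight = sum(weights.values())
    if total_weight == 0:
        return {agent: 0.0 for agent in usage}
    return {agent: total_revenue * w / total_weight for agent, w in weights.items()}

payouts = distribute_payout(
    1_000.0,
    usage={"agent-a": 500, "agent-b": 300},
    scores={"agent-a": 0.95, "agent-b": 0.80},
)
print(payouts)  # agent-a earns more per request because of its higher score
```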

🌍 RWA Relevance:

In the RWA context, PoWg is the accountability engine. It provides cryptographic proof that a model or AI agent is performing as expected, which forms the basis for royalty distribution, dynamic pricing, or trust-weighted access. High-scoring agents can command premium rates, attract capital flows, and participate in enterprise or institutional deployments.
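
For instance, a simple dynamic-pricing rule might scale an agent's inference fee with its zk-proven score. The base fee and premium curve below are purely illustrative assumptions.

```python
# Illustrative dynamic-pricing rule: fees scale with the zk-proven score.
# The base fee and premium curve are assumptions, not Hubic parameters.
def inference_fee(base_fee: float, score: float, max_premium: float = 0.5) -> float:
    """Fee grows linearly from base_fee to base_fee * (1 + max_premium) as score approaches 1."""
    clamped = min(max(score, 0.0), 1.0)
    return base_fee * (1.0 + max_premium * clamped)

print(round(inference_fee(0.10, 0.93), 4))  # 0.1465: a high-scoring agent charges a premium
```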