Incentive Mechanism

The incentive mechanism constitutes the cryptoeconomic control layer of Subnet 87, governing how decentralized intelligence is evaluated, priced, and continuously improved.

Rather than relying on trusted intermediaries, the network embeds performance accountability directly into its economic structure through Bittensor’s native validation and reward distribution framework. This converts technical signals—accuracy, temporal consistency, and latency—into direct financial incentives, creating a closed feedback loop between engineering and economic outcomes.


4.1 Validation of Data Labeling

For tasks such as object detection, semantic segmentation, or activity classification, validators must ensure the accuracy and consistency of labeled outputs. Subnet 87 employs a comprehensive benchmark evaluation methodology.

Validators maintain a curated benchmark dataset $D_{benchmark}$ consisting of pre-labeled ground truth annotations across diverse scenarios. When evaluating a miner’s labeling capability, the entire benchmark is presented:

$$D_{evaluation} = D_{benchmark}$$

The miner’s score $S_m$ is calculated by measuring prediction accuracy using a task-appropriate loss function $L$:

$$S_m = e^{-\alpha \cdot L(Label_{miner}, Label_{ground\_truth})}$$

  • $\alpha$: A scaling parameter controlling the sensitivity of the exponential decay.

  • Purpose: Evaluating the full benchmark prevents miners from selectively optimizing for specific data subsets, as every sample contributes to the final score (a scoring sketch follows this list).
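
The following minimal sketch illustrates this scoring rule. The function name, the default 0/1 loss, and the example $\alpha$ value are illustrative assumptions; the whitepaper only requires a task-appropriate loss $L$ computed over the entire benchmark.

```python
import numpy as np

def miner_benchmark_score(miner_labels, ground_truth_labels, alpha=1.0, loss_fn=None):
    """Score a miner against the full benchmark: S_m = exp(-alpha * L)."""
    if loss_fn is None:
        # Placeholder task-appropriate loss: mean 0/1 error over classification labels.
        loss_fn = lambda pred, gt: float(np.mean(np.asarray(pred) != np.asarray(gt)))
    loss = loss_fn(miner_labels, ground_truth_labels)  # loss L over every benchmark sample
    return float(np.exp(-alpha * loss))                # exponential decay controlled by alpha

# Hypothetical example: a miner mislabels 2 of 10 benchmark samples.
ground_truth = ["car", "person", "bike", "car", "truck", "car", "person", "bike", "car", "bus"]
predictions  = ["car", "person", "car",  "car", "truck", "car", "person", "bike", "bus", "bus"]
print(miner_benchmark_score(predictions, ground_truth, alpha=2.0))  # exp(-2 * 0.2) ≈ 0.67
```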


4.2 Validation of Localization

Effective video intelligence requires models that understand geographic, cultural, and temporal context. The localization task evaluates a miner’s ability to adapt models to region-specific characteristics (e.g., Tokyo vs. Berlin).

Validators present miners with geographically and temporally tagged samples $V_{region,time}$. Let $C$ represent the set of contextual features:

$$C = \{\text{traffic signs, vehicle types, behavioral norms, traffic rules}\}$$

The localization score $S_l$ is computed as:

$$S_l = \frac{1}{|C|} \sum_{c \in C} \text{Accuracy}(Prediction_{miner}^{c}, GroundTruth_{r,t}^{c})$$

This metric ensures miners can distinguish between regional infrastructure and social norms, incentivizing geographically adaptive models over monolithic systems.
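
A minimal sketch of this per-feature averaging, assuming each contextual feature is scored by exact-match accuracy; the feature keys and sample values (a Tokyo-like scene) are hypothetical.

```python
def localization_score(predictions, ground_truth):
    """S_l: mean per-feature accuracy over the contextual feature set C."""
    per_feature_acc = []
    for c in ground_truth:                       # c ranges over C
        pred, gt = predictions[c], ground_truth[c]
        correct = sum(p == g for p, g in zip(pred, gt))
        per_feature_acc.append(correct / len(gt))
    return sum(per_feature_acc) / len(per_feature_acc)

# Hypothetical region/time-tagged sample V_{region,time} (e.g., Tokyo, evening):
gt = {
    "traffic_signs":    ["stop", "yield", "no_entry"],
    "vehicle_types":    ["kei_car", "taxi", "scooter"],
    "behavioral_norms": ["left_hand_traffic"],
    "traffic_rules":    ["no_right_on_red"],
}
pred = {
    "traffic_signs":    ["stop", "yield", "speed_limit"],
    "vehicle_types":    ["kei_car", "taxi", "scooter"],
    "behavioral_norms": ["left_hand_traffic"],
    "traffic_rules":    ["no_right_on_red"],
}
print(localization_score(pred, gt))  # (2/3 + 1 + 1 + 1) / 4 ≈ 0.92
```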


4.3 Validation of Retroactive Timeline Construction

This task requires miners to reconstruct a subject’s movement by stitching together fragmented footage from multiple camera sources.

Asymmetric Verification

The methodology uses asymmetric verification, where computationally expensive backward reconstruction serves as the ground truth:

  1. Backward Flow (Ground Truth): Validators work backward from a known endpoint to trace a path. This is highly accurate but too slow for real-time use.

  2. Forward Flow (Miner Task): Miners must predict the trajectory moving forward in real time, using appearance cues and spatiotemporal reasoning.

The reward $R$ is calculated using cosine similarity between the miner's path vector $\vec{P}_{miner}$ and the validator's ground truth $\vec{P}_{validator}$:

$$R = \frac{\vec{P}_{miner} \cdot \vec{P}_{validator}}{\|\vec{P}_{miner}\|\,\|\vec{P}_{validator}\|}$$
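
A minimal sketch of this cosine-similarity reward, assuming both trajectories have already been encoded as fixed-length numeric vectors (the encoding step itself is outside the scope of the formula):

```python
import numpy as np

def timeline_reward(path_miner, path_validator):
    """Cosine similarity between the miner's forward-predicted path and the
    validator's backward-reconstructed ground-truth path."""
    p_m = np.asarray(path_miner, dtype=float)
    p_v = np.asarray(path_validator, dtype=float)
    denom = np.linalg.norm(p_m) * np.linalg.norm(p_v)
    if denom == 0.0:
        return 0.0  # degenerate (zero-length) paths earn no reward
    return float(np.dot(p_m, p_v) / denom)

# Hypothetical vector encodings of two camera-hop trajectories:
print(timeline_reward([1.0, 0.8, 0.3, 0.0], [1.0, 0.7, 0.4, 0.1]))  # ≈ 0.99
```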


4.4 Overall Validation

All benchmark tasks within an evaluation epoch are jointly validated as a single competitive batch. Miner scores across localization, labeling, and timeline reconstruction are aggregated into a unified performance metric.

Winner-Takes-All Model

At the end of each cycle, the highest-performing miner receives 100% of the emissions allocated to miners for that epoch (see the allocation sketch after this list). This model enforces:

  • Strong competitive pressure.

  • Accelerated model iteration.

  • Economic rewards reserved exclusively for the most reliable agents.
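
A minimal sketch of the winner-takes-all allocation, assuming a unified aggregate score has already been computed per miner; the miner identifiers and emission amount are hypothetical.

```python
def allocate_epoch_emissions(aggregate_scores, miner_emissions):
    """Winner-takes-all: the top-scoring miner receives the entire miner
    emission allocation for the epoch; every other miner receives zero."""
    winner = max(aggregate_scores, key=aggregate_scores.get)
    return {uid: (miner_emissions if uid == winner else 0.0) for uid in aggregate_scores}

# Hypothetical aggregate scores across labeling, localization, and timeline tasks:
scores = {"miner_12": 0.81, "miner_37": 0.93, "miner_54": 0.88}
print(allocate_epoch_emissions(scores, miner_emissions=100.0))
# {'miner_12': 0.0, 'miner_37': 100.0, 'miner_54': 0.0}
```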
