TL;DR
TokenIntel classifies every protocol it tracks under one of six value-accrual mechanisms (direct-fee, buyback-burn, buyback-hold, ve-model, governance-only, hybrid). This page reports the per-cohort token-return distribution at 30-day, 90-day, 180-day, and 365-day windows, refreshed monthly. The point is to give a reader two things: (1) the mechanism class as context, and (2) whether that class has actually delivered returns in the recent past, both at the cohort level relative to other mechanisms and at the token level relative to the cohort.
The structural rating tells you how the protocol can fail. The mechanism class tells you how cashflow is supposed to reach holders. The empirical layer tells you whether the architecture has worked. All three together are what an institutional allocator actually needs.
Mechanism choice is necessary but not sufficient. A textbook mechanism on a protocol whose business is failing produces -82% returns (dYdX). A "minimalist" mechanism on a protocol whose business is growing produces +193% returns (Hyperliquid). Cohort empirics are how you isolate the mechanism question from the business question.
Live cohort data
Below: median 365-day return per cohort, with p25 and p75 to show distribution width. n is the number of TokenIntel-tracked protocols in each cohort. Empty cells mean either that the cohort has no TI-tracked members (e.g. direct-fee, since TI does not currently cover dYdX) or that the token has insufficient price history.
Per-token returns vs. cohort median
Same dataset but at the token level. The "vs cohort" column is the protocol's return minus its cohort's median return — positive means outperformed peers, negative means underperformed peers. This is the lens that separates "mechanism worked but business failed" from "mechanism failed."
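A minimal sketch of the column arithmetic, using the same example as the reading guide below; the function name is illustrative, not the production script's:

```js
// "vs cohort" = token's window return minus its cohort's median return,
// both in percentage points for the same window.
const vsCohort = (tokenReturnPct, cohortMedianPct) => tokenReturnPct - cohortMedianPct;

vsCohort(-40, -55); // 15 -> outperformed the cohort by 15 points
```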
How to read this
Three questions readers should be able to answer after looking at the cohort table:
- Has my mechanism class outperformed? Compare the cohort median to the others. If the buyback-hold cohort is at +20% and governance-only is at -50%, the mechanism is doing real work. If buyback-hold is at -55% and governance-only is at -65%, there is mechanism-level differentiation, but not enough to save you from a cohort-wide drawdown.
- Has my specific token outperformed peers? Look at the per-token "vs cohort" column. If you hold a token that returned -40% but its cohort returned -55%, it outperformed peers by 15 points — the mechanism worked AND the protocol executed better than its peer group. If both numbers are similar, you are getting cohort-level returns from the protocol, no idiosyncratic alpha.
- Where is the cohort outlier? When a single token carries an entire cohort, the cohort median is misleading. Strip outliers and see what's left. The Hyperliquid effect on buyback-burn is the canonical example: the cohort median looks fine until you remove HYPE, at which point it tracks the broad alt drawdown like every other cohort.
Methodology
Cohort definition
TokenIntel-tracked protocols are grouped by the accrualMechanism field in data/defi-risk-scores.json. The taxonomy and per-protocol rationale notes are documented on TI's DeFi Risk Methodology page; this page consumes that classification rather than redefining it.
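A rough sketch of the grouping step, assuming data/defi-risk-scores.json is an array of records with slug and accrualMechanism fields (the real file schema may differ):

```js
// Sketch: bucket TI-tracked protocols into mechanism cohorts.
const fs = require("fs");

const protocols = JSON.parse(fs.readFileSync("data/defi-risk-scores.json", "utf8"));

const cohorts = {};
for (const p of protocols) {
  const mechanism = p.accrualMechanism || "unclassified";
  (cohorts[mechanism] ||= []).push(p.slug);
}
// e.g. { "buyback-burn": [...], "ve-model": [...], "governance-only": [...] }
```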
Return windows
Computed as percent change in CoinGecko-reported USD daily price from now − N days to now, for N in {30, 90, 180, 365}. Returns are unleveraged spot returns — they do not include staking yield, airdrops, ecosystem grants, or basis trades. A protocol with 50% staking APY and -60% spot return shows -60% here. The staking yield is captured in the underlying ERM analysis.
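A sketch of the per-window calculation, assuming a CoinGecko market_chart-style daily series of [timestampMs, usdPrice] pairs, oldest first; the production script's handling may differ in detail:

```js
// Percent change from (now - N days) to now, computed on daily USD prices.
// Returns null when the token lacks enough history for the window, which is
// how a token drops out of the longer windows.
function windowReturn(prices, nDays) {
  const startIdx = prices.length - 1 - nDays;
  if (startIdx < 0) return null;
  const start = prices[startIdx][1];
  const latest = prices[prices.length - 1][1];
  return ((latest - start) / start) * 100; // unleveraged spot return, in percent
}

const WINDOWS = [30, 90, 180, 365];
// WINDOWS.map(n => windowReturn(prices, n))
```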
Cohort statistics
We report median, p25, p75, min, and max per cohort × window. Median, not mean. Cohorts are too small to absorb extreme outliers cleanly — Hyperliquid's recent +138% would distort the mean of a 3-token cohort but barely move the median. The p25 and p75 give the reader the cohort spread without collapsing it into a single number.
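A sketch of the cohort summary (the quantile convention here is linear interpolation between ranks, which may not match the production script exactly; the return values in the final comment are illustrative):

```js
// Per-cohort stats over an array of window returns (percent values).
function quantile(sorted, q) {
  const pos = (sorted.length - 1) * q;
  const lo = Math.floor(pos);
  const hi = Math.ceil(pos);
  return sorted[lo] + (sorted[hi] - sorted[lo]) * (pos - lo);
}

function cohortStats(returns) {
  const sorted = [...returns].sort((a, b) => a - b);
  return {
    n: sorted.length,
    min: sorted[0],
    p25: quantile(sorted, 0.25),
    median: quantile(sorted, 0.5),
    p75: quantile(sorted, 0.75),
    max: sorted[sorted.length - 1],
  };
}

// cohortStats([-62, -48, 138]).median === -48: a +138 outlier pulls the mean
// of a 3-token cohort up dramatically but leaves the median where it was.
```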
Window exclusion
Tokens with fewer than N days of CoinGecko price history are excluded from the N-day window. Per-token historyDays is reported alongside the score so a reader can see which tokens get excluded from longer windows. CoinGecko's free-tier API caps at 365 days of daily history, so the 365d window may be off by up to ~5 days.
Survivor-bias caveat
TI's tracked roster is a snapshot of currently active protocols. Tokens that died (went to zero, project shut down) before being added to the roster are not in the cohort. This biases cohort returns upward versus a true "all protocols ever launched" analysis. Readers should treat cohort medians as a best case for the surviving set, not as the unbiased return distribution of the mechanism class.
For comparison, Novora's April 2026 audit covered 159 protocols and reported median 1Y return of -66%. TI's tracked set is smaller (~14 protocols) and survivor-skewed; expect TI's medians to look better than Novora's broadest-cut numbers. The directional ranking across mechanism classes is what's robust, not the absolute medians.
Refresh cadence
The dataset is auto-refreshed monthly via a Netlify scheduled function (15th of each month, 14:00 UTC, offset from ERM's 1st-of-month refresh to spread CoinGecko load). The script is at scripts/compute-value-accrual-empirics.js for local use and netlify/functions/value-accrual-empirics-worker.js for the cron. Output is committed to data/value-accrual-empirics.json via the GitHub API and triggers a Netlify deploy.
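For orientation, a schematic of the cron wiring, assuming the standard schedule() helper from @netlify/functions; the worker's actual fetch-and-commit logic is elided here:

```js
// netlify/functions/value-accrual-empirics-worker.js (schematic)
const { schedule } = require("@netlify/functions");

// "0 14 15 * *" = 14:00 UTC on the 15th of every month.
exports.handler = schedule("0 14 15 * *", async () => {
  // 1. Pull daily USD prices from CoinGecko for each tracked token.
  // 2. Recompute per-window returns and cohort stats.
  // 3. Commit data/value-accrual-empirics.json via the GitHub API,
  //    which triggers the Netlify deploy.
  return { statusCode: 200 };
});
```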
How this fits with TI's other frameworks
This page is the empirical layer. Two structural layers complement it:
- Risk dimensions (7-pillar score): how the protocol can fail. DeFi Risk Methodology →
- Eligibility-Adjusted Revenue Multiples (ERM): the price you're paying for $1 of forward annual cashflow. ERM Valuation →
An institutional-grade investability call combines all three: structural risk passes, mechanism class is reasonable, ERM is acceptable, and the empirical cohort track record supports the thesis. Drop any one and the read goes from "high-conviction position" to "reasoned exposure" or worse.
Limitations to be honest about
- Small cohorts. Several mechanism cohorts have n < 4 inside TI's tracked roster. The signal is the directional ranking, not the absolute median. Where the cohort is too small to interpret, the table renders the count prominently.
- 365-day window is not a forecast. A trailing-year return is what already happened, not what will happen. Mechanisms that worked in 2025-2026 may stop working in 2027 if the macro / on-chain regime shifts. The empirical layer is a calibration check, not a target.
- USD spot returns only. Doesn't include staking yield, airdrops, basis trades. A ve-model protocol where lockers earn 30% APY in bribes would still show only the spot price change here. Read this alongside ERM.
- CoinGecko free-tier dependency. The data source has rate limits and occasional outages. The cron has retry logic, but a single missed monthly refresh would leave the dataset stale. The page surfaces the generatedAt timestamp prominently so a reader can see when the last successful refresh was.
- Mechanism reclassification risk. A protocol's accrual mechanism can change (Sky/Maker cut its buyback from 75% to 7.5% on March 14, 2026). When TI updates the accrualMechanism classification on a protocol, that protocol moves cohorts. The cohort time-series is therefore not strictly comparable across long history windows when classifications shift mid-window. Each refresh uses the current classification.