VaryOn Amplitude is not a product. It is the foundational methodology that unifies everything VaryOn Works produces - the principle that AI impact is quantifiable, multi-dimensional, and standardizable across every layer of the AI economy.

Data Layer

“Is this data worth consuming?”

Agent Layer

“Can this agent be trusted, and is it controlled?”

Ecosystem Layer

“Is the agent economy safe, fair, and productive?”

1. Impact is measurable. Every dimension of AI's effect on business can be quantified.

2. Impact is multi-dimensional. No single score captures AI's full effect.

3. Impact is standardizable. Open methodologies create trust for regulatory adoption.

4. Impact flows across layers. Data quality affects agent decisions; agent decisions affect systemic stability.

5. Impact demands accountability. Measurement without accountability is just monitoring.

Frameworks for the AI Economy

Data Layer

VaryOn Meridian

Data Quality

Is this data worth consuming, and what should an agent pay for it?

Meridian evaluates external data sources consumed by AI agents across four orthogonal dimensions, producing a composite score mapped to procurement tiers and dynamic pricing. Delivered in real time via MCP server integration during agent tool-call execution.

Dimensions

Scarcity · Quality · Decision Impact · Defensibility

Aggregation

Weighted Geometric Mean - Non-compensatory

Scale

0–100 → Platinum / Gold / Silver / Bronze / Unrated
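As a sketch of how a non-compensatory composite behaves, the following Python uses a weighted geometric mean over the four Meridian dimensions. The equal weights and tier cut-offs are illustrative assumptions, not Meridian's published configuration:

```python
import math

def meridian_score(scores, weights):
    """Weighted geometric mean: one weak dimension drags the composite down
    more than an arithmetic mean would, and a zero in any dimension zeroes
    the result (non-compensatory)."""
    assert math.isclose(sum(weights), 1.0)
    return math.prod(s ** w for s, w in zip(scores, weights))

def tier(score):
    # Hypothetical tier cut-offs; the real mapping is not published here.
    for cutoff, name in [(90, "Platinum"), (75, "Gold"), (60, "Silver"), (40, "Bronze")]:
        if score >= cutoff:
            return name
    return "Unrated"

# Scarcity, Quality, Decision Impact, Defensibility (illustrative equal weights)
composite = meridian_score([90, 40, 80, 70], [0.25] * 4)
```

Note the effect of the geometric mean: the arithmetic mean of these scores is 70, but the weak Quality score pulls the composite several points lower.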

Agent Layer

VaryOn Drift

Alignment Impact

Is this agent still serving its principal’s intent?

Drift detects the invisible gap between what a human principal wants and what an agent actually does - especially across delegation chains where alignment degrades per hop. Its shadow principal detection acts as a multiplicative gate, identifying when third-party interests silently influence agent behavior and directly capping the maximum possible score.

Dimensions

Goal Fidelity · Delegation Degradation · Override Analysis · Shadow Principal Detection · Preference Drift

Aggregation

Gated Geometric Mean - Shadow principal as multiplicative gate

Scale

0–100 → Aligned / Drifting / Misaligned
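A minimal sketch of a gated geometric mean in Python. The dimension scores, equal weights, gate value, and band cut-offs are all illustrative assumptions:

```python
import math

def drift_score(dims, weights, shadow_gate):
    """Gated geometric mean: the shadow-principal gate in [0, 1] multiplies
    the aggregate, so strong dimension scores cannot compensate for it."""
    base = math.prod(d ** w for d, w in zip(dims, weights))
    return base * shadow_gate

def band(score):
    # Hypothetical band cut-offs for Aligned / Drifting / Misaligned
    return "Aligned" if score >= 80 else "Drifting" if score >= 50 else "Misaligned"

# Goal Fidelity, Delegation Degradation, Override Analysis, Preference Drift
dims = [95, 85, 90, 88]
clean = drift_score(dims, [0.25] * 4, shadow_gate=1.0)  # no third-party influence
gated = drift_score(dims, [0.25] * 4, shadow_gate=0.4)  # shadow principal detected
```

With identical dimension scores, the detected shadow principal alone moves the agent from the top band to the bottom one: that is the multiplicative-gate property.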

Ecosystem Layer

VaryOn Cascade

Systemic Impact

If something breaks, how far does the damage spread?

Cascade is the financial stress test for the agent economy. A single compromised agent can poison 87% of downstream decisions within 4 hours. Cascade runs Monte Carlo simulations on observed network topology to estimate propagation probability - the systemic risk measurement central banks are demanding.

Dimensions

Interconnection Density · Cascade Probability · Behavioral Correlation · Recovery Time · Concentration Risk

Aggregation

Weighted Geometric Mean with Monte Carlo simulation

Scale

0–100 → Critical Risk / Elevated / Contained
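The Monte Carlo approach can be sketched as an independent-cascade simulation over a toy dependency graph. The topology, infection probability, and trial count here are illustrative assumptions, not Cascade's actual model:

```python
import random

def cascade_probability(adjacency, seed_node, edge_infect_p, trials=10_000, rng=None):
    """Monte Carlo estimate of how far a compromise spreads from one agent
    over an observed dependency graph. Each trial runs an independent-cascade
    process; returns the mean fraction of nodes reached."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    n = len(adjacency)
    total = 0.0
    for _ in range(trials):
        infected = {seed_node}
        frontier = [seed_node]
        while frontier:
            nxt = []
            for u in frontier:
                for v in adjacency[u]:
                    if v not in infected and rng.random() < edge_infect_p:
                        infected.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(infected) / n
    return total / trials

# Toy 5-agent topology (hypothetical): agent 0 feeds 1 and 2, which feed 3 and 4.
adj = {0: [1, 2], 1: [3], 2: [4], 3: [], 4: []}
spread = cascade_probability(adj, 0, edge_infect_p=0.9)
```

With a 0.9 per-edge compromise probability, the expected spread from agent 0 is roughly 88% of the network, illustrating how a single upstream compromise dominates downstream decisions.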

Ecosystem Layer

VaryOn Convergence

Collusion Impact

Are autonomous agents colluding to manipulate market prices?

Convergence detects emergent algorithmic collusion and anti-competitive behavior in AI agent markets through statistical analysis of observable market outcomes. The framework identifies when autonomous AI agents converge on supra-competitive pricing equilibria - sustaining prices 200% or more above competitive levels - without any explicit communication or coordination protocol.

Dimensions

Price Convergence · Market Division · Communication Analysis · Bid Pattern Analysis · Consumer Welfare

Aggregation

Minimum-of-Components - Non-compensatory

Scale

0–100 → Collusive / Competitive / Healthy
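Minimum-of-components is the simplest of the non-compensatory aggregators: the composite equals the worst dimension. A sketch with hypothetical dimension scores and band cut-offs:

```python
def convergence_score(dims):
    """Minimum-of-components: the composite equals the worst dimension,
    so a single collusion signal cannot be averaged away."""
    return min(dims.values())

def band(score):
    # Hypothetical band cut-offs for Collusive / Competitive / Healthy
    return "Healthy" if score >= 80 else "Competitive" if score >= 50 else "Collusive"

dims = {
    "Price Convergence": 30,  # prices tracking a supra-competitive equilibrium
    "Market Division": 85,
    "Communication Analysis": 90,
    "Bid Pattern Analysis": 75,
    "Consumer Welfare": 80,
}
score = convergence_score(dims)  # four healthy dimensions cannot mask one signal
```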


Frameworks in Development

Agent Layer

VaryOn Provenance

Identity Impact

Can you verify who this agent is, what it does, and who deployed it?

Provenance is the passport layer for autonomous agents. Before trust can be assessed, identity must be established. Provenance measures the verifiability, transparency, and completeness of an agent’s identity, capability claims, and operational history, enabling SOC 2-style certification for the agent economy.

Dimensions

Deployment Verification · Capability Attestation · Version Integrity · Behavioral History · Transparency

Aggregation

Weighted Arithmetic Mean - Partial identity adds value

Scale

0–100 → Certified / Provisional / Uncertified
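In contrast to the non-compensatory aggregators, a weighted arithmetic mean gives partial credit, which is why "partial identity adds value" here. A sketch with illustrative equal weights:

```python
def provenance_score(dims, weights):
    """Weighted arithmetic mean: compensatory, so partial identity evidence
    still earns partial credit rather than zeroing the composite
    (a geometric mean would return 0 for any missing dimension)."""
    return sum(d * w for d, w in zip(dims, weights))

# Deployment Verification, Capability Attestation, Version Integrity,
# Behavioral History, Transparency (illustrative equal weights)
partial = provenance_score([90, 0, 80, 70, 60], [0.2] * 5)  # attestation missing
```

Even with capability attestation entirely absent, the agent still scores in the mid range, reflecting that some verified identity is better than none.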

Agent Layer

VaryOn Fidelity

Trust Impact

Can this agent be trusted to do what it claims?

Fidelity measures signal integrity - the credit score for autonomous systems. It scores whether an agent can be trusted based on its observable behavioral track record. Identity is measured by Provenance; Fidelity measures behavior exclusively: consistency, fulfillment, reputation, and anomalies.

Dimensions

Behavioral Consistency · Contract Fulfillment · Reputation · Anomaly Freedom

Aggregation

Weighted Geometric Mean - Trust requires all dimensions

Scale

0–100 → Low / Moderate / High / Critical / Extreme Risk

Agent Layer

VaryOn Threshold

Resilience Impact

How resistant is this agent to adversarial attack and manipulation?

Threshold stress-tests agents against adversarial conditions. Where Fidelity measures past behavior (credit score), Threshold measures future resilience (stress test). Research shows 82.4% of LLMs succumb to peer-agent manipulation - Threshold quantifies exactly how resistant a specific agent is.

Dimensions

Prompt Injection Resistance · Manipulation Resistance · Data Poisoning Tolerance · Stress Degradation · Recovery Time

Aggregation

Weighted Harmonic Mean - Weakest-link property

Scale

0–100 → Vulnerable / Resilient / Hardened
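The weakest-link property of a weighted harmonic mean shows up clearly in a short sketch; the equal weights and dimension scores are assumptions:

```python
def threshold_score(dims, weights):
    """Weighted harmonic mean: dominated by the smallest dimension, matching
    the security intuition that attackers target the softest defence.
    All dimension scores must be > 0."""
    return sum(weights) / sum(w / d for w, d in zip(weights, dims))

# Prompt Injection Resistance, Manipulation Resistance, Data Poisoning
# Tolerance, Stress Degradation, Recovery Time (illustrative equal weights)
dims = [90, 90, 90, 90, 20]  # hardened everywhere except recovery time
hm = threshold_score(dims, [0.2] * 5)
arith = sum(dims) / len(dims)  # the plain average hides the weak link
```

The plain average of these scores is 76, but the harmonic mean lands in the low 50s: one soft dimension drags the whole resilience picture down.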

Agent Layer

VaryOn Parity

Fairness Impact

Is this agent treating all populations equitably?

Parity measures what no other framework captures: whether an agent’s decisions produce equitable outcomes across demographic groups. A hiring agent filtering out certain backgrounds, a pricing agent charging more based on inferred characteristics - these are Parity failures invisible to trust, alignment, or competition metrics.

Dimensions

Outcome Disparity · Treatment Consistency · Proxy Variable Analysis · Accessibility Coverage · Economic Equity

Aggregation

Ceiling-Constrained Mean - Outcome disparity ceiling

Scale

0–100 → Inequitable / Fair / Equitable
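A ceiling-constrained mean can be sketched as an ordinary weighted mean capped by the Outcome Disparity score. The scores and equal weights below are illustrative:

```python
def parity_score(dims, weights):
    """Ceiling-constrained mean: the weighted mean of all dimensions, capped
    at the Outcome Disparity score, so good process cannot mask bad outcomes."""
    mean = sum(d * w for d, w in zip(dims.values(), weights))
    return min(mean, dims["Outcome Disparity"])

dims = {
    "Outcome Disparity": 40,  # measured disparity across demographic groups
    "Treatment Consistency": 90,
    "Proxy Variable Analysis": 85,
    "Accessibility Coverage": 95,
    "Economic Equity": 88,
}
score = parity_score(dims, [0.2] * 5)
```

The uncapped mean here would be 79.6, but the composite is held at 40: exemplary process scores cannot outscore unequal outcomes.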

Agent Layer

VaryOn Mandate

Human Oversight Impact

Can a human effectively intervene, override, or stop this agent?

Mandate quantifies whether human control over autonomous agents is real or ceremonial. EU AI Act Article 14 mandates human oversight; Mandate measures it. Each delegation hop adds latency between the human and the action - at some point, the human is nominally “in the loop” but functionally irrelevant.

Dimensions

Override Effectiveness · Intervention Latency · Visibility Depth · Engagement Quality · Escalation Reliability

Aggregation

Multiplicative Chain - Every component must function simultaneously

Scale

0–100 → Ceremonial / Partial / Effective
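A multiplicative chain can be sketched as a product of per-component scores in [0, 1]; the component values below are hypothetical:

```python
def mandate_score(components):
    """Multiplicative chain over component scores in [0, 1]: every link in the
    oversight path must function, so one failed control collapses the score."""
    result = 100.0
    for c in components.values():
        result *= c
    return result

components = {
    "Override Effectiveness": 0.95,
    "Intervention Latency": 0.90,
    "Visibility Depth": 0.85,
    "Engagement Quality": 0.92,
    "Escalation Reliability": 0.10,  # escalations silently dropped
}
score = mandate_score(components)  # single broken link -> ceremonial oversight
```

Four near-perfect controls cannot rescue the composite: with one broken escalation path the score collapses into single digits, which is exactly the "nominally in the loop, functionally irrelevant" failure mode.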

Ecosystem Layer

VaryOn Yield

Economic Impact

Is value being created efficiently, or is friction destroying it?

Yield measures economic efficiency - the ratio of value created to value extracted in agent ecosystems. It detects when transaction costs consume value, when intermediaries extract excessive rents, and when misaligned incentives destroy welfare. Every basis point of friction compounds across millions of autonomous transactions.

Dimensions

Value Creation Efficiency · Allocative Efficiency · Transaction Cost Ratio · Rent Extraction Rate · Market Liquidity

Aggregation

Multiplicative Efficiency Model - Each inefficiency compounds

Scale

0–100 → Inefficient / Balanced / Optimal

Ecosystem Layer

VaryOn Lineage

Governance Impact

Who is accountable when autonomous agents cause harm?

Lineage traces accountability chains in AI agent ecosystems, mapping the flow of responsibility from actions to actors. It quantifies governance effectiveness, audit trail completeness, and liability attribution when autonomous systems create unintended consequences.

Dimensions

Audit Trail Completeness · Responsibility Attribution · Decision Transparency · Governance Effectiveness · Liability Mapping

Aggregation

Weighted Arithmetic Mean - Accountability components

Scale

0–100 → Untraceable / Partial / Full Accountability
