
[Figure: Persisto Ergo Didici Protocol diagram — temporal verification from unverified AI-dependent performance through T+90 testing to PED-verified capability, with transfer and cascade verification tiers]

Persisto Ergo Didici Protocol v1.0

The Temporal Verification Standard for Learning in the Post-Behavioral Era

Protocol Status: Specification Final
Version: 1.0.0
Last Updated: January 2026
License: CC BY-SA 4.0 (Open Protocol)
Canonical URL: PersistoErgoDidici.org/protocol


Canonical Definition

Learning is verified if and only if an independently administered capability test in a novel context is passed after minimum temporal separation with all external scaffolds removed.

Verification Extensions:

  • Transfer Proof (Tier-2): Capability propagates to independent beneficiaries who can teach it without original-source assistance (branching factor > 1)
  • Cascade Proof (Tier-3): Capability multiplies across generations with multiplicative branching (non-linear growth) and is maintained or increases across the separation period (decay-resistance)

The Problem: When Behavior Stopped Indicating Learning

(Non-normative rationale – informative)

For centuries, behavioral demonstration reliably indicated learning. When someone could solve problems, explain concepts, or apply knowledge—learning had occurred. The correlation between behavior and capability was sufficiently strong to build civilization’s entire educational verification system upon it.

Between 2023 and 2025, this correlation weakened fundamentally.

AI systems achieved behavioral equivalence with humans across:

  • Problem-solving (correct solutions)
  • Explanation generation (coherent teaching)
  • Creative synthesis (novel outputs)
  • Code production (functional implementations)

Result: Behavioral observation at T+0 no longer reliably distinguishes learning from AI-assisted performance.

The Credential Crisis:

  • Degrees prove course completion, not capability retention
  • Certificates verify participation, not independent function
  • Interview performance may reflect real-time assistance
  • Work samples may demonstrate collaboration rather than autonomous capability

Temporal separation increases the cost of sustaining dependency and reduces the predictive value of immediate performance.


I. Protocol Overview

Abstract

Learning verification currently operates through behavioral testing at the moment of assessment (T+0). This methodology provides insufficient evidence when AI assistance during assessment cannot be reliably excluded.

Ontological Supersession: For learning verification in the post-behavioral era, T+0 behavioral assessment is insufficient under universal synthesis conditions. Persisto Ergo Didici establishes temporal persistence as the verification primitive, superseding behavioral observation as the epistemic foundation for learning verification.

Persisto Ergo Didici provides temporal verification infrastructure: testing whether capability persists after minimum separation from assistance, in novel contexts, without access to original materials or support systems.

Temporal separation reveals dependency patterns invisible at T+0. AI-dependent performance degrades predictably when tested independently after time passage. Genuine internalization demonstrates persistence.

This transforms learning verification from measuring immediate performance to measuring independent persistence.


Problem Statement

The Verification Insufficiency:

Current educational systems verify learning through:

  • Examinations (performance at T+0)
  • Assignments (completion with unknown assistance level)
  • Projects (output quality regardless of independence)
  • Credentials (participation, not persistence)

These methods do not distinguish:

  • Genuine capability (persists independently)
  • Tool-dependent performance (degrades without access)
  • Memorization (degrades rapidly)
  • Scaffolded understanding (collapses without structural support)

The Temporal Gap:

Learning implies capability outlasting the learning context. Traditional metrics assess T+0 state. Persisto provides infrastructure for assessing T+90, T+180, T+365 states.

The Infrastructure Gap:

No standardized verification protocol exists for:

  • Temporal persistence testing
  • Novel context capability assessment
  • Scaffold removal verification
  • Independence measurement

Persisto Ergo Didici provides this missing infrastructure.


Why Existing Approaches Cannot Solve This

(Non-normative rationale – informative)

T+0 Testing is Information-Theoretically Insufficient:

Any T+0 behavioral observation is non-identifying under conditions where assistance cannot be excluded. The measurement provides no information about persistence.

AI Detection:

Perfect synthesis is definitionally undetectable. Detection tools are locked in an arms race with evasion techniques. As synthesis quality approaches 100% behavioral fidelity, detection reliability approaches 0%.

Honor Systems:

Incentive misalignment. Systematic pressures reward claimed capabilities over possessed capabilities. Honor-based verification cannot scale.

Surveillance:

Process monitoring rather than outcome verification. Resource intensive, privacy invasive, circumventable. Does not test what persists after monitoring ends.

Architectural Conclusion:

Only temporal separation provides falsifiable verification. Immediate testing optimizes for T+0 performance. Delayed testing optimizes for T+X persistence. These are incompatible optimization targets.


Solution Architecture

Persisto Ergo Didici provides a three-tier verification framework:

Tier-1: Learning Verification (Core)

Requirements:

  1. Temporal Separation: Minimum T+90 between training and test
  2. Scaffold Removal: No access to externalization scaffolds; instrumentation tools only if explicitly enumerated by domain/profile
  3. Novel Context: Different problem class than training set
  4. Independent Administration: No optimization for specific test

Output: Binary (Pass/Fail) + Capability Score (0-100)

Tier-2: Transfer Verification (Optional)

Requirements:

  1. Tier-1 passed
  2. Beneficiary can teach concept to others
  3. Teaching occurs without original source assistance
  4. Branching factor > 1 (at least some beneficiaries enable multiple others)

Output: Transfer Verified (Yes/No) + Branching Factor (numerical)

Tier-3: Cascade Proof (Optional)

Requirements:

  1. Tier-2 passed
  2. Minimum 3 generations of independent transfer
  3. Multiplicative branching pattern (branching factor > 2 across multiple generations)
  4. Capability remains stable or improves across separation checkpoints (positive or non-negative retention gradient)

Output: Cascade Multiplier (numerical) + Retention Gradient
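The Tier-2 and Tier-3 network metrics can be sketched over a transfer tree. The tree representation, function names, and sample data below are illustrative assumptions, not part of the specification:

```python
from statistics import mean

# Hypothetical transfer tree: each key is a verified learner,
# values are the beneficiaries they independently taught.
transfers = {
    "alice": ["bob", "carol", "dan"],
    "bob": ["erin", "frank"],
    "carol": ["grace", "heidi"],
    "dan": [],
    "erin": ["ivan", "judy"],
}

def branching_factor(tree: dict) -> float:
    """Mean number of beneficiaries per teaching learner
    (only learners who taught at least one person count)."""
    counts = [len(v) for v in tree.values() if v]
    return mean(counts) if counts else 0.0

def generations(tree: dict, root: str) -> int:
    """Depth of independent transfer from the original learner."""
    children = tree.get(root, [])
    if not children:
        return 0
    return 1 + max(generations(tree, c) for c in children)

bf = branching_factor(transfers)        # (3+2+2+2)/4 = 2.25
gens = generations(transfers, "alice")  # alice -> bob -> erin -> ivan = 3
```

With branching factor 2.25 (> 2) sustained across 3 generations, this hypothetical network would satisfy the Tier-3 structural requirements, pending the retention-gradient check.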


Key Innovations

Temporal Separation as Verification Primitive:

Time increases the cost of maintaining dependency, while genuine internalization remains stable. This asymmetry enables falsifiable testing.

Substrate Independence:

Protocol measures outcomes (does capability persist?) not processes (how was it learned?). Compatible with biological cognition, AI augmentation, neural interfaces, future technologies.

Open Protocol:

A specification anyone can implement. No entity controls temporal verification standards. Universities, employers, and individuals all use the same falsifiable framework.

Integration Ready:

Designed for interoperability with PortableIdentity (authentication), ContributionGraph (impact tracking), MeaningLayer (semantic infrastructure).


II. Technical Specification

Formal Definitions

Learning (Protocol Context)

Learning is verified when:

An individual passes an independently administered capability assessment in a novel context after temporal separation with all external scaffolds removed.

Operationalization:

IF capability_test(T+90, novel_context, no_scaffolds) >= threshold
THEN learning_verified = TRUE
ELSE learning_verified = FALSE

Persistence Ratio (Normative):

P(t) = C(t) / C_ind(0)

Where:

  • P(t) is the persistence ratio used for domain-agnostic comparison across time
  • C(t) is the capability score measured at separation time t
  • C_ind(0) is the baseline score measured under the same scaffold constraints as the verification profile

Independence-Calibrated Baseline: For profiles using P(t), the baseline assessment MUST include an independence-calibrated score C_ind(0) measured under the same scaffold rules as the T+X assessment. This ensures P(t) reflects genuine persistence rather than artificial degradation from scaffold removal.

All temporal comparisons MUST use P(t) rather than raw scores to normalize for baseline differences.
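Under these definitions, the persistence ratio and the Tier-1 decision can be sketched as follows (function names and the worked numbers are illustrative, not normative):

```python
def persistence_ratio(c_t: float, c_ind_0: float) -> float:
    """P(t) = C(t) / C_ind(0): capability at separation time t,
    normalized by the independence-calibrated baseline."""
    if c_ind_0 <= 0:
        raise ValueError("C_ind(0) must be positive to compute P(t)")
    return c_t / c_ind_0

def learning_verified(capability_score: float, threshold: float = 70.0) -> bool:
    """Tier-1 decision: pass iff the T+X score meets the profile threshold."""
    return capability_score >= threshold

# Example: independence-calibrated baseline C_ind(0) = 80, T+90 score C(t) = 72
p = persistence_ratio(72, 80)   # 0.9: 90% of independent baseline retained
ok = learning_verified(72)      # True under the default 70 threshold
```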

Temporal Separation

Minimum time period between training completion and capability test during which:

  • No access to training materials
  • No access to assistance systems
  • No structured review or reinforcement
  • No optimization for upcoming test

Standard Profiles:

  • Baseline: T+90 days
  • Strong: T+180 days
  • Ultimate: T+365 days

Scaffold

Any external support structure enabling performance. Scaffolds are classified into two categories:

Externalization Scaffolds (MUST be removed for all profiles):

Support that carries cognitive content:

  • AI reasoning systems (LLMs, expert systems)
  • Solution repositories (Stack Overflow, GitHub Copilot suggestions)
  • Human tutoring or guidance during assessment
  • Pre-solved examples or templates
  • Answer keys or worked solutions

Instrumentation Tools (MAY be permitted if explicitly enumerated):

Tools that execute but do not generate understanding:

  • Compilers, interpreters (programming domains)
  • Calculators (mathematics domains)
  • Reference documentation (API docs, syntax guides)
  • Dictionary, spell-checker (writing domains)
  • Measurement instruments (scientific domains)

Profile Requirements:

  • PED-CORE: All Externalization Scaffolds MUST be removed. Instrumentation Tools MUST be explicitly enumerated per domain if permitted.
  • PED-ASSURANCE: Same as CORE, with verification of tool restriction enforcement.
  • PED-LONGITUDINAL: Same as CORE, with consistent tool access across all temporal checkpoints.

Rationale: This distinction enables practical adoption in technical domains while preserving the core principle that understanding, not execution, must persist.

Novel Context

Test conditions where problems are sampled from a different problem class than training examples:

Requirements:

  • Problems MUST differ from training examples (not recognition-based)
  • Domain application MUST vary from instruction context
  • Problem class MUST differ from training set (novel constraints, different context manifold)
  • No recognition cues present
  • Solution MUST require applying understanding, not recalling procedure

Problem Class: A set of problems sharing structural characteristics (constraint types, solution strategies, context patterns). Testing within the same problem class as training enables pattern memorization rather than understanding verification.

Example:

  • Training: Sorting algorithms on integer arrays
  • Same class (insufficient): Sorting strings (same constraint type)
  • Different class (sufficient): Graph traversal problems (different constraint structure)

Falsifiability: Novel context is violated if learner could solve test problems through pattern recognition of training examples rather than understanding application.

Independent Administration

Test administration ensuring no optimization bias:

MUST Requirements:

  • Test administrator MUST NOT be the same entity that provided training (for high-stakes verification)
  • Test MUST be "unoptimized": no pre-known item types, no reused prompts from training
  • Test problems MUST be selected from a pool unknown to learner during training
  • Organizational Separation: Independent administration requires organizational separation sufficient to eliminate test optimization incentives; implementations MUST declare the independence boundary (entity, department, or vendor)

Falsifiability: Independent administration is violated if:

  • Trainer and tester are the same entity (without third-party verification)
  • Test items were disclosed or optimizable before assessment
  • Assessment was designed specifically for known learner weaknesses/strengths
  • Organizational structure creates conflict of interest that enables test optimization

Protocol Primitives

Temporal Persistence

  • Input: Domain + baseline score
  • Test Method: Independent capability test in novel context
  • Minimum Separation: T+90 days
  • Allowed Tools: None (or explicitly defined minimal set)
  • Output: Binary Pass/Fail + Score (0-100)
  • Falsifiability: Does capability function without scaffolds?

Scaffold Independence

  • Input: Test environment
  • Test Method: Controlled assessment with verified absence of support structures
  • Minimum Separation: None (verified at test time)
  • Allowed Tools: Explicitly enumerated only
  • Output: Compliance: Yes/No
  • Falsifiability: Were scaffolds actually absent?

Novel Context

  • Input: Problem set
  • Test Method: Problems requiring application, not recognition
  • Minimum Separation: T+90 days
  • Allowed Tools: Same as Temporal Persistence
  • Output: Performance measurement
  • Falsifiability: Can understanding transfer to new situations?

Transfer Capability (Tier-2)

  • Input: Tier-1 Pass
  • Test Method: Beneficiary taught independently
  • Minimum Separation: T+180 days
  • Allowed Tools: Teaching materials allowed
  • Output: Transfer: Yes/No + Branching Factor
  • Falsifiability: Can learner enable others? Is branching factor > 1 (at least some beneficiaries enable multiple others)?

Cascade Multiplication (Tier-3)

  • Input: Tier-2 Pass
  • Test Method: Network analysis of multi-generation propagation with sustained branching patterns
  • Minimum Separation: T+365 days
  • Allowed Tools: None
  • Output: Cascade Multiplier + Retention Gradient
  • Falsifiability: Does capability multiply with branching factor > 2 sustained across ≥3 generations?

Verification Protocol

Phase 1: Baseline Assessment (T+0)

Initial capability measurement establishing baseline scores.

Status: Non-verifying (may include assistance)
Purpose: Establishes reference point for comparison

Baseline Scores:

  • C₀: Raw baseline score (may include assistance)
  • C_ind(0): Independence-calibrated baseline measured under same scaffold constraints as verification profile (normative for P(t) calculations)

Note: For profiles using P(t), C_ind(0) MUST be measured. For profiles using raw scores only, C₀ is sufficient.

Pre-Commitment (MUST for High-Assurance and Ultimate profiles):

At T+0, a cryptographic commitment to the test specification MUST be created:

  • Domain boundary (scope of knowledge to be tested)
  • Difficulty band (expected problem complexity range)
  • Scoring function (how performance will be measured)
  • Problem class (type of novel contexts to be used)

The commitment MUST be:

  • Created before temporal separation begins
  • Cryptographically signed (hash-commit or equivalent)
  • Revealed only at T+X when test is administered

Purpose: Prevents test manipulation, collusive credentialing, and optimization for known test characteristics during separation period.
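The hash-commit step can be sketched with standard-library primitives. The field layout, JSON canonicalization, and nonce handling below are illustrative; the protocol requires only "hash-commit or equivalent":

```python
import hashlib
import json
import secrets

def commit(spec: dict) -> tuple:
    """Create a T+0 hash commitment to the test specification.
    Publish the commitment; keep the spec and nonce sealed until T+X."""
    nonce = secrets.token_hex(16)
    payload = json.dumps(spec, sort_keys=True) + nonce
    return hashlib.sha256(payload.encode()).hexdigest(), nonce

def verify_commitment(spec: dict, nonce: str, commitment: str) -> bool:
    """At T+X, reveal spec + nonce and check against the T+0 commitment."""
    payload = json.dumps(spec, sort_keys=True) + nonce
    return hashlib.sha256(payload.encode()).hexdigest() == commitment

spec = {
    "domain": "graph algorithms",
    "difficulty_band": "intermediate",
    "scoring": "percent-correct",
    "problem_class": "novel constraint structures",
}
c, n = commit(spec)
assert verify_commitment(spec, n, c)   # honest reveal matches
# Any change to the sealed spec invalidates the commitment:
assert not verify_commitment({**spec, "difficulty_band": "easy"}, n, c)
```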

Phase 2: Temporal Separation (T+90 minimum)

Required separation period.

Constraints:

  • MUST NOT access training materials
  • MUST NOT use assistance systems (unless explicitly allowed by profile)
  • MUST NOT receive structured reinforcement
  • MUST NOT optimize for known test format

Verification: Attestation-based or environment-controlled depending on compliance profile

Phase 3: Independent Assessment (T+90)

Novel problems in same domain, different context.

Requirements:

  • MUST be independently administered
  • MUST use novel problem set (not training examples)
  • MUST occur in controlled environment (for high-assurance profiles)
  • MUST NOT provide scaffolding or recognition cues

Scoring:

capability_score = (problems_solved_correctly / total_problems) × 100
threshold = 70 (default, profile-dependent)

IF capability_score >= threshold
    learning_verified = TRUE
ELSE
    learning_verified = FALSE

Phase 4: Temporal Strength Analysis (Optional)

Compare C(T+90) to C_ind(0):

IF C(T+90) >= C_ind(0)
    temporal_strength = "Positive" (capability maintained or improved)
ELSE IF C(T+90) >= 0.7 × C_ind(0)
    temporal_strength = "Acceptable" (capability degraded within tolerance)
ELSE
    temporal_strength = "Degraded" (significant capability loss)

Note: Threshold values (e.g., 0.7) are indicative. Implementations may adjust based on domain requirements. Capability maintenance (stable performance) is considered positive evidence of genuine learning – skills do not need to improve over time to be verified. See Appendix A for measurement guidance. Protocol conformance is based on normative requirements (Sections II-VI), not specific threshold values.
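The scoring and temporal strength rules above can be sketched together in one place. The 0.7 tolerance is the indicative value from the note, and the function names are illustrative:

```python
def capability_score(solved: int, total: int) -> float:
    """Percentage of novel problems solved correctly at T+X."""
    return solved / total * 100

def temporal_strength(c_t: float, c_ind_0: float, tolerance: float = 0.7) -> str:
    """Classify C(T+X) relative to the independence-calibrated
    baseline C_ind(0), per the Phase 4 bands."""
    if c_t >= c_ind_0:
        return "Positive"      # capability maintained or improved
    if c_t >= tolerance * c_ind_0:
        return "Acceptable"    # degraded within tolerance
    return "Degraded"          # significant capability loss

score = capability_score(18, 24)         # 75.0
verified = score >= 70                   # Standard profile threshold
strength = temporal_strength(score, 80)  # 75 < 80 but >= 56 -> "Acceptable"
```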


Compliance Profiles

Profile: Standard (Default)

  • Separation: T+90 days
  • Environment: Attestation-based (honor system with consequences)
  • Threshold: 70%
  • Scaffolds: Externalization scaffolds: none; instrumentation: only if explicitly enumerated
  • Use case: General education, low-stakes verification

Profile: High-Assurance

  • Separation: T+180 days
  • Environment: Controlled (proctored, environment verified)
  • Threshold: 80%
  • Scaffolds: Explicitly enumerated minimal set only
  • Use case: Professional certification, high-stakes employment

Profile: Ultimate

  • Separation: T+365 days
  • Environment: Controlled with cryptographic attestation
  • Threshold: 85%
  • Scaffolds: None
  • Use case: Critical domains (medical, safety-critical, etc.)
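The three profiles can be captured as a configuration table. This is a sketch; the field names and the combined pass check are assumptions about one possible implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Profile:
    separation_days: int
    threshold: float   # minimum capability score (0-100)
    environment: str
    scaffolds: str

PROFILES = {
    "Standard":       Profile(90,  70.0, "attestation-based",
                              "instrumentation only if enumerated"),
    "High-Assurance": Profile(180, 80.0, "controlled (proctored)",
                              "enumerated minimal set only"),
    "Ultimate":       Profile(365, 85.0, "controlled + cryptographic attestation",
                              "none"),
}

def passes(profile_name: str, score: float, days_elapsed: int) -> bool:
    """Verification requires both the separation period and the threshold."""
    p = PROFILES[profile_name]
    return days_elapsed >= p.separation_days and score >= p.threshold

assert passes("Standard", 72.0, 95)
assert not passes("Ultimate", 84.0, 400)   # separation met, threshold missed
```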

Conformance Classes

Implementations MUST declare conformance to one or more of these classes:

PED-CORE

Requirements:

  • Tier-1 verification (temporal separation, scaffold removal, novel context)
  • Cryptographic signing (PortableIdentity integration)
  • Minimum security constraints from Threat Model
  • Binary Pass/Fail output with capability score

Label: "PED-CORE Compatible"

PED-ASSURANCE

Requirements:

  • PED-CORE plus:
  • Proctored or controlled environment
  • Identity verification
  • Audit trail (timestamps, duration, conditions)
  • Problem set rotation

Label: "PED-ASSURANCE Compatible"

PED-LONGITUDINAL

Requirements:

  • PED-CORE plus:
  • T+365 minimum separation
  • Multiple temporal checkpoints (T+90, T+180, T+365)
  • Persistence trajectory tracking

Label: "PED-LONGITUDINAL Compatible"

Anti-Capture Principle: Any implementation that cannot enforce independence constraints MUST label itself "Low-Assurance" and MUST NOT claim PED-CORE compatibility.


III. Architecture & Implementation

What Persisto Ergo Didici IS

  • Verification protocol for temporal capability testing
  • Measurement standard for learning persistence
  • Attestation framework for cryptographic proof
  • Open specification anyone can implement
  • Infrastructure component integrating with Web4 protocols

What Persisto Ergo Didici IS NOT

  • NOT pedagogy (does not prescribe how to teach)
  • NOT grading system (measures persistence, not performance ranking)
  • NOT platform (does not require accounts or centralized control)
  • NOT motivation framework (does not address why people learn)
  • NOT surveillance system (does not monitor learning process)
  • NOT credential replacement (complements credentials with verification)

The Binary Choice

(Non-normative rationale – informative)

After temporal verification infrastructure exists, only two positions remain:

Position One: Temporal Verification

Learning claims verified through:

  • Cryptographic attestation (PortableIdentity)
  • Temporal persistence testing (Persisto)
  • Impact verification (ContributionGraph)

Capability becomes mathematically demonstrable across time.

Position Two: Credential-Based Verification

Learning claims verified through:

  • Institutional credentials (degrees, certificates)
  • Completion metrics (passed course, attended training)
  • Self-reported capabilities (résumé claims)

Verification depends on institutional trust rather than individual proof.

Clarification: Verification is binary within a conformance class (PED-CORE, PED-ASSURANCE, PED-LONGITUDINAL). Different conformance classes represent different assurance levels and MUST be labeled accordingly. There is no "partial verification within a class": either the class requirements are met or they are not. Claims of verification without conformance class labeling are invalid.


Integration Patterns

Input/Output Specification

Input to Persisto:

  • learner_id (PortableIdentity reference)
  • domain (what was learned)
  • baseline_score_raw (C₀ from T+0, may include assistance)
  • baseline_score_independent (C_ind(0) measured under profile scaffold constraints, required for P(t))
  • test_profile (Standard/High-Assurance/Ultimate)
  • training_completion_date (T=0 timestamp)

Output from Persisto:

  • verification_status (PASS/FAIL)
  • capability_score (0-100)
  • temporal_strength (Positive/Acceptable/Degraded)
  • test_date (T+X timestamp)
  • cryptographic_signature (PortableIdentity signed)
  • profile_used (which compliance profile)
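The input/output contract can be expressed as typed records. The field names follow the lists above; the record types themselves and the ISO 8601 timestamps are assumptions, since the spec mandates no serialization format:

```python
from dataclasses import dataclass

@dataclass
class PersistoInput:
    learner_id: str                    # PortableIdentity reference
    domain: str                        # what was learned
    baseline_score_raw: float          # C0 from T+0, may include assistance
    baseline_score_independent: float  # C_ind(0), required for P(t)
    test_profile: str                  # Standard / High-Assurance / Ultimate
    training_completion_date: str      # T=0 timestamp (ISO 8601 assumed)

@dataclass
class PersistoOutput:
    verification_status: str           # PASS / FAIL
    capability_score: float            # 0-100
    temporal_strength: str             # Positive / Acceptable / Degraded
    test_date: str                     # T+X timestamp
    cryptographic_signature: str       # PortableIdentity-signed
    profile_used: str                  # compliance profile applied

result = PersistoOutput("PASS", 78.0, "Acceptable",
                        "2026-06-15", "<signature>", "Standard")
```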

University Integration

Training Phase:
├── Course delivery (any pedagogy)
├── T+0 baseline assessment → C₀ (may include tools/assistance)
├── T+0 independence assessment → C_ind(0) (under profile constraints)
├── Issue pending credential
│
T+90 Separation:
├── No contact with materials/instructors
│
T+90 Verification:
├── Novel problem set
├── Controlled environment (per profile)
├── Score → Pass/Fail (compared to C_ind(0) via P(t))
│
Credential Finalization:
└── IF Pass → Credential activated with temporal attestation
    ELSE → Credential remains pending, retest available

Employer Integration

Hiring Phase:
├── Candidate claims capability
├── T+0 baseline test (optional, may include assistance)
├── Conditional hire OR
├── Request existing temporal verification
│
T+90 Verification (if no prior proof):
├── Novel tasks in actual work context
├── No AI assistance (or explicitly defined tools only)
├── Independent assessment
│
Employment Decision:
└── IF Pass → Permanent hire with verified capability proof
    ELSE → Extended probation or separation

Individual Self-Verification

Self-Study Phase:
├── Learn with any resources (AI, books, videos)
├── T+0 self-assessment → C₀
│
T+90 Separation:
├── No reference to materials
│
T+90 Self-Test:
├── Use standardized problem sets (published)
├── Environment: Honor system with cryptographic attestation
├── Submit results + PortableIdentity signature
│
Credential:
└── Portable proof of persistent capability
    (Accepted by employers/platforms recognizing Persisto)

Cross-Protocol Integration

With Portable Identity

Portable Identity provides: Cryptographic authentication (WHO)
Persisto provides: Learning verification (WHAT was learned)
Together: Provable capability attribution

Integration:

  • All Persisto attestations MUST be signed with Portable Identity private key
  • Verification results become cryptographically owned proof
  • Portable across all platforms recognizing both protocols
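The protocol requires attestations to be signed with the holder's PortableIdentity key, but does not specify that API. The sketch below uses a standard-library HMAC purely as a stand-in for a real signature scheme; a production implementation would use an asymmetric signature (e.g. Ed25519) so verifiers need only the public key:

```python
import hashlib
import hmac
import json

def sign_attestation(attestation: dict, key: bytes) -> str:
    """Stand-in for PortableIdentity signing (HMAC, not a real
    asymmetric signature): binds the attestation contents to a key."""
    payload = json.dumps(attestation, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_attestation(attestation: dict, key: bytes, sig: str) -> bool:
    """Recompute and compare in constant time."""
    return hmac.compare_digest(sign_attestation(attestation, key), sig)

att = {"learner_id": "did:example:alice", "domain": "graph algorithms",
       "verification_status": "PASS", "capability_score": 78.0}
key = b"holder-key-material"   # hypothetical key material
sig = sign_attestation(att, key)
assert verify_attestation(att, key, sig)
# Tampering with any field invalidates the signature:
assert not verify_attestation({**att, "capability_score": 100.0}, key, sig)
```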

With Contribution Graph

Contribution Graph provides: Impact persistence tracking
Persisto provides: Learning persistence verification
Together: Proof that verified capability created verified impact

Integration:

  • Persisto-verified capabilities feed into ContributionGraph as proven capacity sources
  • ContributionGraph impact measurements validate whether Persisto-verified learning created real effects
  • Combined: Proof that learning occurred AND mattered

With MeaningLayer

MeaningLayer provides: Semantic infrastructure for what was learned
Persisto provides: Temporal verification that learning persisted
Together: Complete semantic understanding + persistence proof

Integration:

  • MeaningLayer provides semantic addressing for learned content
  • Persisto verifies semantic understanding persisted temporally
  • Combined: Proof of what was learned and that understanding survived

With LearningGraph

Learning Graph provides: Capability development topology mapping
Persisto provides: Verification standard for topology nodes
Together: Verified capability development over time

Integration:

  • Each Learning Graph node can reference Persisto verification
  • Graph edges represent verified capability dependencies
  • Topology reveals verified learning pathways

Web4 Ecosystem Position

(Non-normative rationale – informative)

Complete protocol stack for post-behavioral verification:

Persisto provides missing verification layer enabling other protocols to prove capability rather than activity.


IV. Security Considerations

Threat Model

Persisto verification is vulnerable to several attack vectors. Protocol implementations MUST address these threats through compliance profiles.

Attack Vector 1: Covert Material Access

Threat: Learner accesses training materials during separation period

Mitigation:

  • Standard Profile: Attestation with consequences (honor violation)
  • High-Assurance: Cryptographic commitment at T=0, re-verification of novel problem sets
  • Ultimate: Controlled environment with verified absence

Attack Vector 2: Hidden Assistance During Test

Threat: Real-time AI assistance, human tutoring, or collaboration during assessment

Mitigation:

  • Standard Profile: Attestation-based
  • High-Assurance: Proctored environment, network isolated
  • Ultimate: Faraday environment, device-free, biometric verification

Attack Vector 3: Test Bank Memorization

Threat: Learner obtains and memorizes test problems in advance

Mitigation:

  • MUST use randomized problem generation
  • MUST NOT reuse test items across administrations
  • SHOULD use procedural generation when possible
  • SHOULD rotate problem domains
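The randomization and non-reuse requirements can be sketched as a seeded selector that draws without replacement across administrations. The pool structure and class design are assumptions about one possible implementation:

```python
import random

class ProblemSelector:
    """Draws test items without replacement across administrations,
    so no item is ever reused (MUST NOT reuse test items)."""

    def __init__(self, pool, seed=None):
        self._rng = random.Random(seed)
        self._remaining = list(pool)

    def draw(self, n: int) -> list:
        """Select n unused items at random; remove them from the pool."""
        if n > len(self._remaining):
            raise RuntimeError("pool exhausted; rotate in new problems")
        items = self._rng.sample(self._remaining, n)
        for item in items:
            self._remaining.remove(item)
        return items

pool = [f"problem-{i}" for i in range(10)]
sel = ProblemSelector(pool, seed=42)
first, second = sel.draw(4), sel.draw(4)
assert not set(first) & set(second)   # no reuse across administrations
```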

Attack Vector 4: Scaffolded Prompts

Threat: Test questions provide excessive structure enabling recognition over understanding

Mitigation:

  • MUST use open-ended problems requiring application
  • MUST NOT include solution templates
  • MUST NOT provide step-by-step decomposition
  • SHOULD require novel synthesis

Attack Vector 5: Domain Shift Manipulation

Threat: Test made trivially easy through narrowed scope

Mitigation:

  • Test domain MUST match training domain scope
  • MUST NOT narrow to memorizable subset
  • SHOULD test breadth and depth
  • Compliance profiles MUST define domain boundaries

Attack Vector 6: Proxy Test Completion

Threat: Someone other than learner completes assessment

Mitigation:

  • Standard Profile: Attestation + identity verification
  • High-Assurance: Biometric verification
  • Ultimate: Continuous identity verification throughout test

Compliance Requirements

MUST Requirements (All Profiles)

  • Novel problem class (not training examples, different structural constraints)
  • No-scaffold prompts (no externalization support)
  • Test items MUST be non-reusable and non-disclosed prior to administration (seal by design, not necessarily by physical facility)
  • Randomized problem selection (prevents memorization)
  • Identity verification (prevents proxy completion)

SHOULD Requirements (High-Assurance+)

  • Cryptographic attestation of test conditions
  • Environment verification (network isolation, device restrictions)
  • Sealed test environment (physical or virtual isolation)
  • Temporal audit trail (when test occurred, how long it took)
  • Problem set publication (after completion, for transparency)

MAY Requirements (Ultimate Profile)

  • Biometric continuous verification
  • Faraday environment (complete electromagnetic isolation)
  • Procedurally generated problems (unique per test)
  • Multi-factor identity verification

V. Use Cases & Failure Modes

Primary Use Cases

Use Case 1: Post-AI Education Credentialing

Context: Students complete courses with AI assistance. Traditional credentials prove completion, not capability.

Persisto Application:

  • Course ends → T+0 baseline
  • T+90 separation (no access to materials/AI)
  • T+90 novel assessment → Pass/Fail
  • Credential issued only if Pass

Benefit: Employers trust credentials represent persistent capability, not AI-assisted completion.


Use Case 2: Employment Capability Verification

Context: Résumés and interviews can be AI-optimized. Claimed skills may not persist independently.

Persisto Application:

  • Interview → Conditional offer
  • T+90 on-job assessment (novel tasks, defined tool access)
  • Pass → Permanent hire
  • Fail → Extended probation or separation

Benefit: Companies hire genuinely capable employees, not AI-dependent performers.


Use Case 3: Individual Skill Portability

Context: Workers change employers. New company needs capability proof beyond résumé.

Persisto Application:

  • Worker maintains PortableIdentity
  • Previous employer issued Persisto attestations (T+90 verified)
  • New employer accepts cryptographically signed proof
  • No need for redundant testing

Benefit: Capability proof portable across employment contexts.


Use Case 4: Foundation Model Training Data

Context: AI models currently learn "learning = completion metrics." If trained on Persisto data, models learn "learning = temporal persistence."

Persisto Application:

  • Persisto verification data used in training sets
  • Models learn to distinguish persistent capability from temporary performance
  • Next-generation AI tutors optimize for T+90 success, not T+0 completion

Benefit: AI systems aligned with genuine learning, not proxy metrics.


Failure Modes Without Persisto

(Non-normative rationale – informative)

Failure Mode 1: Systemic Credential Inflation

Without Persisto:

  • Students graduate with AI-assisted degrees
  • Employers hire based on credentials
  • T+90: Capability proves non-existent
  • Massive productivity loss + hiring waste

Cost: $800B+ annually in education spending producing non-persistent capability


Failure Mode 2: Hidden AI-Dependency

Without Persisto:

  • Workforce becomes AI-dependent without awareness
  • Network outage / API changes → Productivity collapse
  • No independent capability reserve

Cost: Civilizational brittleness. System-wide fragility.


Failure Mode 3: Foundation Model Mis-Training

Without Persisto:

  • AI models learn "learning = completion"
  • Next decade of AI optimizes toward wrong objective
  • Path-dependent lock-in after training complete

Cost: Cannot retrain foundation models after $100M+ training runs. Wrong definition embeds for decades.


Failure Mode 4: Learning Investment Misallocation

Without Persisto:

  • Society invests in education producing temporary performance
  • T+90 testing reveals 70-80% degraded to insufficient levels
  • Massive capital misallocation

Cost: Resources spent on performance theater, not capability development


VI. Governance & Evolution

Protocol Governance

Persisto Ergo Didici is an open protocol maintained through a transparent community process, ensuring no entity captures learning verification standards.

Governance Principles

  1. Open Specification: Anyone may implement without permission
  2. Transparent Evolution: All changes proposed publicly, discussed openly
  3. Consensus Adoption: Changes require community agreement
  4. Non-Capturable: No corporation or institution owns protocol evolution
  5. Interoperability: All implementations must maintain compatibility

Version Control

  • Major versions (X.0.0): Breaking changes requiring implementation updates
  • Minor versions (1.X.0): Backward-compatible enhancements
  • Patch versions (1.0.X): Clarifications and corrections

Current Version: 1.0.0 (Specification Final)


Backward Compatibility

Guarantee: Minor and patch versions maintain compatibility with 1.0.0

Breaking Changes: Require major version increment (2.0.0) with minimum 12-month notice

Deprecation Policy:

  • Features marked deprecated in version X.Y.0
  • Removed no earlier than version (X+1).0.0
  • Minimum 12 months between deprecation notice and removal
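The versioning rules above imply a simple compatibility check between an implementation and the attestations it accepts. This is a sketch; the protocol text defines only the semantics, not this function:

```python
def compatible(implementation_version: str, attestation_version: str) -> bool:
    """Minor and patch versions are backward-compatible within a major
    version; a major bump (e.g. 2.0.0) is a breaking change."""
    impl_major = int(implementation_version.split(".")[0])
    att_major = int(attestation_version.split(".")[0])
    return impl_major == att_major

assert compatible("1.2.3", "1.0.0")       # minor/patch: compatible
assert not compatible("2.0.0", "1.0.0")   # major bump: breaking change
```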

Interoperability Requirement

All implementations of Persisto MUST:

  • Accept attestations from other implementations using same profile
  • Produce attestations verifiable by other implementations
  • Maintain semantic compatibility across platforms
  • Respect cryptographic signatures from any PortableIdentity provider

Non-Compliance: An implementation that claims "Persisto-compatible" without interoperability violates the protocol.


Compliance Profiles Governance

Adding New Profiles

  1. Proposal published on PersistoErgoDidici.org/proposals
  2. Community review period (minimum 60 days)
  3. Implementation trial by minimum 3 independent parties
  4. Consensus adoption via governance process
  5. Addition to canonical specification

Profile Requirements

New profiles MUST:

  • Maintain compatibility with Tier-1 core verification
  • Define clear threat model and mitigations
  • Specify implementation requirements
  • Provide falsifiability criteria

Open Licensing

License: Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)

Rights Granted:

  • Anyone may implement, adapt, translate, or build upon specifications
  • Derivative protocols explicitly encouraged
  • Commercial implementations permitted
  • Educational use unrestricted

Requirements:

  • Attribution to Persisto Ergo Didici
  • Derivative works under same license (ShareAlike)
  • No exclusive licenses granted
  • No proprietary ownership claims

Rationale: Learning verification is civilizational infrastructure, not commercial product. The ability to prove capability cannot become intellectual property.


Roadmap

Phase 1 (2026 Q1): Current

  • Specification finalization ✓
  • Reference implementation development
  • Early adopter partnerships
  • Integration guides publication

Phase 2 (2026 Q2-Q3)

  • University pilot programs
  • Employer verification trials
  • Foundation model training integration
  • Community feedback incorporation

Phase 3 (2026 Q4-2027)

  • Platform adoption (LearningGraph, ContributionGraph)
  • Standardization efforts
  • Compliance profile expansion
  • Global education system integration

Phase 4 (2027+)

  • Universal temporal verification as default standard
  • Credential system evolution
  • Cross-protocol ecosystem maturity
  • Multi-decade persistence studies

VII. Conclusion

Time as Verification Primitive

(Non-normative rationale – informative)

When AI achieved behavioral equivalence with humans, civilization lost its primary learning verification method. Examinations, credentials, interviews—all became potentially AI-assisted performance rather than proof of capability.

Persisto Ergo Didici provides the missing infrastructure:

Not by rejecting AI assistance (futile and counterproductive), but by measuring what AI-assisted performance cannot fake: genuine capability persisting independently across time when tested in novel contexts without support.

This is a measurement protocol, not a philosophical position.

Testable. Falsifiable. Replicable. Open.

Learning either persists or it does not. Capability either survives temporal separation or it collapses. The protocol provides infrastructure for distinguishing performance from persistence.


The Implementation Window

(Non-normative rationale – informative)

Foundation models complete training in 2026. They will internalize whatever definitions exist during training.

If “learning = completion metrics” embeds during training, AI will spend decades optimizing toward the wrong objective.

If “learning = temporal persistence” embeds during training, AI helps humans develop genuine capability.

The training window is finite. The path dependency is permanent.


Final Statement

Persisto Ergo Didici is not a vision. It is a protocol.

Open. Neutral. Necessary.

When behavior stopped proving capability, time became the last unfakeable dimension. This protocol provides standardized infrastructure for temporal verification.

Anyone may implement. Anyone may integrate. Anyone may improve.

But no one may capture. No one may monopolize. No one may own.

Learning verification is civilizational infrastructure. It must remain free.


VIII. Appendices

Appendix A: Measurement Profiles (Non-Normative)

CRITICAL: The thresholds and parameters in this appendix are indicative examples only. Implementations MUST NOT claim protocol conformance based solely on Appendix A thresholds. Conformance requires meeting the normative requirements in Sections II-VI.

Cascade Mathematics

For implementations choosing to measure cascade multiplication:

Branching Factor (average number of new beneficiaries each direct beneficiary enables; must exceed 1 for multiplicative growth):

B = (Next-Generation Beneficiaries Enabled) / (Total Direct Beneficiaries)

Generation Count:

G = Maximum depth of independent propagation chain

Cascade Multiplier:

M = B^G

Interpretation:

  • M > 8: Strong multiplicative cascade (high branching sustainability)
  • 2 < M ≤ 8: Moderate cascade (consistent branching)
  • M ≤ 2: Weak cascade (may indicate dependency or linear transfer)

Note: Cascade multiplication refers to multiplicative branching patterns (each node enables multiple others), not strict exponential growth (e^kt). The requirement is branching factor > 1 sustained across generations, indicating genuine capability transfer rather than dependency chains.

Retention Coefficient (Optional):

k = ln(C(T+X) / C_ind(0)) / X

Where X is days since baseline, and C_ind(0) is the independence-calibrated baseline.

Interpretation:

  • k > 0.02: Capability improving (positive growth)
  • k ≈ 0: Capability stable (maintenance – acceptable for verified learning)
  • k < -0.02: Capability degrading (significant decay)

Note: Capability maintenance (k ≈ 0) indicates successful internalization. Not all learned skills improve over time; many stabilize at competent levels. These are indicative thresholds, not normative requirements. Implementations may adjust based on domain and use case.
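The retention coefficient and its indicative thresholds can be sketched as follows. The scores are hypothetical (they reuse the values from the attestation example in Appendix C), and the ±0.02 band is the indicative threshold stated above, not a normative requirement.

```python
import math

def retention_coefficient(score_tx: float, baseline_independent: float, days: float) -> float:
    """k = ln(C(T+X) / C_ind(0)) / X, per the formula above."""
    return math.log(score_tx / baseline_independent) / days

def classify(k: float, eps: float = 0.02) -> str:
    """Map k onto the indicative bands: improving / stable / degrading."""
    if k > eps:
        return "improving"
    if k < -eps:
        return "degrading"
    return "stable"

# Hypothetical: independent baseline 78, score 76 at T+90.
k = retention_coefficient(76, 78, 90)
print(round(k, 5), classify(k))  # ≈ -0.00029 stable
```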


Appendix B: Implementation Checklist

Minimum Viable Implementation

  • [ ] Temporal separation mechanism (T+90 minimum)
  • [ ] Novel problem set generation or selection
  • [ ] Scaffold removal verification
  • [ ] Independent administration capability
  • [ ] Binary Pass/Fail threshold
  • [ ] Cryptographic signing (PortableIdentity integration)
  • [ ] Attestation format (machine-readable)

High-Assurance Implementation

All Minimum Viable, plus:

  • [ ] Environment verification (proctored or controlled)
  • [ ] Identity verification (biometric or equivalent)
  • [ ] Audit trail (when, how long, conditions)
  • [ ] Problem set rotation (prevent memorization)
  • [ ] Compliance profile documentation

Ultimate Implementation

All High-Assurance, plus:

  • [ ] Continuous identity verification
  • [ ] Network isolation verification
  • [ ] Device restriction enforcement
  • [ ] Procedural problem generation
  • [ ] Multi-factor security

Appendix C: Integration Examples

JSON Attestation Format (Informative)

{
  "protocol": "PersistoErgoDidici",
  "version": "1.0.0",
  "learner_id": "did:portable:abc123...",
  "domain": "Python Programming",
  "baseline_score_raw": 85,
  "baseline_score_independent": 78,
  "baseline_date": "2026-01-01T00:00:00Z",
  "test_date": "2026-04-01T10:00:00Z",
  "separation_days": 90,
  "profile": "Standard",
  "capability_score": 76,
  "persistence_ratio": 0.97,
  "verification_status": "PASS",
  "temporal_strength": "Acceptable",
  "signature": "0x...",
  "issuer": "UniversityName or SelfIssued"
}

Note: baseline_score_raw may include assistance; baseline_score_independent is measured under profile constraints. Use persistence_ratio (P(t)) for temporal comparisons.
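A verifier would typically check the attestation's internal consistency before trusting it. A minimal sketch, assuming the field names from the example above; the checks themselves (and the 0.01 tolerance) are illustrative, not normative, and signature verification is deliberately omitted.

```python
import json
from datetime import datetime

def check_attestation(att: dict, min_separation_days: int = 90) -> bool:
    """Illustrative consistency checks; cryptographic signature
    verification (PortableIdentity) is out of scope for this sketch."""
    baseline = datetime.fromisoformat(att["baseline_date"].replace("Z", "+00:00"))
    test = datetime.fromisoformat(att["test_date"].replace("Z", "+00:00"))
    elapsed = (test - baseline).days
    if elapsed < min_separation_days or elapsed != att["separation_days"]:
        return False  # temporal separation claim inconsistent
    ratio = att["capability_score"] / att["baseline_score_independent"]
    if abs(ratio - att["persistence_ratio"]) > 0.01:
        return False  # persistence_ratio does not match the scores
    return True

attestation = json.loads("""{
  "baseline_date": "2026-01-01T00:00:00Z",
  "test_date": "2026-04-01T10:00:00Z",
  "separation_days": 90,
  "baseline_score_independent": 78,
  "capability_score": 76,
  "persistence_ratio": 0.97
}""")
print(check_attestation(attestation))  # True
```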


Implementation Resources

  • Canonical Specification: PersistoErgoDidici.org/protocol
  • Reference Implementation: PersistoErgoDidici.org/reference
  • Integration Guide: PersistoErgoDidici.org/integrate
  • Testing Methodology: PersistoErgoDidici.org/testing
  • Community Forum: PersistoErgoDidici.org/community
  • Proposals: PersistoErgoDidici.org/proposals

Related Infrastructure

Complete Web4 Protocol Stack:

Together, these protocols form the architecture for civilization’s transition from behavioral verification (compromised by AI) to temporal verification (unfakeable by AI).


Protocol Version: 1.0.0
Status: Specification Final
License: CC BY-SA 4.0 (Open Protocol)
Last Updated: January 2026
Maintained By: Web4 Protocol Community
Canonical URL: PersistoErgoDidici.org/protocol


End of Specification