
The Last Reliable Signal: What Humans Can Verify That Machines Cannot
When AI can generate any content—text, images, video, audio—perfectly, what remains verifiable?
Not much. But not nothing.
Identifying the last reliable signals—the things AI cannot fake and humans can verify—is essential for preserving trust in an era of epistemic drift. These signals become the foundation for whatever coordination remains possible.
The Verification Hierarchy
What AI Can Already Fake Perfectly
Text: AI generates text indistinguishable from human writing. Detection tools are losing the arms race.
Images: AI-generated images fool human observers. Forensic detection is possible but not scalable.
Audio: Voice cloning is mature. Real-time fake audio is possible. Authentication is difficult.
Video: Deepfakes are approaching undetectable quality for short clips. Longer, complex video is harder but coming.
For all of these, the current trajectory is toward perfect undetectability. Relying on content analysis to determine authenticity is a losing strategy.
What AI Can Fake But With Difficulty
Consistent long-term personas: Maintaining a consistent identity across years of content, with appropriate development and change, is harder. But not impossible.
Interactive knowledge: Real-time conversation requiring deep domain expertise catches gaps in training data. But those gaps are shrinking.
Physical world prediction: AI struggles with physical intuition. But this is a current limitation, not a permanent one.
These represent temporary advantages that are eroding.
What AI Cannot Fake (For Now)
Physical presence: Being somewhere in the physical world, at a specific time, verified by independent observers.
Cryptographic provenance: Content signed at creation time with keys controlled by known entities.
Stake and consequence: Actions that carry real cost if dishonest—financial stake, reputational stake, legal liability.
Verified history: Long track records established before the capability to fabricate them existed.
Independent replication: Multiple independent parties reaching the same conclusion through different methods.
These are the last reliable signals. They are not perfectly reliable, but they are more reliable than content analysis.
The Reliable Signal Taxonomy
Physical Presence Signals
In-person meetings: If you meet someone physically, you know they exist and were in that place at that time.
Live events with witnesses: Public appearances, conferences, performances—verified by many independent observers simultaneously.
Physical artifacts: Objects that exist in the world and can be independently examined.
Synchronized physical actions: Multiple people performing coordinated actions that require physical presence.
Limitations: Physical presence does not verify what someone says, only that they exist. And remote manipulation of willing proxies is possible.
Cryptographic Signals
Signed content: Content cryptographically signed by keys established before fabrication capability existed.
Blockchain attestation: Timestamped, tamper-evident records on distributed ledgers.
Hardware attestation: Signatures from secure hardware that is difficult to compromise.
Limitations: Key compromise, coercion, and the problem of mapping keys to identities remain. Cryptography proves key control, not identity.
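The signing idea above can be sketched in a few lines. This is a minimal, hypothetical illustration using HMAC-SHA256 from Python's standard library as a stand-in; real deployments use asymmetric signatures (e.g. Ed25519) so that verifiers never hold the signing key, and all names here are illustrative.

```python
import hmac
import hashlib

# Hypothetical sketch: HMAC-SHA256 as a stand-in for content signing.
# Real systems use asymmetric signatures so verifiers never hold the
# signing key; HMAC keeps this runnable with only the standard library.

def sign(content: bytes, key: bytes) -> str:
    """Attach a tamper-evident tag to content at creation time."""
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str, key: bytes) -> bool:
    """Check content against its tag using a constant-time comparison."""
    expected = hmac.new(key, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

key = b"key-established-before-fabrication-era"  # illustrative key material
msg = b"Statement issued 2024-01-15 by a known entity."
tag = sign(msg, key)

assert verify(msg, tag, key)                     # untouched content passes
assert not verify(msg + b" (edited)", tag, key)  # any alteration fails
```

Note the constant-time comparison: the tag proves only that whoever held the key vouched for these exact bytes at signing time, which is exactly the "key control, not identity" caveat above.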
Stake and Consequence Signals
Financial stake: Putting money at risk based on claims. If wrong, you lose.
Reputation stake: Making claims under a long-established identity that will suffer if wrong.
Legal liability: Claims made under penalty of perjury or contract.
Career consequence: Professional stakes that would be damaged by falsity.
Limitations: Stakeholders can be indifferent (the wealthy may shrug off fines) or irrational (willing to burn reputation). But in general, stake correlates with honesty.
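The stake mechanism can be made concrete with a small sketch: a claim must carry a deposit that is forfeited if the claim is later adjudicated false. This is a hypothetical illustration, not any real platform's API; the class and field names are invented for this example.

```python
# Hypothetical sketch of a stake-backed claim: the deposit is returned
# if the claim resolves true and forfeited if it resolves false.

class StakedClaim:
    def __init__(self, author: str, text: str, stake: float):
        if stake <= 0:
            raise ValueError("claims without stake carry no signal")
        self.author, self.text, self.stake = author, text, stake
        self.resolved = None  # None = open; True/False once adjudicated

    def resolve(self, truthful: bool) -> float:
        """Return the payout: stake back if truthful, zero if not."""
        self.resolved = truthful
        return self.stake if truthful else 0.0

claim = StakedClaim("analyst-7", "Shipment arrived on 2024-03-02", stake=100.0)
payout = claim.resolve(truthful=False)  # adjudicated false: stake forfeited
assert payout == 0.0
```

The interesting design question is who adjudicates `truthful`; in practice that role falls to the replication and attestation infrastructure described below.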
Track Record Signals
Pre-AI verification: Content and claims that were verified before AI could fake them.
Consistent history: Long track records where early predictions were verified by later events.
Established reputation: Identities that were known and trusted before fabrication was easy.
Limitations: Track records can be fabricated retroactively if records are not secured. And past accuracy does not guarantee future honesty.
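The "secured records" caveat has a standard remedy: chain each entry's hash over the previous one, so rewriting any early entry invalidates every later link. A minimal sketch, using only `hashlib` and invented example entries:

```python
import hashlib

# Hypothetical sketch of a tamper-evident track record: each link's hash
# covers the previous link, so retroactive edits break the whole chain.

def chain(entries):
    """Return the list of link hashes for an append-only record."""
    prev, links = "genesis", []
    for entry in entries:
        prev = hashlib.sha256((prev + entry).encode()).hexdigest()
        links.append(prev)
    return links

record = ["2019: predicted X", "2021: X confirmed", "2023: predicted Y"]
original = chain(record)

# A retroactive fabrication changes one early entry...
forged = chain(["2019: predicted Y", "2021: X confirmed", "2023: predicted Y"])

assert original[0] != forged[0]    # ...and every later link diverges,
assert original[-1] != forged[-1]  # so a secured final hash exposes the edit
```

This only helps if the final hash was published somewhere tamper-resistant (a newspaper, a ledger, a notary) before the fabrication capability existed, which is the point of the preservation window discussed later.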
Independent Replication Signals
Scientific replication: Multiple labs reaching the same result independently.
Journalistic confirmation: Multiple reporters with different sources confirming a story.
Adversarial verification: Parties with opposing interests agreeing on facts.
Distributed observation: Many independent observers reporting consistent observations.
Limitations: Coordination among fabricators, or systematic biases affecting all observers.
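The replication rule above reduces to a quorum check: accept a reported value only when a supermajority of independent observers agree. A minimal sketch, with an illustrative two-thirds threshold:

```python
from collections import Counter

# Hypothetical sketch: accept an observation only if a supermajority of
# independent observers report the same value. Threshold is illustrative.

def replicated(reports, threshold=2 / 3):
    """Return the consensus value if it clears the threshold, else None."""
    if not reports:
        return None
    value, count = Counter(reports).most_common(1)[0]
    return value if count / len(reports) >= threshold else None

assert replicated(["A", "A", "A", "B"]) == "A"  # 3/4 agree: accepted
assert replicated(["A", "B", "C"]) is None      # no supermajority
```

The limitations listed above map directly onto the code's blind spots: colluding fabricators inflate `count`, and a shared systematic bias makes the observers non-independent, which the tally cannot detect.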

Building Systems on Reliable Signals
Verification Infrastructure
Society needs infrastructure that generates and validates reliable signals at scale.
Notarization services: Entities that attest to physical presence, document authenticity, and identity verification.
Cryptographic registries: Systems for establishing and managing key-to-identity mappings.
Stake mechanisms: Platforms where claims must be backed by real stake.
Replication networks: Infrastructure for coordinating independent verification.
These exist in rudimentary forms. They need massive expansion.
Signal Preservation
Signals that are reliable now may not be reliable later.
Pre-AI content archives: Secure preservation of content created before fabrication capability.
Key ceremonies and records: Secure creation and documentation of cryptographic identities.
Physical evidence chains: Maintaining chain of custody for physical artifacts.
The window for creating reliable historical records is closing as AI capability advances.
Signal-Based Trust Networks
When content itself is unverifiable, trust must flow through verified entities.
Trusted attesters: Entities whose reliable signals have been verified, who can then vouch for others.
Reputation systems: Mechanisms for tracking consistent reliable signaling over time.
Web of trust: Networks where trust propagates through chains of verified attestation.
These are not new ideas. They need new implementation for AI-era conditions.
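The web-of-trust idea can be sketched as best-path search over weighted attestations: trust decays multiplicatively with each hop, and only the strongest chain to an entity counts. The graph, weights, and decay rule here are all illustrative assumptions, not a standard.

```python
# Hypothetical sketch of trust propagation through attestation chains.
# Trust decays multiplicatively per hop; the strongest path wins.

def trust(graph, root, target):
    """Best-path trust from root to target over weighted attestations."""
    best = {root: 1.0}
    frontier = [root]
    while frontier:
        node = frontier.pop()
        for neighbor, weight in graph.get(node, {}).items():
            score = best[node] * weight
            if score > best.get(neighbor, 0.0):
                best[neighbor] = score
                frontier.append(neighbor)
    return best.get(target, 0.0)

attestations = {
    "me":        {"notary": 0.9, "colleague": 0.8},
    "notary":    {"vendor": 0.7},
    "colleague": {"vendor": 0.5},
}
# Two chains reach "vendor"; the stronger one (0.9 * 0.7 = 0.63) wins.
assert abs(trust(attestations, "me", "vendor") - 0.63) < 1e-9
```

Multiplicative decay encodes the intuition that long attestation chains are weaker than short ones, which is why trust bubbles tend to stay local.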
The New Epistemics
From Content to Source
The shift in verification strategy:
Old model: Evaluate content. Is this text accurate? Is this image real?
New model: Evaluate source. Is this entity reliable? Are their signals verifiable? Do they have stake?
Content becomes less important than provenance. What matters is not what is said but who is saying it and what they have at stake.
From Detection to Prevention
Old model: Detect fabricated content after the fact.
New model: Only trust content created with reliable signals from the start.
If content was not signed at creation, was not attested by physical presence, and carries no stake, assume it may be fabricated. The default shifts from trust to distrust.
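The default-distrust rule amounts to a gate at intake: content is rejected unless it arrives with at least one reliable signal. A minimal sketch; the field names are invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical sketch of a provenance-gated intake policy: content is
# distrusted by default and admitted only with a reliable signal attached.

@dataclass
class Provenance:
    signed_at_creation: bool = False
    physically_attested: bool = False
    stake_attached: bool = False

def admit(p: Provenance) -> bool:
    """Default-distrust: require at least one reliable signal."""
    return p.signed_at_creation or p.physically_attested or p.stake_attached

assert not admit(Provenance())                     # bare content: rejected
assert admit(Provenance(signed_at_creation=True))  # signed at creation: admitted
```

A stricter policy might require two independent signals, or weight them; the essential inversion is that the empty `Provenance` defaults to rejection rather than acceptance.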
From Universal to Local
Old model: Shared epistemic standards across society.
New model: Trust bubbles centered on verified entities and their attestations.
This is fragmentation, but it may be the only stable equilibrium. Universal shared reality may be a temporary historical condition, now ending.
What This Means Practically
For Individuals
- Build reputation and track record now, while it can still be verified
- Establish cryptographic identity and use it consistently
- Prefer physical presence for important relationships
- Distrust content without verified provenance
For Organizations
- Implement cryptographic signing for all official content
- Maintain chain of custody and attestation for important claims
- Build relationships on physical verification where possible
- Invest in verification infrastructure
For Society
- Create public infrastructure for identity and attestation
- Preserve pre-AI records before they can be contaminated
- Develop legal frameworks that recognize reliable signals
- Build education around the new epistemics

The Underlying Tragedy
The last reliable signals are expensive.
Physical presence requires travel and time. Cryptographic infrastructure requires investment. Stake requires resources at risk. Track records require years.
This means verification becomes a luxury. Those with resources can verify. Those without cannot.
The alternative—cheap verification through content analysis—is failing. But expensive verification creates its own inequities.
There may be no good solution. Only less bad ones.
Implications
In an era of epistemic drift and semantic collapse, the last reliable signals become the foundation for whatever truth and trust remain.
Identifying these signals, building infrastructure around them, and teaching people to rely on them is essential work.
The alternative is a world where nothing is verifiable. Where every claim is equally suspect. Where coordination collapses because no one can trust anyone.
We are not there yet. But we are heading there unless we build the alternative.
The last reliable signals are the building blocks. The time to build is now.
This is a domain impact page showing constructive responses to Epistemic Drift. For the underlying problem, see Semantic Collapse. For practitioner implications, see For Executives: Scarcity Inversion.