How the Research Works

The Infotropy Project uses a governed methodology to test and develop the theory of Infotropy across domains. The methodology was not imported from a textbook — it was built during operation, in response to real failures. The project's structural honesty system — five auditable rules for preventing AI sycophancy, detecting drift, and maintaining epistemic integrity — was discovered by diagnosing what went wrong, not by designing what should go right.

Here is how the research operates.

Layered Discovery

Research proceeds in layers. Each domain is explored at increasing depth. Later layers build on earlier findings, testing whether patterns hold up or collapse under deeper scrutiny.

Multi-Perspective Analysis

Analytical work uses multi-perspective committees — an analyst, a skeptic, and a boundary-checker working together. This guards against confirmation bias by requiring every finding to survive adversarial questioning.
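The committee pattern above can be sketched in code. This is a minimal illustration, not the project's actual tooling: the reviewer functions, field names (`evidence_count`, `claim_scope`, `tested_scope`), and objection rules are all hypothetical assumptions.

```python
def analyst(finding):
    # Proposes the finding; raises no objections in this sketch.
    return []

def skeptic(finding):
    # Challenges findings with thin evidential support (threshold is illustrative).
    return ["insufficient evidence"] if finding["evidence_count"] < 2 else []

def boundary_checker(finding):
    # Flags claims that reach beyond the scope that was actually tested.
    return ["claim exceeds scope"] if finding["claim_scope"] > finding["tested_scope"] else []

def committee_review(finding, reviewers=(analyst, skeptic, boundary_checker)):
    # A finding survives only if every perspective raises no objection.
    objections = [obj for review in reviewers for obj in review(finding)]
    return (len(objections) == 0, objections)
```

The key design point is that survival requires unanimity: any single perspective can block a finding, which is what makes the questioning adversarial rather than advisory.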

Evidence Gates

Every finding carries an explicit evidence posture: confirmed (high confidence, multiple domains), observed (moderate evidence), or under investigation. A finding must satisfy the evidence requirements of its posture before any structural claim is built on it.
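The three postures and the gate they imply can be sketched as follows. The numeric thresholds are illustrative assumptions, not the project's actual criteria; only the three posture labels come from the text.

```python
from enum import Enum

class Posture(Enum):
    CONFIRMED = "confirmed"                        # high confidence, multiple domains
    OBSERVED = "observed"                          # moderate evidence
    UNDER_INVESTIGATION = "under investigation"    # not yet either

def assign_posture(domains_supporting, confidence):
    # Thresholds here are hypothetical, chosen only to make the gate concrete.
    if domains_supporting >= 2 and confidence >= 0.9:
        return Posture.CONFIRMED
    if domains_supporting >= 1 and confidence >= 0.5:
        return Posture.OBSERVED
    return Posture.UNDER_INVESTIGATION

def may_make_structural_claim(posture):
    # The gate: only confirmed findings may back structural claims.
    return posture is Posture.CONFIRMED
```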

Proxy Robustness

Cross-domain claims are tested against alternative measurements. If changing the measurement method changes the conclusion, the finding is flagged. This guards against overfitting to one way of operationalizing a concept.
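The robustness check can be sketched as re-deriving a conclusion under each alternative proxy and flagging disagreement. Everything here is a toy assumption: the two proxies (`mean_signal`, `median_signal`) and the decision rule are stand-ins for whatever measurements a real cross-domain claim would use.

```python
def conclusion(measure, data):
    # Derive the conclusion under one operationalization (illustrative rule).
    return measure(data) > 0

def proxy_robust(measures, data):
    # Re-run the conclusion under every proxy; if any proxy flips the
    # result, the finding is flagged as non-robust.
    results = {m.__name__: conclusion(m, data) for m in measures}
    return len(set(results.values())) == 1, results

# Two hypothetical proxies for the same underlying concept:
def mean_signal(data):
    return sum(data) / len(data)

def median_signal(data):
    s, n = sorted(data), len(data)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
```

A skewed sample makes the point: the mean and median can sit on opposite sides of zero, so a conclusion that depends on which one you picked gets flagged.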

Functional Recurrence

When a pattern appears across domains, it is classified precisely: functional identity (same mechanism), functional analogy (same abstract pattern, different mechanism), or functional ambiguity (unresolved). The project never claims geometric/fractal identity.
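The three-way classification can be written down directly. The labels come from the text; the decision inputs (`same_pattern`, `same_mechanism`, with `None` meaning unresolved) are an assumed encoding for illustration.

```python
from enum import Enum

class Recurrence(Enum):
    FUNCTIONAL_IDENTITY = "same mechanism"
    FUNCTIONAL_ANALOGY = "same abstract pattern, different mechanism"
    FUNCTIONAL_AMBIGUITY = "unresolved"

def classify(same_pattern, same_mechanism):
    # same_mechanism is True, False, or None (mechanism question unresolved).
    if not same_pattern:
        return None  # no recurrence to classify
    if same_mechanism is True:
        return Recurrence.FUNCTIONAL_IDENTITY
    if same_mechanism is False:
        return Recurrence.FUNCTIONAL_ANALOGY
    return Recurrence.FUNCTIONAL_AMBIGUITY
```

Note that geometric or fractal identity never appears as an output: by construction, the classifier cannot emit a claim the project rules out.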

Timescale Validity

Structural identity claims require evidence of mechanism persistence at the century-plus timescale. Decade-scale evidence is flagged as provisional. Cross-substrate predictions hold in ratio terms, not absolute time units.
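Both rules above can be made concrete in a short sketch. The century threshold and the ratio-based transfer come from the text; the function names and the idea of a per-substrate "characteristic time" are illustrative assumptions.

```python
CENTURY_YEARS = 100

def evidence_status(observed_span_years):
    # Structural identity requires century-plus mechanism persistence;
    # decade-scale evidence is only provisional.
    return "structural" if observed_span_years >= CENTURY_YEARS else "provisional"

def cross_substrate_prediction(ratio, target_characteristic_time):
    # Predictions transfer as ratios of each substrate's characteristic
    # timescale, never as absolute time units: a process taking 0.5 of
    # one substrate's characteristic time is predicted to take 0.5 of
    # another's.
    return ratio * target_characteristic_time
```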

AI Assistance

Research production is AI-assisted (Claude by Anthropic). The human researcher directs all analysis. AI performs delegated work under governed constraints. This is disclosed in all publications.

Publication Pipeline

Papers go through a seven-gate process: intake, claim posture, evidence readiness, prior-art check, reproducibility audit, credit integrity, and submission readiness. No shortcuts.
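The "no shortcuts" rule is naturally a sequential pipeline: gates run in order and the first failure halts everything. The seven gate names come from the text; the `run_pipeline` function and the per-gate check callables are a hypothetical sketch.

```python
GATES = [
    "intake",
    "claim posture",
    "evidence readiness",
    "prior-art check",
    "reproducibility audit",
    "credit integrity",
    "submission readiness",
]

def run_pipeline(paper, checks):
    # checks maps each gate name to a predicate over the paper.
    # Gates run strictly in order; the first failure stops the
    # pipeline and reports which gate was not passed.
    for gate in GATES:
        if not checks[gate](paper):
            return False, gate
    return True, None
```

Because a later gate never runs before an earlier one passes, a paper cannot reach submission readiness with an unresolved prior-art or reproducibility problem.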