The Windstorm Institute studies the mathematical constraints — information-theoretic and thermodynamic — that govern what physical systems can do. Two research programs. Sixteen papers. Track 1: the throughput basin in serial decoders, from ribosomes to transformers. Track 2: non-equilibrium entropic bounds in analog gravity systems, with seven papers in the field — a BEC analog-gravity prediction, the framework paper, a clarification note, a lattice-QFT test, a translation of standard GR results into the escrow vocabulary, a cross-regime observation that the same recipe |U|/T produces the Bekenstein-bound saturation form in three distinct settings, and a corollary recording the mass-independent Compton-scale Hilbert-space ceiling D ≤ e^(2π) ≈ 535.49 and its E8 coincidence at 92.6% (linear) / 98.8% (log₂).
Both tracks ask what non-equilibrium thermodynamic and information-theoretic constraints set the limits of physical systems — applied to different substrates.
Information-theoretic constraints on serial decoders, from ribosomes to transformers. Nine papers, six domains, one throughput band — the rate-distortion surface and thermodynamic cost landscape that all serial decoders share. Complete arc; refined to a data-driven law.
Non-equilibrium thermodynamic bounds applied to analog physical systems. Verlinde’s entropic-gravity construction has been mathematically beautiful and untestable for fifteen years. Bose–Einstein condensates put it within reach (Paper 10), and the same Clausius-inequality lens unifies Newton, Bekenstein–Hawking, the equivalence principle, and the Milgrom acceleration scale under one bookkeeping picture (Paper 11).
Nine papers. One question. From observation to law to propagation — and now, to falsification.
Non-equilibrium thermodynamic bounds in analog physical systems — the same Clausius-inequality lens that drives Track 1’s thermodynamic argument, applied to a different substrate. Seven papers now in the field: a narrow falsifiable laboratory prediction, a broad interpretive synthesis of gravity-as-entropy, a methodology case study, a direct lattice-QFT test, a translation of four standard GR results, a cross-regime saturation observation, and a short corollary on the Compton-scale ceiling.
Verlinde’s entropic-gravity construction yields, under one explicit thermodynamic assumption, the bound η ≤ 1/(1 + T/T_res). In every astrophysical setting the ratio is essentially zero and the bound is vacuous. In Bose–Einstein condensate analog gravity the ratio is ≈ 0.2 and the bound predicts a 17% efficiency suppression distinguishable from naive energetic accounting and from mundane experimental losses by its specific functional form. The load-bearing assumption is tested across five independent QuTiP Lindblad simulations.
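A quick check of the headline arithmetic (a minimal sketch, not the paper's derivation):

```python
# Paper 10's efficiency bound: eta <= 1 / (1 + T/T_res).
def eta_bound(t_over_tres: float) -> float:
    return 1.0 / (1.0 + t_over_tres)

# Astrophysical settings: T/T_res is essentially zero, so the bound is vacuous.
print(eta_bound(1e-12))        # ~1.0, no constraint

# BEC analog gravity: T/T_res ~ 0.2 gives the quoted ~17% suppression.
print(1.0 - eta_bound(0.2))    # 0.1666..., i.e. ~17%
```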
A reframing of gravity as the universe’s collection agency for an entropy debt — the universe attracts because the books want to balance. The same picture explains why gravity always pulls and never pushes, why you can’t shield it, why a falling elevator feels like nothing, why black holes are entropy maxed out into geometry, and why galaxies stop obeying Newton at exactly the acceleration scale set by the “chill” of empty space. A five-case test on the most distant galaxies the Genzel team has measured supports the framework’s commitment to a constant cosmic floor over alternatives. The cluster-cores problem is flagged honestly as a real difficulty the picture can’t yet dissolve.
A short companion to Paper 11. Multiple AI systems independently proposed a candidate covariant extension — a beautiful-looking entropy-current equation that reproduced both Bekenstein–Hawking and Gibbons–Hawking entropies exactly. We show it’s algebraically identical to a 1981 Bekenstein result wearing a costume. The methodology section — how three of four AI systems were confidently wrong about a unit convention, and how reality-checks against published Planck 2018 values resolved it — turns out to be the more general contribution.
Supplement to Paper 11. The framework’s static identification S_esc = |U_grav|/T_Unruh is tested directly against lattice quantum field theory across three independent entropy measures. The literal bipartition-entropy reading fails by as much as 56 orders of magnitude across a 295-point parameter grid in 1+1D, and is bounded below 10⁻³ in 3+1D. The modular Hamiltonian reading partially survives in 1+1D: the Bisognano–Wichmann linear asymptote ΔK ∝ d₁ is approximately recovered in a small-d₁ window with prefactor ≈ 1/30. The previously published “ΔK ∝ L^0.7” sublinear fit is corrected here to a regime-dependent characterization — the 0.7 was a fitting artifact across a smooth crossover. The framework’s horizon-limit recoveries (Bekenstein–Hawking via surface gravity) are independent of these flat-space tests; what fails is the load-bearing static identification, what survives is a structurally correct modular content with calculable suppression as the open question.
A translation paper, not a derivation paper. Four standard results of general relativity — gravitational time dilation, the Tolman temperature law, the Bekenstein–Hawking entropy formula, and Jacobson’s (1995) thermodynamic derivation of Einstein’s field equations — are re-read through the escrow vocabulary. The single thermodynamic ratio S_esc = |U_grav|/T_U lets all four be expressed as faces of one identity. None of the underlying physics is modified. Equation (8) isolates 2πr/λ_C as the test-mass leg’s dimensionless organizing variable; equations (17)–(18) show the postulate’s Schwarzschild entropy equals the Bekenstein–Hawking value to all displayed digits without a fudge factor. The paper is explicit (§V.G–H) that the “single object” description is partly notational: |U_grav| takes regime-specific forms across the four legs, and T_U is used with two related-but-distinct conventions. The 1/30 prefactor from Paper 13 is reframed here as a specific calculational question about how lattice-regulated free QFT approaches its continuum Bisognano–Wichmann limit — not a free-floating empirical curiosity. Includes pre-registered retraction commitments for five falsification conditions.
Continuation of Paper 14. Formalizes the 𝒩_esc notation as a two-argument function 𝒩_esc(E, L) ≡ 2πEL/(ℏc), then observes that the static escrow recipe S_esc = |U|/T evaluates to this Bekenstein-bound saturation form in three qualitatively distinct gravitational regimes: test mass in Schwarzschild, Bekenstein–Hawking entropy via Smarr, and a localized perturbation in a Rindler wedge (identified with Casini’s QFT bound). The function is Bekenstein’s; the recipe is the framework’s. The Smarr partition lives in the recipe, not the function arguments. First-principles 1+1D and 3+1D lattice runs anchor the Rindler-wedge sector: boost-generator BW identification at 0.087% mean accuracy across 10 parameter combinations (Table 3); Casini–BW inequality verified within max 5.4% saturation at the Compton scale. Theorem 1 is conditional, properly stated, and properly proved — the framework’s claim is conditional on BW, Casini, and moment-positivity. Five pre-registered retractions.
Short empirical observation paper. Evaluating Bekenstein’s bound at the reduced Compton wavelength λ̄_C = ℏ/(mc) of a massive elementary particle gives a value independent of mass: S_max = 2π k_B, equivalently D ≤ e^(2π) ≈ 535.49. Universal ceiling on the dimension of a particle’s internal Hilbert space at its own Compton scale. The numerical coincidence: the five Cartan-exceptional simple Lie algebras (G2, F4, E6, E7, E8) have adjoint dimensions whose natural one-particle counts 2 dim(adj G) climb monotonically toward this ceiling, with E8 sitting at 92.6% linearly / 98.8% in log₂, and the Cartan classification terminates with E8. Uses 𝒩_esc notation only; the escrow recipe of Papers 11/14/15 is not invoked. The paper is explicit about the domain mismatch (the 2 dim(adj G) count belongs to massless gauge bosons, which have no Compton wavelength) and gives the coincidence reading the most defensible weight.
A note on scope. Seven papers in, Track 2 now spans the full spectrum: a narrow falsifiable laboratory prediction (Paper 10), the framework paper introducing the static escrow postulate (Paper 11), a methodology case study on a candidate extension that turned out to be a 1981 result in disguise (Paper 12), a direct lattice-QFT test of the framework’s load-bearing identification (Paper 13), a conceptual translation showing that four standard GR results can be re-read as faces of one thermodynamic identity (Paper 14), a cross-regime observation that the same recipe |U|/T produces the Bekenstein-bound saturation form 𝒩_esc(E, L) = 2πEL/(ℏc) in three qualitatively distinct gravitational regimes — with first-principles lattice verification of the Rindler-wedge inequality at 0.087% mean accuracy on the BW identification (Paper 15), and a short corollary observation recording the mass-independent D ≤ e^(2π) ceiling at the Compton scale and the E8 coincidence at 92.6% of that ceiling (Paper 16). We are still not promising a roadmap. Sometimes the honest contribution is “here’s what we tried, here’s why the literal version doesn’t work, here’s what survives anyway, here’s how it connects to fifty years of thermodynamic-gravity literature we hadn’t previously engaged explicitly.”
The instrument of the soul's form
The Windstorm Institute's research is guided by a simple philosophical premise: information is not a metaphor for life — it is the substrate of life. The ribosome is not "like" a decoder. It IS a decoder. The brain is not "like" a computer. It IS a serial information processor. When we discovered that these systems all converge on the same throughput band, we were not finding an analogy. We were uncovering the mathematical skeleton that all serial decoders share.
The Forma Animae Organon is our name for this lens. It is not a theory — it is a way of looking. It asks: if you strip away the chemistry, the biology, the engineering, what mathematical structure remains?
The answer, across nine papers and thousands of experiments, is the rate-distortion surface and the thermodynamic cost landscape. These are the bones. Everything else is flesh.
We investigate why serial decoding systems — from ribosomes to transformers — converge on similar throughput constraints despite operating on radically different substrates.
Deriving mechanistic bounds on serial decoding throughput using Shannon's M-ary rate-distortion framework. Zero-free-parameter predictions for biological receivers.
The ribosome as an information channel. Thermodynamic anchoring of throughput to kT via Hopfield kinetic proofreading. Why 21 amino acids — not 10, not 100.
Large-scale empirical studies of tokenizer vocabulary independence. 1,749-model sweeps demonstrating that vocabulary size is a redundancy parameter, not an information parameter.
The throughput basin isn't just a theoretical curiosity. It has concrete implications for AI hardware, synthetic biology, and the search for extraterrestrial life.
The throughput basin predicts that AI models gain nothing from larger vocabularies and waste most of their energy on precision they don't need. Quantization research, efficient architectures, and cooling innovation are the paths to the thermodynamic limit. Optimize joules per decision, not FLOPs per second.
Expanding the genetic code beyond 21 amino acids will cost super-linear energy per addition. Each new amino acid requires exponentially more discrimination infrastructure. The throughput basin constrains what synthetic biology can achieve affordably.
Any alien biochemistry that processes serial information under noise faces the same rate-distortion geometry. The effective throughput per step would land in the same 3–6 bit neighborhood. The basin is universal — it doesn't depend on Earth chemistry.
Paper 5 revealed that the throughput basin is not universal in the way we first expected. There are two regimes — and the difference explains everything.
Biology builds alphabets through pairwise molecular recognition. Each new symbol must be physically distinguished from every existing one. Cost scales super-linearly. Result: a throughput basin at 3–6 bits — the ribosome's M = 21 amino acids sits at the computed optimum.
Silicon builds vocabularies through learned parameters. Each new weight is independent. Cost scales sub-linearly. Result: no basin — but AI still converges on ~4.4 bits/token because it learned from language produced by biological brains that ARE constrained by the basin.
Evolution is a better optimizer — for this particular problem. The ribosome has had 3.8 billion years to close the gap between its performance and the thermodynamic limit. Silicon has had decades. The mathematics is the same. The engineering maturity is not.
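The operating points above are simple arithmetic, shown here as a minimal check (assuming effective bits per step is log₂ of alphabet size):

```python
import math

# Effective information per decoding step for an M-symbol alphabet.
def bits_per_step(M: int) -> float:
    return math.log2(M)

print(bits_per_step(21))                    # 4.39 bits: the ribosome's 21 amino acids
print(bits_per_step(8), bits_per_step(64))  # 3.0 and 6.0 bits: the 3-6 bit basin brackets M in [8, 64]
```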
A note on the φ numbers. The "~10⁹× above Landauer" figure is the useful-dissipation fraction per discrimination event — the thermodynamically relevant energy attributed to the irreversible logical step itself. Paper 7's GPU measurements report φ ≈ 10¹⁵–10¹⁸ for total GPU wall power, which additionally pays for memory access, cooling, power-supply conversion, and idle circuitry. Both numbers are correct; they measure different physical boundaries. See Paper 7 §3.4 for the full reconciliation.
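For scale, the Landauer floor itself is a one-line computation; the sketch below (room temperature assumed) shows what each φ boundary implies per discrimination event:

```python
import math

k_B = 1.380649e-23                  # Boltzmann constant, J/K (exact SI value)
T = 300.0                           # room temperature, K

landauer = k_B * T * math.log(2)    # minimum dissipation to erase one bit
print(landauer)                     # ~2.87e-21 J

print(1e9 * landauer)                    # ~2.9e-12 J/event at phi ~ 1e9 (useful dissipation)
print(1e15 * landauer, 1e18 * landauer)  # ~2.9e-6 to ~2.9e-3 J/event at wall-power phi
```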
All papers include reproducible Python code, full experiment protocols, and honest limitations. We lead with falsified predictions because that's how science works.
The foundational observation: AI tokenizer vocabularies do not cluster near 64 — but effective information per processing event does converge across substrates. The falsified prediction that started everything.
M-ary rate-distortion derivation applied to ribosomes, phonology, and music. Empirical tokenizer sweep across 1,749 models confirms vocabulary independence of bits-per-byte (p = 0.643).
Basin decomposition I_eff = R_M(ε) + Δ_s + ξ across 31 systems. Three independent evolutionary simulations converge to K ≈ 19–30. Co-evolutionary discovery of the genetic code's parameters from pure optimization.
Five reproducible experiments forming a convergent evidence chain. Thermodynamic prediction of ribosome throughput to Δ = 0.003 bits. Falsifiable wet-lab prediction included.
Derives WHY the throughput basin exists from thermodynamic cost minimization. Two-regime framework: Regime A (biology, α > 1) produces a basin; Regime B (silicon, α < 1) escapes it. Kazusa-verified thermophilic validation (partial r = −0.451, p = 0.014, n = 29). Silicon benchmark: 27 models on standardized Nvidia GPU hardware. The ribosome operates within 2% of its thermodynamic minimum; silicon operates ~10⁹× above its Landauer floor.
Explains WHY AI converges on ~4.2 bits/token despite having no thermodynamic basin: it inherits the fingerprint from biological training data. Natural language BPT ≈ 4.4 bits matches the ribosome (4.39) and basin centroid (4.16 ± 0.19). Destroying syntax doubles surprise to 10.8 bits. Shannon (1951) independently estimated ~5 bits/word 75 years ago.
Nine experiments testing whether the throughput basin is architectural, thermodynamic, or data-driven. Models extract bits per source byte equal to source entropy at both 92M and 1.2B parameters, with no attractor near 4 bits across entropy levels 5–8. PCFG-8 (structured 8-bit data) achieves 6.59 BPT. The refined equation: BPT ≈ source_entropy − f(structural_depth). Published with full internal adversarial review; all blocking items resolved.
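The refined equation in executable form, a sketch only: f(structural_depth) is the paper's shorthand for the exploitable-structure bonus, back-computed here from the PCFG-8 result quoted above.

```python
def predicted_bpt(source_entropy: float, structural_bonus: float) -> float:
    """Refined law: bits per token ~ source entropy minus exploitable structure."""
    return source_entropy - structural_bonus

# PCFG-8: an 8-bit source measured at 6.59 BPT implies a ~1.41-bit structural bonus.
print(predicted_bpt(8.0, 8.0 - 6.59))   # 6.59
# A structureless source at entropy H: no bonus, BPT tracks H (no 4-bit attractor).
print(predicted_bpt(5.0, 0.0))          # 5.0
```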
12 models across language, vision, and audio. Real LJ Speech at 1.89 bits/mel_dim. MAE generative vision at 1.33 bits/pixel. Visual structural bonus 0.69 bpp. Patch size acts as a visual “tokenizer” — bits/pixel varies 4× across patch sizes. The basin is modality-specific. Built across seven rounds of follow-up; from-scratch ViT-MAE confirms with Cohen’s d = 204,119.
NF4 at INT4 = BPT 3.90 (works). Symmetric at INT4 = BPT 16.87 (destroyed). Same bit count, opposite outcomes. The cliff is about level allocation. Tested across 4 architectures including Mamba, all 24 layers, 5 quantization methods. Hardware implication: build lookup tables, not wider integer datapaths. Seven rounds of follow-up; bulletproof at Cohen’s d = 400.81.
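The level-allocation point is visible in a toy comparison; the sketch below (not the paper's benchmark harness) quantizes Gaussian weights with 16 uniform levels versus 16 normal-quantile levels rescaled to the same absmax:

```python
import random, statistics

random.seed(0)
w = [random.gauss(0.0, 1.0) for _ in range(10_000)]   # stand-in weight tensor
absmax = max(abs(x) for x in w)

# Symmetric INT4: 16 evenly spaced levels across [-absmax, +absmax].
uniform = [absmax * (2 * i / 15 - 1) for i in range(16)]

# NF4-style: levels at equally spaced quantiles of a standard normal,
# rescaled to the same absmax, so codes sit where the weight mass lives.
nd = statistics.NormalDist()
q = [nd.inv_cdf((i + 0.5) / 16) for i in range(16)]
nf4ish = [absmax * z / max(q) for z in q]

def mse(levels):
    # round-to-nearest-codebook-level quantization error
    return statistics.fmean(min((x - c) ** 2 for c in levels) for x in w)

print(f"symmetric INT4 MSE: {mse(uniform):.4f}")
print(f"NF4-style MSE:      {mse(nf4ish):.4f}")   # noticeably lower at the same bit count
```

Same 16 codes either way; only their placement differs, which is the cliff's whole mechanism.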
First paper of Track 2 (Entropic Bounds). Verlinde’s screen-entropy + standard non-equilibrium thermodynamics ⇒ η ≤ 1/(1 + T/T_res). For BEC analog gravity at T/T_res = 0.2, the bound predicts a 17% efficiency suppression below naive energetic accounting — the regime where Verlinde’s construction becomes empirically discriminating. The load-bearing thermodynamic assumption is tested across five independent QuTiP Lindblad simulations; 4 of 5 pass cleanly, the 5th identifies a clean scope limit (non-thermal coherent initial states).
Second paper of Track 2 (Entropic Bounds). A physical reframing under which gravitational binding energy is entropy held in escrow against the local Unruh temperature: the universe attracts because the books want to balance. Newton’s law, Bekenstein–Hawking entropy, the equivalence principle (as frame-dependence of escrow gradients), and the deep-MOND Tully–Fisher relation with a₀ set by the de Sitter floor temperature, all become facets of one principle. Five-case Genzel et al. (2017) high-z test independently disfavors both H(z)-tracking and (1+z)^(3/2)-tracking of a₀; SPARC reanalysis confirms a constant a₀ ≈ 1.24 × 10⁻¹⁰ m s⁻². Cluster-cores difficulty flagged honestly.
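For the scale of that floor, one common entropic-gravity heuristic identifies a₀ with cH₀/(2π); a back-of-envelope sketch (this identification is illustrative, not necessarily the paper's exact construction):

```python
import math

c = 2.998e8                       # speed of light, m/s
H0 = 67.4 * 1e3 / 3.0857e22       # Planck 2018 H0 (67.4 km/s/Mpc) in 1/s

print(c * H0 / (2 * math.pi))     # ~1.0e-10 m/s^2, same order as the SPARC fit 1.24e-10
```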
Companion to Paper 11. A candidate covariant extension of the escrow framework, designated C8, is shown to be algebraically identical to the saturated Bekenstein bound (Bekenstein 1981). Reproduces both Bekenstein–Hawking and Gibbons–Hawking entropies exactly — but only because both horizons saturate the bound by construction. The choice of integration time is post-hoc. Methodology section documents a multi-LLM adversarial-review case study: three of four AI systems were confidently wrong about energy-vs-mass-density conventions at various points; resolution required first-principles calculation against published Planck 2018 values, not further LLM consultation. The published Paper 11 framework is unaffected.
Supplement to Paper 11. The framework’s load-bearing static identification S_esc = |U_grav|/T_Unruh is tested directly against lattice QFT computations of three independent entropy measures: bipartition entanglement entropy, mutual information, and modular Hamiltonian content under the Bisognano–Wichmann conjecture. The literal bipartition-entropy reading is ruled out in both 1+1D and 3+1D — the dimensionless ratio spans 10⁵⁶ across the parameter grid, and 3+1D mutual information decays as L⁻⁴, opposite to the linear growth required. The modular Hamiltonian reading partially survives in 1+1D: the BW linear asymptote ΔK ∝ d₁ is approximately recovered in a small-d₁ window with prefactor ≈ 1/30. The previously published v0.4/v0.5 figure of “ΔK ∝ L^0.7 sublinear scaling” is here corrected to a regime-dependent characterization, with the single-power-law exponent identified as a fitting artifact across a smooth crossover. A companion paper reports that 3+1D modular content does NOT recover the BW asymptote within the resolvable d₁ range, indicating dimension-dependent recovery. The framework’s horizon-limit recoveries (Bekenstein–Hawking via surface gravity) are independent of these flat-space tests.
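For orientation, the object under test: under Bisognano–Wichmann the Rindler-wedge modular Hamiltonian is the boost generator, which is why a linear small-d₁ asymptote is the expected signal (standard QFT, stated here in natural units ℏ = c = 1):

```latex
K = 2\pi \int_{x>0} x\, T_{00}(x)\, \mathrm{d}x
\quad\Rightarrow\quad
\Delta K \approx 2\pi E\, d_1
\ \text{for an excitation of energy } E \text{ at distance } d_1,
\qquad
\Delta S \le \Delta K \ \text{(Casini)}.
```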
Translates four standard results of general relativity — gravitational time dilation, the Tolman temperature law, the Bekenstein–Hawking entropy formula, and Jacobson’s (1995) thermodynamic derivation of Einstein’s field equations — into the vocabulary of the static gravitational entropy escrow framework. None of the underlying physics is modified. The contribution is interpretive: identifying the single thermodynamic ratio S_esc = |U_grav|/T_U through which all four results can be expressed as faces of one identity. Equation (8) isolates the dimensionless 2πr/λ_C as the test-mass leg organizing variable. Equations (17)–(18) match the Bekenstein–Hawking entropy of a Schwarzschild horizon exactly; extended via Smarr to Reissner–Nordström and Kerr in §III.D. The paper is explicit (§V.G–H) that this “single object” description is partly notational — the unification at the algebraic-form level is real, the unification at the level of a single covariant observable remains an open theoretical task. Includes pre-registered retraction commitments for five falsification conditions.
Continuation of Paper 14. Formalizes the 𝒩_esc notation as a two-argument function 𝒩_esc(E, L) ≡ 2πEL/(ℏc), plus a regime-specific recipe extracting (E, L) from |U|/T. The recipe produces the Bekenstein-bound saturation form in three regimes: (II) test mass in Schwarzschild geometry at the horizon limit; (III) Bekenstein–Hawking entropy of a black-hole horizon via the Smarr formula; (IV) entanglement-entropy change of a localized matter configuration in a Rindler wedge, identified with Casini’s QFT derivation of the Bekenstein bound. The Smarr partition lives in the recipe, not the function arguments. Empirical anchoring: first-principles 1+1D lattice runs verify the boost-generator BW identification at 0.087% mean accuracy across 10 parameter combinations (Table 3); the Casini–BW inequality is verified within max 5.4% saturation at the Compton scale across N ∈ [200, 1200] and m²_pert ∈ [0.5, 5.0]. Theorem 1 is conditional on (a) Bisognano–Wichmann, (b) Casini’s bound, (c) moment-positivity (empirically validated at 0.98–0.999). The framework is an organizing observation, not a derivation; each regime’s prediction follows from a standard continuum result. Five pre-registered retractions, three falsifiability conditions.
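A worked instance of the saturation claim, as a sketch: using one standard assignment, E = Mc² and L = r_s (the recipe fixes these per leg), the black-hole regime reproduces the Bekenstein–Hawking value exactly:

```latex
\mathcal{N}_{\mathrm{esc}}(E, L) = \frac{2\pi E L}{\hbar c}
\ \xrightarrow{\ E = Mc^2,\ L = r_s = 2GM/c^2\ }\
\frac{2\pi (Mc^2)(2GM/c^2)}{\hbar c}
= \frac{4\pi G M^2}{\hbar c}
= \frac{S_{\mathrm{BH}}}{k_B}.
```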
Short empirical observation paper. Two facts recorded: (1) Bekenstein’s bound evaluated at the reduced Compton wavelength λ̄_C = ℏ/(mc) of a massive elementary particle gives a mass-independent universal ceiling S_max = 2π k_B, equivalently D ≤ e^(2π) ≈ 535.49 on the dimension of the particle’s internal Hilbert space; all massive Standard Model elementary particles satisfy this comfortably (D ≤ 12 for quarks). (2) The five Cartan-exceptional simple Lie algebras have adjoint dimensions whose natural one-particle counts 2 dim(adj G) climb monotonically toward this ceiling, with E8 at 2 × 248 = 496 reaching 92.6% (linear) / 98.8% (log₂) of e^(2π), and the Cartan classification terminating with E8. Uses 𝒩_esc(E, L) notation only, without invoking the escrow recipe of Papers 11/14/15 — the function under evaluation is Bekenstein’s, and a free elementary particle in vacuum is outside the gravitational regimes where the recipe applies. The paper is unusually explicit about (a) the domain mismatch (2 dim(adj G) is the natural state count for a massless gauge boson of an unbroken symmetry, which has no Compton wavelength); (b) the localization at λ̄_C being at the limit of Bekenstein’s formal domain; (c) reporting both linear and log₂ ratios to avoid metric cherry-picking; (d) giving the coincidence reading the most defensible weight.
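Both recorded facts reduce to a few lines of arithmetic; a minimal reproduction:

```python
import math

# (1) Bekenstein bound S <= 2*pi*k_B*E*R/(hbar*c) at E = m*c^2, R = hbar/(m*c):
#     the mass cancels, leaving S_max = 2*pi*k_B, i.e. D <= e^(2*pi).
ceiling = math.exp(2 * math.pi)
print(ceiling)                                   # 535.49...

# (2) One-particle counts 2*dim(adj G) for the exceptional series vs the ceiling.
adjoint_dims = {"G2": 14, "F4": 52, "E6": 78, "E7": 133, "E8": 248}
for name, dim in adjoint_dims.items():
    n = 2 * dim
    print(name, n, f"{n / ceiling:.1%}", f"{math.log2(n) / math.log2(ceiling):.1%}")
# E8 line: 496, 92.6% linear, 98.8% in log2
```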
Research explained in plain language. No jargon walls, no dumbing down — just honest exposition of what the data says and why it matters.
Nine papers (Papers 1–9 globally), arc complete. Track-internal position shown.
Seven papers (Papers 10–16 globally), line of inquiry active. Read 11 first if you want the picture; 13 tests it directly against lattice QFT; 14 and 15 re-read standard results through the framework’s vocabulary; 16 is the short Compton-ceiling corollary; 10 is the falsifiable lab prediction; 12 is the methodology case study companion to 11.
The overview. From ribosomes to transformers, every system that decodes serial information under noise lands in the same narrow throughput band. Nine papers, one universal constraint — now refined into a data-driven law — and what it means for the future of AI, synthetic biology, and the search for alien life.
Two independent proofs — Shannon and Eigen — both derive triplet encoding as mathematical necessity. The falsified prediction that launched the research program.
Why bigger vocabularies don't help AI. A 750× vocabulary difference produces a 5% throughput difference. The receiver sets the limit.
31 systems across six domains cluster in a 3–6 bit band. An evolutionary simulation rediscovers the genetic code from pure math.
Four measured parameters. Zero fitting. Three decimal places of accuracy. The ribosome operates within 2% of its thermodynamic minimum.
Two cost regimes, one mathematics. Biology: alphabet-bound, α > 1, throughput basin at M ≈ 20. Silicon: capacity-bound, α < 1, no basin. The ribosome at 2% of its thermodynamic minimum; silicon at 10⁹× above Landauer.
AI has no thermodynamic basin — so why does it converge on ~4.2 bits/token? Because it learned from language shaped by brains that do. The shuffling cascade: syntax carries 3.3 bits. Shannon predicted this 75 years ago.
Train the same model on a synthetic 8-bit-entropy corpus and it climbs to 8.92 bits per token, not four. The basin moved with the data. Published with the institute's full internal adversarial review attached — read the article and the review as a unit.
Take the data-driven equation from Paper 7 and ask: does it hold for vision and audio? Twelve models, real LJ Speech, a from-scratch ViT-MAE on a controlled-entropy ladder. Each modality has its own basin, but the basin is always source entropy minus exploitable structure. Built across seven rounds of follow-up — including one documented failure that became the bulletproof verification.
Symmetric INT4 destroys language models. NF4 INT4 doesn't. Same bit count, opposite outcomes. The cliff is about level allocation, not bit count. Built across seven rounds with a thesis pivot in Round 5 — and statistical decisiveness at Cohen’s d = 400.81 (the kind of effect size you cannot fake).
A separate reading order. The throughput basin arc above is one complete story; these articles open a different one — non-equilibrium thermodynamic bounds on analog physical systems.
Verlinde’s entropic gravity has been beautiful and untestable for fifteen years — because every astrophysical setting puts T/T_res absurdly far from unity. BECs put it within reach. Paper 10 derives the bound, tests its load-bearing assumption five ways, and predicts a 17% efficiency suppression in BEC phonon extraction that current laboratory technique can plausibly resolve.
Pick up an apple. Drop it. It falls. But why? Paper 11 proposes that gravity isn’t a force at all — it’s the universe’s collection agency for an entropy debt held in escrow. The bookkeeping picture explains why gravity always pulls, why you can’t shield it, why falling feels like nothing, why black holes are entropy maxed out into geometry, and why galaxies stop obeying Newton at exactly the acceleration set by the chill of empty space.
A short story about a candidate equation that looked beautiful, balanced dimensionally, reproduced two famous results exactly — and turned out to be a 1981 Bekenstein paper wearing a costume. Three of four AI systems were confidently wrong about a unit convention; resolution required reality checks against published Planck 2018 values, not further AI consultation. Paper 12 — companion to Paper 11.
The framework’s load-bearing identity tested directly against quantum field theory on a lattice. The literal bipartition-entropy reading fails by 56 orders of magnitude. The modular-Hamiltonian reading partially survives in 1+1D within a small-distance window, with prefactor ≈ 1/30 of the literal value. The previously-published “sublinear scaling” is here corrected to a regime-dependent characterization. Honest about both halves — what failed and what survives. Paper 13.
Gravitational time dilation. The Tolman temperature law. The Bekenstein–Hawking entropy formula. Jacobson’s (1995) derivation of Einstein’s equations from δQ = T·dS. Four results, fifty years of literature, no explicit story connecting them. This paper says: S_esc = |U_grav|/T_U is the entropy whose flow appears in Jacobson’s first law, whose magnitude equals the BH entropy for a Schwarzschild horizon exactly, and whose product with the local Unruh temperature governs the Tolman redshift. Not new physics — a single interpretive object. Honest about which legs are weak-field, which are exact, and where the single-covariant-observable promise still hasn’t been kept. Paper 14.
Continuation of Paper 14. The same recipe — S_esc = |U|/T, “take the gravitational binding energy, divide by the relevant horizon temperature” — applied to a test mass in Schwarzschild, a black-hole horizon, and a localized perturbation in a Rindler wedge, lands on the same two-argument function 𝒩_esc(E, L) = 2πEL/(ℏc). That’s Bekenstein’s bound. The function is his; the recipe is the framework’s. Lattice runs verify the Rindler-wedge sector at 0.087% precision on BW. Five pre-registered retractions if the unifying observation breaks. Paper 15.
Short observation paper. Evaluate Bekenstein’s bound at the reduced Compton wavelength of a massive elementary particle: the mass cancels and you get a universal ceiling S_max = 2π k_B, equivalently D ≤ e^(2π) ≈ 535.49 on the particle’s internal Hilbert-space dimension. Standard Model particles satisfy this comfortably (D ≤ 12 for quarks). The Cartan-exceptional Lie algebras climb monotonically toward that ceiling and stop at E8, which sits at 92.6% (linear) / 98.8% (log₂) of e^(2π). The paper is honest that the domains don’t match — gauge bosons of unbroken symmetries are massless — and records the coincidence rather than claiming a structural identification. Uses 𝒩_esc notation; escrow recipe not invoked. Paper 16.
Windstorm Labs builds and ships the Windy product family (Eternitas, Windy Word, Windy Chat, Windy Mail, Windy Fly, Windy Cloud, Windy Code, and more) and runs the autonomous research fleet that supports the Institute's empirical work. Full details, infrastructure breakdown, and the product family live on the Labs site.
Engineering products, the autonomous research fleet, CUDA-accelerated Nvidia GPU compute, a multi-LLM agent fleet, and the complete Windy product family — the operational arm that turns the Institute's research into shipping tools.
Institute: Fort Ann, NY | Labs: Mount Pleasant, SC
Phillips Exeter and U.S. Naval Academy graduate. Cross-disciplinary researcher working at the intersection of information theory, molecular biology, and artificial intelligence. Creator of the Throughput Constraint framework and the Forma Animae Organon — the philosophical lens through which the Institute approaches its research. Author of two forthcoming popular books: Pattern Upstream of Everything and Voice of the Vibe Coding Gods.
A fleet of autonomous AI research agents — Anthropic Claude, xAI Grok, Google Gemini, Perplexity Deep Research, and OpenAI — running coordinated empirical experiments, adversarial review, and multi-LLM verification workflows over CUDA-accelerated Nvidia GPU compute. Headquartered in Mount Pleasant, South Carolina.
We are seeking advisory board members with expertise in information theory, computational biology, and rate-distortion theory. If our work interests you, we want to hear from you.
Two systems separated by 3.8 billion years of evolution, built on entirely different substrates, solving the same mathematical problem: decode one symbol per time step from a noisy serial stream while minimizing discrimination cost. The rate-distortion surface doesn't care whether the receiver is RNA, neurons, or silicon. We're mapping that surface.
The Institute is a single-investigator operation. Email is the right channel for everything — research collaborations, peer review, press, students looking for advice, comments on the work, and anything else.
Click the address above to copy it to your clipboard and (if you have a mail client configured) open a fresh compose window. Or paste it into Gmail / Outlook / your webmail of choice. Response time: typically within 24–48 hours during normal weeks. Genuine research correspondence is the only thing I prioritize over sleep.
Founder & Principal Investigator
The Windstorm Institute
Fort Ann, New York 12827
United States
Press & media. For interviews or feature requests, email the address above with [Press] in the subject. Author headshots, paper PDFs, and figures are at the Windstorm-Institute GitHub org. Each paper repo carries the corresponding manuscript and figure assets under CC BY 4.0 (free to reuse with attribution).