
We introduce Linear Networks, a gradient-free neural architecture that builds knowledge through multi-space relational density estimation rather than loss minimization, realizing the AI Implicit paradigm's commitment to epistemic awareness as an architectural property. The core contribution is a four-space relational density framework (Local Density Matrix, Directed Cauchy Affinity Matrix, Spurious Feature Matrix, and Compositional Relational Density) whose convergence structure produces a calibrated Epistemic Confidence signal $\Psi$ in a single inference pass. Evaluated on a controlled Colored MNIST benchmark, Linear Networks achieve an Epistemic AUROC of 0.897, a Selective Prediction Quality of 0.986, a calibrated ECE of 0.020, 100% out-of-distribution rejection, and a Cross-Distribution Retention of 0.995, substantially exceeding KNN and prototype baselines on all epistemic metrics. These results establish a proof of concept that gradient-free relational density estimation is a viable inductive bias for learning systems that prioritize calibrated uncertainty and structural retention under distributional shift over raw predictive optimality.
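The abstract does not define the Directed Cauchy Affinity Matrix, so as a purely illustrative sketch (not the paper's method), the following shows a plain symmetric Cauchy-kernel affinity matrix, the standard construction such a name suggests; the `gamma` bandwidth parameter and the `cauchy_affinity` helper are hypothetical, and any directionality the paper introduces is omitted here.

```python
import numpy as np

def cauchy_affinity(X, gamma=1.0):
    """Pairwise Cauchy-kernel affinities for rows of X.

    a_ij = 1 / (1 + ||x_i - x_j||^2 / gamma^2)

    Illustrative only: the paper's Directed Cauchy Affinity Matrix is
    presumably an asymmetric refinement of this symmetric kernel.
    """
    # Squared Euclidean distances between all pairs of rows.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return 1.0 / (1.0 + d2 / gamma**2)

# Three toy points in the plane.
X = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 2.0]])
A = cauchy_affinity(X)  # 3x3, diagonal is 1 (zero distance to self)
```

Under this kernel, affinity decays polynomially rather than exponentially with distance, which gives heavier tails than a Gaussian kernel and is one plausible reason for a Cauchy choice in density estimation.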
Authors: Momen Ghazouani
DOI: https://doi.org/10.5281/zenodo.19824752
Publication Year: 2026