
This paper introduces Deep Transducers, a novel class of neural architectures designed to extract and compress implicit structure rather than to predict outputs directly. The central contribution is the Density Mechanism, a prototype-assignment system that replaces conventional attention with Mahalanobis-distance-based density scoring, defining geometric regions of influence in representation space. This mechanism also yields an Epistemic Confidence signal, allowing the system to recognize the boundaries of its own knowledge by flagging inputs that do not align with any known prototype. Empirical evaluations show that Deep Transducers achieve a 2.97x improvement in structure consistency over standard Transformer baselines. These results establish a proof of concept that density-based structural assignment provides the inductive bias needed to learn transferable, generative latent representations.
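To make the abstract's description concrete, below is a minimal sketch of prototype assignment via Mahalanobis-distance density scoring. The class name DensityScorer, the Gaussian-style density exp(-0.5 d^2), and the use of the best prototype density as an Epistemic Confidence proxy are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

class DensityScorer:
    """Hypothetical sketch: each prototype k has a mean mu_k and covariance
    Sigma_k that define a geometric region of influence in representation space."""

    def __init__(self, means, covariances):
        self.means = np.asarray(means)                    # (K, D) prototype centers
        self.precisions = np.linalg.inv(covariances)      # (K, D, D) inverse covariances

    def mahalanobis_sq(self, x):
        # Squared Mahalanobis distance of embedding x to every prototype.
        diffs = x - self.means                            # (K, D)
        return np.einsum("kd,kde,ke->k", diffs, self.precisions, diffs)

    def densities(self, x):
        # Unnormalized Gaussian-style density score per prototype.
        return np.exp(-0.5 * self.mahalanobis_sq(x))

    def assign(self, x):
        # Prototype assignment plus an epistemic-confidence proxy:
        # the best density is near zero when x lies far from every prototype.
        d = self.densities(x)
        return int(np.argmax(d)), float(np.max(d))


# Toy usage: two 2-D prototypes, one in-distribution query, one outlier.
scorer = DensityScorer(
    means=[[0.0, 0.0], [5.0, 5.0]],
    covariances=np.stack([np.eye(2), np.eye(2)]),
)
print(scorer.assign(np.array([0.1, -0.2])))    # assigned to prototype 0, high confidence
print(scorer.assign(np.array([20.0, -20.0])))  # near-zero confidence: outside known regions
```

Under these assumptions, an input far from every prototype receives a near-zero best density, which corresponds to the "outside known knowledge" signal the abstract describes.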
Authors: Momen Ghazouani
DOI: https://doi.org/10.5281/zenodo.19702972
Publish Year: 2026