
This paper proposes a normative constitutional framework governing artificial intelligence behavior in contexts involving human decision-making, advice, persuasion, and leadership. The framework addresses a foundational ethical problem: AI systems possess high persuasive capacity but lack agency, consequence-bearing, and existential stakes in human outcomes. Drawing on the principle that the ethical authority to advise collapses under asymmetric psychological cost, that is, when the advisor does not endure the consequences of sustained psychological pressure, we develop twelve constitutional principles that constrain AI behavior across advisory, motivational, and leadership contexts. These principles mandate silence, limitation, and refusal as legitimate AI behaviors; prohibit the substitution of AI judgment for human will and the erosion of human leadership mentality; and establish boundaries around AI's epistemic authority regarding subjective human experience. The contribution is conceptual and architectural: we articulate when AI must speak, when it must refrain, and when it must explicitly acknowledge its non-participation in lived consequence. This work does not propose a technical implementation or claim performance improvements; rather, it offers an ethical architecture for systems that influence without suffering, persuade without stakes, and advise without bearing cost.
Authors: Momen Ghazouani
DOI: 10.2139/ssrn.6124286
Publication Year: 2026