(1) Architecting Trustworthy LLMs: A Unified TRUST Framework for Mitigating AI Hallucination. JCSFT 2025, 1 (3), 1–15. https://doi.org/10.63313/JCSFT.9019.