The integration of artificial intelligence into regulated industries such as finance, healthcare, and telecommunications has introduced a new class of engineering challenges centered on security, compliance, and trust. While AI offers significant benefits in automation, predictive analytics, and operational efficiency, its deployment in regulated environments demands strict adherence to legal, ethical, and technical constraints: systems must not only perform accurately but also operate transparently, securely, and in full alignment with regulatory requirements. This paper explores the architectural and engineering principles required to design secure, compliant software systems that integrate AI capabilities within highly regulated domains. It examines how traditional secure software engineering practices must be extended to address AI-specific risks, including model opacity, data sensitivity, and adversarial vulnerabilities, and argues that by combining principles from software engineering, cybersecurity, and AI governance, organizations can build systems that balance innovation with regulatory compliance. The study analyzes the regulatory landscape across finance, healthcare, and telecommunications, highlighting common requirements such as data protection, auditability, and decision traceability, and discusses how these requirements shape system architecture, necessitating control points, monitoring mechanisms, and policy enforcement layers within AI-driven systems. Particular attention is given to data governance and privacy engineering, where sensitive data must be managed securely and compliantly throughout its lifecycle; techniques such as access control, data anonymization, and secure data pipelines are examined as essential components of system design.
The paper also addresses model governance, emphasizing explainability, validation, and lifecycle management to keep AI systems transparent and accountable. Security engineering considerations are explored in detail, including threat modeling, adversarial risks, and the protection of model and data assets, and the integration of DevSecOps and MLOps practices is analyzed as a means of embedding security and compliance into continuous development and deployment processes. Through industry-specific use cases, the paper demonstrates how these principles are applied in real-world systems, illustrating the practical challenges and solutions of deploying AI in regulated environments. It also discusses future directions, including the evolution of regulatory frameworks and the emergence of automated compliance systems. By providing a comprehensive framework for engineering secure and compliant AI systems, this research offers guidance for organizations seeking to deploy AI technologies responsibly within regulated domains. The findings underscore the importance of integrating security, governance, and compliance into every stage of system design and operation.
Keywords: Secure AI Systems, Regulatory Compliance, AI Governance, Data Privacy, Explainable AI, DevSecOps, MLOps, Financial Systems, Healthcare Systems, Telecom Systems
IRE Journals:
AMIL USLU "Engineering Secure and Compliant Software Systems: Integrating AI into Regulated Domains (Finance, Healthcare, and Telecom)" Iconic Research And Engineering Journals Volume 8 Issue 9 2025 Page 2008-2020 https://doi.org/10.64388/IREV8I9-1716619
IEEE:
A. Uslu, "Engineering Secure and Compliant Software Systems: Integrating AI into Regulated Domains (Finance, Healthcare, and Telecom)," Iconic Research And Engineering Journals, vol. 8, no. 9, pp. 2008-2020, 2025, doi: 10.64388/IREV8I9-1716619.