The integration of large language models (LLMs) into mobile and edge devices heralds a paradigm shift toward personalized, low-latency AI experiences. At the same time, it raises significant concerns about user privacy, regulatory compliance, and adversarial robustness. To date, established methodologies for assessing privacy risks (e.g., red teaming, privacy probes) have not been adequately extended to resource-constrained environments such as smartphones or embedded systems. We introduce a comprehensive multilayer framework for evaluating and mitigating the privacy risks of on-device LLMs. The framework includes synthetic data injection, differential audit trails, and consent-aware tracing mechanisms, aligned with developments in privacy regulation such as the EU AI Act and U.S. state-level data protection laws. Drawing on real-world case studies ranging from messaging to voice assistant platforms, we examine implementation challenges, regulatory incoherence, and technical approaches that can help bridge the gap between theoretical compliance and actual deployment. Together, these studies underscore that embedding regulatory principles within LLM system architectures is a foundational step toward responsible AI in real-time, privacy-sensitive settings.
Keywords: On-device LLMs, Privacy-centric AI, AI regulation, EU AI Act, Data protection, Consent-aware tracing, Red teaming, Edge AI, Regulatory alignment, Privacy leakage
IRE Journals:
Deepak Kejriwal, Saurabh Kansal
"Toward Responsible Deployment of On Device Language Models: A Framework for Privacy Centric Evaluation and Regulatory Alignment," Iconic Research And Engineering Journals, Volume 7, Issue 7, 2024, pp. 701-714
IEEE:
D. Kejriwal and S. Kansal, "Toward Responsible Deployment of On Device Language Models: A Framework for Privacy Centric Evaluation and Regulatory Alignment," Iconic Research And Engineering Journals, vol. 7, no. 7, pp. 701-714, 2024