AIGuardXplore: Intelligent Model Vulnerability Inspector
Authors:
S.Saravana Kumar,
Department of CSE(Cyber Security),
Dr.Mahalingam College of Engineering
and Technology, Coimbatore, India.
K.S.Kavishvar,
Department of CSE(Cyber Security),
Dr.Mahalingam College of Engineering
and Technology, Coimbatore, India.
V.Dharanivasan,
Department of CSE(Cyber Security),
Dr.Mahalingam College of Engineering
and Technology, Coimbatore, India.
dharanivasanv2@gmail.com
R.Banushree,
Department of CSE(Cyber Security),
Dr.Mahalingam College of Engineering
and Technology, Coimbatore, India.
Abstract- As enterprises increasingly adopt Large Language Models (LLMs) in their infrastructure, the lack of transparency in automated security guardrails creates a significant weakness. This is compounded by existing solutions that focus primarily on automated blocking while providing little granular visibility to the security operators who must audit incidents or tune defensive policies. To address this gap, the AIGuardXplore framework was designed as a high-fidelity visual observability solution that enables real-time monitoring of AI security events. Through a reactive, full-stack architecture built with Vite, React, and Drizzle ORM, AIGuardXplore aggregates information into a single consolidated view for evaluating prompt injections, data leaks, and policy violations. AIGuardXplore introduces a UI paradigm designed with high-contrast colours and reduced cognitive load, so that Security Operations Centre (SOC) personnel can quickly recognize incident data as it occurs and easily differentiate automated guardrail logic from actual incidents.
Keywords: AI Guardrails, LLM Security, Real-time Observability, Prompt Injection Defense, Visual Analytics, Full-stack Telemetry.
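The abstract's central idea, aggregating prompt-injection, data-leak, and policy-violation events into one consolidated view while keeping automated guardrail decisions distinct from confirmed incidents, can be sketched as follows. This is a minimal TypeScript illustration; the event shape and `aggregate` function are hypothetical and not taken from the AIGuardXplore codebase.

```typescript
// Hypothetical shape of one AI security telemetry event.
type EventCategory = "prompt_injection" | "data_leak" | "policy_violation";

interface SecurityEvent {
  id: string;
  category: EventCategory;
  // Distinguishes an automated guardrail decision from a confirmed incident.
  automated: boolean;
  timestamp: number; // Unix epoch milliseconds
}

interface CategorySummary {
  automated: number;
  incidents: number;
}

// Roll events up into per-category counts, split by origin,
// mirroring the consolidated view a SOC dashboard would render.
function aggregate(events: SecurityEvent[]): Record<EventCategory, CategorySummary> {
  const summary: Record<EventCategory, CategorySummary> = {
    prompt_injection: { automated: 0, incidents: 0 },
    data_leak: { automated: 0, incidents: 0 },
    policy_violation: { automated: 0, incidents: 0 },
  };
  for (const e of events) {
    if (e.automated) summary[e.category].automated++;
    else summary[e.category].incidents++;
  }
  return summary;
}

// Illustrative sample data only.
const sample: SecurityEvent[] = [
  { id: "e1", category: "prompt_injection", automated: true, timestamp: 1700000000000 },
  { id: "e2", category: "prompt_injection", automated: false, timestamp: 1700000001000 },
  { id: "e3", category: "data_leak", automated: true, timestamp: 1700000002000 },
];
const view = aggregate(sample);
```

In a real deployment, the events would be persisted through Drizzle ORM and streamed to the React front end; the split counts are what let an operator tell at a glance whether activity reflects guardrail automation or a genuine incident.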