DataCenterNews US - Specialist news for cloud & data center decision-makers

SandboxAQ tool boosts AI security by tracking hidden systems

Fri, 5th Dec 2025

SandboxAQ has introduced a security posture management tool designed to help organisations monitor and control the use of artificial intelligence in their technology environments.

The tool, called AQtive Guard AI-SPM, addresses risks stemming from 'shadow AI', where employees or departments deploy AI without oversight from central security teams.

AI visibility

AQtive Guard can automatically locate all AI models and agents in use across cloud environments and code bases. This capability is intended to give companies a comprehensive understanding of where AI is operating, including assets not formally tracked by IT departments. Shadow AI has become a key concern as businesses integrate AI at a pace that often outstrips existing monitoring processes.
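The article does not describe how AQtive Guard performs this discovery. As a purely illustrative sketch of the general idea, the snippet below scans a code base for imports of well-known AI SDKs (the SDK names and approach are assumptions, not SandboxAQ's method):

```python
import re
from pathlib import Path

# Hypothetical shadow-AI discovery heuristic (illustrative only):
# flag source files that import well-known AI SDKs so a security team
# can build an inventory of untracked AI usage.
AI_SDK_PATTERN = re.compile(
    r"^\s*(?:import|from)\s+(openai|anthropic|transformers|langchain)\b",
    re.MULTILINE,
)

def find_ai_usage(root: str) -> dict[str, list[str]]:
    """Map each Python file under `root` to the AI SDKs it imports."""
    findings: dict[str, list[str]] = {}
    for path in Path(root).rglob("*.py"):
        sdks = sorted(set(AI_SDK_PATTERN.findall(path.read_text(errors="ignore"))))
        if sdks:
            findings[str(path)] = sdks
    return findings
```

A production tool would go much further (binary and dependency inspection, cloud API inventories, runtime telemetry), but even a simple source scan like this surfaces AI usage that central IT may not know about.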

Recent findings from SandboxAQ indicate that 79% of organisations have deployed AI in production environments. However, 72% have not conducted a full AI security assessment, and just 6% have developed a robust AI-native security framework. Many organisations expressed concern that AI systems could accidentally expose credentials and secrets, yet only 39% reported using dedicated tooling to address the issue.

Risk assessment

The tool is designed to highlight weaknesses specific to AI systems, such as vulnerabilities to prompt injection, data leakage and unauthorised access. It assesses each AI asset for insecure dependencies and potential exposures, offering security teams a more detailed understanding of risk than traditional posture management tools.
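To illustrate what prompt injection looks like in practice, here is a minimal, assumption-laden heuristic filter that flags common injection phrasings before input reaches a model (a sketch of the general risk, not SandboxAQ's technique):

```python
import re

# Illustrative-only prompt-injection heuristic (not the product's approach):
# flag user input containing common injection phrasings before it is
# forwarded to an LLM. Pattern list is a hypothetical example.
INJECTION_PATTERNS = [
    re.compile(r"ignore\s+(all\s+)?previous\s+instructions", re.IGNORECASE),
    re.compile(r"disregard\s+your\s+system\s+prompt", re.IGNORECASE),
    re.compile(r"reveal\s+(your\s+)?(system\s+prompt|secrets?|credentials?)", re.IGNORECASE),
]

def looks_like_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection phrasing."""
    return any(p.search(user_input) for p in INJECTION_PATTERNS)
```

Real defences are far more involved (model-level guardrails, output filtering, privilege separation for agents), which is why posture-management tools assess these exposures per asset rather than relying on keyword filters alone.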

SandboxAQ adapts its cryptographic scanning technology for use in AI environments, employing deep-inspection methods to discover overlooked or hidden assets. This approach aims to provide end-to-end analysis from the code level up to cloud deployment, with ongoing surveillance for anomalies or security incidents linked to AI models and agents.

Compliance and monitoring

AQtive Guard allows for the enforcement of organisational AI policies and compliance with relevant regulations. Companies can implement tailored governance frameworks and access controls, ensuring that AI use remains compliant with both external legal requirements and internal security standards. The tool also supports real-time monitoring of AI data pipelines to detect, investigate, and respond to emerging threats.

Industry challenges

The introduction of AI into enterprise operations has presented new challenges for security professionals. State-sponsored cyberattackers have been reported to exploit commercial AI models, automating espionage and intrusion campaigns on a large scale. These developments have led to increasing industry demand for visibility into AI usage and tools tailored to the unique security needs of artificial intelligence systems.

"AI is transforming a lot of industries and simultaneously expanding the attack surface faster than traditional security tools can keep up," said Jack Hidary, CEO, SandboxAQ.

"We're seeing attackers weaponize AI tools to exfiltrate sensitive data, manipulate internal systems, and automate large-scale intrusions. If organizations don't have clear visibility into how AI and agents are being used across their environment, they're operating blindly. Security teams need to act now before an unmanaged AI system becomes the source of their next breach," said Hidary.