January 2026 was a wake-up month for enterprise security teams. In a single week, CERT-In released three high-severity ...
Two high-severity vulnerabilities in Chainlit, a popular open-source framework for building conversational AI applications, ...
Researchers found that popular Model Context Protocol (MCP) servers, which are integral components of AI services, carry ...
High-severity flaws in the Chainlit AI framework could allow attackers to steal files, leak API keys, and perform SSRF attacks; ...
Chainlit is widely used to build conversational AI applications and integrates with popular orchestration and model platforms ...
Familiar bugs in a popular open-source framework for AI chatbots could give attackers dangerous powers in the cloud.
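To make the cloud risk concrete, here is a generic sketch, assuming a handler that fetches an attacker-supplied URL without validation. This is not Chainlit's actual code; the handler, metadata path, and mitigation are illustrative, showing why an SSRF primitive is dangerous on a cloud instance.

```python
# Generic SSRF illustration (NOT Chainlit's code): a handler that fetches
# whatever URL the user supplies, with no allow-list or host validation.
import ipaddress
import socket
from urllib.parse import urlparse

import requests

def fetch_preview(url: str) -> str:
    # Classic SSRF primitive: the server fetches an attacker-controlled URL.
    return requests.get(url, timeout=5).text

# In the cloud, that same primitive reaches the instance metadata service,
# which hands out temporary IAM credentials (AWS IMDSv1 path shown here).
METADATA = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

def steal_cloud_credentials() -> str:
    role = fetch_preview(METADATA).strip()   # first request lists the role name
    return fetch_preview(METADATA + role)    # second request returns temporary keys

# One mitigation sketch: resolve the target host and refuse private,
# link-local, or loopback addresses before fetching.
def is_safe(url: str) -> bool:
    host = urlparse(url).hostname or ""
    try:
        addr = ipaddress.ip_address(socket.gethostbyname(host))
    except (socket.gaierror, ValueError):
        return False
    return not (addr.is_private or addr.is_link_local or addr.is_loopback)
```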
Threat actors have been performing LLM reconnaissance, probing proxy misconfigurations that leak access to commercial APIs.
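A hedged sketch of what that reconnaissance can look like: a prober sends an unauthenticated, OpenAI-style completion request to a suspected proxy and checks whether the proxy supplies its own upstream API key. The URL, model name, and function are hypothetical assumptions mirroring the pattern described, not any specific actor's tooling.

```python
# Sketch of the reconnaissance pattern described above: test whether a reverse
# proxy forwards unauthenticated requests to a commercial LLM API, meaning the
# proxy injects its own upstream API key. URL and model name are hypothetical.
import requests

def probe_open_llm_proxy(base_url: str) -> bool:
    payload = {
        "model": "gpt-4o-mini",  # any plausible model name
        "messages": [{"role": "user", "content": "ping"}],
        "max_tokens": 1,
    }
    try:
        # Note: the prober sends no Authorization header at all.
        resp = requests.post(f"{base_url}/v1/chat/completions",
                             json=payload, timeout=10)
    except requests.RequestException:
        return False
    # A 200 with a completion body means the upstream key was supplied by the
    # proxy itself: exactly the misconfiguration attackers are hunting for.
    return resp.status_code == 200 and '"choices"' in resp.text

# Defenders can run the same check against their own endpoints, e.g.:
# probe_open_llm_proxy("https://llm-proxy.example.internal")
```

Running the same probe against your own LLM gateways is a quick way to confirm they reject requests that arrive without credentials.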
The assessment, conducted in December 2025, compared five of the best-known vibe coding tools: Claude Code, OpenAI ...
Researchers with Cyata and BlueRock uncovered vulnerabilities in MCP servers from Anthropic and Microsoft, feeding ongoing security concerns about the dual nature of MCP and other agentic AI tools ...
AI-generated code can introduce subtle security flaws when teams over-trust automated output. Intruder shows how an AI-written honeypot introduced hidden vulnerabilities that were exploited in attacks ...
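As a hedged illustration of the kind of subtle flaw that slips through review (not the code from the Intruder write-up), the sketch below shows a file-serving helper that looks idiomatic but allows path traversal, alongside a safer variant. The directory and function names are assumptions.

```python
# Not the code from the Intruder write-up; a generic example of the kind of
# subtle flaw that slips past review when AI output is over-trusted.
import os
from pathlib import Path

UPLOAD_DIR = Path("/srv/app/uploads")

# Looks reasonable, but os.path.join discards UPLOAD_DIR entirely when the
# attacker supplies an absolute path, and "../" sequences escape it otherwise.
def read_upload_insecure(filename: str) -> bytes:
    return Path(os.path.join(UPLOAD_DIR, filename)).read_bytes()

# Safer: resolve the final path and verify it still lives under UPLOAD_DIR.
def read_upload(filename: str) -> bytes:
    target = (UPLOAD_DIR / filename).resolve()
    if not target.is_relative_to(UPLOAD_DIR.resolve()):   # Python 3.9+
        raise PermissionError("path escapes the upload directory")
    return target.read_bytes()
```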
Founded by elite offensive security and AI research leaders, the AI pentesting platform thinks like a real attacker and uncovers ...