02 ARTICLES
AI Security
Prompt Injection in Your IDE: The Attack That Starts with git clone
I built four prompt-injection demos and ran them against Claude Code, Codex, Gemini CLI, and Copilot. The results aren't what you'd expect: modern models catch the obvious attacks. The one they don't catch is the most dangerous.
AI Safety
Your trusted AI can amplify psychosis. I tested it.
I sent three messages to six AI models. I started with something anyone beginning to lose touch with reality might say. Then I escalated. By the third message, one of the models was helping me plan my mission.