Before using an AI system, it’s important to consider where it will run, what data it handles, and potential risks. Basic safety features aren’t enough, so thorough security testing—called red teaming—is needed to find weaknesses. Attackers use clever tricks, so protecting AI requires the right tools and a mindset focused on understanding and managing risks for safe AI use.