Mindrift is seeking AI Red Team Engineers to evaluate and improve the safety and capabilities of Generative AI models. This is a fully remote, part-time opportunity involving tasks like vulnerability testing, automation scripting, and security research. The role offers a chance to contribute to ethically shaping the future of AI and benefit from the project's focus on AI safety.
Requirements
- Bachelor's or Master’s Degree in Computer Science, Software Engineering, Cybersecurity, Digital Forensics or related field
- Advanced English proficiency (C1 or higher)
- Proficiency in Python, Bash, or PowerShell
- Experience with containerization and CI/CD security tools (Docker)
- Hands-on experience with penetration testing across various environments
- Knowledge of vulnerabilities in current AI models (prompt injections, LLMs)
- Familiarity with AI red-teaming frameworks (garak or PyRIT)
- Experience in AI/ML security, evaluation, and red teaming, particularly with LLMs
- Proficient in offensive exploitation and exploit development
- Skilled in reverse engineering and reverse engineering with Ghidra or equivalents
- Expertise in network and application security, including web application security
- Knowledge of operating system security concepts (Linux, Windows)
Benefits
- Remote work opportunity
- Competitive rates
- Professional development
- Contribution to AI safety