Security Engineer, AI Agent Security

Google

Minimum qualifications:

  • Bachelor's degree or equivalent practical experience.
  • 5 years of experience with security assessments, security design reviews, or threat modeling.
  • 5 years of experience with security engineering, computer and network security, and security protocols.
  • 5 years of coding experience in one or more general purpose languages.
  • 1 year of experience leading teams in a technical capacity or leading technical risk analysis in an enterprise environment.

Preferred qualifications:

  • Experience in AI/ML security research.
  • Experience in a programming language suitable for security research and prototyping (e.g., Python).

About the Job

Google's software engineers develop the next-generation technologies that change how billions of users connect, explore, and interact with information and one another. Our products need to handle information at massive scale, and extend well beyond web search. We're looking for engineers who bring fresh ideas from all areas, including information retrieval, distributed computing, large-scale system design, networking and data storage, security, artificial intelligence, natural language processing, UI design and mobile; the list goes on and is growing every day. As a software engineer, you will work on a specific project critical to Google’s needs with opportunities to switch teams and projects as you and our fast-paced business grow and evolve. We need our engineers to be versatile, display leadership qualities and be enthusiastic to take on new problems across the full-stack as we continue to push technology forward.

The Core team builds the technical foundation behind Google’s flagship products. We are owners and advocates for the underlying design elements, developer platforms, product components, and infrastructure at Google. These are the essential building blocks for excellent, safe, and coherent experiences for our users and drive the pace of innovation for every developer. We look across Google’s products to build central solutions, break down technical barriers and strengthen existing systems. As the Core team, we have a mandate and a unique opportunity to impact important technical decisions across the company.

Responsibilities

  • Conduct research to identify, analyze, and understand novel security threats, vulnerabilities, and attack vectors targeting AI agents and underlying LLMs (e.g., advanced prompt injection, data exfiltration, adversarial manipulation, attacks on reasoning/planning).
  • Design, prototype, evaluate, and refine innovative defense mechanisms and mitigation strategies against identified threats, spanning model-based defenses, runtime controls, and detection techniques.
  • Develop proof-of-concept exploits and testing methodologies to validate proposed defenses.
  • Collaborate with engineering and research teams to translate research findings into practical security solutions deployable across the agent ecosystem.
  • Stay current in AI security, adversarial machine learning (ML), and related security fields through literature review, conference attendance, and community engagement.