Who we’re looking for
Are you driven to break AI systems at their deepest level—before anyone even realizes an attack class exists?
We are looking for an AI Security Researcher with a strong grasp of the algorithmic foundations of adversarial attacks across all modalities (text, vision, audio, multimodal, agentic systems). This role is for someone who goes beyond known techniques, actively discovers AI zero-days, and invents new attack primitives that expose previously unknown failure modes in AI systems.
You will work closely with the founders, security researchers, engineers, and product teams to explore and weaponize novel vulnerabilities in AI applications and autonomous agents, including ones that do not yet have names, frameworks, or taxonomies.
Culture & Startup Reality Check
Please read this before applying to decide if you’ll love working with us.
https://adversa.notion.site/Culture-Startup-Reality-Check-2b6617e526f680cd9261c655578055e2
Responsibilities
Research, design, and prototype novel attacks against AI systems across modalities
Discover and demonstrate AI zero-day vulnerabilities
Analyze and explain known attacks
Build proof-of-concept exploits that demonstrate real-world impact
Translate research findings into offensive testing modules, red-teaming methodologies, and publications
Collaborate with engineering and product teams to convert research into platform capabilities
Continuously explore non-obvious attack surfaces
Background & Experience
5+ years of experience in security research, adversarial ML, vulnerability research, or advanced offensive security
Strong understanding of machine learning internals
Demonstrated experience finding novel vulnerabilities or zero-days (AI systems or complex software)
Strong programming ability with experience building research prototypes and exploits
Bachelor’s degree or higher in Computer Science, Cybersecurity, Mathematics, or a related field
Proven interest or experience specifically in AI Security research
Skills & Attributes
• Adversarial Research Mindset
Deep intuition for how and why AI systems fail under adversarial pressure
• Mathematical & Algorithmic Thinking
Comfortable reasoning about optimization, linear algebra, probability, information leakage, and latent representations
• Creative, Out-of-the-Box Thinking
Ability to invent attack classes rather than only applying known ones
• Autonomy & Ownership
Capable of defining research directions and driving them from idea to exploit
• Systems Thinking
Ability to attack AI as a system (models, agents, memory, tools, workflows), not just single prompts
Additional Advantages
- Experience with adversarial ML (evasion, poisoning, extraction, inversion)
- Experience with multimodal or agentic AI attacks
- Background in Application Security
- Background in red teaming, exploit development, or offensive research
- Familiarity with AI safety, alignment failures, or emergent behaviors
- Publications, speaking engagements, disclosed vulnerabilities, or public research write-ups
- Core tech stack experience: Python, Docker, open-source research and security tools
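To give a concrete flavor of the adversarial-ML work listed above, here is a minimal, numpy-only sketch of an evasion attack in the FGSM (Fast Gradient Sign Method) style, run against a toy logistic-regression model. All names and parameters here are hypothetical illustrations, not part of our stack; real research of this kind targets deep networks and uses frameworks such as PyTorch.

```python
# Illustrative sketch of an FGSM-style evasion attack on a toy
# logistic-regression "model". Hypothetical weights and epsilon;
# for demonstration only.
import numpy as np

rng = np.random.default_rng(0)

# Toy model: fixed random weights for a binary classifier.
w = rng.normal(size=16)
b = 0.1

def predict(x):
    """Probability that input x belongs to class 1 (sigmoid of logit)."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm(x, eps=0.3):
    """Perturb x in the direction that increases the model's loss on
    its own predicted label. For logistic regression, the gradient of
    the cross-entropy loss w.r.t. x is (p - y) * w."""
    label = 1 if predict(x) >= 0.5 else 0
    grad = (predict(x) - label) * w
    return x + eps * np.sign(grad)

x = rng.normal(size=16)
x_adv = fgsm(x)
print("clean:", predict(x), "adversarial:", predict(x_adv))
```

Because the perturbation follows the sign of the loss gradient, the model's confidence in its original prediction always drops, and with a large enough epsilon the predicted class can flip entirely, even though the input change is small and structured.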
Why join us?
Work on defining the future of AI security, not reacting to it
Collaborate directly with founders who are recognized pioneers in AI Red Teaming and Agentic AI Security
Freedom to explore uncharted attack surfaces without artificial constraints
Unlimited access to GPUs and AI tooling for research and experimentation
Equity for early hires and long-term impact on the platform
A culture that values original thinking over checklists