Still searching for jobs on search engines? It's time to upgrade!
Instead of sifting through hundreds of listings on your own, let Jobify analyze your CV and show you only the opportunities that are truly worth your time, drawn from Israel's largest job database.
The service is free, at no cost and with no limits.
We are seeking a sharp, innovative, and hands-on Architect to help shape the future of LLM inference at scale. Join our dynamic E2E Architecture group, where we build cutting-edge systems powering the next generation of generative AI workloads. In this role, you will work across software and hardware domains to design and optimize inference infrastructure for large language models running on some of the most advanced GPU clusters in the world.
You'll help define how AI models are deployed and scaled in production, driving decisions on everything from memory orchestration and compute scheduling to inter-node communication and system-level optimizations. This is an opportunity to work with top engineers, researchers, and partners across the company, and to leave a mark on the way generative AI reaches real-world applications.
What You'll Be Doing:
Design and evolve scalable architectures for multi-node LLM inference across GPU clusters.
Develop infrastructure to optimize latency, throughput, and cost.
Requirements:
What We Need to See:
Bachelor's, Master's, or PhD in Computer Science, Electrical Engineering, or equivalent experience.
8+ years of experience building large-scale distributed systems or performance-critical software.
Deep understanding of deep learning systems, GPU acceleration, and AI model execution flows and/or high performance networking.
Solid software engineering skills in C++ and/or Python; strong familiarity with CUDA or similar platforms is preferred.
Strong system-level thinking across memory, networking, scheduling, and compute orchestration.
Excellent communication skills and ability to collaborate across diverse technical domains.
Ways to Stand Out from the Crowd:
Experience working on LLM training or inference pipelines, transformer model optimization, or model-parallel deployments.
Demonstrated success in profiling and optimizing performance bottlenecks across the LLM training or inference stack.
Familiarity with AI accelerators and distributed communication patterns.