Job Requisition ID: JR2016624
Job Category: Engineering
Time Type: Full time
NVIDIA is hiring exceptional software engineers to build and optimize the core inference infrastructure for large language models. Join the TensorRT‑LLM team, the group defining how generative AI performs at global scale on NVIDIA GPUs. We're looking for engineers who love squeezing every drop of throughput, memory efficiency, and scalability out of modern model runtimes. Your work will directly shape the frameworks behind state-of-the-art LLM inference used across NVIDIA and the AI community, and will help redefine what "fast" means for the next generation of generative AI at scale.
What You'll Be Doing
- Design, implement, and optimize high‑performance inference pipelines for large language models running on GPUs
- Profile and tune model execution across the stack - from scheduler design to kernel fusions and everything in between
- Design and experiment with memory management strategies that improve memory bandwidth utilization and cache efficiency
- Innovate and implement cutting-edge techniques such as Speculative Decoding, Context Caching, and FP8/INT4 quantization to push the boundaries of tokens-per-second-per-watt (a toy sketch of speculative decoding follows this list)
- Develop and maintain benchmarking and testing systems that quantify latency, utilization, and efficiency
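Speculative decoding, named in the list above, is easiest to see in miniature. The sketch below shows only the accept/reject rule at its core; both "models" are stand-in categorical distributions so the snippet runs standalone, and every name and constant in it is an illustrative assumption rather than TensorRT-LLM code.

```python
# Toy speculative decoding: a cheap "draft" distribution proposes k tokens,
# an "expensive" target distribution verifies them. Real systems batch the
# k verification steps into one target-model forward pass; here both models
# are fixed, context-free categorical distributions (a deliberate toy).
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 8

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

draft_p = softmax(rng.normal(size=VOCAB))   # stand-in draft model
target_p = softmax(rng.normal(size=VOCAB))  # stand-in target model

def speculate(k=4):
    """Propose k draft tokens, then accept/reject against the target."""
    proposed = rng.choice(VOCAB, size=k, p=draft_p)
    out = []
    for tok in proposed:
        # Accept with probability min(1, p_target / p_draft).
        if rng.random() < min(1.0, target_p[tok] / draft_p[tok]):
            out.append(int(tok))
        else:
            # On rejection, resample from the residual max(0, target - draft),
            # which keeps the overall output distribution exactly the target's.
            residual = np.maximum(target_p - draft_p, 0)
            out.append(int(rng.choice(VOCAB, p=residual / residual.sum())))
            return out  # discard the rest of the draft
    # Every draft token accepted: sample one bonus token from the target.
    out.append(int(rng.choice(VOCAB, p=target_p)))
    return out

print(speculate())  # up to k+1 tokens for a single target verification pass
```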
What We Need To See
- Bachelor's, Master's, or higher degree in Computer Engineering, Computer Science, Applied Mathematics, or a related computing-focused field (or equivalent experience)
- 5+ years of relevant software development experience
- Excellent Python programming, software design, and software engineering skills
- Experience working with deep learning frameworks like PyTorch and HuggingFace
- Experience profiling and debugging performance at all levels - Python runtime, PyTorch internals, and GPU utilization metrics (a minimal profiling example follows this list)
- Awareness of the latest developments in LLM architectures and LLM inference techniques
- Proactive and able to work without supervision
- Excellent written and oral communication skills in English
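The profiling requirement above is concrete enough to illustrate. Below is a minimal sketch using PyTorch's built-in torch.profiler (one of the tools this posting mentions alongside Nsight Systems); the model, tensor shapes, and iteration count are placeholder assumptions, and on a CUDA machine the same loop also records GPU kernel time.

```python
# Minimal profiling sketch: rank operators by self time to see where a
# forward pass actually spends its time. The model here is a placeholder.
import torch
from torch.profiler import profile, record_function, ProfilerActivity

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.GELU(),
    torch.nn.Linear(4096, 1024),
)
x = torch.randn(32, 1024)

activities = [ProfilerActivity.CPU]
if torch.cuda.is_available():          # record GPU kernels when available
    model, x = model.cuda(), x.cuda()
    activities.append(ProfilerActivity.CUDA)

with profile(activities=activities, record_shapes=True) as prof:
    with record_function("forward"):
        for _ in range(10):
            model(x)

print(prof.key_averages().table(sort_by="self_cpu_time_total", row_limit=10))
```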
Ways To Stand Out From The Crowd
- Contributions to inference frameworks such as TensorRT‑LLM, vLLM, SGLang, or similar systems
- Demonstrated expertise in performance modeling, memory optimization, distributed model execution, or GPU execution workflows
- Hands‑on experience with NVIDIA profiling tools (Nsight Systems, PyTorch Profiler, custom benchmarking harnesses)
- Strong grasp of the trade-offs shaping inference efficiency: compute vs. memory, scheduling vs. batching, latency vs. throughput (a back-of-envelope sketch follows)
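The last trade-off has a classic back-of-envelope form: single-stream decode is typically memory-bandwidth-bound, since every generated token must stream the full set of weights from HBM, so batching buys aggregate throughput almost for free until compute or KV-cache traffic becomes the new bottleneck. The numbers below are illustrative assumptions, not measurements of any particular GPU or model.

```python
# Back-of-envelope latency vs. throughput for memory-bound LLM decode.
PARAMS = 7e9          # assumed 7B-parameter model
BYTES_PER_PARAM = 2   # FP16; FP8/INT4 quantization cuts this to 1 / 0.5
HBM_BW = 2.0e12       # assumed ~2 TB/s of GPU memory bandwidth

per_step_s = PARAMS * BYTES_PER_PARAM / HBM_BW  # one weight-bound decode step

for batch in (1, 8, 64):
    # Weights are streamed once per step regardless of batch size, so
    # throughput scales with batch while per-request latency stays roughly
    # flat (until compute or KV-cache reads dominate instead).
    print(f"batch={batch:3d}: ~{batch / per_step_s:,.0f} tok/s aggregate, "
          f"~{per_step_s * 1e3:.1f} ms/token per request")
```

Under these assumed numbers, batch 1 tops out around 140 tokens/s, which is why quantization and batching are two of the biggest levers this role works on.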