Still looking for a job through search engines? It's time to upgrade!
Instead of searching hundreds of listings on your own, let Jobify analyze your resume and show you only the opportunities that are truly worth your time, drawn from the largest job database in Israel.
The service is free and unlimited.
One of our clients, an innovative deep-tech startup in the field of robotics and autonomous systems, is looking to hire a Computer Vision / Deep Learning (Perception) Algorithm Engineer to join their R&D team.
The role is deeply focused on applied Deep Learning, including training, optimizing, and deploying learning-based perception algorithms that operate in real-world, production-grade autonomous systems.
The company develops cutting-edge robotic platforms for inspection and maintenance in complex industrial environments.
Position Overview
As a Perception Algorithm Engineer, you will play a key role in developing and implementing the core 3D perception algorithms that power the company’s autonomous capabilities. You will work on multidisciplinary challenges involving sensor fusion, 3D reconstruction, depth estimation, and real-time scene understanding, integrating your solutions into robotic systems operating in the field.
Key Responsibilities
- Design and implement advanced algorithms for 3D perception, including Reconstruction, Depth Estimation, Object Detection & Tracking, and Scene Understanding.
- Process and fuse data from multiple sensors (LiDAR, stereo/RGB-D cameras, IMU, GNSS); a minimal projection sketch follows this list.
- Optimize algorithms for real-time performance and deploy them on embedded and robotic platforms.
- Collaborate closely with navigation, control, and hardware teams to deliver an integrated end-to-end perception system.
- Conduct simulations, field experiments, and benchmarking to validate accuracy, robustness, and scalability.
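To illustrate one basic step of the sensor-fusion work described above, here is a minimal NumPy sketch that projects LiDAR points into a camera image. The function name, the extrinsic transform T_cam_lidar, and the intrinsic matrix K are illustrative assumptions only, not the company's actual pipeline or API.

    import numpy as np

    def project_lidar_to_image(points_lidar, T_cam_lidar, K):
        """Project 3D LiDAR points (N, 3) into pixel coordinates.

        T_cam_lidar: 4x4 extrinsic transform from the LiDAR frame to the camera frame (assumed).
        K:           3x3 pinhole camera intrinsic matrix, distortion ignored (assumed).
        """
        # Homogeneous coordinates, then move the points into the camera frame.
        pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
        pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

        # Keep only points in front of the camera.
        in_front = pts_cam[:, 2] > 0
        pts_cam = pts_cam[in_front]

        # Perspective projection with the pinhole intrinsics.
        uv = (K @ pts_cam.T).T
        uv = uv[:, :2] / uv[:, 2:3]
        return uv, in_front

In a real system this step would also handle lens distortion, time synchronization between sensors, and ego-motion compensation; the sketch only shows the geometric core.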
Requirements:
• Proven hands-on experience in Computer Vision algorithms, including geometric vision, multi-view geometry, camera models, and image processing.
• Strong applied Deep Learning experience for vision tasks – including training neural networks (CNNs / Transformers), loss design, optimization, debugging, and performance tuning (a minimal training sketch follows this list).
• Experience developing learning-based perception pipelines end-to-end: data collection, labeling strategy, training, evaluation, and deployment.
• Practical experience with PyTorch or TensorFlow in production or near-production environments.
• Ability to bridge classical Computer Vision methods with Deep Learning approaches to solve real-world perception problems.
• Experience working with real sensor data (cameras; advantage for depth / 3D sensors).
• Solid software engineering skills, including clean code, version control, testing, and debugging.
• B.Sc. / M.Sc. in Computer Science, Electrical Engineering, or a related technical field – or equivalent practical experience.
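To give a concrete sense of the applied Deep Learning workflow referenced above (training, loss design, optimization), here is a minimal PyTorch training-loop sketch on toy data. The tiny model, random tensors, and hyperparameters are placeholders chosen for illustration; they do not represent the company's actual models or stack.

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    # Toy stand-ins for a real labeled image dataset (assumed shapes).
    images = torch.randn(64, 3, 64, 64)
    labels = torch.randint(0, 10, (64,))
    loader = DataLoader(TensorDataset(images, labels), batch_size=16, shuffle=True)

    # Deliberately small CNN classifier used only to show the loop structure.
    model = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(16, 10),
    )
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    for epoch in range(3):
        for x, y in loader:
            optimizer.zero_grad()
            loss = criterion(model(x), y)   # loss design would be task-specific in practice
            loss.backward()
            optimizer.step()
        print(f"epoch {epoch}: last batch loss {loss.item():.3f}")

In production-grade perception work, this loop would be extended with proper datasets and labeling pipelines, evaluation metrics, checkpointing, and deployment/export steps, as the requirements above describe.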
Nice to have (advantage):
• Experience with perception in robotics / autonomous systems.
• Experience deploying models to edge or embedded platforms.
• Familiarity with ROS / ROS2 and real-time system constraints.
• Experience with 3D vision, SLAM, or sensor fusion.