Today’s world is crime-ridden. Criminals are everywhere: invisible, virtual, and sophisticated. Traditional ways of preventing and investigating crime and terror are no longer enough…
Technology is changing incredibly fast. The criminals know it, and they are taking advantage. We know it too.
For nearly 30 years, the incredible minds at Cognyte around the world have worked closely together, putting their expertise to work to keep up with constantly evolving technological and criminal trends and to help make the world a safer place with leading investigative analytics software solutions.
We are defined by our dedication to doing good, and this translates into business success, meaningful work friendships, a can-do attitude, and deep curiosity.
We’re looking for a skilled and driven Data Engineer to join our growing team. This role is perfect for someone who thrives in complex environments, loves solving real-world problems with data, and enjoys working hands-on with cutting-edge tools and infrastructure, both in the cloud and on-prem.
We are a team of hands-on engineers building robust, scalable, and high-performance data pipelines that transform how information flows across systems. We build with technologies such as Java, Python, Kafka, Flink, Elasticsearch, Airflow, Docker, and Kubernetes, often in hybrid environments, including on-premise deployments.
As a Cognyter, you will:
- Design and develop real-time and batch data pipelines using modern frameworks and distributed systems.
- Build robust, scalable, and fault-tolerant ETL workflows to handle high-volume structured and unstructured data.
- Model and maintain efficient, query-optimized data structures across SQL, NoSQL, and search engines.
- Collaborate with customers, field engineers, and delivery teams to deploy and support data systems in real environments — both cloud-based and on-premise.
- Own and maintain the reliability, observability, and performance of data flows and infrastructure.
- Stay up to date with data engineering trends and recommend best-fit solutions for evolving business needs.
For that mission you’ll need:
- 5+ years of hands-on experience with Java or Python in data engineering or backend systems.
- Solid knowledge of Docker, Kubernetes, and containerized architecture.
- Strong understanding of multithreaded, concurrent, and distributed system design.
- Experience building and orchestrating ETL pipelines using tools like Airflow, Apache NiFi, or similar.
- Expertise in data modeling, both for SQL (PostgreSQL, MySQL) and NoSQL (MongoDB, Cassandra).
- Experience with search engines like Elasticsearch and/or graph databases.
- Proficiency with streaming platforms like Kafka, Flink, or Spark Streaming.
- Understanding of pipeline observability and data quality best practices.
- Excellent communication and problem-solving skills, especially in customer-facing or cross-team contexts.
- Fluency in spoken and written English.
Bonus points for:
- Experience with real-time event processing at scale.
- Previous work in data architecture, including data lake design, schema evolution, and partitioning strategies.
- Proven experience managing and deploying on-premise data infrastructure (Kafka, Elasticsearch, Airflow, NiFi, etc.).
- Familiarity with hybrid setups that integrate cloud and on-prem environments.
- Industry knowledge in cybersecurity, telecom, or other data-heavy domains.
- Background in consulting, customer support, or solution delivery roles.
Availability
- Must be available to travel at least once every two months to collaborate with customers on-site.