Still searching for jobs on search engines? Time to upgrade!
Instead of combing through thousands of listings on your own, Jobify analyzes your CV and shows you only the positions that truly fit you.
Over 80,000 positions • 4,000 new each day
Free. No ads. No fine print.
About Explorium
Explorium is a leading provider of B2B data foundations for AI agents. We offer go-to-market data and infrastructure designed to power context-aware AI products and strategies. Our platform harmonizes diverse data sources to deliver high-quality, structured, and trustworthy insights - empowering businesses to build intelligent systems that drive real growth.
We're at the forefront of applied AI - leveraging LLMs, Generative AI, and modern data engineering practices to solve hard, real-world data problems at scale.
About the Team
Atlas is a data engineering team that owns Explorium's core data products end-to-end - from ingestion and enrichment through transformation, quality, and serving. We build and operate the pipelines, data models, and platform services that power the product.
We work closely with our customers and external data providers to assess, integrate, and enhance third-party data assets.
The Role
We're looking for a Senior Data Engineer to own high-impact data products from architecture through production deployment, monitoring, and continuous improvement. This isn't a pure infrastructure role - you'll combine strong engineering with product thinking, operational excellence, and awareness of data quality, cost, and business impact.
You will design, implement, test, deploy, and maintain production-grade data products - pipelines, transformation layers, data quality and reliability systems - using tools like DBT (on Spark) and Databricks. You'll apply best practices in Python and SQL to build scalable and maintainable data transformations, and leverage technologies like LLMs and GenAI to create innovative solutions for real business problems.
This role is ideal for someone who wants technical leadership responsibilities in an AI-first engineering culture - we use LLMs, GenAI, and AI-native development tools as core parts of our daily workflow.
Key Responsibilities
Act as a technical leader within the team - raise engineering standards, drive strong architectural choices, and improve how we build
Own data products end-to-end: design, development, deployment, monitoring, and iteration
Work closely with senior leadership to translate strategic goals into scalable data solutions
Develop and maintain production ETL/ELT pipelines using DBT (on Spark) and orchestrated workflows in Databricks
Build monitoring, alerting, and testing pipelines to ensure reliability and performance in production
Evaluate and introduce new technologies - including AI-native development tools - and integrate the ones that create real impact
Collaborate with customers and external data providers - gathering requirements and making product decisions
Mentor team members through code reviews, pairing, and knowledge sharing
Requirements
Must haves
4+ years of experience in production-level data engineering or similar roles
Deep proficiency in SQL and Python
Proven track record of owning and scaling production-grade data pipelines, including versioning, testing, and monitoring
Strong understanding of data modeling, normalization/denormalization trade-offs, and data quality management
Experience with the modern data stack: DBT, Databricks, Spark, Delta Lake
Strong analytical skills - ability to design and evaluate data-driven hypotheses and KPIs
Product and business awareness - you think about the impact of what you build, not just the implementation
Preferred Qualifications
Experience with GenAI and LLM applications - particularly extracting structure from unstructured data at scale
Experience working with external data sources and vendors
Familiarity with Unity Catalog and data governance at scale
Familiarity with Terraform or similar infrastructure-as-code tools
Experience with cost optimization on Databricks (DBU analysis, cluster policies)
Familiarity with cloud-native platforms (AWS preferred)
BSc/BA in Computer Science, Engineering, or a related technical field - or graduation from a top-tier IDF tech unit
Questions and Answers for the Senior Data Engineer Position
As a Senior Data Engineer at Explorium, you will own high-impact data products end-to-end, from architecture through deployment, monitoring, and continuous improvement. The role includes technical leadership within the Atlas team, developing and maintaining ETL/ELT pipelines using DBT (on Spark) and Databricks, building monitoring and testing systems, and evaluating and integrating new technologies, including AI-based development tools.