Still searching for jobs on search engines? Time to upgrade!
Instead of sifting through hundreds of listings on your own, let Jobify analyze your resume and show you
only the opportunities that are truly worth your time.
Free, fast, personalized.
Job Description
Does building the next generation of AI/ML platforms excite you?
Do Big Data challenges and open-source innovation speak your language?
Join our Aegis team
Aegis is an end-to-end Big Data AI/ML platform built on top of Akamai's Public Cloud (Linode). It bridges the gap between research and production, enabling teams to innovate faster and deliver ML products more efficiently. We leverage open-source tools and AI accelerators to avoid reinventing the wheel – integrating across platforms where it matters most.
Make a difference in your own way
You’ll join our growing Engineering group, working hands-on to shape a powerful, flexible ML platform that supports real-world, large-scale data and AI workflows. You’ll be a key contributor to an environment that spans Kubernetes, Spark, MLflow, JupyterHub, and PyTorch – and that pushes the boundaries of performance and collaboration.
As a Data Platform Engineer, you will be responsible for:
- Designing and implementing scalable data and ML pipelines using Apache Spark or other distributed processing frameworks
- Building platform components in Python or Scala to connect research and production environments
- Integrating with orchestration tools such as Argo Workflows (or equivalents)
- Supporting a hybrid-cloud platform, including Azure, Akamai's public cloud (Linode), and more
- Collaborating with researchers and MLOps engineers to support the ML lifecycle from exploration to production
- Ensuring basic observability across components, using monitoring and alerting tools such as Prometheus or Grafana
- Working with open table formats such as Delta Lake, Iceberg, or similar (a plus)
To be successful in this role you will:
- Have 4+ years of experience in backend or data platform development
- Be proficient in Python or Scala (or Java)
- Have significant hands-on experience with Docker and Kubernetes, or with equivalent containerization and cluster management technologies
- Have a strong understanding of Big Data principles, with hands-on experience in distributed data processing using Apache Spark (on Databricks or similar platforms) or equivalent frameworks such as Ray or Dask
- Have hands-on experience working with data scientists throughout the ML lifecycle - from experimentation to deployment - using tools such as MLflow, Jupyter Notebooks, and PyTorch or TensorFlow, and integrating these into scalable production pipelines
- Have experience with at least one cloud provider (Azure, AWS, or GCP)
- Have experience with monitoring and observability tools such as Prometheus, Grafana, OpenTelemetry, or ELK/OpenSearch
- Have experience with Argo Workflows or similar orchestration tools
- Be proactive, motivated, curious, and thrive in a collaborative and high-ownership environment
Our ability to shape digital life today relies on developing exceptional people like you. The kind that can turn impossible into possible. We’re doing everything we can to make Akamai a great place to work. A place where you can learn, grow and have a meaningful impact.
With our company moving so fast, it’s important that you’re able to build new skills, explore new roles, and try out different opportunities. There are so many different ways to build your career at Akamai, and we want to support you as much as possible. We have all kinds of development opportunities available, from programs such as GROW and Mentoring, to internal events like the APEX Expo and tools such as LinkedIn Learning, all to help you expand your knowledge and experience here.
Learn more
Not sure if this job is the right match for you, or want to learn more about the job before you apply? Schedule a 15-minute exploratory call with the Recruiter, who will be happy to share more details.