Job Information
Location: Zurich
Workload: Full-time

Your tasks:
- Own, design, and operate data pipelines – Take full responsibility for all pandas- and Spark-based pipelines, from development through production and monitoring.
- Advance our ML models – Improve and productionise models for AdTech use cases such as look-alike modelling, audience expansion, and campaign measurement.
- Engineer for the invisible – Because data inside confidential enclaves is literally invisible (even to root), build extra-robust validation at the data source, exhaustive test coverage, and self-healing jobs to guarantee reliability.
- Collaborate cross-functionally – Work closely with data scientists, backend engineers (Rust), and product teams to ship features end to end.
- AI-powered productivity – Leverage LLM-based code assistants, design generators, and test-automation tools to move faster and raise the quality bar. Share your workflows with the team.
- Drive continuous improvement – Profile, benchmark, and tune Spark workloads, introduce best practices in orchestration and observability, and keep our tech stack future-proof.

Your profile:
- (Must have) Bachelor's, Master's, or PhD in Computer Science, Data Engineering, or a related field, and 5+ years of professional experience.
- (Must have) Expert-level Python plus solid hands-on experience with pandas, PySpark/Scala Spark, and distributed data processing.
- (Must have) Proven track record building resilient, production-grade data pipelines with rigorous data-quality and validation checks.
- (Must have) Experience running workloads in Databricks, Spark on Kubernetes, or other cloud/on-prem big-data platforms.
- (Plus) Working knowledge of the ML lifecycle and model serving; familiarity with techniques for audience segmentation or look-alike modelling is a big plus.
- (Plus) Exposure to confidential computing, secure enclaves, homomorphic encryption, or similar privacy-preserving technologies.
- (Plus) Rust proficiency (we use it for backend services and compute-heavy client-side modules).
- (Plus) Data-platform skills: operating Spark clusters, job schedulers, or orchestration frameworks (Airflow, Dagster, custom schedulers).
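The "validation at the data source" requirement above can be illustrated with a minimal pandas sketch: once data enters a confidential enclave it can no longer be inspected, so every check must pass before ingestion. The schema, column names, and thresholds below are illustrative assumptions, not the company's actual checks.

```python
import pandas as pd

# Hypothetical expected schema for an incoming batch; in practice this would
# come from the pipeline's own contract definitions.
EXPECTED_SCHEMA = {"user_id": "int64", "event_ts": "datetime64[ns]", "score": "float64"}

def validate_at_source(df: pd.DataFrame) -> pd.DataFrame:
    """Fail fast on schema drift, nulls, and out-of-range values,
    since nothing can be debugged once the data is inside the enclave."""
    missing = set(EXPECTED_SCHEMA) - set(df.columns)
    if missing:
        raise ValueError(f"missing columns: {missing}")
    df = df.astype(EXPECTED_SCHEMA)  # raises on incompatible types
    if df[list(EXPECTED_SCHEMA)].isna().any().any():
        raise ValueError("null values detected before enclave ingestion")
    if not df["score"].between(0.0, 1.0).all():
        raise ValueError("score outside [0, 1]")
    return df

# A clean batch passes through unchanged; a bad one raises before ingestion.
batch = pd.DataFrame({
    "user_id": [1, 2],
    "event_ts": pd.to_datetime(["2024-01-01", "2024-01-02"]),
    "score": [0.3, 0.9],
})
validated = validate_at_source(batch)
```

In a real pipeline these checks would typically be paired with the exhaustive test coverage and self-healing retries the posting mentions, so that a rejected batch is quarantined and retried rather than silently dropped.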
Required Skills
- Senior
- Testing
- Monitoring
- Python
- Machine Learning
- Scala
- Rust
- Bachelor
- Master
Job Details
- Job Status: Active
- Workload: Full-time