EMEA Data Scientist - Düsseldorf
Job description
You will spend most of your time (60%) on MLOps and model deployment, and the remainder (40%) on data science and model development. You will work across the entire ML lifecycle: building and maintaining pipelines, deploying and monitoring models, exploring data, and developing machine learning solutions. As a well-rounded ML practitioner, you will balance engineering and data science, and you will learn and continually improve your work as you collaborate with your peers.
You will work with businesses to understand their data needs and translate those into data structures and model requirements. You will define standards for monitoring data model integrity during projects, lead data modeling and testing, and conduct quality assurance on the analytics tools and methods used. You will apply expertise in machine learning, data mining, and information retrieval to design, prototype, and develop next-generation analytics, and you will develop best practices for analytics, including models, standards, and tools. You will also establish metrics to assess the impact of analytics on the business.
The role:
This role, reporting directly to the EMEA AI Development & BI Manager, is a great opportunity for technical talent passionate about driving meaningful innovation for our business.
Key Responsibilities:
Data Engineering:
- Build and maintain data pipelines for structured and unstructured data.
- Work with SQL/NoSQL databases and data warehouses (e.g., Snowflake, BigQuery, Redshift).
- Ensure data quality, integrity, and availability.
Data Science:
- Explore, clean, and analyze datasets to derive insights.
- Train, evaluate, and fine-tune machine learning models.
- Assist in developing proof-of-concepts and production-ready ML solutions.
MLOps:
- Support deployment of ML models into production (batch and real-time).
- Implement model monitoring, retraining, and CI/CD workflows.
- Work with cloud platforms (AWS, GCP, or Azure) and containerization tools (Docker, Kubernetes).
What we offer:
- A compelling career opportunity to lead impactful, innovative initiatives within the EMEA region.
- Opportunity to work on end-to-end data projects.
- Mentorship from senior data scientists and engineers.
- A collaborative, learning-focused environment.
- A coaching culture focused on your success and development!
Requirements:
- Bachelor's or Master's in Computer Science, Data Science, Engineering, or a related field.
- 4-5 years of professional experience in data science, data engineering, or MLOps.
- Strong programming skills in Python (pandas, scikit-learn, PySpark preferred).
- Experience with SQL and familiarity with database design.
- Exposure to cloud platforms (AWS, GCP, or Azure).
- Knowledge of ML lifecycle tools (MLflow, Kubeflow, Airflow, Prefect, etc.).
- Familiarity with Git, CI/CD pipelines, and containerization (Docker, Kubernetes).
- Good problem-solving skills and eagerness to learn new technologies.
Nice to Have:
- Hands-on experience with deep learning frameworks (TensorFlow, PyTorch).
- Experience with data visualization tools (Tableau, Power BI, or Looker).
- Knowledge of distributed computing frameworks (Spark, Dask).
- Prior internship or project work in MLOps.