Senior Specialist, Scientific Data Engineering
Job description
Within our Computational Sciences department, as a Senior Specialist in Scientific Data Engineering you will be responsible for delivering advanced data engineering capabilities to support Nestlé Research projects. Your main objective will be to design, build, and optimize robust data pipelines and architectures that enable efficient data access, integration, and analysis across diverse scientific domains. You will play a key role in enabling data-driven research by ensuring high-quality, scalable, and secure data infrastructure.
A Day in the Life of a Senior Specialist in Scientific Data Engineering
- Design and implement end-to-end scientific data pipelines (e.g., for bioinformatics, clinical, or omics data), including data ingestion, transformation, storage, and analytics layers, tailored to scientific research use cases.
- Develop and deploy scalable data architectures on-premises (Linux) or on cloud platforms, ensuring performance, reliability, and compliance with data governance standards.
- Onboard and coordinate external developers, and team up with internal data specialists (data architects, data scientists, AI engineers, software developers, etc.) to accelerate delivery in priority projects.
- Work in cross-functional project teams in a research environment to define project objectives and deliverables, evaluate needs, and identify the right technical approaches to solve business problems.
- Partner with other teams to gather functional requirements and to improve data quality, metadata management, and data discoverability.
Requirements
- Bachelor's, Master's, or PhD in Bioinformatics, in Computer Science combined with life sciences, or in a related field
- Significant professional experience (7+ years) in designing and implementing data pipelines and architectures in a research or scientific context, ideally in the food or pharma industry
- Familiarity with software engineering practices and development frameworks (Scrum, Agile, DevOps).
- You have the following expertise and technical skills:
- Solid foundations in data modeling, ETL/ELT processes, and distributed data systems.
- Proficiency in Python and SQL for data manipulation and pipeline development.
- Experience with DevOps tool stacks (e.g., Git, CI/CD).
- Experience with cloud platforms (e.g., Azure, AWS) and orchestration tools (e.g., Airflow, Azure Data Factory).
- Experience working with data lake and data warehouse technologies (e.g., Snowflake, Databricks).
- Experience working with Linux and container technologies such as OpenShift, Docker, or Podman.
- Experience with bioinformatics, clinical, or omics data pipelines.
- Familiarity with data analysis and machine learning capabilities, and awareness of agentic architecture frameworks.
- Experience collaborating with external teams and driving project streams through others to deliver results
- Experience in mentoring and guiding junior internal resources and interns
- Excellent problem-solving and communication skills, strong stakeholder management, and a commitment to knowledge sharing
- Fluent spoken and written English