DevOps Engineering
Job description
Sanofi Ghent leads the development of NANOBODY® molecules: proprietary therapeutic proteins based on single-domain antibody fragments that can be engineered into multi-valent and multi-specific formats. These innovative molecules hold significant potential for treating serious and life-threatening diseases. Our portfolio spans major therapeutic areas, including inflammation, hematology, immuno-oncology, and oncology.

We are strengthening our scientific computing team and looking for an experienced, creative Cloud Engineer, Data Scientist, or Bioinformatician eager to thrive in a fast-paced, patient-committed environment. The ideal candidate brings expertise spanning back-end and front-end technologies and will play a key role in developing, streamlining, and automating workflows, managing integration pipelines, and connecting systems seamlessly. Based at Sanofi Ghent, you will collaborate closely with teams in the UK and France, contributing to our global Large Molecule Research (LMR) platform and building strong partnerships across the digital organization.
Main responsibilities:
- Integrate NANOBODY® data pipelines into the company's central data systems
- Design, adapt, and/or co-develop digital applications that automate scientific data processing, enabling data-driven decision-making in drug discovery
- Maintain and evolve in-house code repositories with proper documentation, using company-approved project management tools
- Support the development and maintenance of a cloud-based platform - including databases and bioinformatics applications - for NANOBODY® molecule analysis and design, with a view to expanding to broader antibody-based modalities
- Implement and manage computational infrastructure and data management policies that support AI/ML workflows and optimal data storage
- Ensure adherence to Sanofi governance, security, compliance, and data privacy requirements, establishing new processes where needed
Requirements
- Proficient in Linux, with strong scripting skills in Python, Java, Unix shell, and SQL; knowledge of Go and/or Rust is a plus
- Hands-on expertise with Docker/Podman and OpenShift/Kubernetes for containerized deployments, and with OCI-compliant image management (ECR, JFrog Artifactory)
- Experience deploying and maintaining diverse databases (SQL, NoSQL, columnar, array), data lakehouse platforms, and federated query engines (e.g. Trino, PuppyGraph)
- Practical cloud experience with AWS, Azure, or Google Cloud; DevOps/GitOps proficiency with Git-based workflows (GitHub) and IaC tooling (Terraform, Helm, ArgoCD)
- Solid understanding of API management (REST, GraphQL), authentication/authorization protocols, and IAM tools
- Knowledge of data integration concepts (ETL, ELT, data federation, data wrangling) and data warehousing principles
- Full-stack web development experience is highly desirable
- Experience working in agile teams with code-review practices (pull requests); a life sciences/pharma industry background is a plus
Soft skills:
- Innovative mindset, with the ability to drive transformation across a global organization
- Strong communicator, able to bridge technical and non-technical audiences (biologists, engineers, business stakeholders)
- Collaborative team player with proven experience in cross-functional, international environments
- Adaptable and effective in a matrixed, multicultural organization
Languages:
- Excellent English communication skills, both verbal and written; Dutch is a nice-to-have

Better is out there. Better medications, better outcomes, better science. But progress doesn't happen without people - people from different backgrounds, in different locations, doing different roles, all united by one thing: a desire to make miracles happen. So, let's be those people.