Compute Platform Engineer II
Job description
We are a full-stack shop consisting of product and portfolio leadership, data engineering, infrastructure and DevOps, data / metadata / knowledge platforms, and AI/ML and analysis platforms, all geared toward:
- Building a next-generation, metadata- and automation-driven data experience for GSK's scientists, engineers, and decision-makers, increasing productivity and reducing time spent on "data mechanics"
- Providing best-in-class AI/ML and data analysis environments to accelerate our predictive capabilities and attract top-tier talent
- Aggressively engineering our data at scale, as one unified asset, to unlock the value of our unique collection of data and predictions in real time
Purpose of Onyx
Our Compute Platform Engineering team is building a first-in-class platform of toolchains and workflows that accelerate application development, scale up computational experiments, and integrate all computation with project metadata, logs, experiment configuration, and performance tracking, over abstractions that encompass cloud and High-Performance Computing (HPC). This metadata-forward, CI/CD-driven platform represents and enables the entire application and analysis lifecycle, including interactive development and exploration (notebooks), large-scale batch processing, observability, and production application deployments.
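For a flavor of what "metadata-forward" computation can look like in practice, here is a minimal, purely hypothetical Python sketch; the JobSpec shape, the submit helper, and the backend names are invented for illustration and are not Onyx's actual API:

```python
# Hypothetical sketch only: illustrates the kind of "metadata-forward"
# interface described above. The module, types, and parameters are
# invented for illustration and are not Onyx's actual API.
from dataclasses import dataclass, field


@dataclass
class JobSpec:
    """Everything needed to run, and later trace, a computation."""
    image: str                    # container image built via CI/CD
    command: list[str]            # entrypoint inside the container
    backend: str                  # e.g. "hpc-slurm" or "cloud-batch"
    metadata: dict = field(default_factory=dict)  # project/experiment tags


def submit(spec: JobSpec) -> str:
    """Pretend submission: a real platform would dispatch to the chosen
    backend and register the run, its configuration, and its lineage
    with the metadata service, returning a run id."""
    run_id = f"{spec.metadata.get('project', 'unknown')}-0001"
    print(f"[{spec.backend}] {' '.join(spec.command)} -> {run_id}")
    return run_id


run = submit(JobSpec(
    image="registry.example/analysis:1.2.0",
    command=["python", "analyze.py", "--shards", "64"],
    backend="cloud-batch",
    metadata={"project": "target-id", "experiment": "exp-42"},
))
```

The point of such an interface is that the same job description runs on HPC or cloud backends, while every run is automatically tied to its project metadata and configuration.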
A Compute Platform Engineer II is a technical contributor who can consistently take a poorly defined business or technical problem, refine it into a well-defined problem and specification, and execute on it at a high level. They have a strong focus on metrics, both for the impact of their work and for its inner workings and operations. They are a model for the team on best practices for software development in general (and their specialization in particular), including code quality, documentation, DevOps, and testing. They ensure the robustness of our services and serve as an escalation point in the operation of existing services, pipelines, and workflows.
A Compute Platform Engineer II should be familiar with the tools of their specialization and of their customers, and engaged with the open-source communities surrounding them, potentially even to the level of contributing pull requests.
In this role you will
- Design, build, and operate tools, services, and workflows that deliver high value by solving key business problems.
- Develop key components of a hybrid on-prem/cloud compute platform for both interactive and scalable batch computing, and establish processes and workflows to transition existing HPC users and teams to this platform.
- Own code-driven environment, application, and container/image builds, as well as CI/CD-driven application deployments (see the sketch after this list).
- Consult with scientific users on scaling applications to petabytes of data, drawing on a deep understanding of software engineering, algorithms, and the underlying hardware infrastructure and their impact on performance.
- Confidently optimize the design and execution of complex solutions within large-scale distributed computing environments.
- Produce well-engineered software, including appropriate automated test suites, technical documentation, and an operational strategy.
- Apply platform abstractions consistently to ensure quality and consistency with respect to logging and lineage.
- Be fully versed in coding best practices and ways of working, participating in code reviews and partnering with teammates to improve the team's standards.
- Adhere to the QMS framework and CI/CD best practices, and help guide improvements to them that improve ways of working.
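As a simplified illustration of the code-driven container builds mentioned above, the sketch below uses the Docker SDK for Python (the `docker` package); the registry path and tag are placeholders, and in a real pipeline this logic would run as a CI step rather than a local script:

```python
# Minimal sketch of a code-driven container build using the Docker SDK
# for Python (pip install docker). Registry and tag are placeholders;
# in practice this would be a CI/CD pipeline step.
import docker

REGISTRY = "registry.example/compute-platform"  # placeholder registry
TAG = f"{REGISTRY}/analysis-env:1.0.0"

client = docker.from_env()

# Build the image from the Dockerfile in the current directory.
image, build_logs = client.images.build(path=".", tag=TAG)
for chunk in build_logs:
    if "stream" in chunk:
        print(chunk["stream"], end="")

# Push the built image to the registry (credentials come from the
# environment / docker config in a real pipeline).
for line in client.images.push(TAG, stream=True, decode=True):
    print(line.get("status", ""))
```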
Requirements
We are looking for professionals with these required skills to achieve our goals:
- Bachelor's degree in Data Engineering, Computer Science, Software Engineering, or another relevant area.
- Significant industry experience in a technical role
- Experience with Python
- Experience with cloud platforms
- Experience with High-Performance Computing (HPC)
Preferred Qualifications & Skills:
If you have the following characteristics, it would be a plus:
- Master's degree or PhD in Data Engineering, Computer Science, Software Engineering, or another relevant area.
- Knowledge and use of at least one common programming language (e.g., Python, Go, C++, Scala, Java), including toolchains for documentation, testing, and operations/observability
- Expertise in modern software development tools and ways of working (e.g., git/GitHub, DevOps tools, metrics/monitoring, …)
- Cloud expertise (e.g., AWS, Google Cloud, Azure), including infrastructure-as-code tools and scalable compute technologies such as Google Batch and Vertex AI
- Experience with CI/CD implementations using git and a common CI/CD stack (e.g., Azure DevOps, Cloud Build, Jenkins, CircleCI, GitLab)
- Expertise with Docker, Kubernetes, and the larger CNCF ecosystem, including experience with application deployment tools such as Helm
- Experience with low-level application build tools (make, CMake) as well as automated build systems such as Spack or EasyBuild
- Experience in workflow orchestration with tools such as Argo Workflows and Airflow, and with scientific workflow tools such as Nextflow, Snakemake, VisTrails, or Cromwell
- Experience with application performance tuning and optimization, including parallel and distributed computing paradigms and communication libraries such as MPI, OpenMP, and Gloo (see the sketch after this list), with a deep understanding of the underlying systems (hardware, networks, storage) and their impact on application performance
- Demonstrated excellence with agile software development environments using tools like Jira and Confluence
- Familiarity with the tools, techniques, and optimizations of the high-performance applications space, including engagement with the open-source community (and potentially contributing to such tools)
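To ground the parallel and distributed computing item above, here is a minimal mpi4py sketch of an MPI-style collective reduction; it assumes mpi4py and an MPI runtime are available, and is an illustration of the paradigm rather than a description of the platform:

```python
# Minimal MPI illustration using mpi4py (pip install mpi4py; requires an
# MPI runtime). Each rank computes a partial sum of a large range in
# parallel, then results are combined with a collective allreduce.
# Run with, e.g.: mpirun -n 4 python partial_sum.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's id within the communicator
size = comm.Get_size()   # total number of processes

N = 1_000_000            # sum the integers 0..N-1, split across ranks
local = sum(range(rank, N, size))

# Collective reduction: every rank receives the global total.
total = comm.allreduce(local, op=MPI.SUM)

if rank == 0:
    print(f"total = {total} (expected {N * (N - 1) // 2})")
```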