Senior DevOps Engineer (HPC)
Xebia
Municipality of Valencia, Spain
8 days ago
Role details
Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior
Job location
Municipality of Valencia, Spain
Tech stack
Bash
Cloud Computing
Cloud Storage
Data Transmissions
Software Debugging
Linux
DevOps
File Systems
Job Scheduling
Python
OpenMP
Package Management Systems
Parallel Computing
Performance Tuning
Ansible
Scripting (Bash/Python/Go/Ruby)
Google Cloud Platform
Spark
Amazon Web Services (AWS)
Data Management
Slurm
Cloud Migration
Terraform
Docker
Job description
As a Senior DevOps-HPC Engineer at Xebia, you will join a dynamic Engineering team in a high-energy, collaborative environment. This role is ideal for a seasoned HPC engineer with deep expertise in SLURM, Linux, and cloud migrations who thrives on leading complex projects, designing robust architectures, and implementing high-performance solutions on Google Cloud.
Responsibilities:
- Lead the migration of on-premises SLURM-based HPC clusters to Google Cloud Platform.
- Design, implement, and manage scalable and secure HPC infrastructure solutions on GCP.
- Optimize SLURM configurations and workflows to ensure efficient use of cloud resources.
- Manage and optimize HPC environments, focusing on workload scheduling, job efficiency, and scaling SLURM clusters.
- Automate cluster deployment, configuration, and maintenance tasks using scripting languages (Python, Bash) and automation tools (Ansible, Terraform).
- Integrate HPC software stack using tools like Spack for dependency management and easy installation of HPC libraries and applications.
- Deploy, manage, and troubleshoot applications using MPI, OpenMP, and other parallel computing frameworks on GCP instances.
- Collaborate with engineering, support teams, and stakeholders to ensure smooth migration and ongoing operation of HPC workloads.
- Provide expert-level support for performance tuning, job scheduling, and cluster resource optimization.
- Stay current with emerging HPC technologies and GCP services to continually improve HPC cluster performance and cost efficiency.
- Act as the team's subject-matter expert on HPC.
Requirements
Basics:
- Minimum 5 years of experience with HPC environments, including SLURM workload manager, MPI, and other HPC-related software.
- Extensive hands-on experience managing Linux-based systems, including performance tuning and troubleshooting in an HPC context.
- Proven experience migrating and managing SLURM clusters in cloud environments, preferably GCP.
- Proficiency with automation tools such as Ansible and Terraform for cluster deployment and management.
- Experience with Spack for managing and deploying HPC software stacks.
- Strong scripting skills in Python, Bash, or similar languages for automating cluster operations.
- In-depth knowledge of GCP services relevant to HPC, such as Compute Engine (GCE), Cloud Storage, and VPC networking.
- Strong problem-solving skills with a focus on optimizing HPC workloads and resource utilization.
Recommended:
- Google Cloud Professional DevOps Engineer or similar GCP certifications.
- Familiarity with GCP's HPC-specific offerings, such as Preemptible VMs, HPC VM images, and other cost-optimization strategies.
- Experience with performance profiling and debugging tools for HPC applications.
- Advanced knowledge of HPC data management strategies, including parallel file systems and data transfer tools.
- Understanding of container technologies (e.g., Singularity, Docker) specifically within HPC contexts.
- Experience with Spark or other big data tools in an HPC environment is a plus.
- Expertise with Spack containers, as the current focus is on migrating modules and libraries.
- Experience with CI/CD pipelines (GitHub Actions).