DevOps / Software Automation Engineer - CPU Performance Benchmarking

Saicon Consultants Inc.
Austin, United States of America
yesterday

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English

Job location

Austin, United States of America

Tech stack

Java
API
Artificial Intelligence
System Configuration
Continuous Integration
Software Debugging
Linux
DevOps
Github
Monitoring of Systems
Python
Linux System Administration
Performance Tuning
Power BI
Ansible
Prometheus
Server Virtualization
Shell Script
Systems Integration
Management of Software Versions
Zabbix
Rust
Performance Testing
Grafana
Spark
Backend
Containerization
Kubernetes
Infrastructure Automation Frameworks
Information Technology
Bare Metal
Kafka
Machine Learning Operations
Software Performance
New Relic (SaaS)
Software Version Control
Docker
Server Operating Systems & Platforms
Jenkins
Databricks

Job description

We are looking for a hands-on DevOps / Software Automation Engineer to design, build, and operate an end-to-end automated CPU performance benchmarking platform. You will work closely with CPU performance engineers to automate manual benchmarking workflows, enable repeatable and scalable performance runs, and deliver fast, reliable performance insights across multiple benchmark suites. You will be a critical force multiplier for performance engineers, owning automation, CI/CD, infrastructure, execution workflows, monitoring, and troubleshooting, so that performance teams can focus on analysis rather than operational overhead.

Performance Benchmarking Automation

  • Design and implement fully automated workflows for CPU performance benchmarks (setup, execution, data collection, validation, and reporting).
  • Translate manual performance engineering processes into scalable automation pipelines.
  • Enable one-click or CI-triggered benchmark execution with standardized, repeatable results.
  • Automate log parsing, metrics extraction, and data structuring for downstream analysis.
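To illustrate the kind of log parsing and metrics extraction described above, a minimal Python sketch follows. The `RESULT` line format, field names, and suite/benchmark identifiers are purely hypothetical examples, not taken from any specific benchmark suite:

```python
import json
import re

# Hypothetical log format: each result line looks like
#   RESULT suite=specrate2017 benchmark=519.lbm_r score=142.7
# Real benchmark suites will need their own parsers.
RESULT_RE = re.compile(
    r"RESULT\s+suite=(?P<suite>\S+)\s+benchmark=(?P<benchmark>\S+)\s+score=(?P<score>[\d.]+)"
)

def parse_log(text: str) -> list[dict]:
    """Extract structured benchmark records from raw log text."""
    records = []
    for line in text.splitlines():
        m = RESULT_RE.search(line)
        if m:
            rec = m.groupdict()
            rec["score"] = float(rec["score"])  # numeric for downstream analysis
            records.append(rec)
    return records

if __name__ == "__main__":
    sample = "startup noise\nRESULT suite=specrate2017 benchmark=519.lbm_r score=142.7\n"
    print(json.dumps(parse_log(sample)))
```

The structured output (JSON here) can then feed validation, dashboards, or a results store without manual copy-paste from logs.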

CI/CD & Execution Orchestration

  • Build and maintain CI/CD pipelines (Jenkins/GitHub) for benchmark execution and infrastructure workflows.
  • Integrate automation with versioned benchmark configurations, scripts, and artifacts.
  • Ensure reproducibility, traceability, and auditability of performance runs.

Infrastructure & Platform Engineering

  • Automate bare-metal and virtual server provisioning, OS deployment, and system configuration at scale.
  • Manage Linux-based environments optimized for CPU performance testing.
  • Containerize services (Docker) and orchestrate where applicable (Kubernetes).

Reliability, Monitoring & Support

  • Monitor platform health, benchmark execution, and infrastructure using observability tools.
  • Actively unblock performance engineers during automated runs by debugging failures, identifying root causes, and applying quick fixes or workarounds.
  • Perform capacity planning and scale systems to support increasing benchmark demand.

Data & Insights Enablement

  • Process and structure benchmark data using Python, Spark, or Databricks.
  • Support dashboards and reporting (e.g., Power BI) that provide quick performance insights to engineering stakeholders.
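As a small sketch of the "process and structure benchmark data" responsibility, the snippet below aggregates per-benchmark scores into summary rows suitable for a dashboard. The record shape (`benchmark`/`score` fields) and the example values are assumptions for illustration only:

```python
from collections import defaultdict
from statistics import mean, stdev

# Hypothetical run records; real data would come from parsed benchmark logs.
runs = [
    {"benchmark": "519.lbm_r", "score": 142.7},
    {"benchmark": "519.lbm_r", "score": 140.1},
    {"benchmark": "500.perlbench_r", "score": 98.3},
]

def summarize(records):
    """Aggregate per-benchmark scores into dashboard-ready summary rows."""
    by_bench = defaultdict(list)
    for r in records:
        by_bench[r["benchmark"]].append(r["score"])
    return {
        name: {
            "runs": len(scores),
            "mean": round(mean(scores), 2),
            "stdev": round(stdev(scores), 2) if len(scores) > 1 else 0.0,
        }
        for name, scores in by_bench.items()
    }

print(summarize(runs))
```

At larger scale the same aggregation would be expressed in Spark or Databricks, with the summary tables exported to Power BI.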

Collaboration & Documentation

  • Work day-to-day with CPU performance engineers to understand workflows and continuously improve automation.
  • Document architectures, workflows, execution guides, and troubleshooting procedures.
  • Comfort engaging with external stakeholders (customers/partners) and presenting technical findings is a plus.
  • Partner with internal IT teams as needed for networking, hardware, and security alignment.

Requirements

  • Bachelor's degree in Computer Science, Engineering, or equivalent practical experience.
  • Demonstrated curiosity and a continuous learning and knowledge-sharing mindset.
  • Strong Python and Linux shell scripting skills.
  • Hands-on experience with Jenkins, CI/CD pipelines, and GitHub.
  • Solid understanding of Linux systems, OS tuning, and server environments.
  • Experience automating infrastructure using Ansible or similar tools.
  • Ability to debug complex system, automation, or execution issues independently.
  • Strong communication skills; able to work closely with non-software performance engineers.

Preferred / Nice-to-Have:

  • Experience with CPU or system performance benchmarking (SPEC, internal benchmarks, stress tools, etc.).
  • Familiarity with Spark, Kafka, Databricks, or large-scale log processing.
  • Experience with Docker and Kubernetes.
  • Knowledge of monitoring and observability tools (Prometheus, Grafana, Zabbix, New Relic).
  • Exposure to data visualization and reporting tools (Power BI).
  • Strong architecture and systems thinking: able to reason about tradeoffs (performance, reliability, scalability), debug cross-component issues, and propose pragmatic designs.
  • Experience building backend services and full-stack systems in Go (preferred) or comparable languages (e.g., Python, Java, Rust), with clean APIs and maintainable service boundaries.
  • Hands-on experience with MCP development (building or integrating MCP servers/tools, defining schemas/contracts, and validating end-to-end workflows).
  • Familiarity with A2A protocols and service-to-service integration patterns (auth, retries, versioning, backward compatibility, observability).
  • Exposure to MLOps / ML-enabled development (experiment tracking, model/version management, reproducible pipelines, evaluation/validation, and production monitoring), and integrating ML/AI components into reliable automated workflows.

Apply for this position