Senior Software Engineer

Dynatrace
Linz, Austria
4 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior

Job location

Paris, France

Tech stack

Airflow
Amazon Web Services (AWS)
Azure
Cloud Computing
Continuous Integration
Data Validation
Information Engineering
Data Governance
ETL
GitHub
Python
Machine Learning
Performance Tuning
TensorFlow
Software Engineering
SQL Databases
Data Streaming
Software Version Management
PyTorch
Large Language Models
Snowflake
Caching
FastAPI
scikit-learn
Kubernetes
Low Latency
Kafka
Data Management
Machine Learning Operations
Data Pipelines
Dynatrace
Serverless Computing
Jenkins

Job description

Dynatrace provides software intelligence to simplify cloud complexity and accelerate digital transformation. With automatic and intelligent observability at scale, our all-in-one platform delivers precise answers about the performance and security of applications, the underlying infrastructure, and the experience of all users, enabling organizations to innovate faster, collaborate more efficiently, and deliver more value with dramatically less effort. That's why many of the world's largest organizations trust Dynatrace to modernize and automate cloud operations, release better software faster, and deliver unrivalled digital experiences.

Dynatrace makes it simple to monitor and run the most complex, hyper-scale multicloud systems. Dynatrace is a full-stack, completely automated monitoring solution that can track every user and every transaction across every application.

We're looking for a Senior Machine Learning Engineer (MLOps) to build and scale production ML services for our Business Insights products. You will be responsible for driving delivery of major projects across both LLM and traditional ML domains, including data pipeline design, model training, deployment, and monitoring, collaborating with Data Science and Software Engineering to uphold standards for reliability, latency, and cost.

Responsibilities

  • Design and implement robust data and ML pipelines for training, deployment, and inference at scale, ensuring reliability, performance, and cost efficiency across cloud environments.

  • Deliver production ML services using cloud-native patterns (e.g., managed services, serverless, container orchestration) optimized for low latency and high throughput.
  • Establish MLOps practices: dataset and model versioning, experiment tracking, promotion gates from development to production, and safe rollback or canary strategies.
  • Build ETL/ELT workflows with clear schema management, data validation, reproducibility, and performance tuning for large-scale datasets.
  • Implement strategies for scalable inference, including caching, batching, autoscaling, and hardware-aware optimizations to meet service-level objectives.
  • Set technical direction for ML service architecture and pipeline design, ensuring scalability and portability across platforms.
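As a minimal sketch of the caching-and-batching pattern named above (the `BatchedPredictor` class and its model stub are illustrative assumptions, not Dynatrace code or a specific library API):

```python
from typing import Dict, List

class BatchedPredictor:
    """Micro-batching with a result cache: cached inputs skip the model,
    and the remaining misses are deduplicated into one forward pass."""

    def __init__(self) -> None:
        self._cache: Dict[float, float] = {}

    def _model_forward(self, batch: List[float]) -> List[float]:
        # Stand-in for a real model call (e.g. a PyTorch/TensorFlow model);
        # here it just doubles each input so the example is runnable.
        return [x * 2.0 for x in batch]

    def predict(self, xs: List[float]) -> List[float]:
        # Collect unique cache misses, run them in a single batch,
        # then answer every request from the cache.
        misses = sorted({x for x in xs if x not in self._cache})
        if misses:
            for x, y in zip(misses, self._model_forward(misses)):
                self._cache[x] = y
        return [self._cache[x] for x in xs]
```

In production the cache would typically be bounded (LRU) or external (e.g. Redis), and batching would be time-windowed rather than per-request, but the shape of the trade-off is the same.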

Operations, Reliability, and Governance

  • Instrument services with metrics, logs, and traces; maintain dashboards and alerts for latency, throughput, errors, drift, and cost.
  • Run offline and online evaluations for accuracy, drift, stability, and cost; maintain golden datasets and automated promotion gates.
  • Own lifecycle management: training/retraining schedules, deployment procedures, incident playbooks, and post-incident reviews.
  • Implement robust access controls, secrets management, data governance, and auditability across platforms.
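The automated promotion gates mentioned above can be sketched as a simple threshold check (the metric names and thresholds below are hypothetical; a real gate would read them from an experiment-tracking or model-registry system such as MLflow):

```python
from typing import Dict

# Hypothetical gates: minimum accuracy and maximum p99 latency
# a candidate model must meet before promotion to production.
GATES = {"accuracy": 0.90, "p99_latency_ms": 250.0}

def passes_promotion_gate(metrics: Dict[str, float]) -> bool:
    """Return True only if every gate is satisfied; missing metrics fail."""
    if metrics.get("accuracy", 0.0) < GATES["accuracy"]:
        return False
    if metrics.get("p99_latency_ms", float("inf")) > GATES["p99_latency_ms"]:
        return False
    return True
```

Wiring a check like this into CI makes the development-to-production promotion automatic and auditable, with rollback or canary routing as the failure path.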

Requirements

  • Professional Python: 5+ years writing production-quality code with testing/packaging and ML/DS libraries (MLflow, FastAPI, scikit-learn, PyTorch or TensorFlow).

  • MLOps: 3+ years with model registries, experiment tracking, promotion gates, and safe deployment strategies.
  • Data engineering: 3+ years building reliable ETL/ELT, schema evolution, data validation, and performance tuning on large-scale datasets.
  • CI/CD and IaC: 3+ years designing and owning build/test/deploy pipelines, plus infrastructure automation.
  • Containers and orchestration: 3+ years operating ML services on Kubernetes or equivalent.
  • Communication: clear design docs, ability to explain trade-offs to technical and non-technical stakeholders.
  • Education: Master's degree or equivalent practical experience in CS/Engineering/Math or related field.

Preferred Requirements:

  • Experience with SQL-centric data platforms (e.g., Snowflake) or cloud ML workloads (AWS/GCP/Azure).
  • Observability and monitoring integration (Dynatrace or similar).
  • Workflow orchestration (Prefect, Airflow) and CI tools (Jenkins, GitHub Actions).
  • Streaming and near real-time patterns (Kafka, Kinesis).
  • Security and privacy: PII handling, audit trails, policy enforcement.
  • Domain: telemetry and observability, time-series modelling, anomaly detection.

About the company

Dynatrace (NYSE: DT) is the world-leading AI-powered observability platform. 

We’re advancing observability for today’s digital businesses, and helping to transform the complexity of modern digital ecosystems into powerful business assets. By leveraging AI-powered insights, Dynatrace enables organizations to analyze every transaction, automate at the speed of AI, and innovate faster and without limits to drive their business forward. 

Our culture, fueled by curiosity, openness, and authenticity, drives our relentless pursuit of innovation and excellence in crafting the Dynatrace platform.
