Senior DataOps Engineer F/M
Job description
- Innovation: Design cutting-edge data solutions with a seasoned team, on projects involving cloud-native, serverless, and real-time streaming architectures.
- Impact: Support Data Scientists and Analysts in deploying algorithms via APIs and scalable cloud architectures, while guiding Data Engineers in building robust pipelines for high data volumes.
- Automation: Optimize workflows using Infrastructure as Code (IaC) and CI/CD pipelines with Terraform and GitHub Actions.
- Security: Ensure security best practices in collaboration with our Cloud and Cybersecurity experts, while managing FinOps and system architecture principles.
- Performance: Maintain and monitor our ecosystem (Data Lake, Data Streaming, APIs) using Datadog to ensure 24/7 availability, while driving technological improvements to anticipate business needs.
- Knowledge Sharing: Write clear documentation, mentor junior team members, and nurture team spirit.
TECHNICAL ENVIRONMENT
- Languages/Tools: Python, Scala, Rust
- Cloud: AWS (Lambda, Kinesis, DynamoDB, Fargate, etc.)
- Data: Airflow, Flink, Snowflake, SQL, dbt
- DevOps: GitHub Actions, Terraform, Datadog
At Betclic, we rely on a modern, cloud-first tech stack designed to handle critical data volumes efficiently. If you're passionate about technical challenges and emerging technologies, this role is for you!
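For a concrete flavour of this stack, here is a minimal, purely illustrative sketch of the kind of serverless streaming code involved: a Python AWS Lambda handler that decodes Kinesis records and writes them to DynamoDB. The handler name, table name, and field layout are hypothetical assumptions for illustration, not our production code.

```python
# Illustrative sketch only: a minimal AWS Lambda handler consuming a Kinesis
# stream and persisting events to DynamoDB. The table name and event schema
# are hypothetical examples.
import base64
import json

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("transaction-events")  # hypothetical table name


def handler(event, context):
    """Decode Kinesis records and write each event to DynamoDB."""
    records = event.get("Records", [])
    for record in records:
        # Kinesis payloads arrive base64-encoded
        payload = base64.b64decode(record["kinesis"]["data"])
        item = json.loads(payload)
        table.put_item(Item=item)
    return {"processed": len(records)}
```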
Requirements
At Betclic, the European leader in gaming and online betting, technology is at the heart of our DNA. As a Senior DataOps Engineer, you'll join our Data Platform team, made up of DataOps and MLOps experts and architects renowned for their technical excellence and ability to handle massive real-time data volumes.
Your mission? Contribute to a high-performance, innovative data platform that processes millions of daily transactions, all in a collaborative, dynamic, and friendly environment.
With 4 to 5 years of experience, you'll play a key role in evolving our data ecosystem, working with modern technologies and a skilled team, with opportunities to make a direct impact on the business.
- 4 to 5 years of experience working on large-scale data projects (Data Lakes, Streaming, Pipelines) in a public cloud environment (AWS, GCP, or Azure).
- Proven ability to design complex, scalable, and cost-efficient data systems.
- A passion for technical excellence and a rigorous mindset.
- Autonomy combined with a strong team spirit and a desire to share knowledge.