Senior Data Engineer
Job description
As a Senior Data Engineer, you'll champion data quality, automation, and performance, while fostering a culture of collaboration, observability, and continuous improvement within our Data Product team.
What you'll be doing
As a Senior Data Engineer, you will:
- Participate in the development and administration of the Data Platform as a product.
- Support Product and Engineering teams in leveraging the Data Platform.
- Manage and monitor all Data Platform components running on Kubernetes, ensuring high availability and scalability.
- Apply best practices in Continuous Integration and Delivery (CI/CD).
- Contribute to the overall data quality strategy within Cobee by Pluxee.
- Develop tools and best practices to enable other teams to easily ingest new data tables, apply quality checks, and self-serve.
- Be part of the Data Product Team, empowering engineers and data analysts to work autonomously with data and extract valuable insights.
- Design and maintain batch and real-time data pipelines to support product and engineering teams.
- Collect, process, and clean data from multiple sources using SQL, Python, or any fit-for-purpose language.
Interview process
- Video call with a member of the People team to get to know you and your motivations.
- Discussion with the Product Manager and Engineering Manager about your experience, ambitions, and fit within the Cobee by Pluxee Data Product Team.
- Technical conversation with team members, including an architectural challenge to explore your technical skills and problem-solving approach.
Your team
You'll be part of the Data Product Team, while working with Product, Data, and Engineering teams across the globe.
Your location: Madrid (SP)
Requirements
Do you have experience in scalability?
- Proven experience working with Kubernetes and Helm charts for deploying data workloads.
- Basic knowledge of AWS services.
- Strong understanding of monitoring, data observability, and quality metrics.
- Familiarity with data orchestration frameworks (Dagster, Airflow, etc.).
- Hands-on experience with Kafka (Kafka Connect, Schema Registry, etc.) for streaming data ingestion.
- Experience building data tools to democratize data access across teams.
- Experience in building and maintaining analytical databases, and in designing high-performance data models. Experience with ClickHouse is a plus.
- Experience integrating diverse data sources (APIs, databases, files) using tools such as Airbyte or custom-built pipelines.
- Programming proficiency in Python, Go, or another relevant language.
- Understanding of dbt or similar transformation frameworks used by analysts to build and maintain models.
You'll thrive in this role if you're comfortable communicating in English and Spanish, enjoy collaborating with multicultural teams, and are motivated by solving complex data challenges in a global environment.