Senior Data Quality Engineer - TV-Insight
Job description
As Senior Data Quality Engineer, you'll take full co-ownership of TV-Insight's data infrastructure, ensuring that all data relevant to measurement, extrapolation, ad targeting, and reporting is accurate, timely, and efficiently processed.
You'll design, operate, and continuously improve robust and scalable data pipelines, playing a central role in the evolution of the TV-Insight technology ecosystem. Working closely with the Technology Division and data science teams, you'll help shape a data platform that enables reliable insights and self-service analytics across the organization.
RESPONSIBILITIES
Areas that play to your strengths
All the responsibilities we'll trust you with:
- You'll lead the operation, monitoring, and continuous improvement of data pipelines across all data domains. You guarantee data availability and correctness within production systems and implement robust scheduling, monitoring, and alerting solutions for all data workflows.
- You'll automate ETL workflows and manual maintenance processes, developing CI/CD-based automation for data pipelines and deployments. You introduce standardized data quality checks and reconciliation frameworks to ensure scalable, efficient, and reliable data operations.
- You'll define and enforce data validation rules and performance metrics, proactively identifying, analyzing, and resolving data inconsistencies. You drive continuous improvements in data integrity and completeness, ensuring stakeholders can rely on data for critical decisions.
- You'll partner with data scientists and analysts to ensure data reliability for modeling, experimentation, and reporting. Where needed, you contribute to internal documentation and knowledge sharing, helping teams understand and effectively use data assets and pipelines.
- You'll evolve TV-Insight's data platform towards self-service capabilities and high scalability. You participate in architecture design for future data products and act as a subject-matter expert for pipeline automation, data observability, and best practices in data engineering.
REQUIREMENTS
- University degree in Computer Science, Software Engineering, Data Science, Data Engineering, Information Systems or a related field
- 5+ years of experience in data engineering, data quality, or pipeline automation
- Proven experience with ETL frameworks (e.g., Airflow, Prefect, dbt)
- Strong knowledge of SQL, Python, and data warehouse concepts (e.g., BigQuery, Snowflake)
- Experience with CI/CD, Docker, and cloud-based deployment (e.g., GCP, AWS)
- Solid understanding of data validation, monitoring, and alerting frameworks
- Analytical mindset with strong attention to detail
- Strong communication skills and a team-oriented attitude
- Self-driven, reliable, and able to manage complex data operations independently