Data Ops Engineer
Role details
Job location
Tech stack
Job description
1. Strong multi-project experience in several of the following (or similar) areas of Data Engineering.
2. This role will act as a Continuous Integration/Continuous Delivery (CI/CD) expert for the Data Office, helping Data Engineering teams automate as much of their work as possible to reduce waste and improve quality.
3. Continually challenging and improving our processes, tools, and methodologies.
4. Undertaking review and assurance activity, and providing other team members with guidance on design, build, and test activity.
Requirements
1. AWS data tooling such as S3/Glue/Redshift/SageMaker.
2. Familiarity with containerization (e.g., Docker/EC2), orchestration in an enterprise environment (Airflow), infrastructure automation (Terraform), CI/CD platforms (GitHub Actions, including administration), and password/secret management (HashiCorp Vault).
3. Strong data-related programming skills: SQL/Python/Spark/Scala.
4. Database technologies relating to Data Warehouse/Data Lake/Lakehouse patterns, and relevant experience handling structured and unstructured data.
Desirable Skills/Experience
1. Experience of working in an Agile team, preferably SAFe.
2. Experience with specific tooling: Qlik Replicate/Qlik Compose/Databricks/Informatica/SAS.
3. An understanding of data modelling methodologies (Kimball, Data Vault, Lakehouse).
4. An understanding of Data Science, AI, and Machine Learning ways of working.