Data Engineer
Job description
Data is a vital part of our business, and with your help we can improve the services we provide to our customers through engineering excellence in our data products. You'll work on exciting data products that support TV, Broadband, Digital Voice, and market-leading technologies like Unbreakable Wi-Fi. We deliver these critical services using the latest cloud technologies on AWS, GCP, and Cloudera.
When you join us as a lead data engineer, you'll have the opportunity to be part of our Data & AI Enablement Team where you will help to design, build, and support high quality engineering products from our extensive data sources and multi-cloud environment.
As a team we believe in the agile principles of openness, transparency and continuous improvement, which underpin strong relationships with our business stakeholders.
There will be lots of opportunities to explore new technologies, develop new skills, innovate and grow as an engineer.
Responsibilities
- Design, develop, and maintain end-to-end data pipelines using Ab Initio (GDE, PDL, Conduct>IT, Express>IT).
- Build scalable batch and streaming workloads using Apache Spark (PySpark/Scala).
- Develop and orchestrate data ingestion flows using Apache NiFi for real-time and near-real-time use cases.
- Optimize ETL/ELT jobs for scalability, throughput, and resource efficiency.
- Perform bottleneck analysis and tuning for Spark jobs, NiFi processors, and Ab Initio graphs.
- Deliver demos and knowledge transfers to stakeholders on product developments.
- Share your experience and knowledge with junior engineers.
- Improve automated testing and deployments using CI/CD pipelines.
- Line manage a small team of junior data engineers.
- Pursue continual learning through internal and external training.
Requirements
- Proficient in building Big Data solutions using Hadoop, GCP, or AWS.
- Skilled in technologies such as Scala, Java, Python, Spark (PySpark/Scala), BigQuery, Dataflow, and BigTable.
- Experience with Terraform, Kafka, Kinesis, and SQL is nice to have.
- Strong expertise in Ab Initio (GDE, PDL, Conduct>IT, Express>IT, EME) with a focus on parallel processing and component optimisation.
- Hands-on experience with Apache Spark, including Spark SQL, Spark Streaming, and performance tuning.
- Practical experience with Apache NiFi for flow design, processor tuning, error handling, and security configurations.
- Good understanding of Infrastructure as Code toolsets.
- Experienced in implementing Continuous Integration and Deployment strategies.
- Proficient in using Atlassian tools such as Confluence and JIRA.
- Experienced in version control using Git or SVN.
- Able to clearly present technical issues, progress, and outcomes to team members and Product Owners.
- Demonstrates enthusiasm for learning and applying new technologies.
- Experience or interest in managing and developing team members.
- Capable of building new solutions as well as supporting existing data products and platforms.
Benefits & conditions
Tailored benefits make a real difference. That's why we offer a comprehensive range to support your growth, wellbeing, and everyday life. You can design the package to suit you and your lifestyle. Your core benefits include:
- 10% on target annual bonus
- Access to an online private GP 24/7 for you and your immediate family
- Market-leading paid carer's leave with up to 2 weeks off
- Equalized maternity, paternity, and adoption leave - 18 weeks' full pay and 8 weeks' half pay
- Discounted EE and BT products, including mobile and broadband
- Market-leading pension scheme: 5% from you and 10% from us
- Holiday purchase scheme
You can select additional benefits, including healthcare, dental, gym memberships and more when you're ready. Ready to connect for good and help shape the future?