Data Engineer

Visa
Charing Cross, United Kingdom
3 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Intermediate

Job location

Charing Cross, United Kingdom

Tech stack

Artificial Intelligence
Airflow
Amazon Web Services (AWS)
Unit Testing
Azure
Big Data
Unix
Command-Line Interface
Continuous Integration
Directed Acyclic Graphs (DAGs)
Information Engineering
Data Governance
ETL
Data Visualization
Software Debugging
GitHub
Hadoop
Hive
Python
Operational Data Store
Power BI
Cloud Services
DataOps
SQL Databases
Tableau
Unstructured Data
Workflow Management Systems
Data Processing
Google Cloud Platform
SQL Optimization
Delivery Pipeline
Spark
Git
Information Technology
Data Analytics
Data Management
Software Version Control
Data Pipelines
Jenkins
Programming Languages

Job description

Visa is accelerating the delivery of data analytics and AI-powered products to support client growth and strategic decision-making across regions. We are seeking a Data Engineer to execute on the design, delivery and evolution of scalable data engineering capabilities that underpin Data Science, AI and client-facing products for all European markets.

  • Requirement Analysis: Understand and translate business needs into data models supporting long-term solutions
  • Build, manage and deploy large scale ETL processes to generate data assets for the region
  • Build modular and reusable code with configurability and scalability in mind, while adhering to low-level design
  • Perform thorough unit testing of development tasks and document the test results using standard defined templates
  • Build, schedule, and manage DAGs in Apache Airflow efficiently
  • Monitor data processing tasks using Airflow
  • Ensure quality control of data assets through monitoring and reconciling data loaded across different stages of the data pipeline
  • Utilize strong data analytics skills to identify, discuss, and promptly fix data issues
  • Apply debugging skills to quickly rectify execution errors, ensuring minimal delays and impact on business operations
  • Collaborate and communicate with stakeholders for requirement understanding and clarifications
  • Maintain the highest level of quality and detail-oriented approach in daily tasks
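
The reconciliation responsibility above can be illustrated with a minimal sketch: comparing row counts and column totals between two pipeline stages. This is a hypothetical example, not Visa's actual tooling; the stage names, column key and tolerance are invented for illustration.

```python
# Hypothetical sketch of a cross-stage reconciliation check:
# compare row counts and a per-column sum between two pipeline stages.

def reconcile(stage_a_rows, stage_b_rows, key="amount", tolerance=0.0):
    """Return count and sum differences between two pipeline stages."""
    count_a, count_b = len(stage_a_rows), len(stage_b_rows)
    sum_a = sum(row[key] for row in stage_a_rows)
    sum_b = sum(row[key] for row in stage_b_rows)
    return {
        "count_diff": count_b - count_a,
        "sum_diff": round(sum_b - sum_a, 2),
        "ok": count_a == count_b and abs(sum_a - sum_b) <= tolerance,
    }

# Example: one row dropped between a staging and a reporting layer
staging = [{"amount": 10.0}, {"amount": 20.0}, {"amount": 5.5}]
reporting = [{"amount": 10.0}, {"amount": 20.0}]

result = reconcile(staging, reporting)
print(result)  # flags the missing row and the 5.5 shortfall
```

In practice a check like this would run as an Airflow task after each load, failing the DAG run when `ok` is false so the issue surfaces before downstream consumers see the data.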

Requirements

The role requires understanding and translating business needs into data models, creating robust data pipelines, and developing and maintaining databases. The candidate should be able to define and manage data load procedures, implement data strategies, and ensure robust operational data management systems. Collaborating with stakeholders across the organization to understand their data needs and deliver solutions is also a key part of this role. The ideal candidate will be proficient in big data tools like Hadoop, Hive and Spark; in programming languages such as Python and SQL; and will have strong analytical skills for working with structured and unstructured datasets.

  • 2-4 years of development experience building data pipelines and writing ETL code using Hive, PySpark, SQL and Unix
  • Experience in writing and optimizing SQL queries in a big data environment
  • Experience working in Linux/Unix environment and exposure to command line utilities
  • Experience creating/supporting production software/systems and a proven track record of identifying and resolving performance bottlenecks for production systems
  • Exposure to code version control systems (e.g. git, GitHub)
  • Experience working with cloud services (e.g. AWS, GCP, Azure)
  • Familiarity with common agentic coding tools
  • Hands-on experience building GenAI-based applications or workloads
  • Ability to understand a diverse set of business domains and requirements
  • Good understanding of agile working practices and related program management skills
  • Experience with workflow orchestration tools (e.g., Apache Airflow) and designing reliable data workflows
  • Experience applying data quality frameworks and practices (e.g., automated checks, reconciliation and data observability)
  • Strong communication and presentation skills, with the ability to interact with cross-functional team members at varying levels

Preferred Qualifications:

  • Advanced degree in a technical field (e.g. Computer Science, Statistics)
  • Experience with visualization tools like Tableau and Power BI
  • Exposure to Financial Services or the Payments Industry
  • Hands-on experience with CI/CD and automation pipelines (e.g., GitHub Actions, Jenkins, Azure DevOps) including testing and release practices

About the company

Visa is a world leader in payments technology, facilitating transactions between consumers, merchants, financial institutions and government entities across more than 200 countries and territories. We are dedicated to uplifting everyone, everywhere, by being the best way to pay and be paid. At Visa, you'll have the opportunity to create impact at scale: tackling meaningful challenges, growing your skills and seeing your contributions impact lives around the world. Join Visa and do work that matters - to you, to your community, and to the world. Progress starts with you.

Apply for this position