Data Engineer
Job description
Visa is accelerating the delivery of data analytics and AI-powered products to support client growth and strategic decision-making across regions. We are seeking a Data Engineer to execute the design, delivery, and evolution of scalable data engineering capabilities that underpin Data Science, AI, and client-facing products for all European markets.
- Requirement analysis: understand and translate business needs into data models supporting long-term solutions
- Build, manage and deploy large scale ETL processes to generate data assets for the region
- Build modular and reusable code, considering configurability and scalability while adhering to the low-level design
- Perform thorough unit testing of development tasks and document the test results using standard defined templates
- Build, schedule, and manage DAGs in Apache Airflow efficiently
- Monitor data processing tasks using Airflow
- Ensure quality control of data assets through monitoring and reconciling data loaded across different stages of the data pipeline
- Utilize strong data analytics skills to identify, discuss, and promptly fix data issues
- Apply debugging skills to quickly rectify execution errors, ensuring minimal delays and impact on business operations
- Collaborate and communicate with stakeholders for requirement understanding and clarifications
- Maintain the highest level of quality and a detail-oriented approach in daily tasks
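The reconciliation responsibility above can be sketched as a stage-to-stage row-count check. This is a minimal, hypothetical illustration in plain Python; the stage names, counts, and the `reconcile` helper are assumptions, not part of Visa's actual tooling:

```python
# Minimal sketch of reconciling record counts across pipeline stages.
# Stage names and counts below are hypothetical illustrations.

def reconcile(stage_counts, tolerance=0):
    """Compare each stage's row count against the preceding stage.

    Returns a list of (stage, expected, actual) tuples for stages
    whose count drifts from the previous stage by more than `tolerance`.
    """
    mismatches = []
    stages = list(stage_counts.items())
    for (_, prev_count), (name, count) in zip(stages, stages[1:]):
        if abs(prev_count - count) > tolerance:
            mismatches.append((name, prev_count, count))
    return mismatches

# Example: flag the drop between staging and mart layers.
counts = {"raw": 1_000_000, "staging": 1_000_000, "mart": 999_873}
for stage, expected, actual in reconcile(counts):
    print(f"{stage}: expected ~{expected}, loaded {actual}")
```

In practice, checks like this would run as a validation task inside the orchestrated workflow (e.g., an Airflow task downstream of each load) rather than as a standalone script.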
Requirements
The role requires understanding and translating business needs into data models, creating robust data pipelines, and developing and maintaining databases. The candidate should be able to define and manage data load procedures, implement data strategies, and ensure robust operational data management systems. Collaborating with stakeholders across the organization to understand their data needs and deliver solutions is also a key part of this role. The ideal candidate will be proficient in big data tools like Hadoop, Hive, and Spark and programming languages such as Python and SQL, and will have strong analytical skills for working with structured and unstructured datasets.
- 2-4 years of development experience in building data pipelines and writing ETL code using Hive, PySpark, SQL, and Unix
- Experience in writing and optimizing SQL queries in a big data environment
- Experience working in Linux/Unix environment and exposure to command line utilities
- Experience creating and supporting production software/systems, with a proven track record of identifying and resolving performance bottlenecks
- Exposure to code version control systems (e.g. git, GitHub)
- Experience working with cloud services (e.g. AWS, GCP, Azure)
- Familiarity with common agentic coding tools
- Hands-on experience building GenAI-based applications or workloads
- Ability to understand a diverse set of business domains and requirements
- Good understanding of agile working practices and related program management skills
- Experience with workflow orchestration tools (e.g., Apache Airflow) and designing reliable data workflows
- Experience applying data quality frameworks and practices (e.g., automated checks, reconciliation and data observability)
- Strong communication and presentation skills, with the ability to interact with cross-functional team members at varying levels
Preferred Qualifications:
- Advanced degree in a technical field (e.g., Computer Science, Statistics)
- Experience with visualization tools like Tableau and Power BI
- Exposure to Financial Services or the Payments Industry
- Hands-on experience with CI/CD and automation pipelines (e.g., GitHub Actions, Jenkins, Azure DevOps) including testing and release practices