Lead Data Engineer

JPMorgan Chase & Co.
Charing Cross, United Kingdom
2 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior

Job location

Charing Cross, United Kingdom

Tech stack

Airflow
Amazon Web Services (AWS)
Data analysis
Unit Testing
Azure
Google BigQuery
Continuous Delivery
Continuous Integration
Information Engineering
Data Governance
ETL
Data Security
Data Structures
Data Systems
Data Warehousing
Database Queries
Distributed Computing Environment
Fault Tolerance
Python
Enterprise Messaging Systems
Object-Oriented Software Development
Performance Tuning
Cloud Services
SQL Databases
Data Streaming
Testing Strategies
Workflow Management Systems
Data Processing
Google Cloud Platform
Cloud Platform System
Snowflake
Spark
Containerization
PySpark
Kubernetes
Information Technology
Apache Flink
Kafka
Data Management
Terraform
Docker
Redshift

Job description

As a Lead Data Engineer at JPMorgan Chase within Personal Investing, you will design, build, and operate a robust cloud-native data platform and pipelines that power analytics, regulatory reporting, and data-driven applications. You will help us deliver reliable, scalable, observable, and secure data solutions by applying strong software engineering fundamentals and modern data engineering patterns. You'll work closely with partners across product, analytics, and engineering to translate business needs into resilient technical designs. You'll also contribute to engineering excellence through best practices, mentoring, and thoughtful technical direction.

  • Design scalable, reusable data processing and data quality frameworks using Python, PySpark, and dbt

  • Build and optimize batch and streaming data pipelines with strong performance, fault tolerance, and observability
  • Develop and operate workflow orchestration (e.g., Apache Airflow) to schedule, monitor, and manage data movement and transformations
  • Model and transform data for analytics using SQL and dbt to support business intelligence and reporting workloads
  • Write production-grade Python/PySpark code with disciplined testing, performance tuning, and maintainable object-oriented design
  • Implement infrastructure-as-code (e.g., Terraform) to provision and manage cloud-based data platform components
  • Containerize and deploy services using Docker and Kubernetes (and related tooling such as Helm)
  • Collaborate with analysts, data scientists, and application teams to turn requirements into technical designs and delivered solutions
  • Own critical data systems by improving reliability, scalability, security, and operational excellence
  • Mentor junior engineers and influence the team's technical direction through standards, reviews, and knowledge sharing

Requirements

  • Degree in Computer Science or a STEM-related field (or equivalent)
  • Demonstrated experience delivering in an agile, fast-paced engineering environment
  • 8 years of recent, hands-on professional experience actively coding as a data engineer
  • Strong software engineering fundamentals (system design, data structures, object-oriented programming, testing strategies, and end-to-end development lifecycle)
  • Strong Python programming skills, including unit and integration testing
  • Hands-on experience building and operating cloud-based data platforms using major cloud services (e.g., AWS, Google Cloud, or Azure)
  • Experience with large-scale distributed data processing and performance tuning
  • Hands-on experience with modern data warehousing/lakehouse technologies (e.g., Redshift, BigQuery, Snowflake; and engines such as Spark, Flink, or Trino; and table formats such as Iceberg, Hudi, or similar)
  • Strong SQL skills and experience with SQL-based transformation tooling (e.g., dbt)
  • Experience designing and operating orchestration pipelines using Airflow or similar tools
  • Experience designing and building streaming pipelines using Kafka, Pub/Sub, or similar messaging systems

Preferred qualifications, capabilities, and skills

  • Data modeling experience for analytics and reporting use cases
  • Knowledge of security, risk, compliance, and governance considerations for data platforms
  • Experience building continuous integration and continuous delivery automation for data and platform services
  • Experience with container-based deployment environments (Docker, Kubernetes, etc.)
  • Demonstrated ability to coach teammates on engineering practices and contribute to a collaborative, inclusive team culture

About the company

J.P. Morgan is a global leader in financial services, providing strategic advice and products to the world's most prominent corporations, governments, wealthy individuals and institutional investors. Our "first-class business in a first-class way" approach to serving clients drives everything we do. We strive to build trusted, long-term partnerships to help our clients achieve their business objectives.

Our professionals in our Corporate Functions cover a diverse range of areas, from finance and risk to human resources and marketing. Our corporate teams are an essential part of our company, ensuring that we set our businesses, clients, customers and employees up for success.

Apply for this position