Data Engineer 2
Role details
Job description
- This individual contributor role is responsible for designing and developing data solutions that are strategic to the business and built on the latest technologies and patterns
- This is a global role that requires partnering with the broader JLLT team at the country, regional, and global levels, applying in-depth knowledge of data, infrastructure, technologies, and data engineering
Responsibilities:
Technical Development
- Design and implement robust, scalable data pipelines using Databricks, Apache Spark, Delta Lake, and BigQuery (a minimal pipeline sketch follows this list)
- Design and implement efficient data pipeline frameworks, ensuring the smooth flow of data from various sources to data lakes, data warehouses, and analytical platforms
- Troubleshoot and resolve issues related to data processing, data quality, and data pipeline performance
- Document data infrastructure, data pipelines, and ETL processes, ensuring knowledge transfer and smooth handovers
- Create automated tests and integrate them into testing frameworks
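For illustration only (not part of the formal responsibilities), the sketch below shows the kind of pipeline work described above: a minimal PySpark job that ingests raw files, applies a simple data-quality rule, and writes a Delta table. The storage path, table name, and column names are hypothetical and would differ in a real workspace.

```python
from pyspark.sql import SparkSession, functions as F

# Minimal ingestion sketch; paths, table names, and columns are hypothetical.
spark = (
    SparkSession.builder
    .appName("orders-ingest")
    .getOrCreate()
)

# Read raw order events from cloud storage (assumed location and layout).
raw = spark.read.json("abfss://landing@example.dfs.core.windows.net/orders/")

# Basic cleansing plus a simple quality rule: drop records without an order id.
clean = (
    raw
    .withColumn("ingested_at", F.current_timestamp())
    .filter(F.col("order_id").isNotNull())
    .dropDuplicates(["order_id"])
)

# Persist as a Delta table so downstream warehouse and analytics layers can consume it.
(
    clean.write
    .format("delta")
    .mode("append")
    .saveAsTable("capital_markets.orders_bronze")
)
```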
Platform Engineering
- Configure and optimize Databricks workspaces, clusters, and job scheduling (an illustrative job configuration follows this list)
- Work in a multi-cloud environment spanning Azure, GCP, and AWS
- Implement security best practices, including access controls, encryption, and audit logging
- Build integrations with market data vendors, trading systems, and risk management platforms
- Establish monitoring and performance tuning for data pipeline health and efficiency
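As a rough sketch of the cluster and scheduling configuration this work involves, the dictionary below outlines a scheduled Databricks job. The cluster sizing, node type, notebook path, schedule, and notification address are assumptions; the field names mirror the Databricks Jobs API but should be verified against the workspace's API version before use.

```python
# Illustrative Databricks job definition for a scheduled pipeline run.
# All names and values below are hypothetical examples.
job_definition = {
    "name": "orders-bronze-ingest",
    "new_cluster": {
        "spark_version": "14.3.x-scala2.12",
        "node_type_id": "Standard_DS3_v2",
        "autoscale": {"min_workers": 2, "max_workers": 8},
        "spark_conf": {
            # Keep Delta write optimization on for write-heavy ingestion jobs.
            "spark.databricks.delta.optimizeWrite.enabled": "true",
        },
    },
    "notebook_task": {"notebook_path": "/pipelines/orders_bronze_ingest"},
    "schedule": {
        # Hourly run; cron syntax follows the Quartz format used by Databricks.
        "quartz_cron_expression": "0 0 * * * ?",
        "timezone_id": "UTC",
    },
    "email_notifications": {"on_failure": ["data-eng-oncall@example.com"]},
}
```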
Collaboration & Mentorship
- Collaborate with cross-functional teams to understand data requirements, identify potential data sources, and define data ingestion strategies
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver data solutions that meet their needs
Requirements
We are seeking a Data Engineer P2 who is a self-starter to work in a diverse and fast-paced environment as part of our Capital Markets Data Engineering team.
- Bachelor's degree in Computer Science, Data Engineering, or a related field (Master's degree preferred)
- Minimum of 3-5 years of experience in data engineering or full-stack development, with a focus on cloud-based environments
Technical Skills:
- Strong expertise in big data technologies (Python, SQL, PySpark, Spark), with a proven track record of delivering large-scale data projects
- Strong Databricks experience
- Strong database/backend testing skills, with the ability to write complex SQL queries for data validation and integrity (a validation example follows this list)
- Strong experience in designing and implementing data pipelines, ETL processes, and workflow automation
- Familiarity with data warehousing concepts, dimensional modeling, data governance best practices, and cloud-based data warehousing platforms (e.g., Google BigQuery, Snowflake)
- Experience with cloud platforms such as Microsoft Azure or Google Cloud Platform (GCP)
- Experience working in a DevOps model
- Experience with unit, functional, integration, user acceptance, system, and security testing of data pipelines
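To illustrate the SQL-based validation and automated testing called out above, the sketch below shows a small pytest-style check against a hypothetical Delta table. The table and column names are assumptions, and the queries would be adapted to the actual schema.

```python
import pytest
from pyspark.sql import SparkSession

# Hypothetical table under test; in practice this would point at a real
# table in the workspace catalog.
TABLE = "capital_markets.orders_bronze"

@pytest.fixture(scope="session")
def spark():
    return SparkSession.builder.appName("data-validation-tests").getOrCreate()

def test_no_duplicate_order_ids(spark):
    # Duplicate-key check expressed as SQL.
    dupes = spark.sql(f"""
        SELECT order_id, COUNT(*) AS cnt
        FROM {TABLE}
        GROUP BY order_id
        HAVING COUNT(*) > 1
    """)
    assert dupes.count() == 0, "Found duplicate order_id values"

def test_no_null_business_keys(spark):
    # Integrity check: business keys must be populated.
    nulls = spark.sql(f"""
        SELECT COUNT(*) AS cnt
        FROM {TABLE}
        WHERE order_id IS NULL OR trade_date IS NULL
    """)
    assert nulls.first()["cnt"] == 0, "Found rows with missing business keys"
```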
Core Competencies:
- Strong problem-solving skills and ability to analyze complex data processing issues
- Excellent communication and interpersonal skills to collaborate effectively with cross-functional teams
- Attention to detail and commitment to delivering high-quality, reliable data solutions
- Ability to adapt to evolving technologies and work effectively in a fast-paced, dynamic environment