Data Engineer
Job description
This contingent Software Engineer 4 role supports complex software and data engineering initiatives at scale. The engineer will analyze multi-faceted technical challenges, design and optimize data pipelines, and collaborate cross-functionally on enterprise data processing systems. The position requires strong database expertise, SQL and PL/SQL proficiency, cloud platform experience, and familiarity with large-scale data engineering tools such as Spark, Hadoop, or Kafka. The engineer will work on high-impact efforts involving data ingestion, ETL pipelines, performance tuning, and automation while applying best practices in compliance, governance, and engineering quality.
Day-to-Day Responsibilities:
- Develop, optimize, and maintain data pipelines, ETL workflows, and processing frameworks.
- Write and optimize complex SQL and PL/SQL queries for relational databases.
- Build backend data components using Python or Java.
- Manage and process Parquet files in cloud object storage (S3).
- Work with both relational and NoSQL databases across systems.
- Use big-data tools such as Apache Spark, Hadoop, and Kafka.
- Perform performance tuning and automation, including report automation and query optimization.
- Participate in Agile ceremonies: backlog grooming, sprint planning, daily standups.
- Apply AI/ML concepts where appropriate in data workflows.
- Work with cloud platforms (AWS, Google Cloud Platform, Azure) for data processing and pipeline orchestration.
- Collaborate with engineering, product, and data teams on requirements and solution design.
- Support multi-faceted, large-scale engineering initiatives requiring deep analysis and cross-functional coordination.
Requirements
- 5+ years of Software Engineering experience (or equivalent through work, consulting, military, training, or education).
- Strong Data Engineering experience with ETL, pipelines, and large-scale data processing.
- Proficiency in SQL and PL/SQL, with strong relational database experience.
- Experience with Python or Java for backend/data workflows.
- Big data tech experience (Apache Spark, Hadoop, Kafka).
Pluses:
- Experience working with Parquet files in S3 and cloud object storage.
- Familiarity with NoSQL databases.
- Hands-on performance tuning + automation (including report automation).
- Experience with AI/machine learning concepts.
- Experience with AWS, Google Cloud Platform, or Azure.
- Proven experience working in Agile environments (grooming, planning, standups).