Data Engineer

BURGEON IT SERVICES LLC
Santa Clara, United States of America

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior

Job location

Santa Clara, United States of America

Tech stack

API
Amazon Web Services (AWS)
Data analysis
Big Data
Cloud Computing
Computer Programming
Databases
Continuous Integration
Data Validation
Information Engineering
ETL
Data Warehousing
Relational Databases
DevOps
Distributed Systems
Amazon DynamoDB
GitHub
Hadoop
Hive
Identity and Access Management
Python
PostgreSQL
Machine Learning
Metadata Management
Microsoft SQL Server
MongoDB
MySQL
NoSQL
Scala
SQL Databases
Data Streaming
Data Logging
Data Ingestion
Snowflake
Spark
CloudFormation
Data Lake
PySpark
Data Management
CloudWatch
Terraform

Job description

  • Design, build, and maintain scalable ETL/ELT pipelines using AWS services (Glue, Lambda, EMR, Step Functions).
  • Develop batch and real-time data ingestion processes from diverse sources (APIs, RDBMS, streaming platforms).
  • Optimize data workflows for performance, scalability, and cost-efficiency.

Data Platform Engineering

  • Architect and implement data lakes and data warehouses using S3, Redshift, Lake Formation, Athena.
  • Manage data modeling (star/snowflake schemas) and design optimized storage layers.
  • Implement data cataloging, metadata management, and data lifecycle policies.

Big Data & Analytics

  • Work with big data tools such as Spark, Hadoop, Hive, and PySpark.
  • Support analytics and machine learning teams by providing high-quality, curated datasets.

Cloud Infrastructure & DevOps

  • Build CI/CD pipelines for data engineering (CodePipeline, CodeBuild, GitHub Actions).
  • Write IaC using Terraform or AWS CloudFormation.
  • Monitor, troubleshoot, and optimize workloads using CloudWatch and distributed logging.

Data Quality & Governance

  • Implement data validation frameworks and automated quality checks.
  • Ensure compliance with security, privacy, and governance standards (IAM, KMS, encryption).

Requirements

  • 10+ years of experience in data engineering or related fields.

  • Strong hands-on experience with:
      • AWS services: Glue, S3, Redshift, EMR, Lambda, Kinesis, Athena.
      • Big data tech: Spark/PySpark, Hadoop, Hive.
      • Programming: Python, SQL, Scala (optional).
      • Databases: SQL Server, PostgreSQL, MySQL, NoSQL (DynamoDB, MongoDB).

  • Experience with CI/CD, DevOps, and IaC tools.

  • Strong understanding of data modeling, warehousing, and distributed computing.
