Data Engineer
Sky Solutions LLC.
Tysons, United States of America
Role details
Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior
Job location
Remote
Tysons, United States of America
Tech stack
API
Airflow
Amazon Web Services (AWS)
Databases
Continuous Integration
Data as a Service
Information Engineering
Data Integrity
Python
Microsoft SQL Server
OpenFlow
Performance Tuning
Cloud Services
Salesforce
SQL Databases
Data Streaming
Systems Integration
Data Ingestion
System Availability
Snowflake
Star Schema
Kafka
DevSecOps
Job description
- Build and maintain ingestion pipelines from multiple source systems (APIs, databases, streaming platforms).
- Develop transformation logic using dbt to support curated data products.
- Implement workflow orchestration using Apache Airflow.
- Deploy and optimize Snowflake schemas, warehouses, and performance settings.
- Collaborate with Data Analysts and Product Developers to translate requirements into scalable pipelines.
- Ensure pipelines meet SLAs for latency, quality, and data availability.
- Troubleshoot pipeline failures and perform root-cause analysis.
- Work with DevSecOps to deploy pipelines via CI/CD in AWS.
Requirements
We are seeking a Data Engineer experienced in building scalable, cloud-native data pipelines and data products. The ideal candidate has hands-on experience with AWS, Snowflake, Airflow, dbt, and streaming or batch ingestion tools. This engineer will be responsible for developing ingestion pipelines, transformation logic, and curated datasets used across the enterprise.
- 5+ years of Data Engineering experience
- Strong experience with AWS data services (Lambda, S3, Glue, Step Functions, Kinesis, etc.)
- Hands-on experience with Snowflake (warehouses, roles, integrations, performance tuning)
- Proficiency with Airflow for scheduling/orchestration
- Experience using dbt for transformations
- Experience with SQL and Python
- Experience integrating with APIs, SQL Server, Salesforce, or similar platforms
Preferred Qualifications:
- Experience with Kafka or OpenFlow ingestion
- Experience with Precisely or similar data catalog/data integrity tools
- Experience in GovCloud environments
- Familiarity with UDM or enterprise semantic models