Remote Cloud Data Engineer (Must have experience with Pyspark)

ADVANTECH INC
Fort Meade, United States of America
6 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior
Compensation
$12K

Job location

Remote
Fort Meade, United States of America

Tech stack

Agile Methodologies
Amazon Web Services (AWS)
Azure
Big Data
Cloud Database
Cloud Engineering
Information Systems
Databases
Continuous Delivery
Continuous Integration
Data as a Service
Data Governance
ETL
Data Migration
Data Security
Database Queries
DevOps
Hive
JSON
Python
Key Management
Cisco Nexus Switches
Oracle
Performance Tuning
Query Optimization
Role-Based Access Control
Software Engineering
SQL Databases
Systems Integration
Unstructured Data
Data Logging
Data Ingestion
Change Data Capture
SC Clearance
PySpark
Information Technology
Deployment Automation
REST
Data Pipelines
Redshift
Databricks

Job description

Advantech GS Enterprises is seeking a highly skilled Cloud Data Engineer to support the DISA NEXUS program at Fort Meade. This role focuses on designing, building, and maintaining scalable cloud-based data solutions that support enterprise modernization and mission-critical analytics within secure DoD environments.

The ideal candidate will have strong experience developing production-grade data pipelines within Azure and/or AWS cloud environments, with expertise in PySpark, Spark SQL, Python, and modern data engineering best practices. This position offers the opportunity to contribute to a long-term federal modernization effort centered around multi-cloud integration, secure data architecture, and advanced analytics capabilities.

Responsibilities

Design, develop, and maintain scalable cloud-based ETL/ELT pipelines using Azure Synapse Analytics, Databricks, AWS Glue, and related technologies
Build and optimize large-scale data transformations using PySpark and Spark SQL, applying best practices for partitioning, query optimization, and performance tuning
Develop and support data ingestion frameworks for both structured data (relational tables, CSV files) and unstructured data (JSON, nested structures, REST API integrations)
Implement full and incremental data loading strategies, including change data capture (CDC), late-arriving record handling, and rerunnable pipelines
Design and maintain cloud-based data lakes, warehouses, and analytics-ready datasets supporting enterprise reporting and operational decision-making
Implement data quality and governance controls including schema validation, schema enforcement, schema drift handling, RBAC, lineage, cataloging, and credential management
Monitor and troubleshoot pipelines for latency, failures, logging, alerting, and operational reliability
Support CI/CD pipeline implementation for automated deployments, rollback strategies, and environment promotion processes
Collaborate with cybersecurity and cloud engineering teams to ensure compliance with RMF, STIG, FedRAMP, and DoD security standards
Utilize Oracle databases and cloud-native tools to support data migration, integration, and modernization initiatives
Support Agile development efforts and collaborate with DevOps and software engineering teams across the program lifecycle
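To illustrate the incremental-load duties above (CDC, late-arriving records, rerunnable pipelines), here is a minimal plain-Python sketch of the merge logic. The program's actual pipelines would presumably use PySpark with a MERGE-capable table format; this function and its event shape are hypothetical, shown only to convey the idempotent-merge idea:

```python
def merge_incremental(target, changes):
    """Apply a CDC-style change batch to a keyed target store (hypothetical sketch).

    target:  dict mapping key -> current record (includes its 'ts')
    changes: list of change events: {'key', 'op': 'upsert'|'delete', 'ts', 'data'}

    Events are applied in timestamp order; an event older than (or equal to)
    the current record's timestamp is skipped, so late-arriving and duplicate
    events are harmless and replaying the same batch is a no-op (rerunnable).
    """
    for evt in sorted(changes, key=lambda e: e["ts"]):
        cur = target.get(evt["key"])
        if cur is not None and evt["ts"] <= cur["ts"]:
            continue  # late-arriving or duplicate event: ignore
        if evt["op"] == "delete":
            target.pop(evt["key"], None)
        else:
            target[evt["key"]] = {"ts": evt["ts"], **evt["data"]}
    return target
```

In a real PySpark/Databricks pipeline the same pattern is typically expressed as a MERGE (upsert) keyed on the business key with a timestamp tie-breaker, which is what makes reruns safe.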

Requirements

Active Secret Clearance required
Bachelor's degree in Computer Science, Data Science, Engineering, Information Systems, or related technical field
5+ years of recent experience designing and operating scalable, production-grade data pipelines using Azure Synapse Analytics and/or Databricks
Strong hands-on experience with PySpark and Spark SQL for large-scale transformations and optimization
Advanced proficiency in Python and SQL for data querying, automation, and analysis
Experience ingesting and integrating structured and unstructured datasets from databases, flat files, REST APIs, and external systems
Experience implementing full and incremental load strategies including CDC concepts and rerunnable pipeline architectures
Experience with data quality controls including schema validation, enforcement, and schema drift handling
Experience with pipeline monitoring, logging, alerting, and operational support
Experience implementing CI/CD pipelines and automated deployment processes
Knowledge of data governance and secure access management concepts including RBAC and credential management
Hands-on experience with Azure and/or AWS cloud data services such as Azure Synapse, Azure Data Factory, Databricks, AWS Glue, Redshift, and S3
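The schema validation, enforcement, and drift-handling requirement can be sketched in a few lines of plain Python. The expected schema and field names below are hypothetical; in practice this check would likely run inside a PySpark ingestion job against the table's declared schema:

```python
EXPECTED_SCHEMA = {"id": int, "name": str, "amount": float}  # hypothetical schema

def validate_record(record, schema=EXPECTED_SCHEMA):
    """Check one record against an expected schema (illustrative sketch).

    - a missing required field is a validation error
    - a wrong type is a validation error (schema enforcement)
    - an unexpected extra field is reported as drift but does not fail the record
    Returns (ok, issues) where issues lists every error and drift warning.
    """
    errors = []
    for field, ftype in schema.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], ftype):
            errors.append(f"bad type for {field}: {type(record[field]).__name__}")
    drift = [f"drift: unexpected field {f}" for f in record if f not in schema]
    return not errors, errors + drift
```

Separating hard errors from drift warnings lets a pipeline quarantine bad records while merely logging new columns for governance review.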
