DevOps Engineer

VTG LLC
McLean, United States of America
27 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior

Job location

McLean, United States of America

Tech stack

Artificial Intelligence
Amazon Web Services (AWS)
Apache HTTP Server
Bash
Cloud Engineering
Databases
Relational Databases
DevOps
Disaster Recovery
Distributed Systems
Geospatial Intelligence
Identity and Access Management
Python
PostgreSQL
MySQL
Open Source Technology
Performance Tuning
Prometheus
Software Systems
Systems Integration
Data Logging
Scripting (Bash/Python/Go/Ruby)
System Availability
Delivery Pipeline
Spark
Git
Containerization
Git Flow
Infrastructure Automation Frameworks
Data Management
Data Lakehouse
Functional Programming
CloudWatch
API Gateway
Terraform
Software Version Control
Docker
Jenkins
Microservices

Job description

The ideal candidate brings strong expertise in AWS, Terraform, containerization, and DevOps best practices, along with experience supporting data-intensive and mission-critical applications.

  • Design, implement, and maintain CI/CD pipelines using Jenkins to support automated build, test, and deployment processes.

  • Develop and manage infrastructure using Infrastructure as Code (IaC) tools, primarily Terraform.

  • Architect, deploy, and manage AWS cloud environments, including EC2, S3, RDS, Lambda, VPC, IAM, and CloudWatch.

  • Build and manage containerized environments using Docker and Podman.

  • Develop automation scripts using Bash and Python to streamline operations and deployments.

  • Install, configure, and manage self-hosted databases (PostgreSQL, MySQL), including:
      ◦ Backup and recovery
      ◦ High availability
      ◦ Performance tuning

  • Implement and maintain monitoring, logging, and alerting solutions using tools such as Prometheus, Alertmanager, and CloudWatch.

  • Integrate and deploy diverse software systems, including AWS services, open-source tools, COTS/GOTS products, and custom-built applications.

  • Ensure adherence to AWS architecture best practices, including networking, security, and IAM policies.

  • Manage Git-based workflows, including branching strategies and version control best practices.

  • Deploy and manage modern data platform components, such as:
      ◦ Apache Spark
      ◦ Trino
      ◦ Apache Ranger
      ◦ Apache Iceberg
      ◦ Apache Superset
      ◦ Data catalog solutions

  • Support and optimize data lakehouse architectures and supporting infrastructure.

  • Implement secrets management solutions using AWS Secrets Manager and Parameter Store.

  • Develop and maintain disaster recovery, backup strategies, and continuity planning.

  • Support API gateway integrations and microservices-based architectures.

  • Optimize AWS environments for cost efficiency and performance.

  • Support deployment and operation of AI/ML workloads and model-serving infrastructure.

  • Mentor junior team members and promote DevOps and cloud best practices.

  • Integrate with enterprise-level services and systems.

Requirements

We are seeking a highly skilled Senior DevOps Engineer to design, implement, and manage scalable cloud infrastructure and CI/CD pipelines in a dynamic AWS environment. This role will focus on automation, infrastructure as code, and deployment of complex data platforms and distributed systems.

  • Bachelor's degree in Geospatial Intelligence, Geography, Remote Sensing, Intelligence Studies, Engineering, or a related field, or equivalent experience.

  • Strong experience building and managing CI/CD pipelines using Jenkins.
  • Advanced proficiency with Terraform for Infrastructure as Code.
  • Hands-on experience with AWS services, including:
      ◦ EC2, S3, RDS, Lambda
      ◦ VPC, IAM, CloudWatch
  • Experience with containerization technologies such as Docker and Podman.
  • Strong scripting skills in:
      ◦ Bash (required)
      ◦ Python (required)
  • Experience managing self-hosted relational databases (PostgreSQL, MySQL).
  • Experience implementing monitoring and alerting solutions (Prometheus, Alertmanager, CloudWatch).
  • Strong understanding of AWS architecture, networking, security, and IAM best practices.
  • Experience with Git-based development workflows and version control.
  • Experience integrating and deploying complex, distributed systems.
