AWS Database Engineer / Cloud DBA

OpenKyber LLC

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior

Job location

Remote

Tech stack

API
Amazon Web Services (AWS)
Apache HTTP Server
Systems Engineering
Azure
Big Data
Business Software
Cloud Computing
Databases
Continuous Delivery
Data Deduplication
Data Governance
Data Integrity
ETL
Data Visualization
DevOps
Perl
Hadoop
Monitoring of Systems
Python
Machine Learning
Meta-Data Management
Metadata Repositories
Oracle Applications
Queueing Systems
Cloud Services
Ruby
SAS (Software)
SQL Databases
Tableau
Teradata
Unstructured Data
Data Ingestion
Spark
GIT
Data Lake
Collibra
QlikView
Kafka
Apache Nifi
Spark Streaming
Data Pipelines
Databricks

Job description

We are seeking a Senior Data Engineer to support our client with data ingestion, data deduplication, and data tagging for the migration of a large-scale data environment into Databricks.

  • Design, develop, and maintain scalable data ingestion pipelines to onboard structured, semi-structured, and unstructured data from batch and streaming sources (e.g., APIs, databases, flat files, message queues) into the Azure/Databricks environment.

  • Implement de-duplication strategies across large-scale datasets using deterministic and probabilistic matching techniques to ensure data integrity and reduce redundancy within the Data Lake.
  • Develop and enforce data tagging frameworks to classify, label, and annotate datasets with appropriate metadata (e.g., sensitivity, source, domain, lineage) to support data governance, discoverability, and compliance requirements.
  • Assist with operationalizing deployments and supporting cloud services for ETL operations, including standardizing and automating processes and workflows, creating documentation and knowledge articles, and assisting operations staff who have limited cloud experience.
  • Deliver written and oral presentations to senior CIO management on the status of current efforts.
  • Apply skills and experience in business management, systems engineering, operations research, and management engineering, typically with specialization in a particular technology or business application; keep abreast of technological developments and industry trends.
  • Assist with deployment, configuration, and management of Azure Cloud environment.
  • Assist with migration efforts of existing ETL jobs into Azure/Databricks cloud environment.
  • Share optimizations and efficiency gains with the larger team and management.
  • Automate solutions to repetitive problems and tasks.
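As a rough illustration of the deterministic and probabilistic de-duplication mentioned in the responsibilities above, here is a minimal sketch in plain Python. It is not the client's actual pipeline: the record layout, the email-based exact-match rule, the `difflib` similarity measure, and the 0.85 threshold are all hypothetical choices for the sake of the example.

```python
# Hedged sketch: combine a deterministic rule (exact normalized email match)
# with a probabilistic rule (fuzzy name similarity) to drop duplicates.
# Records, fields, and threshold are hypothetical.
from difflib import SequenceMatcher

records = [
    {"id": 1, "email": "a.smith@example.com", "name": "Alice Smith"},
    {"id": 2, "email": "a.smith@example.com", "name": "Alice R. Smith"},  # exact email duplicate
    {"id": 3, "email": "asmith@example.net", "name": "Alice Smyth"},      # fuzzy name duplicate
    {"id": 4, "email": "b.jones@example.com", "name": "Bob Jones"},
]

def deduplicate(recs, name_threshold=0.85):
    kept = []
    seen_emails = set()
    for r in recs:
        # Deterministic rule: an identical normalized email means the same entity.
        email = r["email"].lower()
        if email in seen_emails:
            continue
        # Probabilistic rule: high name similarity to an already-kept record.
        if any(SequenceMatcher(None, r["name"].lower(), k["name"].lower()).ratio()
               >= name_threshold for k in kept):
            continue
        seen_emails.add(email)
        kept.append(r)
    return kept

unique = deduplicate(records)  # keeps ids 1 and 4
```

At production scale this pairwise comparison would typically be replaced with blocking/partitioning and a distributed entity-resolution pass (e.g., in Spark), but the two-rule structure stays the same.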

Requirements

Do you have experience in metadata management? Do you have a Bachelor's degree?

  • Must be eligible for a Position of Public Trust, including U.S. citizenship or permanent residency, five years of U.S. residency, and no more than six months of international travel in the past five years (excluding travel for U.S.-based work).

  • Bachelor's degree and 13 years of experience.
  • A degree from an accredited college/university in the applicable field of services is preferred.
  • Four additional years of relevant experience are required in lieu of a college degree.
  • If the degree is not in the applicable field, four additional years of related experience are required.
  • 5+ years of demonstrated experience designing and implementing data ingestion pipelines using tools such as Azure Data Factory, Apache Kafka, Apache NiFi, Spark Structured Streaming, or equivalent technologies.
  • 5+ years of experience applying de-duplication techniques at scale, including record linkage, fuzzy matching, and entity resolution across structured and unstructured datasets.
  • 5+ years of hands-on experience with data tagging, metadata management, tagging schemas, data catalogs (e.g., Azure Purview, Apache Atlas), and automated classification tools to support data governance and lineage tracking.
  • 5+ years of demonstrated experience working with unstructured data.
  • 2+ years of experience using Databricks or other Spark-based platforms.
  • Fluency in at least one scripting language: Python, Perl, Ruby, or equivalent.
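The tagging and metadata-management requirement above can be pictured with a small sketch. This is a hypothetical rule-based classifier, not the schema of any specific catalog such as Azure Purview or Apache Atlas: the tag vocabulary, column patterns, and sensitivity levels are invented for illustration.

```python
# Hedged sketch of automated data tagging: attach governance metadata
# (sensitivity, source) to dataset columns via simple name-based rules.
# The patterns and tag values below are hypothetical.
SENSITIVE_PATTERNS = ("ssn", "email", "phone", "dob")

def tag_columns(columns, source):
    """Return a column -> metadata mapping for catalog ingestion."""
    tags = {}
    for col in columns:
        sensitive = any(p in col.lower() for p in SENSITIVE_PATTERNS)
        tags[col] = {
            "sensitivity": "restricted" if sensitive else "internal",
            "source": source,
        }
    return tags

catalog = tag_columns(["customer_id", "email_address", "ssn"], source="crm_export")
```

In practice these generated tags would be pushed into a data catalog's API so that lineage tracking and discovery tools can consume them; the rule set would usually be supplemented with content-based classifiers rather than column names alone.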

Desired Skills:

  • Experience integrating Git into continuous deployment pipelines and with DevOps monitoring tools.
  • Experience with one or more of the following products and technologies: SAS, Python, C++, Hadoop, SQL database coding, Teradata, Oracle, Amazon S3, Apache Spark, machine learning, natural language processing (NLP), and visualization tools such as Tableau, Strategy, or Qlik.
  • Strong skills and experience in cloud operations support in Azure.

About the company

OpenKyber is an IT services organization, with a mission to bring great people and great organizations together. Our diverse client base represents a wide range of industries, including technology, telecom, insurance, healthcare, manufacturing, banking & financial services, food & commodities trading and federal organizations. Our teams of experienced recruiters directly work with client companies seeking exceptional people to help with their business initiatives. OpenKyber, Inc. is an Equal Opportunity Employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, age, disability, military status, national origin or any other characteristic protected under federal, state, or applicable local law.

Apply for this position