Azure / Databricks / Architect

Tekaccel Inc
Louisville, United States of America
yesterday

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English

Job location

Louisville, United States of America

Tech stack

Java
Artificial Intelligence
Amazon Web Services (AWS)
Azure
Big Data
Cloud Database
Code Review
Continuous Integration
Information Engineering
Data Governance
ETL
Data Warehousing
DevOps
Distributed Systems
Python
Performance Tuning
Software Architecture
SQL Databases
Google Cloud Platform
Cloud Platform System
Data Ingestion
Spark
Data Lake
PySpark
Integration Frameworks
Software Coding
Data Pipelines
Databricks
Programming Languages

Job description

Architect and develop scalable data pipelines using Databricks, Apache Spark, and related technologies (a minimal sketch follows this list).
Collaborate with data scientists, analysts, and business stakeholders to understand requirements and deliver robust solutions.
Optimize ETL workflows for performance, reliability, and cost efficiency on cloud platforms (Azure, AWS, or Google Cloud Platform).
Implement data governance, security, and compliance best practices within the Databricks environment.
Mentor junior engineers and contribute to code reviews, architectural decisions, and platform enhancements.
Develop and maintain documentation for data pipelines, architecture, and operational procedures.
Troubleshoot and resolve complex technical issues related to data ingestion, transformation, and storage.
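To illustrate the kind of pipeline work the first responsibility describes, here is a minimal PySpark sketch of a raw-to-curated ingestion step that writes a Delta Lake table. The paths, schema fields, and table names are hypothetical placeholders for illustration, not details from this posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical example: ingest raw JSON events and write a cleaned Delta table.
# All paths, column names, and table names below are illustrative assumptions.
spark = SparkSession.builder.appName("events-ingestion").getOrCreate()

raw = (
    spark.read.format("json")
    .load("abfss://landing@example.dfs.core.windows.net/events/")  # assumed ADLS path
)

cleaned = (
    raw.dropDuplicates(["event_id"])                  # basic de-duplication
    .withColumn("event_date", F.to_date("event_ts"))  # derive a partition column
    .filter(F.col("event_id").isNotNull())            # drop malformed records
)

(
    cleaned.write.format("delta")    # Delta Lake as the assumed storage format
    .mode("append")
    .partitionBy("event_date")
    .saveAsTable("silver.events")    # hypothetical metastore table name
)
```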

We are seeking a highly skilled Databricks Developer/Architect with deep hands-on coding experience. The ideal candidate will design, implement, and optimize big data solutions on the Databricks platform, leveraging advanced data processing frameworks, cloud architectures, and automation strategies.

Role description: Azure Architect with Databricks Asset Bundles and the DQX framework; understanding of GenAI and Iceberg table implementation.

Requirements

Required skills:
Extensive hands-on experience with Databricks and Apache Spark (PySpark, Scala, or SQL).
Proficiency in at least one programming language (Python, Scala, or Java).
Strong understanding of cloud data architectures (Azure Data Lake, AWS S3, Google Cloud Platform Storage) and the related ecosystem.
Experience with CI/CD tools and DevOps practices for data engineering.
Familiarity with data warehousing concepts and tools (Delta Lake, SQL, etc.).
Solid grasp of distributed computing, performance tuning, and big data best practices.
Excellent communication and problem-solving skills.

Desirable skills:
Iceberg table implementation (see the sketch below).
Databricks Asset Bundles and the DQX framework.
Exposure to and understanding of GenAI and its opportunities in data engineering.
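As a concrete illustration of the Iceberg item above, here is a minimal PySpark sketch that creates and appends to an Apache Iceberg table. The catalog configuration, warehouse path, and table names are assumptions for illustration (and the iceberg-spark-runtime package is assumed to be on the classpath); on Databricks, Iceberg access is typically wired up through the platform's catalog instead.

```python
from pyspark.sql import SparkSession

# Hypothetical local setup: an Iceberg Hadoop catalog named "demo".
# All names and paths are illustrative assumptions, not from this posting.
spark = (
    SparkSession.builder.appName("iceberg-demo")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.demo", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.demo.type", "hadoop")
    .config("spark.sql.catalog.demo.warehouse", "/tmp/iceberg-warehouse")
    .getOrCreate()
)

# Create an Iceberg table with a hidden partition transform on the timestamp.
spark.sql("""
    CREATE TABLE IF NOT EXISTS demo.db.orders (
        order_id BIGINT,
        amount   DOUBLE,
        order_ts TIMESTAMP
    ) USING iceberg
    PARTITIONED BY (days(order_ts))
""")

# Append a row.
spark.sql("""
    INSERT INTO demo.db.orders
    VALUES (1, 19.99, current_timestamp())
""")

# Iceberg exposes snapshot history as metadata tables, useful for time travel.
spark.sql("SELECT snapshot_id, committed_at FROM demo.db.orders.snapshots").show()
```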
