Databricks Engineer
SUNRAY INFORMATICS
Wilmington, United States of America
Role details
Contract type
Temporary contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior
Job location
Remote
Wilmington, United States of America
Tech stack
Amazon Web Services (AWS)
Azure
Big Data
Code Review
Computer Programming
Information Engineering
ETL
Dataspaces
Data Warehousing
Software Debugging
Distributed Computing Environment
Hive
Python
Performance Tuning
Scala
Data Processing
Google Cloud Platform
Spark
GIT
Data Lake
PySpark
Information Technology
Optimization Algorithms
Data Management
Data Lakehouse
Software Version Control
Data Pipelines
Serverless Computing
Databricks
Job description
- Architect and build scalable data platforms using Databricks and Apache Spark
- Lead the design and development of complex ETL/ELT pipelines and data workflows
- Optimize and tune large-scale Spark jobs for performance and cost efficiency
- Implement and manage Delta Lake architecture for reliable data lakes
- Define data engineering standards, best practices, and governance frameworks
- Collaborate with data scientists, analysts, and business stakeholders to translate requirements into technical solutions
- Design and implement real-time and batch data processing systems
- Ensure data quality, reliability, and observability across pipelines
- Lead code reviews, mentor junior engineers, and drive technical excellence
- Integrate Databricks with cloud-native services (AWS, Azure, or Google Cloud Platform)
- Troubleshoot and resolve complex production issues
Requirements
Mode of Interview: Telephonic & Skype
Primary Skills: DATABRICKS, Apache Spark
Candidates need a minimum of 9 years of experience. This is a 100% remote role on our W2.
We are looking for a Senior Databricks Engineer to lead the design and implementation of scalable, high-performance data platforms using Databricks and Apache Spark. This role requires strong technical expertise, architectural thinking, and the ability to mentor junior engineers while driving best practices across the data ecosystem.
- 9+ years of experience in data engineering or big data development
- Strong hands-on experience with the Databricks platform
- Expert-level proficiency in Apache Spark (PySpark, Scala, Spark SQL)
- Strong programming skills in Python and/or Scala
- Deep understanding of distributed data processing and optimization techniques
- Extensive experience with Delta Lake and data lakehouse architecture
- Strong experience with cloud platforms (AWS, Azure, or Google Cloud Platform)
- Expertise in data modeling, data warehousing, and ETL design patterns
- Experience with version control (Git) and CI/CD pipelines
- Strong analytical, debugging, and performance tuning skills
Education:
- Bachelor's or Master's degree in Computer Science, Engineering, or related field.