Databricks Engineer
Ztek Consulting
Raleigh, United States of America
2 days ago
Role details
Contract type: Permanent contract
Employment type: Full-time (> 32 hours)
Working hours: Regular working hours
Languages: English
Experience level: Senior
Job location
Raleigh, United States of America
Tech stack
Business Analytics Applications
Data analysis
Azure
Information Engineering
ETL
Data Transformation
Python
Performance Tuning
Power BI
DataOps
SQL Databases
Cloud Platform System
Spark
PySpark
Data Pipelines
Databricks
Job description
- Design, develop, and deploy scalable ETL/ELT data pipelines using Apache Spark, PySpark, and Databricks.
- Develop and optimize SQL queries for data transformation and analysis.
- Collaborate with product owners, data architects, and analysts to build data models, Delta Lake structures, and data workflows.
- Collaborate with data analysts and business teams to deliver actionable insights.
- Build job orchestration and monitoring solutions.
- Ensure data quality, performance, and reliability across workflows.
- Develop and maintain CI/CD pipelines for Databricks notebooks, jobs, and workflows.
- Work with cloud-based data platforms (Azure preferred).
Requirements
- 12+ years overall experience in data engineering or related fields.
- 3-5 years hands-on experience with Databricks and Spark.
- Strong proficiency in SQL and data analysis techniques.
- Experience with ETL processes, data modeling, and performance tuning.
- Familiarity with Python or Scala for data engineering tasks.
- Excellent problem-solving and communication skills.
Roles & Responsibilities
We are seeking a hands-on Sr. Databricks Data Engineer to design, develop, and optimize data pipelines and analytics solutions. The ideal candidate will have strong experience in data engineering, ETL development, and production support, ensuring reliable, scalable, and high-performing data operations in an Azure environment, and will be comfortable working in a fast-paced setting. Knowledge of the insurance domain and Power BI is a plus but not mandatory.
Nice-to-Have:
- Knowledge of insurance industry data.