Data Engineer
Peritus Inc
Role details
Contract type: Permanent contract
Employment type: Full-time (> 32 hours)
Working hours: Regular working hours
Languages: English
Job location:
Tech stack
Application Performance Management
Data Security
Distributed Computing Environment
Key Management
SQL Azure
NoSQL
SQL Databases
Data Streaming
Systems Integration
Software Version Management
Data Storage Technologies
Azure Data Lake
PySpark
Cosmos DB
Azure Data Pipelines
Databricks
Requirements
Overview: We are looking for a Data Engineer with strong hands-on experience in Azure data platforms to design, build, and maintain scalable, reliable, and high-performance data pipelines in an enterprise environment.
- Expertise in Azure Databricks, leveraging PySpark and Scala for large-scale distributed data processing, and in implementing Delta Lake for efficient storage, data versioning, and ACID-compliant operations.
- Experience in designing and implementing structured data pipelines using the Medallion architecture (Bronze, Silver, Gold layers) is essential.
- The role involves orchestrating end-to-end data workflows using Azure Data Factory (ADF) and enabling advanced analytics and integration using Azure Synapse Analytics.
- Strong proficiency in SQL, including working with Azure SQL Hyperscale, is required.
- The candidate should be experienced in managing and optimizing data storage in ADLS Gen1 and Gen2 and integrating real-time and event-driven data streams using Event Hub and Service Bus.
- Familiarity with NoSQL databases such as Cosmos DB is expected.
- Additionally, the role requires implementing robust monitoring and observability using Azure Monitor and Application Insights, along with ensuring secure data access and secret management using Azure Key Vault.
- The candidate should demonstrate the ability to optimize data pipelines for performance and cost, troubleshoot complex production issues, and consistently deliver high-quality, scalable data solutions in a distributed and enterprise-scale ecosystem.
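For illustration only, the Medallion pattern referenced in the requirements (Bronze raw, Silver cleaned, Gold aggregated) can be sketched in Databricks SQL on Delta Lake. All database, table, and column names below are hypothetical examples, not part of the role description:

```sql
-- Silver: typed, cleaned records promoted from the raw Bronze layer.
-- (bronze.orders_raw and its columns are assumed example names.)
CREATE TABLE IF NOT EXISTS silver.orders
USING DELTA
AS SELECT
  CAST(order_id AS BIGINT)    AS order_id,
  CAST(order_ts AS TIMESTAMP) AS order_ts,
  trim(customer_email)        AS customer_email
FROM bronze.orders_raw
WHERE order_id IS NOT NULL;

-- Gold: a business-level daily aggregate built from Silver.
CREATE OR REPLACE TABLE gold.daily_order_counts
USING DELTA
AS SELECT
  date(order_ts) AS order_date,
  count(*)       AS orders
FROM silver.orders
GROUP BY date(order_ts);
```

Each layer is a Delta table, so the versioning and ACID guarantees mentioned above apply at every promotion step.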