Data Engineer
Job description
We are seeking an adaptable Data Engineer who is excited to work on our Data Analytics Platform, which affects our customers every day. You will be responsible for designing, implementing, and supporting data warehousing and business intelligence solutions on Microsoft Fabric. You will work closely with various departments to ensure the efficient and secure handling of data, enabling the organization to make informed decisions, and you will take responsibility for all phases of the solution lifecycle, from discovery to operation.
- Own the data platform from ingestion through to the creation and maintenance of semantic models, ensuring data quality, security, and usability at every stage.
- Develop scalable and efficient data pipelines using Azure Data Factory, PySpark notebooks, Spark SQL, and Python. This includes data ingestion, transformation, and loading processes.
- Implement ETL processes to extract data from diverse sources, transform it into suitable formats, and load it into the data warehouse or analytical systems.
- Design, implement, and maintain semantic models within Microsoft Fabric to support advanced analytics and self-service BI.
- Ensure data security and compliance with data privacy regulations throughout the data engineering process.
- Continuously monitor and enhance data security measures, including access controls, encryption, and auditing, to protect sensitive information.
- Implement row-level security on data and ensure compliance with data privacy policies.
- Develop tabular and multidimensional data models that are compatible with data warehouse standards.
- Continuously monitor and fine-tune data pipelines and processing workflows to enhance overall performance and efficiency, particularly for large-scale data sets.
- Establish, implement, and improve best practices in development processes.
- Support and mentor junior colleagues and Data Analysts, sharing technical expertise and experience.
- Work closely with business stakeholders to translate analytical requirements into scalable, secure, and user-friendly data solutions.
- Perform other duties and tasks consistent with the role's skills and expertise, as required.
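To give candidates a feel for the pipeline responsibilities above, here is a deliberately simplified extract-transform-load step in plain Python (a hypothetical sketch only; the pipelines in this role would be built with Azure Data Factory, PySpark notebooks, and Spark SQL rather than the standard library):

```python
# Hypothetical, minimal ETL sketch. A Python list stands in for the
# warehouse table; real pipelines here would target Microsoft Fabric.
import csv
import io


def extract(raw_csv: str) -> list[dict]:
    """Extract: parse raw CSV text from a source system into records."""
    return list(csv.DictReader(io.StringIO(raw_csv)))


def transform(records: list[dict]) -> list[dict]:
    """Transform: normalize fields and drop rows that fail a quality check."""
    cleaned = []
    for row in records:
        try:
            amount = float(row["amount"])
        except (KeyError, ValueError):
            continue  # data-quality rule: skip malformed rows
        cleaned.append({"customer": row["customer"].strip().lower(),
                        "amount": amount})
    return cleaned


def load(records: list[dict], warehouse: list[dict]) -> None:
    """Load: append cleaned records to the target store."""
    warehouse.extend(records)


raw = "customer,amount\nAlice ,10.5\nBob,not-a-number\nCarol,7\n"
warehouse: list[dict] = []
load(transform(extract(raw)), warehouse)
# The malformed "Bob" row is dropped; two cleaned rows are loaded.
```

The same extract/transform/load shape carries over to PySpark, where each stage becomes a DataFrame read, a set of transformations, and a write to a lakehouse table.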
Requirements
**Level of education:** Degree in Computer Science or equivalent.

**Years of professional experience:** 3-5 years' experience in a relevant area of work.

**Job-specific competencies/knowledge:**
- **Experience:** Minimum 2 years of experience designing, implementing, and supporting data warehousing and business intelligence solutions on Microsoft Fabric, including data pipelines.
- **Technical skills:** Proficiency in Azure Data Factory, PySpark, Spark SQL, Python, and SQL. Experience with Microsoft Fabric and Azure data analytics services (Azure Data Factory, Data Lake, Azure Synapse, Azure SQL, and Databricks).
- An ability to work to tight deadlines and within a challenging environment (E)

**Personal qualities:**

- A willingness to learn new applications and practices to help our users
- Service-oriented, outgoing, and friendly
- Strong problem-solving skills, excellent communication skills, and the ability to work collaboratively with cross-functional teams
- Ability to work under pressure
- Very good organizational and work-prioritization skills (E)
- A highly collaborative and supportive approach when providing assistance (E)

**Software tools:**

- Microsoft Fabric
- SQL Server
- DevOps