Data Modeler / Data Engineer / ETL Developer
Job description
We are seeking a skilled and passionate Data Modeler / Data Engineer / ETL Developer to join our data engineering team, which designs, builds, and maintains data warehouse solutions on the Google Cloud Platform. As a key contributor, you will design and implement scalable data models, develop robust data pipelines, and ensure seamless data integration and data quality across platforms. Your work will directly support business insights, product innovation, and strategic decisions by delivering high-quality, well-structured, and easily accessible data.
Main Responsibilities:
- Analyze business requirements and translate them into effective data models and technical designs.
- Design and maintain conceptual, logical, and physical data models for data warehouses (DWHs).
- Build and manage scalable, reliable, and secure data pipelines (ETL/ELT) in hybrid and cloud environments.
- Ensure data consistency, integrity, and quality across data platforms.
- Collaborate closely with data architects, analysts, and software developers to deliver end-to-end solutions.
- Continuously optimize data flows for performance and cost-efficiency.
Key Performance Indicators:
- Accuracy and scalability of data models.
- Pipeline stability and performance metrics (e.g., run time, error rate).
- Reduction in data latency and processing costs.
- Positive feedback from cross-functional collaborators.
- Contributions to documentation and knowledge-sharing within the team.
Requirements
Data Modeling and Architecture
- Solid experience in data modeling techniques (3NF, star/snowflake schemas, data vault) and data architecture best practices.
- Proficiency in SQL.
- Experience in metadata management, data lineage, and governance practices.
Data Engineering
- Proficient in developing ETL/ELT pipelines using modern frameworks and tooling (e.g., SQL stored procedures, dbt, Dataflow).
Cloud and DevOps
- Experience with cloud data platforms, preferably Google Cloud Platform (GCP); knowledge of BigQuery, Teradata, Oracle.
- Familiarity with CI/CD pipelines using tools such as Jenkins, GitHub Actions, or GitLab CI.
Tools & Technologies
- Hands-on with data preparation and profiling tools (e.g., Talend, Trifacta, Collibra).
- Experience working in Agile/Scrum environments.
Other Competencies:
- Bachelor's or Master's degree in Computer Science, Engineering, Mathematics, or a related field.
- 3+ years of experience in data warehouse implementation or a related field.
- Professional certification in cloud platforms (e.g., GCP Data Engineer, AWS Big Data, Azure Data Engineer) is a plus.
- Fluent spoken and written English.
Benefits & conditions
- Hybrid work model
- Bonus on top of the gross salary.
- Meal vouchers (Ticket Restaurant).
- Flexible working hours from Monday to Thursday, and an intensive schedule on Fridays.
- Intensive Summer Schedule during July and August.
- Up to 20 days per year of 100% remote work from other locations.
- Private Health and Life Insurance for employees.
- 25 vacation days, plus December 24th and 31st off.
- Optional Pension Plan.
- Access to an online learning platform for continuous training.