Engineer - Data Ingestion & Preparation (CSL)
Job description
Getnet is a global technology company specializing in payment solutions for commerce. Founded in Brazil and operating across Latin America and the Iberian Peninsula, we support over 1.3 million merchants with end-to-end services - from POS terminals to e-commerce platforms. We are part of PagoNxt, the global fintech of the Santander Group, and operate as an acquiring hub with a strong presence in Spain, Portugal, Brazil, Mexico, Chile, Argentina, and Uruguay.

Our mission is clear: to simplify payments with innovation, security, and scale, helping businesses of all sizes grow with agility. We offer a unified platform that integrates hardware, software, fraud prevention, acquiring, reconciliation, and financial services - all in a single ecosystem, so our clients can focus on growing their business.

Being part of Getnet means joining a company that combines the innovation of a fintech with the solidity of a global bank. Imagine your future. Care for your career. Simplify your journey. This means you'll have the chance to build impactful solutions, grow with real development opportunities, and thrive in a culture that values well-being, inclusion, and clarity. We combine flexibility, autonomy, and global collaboration - so you can focus on what matters, connect with purpose, and help shape the future. Here, you'll find space to grow, real opportunities to lead, and a culture where everyone belongs and contributes. If you want to be part of the next generation of financial solutions, this is the place.

Role Mission
Act in a hands-on capacity to implement, operate, and stabilize data ingestion and preparation processes, ensuring data is properly structured and ready for publication in the Common Semantic Layer (CSL), in alignment with the semantic data dictionary defined by the external provider.

What you'll do
- Design, implement, and operate data ingestion pipelines (batch and/or streaming) from multiple sources (e.g., SEP, NUEK, fraud systems, web services)
- Ensure data availability in the Stage layer as a faithful replica of source systems
- Prepare and transform datasets into Common and Refined layers, following CSL standards
- Apply business transformation rules aligned with data mappings (e.g., OneGMR, Churn)
- Work closely with the external provider to ensure adherence to the semantic data dictionary and mapping requirements
- Identify, troubleshoot, and resolve ingestion issues (failures, delays, inconsistencies)
- Ensure minimum data quality standards (completeness, consistency, standardization)
- Document pipelines, transformations, and data sources
- Support dependency resolution with local teams and external providers
- Map technical dependencies across pipelines, sources, and deliverables
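To give candidates a feel for the layered flow described above (Stage as a faithful replica of the source, Common with standardized types and names, Refined with business rules applied), here is a minimal sketch in plain Python. The record fields, column names, and the zero-activity churn rule are illustrative assumptions only, not Getnet's actual mappings; in practice this logic would run on Databricks/Spark.

```python
from datetime import date

# Stage layer: faithful replica of the source records - no transformation applied.
stage_rows = [
    {"merchant_id": "M-001", "tx_amount": "120.50", "tx_date": "2024-05-01"},
    {"merchant_id": "M-002", "tx_amount": "0.00", "tx_date": "2024-05-01"},
]

def to_common(row):
    """Standardize names and cast types for the Common layer (illustrative rules)."""
    return {
        "merchant_id": row["merchant_id"],
        "amount": float(row["tx_amount"]),                    # string -> numeric
        "transaction_date": date.fromisoformat(row["tx_date"]),  # string -> date
    }

common_rows = [to_common(r) for r in stage_rows]

# Refined layer: apply a business rule (hypothetical zero-activity churn flag).
refined_rows = [{**r, "churn_flag": r["amount"] == 0.0} for r in common_rows]

for r in refined_rows:
    print(r["merchant_id"], r["amount"], r["churn_flag"])
```

The point of the sketch is the separation of concerns: ingestion lands data untouched in Stage, standardization happens once in Common, and business rules such as churn flags live in Refined, where they can be traced back to the semantic data dictionary.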
Requirements
- Databricks / Apache Spark
- Azure Data Factory (or similar ETL/ELT tools)
- Advanced SQL
- Experience with data ingestion processes (ETL/ELT)
- Knowledge of Data Lake / Lakehouse architectures
- Experience integrating APIs, file-based ingestion (SFTP/B2B), and third-party systems
- Basic knowledge of CI/CD and version control (Azure DevOps or similar)
Nice to Have
- Experience in multi-country / multi-source environments
- Understanding of data modeling and semantic layers
- Experience with financial / transactional data
- Familiarity with tools such as Databricks, Purview, or similar
Profile
- Strong hands-on mindset
- Proactive problem solver
- Autonomous execution capability
- Effective communication with technical and functional teams
What we offer
- Medical insurance
- Dental insurance
- Wellhub
- Life insurance
- Annual healthcare
- Transportation allowance
- Meal voucher/food voucher
- Profit sharing program
- Access to self-development programs
- Hybrid collaborative environment: the successful candidate must be prepared to work 60% on site

Next step
Apply, and if you know someone who might be looking for this opportunity, share it.