Data Engineer
Job description
As a Data Engineer at Kyndryl's AI Innovation Hub, you'll support the design, development, and maintenance of data pipelines and platforms that enable advanced analytics and AI solutions. Working under the guidance of senior engineers, you'll gain hands-on experience with modern data tools and cloud technologies, contributing to the delivery of high-quality, governed, and scalable data assets for enterprise clients.

- Assist in building and maintaining ETL/ELT pipelines and data models for structured and unstructured data.
- Support the development of scalable data architectures for batch and real-time processing in cloud and hybrid environments.
- Participate in data quality assurance, validation, and governance activities.
- Collaborate with data scientists, architects, and business teams to deliver data solutions aligned with project requirements.
- Contribute to the integration of APIs and data services for analytics and AI use cases.
- Document data workflows, processes, and best practices to ensure transparency and reproducibility.
- Stay current with emerging data engineering tools, frameworks, and cloud services.

Who You Are
Requirements
- 2-4 years of experience in data engineering, analytics, or related technical projects.
- Practical experience with Python and SQL for data processing and analysis.
- Familiarity with ETL/ELT tools and frameworks (e.g., Airflow, dbt, Databricks).
- Basic knowledge of cloud data services (AWS, Azure, GCP) and data pipeline tooling.
- Experience with BI tools (Looker, Power BI) for data visualization and reporting.
- Understanding of data quality, governance, and validation concepts.
- Fluent or native in Spanish; effective communication skills.

Education & Certifications
- Bachelor's degree in Computer Science, Mathematics, Engineering, or a related technical field.
- Postgraduate or complementary studies in Data Engineering, Cloud Computing, or Data Science are valued.
- Certifications or coursework in cloud platforms (AWS, Azure, GCP) or data engineering tools are a plus.
- Demonstrated interest in continuous learning and professional growth in data engineering.

Preferred Skills
- Exposure to data pipeline development and tooling (Databricks, Cloudera, Teradata, Snowflake).
- Awareness of DevOps/DataOps or CI/CD practices for data workflows.
- Familiarity with containerization (Docker) and orchestration (Kubernetes).
- Cloud platform certification or relevant coursework.
- Interest in learning about data governance, metadata management, and AI/ML data integration.

Soft Skills
- Curiosity and learning mindset, eager to explore new technologies and methodologies.
- Clear and structured communication, both in documentation and team discussions.
- Analytical thinking and attention to detail.
- Collaborative spirit, working effectively in multidisciplinary teams.
- Proactive attitude, contributing ideas and energy to drive innovation.

#AgenticAI