Big Data Engineer
Job description
- Design, develop, and optimize data pipelines with Spark/Scala.
- Collaborate with business analysts and data teams to translate requirements into technical solutions.
- Ensure quality, documentation, and automated testing of developments.
- Work with AWS services such as EMR, Glue, and Lambda, leveraging Terraform for infrastructure as code.
- Align with technical best practices and regulatory requirements.
Requirements
- More than 3 years of experience with Spark/Scala in Big Data environments (Hadoop).
- Hands-on experience with AWS (EMR, Glue, Lambda) and Terraform.
- Solid knowledge of Unix/Linux, Bash/Python scripting, and version control and build tools (Git, Maven).
- Experience with DevOps practices (Jenkins, CI/CD).
- Advanced English (C1) - mandatory.
- Background in Financial Services/Banking.
- Advanced SQL knowledge and experience with batch process control.
- Experience with BI platforms (ideally Power BI).
Benefits & conditions
- 100% remote: work from anywhere in Spain.
- Salary based on the candidate's qualifications and experience.
- Continuous training.
- Good work environment: professional and highly specialized.
- Opportunity to work in a multinational company that is growing nationally and internationally.
- Internal promotion based on your own goals.