Senior Data Engineer
Job description
- Build ETL processes using Azure or AWS services tailored to each client.
- Collaborate with the team to design platforms capable of processing large volumes of data.
- Develop solutions using services such as Glue, EMR, Synapse, Data Factory, and Microsoft Fabric.
- Implement and optimize data pipelines using Python and PySpark.
- Create real-time ETL processes, ensuring efficiency and accuracy.
- Integrate data from multiple sources using Azure and AWS tools.

In this company, you can be yourself and work with colleagues who are equally passionate about technology and innovation, ready to support you at all times. We want to see you evolve and grow with us, so we invest in your professional development through training, tech breakfasts, and attendance at industry forums. We are advocates of continuous learning, which is why we provide a personal budget for staying up to date with market trends.
We care about well-being and happiness at work, which is why we listen to your needs in order to offer the best working conditions in a unique human environment. Thanks to the emphasis we place on flexibility and work-life balance, we won the Madrid Flexible Company Award in 2020 and received an honorable mention in 2022.
This is a WIN-WIN, and we have 42 benefits to offer you, but here are some highlights:
- 100% remote work
- Flexible hours
- Special timetable: 7-hour days on Fridays and during the summer
- Individual budget for attending forums and training
- English classes
- Health insurance
- 6 extra days off every three years
- Day off on your birthday
- Bring-a-friend referral benefit
- Flexible compensation
- Wellness program (Gympass)
- Volunteering opportunities
- Company events and team building
Requirements
We are looking for a Senior or Mid-level Data Engineer with experience in public cloud environments such as Azure or AWS, and strong proficiency in Python and PySpark.

- Proficiency in Python and proven experience with PySpark.
- Strong knowledge of Data Modeling.
- Demonstrated experience building ETL processes.
- Proficiency in SQL/PLSQL and experience with both SQL and NoSQL databases.
- Experience with cloud platforms such as Azure and AWS.
- Hands-on experience with specific ETL tools like Glue, EMR, Synapse, Data Factory, and Microsoft Fabric.
- Experience in data design and architecture.
- Strong communication skills and direct interaction with customers.
Nice to Have:
- Experience with other Apache Foundation tools, such as Hadoop.
- Knowledge of queueing systems in cloud environments.
- Experience with AI and machine learning tools.
- Familiarity with Business Intelligence tools on Azure or AWS.
- University degree in a related field.
If you are someone who likes to stay up to date on the latest trends, is eager to try new technologies, and enjoys finding alternative ways of doing things, this is the place for you. If you consider yourself a team player who values fair play above all, there is a team waiting for you!