Data Engineer
Job description
- Develop, maintain, and optimize scalable data pipelines using Apache Spark and Python.
- Implement ETL processes to ensure seamless extraction, transformation, and loading of data across systems.
- Collaborate with cross-functional teams to integrate Apache Hadoop and Apache Kafka into the data architecture.
- Monitor and troubleshoot data systems to ensure reliability and performance.
- Design and maintain data models, ensuring alignment with business requirements.
- Conduct thorough testing and validation of data processes to guarantee accuracy.
- Document data workflows and processes for future reference and team collaboration.
- Provide technical guidance and support to team members on data engineering best practices.
- Stay current on emerging technologies and trends in big data and analytics.
- Contribute to improving data governance and security protocols.
Requirements
- Proficiency in Apache Spark and Python for data processing and analysis.
- Hands-on experience with Apache Hadoop and Apache Kafka.
- Strong knowledge of ETL processes and tools.
- Ability to design and optimize data pipelines for scalability and efficiency.
- Experience with data modeling and database management.
- Solid understanding of data governance and security practices.
- Excellent problem-solving skills and attention to detail.
- Effective communication and collaboration skills for working in a team environment.
Benefits & conditions
Robert Half works to put you in the best position to succeed. We provide access to top jobs, competitive compensation and benefits, and free online training. Stay on top of every opportunity, whenever you choose, even on the go: download the Robert Half app for one-tap apply, notifications of AI-matched jobs, and much more.