Senior Software Data Engineer
Job description
This is a heavily hands-on data engineering role. You will integrate multiple data sources and build large-scale data pipelines using Hive, Hadoop, Python, SQL, Apache Spark, Kafka, and Airflow. Strong skills in distributed systems, cloud platforms, and big data tools are a must.
What You Will Do
- Build and maintain data pipelines and ETL workflows
- Work with big data technologies such as Hive, Hadoop, Spark, and Kafka
- Develop Python- and PySpark-based data solutions
- Write clean and efficient SQL for data processing
- Design, test, and deploy scalable systems
- Work with cross-functional teams on data integration
- Support and improve existing data platforms
- Follow Agile processes, code reviews, and engineering best practices
Requirements
- Strong hands-on experience with Hive, Hadoop, Spark, Kafka, and Airflow
- Strong Python and SQL skills
- Experience with PySpark ETL pipelines
- Experience with distributed systems
- Experience in data warehouse and data platform engineering
- Experience building microservices in Python or Java
- Strong understanding of software architecture and data structures
- Good communication skills
- 6 to 10 years of total experience
Nice to Have
- AWS or GCP cloud experience
- Experience with Terraform
- Experience with Tableau, Looker or QuickSight
- Experience working on Finance or HR systems
Benefits & conditions
You will work directly with the Enterprise Engineering team at Roku, a global leader in TV streaming. The role starts as a contract and is designed to convert to a full-time position for the right candidate.