Remote Sr. Data Engineer
Job description
The Senior Data Engineer will design, code, test, and analyze software programs and applications. This includes researching, designing, documenting, and modifying software specifications throughout the production lifecycle. The role will also analyze and correct software errors in a timely and accurate fashion and provide status reports as required. The responsibilities outlined below are not all-encompassing; other duties, responsibilities, and qualifications may be assigned as necessary.
Responsibilities
- Work with the Product team to determine requirements and propose approaches that address users' needs.
- Analyze requirements to determine approach/proposed solution.
- Design and build solutions using relevant programming languages.
- Thoroughly test solutions using relevant approaches and tools.
- Conduct research into software-related issues and products.
- Bring out-of-the-box thinking and solutions to challenging issues.
- Effectively prioritize and execute tasks in a fast-paced environment
- Work both independently and in a team-oriented, collaborative environment
- Remain flexible and adaptable in learning and understanding new technologies.
- Be highly self-motivated and self-directed.
- Demonstrate a commitment to Hyatt core values
- Exercise independent judgment in methods and techniques for obtaining results.
- Work in an agile/scrum environment.
Requirements
Required skills
- Experience and comfort solving problems in an ambiguous environment of constant change; the tenacity to thrive in a dynamic, fast-paced environment, inspire change, and collaborate with a variety of individuals and organizational partners.
- 5-8 years of experience designing and building scalable and robust data pipelines to enable data-driven decisions for the business
- Tech stack: Apache Airflow, Snowflake, writing Python DAGs for Airflow, and AWS; being local to Chicago is a plus.
- Experience in one of the scripting languages: Python or Unix shell scripting.
- Proficient in SQL, PL/SQL, relational databases (RDBMS), database concepts, and dimensional modeling.
- Strong verbal and written communication skills.
- Demonstrated analytical and problem-solving skills, particularly as applied to data warehouse and big data environments.
- Strong understanding of the full software development life cycle.
- Strong understanding of data warehousing concepts and approaches.
- Experience building data pipelines and ETL workflows using Informatica IICS, Alteryx, or another leading pipeline tool.
- Experience in building high-volume data workflows in a cloud environment
- Experience building data warehouse and business intelligence projects.
- Solid experience with the Snowflake environment.
- Solid experience with streaming data using Kafka.
- Experience in data cleansing, data validation and data wrangling.
- Hands-on experience in AWS cloud and AWS native technologies such as Glue, Lambda, Kinesis, Lake Formation, S3, Redshift
- Experience using Spark on EMR, RDS, EC2, Athena, API capabilities, CloudWatch, and CloudTrail is a plus.
- Experience with business intelligence tools such as Tableau, Cognos, or ThoughtSpot is a plus.
- Able to work on-site hours to participate in requirements and design discussions.