Software Engineering Senior Advisors - Hybrid
Job description
- Design and develop a consolidated, conformed enterprise data warehouse and data lake that will store all critical data across the customer, provider, claims, client, and benefits domains;
- Design, develop, and implement methods, processes, tools, and analyses to sift through large amounts of data stored in a data warehouse or data mart to find relationships and patterns;
- Participate in the delivery of the definitive enterprise information environment that enables strategic decision-making capabilities across the enterprise via analytics and reporting;
- Manage processes that are highly complex and impact the greater organization;
- Provide counsel and advice to top management on significant engineering matters, often requiring coordination between organizations;
- Provide thought leadership and technical expertise across multiple disciplines;
- Act as a source of knowledge and support for the most complex Information Management assignments; and
- Lead and manage sizable projects, as necessary.
Requirements
- Bachelor's degree or foreign equivalent in any engineering field;
- 5 years of experience in a related occupation;
- Active or past AWS and Databricks certifications required;
- Experience working in or with clients in the healthcare or pharmacy benefit management industry;
- Experience migrating enterprise data pipelines from on-premises Hadoop to AWS Databricks;
- Experience transforming legacy Oozie workflows into modular Databricks jobs and notebooks;
- Experience migrating Hive tables to Delta Lake on S3, incorporating schema evolution, ACID guarantees, and time travel;
- Experience designing and implementing event-driven pipelines using Amazon EventBridge and Step Functions;
- Experience building and optimizing Spark workloads on EMR and Databricks;
- Experience engineering a serverless automated validation framework using AWS Lambda, SQS, and EMR;
- Experience developing and automating CI/CD pipelines using Terraform and Jenkins;
- Experience creating and maintaining Apache Iceberg tables from full and incremental loads, leveraging schema evolution and versioning;
- Experience designing automation scripts for resource cleanup and cost optimization across AWS and Databricks environments; and
- Experience using: Apache Kafka, Apache NiFi, Hadoop, Java, Jenkins, Docker, Ansible, AWS, Spark, Python, Scala, Golang, Apache Camel, and Spring Boot.
If you will be working at home occasionally or permanently, your internet connection must be provided by a cable broadband or fiber-optic internet service provider, with speeds of at least 10 Mbps download / 5 Mbps upload.