Data Solutions Engineer
Job description
This role will be hands-on in code development, driving solutions to delivery by engaging effectively with team members across the globe. The person in this role must be proficient in SQL and Python, as you will work both independently to meet the required specifications of solution delivery and with business users to ingest new, complex data quality use cases. In both cases the output work product will be incorporated into production environments, so the ideal candidate will also be a key contributor in delivering critical business features, with a passion for big data technologies.
- Build and maintain data solutions collecting and warehousing hundreds of terabytes of data per day
- Influence our technical decisions
- Stay informed and up to date on relevant technologies
- Build subject-matter expertise in the data and be able to speak to data warehouse constructs and data architecture
- Design and code solutions in both on-premises and cloud environments
- Work closely with Engineering and Business teams across the globe to ensure efficient, cost-effective solutions are delivered to meet business needs
- Design and implement scalable data solutions within a CI/CD environment, supporting the migration from a complex on-premises ecosystem (Kafka, Hadoop, HDFS, Spark, and multiple MPP Greenplum databases) to a modern, cloud-based architecture leveraging Databricks
- Continuously improve processes, optimizing for speed and cost savings while scaling for increasing data volumes
- The ideal candidate can lead the development of solutions in Spark (Scala and Python), as well as cross-team communication, testing, and release deployment (an illustrative sketch follows this list)
- Manage deployments to Kubernetes clusters using ArgoCD for GitOps-based delivery and Strimzi for Kafka operations.
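To give a flavor of the day-to-day work described above, here is a minimal, hypothetical PySpark sketch of streaming ingestion from Kafka into a Databricks-style Delta table. The broker address, topic, schema, table, and checkpoint path are illustrative assumptions, not details of the actual Epsilon platform.

```python
# Minimal, illustrative sketch only: broker, topic, schema, and table names
# below are placeholder assumptions, not the real platform configuration.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("event-ingest-sketch").getOrCreate()

# Hypothetical schema for JSON events arriving on a Kafka topic.
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("campaign_id", StringType()),
    StructField("event_time", TimestampType()),
])

# Read the raw stream from Kafka (placeholder broker and topic).
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")
    .option("subscribe", "ad-events")
    .load()
)

# Parse the JSON payload and keep only the typed columns.
events = (
    raw.select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Append incrementally to a Delta table, checkpointing for fault tolerance.
query = (
    events.writeStream
    .format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/ad_events")
    .outputMode("append")
    .toTable("analytics.ad_events")
)
query.awaitTermination()
```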
Requirements
- Able to do your best work in a team setting and autonomously
- Well-developed interpersonal skills.
- Owns a problem to the end
- Proud to share in team's success
- Wants to grow a career with a great company
- Bachelor's Degree in Computer Science or an equivalent degree is required
- Around 3-6 years' professional experience (depending on seniority) on a development team manipulating data
- Fluent in SQL, with the ability to absorb complex use cases, refactor, and ask questions
- Strong experience in Python or other languages (Java or Scala)
- Ability to troubleshoot production issues and resolve performance bottlenecks
- Ability to analyze processes and identify improvements and optimizations
- Excellent communication skills and ability to work with the internal analyst community
- Ability to thrive in a collaborative team environment and handle complex products
- You enjoy working with numerous programming languages, relational databases, and distributed systems. Our platform is ever-evolving, but is currently a combination of Kafka, Spark, Scala, Java, Python, MPP RDBMS, Postgres, Hadoop, AWS, Airflow, Docker, and Kubernetes
Why you might stand out from other talent:
- Internet/Digital Advertising ecosystem knowledge is a plus
- Cloud development experience is a plus
- Databricks experience is a plus
Benefits & conditions
As an Epsilon employee, you deserve perks and benefits that put you, your family and your finances first. Our benefits encompass a wide range of offerings, including but not limited to the following:
- Time to Recharge: Flexible time off (FTO), 15 paid holidays
- Time to Recover: Paid sick time
- Family Well-Being: Parental/new child leave, childcare & elder care assistance, adoption assistance
- Extra Perks: Comprehensive health coverage, 401(k), tuition assistance, commuter benefits, professional development, employee recognition, charitable donation matching, health coaching and counseling
Epsilon benefits are subject to eligibility requirements and other terms.