Software Data Engineer

HG Insights
24 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior

Job location

Tech stack

Java
Airflow
Amazon Web Services (AWS)
Big Data
Cloud Computing
Code Review
Databases
Directed Acyclic Graphs (DAGs)
Data Governance
ETL
Software Debugging
DevOps
Distributed Data Store
Distributed Systems
Elasticsearch
Hadoop
Machine Learning
MySQL
NoSQL
Open Source Technology
Performance Tuning
Scrum
Query Optimization
Cloud Services
Prometheus
SQL Databases
Data Ingestion
Grafana
Spark
Amazon EMR (Elastic MapReduce)
Spark MLlib
Data Lake
Kubernetes
Information Technology
Data Management
Front End Software Development
REST
Terraform
Data Pipelines
Serverless Computing
Docker
Databricks
Microservices

Job description

  • Design, build, and optimize large-scale distributed data pipelines for processing billions of unstructured documents using Databricks, Apache Spark, and cloud-native big data tools.
  • Architect and scale enterprise-grade big-data systems, including data lakes, ETL/ELT workflows, and syndication platforms for customer-facing Insights-as-a-Service (InaaS) products.
  • Collaborate with product teams to develop features across databases, backend services, and frontend UIs that expose actionable intelligence from complex datasets.
  • Implement cutting-edge solutions for data ingestion, transformation, and analytics using Hadoop/Spark ecosystems, Elasticsearch, and cloud services (AWS EC2, S3, EMR).
  • Drive system reliability through automation, CI/CD pipelines (Docker, Kubernetes, Terraform), and infrastructure-as-code practices.

What You'll Be Responsible For

  • Leading the development of our Big Data Insights Platform, ensuring scalability, performance, and cost-efficiency across distributed systems.
  • Mentoring engineers, conducting code reviews, and establishing best practices for Spark optimization, data modeling, and cluster resource management.
  • Building and troubleshooting complex data pipelines, including performance tuning of Spark jobs, query optimization, and data quality enforcement.
  • Collaborating in agile workflows (daily stand-ups, sprint planning) to deliver features rapidly while maintaining system stability.
  • Ensuring security and compliance across data workflows, including access controls, encryption, and governance policies.

Requirements

  • BS/MS/Ph.D. in Computer Science or related field, with 5+ years of experience building production-grade big data systems.
  • Expertise in Scala/Java for Spark development, including optimization of batch/streaming jobs and debugging distributed workflows.
  • Proven track record with:
    • Databricks, Hadoop/Spark ecosystems, and SQL/NoSQL databases (MySQL, Elasticsearch).
    • Cloud platforms (AWS EC2, S3, EMR) and infrastructure-as-code tools (Terraform, Kubernetes).
    • RESTful APIs, microservices architectures, and CI/CD automation.
  • Leadership experience as a technical lead, including mentoring engineers and driving architectural decisions.
  • Strong understanding of agile practices, distributed computing principles, and data lake architectures.
  • Airflow orchestration (DAGs, operators, sensors) and integration with Spark/Databricks.
  • 7+ years of designing, modeling, and building big data pipelines in an enterprise setting.

Nice-to-Haves

  • Experience with machine learning pipelines (Spark MLlib, Databricks ML) for predictive analytics.
  • Knowledge of data governance frameworks and compliance standards (GDPR, CCPA).
  • Contributions to open-source big data projects or published technical blogs/papers.
  • DevOps proficiency in monitoring tools (Prometheus, Grafana) and serverless architectures.

About the company

HG Insights is the global leader in technology intelligence, delivering actionable, AI-driven insights through advanced data science and scalable big data solutions. Our Big Data Insights Platform processes billions of unstructured documents and powers a vast data lake, enabling enterprises to make strategic, data-driven decisions. Join our team to solve complex data challenges at scale and shape the future of B2B intelligence.

Apply for this position