GCP Scala Data Engineer

Huxley Associates
2 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Intermediate

Job location

Schiphol area, Netherlands (hybrid)

Tech stack

Artificial Intelligence
Cloud Engineering
Databases
Continuous Integration
Data Structures
Data Warehousing
Hadoop
HBase
Machine Learning
NoSQL
Scala
Software Engineering
SQL Databases
Unstructured Data
Parquet
Data Processing
Google Cloud Platform
Spark
Data Lake
Kubernetes
Information Technology
Kafka
Spark Streaming
Data Pipelines
Docker

Job description

For a client active in aviation, I am looking for an experienced Data Engineer with experience in the GCP stack and Scala. This is an external role, open to both freelance and payroll candidates. The role requires hybrid working in the Schiphol area and, due to screening requirements, is only open to candidates with European nationality.

What will you do?

As a Data Engineer, you'll design and build data pipelines, industrialize machine learning and operations research models, and replace legacy data warehousing systems with cutting-edge data lake solutions. You'll do this as part of our client's central Data, OR & AI department, working in a product team dedicated to the Finance business.
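To give a flavour of this work: below is a minimal sketch of such a batch pipeline, assuming Spark with Scala writing to a data lake on GCP. The bucket paths, file layout, and column names are all hypothetical.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, to_date}

// Minimal sketch: read a legacy warehouse extract and land it as
// partitioned Parquet in a data lake. All paths and columns are hypothetical.
object LegacyLedgerToLake {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("legacy-ledger-to-lake")
      .getOrCreate()

    // Hypothetical CSV extract from the legacy warehouse.
    val ledger = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("gs://example-staging/ledger/*.csv")

    // Derive a partition column and write columnar Parquet to the lake.
    ledger
      .withColumn("booking_date", to_date(col("booked_at")))
      .write
      .mode("overwrite")
      .partitionBy("booking_date")
      .parquet("gs://example-lake/finance/ledger/")

    spark.stop()
  }
}
```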

Finance will rely on the scalable, future-proof solutions you create, leveraging both structured and unstructured data. Once the solution is built, you'll ensure it operates smoothly and remains reliable, all within an agile environment.

Your profile

We're looking for a passionate and talented Data Engineer who keeps the end goal in mind. You can quickly analyze and identify core problems, provide the best solutions, and explain complex issues clearly to stakeholders at all levels. You coach junior teammates, spot opportunities, take action, and influence decision-makers.

Requirements

  • Bachelor's degree or higher in Computer Science, Software Engineering, or similar (or equivalent experience).
  • 4+ years building production-grade data processing systems as a Data Engineer.
  • In-depth knowledge of:
      • The Hadoop ecosystem (a migration to GCP is in progress).
      • Building applications with Apache Spark.
      • Columnar storage solutions (e.g., Parquet, Apache HBase) and data modeling for columnar storage.
      • Key-value pair databases (HBase) and NoSQL databases.
      • Event streaming platforms (Apache Kafka, Spark Streaming); see the sketch after this list.
      • Cloud development (preferably GCP).
  • 3+ years of experience with Scala.
  • Strong understanding of algorithms and data structures.
  • Experience with databases and SQL.
  • Knowledge of CI/CD techniques.
  • Affinity with Machine Learning and/or Operations Research concepts.
  • Experience with distributed databases and computing.
  • Bonus: Kubernetes or Docker experience.
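For the event-streaming requirement referenced above, here is a minimal sketch of a Kafka-to-data-lake pipeline using Spark Structured Streaming in Scala (it assumes the spark-sql-kafka connector is on the classpath); the broker address, topic, event schema, and output paths are all hypothetical.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, from_json}
import org.apache.spark.sql.types.{DoubleType, StringType, StructType, TimestampType}

// Minimal sketch: consume JSON events from Kafka and land them as Parquet.
// Broker, topic, schema, and paths are hypothetical.
object BookingEventsStream {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("booking-events-stream")
      .getOrCreate()

    // Hypothetical schema for the JSON events on the topic.
    val eventSchema = new StructType()
      .add("bookingId", StringType)
      .add("amount", DoubleType)
      .add("eventTime", TimestampType)

    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker-1:9092") // hypothetical broker
      .option("subscribe", "finance.bookings")            // hypothetical topic
      .load()
      .select(from_json(col("value").cast("string"), eventSchema).as("e"))
      .select("e.*")

    // Write to the lake as Parquet, tracking progress via a checkpoint.
    events.writeStream
      .format("parquet")
      .option("path", "gs://example-lake/finance/bookings/")          // hypothetical
      .option("checkpointLocation", "gs://example-lake/_chk/bookings/") // hypothetical
      .start()
      .awaitTermination()
  }
}
```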

Technologies you'll work with: Hadoop, Apache Spark, Parquet, HBase, Kafka, Scala, SQL, GCP, CI/CD, Kubernetes, Docker.

Apply for this position