Data Engineer

bigspark
Glasgow, United Kingdom
19 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Intermediate
Compensation
£60K

Job location

Glasgow, United Kingdom

Tech stack

Java
Artificial Intelligence
Airflow
Amazon Web Services (AWS)
Data analysis
Apache HTTP Server
CA Workload Automation AE
Big Data
Google BigQuery
Cloud Computing
Computer Programming
Databases
Continuous Integration
Information Engineering
Data Vault Modeling
DevOps
Amazon DynamoDB
GitHub
Identity and Access Management
Python
PostgreSQL
Linux System Administration
MongoDB
MySQL
NoSQL
OpenShift
SQL Databases
Data Streaming
Virtualization Technology
Parquet
Snowflake
Spark
GitLab
Git
Data Lake
Kubernetes
Apache Flink
Cassandra
Avro
Kafka
Data Management
Terraform
Azure
Docker
Jenkins
Databricks

Job description

We're looking for a Data Engineer to develop enterprise-scale data platforms and pipelines that power analytics, AI, and business decision-making. You'll work in a hybrid capacity, which may require up to 2 days per week on client premises.

Requirements

  • 3+ years' commercial data engineering experience.
  • Strong programming skills in Python, Scala, or Java, with clean coding and testing practices.
  • Big Data & Analytics Platforms: Hands-on experience with Apache Spark (core, SQL, streaming), Databricks, Snowflake, Flink, Beam.
  • Data Lakehouse & Storage Formats: Expert knowledge of Delta Lake, Apache Iceberg, Hudi, and file formats like Parquet, ORC, Avro.
  • Streaming & Messaging: Experience with Kafka (including Schema Registry & Kafka Streams), Pulsar, AWS Kinesis, or Azure Event Hubs.
  • Data Modelling & Virtualisation: Knowledge of dimensional, Data Vault, and semantic modelling; tools like Denodo or Starburst/Trino.
  • Cloud Platforms: Strong AWS experience (Glue, EMR, Athena, S3, Lambda, Step Functions), plus awareness of Azure Synapse, GCP BigQuery.
  • Databases: Proficient with SQL and NoSQL stores (PostgreSQL, MySQL, DynamoDB, MongoDB, Cassandra).
  • Orchestration & Workflow: Experience with Autosys/CA7/Control-M, Airflow, Dagster, Prefect, or managed equivalents.
  • Observability & Lineage: Familiarity with OpenLineage, Marquez, Great Expectations, Monte Carlo, or Soda for data quality.
  • DevOps & CI/CD: Proficient in Git (GitHub/GitLab), Jenkins, Terraform, Docker, Kubernetes (EKS/AKS/GKE, OpenShift).
  • Security & Governance: Experience with encryption, tokenisation (e.g., Protegrity), IAM policies, and GDPR compliance.
  • Linux administration skills and strong infrastructure-as-code experience.

Benefits & conditions

  • Competitive salary
  • Generous Annual Leave
  • Discretionary Annual Bonus
  • Pension Scheme
  • Life Assurance
  • Private Medical Cover (including family)
  • Permanent Health Insurance Cover / Income Protection
  • Employee Assistance Programme
  • A Perkbox account

Apply for this position