Data Engineer

Cheltenham, United Kingdom

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English

Job location

Remote
Cheltenham, United Kingdom

Tech stack

Java
Airflow
Amazon Web Services (AWS)
Azure
Big Data
Google BigQuery
Cloud Computing
Information Systems
Databases
Data Architecture
Information Engineering
Data Governance
Data Infrastructure
ETL
Data Security
Data Systems
Data Warehousing
DevOps
Hadoop
Python
SQL Databases
Data Streaming
Data Processing
Google Cloud Platform
Snowflake
Spark
Containerization
Data Lake
Kubernetes
Information Technology
Kafka
Data Pipelines
Docker
Redshift
Programming Languages

Job description

Job Summary

We are seeking a skilled and detail-oriented Data Engineer to design, build, and maintain scalable data infrastructure and pipelines. In this role, you will be responsible for ensuring reliable data flow, optimizing data systems, and enabling analytics and business intelligence across the organization.

Key Responsibilities

Design, develop, and maintain scalable data pipelines (ETL/ELT processes)
Build and optimize data architectures, including data warehouses and data lakes
Develop and maintain robust data models to support analytics and reporting
Write efficient SQL queries and manage large datasets
Ensure data quality, integrity, and security across systems
Monitor and troubleshoot data pipeline performance issues
Collaborate with data analysts, data scientists, and software engineers
Implement data governance and best practices
Support real-time and batch data processing solutions

Requirements

Qualifications

Bachelor's degree in Computer Science, Data Engineering, Information Systems, or a related field
Strong proficiency in SQL and database technologies
Experience with programming languages such as Python, Java, or Scala
Hands-on experience with data pipeline tools (e.g., Apache Airflow, Kafka)
Familiarity with cloud platforms (AWS, Azure, Google Cloud)
Understanding of data warehousing solutions (e.g., Snowflake, Redshift, BigQuery)
Strong problem-solving and analytical skills

Preferred Skills

Experience with big data technologies (e.g., Hadoop, Spark)
Knowledge of data modeling techniques and schema design
Familiarity with containerization tools (Docker, Kubernetes)
Experience with CI/CD pipelines and DevOps practices
Understanding of data security and compliance standards
