Big Data Developer

Information Tech Consultants
Charing Cross, United Kingdom
3 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Junior
Compensation
£35K–£45K

Job location

Charing Cross, United Kingdom

Tech stack

Amazon Web Services (AWS)
Azure
Big Data
Google BigQuery
Unix
Command-Line Interface
Code Review
Computer Programming
Databases
Information Engineering
Data Integration
ETL
Data Warehousing
Data Flow Control
Hadoop
Hive
Python
Machine Learning
SQL Databases
SQLAlchemy
Data Streaming
Unstructured Data
Data Processing
Google Cloud Platform
Data Ingestion
Spark
Git
Pandas
PySpark
Information Technology
Kafka
Software Version Control
Data Pipelines

Job description

  • Assist in the development, testing, and maintenance of big data pipelines for processing large volumes of structured and unstructured data.
  • Write efficient Python scripts for data ingestion, transformation, and automation.
  • Develop and optimize SQL queries to extract, clean, and manipulate data from various databases.
  • Work with ETL tools and data integration frameworks to move data across systems.
  • Collaborate with data engineers, data scientists, and analysts to ensure reliable data flow and accessibility.
  • Monitor and troubleshoot data pipeline performance issues.
  • Maintain data quality, consistency, and integrity across all data sources.
  • Document technical workflows and participate in code reviews.

Requirements

Screening questions:

  • Do you have experience in UNIX?
  • Do you have a Bachelor's degree?

We are seeking a motivated Junior Big Data Developer with strong skills in Python and SQL to join our growing data engineering team. In this role, you will help design, build, and maintain large-scale data processing systems that enable analytics, reporting, and machine learning initiatives. This position is ideal for someone with a passion for data and a desire to develop technical expertise in Big Data technologies.

  • Bachelor's degree in Computer Science, Information Technology, Data Engineering, or a related field.

  • Strong programming skills in Python (experience with libraries such as pandas, PySpark, or SQLAlchemy is a plus).
  • Proficiency in SQL with hands-on experience writing complex queries, joins, and data manipulations.
  • Basic understanding of data warehousing concepts and ETL processes.
  • Familiarity with Big Data ecosystems such as Hadoop, Spark, Hive, or Kafka.
  • Knowledge of Linux/Unix environments and command-line tools.
  • Strong analytical, problem-solving, and communication skills.
  • Eagerness to learn and adapt to new tools and technologies in the data engineering domain.

Preferred Qualifications

  • Exposure to cloud platforms like AWS (Glue, EMR, Redshift), Azure (Data Factory, Synapse), or Google Cloud (BigQuery, Dataflow).
  • Experience working with data lakes or data warehouse architectures.
  • Familiarity with version control systems (Git) and CI/CD pipelines.
  • Internship, coursework, or project experience related to big data or data pipeline development.
  • Bachelor's degree (preferred).

Benefits & conditions

  • Competitive salary and comprehensive benefits.
  • A collaborative, learning-oriented environment that values innovation and teamwork.
  • Access to training resources and certifications in cloud and big data platforms.

Job Types: Full-time, Permanent

Pay: £35,000.00-£45,000.00 per year
