Data Management Specialist

Lognext
Municipality of Madrid, Spain
6 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior

Job location

Municipality of Madrid, Spain

Tech stack

Artificial Intelligence
Airflow
Amazon Web Services (AWS)
Azure
Cloud Computing
Data Architecture
Data Integration
ETL
Data Security
Data Systems
DevOps
GitHub
Hive
Python
Open Source Technology
DataOps
SQL Databases
Data Lake
PySpark
Semi-structured Data
Kafka
Data Management
Terraform
Data Pipelines
Jenkins
Databricks

Job description

Lognext is seeking an experienced Data Management Specialist to join our team of technologists dedicated to building meaningful, scalable data architectures that drive business success.

As a member of our data team, you will design, implement, and maintain modern data pipelines and infrastructure across public and hybrid cloud environments.

Responsibilities

  • Design and implement modern, scalable, and secure data architectures and pipelines.
  • Support validation and refinement of high-level and low-level data architecture designs.
  • Deploy and configure cloud data platform components (Azure, Fabric, Synapse, Databricks, etc.).
  • Develop modular and reusable transformation code using dbt, PySpark, or SQL (see the PySpark sketch after this list).
  • Contribute to CI/CD pipelines and environment setup for dev, test, and production.
  • Design, build, and maintain ETL/ELT pipelines for structured and semi-structured data.
  • Implement data quality rules, governance frameworks, and lineage tracking.
  • Collaborate with Data Architects, BI engineers, and DevOps teams.
  • Contribute to documentation, training materials, and handover activities.
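
To give a flavour of the transformation work, here is a minimal PySpark sketch of a reusable silver-layer step of the kind this role would build; the Delta paths, table names, and column names are hypothetical, and it assumes a Spark environment with Delta Lake configured.

    # Minimal sketch: keep the latest record per key, a common silver-layer
    # deduplication step. All paths and column names are illustrative only.
    from pyspark.sql import DataFrame, SparkSession
    from pyspark.sql import functions as F
    from pyspark.sql.window import Window

    def deduplicate_latest(df: DataFrame, key: str, ts_col: str) -> DataFrame:
        """Return only the most recent row per key, ordered by a timestamp."""
        w = Window.partitionBy(key).orderBy(F.col(ts_col).desc())
        return (df.withColumn("_rn", F.row_number().over(w))
                  .filter(F.col("_rn") == 1)
                  .drop("_rn"))

    if __name__ == "__main__":
        spark = SparkSession.builder.appName("example-etl").getOrCreate()
        # Hypothetical bronze source and silver target (assumes Delta Lake).
        raw = spark.read.format("delta").load("/lake/bronze/orders")
        clean = deduplicate_latest(raw, key="order_id", ts_col="updated_at")
        clean.write.format("delta").mode("overwrite").save("/lake/silver/orders")

Factoring steps into small, typed functions like this is what keeps transformation code modular, testable, and reusable across pipelines.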

Requirements

  • Minimum of 4 years' experience as a Data Engineer designing cloud and hybrid data solutions.
  • Strong expertise in at least one public cloud (Azure, AWS, or GCP), with multi-cloud experience.
  • Proficiency with Python, PySpark, Spark SQL, and T-SQL.
  • Experience with Data Lakes, Delta Lakes, and data-modeling frameworks.
  • Familiarity with DevOps/DataOps practices and CI/CD tooling (Azure DevOps, GitHub Actions, Jenkins, etc.).
  • Strong communication and collaboration skills.
  • Experience with open-source technologies such as Airflow, Kafka, dbt, Great Expectations, ClickHouse, or Superset (see the Airflow sketch after this list).
  • Hands-on experience with Infrastructure-as-Code (Terraform).
  • Knowledge of hybrid cloud data integration.
  • Exposure to governance tools (e.g., Purview).
  • FinOps awareness for cloud cost optimization.
  • Understanding of AI/ML integration pipelines.
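
For orientation, a minimal Airflow sketch of the kind of ETL/ELT orchestration referenced above; the DAG id, task names, and callables are hypothetical placeholders, not part of the posting.

    # Minimal sketch of a three-step ELT DAG. Everything here is illustrative.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pull semi-structured data from a source system")

    def transform():
        print("run PySpark / dbt transformations")

    def load():
        print("publish curated tables to the data lake")

    with DAG(
        dag_id="example_elt_pipeline",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
        catchup=False,
    ) as dag:
        t_extract = PythonOperator(task_id="extract", python_callable=extract)
        t_transform = PythonOperator(task_id="transform", python_callable=transform)
        t_load = PythonOperator(task_id="load", python_callable=load)
        t_extract >> t_transform >> t_load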

Apply for this position