Platform Engineer (Big Data System Engineer)

W3global Eu Inc
1 month ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English, German
Experience level
Senior

Job location

Tech stack

Java
Agile Methodologies
Amazon Web Services (AWS)
Apache HTTP Server
Big Data
Cloud Computing
Continuous Delivery
Continuous Integration
ETL
DevOps
Distributed Systems
Fault Tolerance
Jinja (Template Engine)
Python
Octopus Deploy
Scrum
Ansible
Data Streaming
Tableau
Parquet
Scripting (Bash/Python/Go/Ruby)
Data Ingestion
Spark
Firewalls (Computer Science)
Data Lake
Kubernetes
Collibra
Kafka
Data Management
Machine Learning Operations
Dataiku
Data Delivery
Puppet
XL Deploy
Network Server
Data Pipelines
DevSecOps
Docker
Jenkins
Control-M

Job description

  • Operate Global Data Platform components (VM servers, Kubernetes, Kafka) and applications (Apache stack, Collibra, Dataiku, and similar)
  • Implement automation of infrastructure, security components, and Continuous Integration & Continuous Delivery for optimal execution of data pipelines (ELT/ETL).
  • Develop solutions that build resiliency into data pipelines through platform health checks, monitoring, and alerting mechanisms, improving the quality, timeliness, recency, and accuracy of data delivery (a minimal health-check sketch follows this list).
  • Apply DevSecOps & Agile approaches to deliver a holistic and integrated solution in iterative increments.
  • Liaise and collaborate with enterprise security, digital engineering, and cloud operations to gain consensus on architecture solution frameworks.
  • Review system issues, incidents, and alerts to identify root causes and continuously implement features to improve platform performance.
  • Stay current on the latest industry developments and technology trends to effectively lead and design new features/capabilities.
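
As a rough illustration of the health-check and alerting point above, here is a minimal Python sketch that flags a stale data delivery by checking the age of the newest object under an S3 prefix; the bucket name, prefix, and two-hour freshness threshold are hypothetical values chosen for illustration, not details from this role.

    # Minimal data-recency health check (sketch); bucket, prefix, and threshold are hypothetical.
    from datetime import datetime, timedelta, timezone

    import boto3  # pip install boto3

    BUCKET = "example-data-lake"   # hypothetical data lake bucket
    PREFIX = "curated/trades/"     # hypothetical delivery prefix
    MAX_AGE = timedelta(hours=2)   # alert if the newest delivery is older than this

    s3 = boto3.client("s3")
    objects = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX).get("Contents", [])

    if not objects:
        print(f"ALERT: no deliveries found under s3://{BUCKET}/{PREFIX}")
    else:
        newest = max(obj["LastModified"] for obj in objects)  # LastModified is a UTC datetime
        age = datetime.now(timezone.utc) - newest
        status = "ALERT" if age > MAX_AGE else "OK"
        print(f"{status}: newest delivery is {age} old (threshold {MAX_AGE})")

In practice a check like this would feed the platform's monitoring and alerting stack rather than stdout, and would run on a scheduler such as Control-M or a Kubernetes CronJob.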

Requirements

You have 5+ years of experience in building or designing large-scale, fault-tolerant, distributed systems

  • Integration of streaming and file-based data ingestion / consumption (Kafka, Control-M, AWA); see the ingestion sketch after this list
  • Experience in DevOps, data pipeline development, and automation using Jenkins and Octopus (optional: Ansible, Chef, XL Release, and XL Deploy)
  • Experience predominantly with on-prem Big Data architecture; cloud migration experience is a plus
  • Hands-on experience in integrating Data Science Workbench platforms (e.g. Dataiku)
  • Experience with agile project management and methods (e.g., Scrum, SAFe)
  • Supporting all analytical value streams, from enterprise reporting (e.g. Tableau) to data science (incl. ML Ops)
  • Hands-on working knowledge of large data solutions (for example: data lakes, delta lakes, data meshes, data lakehouses, data platforms, data streaming solutions)
  • In-depth knowledge and experience in one or more large-scale distributed technologies, including but not limited to: S3/Parquet, Kafka, Kubernetes, Spark
  • Expert in Python and Java, or another language such as Scala or R; Linux/Unix scripting, Jinja templates, Puppet scripts, and firewall configuration rules (see the templating sketch after this list)
  • VM setup and scaling, Kubernetes (K8s) pod scaling, managing Docker with Harbor, and pushing images through CI/CD
  • Good knowledge of German is beneficial; excellent command of English is essential
  • Knowledge of the financial sector and its products
  • Higher education (e.g. a Fachhochschule / university of applied sciences degree, in a field such as Wirtschaftsinformatik / business informatics)
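
As a loose illustration of the streaming-ingestion requirement above, the following Python sketch drains a Kafka topic and lands the records as a Parquet file; the broker address, topic name, and output path are hypothetical, and the kafka-python and pyarrow packages are assumed.

    # Minimal Kafka-to-Parquet ingestion sketch; broker, topic, and output path are hypothetical.
    import json

    import pyarrow as pa
    import pyarrow.parquet as pq
    from kafka import KafkaConsumer  # pip install kafka-python

    consumer = KafkaConsumer(
        "trades",                            # hypothetical topic
        bootstrap_servers="localhost:9092",  # hypothetical broker
        auto_offset_reset="earliest",
        enable_auto_commit=False,
        consumer_timeout_ms=5000,            # stop iterating once the topic goes quiet
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    )

    records = [message.value for message in consumer]  # drain the currently available messages
    consumer.close()

    if records:
        table = pa.Table.from_pylist(records)  # infer a schema from the JSON records
        pq.write_table(table, "trades.parquet")
        print(f"wrote {table.num_rows} records to trades.parquet")

A production ingestion job would of course commit offsets, batch by partition, and write to the data lake rather than a local file; the sketch only shows the shape of the flow.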
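
And as a small illustration of the Jinja-templating point, this sketch renders firewall allow-rules from structured data; the rule format, networks, and ports are invented for the example.

    # Minimal Jinja templating sketch for firewall allow-rules; all rule data is invented.
    from jinja2 import Template  # pip install jinja2

    RULE_TEMPLATE = Template(
        "allow {{ proto }} from {{ source }} to {{ dest }} port {{ port }}  # {{ comment }}"
    )

    rules = [
        {"proto": "tcp", "source": "10.0.1.0/24", "dest": "10.0.2.15", "port": 9092, "comment": "Kafka broker"},
        {"proto": "tcp", "source": "10.0.1.0/24", "dest": "10.0.2.20", "port": 443, "comment": "Collibra UI"},
    ]

    for rule in rules:
        print(RULE_TEMPLATE.render(**rule))

The same pattern scales to rendering full configuration files (Puppet manifests, firewall rule sets) from inventory data in a CI/CD pipeline.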

Apply for this position