Senior Software Engineer - Data & APIs

Coderio
1 month ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior

Job location

Remote

Tech stack

API
Airflow
Amazon Web Services (AWS)
Apache HTTP Server
Batch Processing
Databases
Data Architecture
Data Cleansing
ETL
Distributed Data Store
Distributed Systems
GitHub
Hadoop
Hive
Python
NoSQL
Scrum
Software Engineering
SQL Databases
Data Ingestion
Snowflake
Spark
Containerization
Kubernetes
Data Analytics
Star Schema
CloudWatch
Software Version Control
Data Pipelines
Docker

Job description

In this role, as a Senior Software Engineer - Data & APIs, you will design, develop, and maintain high-performance analytical applications and services that power large-scale data pipelines and analytics platforms. Your main focus will be on building Python-based solutions for data ingestion, transformation, and modeling, ensuring efficiency, scalability, and reliability across distributed systems. You will work closely with Data Engineers, Analysts, and cross-functional teams to optimize data workflows, improve ETL/ELT processes, and deliver high-quality datasets that enable advanced analytics and business insights within a cloud-based environment.

What to Expect in This Role (Responsibilities)

Contribute to all phases of the analytical application development life cycle, from design to deployment.
Design, develop, and deliver high-volume data analytics applications with a focus on performance and scalability.
Write well-structured, testable, and efficient code that aligns with technical and business requirements.
Ensure all solutions comply with design specifications and best engineering practices.
Support continuous improvement by researching emerging technologies, evaluating alternatives, and presenting recommendations for architectural review.

Requirements

5+ years of proven experience developing analytical or data-driven applications.
5+ years of strong proficiency in Python and SQL, with hands-on experience in DBMS and Apache Iceberg.
5+ years of expertise working with distributed data processing frameworks such as Apache Spark, Hadoop, Hive, or Presto.
Deep understanding of data modeling techniques (star schema, snowflake schema) and data cleansing/manipulation processes.
Good knowledge of NoSQL databases and handling semi-structured/unstructured data.
Experience working within the AWS ecosystem (S3, Redshift, RDS, SQS, Athena, Glue, CloudWatch, EMR, Lambda, or similar).
Experience with ETL/ELT batch processing workflows.
Proficiency with version control tools (GitHub or similar).
Advanced or fluent level of English (written and spoken).

Nice to Have

Experience in data architecture or pipeline optimization.
Familiarity with containerization (Docker, Kubernetes).
Exposure to Agile/Scrum methodologies.
Knowledge of data orchestration tools (Airflow, Prefect).
Understanding of data lakehouse architectures.

Benefits & conditions

100% remote
Long-term commitment, with autonomy and impact
Strategic and high-visibility role in a modern engineering culture
Collaborative international team and strong technical leadership
Clear path to growth and leadership within Coderio

About the company

Coderio designs and delivers scalable digital solutions for global businesses. With a strong technical foundation and a product mindset, our teams lead complex software projects from architecture to execution. We value autonomy, clear communication, and technical excellence. We work closely with international teams and partners, building technology that makes a difference. Learn more: http://coderio.com

Apply for this position