Data Engineer
Job description
Are you passionate about making data move as smoothly as the world's supply chains? As our next Data Engineer, you'll be at the core of Freightos' mission to bring order and transparency to global shipping. You'll design fast, reliable SQL queries, build robust ETL pipelines, and own our data warehousing process, ensuring our shipping rate data is always accurate and ready for action. Managing databases in Docker containers, building APIs for seamless data access, and working closely with product and data teams, you'll help turn business needs into technical solutions and keep our data shipshape so world trade keeps moving.
- Develop, test, and optimize SQL queries for performance and scalability, ensuring low latency and minimal errors.
- Create new ETL/ELT processes and optimize existing ones. Own the data warehousing process and data quality monitoring, using the latest tools and frameworks to enhance efficiency.
- Deploy, manage, and optimize databases within Docker containers, including creating and maintaining Docker images for databases like MySQL.
- Build and maintain APIs and back-end services that enable seamless access to large datasets (which also keeps us hAPI).
- Work closely with product and data teams to define and deliver technical solutions aligned with business goals.
- You'll make sure that the bytes behind the container boxes are shipshape, leading the collection, validation, and analysis of data behind global shipping rates.
- Large organizations can ship well over half a million containers a year, so one decimal in the wrong place is kind of a big deal. You'll help make sure that doesn't happen, implementing QA protocols to ensure the accuracy, integrity, and general awesomeness of our indexes and other data products (a minimal illustrative sketch follows this list).
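To give a flavor of the QA work described above, here is a minimal, hypothetical sketch of a "decimal slipped a place" check on shipping rate data. The table layout, column names, lane keys, and the 10x threshold are illustrative assumptions, not Freightos' actual schema or tooling.

```python
# Hypothetical sketch: flag shipping rates that look like a decimal slipped a
# place by comparing each quote to the median rate on the same lane.
# Column names and the 10x threshold are illustrative assumptions only.
import pandas as pd

def flag_decimal_slips(rates: pd.DataFrame, threshold: float = 10.0) -> pd.DataFrame:
    """Return rows whose price differs from the lane median by roughly an order of magnitude."""
    lane_median = rates.groupby(["origin", "destination"])["price_usd"].transform("median")
    ratio = rates["price_usd"] / lane_median
    suspicious = (ratio >= threshold) | (ratio <= 1.0 / threshold)
    return rates[suspicious]

if __name__ == "__main__":
    sample = pd.DataFrame({
        "origin":      ["CNSHA", "CNSHA", "CNSHA", "CNSHA"],
        "destination": ["USLAX", "USLAX", "USLAX", "USLAX"],
        "price_usd":   [2150.0, 2230.0, 2190.0, 215.0],  # last row: decimal slipped one place left
    })
    print(flag_decimal_slips(sample))  # flags the 215.0 quote for review
```

In a real pipeline a check like this would typically run as a validation step inside the ETL orchestration (e.g., an Airflow task) rather than as a standalone script.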
Requirements
- 3-5+ years of experience as a Data Engineer.
- Strong proficiency in SQL and Python.
- Experience with ETL and data orchestration tools like Airflow.
- Understanding of data warehouse modeling and experience designing and implementing complex, clean, and scalable data pipelines and transformed datasets.
- Understanding of RESTful APIs.
- High level of business English (written & verbal).
- Strong project management, problem solving, organizational skills, attention to detail, and ownership.
- Ability to work with stakeholders to define KPIs and translate business questions into technical requirements.
- Comfortable with ambiguity; you handle partial information, missing data, and broad requirements pragmatically and gracefully.
Preferred Qualifications
- Experience with Google Cloud Platform (GCP) or Amazon Web Services (AWS).
- Knowledge of statistical analysis, regression models, and forecasting techniques.
- Familiarity with data visualization tools (e.g., Domo, Tableau, Power BI).
- Experience working with remote teams.
- Experience with low-latency databases such as TimescaleDB, Cassandra, ClickHouse, CockroachDB, or similar technologies.
- Experience in data governance and quality practices.
- Experience using AI tools to accelerate development (e.g., writing SQL/Python, debugging pipelines, generating code, or automating workflows), with an understanding of their limitations and best practices.