Software/Data Engineer
Role details
Job location
Tech stack
Job description
We are looking for a Software Engineer with a strong background in data who will help us deliver the different parts of our production-ready product while co-designing and implementing an architecture that can scale with the product and the company. This role will primarily focus on software engineering tasks, ensuring the scalability, reliability, and performance of our AI-driven systems. You will work closely with Backend and Frontend Engineers while contributing to the data pipelines, APIs, and infrastructure that support them. If you enjoy solving complex problems with code and have a growth mindset, we encourage you to apply.

For more insight into the technologies used by the engineering team at Clarity AI, please explore our Tech Stack.

What You'll Be Doing

Working as a Software/Data Engineer on the EU Taxonomy Team, you will be responsible for:

- Designing, developing, and maintaining data pipelines and services with a focus on simplicity, scalability, reliability, and performance.
- Writing high-quality, well-tested code using Python and SQL.
- Building and maintaining data pipelines (ETL/ELT) using tools like Airflow.
- Implementing comprehensive automated testing to ensure the quality and reliability of our data products.
- Collaborating with cross-functional teams to deliver impactful features.
- Participating in the design and architecture of our evolving data platform.
- Troubleshooting and resolving bugs and issues in a timely manner.
- Championing best practices for code quality, data quality, and testing.
- Driving improvements in our development processes using Lean and Agile principles.
- Managing and scaling data products, ensuring they meet the evolving needs of the business.
Requirements
- 3+ years of experience as a Software Engineer, Backend Engineer, or Data Engineer.
- Solid understanding of software engineering principles and best practices (e.g., clean code, SOLID principles, simple design, design patterns).
- Expertise in Python and SQL.
- Expertise with augmented programming tools (e.g., Cursor, GitHub Copilot, Windsurf).
- Proven experience in building and maintaining data pipelines in a cloud environment.
- Experience with data modeling and schema design.
- A strong testing mindset and a commitment to writing automated tests (unit, integration, end-to-end).
- Familiarity with containerized environments (e.g., Docker, Kubernetes) and cloud platforms.
- A product-oriented mindset and an interest in building data solutions that solve real business problems.
- Ability to collaborate effectively in diverse teams, using a variety of communication methods.
- Ownership of your work, a focus on delivering impactful solutions, and a proactive approach to problem-solving.
- Curiosity, adaptability, and motivation to contribute to a collaborative environment.
- Decisiveness and an action-oriented bias, able to make rapid decisions even with incomplete information.
- High motivation, independence, and a deep passion for sustainability and impact.
- Excellent oral and written English communication skills (minimum C1 level, proficient user).

Nice To Have

- Experience working in a product-based company.
- Familiarity with other data technologies such as dbt, Snowflake, Tinybird/ClickHouse, MongoDB, DuckDB, or PostgreSQL.
- Experience with orchestration tools such as Airflow, Dagster, etc.
- Experience with TDD (Test-Driven Development) and Trunk-Based Development.
- Experience in a start-up / scale-up.

What We Offer
Benefits & conditions
- Competitive compensation, both in terms of base salary and equity plans that enable you to share in our success.
- Flexibility in ways of working, both in terms of your schedule and your location: work from home, the office, or abroad, with access to a global network of co-working spaces.
- Generous paid time off schemes, including vacation, sabbatical, religious observance, and compensation days.