Data Engineer
Job description
data architecture, and collaborate with teams to deliver insights that drive business decisions. Reporting directly to the Head of Operations & AI, you'll play a key role in driving our data engineering strategy.

At Ledgy, you will:
- Manage and optimize data infrastructure and ETL pipelines using Fivetran, Airbyte, and Google Cloud Platform, ensuring reliable data flow from multiple sources into our analytics ecosystem
- Develop, test, and maintain DBT models that transform raw data into analytics-ready datasets, following best practices
- Create and manage LookML models in Looker to enable self-service analytics for stakeholders across the company
- Drive continuous improvement of our data engineering practices, tooling, and infrastructure as a key member of the Operations team

Requirements
The job is a good fit if you have:
- 2-3+ years of experience building production data pipelines and analytics infrastructure with DBT, SQL, and Python (Pandas, etc.)
- Experience implementing and managing ETL/ELT tools such as Fivetran or Airbyte
- Ideally, hands-on experience with GCP (BigQuery)
- Proficiency in Looker, including LookML development
- Strong plus: experience using n8n or similar automation tools
- Experience with SaaS data sources (HubSpot, Stripe, Vitally, Intercom)
- Familiarity with AI-powered development tools (Cursor, DBT Copilot) and a strong interest in leveraging cutting-edge tools to improve your workflow
- Strong problem-solving skills and the ability to debug complex data issues
- Excellent communication skills, with the ability to explain technical concepts to non-technical stakeholders