Data Analytics Engineer I
Job description
You will be joining the Financial Infrastructure team, which builds and maintains the core systems powering our internal financial ecosystem. Every year, we process hundreds of billions of events that have a financial impact on Checkout.com and our merchants. Our team maintains an accurate record of all financial data, safeguards the data integrity of our systems, and ensures our infrastructure meets regulatory and compliance obligations in a scalable, reliable, and fault-tolerant manner.
As an Analytics Engineer, you will play a pivotal role in our mission to make our financial data capabilities world-class. You will work closely with our Finance and Treasury teams to translate their requirements into robust, intuitive data models, and you will design and build the data pipelines needed to process and transform the large volumes of data our systems generate. You will be responsible for the accuracy and reliability of these pipelines as Checkout.com continues to scale, and you will own these processes end to end, maintaining a high standard of data quality.
How You'll Make An Impact
- Design and build data pipelines to process data from our systems, services and applications.
- Implement monitoring and alerting frameworks to ensure data pipeline performance and reliability.
- Partner with other analytics engineers to design and implement scalable data models that support downstream business operations and analytical queries.
- Ensure data governance and security standards are maintained across our systems.
- Continuously evaluate and implement new technologies to improve our platform and systems.
- Collaborate with Finance stakeholders to translate business requirements into technical specifications and Service Level Agreements.
Requirements
- 2+ years of experience in an Analytics Engineering or Data Engineering role, with a focus on large-scale data transformation and data warehousing.
- Excellent SQL coding skills.
- Experience with cloud-based data warehouse technologies such as Snowflake, Google BigQuery, or Amazon Redshift.
- Experience with data transformation tools such as dbt or Dataflow.
- Understanding of data modeling techniques.
- Experience with visualisation platforms such as Looker, Tableau, or Apache Superset.
- Understanding of software engineering best practices and their application to data processing systems.
- Knowledge of Python, Java, or Flink is a plus, but not a requirement.
- Strong attention to detail.
- Ability to work autonomously in a fast-paced and dynamic environment.
- Strong communication and interpersonal skills.