Data Engineer (Remote from Austria)
Role details
Job location
Remote from Austria
Tech stack
SQL, Python, AWS/GCP/Azure, Docker/Kubernetes, Snowflake, BigQuery, Redshift, Databricks, Airflow, DBT
Job description
This position is posted by Jobgether on behalf of a partner company. We are currently looking for a Data Engineer in Austria.

This role offers the opportunity to shape and maintain a high-performing, scalable data infrastructure in a dynamic, fast-growing environment. You will be responsible for designing, developing, and optimizing data pipelines while ensuring data quality, security, and performance. Collaborating closely with development and operations teams, you will influence the evolution of the data warehouse platform and experiment with new tools to drive innovation. The position emphasizes hands-on work with cloud-based technologies, modern data warehousing solutions, and ELT frameworks. Ideal candidates are proactive, solution-oriented, and passionate about enabling data-driven decision-making across the organization. This is a chance to work in a flexible, creative, and supportive culture while making a tangible impact on product and business outcomes.

Accountabilities
- Maintain, configure, and optimize the existing data warehouse platform and pipelines.
- Design and implement incremental data integration solutions prioritizing data quality, performance, and cost-efficiency.
- Drive innovation by experimenting with new technologies and recommending platform improvements.
- Collaborate with development and operations teams to ensure seamless data flow and integration.
- Implement and enforce data security, auditing, and monitoring best practices.
- Support the scaling and evolution of the data architecture to meet growing business needs.
Requirements
- 7+ years of experience in data warehousing, database administration, or database development.
- 5+ years of hands-on experience as a Data Engineer using SQL and Python.
- Strong experience with cloud platforms such as AWS, GCP, or Azure, including containerization (Docker/Kubernetes).
- Proven ability to work with large datasets using tools like Snowflake, BigQuery, Redshift, Databricks, Vertica, Teradata, or Hadoop/Hive/Spark.
- Experience building maintainable, high-performance, and scalable data pipelines.
- Proficiency with ELT tools and data integration frameworks such as Airflow, DBT, S3, and REST APIs.
- Positive, solution-oriented mindset and willingness to learn new technologies.
- Excellent written and verbal communication skills in English.
Benefits
- Competitive total compensation package.
- Strong work-life balance initiatives and flexible remote work environment.
- Autonomy and freedom to make decisions and propose improvements.
- Opportunities for professional growth, continuous learning, and