Data Engineer

Cloud Resources LLC
Dallas, United States of America
2 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior

Job location

Dallas, United States of America

Tech stack

Third Normal Form
Amazon Web Services (AWS)
Azure
Big Data
Google BigQuery
Cloud Storage
Computer Programming
Information Engineering
Data Governance
Data Infrastructure
ETL
Data Mining
Data Systems
Data Warehousing
Data Flow Control
Hadoop
Python
Performance Tuning
SQL Databases
Data Streaming
Data Processing
Google Cloud Platform
Cloud Platform System
Snowflake
Spark
Git
Data Lake
Information Technology
Star Schema
Kafka
Software Version Control
Data Pipelines
Redshift
Databricks
Programming Languages

Job description

Position Description: We are seeking a highly experienced and skilled Staff/Senior Data Engineer to join our growing data team. In this pivotal role, you will be instrumental in designing, building, and maintaining our robust data infrastructure, ensuring the availability, reliability, and scalability of our data pipelines. You will work closely with data scientists, analysts, and product teams to transform raw data into actionable insights that power our business. If you are passionate about data, enjoy solving complex technical challenges, and thrive in a fast-paced environment, we encourage you to apply.

Your future duties and responsibilities

  • Design & Development: Lead the design, development, and implementation of scalable, high-performance, and reliable data pipelines using various ETL/ELT tools and programming languages (e.g., Python, Scala).

  • Data Modeling: Develop and optimize data models (dimensional, relational, columnar) for efficient storage, retrieval, and analysis of large datasets.
  • Infrastructure Management: Build, maintain, and optimize data warehousing solutions (e.g., Snowflake, Redshift, BigQuery, Databricks) and data lakes (e.g., S3, ADLS).
  • Performance Optimization: Identify and resolve performance bottlenecks in data pipelines and queries, ensuring optimal data flow and accessibility.
  • Data Governance & Quality: Implement and enforce data governance best practices, ensuring data quality, integrity, security, and compliance.
  • Collaboration: Work closely with data scientists, data analysts, and product managers to understand data requirements and translate them into technical solutions.
  • Automation: Automate data extraction, transformation, loading (ETL/ELT) processes, monitoring, and alerting.
  • Mentorship & Leadership: Mentor junior data engineers, provide technical guidance, and contribute to the overall growth and best practices of the data engineering team.
  • Innovation: Stay up to date with emerging data technologies and trends, evaluating and recommending new tools and approaches to improve our data platform.

  • Documentation: Create and maintain comprehensive documentation for data pipelines, models, and processes.

Requirements

Education Qualification: Bachelor's degree in Computer Science or a related field, or higher, with a minimum of 9 years of relevant experience.

Required Qualifications To Be Successful In This Role: Bachelor's or master's degree in Computer Science, Engineering, Data Science, or a related field.

  • 5+ years of professional experience as a Data Engineer, with a strong track record of designing and implementing complex data solutions.
  • Expert proficiency in SQL for data manipulation, analysis, and optimization.
  • Strong programming skills in Python for data engineering tasks.
  • Extensive experience with cloud-based data platforms such as AWS (S3, Glue, Lambda, Redshift, EMR), Azure (Data Lake, Data Factory, Synapse), or Google Cloud Platform (BigQuery, Dataflow, Cloud Storage).
  • Proven experience with data warehousing concepts and technologies (e.g., Snowflake, Redshift, BigQuery, Databricks).
  • Solid understanding of ETL/ELT processes and tools.
  • Experience with big data technologies like Spark, Hadoop, or Kafka.
  • Familiarity with data modeling techniques (star schema, snowflake schema, 3NF).
  • Experience with version control systems (e.g., Git).
  • Excellent problem-solving skills and the ability to troubleshoot complex data issues.
  • Strong communication and interpersonal skills to collaborate effectively with cross-functional teams.
  • Bachelor's degree in Computer Science, Engineering, or a related quantitative field.

Apply for this position