Data Integration Engineer

NJTECH INC.
Tampa, United States of America
yesterday

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Intermediate

Job location

Tampa, United States of America

Tech stack

SQL Data Warehouse
Amazon Web Services (AWS)
Data analysis
JIRA
Unit Testing
Cloud Engineering
Profiling
Code Review
Data Integration
Data Systems
Data Warehousing
Information Lifecycle Management
Meta-Data Management
Performance Tuning
Scrum
PL/SQL
SQL Databases
Integration Testing
Virtualization Technology
Snowflake
Data Lineage
Performance Monitor
Data Management
Data Pipelines
SQL Tuning

Requirements

  • Minimum 8 years of experience in Data Management, with strong expertise in analytical data warehousing and enterprise-scale data platforms, contributing to the delivery of complex data solutions.
  • Minimum 3 years of hands-on experience with Snowflake Cloud Data Warehouse, including use of advanced features and performance optimization for large, concurrent analytical workloads.
  • Strong experience with AWS cloud architecture, designing and supporting scalable, secure, and cost-efficient data solutions using core AWS services.
  • Design, develop, and maintain complex SQL and PL/SQL procedures to support high-performance data integration and transformation processes.
  • Experience developing data integration solutions using Snowflake services such as Snowpark, Streams, Tasks, and Snowpipe to implement both real-time and batch data pipelines within an AWS ecosystem.
  • Strong knowledge of logical and physical data modeling, enabling effective analytics, reporting, and business intelligence solutions.
  • Hands-on experience developing dimensional data models, including slowly changing dimensions (SCDs) and multiple fact table designs, aligned with analytical best practices.
  • Solid experience across data lifecycle management, including data lineage, profiling, mapping, integration, validation, cleansing, masking, subsetting, archiving, purging, virtualization, and metadata management.
  • Implement and adhere to data quality and testing standards, supporting unit testing, system integration testing, and UAT, to ensure production-ready data solutions.
  • Perform SQL performance tuning, proactive monitoring, and root cause analysis of data pipeline and platform issues, driving timely and effective resolution.
  • Actively participate in Agile delivery teams, contributing to sprint planning, story implementation, estimation, and documentation using Jira.
  • Collaborate closely with peers, architects, and stakeholders, and provide guidance to junior engineers through code reviews and knowledge sharing.
  • Demonstrate strong verbal and written communication skills, effectively partnering with cross-functional technical and business teams.
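As an illustration of the slowly changing dimension (SCD) handling mentioned in the requirements, the sketch below shows the core Type 2 pattern in plain Python: when a tracked attribute changes, the current row is expired and a new current row is inserted. The function, field names, and data are hypothetical examples for this posting; in a Snowflake environment this logic would typically be expressed as a MERGE statement or a Streams/Tasks pipeline rather than application code.

```python
from datetime import date

def scd2_upsert(dim_rows, incoming, key, tracked, as_of=None):
    """Minimal Type 2 SCD sketch (hypothetical helper, not a real API):
    expire the current row when any tracked attribute changes,
    then insert the incoming record as the new current version."""
    as_of = as_of or date.today()
    for new in incoming:
        current = next(
            (r for r in dim_rows if r[key] == new[key] and r["is_current"]),
            None,
        )
        if current is None:
            # New business key: insert as the first (current) version.
            dim_rows.append({**new, "valid_from": as_of,
                             "valid_to": None, "is_current": True})
        elif any(current[c] != new[c] for c in tracked):
            # Tracked attribute changed: close old version, open a new one.
            current["valid_to"] = as_of
            current["is_current"] = False
            dim_rows.append({**new, "valid_from": as_of,
                             "valid_to": None, "is_current": True})
    return dim_rows
```

Exactly one row per business key stays current; the `valid_from`/`valid_to` range preserves history for point-in-time analytics.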

Apply for this position