ETL Data Engineer job
TriOptus LLC
1 month ago
Role details
Contract type: Permanent contract
Employment type: Full-time (> 32 hours)
Working hours: Regular working hours
Languages: English
Experience level: Intermediate
Job location
Tech stack
Java
Agile Methodologies
Data analysis
Batch Processing
Big Data
ETL
Data Profiling
Data Warehousing
IBM DB2
Relational Databases
Hadoop
Hive
Shell
Oracle Applications
Scrum
SQL Databases
Subversion
Teradata
Software Version Management
Spark
Ab Initio
Bitbucket
Software Coding
REST
Jenkins
Control M
Requirements
- 5+ years of hands-on experience using Java
- 3+ years of hands-on experience with Spark (a plus)
- 3+ years of hands-on experience implementing REST-based services
- 5+ years of professional experience working as a developer in a Data Warehouse or other data-oriented, batch-processing environment
- 5+ years of experience in analysis of data or complex processes and systems, demonstrating strong analytical skills
- 5+ years of experience writing and interpreting SQL and Unix shell scripts
- 5+ years of hands-on experience with relational databases like Oracle, Teradata, or DB2
The following skills are a plus:
- Experience with ETL development tools like Ab Initio or Informatica
- Experience with CI/CD pipelines (Jules and Jenkins)
- Knowledge of big data technologies like Hadoop/Hive
- Experience working in Agile environments - Scrum & Kanban
- Experience with Subversion or Bitbucket or similar source code versioning tools and coding standards - AIM, GFS
- Experience with a scheduling tool such as Control-M or similar
- Ability to write complex SQL queries for data analysis and data profiling
- Experience documenting business requirements, functional specifications, and test plans
- Ability to work with teams in geographically distributed locations across multiple time zones
- Excellent written and verbal communication skills