Data System Engineer job

TriOptus LLC
Alpharetta, United States of America
1 month ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Intermediate

Job location

Alpharetta, United States of America

Tech stack

Java
Agile Methodologies
Amazon Web Services (AWS)
Automation of Tests
Azure
Big Data
Cloud Computing
Databases
Information Engineering
ETL
Data Stores
Data Systems
Data Warehousing
IBM DB2
Database Design
Database Theory
Linux
DevOps
Event-Driven Programming
JSON
Python
Logical Data Models
Machine Learning
Metadata
NoSQL
Open Source Technology
Scrum
Systems Development Life Cycle
Rapid Application Development
Ruby
Software Engineering
Unstructured Data
XML
Data Processing
Scripting (Bash/Python/Go/Ruby)
Test Driven Development
Snowflake
SAP Sybase ASE
Information Technology
Collibra
Data Analytics
Non-relational Database
Terraform
Data Pipelines
Jenkins
Go

Job description

The team focuses on ETL and data warehousing and is looking for a Data System Engineer on the cloud side. The SDLC covers designing batch and streaming pipelines on cloud and on-premises infrastructure. The databases in use are Sybase IQ, DB2, and Snowflake (Snowflake experience is highly preferred). The client's platform is AWS, and they are moving to Azure. The Data System Engineer will be responsible for tasks such as data engineering, data modeling, ETL processes, data warehousing, and data analytics and science. Our platforms run both on premises and in the cloud (AWS/Azure).

Knowledge/Skills:
- Able to establish, modify, or maintain data structures and associated components according to design
- Understands and documents business data requirements
- Able to produce conceptual and logical data models at the enterprise and business unit/domain level
- Understands XML/JSON and schema development/reuse, database concepts, database design, open source, and NoSQL concepts
- Partners with senior data engineers and senior data architects to create platform-level data models and database designs
- Takes part in reviews of own work and of colleagues' work
- Has working knowledge of the core tools used in planning, analyzing, designing, building, testing, configuring, and maintaining assigned applications
- Able to participate in the assigned team's software delivery methodology (Agile, Scrum, Test-Driven Development, Waterfall, etc.) in support of data engineering pipeline development
- Understands infrastructure technologies and components such as servers, databases, and networking concepts
- Writes code to develop, maintain, and optimize batch and event-driven pipelines for storing, managing, and analyzing large volumes of structured and unstructured data
- Integrates metadata into data pipelines
- Automates build and deployment processes using Jenkins across all environments to enable faster, high-quality releases
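For context on the batch-pipeline work described above, the extract-transform-load pattern can be sketched in Python. This is a minimal illustration only, not part of the posting; the file names, record shape, and helper names are hypothetical, and it assumes newline-delimited JSON input and a CSV target.

```python
import csv
import json


def extract(path):
    """Read newline-delimited JSON records from a file."""
    with open(path) as f:
        for line in f:
            if line.strip():
                yield json.loads(line)


def transform(record):
    """Normalize a record: lowercase keys, drop empty values."""
    return {k.lower(): v for k, v in record.items() if v not in (None, "")}


def load(records, out_path, fieldnames):
    """Write transformed records to a CSV file."""
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames, extrasaction="ignore")
        writer.writeheader()
        writer.writerows(records)


def run_batch(in_path, out_path, fieldnames):
    """Wire the three stages together as one batch run."""
    load((transform(r) for r in extract(in_path)), out_path, fieldnames)
```

In a production setting the same three-stage shape would typically target Snowflake or DB2 rather than a CSV file, with Jenkins triggering the batch run.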

Requirements

Data engineering experience moving large amounts of data in Python. Looking for DevOps and Jenkins experience. Financial experience is not required but is a plus. Looking for mid-level candidates; 4 years' experience is fine. System design experience. NoSQL is a plus, not a must-have. Nice to have (not must-have): Collibra, Terraform, Java, Golang, Ruby, Machine Learning Operations deployment.

Up to 4 years of software development experience in a professional environment and/or comparable experience, such as:
- Understanding of Agile or other rapid application development methods
- Exposure to design and development across one or more database management systems (DB2, Sybase IQ, Snowflake) as appropriate
- Exposure to methods relating to application and database design, development, and automated testing
- Understanding of big data technology and NoSQL design and development with a variety of data stores (document, column family, graph, etc.)
- General knowledge of distributed (multi-tiered) systems, algorithms, and relational and non-relational databases
- Experience with Linux and Python scripting, as well as large-scale data processing technology such as Spark
- Experience with cloud technologies such as AWS and Azure, including deployment, management, and optimization of data analytics and science pipelines
- Bachelor's degree in computer science, computer science engineering, or a related field required

Apply for this position