Data Engineering Developer
Paradigm
Seattle, United States of America
Role details
Contract type: Permanent contract
Employment type: Full-time (> 32 hours)
Working hours: Regular working hours
Languages: English
Experience level: Senior
Job location: Seattle, United States of America
Tech stack
Java
Artificial Intelligence
Data analysis
Computer Programming
Databases
Information Engineering
Data Transformation
Data Structures
Relational Databases
Java Virtual Machine (JVM)
Python
Standard Sql
Scala
SQL Databases
Spark
Kotlin
Data Pipelines
Databricks
Job description
- You will be responsible for creating and maintaining optimal data pipeline architecture using Spark, Python, SQL, and other tools.
- You will assemble large, complex data sets that meet functional/non-functional business requirements.
- You will work with technical and non-technical teams to provide data-driven insights.
- You will identify issues related to data, data sources and processing.
- You will monitor reporting solutions, verify that data sets are correct, and troubleshoot any issues.
- You will be responsible for customer issue tickets - including reviewing existing solutions and looking holistically at our services to identify trends that may need a larger review.
- You will document methodologies, variations, and QA processes.
- You will intake, identify, document, and share findings and insights with stakeholders, both internal and external.
- You may work cross-departmentally as a data SME for other teams.
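The pipeline responsibilities above can be sketched, very loosely, as a toy extract-transform-load flow in plain Python. This is purely illustrative: the function names and sample records are invented, and a real pipeline at this scale would use Spark DataFrames on Databricks rather than in-memory lists.

```python
# Toy ETL sketch: extract raw records, validate/transform them, load the
# survivors into a store. Illustrative only -- not a production design.

def extract():
    # Stand-in for reading from a source system (files, APIs, databases).
    return [
        {"order_id": 1, "amount": "19.99", "region": "west"},
        {"order_id": 2, "amount": "5.00", "region": "east"},
        {"order_id": 3, "amount": "bad", "region": "east"},  # malformed row
    ]

def transform(rows):
    # Cast types and drop rows that fail validation.
    clean = []
    for row in rows:
        try:
            amount = float(row["amount"])
        except ValueError:
            continue  # in practice, route to a quarantine table for review
        clean.append({**row, "amount": amount})
    return clean

def load(rows, store):
    # Stand-in for writing to a warehouse table; returns rows written.
    store.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract()), warehouse)
print(loaded)  # 2 -- the malformed row is dropped
```

The same shape (ingest, validate, quarantine bad records, write) carries over directly to Spark jobs; only the data structures change.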
Requirements
- You have a bachelor's degree (ideally in Computer Science, Math, Economics, or Statistics).
- You have 5 years' experience in programming or data engineering.
- You are very knowledgeable in Python. Knowledge of JVM languages such as Kotlin, Java, and Scala is a plus.
- You have advanced SQL knowledge, experience with relational databases and query authoring, and familiarity with a variety of databases.
- You have a deep understanding of Apache Spark, especially as part of Azure Databricks.
- You are familiar with Unity Catalog, Spark Declarative Pipelines (aka Delta Live Tables), and Delta Sharing.
- You have knowledge of technology related to data transformation and data structures.
- You think critically and are not afraid to be wrong (but you love learning from your successes and your mistakes).
- You have excellent requirement gathering and independent problem-solving skills.
- You treat AI as a tool to leverage, with a clear understanding of the output it should produce.
- You have excellent communication skills (both written and verbal), including the ability to translate technical concepts for people without technical expertise.
- You are organized with strong attention to detail and excellent follow-through.
- You are agile and able to adapt to changing demands and manage multiple priorities in a fast-paced environment.
- You are self-motivated and resourceful, with strong interpersonal skills; able to work individually as well as with a team.
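As a concrete (hypothetical) instance of the query authoring mentioned above, a join with aggregation against a relational database might look like the following. The table names and data are invented for illustration, and Python's built-in sqlite3 stands in for a production warehouse:

```python
import sqlite3

# In-memory database standing in for a production relational store.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         customer_id INTEGER REFERENCES customers(id),
                         amount REAL);
    INSERT INTO customers VALUES (1, 'west'), (2, 'east');
    INSERT INTO orders VALUES (10, 1, 19.99), (11, 1, 5.00), (12, 2, 7.50);
""")

# Aggregate order totals per region via a join.
rows = conn.execute("""
    SELECT c.region, ROUND(SUM(o.amount), 2) AS total
    FROM orders AS o
    JOIN customers AS c ON c.id = o.customer_id
    GROUP BY c.region
    ORDER BY c.region
""").fetchall()
print(rows)  # [('east', 7.5), ('west', 24.99)]
```

The SQL here is standard enough to run largely unchanged on most relational databases, including Databricks SQL.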
About the company
At Paradigm, we make world-class software and deliver high-quality professional services for the building products industry. Our success is tied directly to our customers' success, so we do what it takes to make sure they're successful. And we know that we couldn't do it without our awesome employees.