Data Engineer

Stellent IT LLC
Cupertino, United States of America
8 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Compensation
$178K

Job location

Cupertino, United States of America

Tech stack

API
Amazon Web Services (AWS)
Azure
Big Data
Cloud Computing
Computer Programming
Information Engineering
Data Governance
ETL
Data Mining
Data Systems
Data Warehousing
Relational Databases
Document-Oriented Databases
Python
Simple Object Access Protocol (SOAP)
SQL Databases
Data Streaming
Google Cloud Platform
Robot Operating System
Data Ingestion
System Availability
Spark
PySpark
Information Technology
Kafka
API Design
REST
Data Pipelines
API Management

Job description

Position Overview: We are looking for a passionate and skilled Data Engineer with expertise in Spark, Python, SQL, and API development to design, develop, and maintain end-to-end data solutions. The ideal candidate will work closely with cross-functional teams to build scalable data pipelines, ensure data quality, and enable analytics and reporting.

Responsibilities:

  • Design, develop, and maintain scalable data pipelines using Apache Spark and Python.

  • Develop and optimize SQL queries for data extraction, transformation, and loading (ETL).
  • Build and implement end-to-end API integrations for data ingestion and dissemination.
  • Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements.
  • Ensure data accuracy, integrity, and security across all platforms.
  • Monitor and troubleshoot data pipeline issues, ensuring high availability and performance.
  • Document data workflows, architecture, and best practices.

Requirements

  • Bachelor's or Master's degree in Computer Science, Data Engineering, or related field.
  • Proven experience with Spark (PySpark) for large-scale data processing.
  • Strong programming skills in Python for data manipulation and automation.
  • Extensive experience with SQL and relational databases.
  • Hands-on experience developing and consuming APIs (RESTful/SOAP).
  • Knowledge of data warehousing concepts and tools.
  • Familiarity with cloud platforms (AWS, Azure, GCP) is a plus.
  • Excellent problem-solving skills and attention to detail.
  • Strong communication and teamwork skills.

Preferred Skills:

  • Experience with streaming data technologies (Kafka, Kinesis).
  • Knowledge of data governance and security best practices.
  • Experience with CI/CD pipelines for data workflows.

Benefits & conditions

  • $177,500 per year
