Big Data Architect Spark / Cassandra

Argyll Info Tech Inc.
Cincinnati, United States of America

Role details

Contract type
Temporary contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior
Compensation
$137K

Job location

Cincinnati, United States of America

Tech stack

Java
Azure
Big Data
Information Technology Consulting
Databases
Information Engineering
Data Systems
DevOps
Distributed Computing Environment
Distributed Data Store
Distributed Systems
Fault Tolerance
High-Level Architecture
NoSQL
Performance Tuning
Cloud Services
SQL Databases
Data Streaming
Data Processing
Apache Cassandra
Spark
Containerization
Kubernetes
Information Technology
Cassandra
Data Management
Data Pipelines
Docker

Job description

We are seeking an experienced Big Data Technology Architect with deep expertise in Apache Spark, Java, and distributed data processing systems. The ideal candidate will lead the architecture, design, and implementation of scalable data platforms, working with high-volume, real-time, and batch data pipelines. This role requires strong hands-on experience with NoSQL databases (Cassandra) and containerized environments (Kubernetes).

Responsibilities

  • Provide technology consulting and architectural guidance for big data solutions
  • Define solution architecture, scope, and effort estimation
  • Design and implement scalable data processing pipelines using Apache Spark
  • Develop high-performance applications using Java and distributed computing frameworks
  • Work with Apache Cassandra for large-scale data storage and retrieval
  • Deploy and manage applications in Kubernetes (K8s) environments
  • Collaborate with cross-functional teams to deliver end-to-end data solutions
  • Ensure performance optimization, scalability, and fault tolerance
  • Drive best practices in data engineering, architecture, and DevOps
  • Mentor team members and provide technical leadership
  • Solution delivery: design and implement scalable big data solutions
  • Client communication: act as the primary technical point of contact and clarify requirements
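The fault-tolerance responsibility above is the kind of concern a hands-on architect typically encodes directly in application code. As a minimal sketch in plain Java (no external dependencies; class and method names are hypothetical), a retry helper with exponential backoff illustrates one common pattern for tolerating transient failures in distributed calls:

```java
import java.util.concurrent.Callable;

/** Minimal retry-with-exponential-backoff helper (plain JDK; names are illustrative). */
public class RetryDemo {

    /** Runs task, retrying up to maxAttempts times and doubling the delay after each failure. */
    static <T> T withRetry(Callable<T> task, int maxAttempts, long baseDelayMs) throws Exception {
        Exception last = null;
        long delay = baseDelayMs;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return task.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(delay);
                    delay *= 2; // exponential backoff between attempts
                }
            }
        }
        throw last; // all attempts failed
    }

    public static void main(String[] args) throws Exception {
        // Simulated flaky call: fails twice, then succeeds on the third attempt.
        int[] calls = {0};
        String result = withRetry(() -> {
            if (++calls[0] < 3) throw new RuntimeException("transient failure");
            return "ok";
        }, 5, 10);
        System.out.println(result + " after " + calls[0] + " attempts"); // prints "ok after 3 attempts"
    }
}
```

In a real Spark or Cassandra deployment, the same idea usually comes from framework-level mechanisms (Spark task retries, driver-level retry policies) rather than hand-rolled loops; the sketch only shows the underlying pattern.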

Requirements

  • 10 years of experience in Big Data / Data Engineering

  • Strong expertise in:

  • Apache Spark (Core, SQL, Streaming)

  • Java development

Hands-on experience with:

  • Apache Cassandra (NoSQL database)
  • Kubernetes (K8s)
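Hands-on Cassandra work in a role like this centers on partition-key-oriented data modeling. A minimal CQL sketch (keyspace, table, and column names are hypothetical) of a time-series table where the compound partition key bounds partition size and the clustering key supports ordered range scans:

```sql
-- Hypothetical time-series table: (sensor_id, day) bounds each partition,
-- event_time orders rows within a partition for efficient range queries.
CREATE TABLE IF NOT EXISTS metrics.events_by_day (
    sensor_id   text,
    day         date,
    event_time  timestamp,
    payload     text,
    PRIMARY KEY ((sensor_id, day), event_time)
) WITH CLUSTERING ORDER BY (event_time DESC);
```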

Strong understanding of:

  • Distributed systems and large-scale data processing
  • Data pipeline architecture (batch & streaming)
  • CI/CD pipelines
  • Docker & containerization
  • Kubernetes orchestration
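The containerization and orchestration items above typically translate into authoring Kubernetes manifests. A hedged sketch of a Deployment for a containerized data-processing service (names, image, port, and probe path are all hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: data-pipeline            # hypothetical service name
spec:
  replicas: 3                    # horizontal scaling
  selector:
    matchLabels:
      app: data-pipeline
  template:
    metadata:
      labels:
        app: data-pipeline
    spec:
      containers:
        - name: pipeline
          image: registry.example.com/data-pipeline:1.0.0  # hypothetical image
          resources:
            requests:
              cpu: "1"
              memory: 2Gi
            limits:
              memory: 2Gi
          livenessProbe:         # restart hung containers: basic fault tolerance
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 30
```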

Exposure to:

  • Azure Cloud platform
  • Cloud-native data architectures

Qualifications

  • Total 15 years of IT experience
  • 10 years of relevant experience
  • Bachelor's or Master's degree in Computer Science, Engineering, or related field
  • Certifications: not mandatory

Preferred profile

  • Strong hands-on Spark architect (not just high-level design)
  • Experience building large-scale distributed data systems
  • Expertise in real-time and batch data processing
  • Proven ability to lead teams and interact with clients directly

Apply for this position