Software Engineer, AI Data

Jobgether
2 days ago

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior

Job location

Remote

Tech stack

Clean Code Principles
Artificial Intelligence
Airflow
Architectural Patterns
Google BigQuery
Data Engineering
Data Infrastructure
Data Structures
Database Queries
Distributed Computing
Distributed Systems
System Monitoring
Python
Cloud Services
Software Engineering
Testing Strategies
Workflow Management Systems
AI Infrastructure
Datadog
Cloud Platforms
System Reliability
Build Management
Containerization
PySpark
Data Management
REST
Data Pipelines
Apache Beam
Docker

Job description

This role offers the opportunity to design and build scalable, high-performance systems that power next-generation AI data platforms. You will work on mission-critical pipelines that support large-scale model training and evaluation, impacting millions of inference calls and hours of processed data. The role combines software engineering rigor with AI-focused infrastructure, providing a chance to shape technical execution, optimize data workflows, and drive innovation in a fast-paced, high-impact environment.

You will collaborate closely with researchers, platform engineers, and other stakeholders to deliver reliable, maintainable, and cost-efficient systems that accelerate AI model development. This is an ideal position for engineers who thrive in a startup-like culture where ownership, technical excellence, and measurable impact are paramount.

Accountabilities:

  • Architect and implement scalable AI data infrastructure to support model training and evaluation at scale
  • Build efficient, self-serve data processing pipelines leveraging cloud services and distributed systems
  • Design cost-effective storage, monitoring, and resource management solutions to maximize efficiency
  • Lead adoption of cutting-edge ML/AI tools and frameworks to enhance team velocity and system reliability
  • Streamline workflows, introduce new tooling, and maintain high-quality documentation for engineering processes
  • Troubleshoot and resolve complex technical issues while improving system performance, quality, and cost-efficiency
  • Participate in on-call rotations to ensure operational reliability of AI data platforms

Requirements

  • 5+ years of professional software engineering experience with strong Python and SQL proficiency
  • Solid understanding of software engineering fundamentals: data structures, algorithms, system design, architectural patterns, and testing strategies
  • Experience with RESTful APIs, distributed systems, and containerization (Docker) in cloud environments
  • Proven ability to deliver high-quality, maintainable code in collaborative team settings
  • Strong communication and stakeholder management skills, with the ability to explain technical concepts clearly
  • Startup mindset: able to navigate changing priorities, rapid iteration, and pragmatic decision-making
  • Experience with GCP services (BigQuery, GCS, Cloud Run, GKE)
  • Familiarity with distributed processing frameworks (Apache Beam, PySpark)
  • Knowledge of workflow orchestration tools (Airflow, Prefect, Dagster)
  • Background in ML/AI infrastructure, monitoring tools (Datadog), or data engineering roles
  • Experience collaborating directly with researchers

Benefits & conditions

  • Competitive salary with equity grants and location-adjusted compensation
  • Fully remote work with flexible hours and autonomy over work-life balance
  • Comprehensive employer-paid health benefits
  • Access to cutting-edge AI tools and frameworks, fostering skill growth and innovation
  • Collaborative, high-impact environment with opportunities to shape technical strategy
  • Professional development opportunities including mentorship, training, and learning resources
