Staff Software Engineer, Data Infrastructure
Job description
You'll play a key role in the design, development, and operation of services that underpin data ingestion, transformation, storage, compute, and orchestration at massive scale. As a Staff Engineer, you'll be a Directly Responsible Individual (DRI) for multiple core data services, accountable for uptime, reliability, and performance, and a subject matter expert (SME) across multiple systems. Here are a few examples of what we work on:
- Enabling more real-time OLAP capabilities for internal and external customers.
- Scaling our compute and orchestration layer.
- Creating and scaling agents that support engineers and users with their workflow and data needs.
What you will be doing:
- Design, build, and operate reliable and scalable data infrastructure powering Slack's analytics, ML, and data-driven decision-making.
- Serve as DRI for multiple core data services, specifically our analytics infrastructure (e.g., StarRocks, Pinot, and Trino) and our compute and orchestration services (e.g., Airflow, Temporal, EMR, Hive Metastore), ensuring their uptime, reliability, and performance.
- Drive improvements in security, cost efficiency, and developer experience across our data infrastructure.
- Build automation and self-service tools that empower our team and other data teams to easily adopt and manage data workflows.
- Collaborate closely with data engineering, platform, and security teams to design scalable, well-governed solutions.
- Build and enhance our observability, monitoring, and alerting on our services via Grafana and related tooling.
- Partner with other staff and senior engineers to define best practices, technical standards, and support models for Slack's data ecosystem.
- Mentor and coach other engineers, modeling ownership, collaboration, and operational excellence.
Requirements
- U.S. Citizenship or Permanent Residency (Green Card holder). We are unable to provide visa sponsorship for this role.
- 10+ years of software, platform, or infrastructure engineering experience, including time spent supporting data-intensive systems or data platforms.
- Excellent communication skills and the ability to collaborate across cross-functional teams.
- Proven experience in building, deploying, and operating distributed infrastructure at scale.
- Strong technical background with big data and infrastructure technologies, such as Pinot, StarRocks, Trino, EMR, Airflow, Hive Metastore, Kubernetes, or equivalent systems.
- Proficiency in Python, Golang, Bash, and SQL.
- Proficiency with CI/CD (GitHub Actions), Vault, Terraform, Chef, and Grafana.
- Deep understanding of infrastructure reliability, observability, and cost efficiency principles.
- Hands-on experience supporting data pipelines or data engineering workflows is a strong plus.
- A strong sense of ownership and a drive to deliver high-impact, autonomous results.
Benefits & conditions
In the United States, compensation offered will be determined by factors such as location, job level, job-related knowledge, skills, and experience. Certain roles may be eligible for incentive compensation, equity, and benefits. Salesforce offers a variety of benefits to help you live well, including: time off programs, medical, dental, vision, mental health support, paid parental leave, life and disability insurance, 401(k), and an employee stock purchasing program. More details about company benefits can be found at the following link: https://www.salesforcebenefits.com.

Pursuant to the San Francisco Fair Chance Ordinance and the Los Angeles Fair Chance Initiative for Hiring, Salesforce will consider for employment qualified applicants with arrest and conviction records.

At Salesforce, we believe in equitable compensation practices that reflect the dynamic nature of labor markets across various regions. The typical base salary range for this position is $197,300 -