Senior Data Engineer - Analytics

Pacers Sports & Entertainment
Indianapolis, United States of America

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English
Experience level
Senior

Job location

Indianapolis, United States of America

Tech stack

Artificial Intelligence
Airflow
Data analysis
Azure
Code Review
Data Systems
Cursor
Software Debugging
Web Development
Human-Computer Interaction
Python
Operational Databases
Power BI
Next.js
SQL Databases
Tableau
React
Git
Data Layers
Build Management
Information Technology
Tools for Reporting
Front End Software Development
Virtual Agents
Looker Analytics
Software Version Control
Data Pipelines
Databricks

Job description

The Senior Data Engineer will join our small, high-leverage data team building the data and intelligence platform for Pacers Sports & Entertainment (PS&E). This is a modern full-stack data & analytics engineering role spanning data modeling, pipeline development, and the user-facing applications that put data in front of decision-makers. The role works across the entire surface from raw ingestion through governed consumption layers and into the products that stakeholders actively use. AI-assisted development is core to how our team operates, not a side experiment. The successful candidate will contribute to a team culture where engineers move fluidly across the full data and application stack and leverage modern tooling to punch above their weight.

This role carries an analytics specialty. The successful candidate will own the gold layer modeling, canonical metric definitions, and the semantic layer that business users across ticketing, partnerships, basketball operations, and corporate reporting depend on every day.

  • Design and build gold layer data models that serve ticketing, partnerships, basketball operations, fan engagement, and corporate reporting use cases.

  • Define and maintain canonical metric definitions and the semantic layer that prevents the same number from being reported five different ways across departments.
  • Establish and govern the consumption contracts between the data platform and downstream products, dashboards, and stakeholder workflows.
  • Build and maintain pipelines that move data from source systems through the medallion architecture in Databricks, including ingestion, transformation, and curation.
  • Partner with stakeholders across the business to translate their questions and needs into well-modeled, well-documented, production-grade data products.
  • Develop frontend or modern BI components for internal data products and analytics applications delivered by the team.
  • Participate in code review, testing, and engineering standards alongside the rest of the team.
  • Leverage AI-assisted coding tools to ship higher-quality work faster across the full stack.
  • Participate in a shared on-call rotation for production data systems, responding to incidents and contributing to postmortems and reliability improvements.
  • In every position, each employee is expected to align with PS&E's mission and core values while actively participating in company-sponsored community outreach programs.
  • Other duties as assigned.

Requirements

To perform this job successfully, an individual must be able to perform each duty satisfactorily. The requirements listed below are representative of the knowledge, skill, and/or ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.

  • Five or more years building data systems in production environments.

  • Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field, or equivalent practical experience.
  • Production experience in the consumption layer through either frontend web development (React, Next.js, or similar) or analytics tools (Tableau, Power BI, Looker, Hex, or comparable).
  • Hands-on experience designing dimensional models and semantic layers that real business users have adopted at scale.

Skills

  • Strong Python and SQL, with hands-on cloud lakehouse experience; Databricks preferred.
  • Production experience with dbt for transformation and modeling, and Airflow or comparable orchestration frameworks.
  • Familiarity with agentic AI patterns and production experience using AI coding assistants such as Claude Code, Cursor, or similar tools.
  • Comfort moving across the stack: building a pipeline, modeling a domain, writing a Python service, and standing up a simple web interface independently.
  • Working knowledge of Azure cloud services, version control with Git, and CI/CD patterns.
  • Translating ambiguous business requirements into concrete engineering decisions, balancing quality, stakeholder needs, and delivery speed.
  • Making architectural and modeling decisions independently while knowing when to consult the team.
  • Debugging issues across the full stack from raw source data through transformation logic into user-facing products.
  • Investigating root causes rather than symptoms, with a bias toward shipping working software over perfect specifications.
