DevOps Engineer (AI)
Job description
Bayview Asset Management is seeking an Infrastructure Engineer to support the development, deployment, and scaling of AI solutions across the firm.
This role sits within the AI Team and works closely with the IT team, but owns the infrastructure, data connectivity, and deployment pipelines required to build and operate AI products. It is responsible for ensuring that data is accessible, systems are integrated, and solutions can be reliably deployed into production environments.
This is not a traditional IT support role. It requires hands-on ownership of data pipelines, system integrations, and DevOps practices specific to AI and data-driven products.
AI Infrastructure & Deployment
- Build and manage infrastructure required for AI model development and deployment
- Establish and maintain CI/CD pipelines for AI applications
- Support model deployment, monitoring, and versioning
- Ensure production systems are stable, scalable, and performant
Platform & Systems Integration
- Connect AI solutions into existing enterprise systems and workflows
- Work closely with IT to align with enterprise architecture and standards
- Ensure interoperability between AI tools, data platforms, and business applications
DevOps & Reliability
- Implement DevOps best practices across AI projects
- Monitor system performance, uptime, and reliability
- Troubleshoot production issues and implement long-term fixes
- Maintain logging, observability, and alerting systems
Data Integration & Pipeline Development
- Design and build data pipelines to support AI use cases
- Integrate existing data across internal systems, vendors, and external sources
- Partner with data engineering, IT, and business teams to unlock critical datasets
Security & Compliance Alignment
- Ensure alignment with enterprise IT and AI governance standards
Requirements
- Strong experience with data pipelines, ETL processes, and system integrations
- Strong, hands-on experience with CI/CD, containerization (Docker), and workflow orchestration (Dagster, Airflow)
- Deep operational experience with cloud platforms (AWS, Azure, or GCP)
- Proven experience using IaC (Terraform, Azure Bicep) to rapidly deploy and evolve complex distributed system resources in the cloud
- Ability to effectively improve and deploy observability across the stack
- Deep experience triaging and resolving incidents, and communicating with relevant teams and stakeholders as part of incident response
- Ability to work across IT, engineering, and product teams
- Bachelor's or Master's degree in Computer Science, Statistics, Mathematics, Quantitative Finance, or a related field is required
- 5-8+ years in data engineering, DevOps, or infrastructure roles
Benefits & conditions
- The compensation for this role is $140,000 - $170,000, depending on level of experience.
- A performance-based bonus structure is also available with this role.