AI & Data Engineer

IBM
Chicago, United States of America
Posted yesterday

Role details

Contract type
Permanent contract
Employment type
Full-time (> 32 hours)
Working hours
Regular working hours
Languages
English

Job location

Chicago, United States of America

Tech stack

API
Artificial Intelligence
Amazon Web Services (AWS)
IBM System i
Azure
Cloud Computing
Code Generation
Continuous Integration
Data Architecture
Information Engineering
Software Debugging
DevOps
Distributed Computing Environment
Distributed Data Store
Intrusion Detection and Prevention
Python
Natural Language Processing
NumPy
OpenShift
TensorFlow
Systems Integration
Unstructured Data
Data Processing
Google Cloud Platform
Enterprise Software Applications
Feature Engineering
Data Ingestion
PyTorch
Delivery Pipeline
Large Language Models
Spark
Pandas
Event Driven Architecture
Containerization
Scikit-learn
Kubernetes
Information Technology
Hugging Face
Production Code
Performance Monitoring
HashiCorp
Integration Frameworks
Kafka
Build Process
Data Management
Machine Learning Operations
API Design
Terraform
Data Pipelines
Docker
Confluent

Job description

At IBM Global Sales, we bring together innovation, collaboration, and expertise to help solve complex business challenges and drive meaningful outcomes. Working across industries and geographies, you will partner with colleagues, Independent Software Vendors (ISVs), Business Partners, and service providers to develop solutions that enable digital transformation and lasting impact.

A Build Engineering AI & Data Engineer is more than a developer - you are a hands-on builder responsible for turning data and AI concepts into real, working solutions that deliver measurable business value. Success in this role requires curiosity, strong technical depth, and the ability to collaborate effectively across ecosystem partners to translate ideas into scalable outcomes.

Working alongside Solution Architects, ISVs, Business Partners, and service providers, you will leverage the watsonx platform and modern data and AI technologies, as well as automation, observability, and FinOps platforms, to prototype, implement, and scale solutions. This role sits at the intersection of data engineering, AI development, ecosystem collaboration, and partner engagement, with a strong focus on execution across both pre-sales activities and post-sales implementation, supporting Build Engineering initiatives across the Americas GEO.

A key part of this role is supporting IBM's Build motion, where we co-create with ISVs, Business Partners, and service providers to validate, embed, and scale IBM technology across AI, Data, and Automation within the solutions they bring to market for their end customers. This work drives repeatability, accelerates adoption, and strengthens joint go-to-market outcomes.

Your role and responsibilities

AI & Data Solution Development & Prototyping

  • Build demos, Proof of Concepts (POCs), and Minimum Viable Products (MVPs) to validate use cases and demonstrate business value.
  • Develop data and AI-driven applications using foundation models, large language models (LLMs), and related technologies, including NLP, text-based solutions, and tooling such as Project Bob or similar technologies that accelerate pipeline creation, code generation, and deployment.
  • Rapidly iterate on prototypes based on partner and stakeholder feedback.

Implementation & Integration

  • Translate solution designs into production-ready code and deployable architectures.
  • Integrate AI and data capabilities into enterprise systems, APIs, business workflows, and partner platforms (a minimal API sketch follows this list).
  • Work across structured and unstructured data sources, ensuring data is prepared and optimized for AI and analytics use cases.
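For illustration only, here is a minimal sketch of exposing a model behind an HTTP API so it can be called from enterprise systems and partner platforms. It assumes FastAPI and Pydantic; the endpoint path, payload fields, and the placeholder scoring rule are invented for the example and do not describe any specific IBM or partner service.

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Scoring service (illustrative)")

class ScoreRequest(BaseModel):
    text: str

class ScoreResponse(BaseModel):
    label: str
    confidence: float

@app.post("/score", response_model=ScoreResponse)
def score(req: ScoreRequest) -> ScoreResponse:
    # Placeholder rule standing in for a real model call (for example, a
    # watsonx.ai or Hugging Face inference client) so the sketch stays
    # self-contained and runnable.
    label = "positive" if "good" in req.text.lower() else "neutral"
    return ScoreResponse(label=label, confidence=0.5)

# Run locally (assuming the file is saved as scoring_service.py):
#   uvicorn scoring_service:app --reload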

Automation & Observability Integration

  • Integrate AI and data capabilities with enterprise automation, observability, and FinOps platforms to enable end-to-end workflows and outcomes.
  • Work with event streaming, infrastructure automation, secrets management, and cost/operations tooling to operationalize AI-driven use cases.
  • Build integrations across APIs and event-driven architectures to connect AI solutions with enterprise systems and partner platforms (see the event-streaming sketch after this list).
  • Support use cases such as incident detection, workflow automation, cost optimization, and performance monitoring.
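As a hedged illustration of the event-driven side of this work, the sketch below consumes operational events from a Kafka topic and flags simple incident candidates. It assumes the confluent-kafka Python client; the broker address, consumer group, topic name, and latency threshold are placeholder assumptions, not a description of an actual IBM or partner deployment.

import json
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",   # assumed broker address
    "group.id": "ai-observability-demo",     # assumed consumer group
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["ops-metrics"])           # assumed topic name

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        event = json.loads(msg.value())
        # Toy anomaly rule standing in for a real model or an observability
        # signal (e.g., from Instana or Turbonomic).
        if event.get("latency_ms", 0) > 500:
            print(f"possible incident: {event}")
finally:
    consumer.close()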

Build Motion, Pre-Sales & Post-Sales Delivery

  • Support both pre-sales activities and post-sales implementations as part of IBM's Build motion.
  • In pre-sales, co-create with ISVs and Business Partners, alongside Solution Architects, to validate IBM technology through discovery, demos, prototypes, POCs, and MVPs.
  • In post-sales, co-create with partners to implement, integrate, optimize, and scale solutions in production environments to drive adoption and measurable outcomes.
  • Help embed IBM technology into partner platforms and offerings that are sold to their end customers.
  • Contribute reusable engineering patterns, accelerators, and assets that improve repeatability and scalability of joint solutions.

Data Engineering & Pipeline Development

  • Design, build, and optimize data pipelines to support AI models and analytics use cases.
  • Work with structured and unstructured data across batch and streaming architectures.
  • Implement data ingestion, transformation, and feature engineering processes.
  • Support modern data architectures including lakehouse, vector databases, and event streaming frameworks (e.g., Kafka/Confluent).
  • Enable data readiness for AI, including integration with retrieval-augmented generation (RAG) and orchestration pipelines (a minimal retrieval sketch follows this list).
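To make the RAG data-readiness point concrete, here is a minimal retrieval sketch assuming the sentence-transformers and faiss libraries; the embedding model, sample documents, and number of results are illustrative choices rather than this team's actual configuration.

import faiss
from sentence_transformers import SentenceTransformer

documents = [
    "Invoices are ingested nightly from the partner ERP system.",
    "Kafka topics carry clickstream events for near-real-time features.",
    "The lakehouse stores curated tables for analytics and model training.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")            # assumed embedding model
embeddings = model.encode(documents).astype("float32")

index = faiss.IndexFlatL2(embeddings.shape[1])             # exact L2 vector index
index.add(embeddings)

query = model.encode(["How does event data reach the feature pipeline?"]).astype("float32")
_, hits = index.search(query, 2)
for i in hits[0]:
    print(documents[i])                                     # context passed to the LLM prompt

In a production pipeline the in-memory index would typically be replaced by a managed vector database, with ingestion and transformation handled by the batch or streaming jobs described above.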

Model Utilization & Optimization

  • Implement and optimize foundation models and LLMs for performance, scalability, and cost efficiency.
  • Apply prompt engineering, fine-tuning, and evaluation techniques (see the sketch after this list).
  • Monitor outputs and continuously improve accuracy and reliability.
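As a small, hedged example of prompt iteration with a lightweight output check, the sketch below uses the Hugging Face transformers pipeline API; the model (distilgpt2) and prompts are placeholders chosen so the example runs on modest hardware, and the final check merely stands in for real task-specific evaluation.

from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")   # assumed small demo model

prompts = [
    "Summarize in one sentence: the data pipeline loads invoices nightly.",
    "List two risks of skipping schema validation in ingestion:",
]

for prompt in prompts:
    out = generator(prompt, max_new_tokens=40, do_sample=False)[0]["generated_text"]
    completion = out[len(prompt):].strip()
    # Toy check: in practice this would be replaced by task-specific metrics
    # or human review before promoting a prompt to production use.
    print(f"{prompt!r} -> {len(completion.split())} words generated")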

Delivery Execution & Collaboration

  • Partner with Solution Architects, Data Scientists, and ecosystem stakeholders to deliver high-quality outcomes.
  • Operate within agile delivery models, contributing to sprint execution and milestones.
  • Communicate progress, risks, and technical trade-offs clearly to stakeholders.

Testing, Deployment & Support

  • Support deployment across cloud, hybrid, and on-prem environments.
  • Conduct testing, validation, and debugging to ensure production readiness.
  • Provide technical support during deployment and early lifecycle adoption.

Documentation & Reusability

  • Document solution components, code, and implementation patterns.
  • Contribute to reusable assets, accelerators, and best practices.
  • Support knowledge sharing and onboarding across the team.

Requirements

  • Bachelor's degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
  • Strong proficiency in Python; additional languages are a plus.
  • Experience with AI/ML frameworks such as TensorFlow, PyTorch, or Hugging Face.
  • Hands-on experience with data manipulation using Pandas, NumPy, and Scikit-learn.
  • Experience designing and working with data pipelines, data processing frameworks, and distributed data systems.
  • Exposure to foundation models, large language models, NLP, or similar AI technologies.
  • Familiarity with cloud platforms (AWS, Azure, GCP) and containerization (Docker, Kubernetes, OpenShift).
  • Experience building APIs and integrating across systems.
  • Experience supporting technical delivery across pre-sales and/or post-sales environments.
  • Strong problem-solving, debugging, and optimization skills.
  • Ability to collaborate across technical and partner teams.
  • Demonstrated growth mindset and commitment to continuous learning.

Preferred technical and professional experience

  • Experience with IBM technologies such as watsonx.ai, watsonx.data, watsonx Orchestrate, and Project Bob, or similar AI, data, and automation technologies used to accelerate solution development and deployment.
  • Experience with enterprise platforms such as Confluent (Kafka), HashiCorp (Terraform, Vault), Apptio/Cloudability (FinOps), Instana (observability), Turbonomic (optimization), or similar technologies supporting event-driven architectures, automation, monitoring, and cost optimization.
  • Familiarity with RAG architectures, vector databases, and embeddings.
  • Experience with Spark, Kafka, or distributed data processing frameworks.
  • Exposure to DevOps/MLOps practices (CI/CD, model lifecycle management).
  • Experience working with ISVs, Business Partners, or ecosystem-led solutions.
  • Experience supporting both pre-sales validation and post-sales implementation.
  • Understanding of enterprise workflows, automation, and embedded solution patterns.

IBM is committed to creating a diverse environment and is proud to be an equal-opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, gender, gender identity or expression, sexual orientation, national origin, caste, genetics, pregnancy, disability, neurodivergence, age, veteran status, or other characteristics. IBM is also committed to compliance with all fair employment practices regarding citizenship and immigration status.
