Applied AI Software Engineer
Job description
In this role, you will work across the stack - from system software and frameworks to end-user interfaces - to build Apple Intelligence features that enrich people's lives. This is a highly cross-functional role working with design, algorithms, software, services, privacy, security, performance, and sometimes even hardware/silicon teams across Apple to engineer end-to-end solutions.

You will need to work quickly and creatively to demonstrate the viability of ideas and technologies while building robust, production-ready systems that millions of users will depend on. Your work will include building user interfaces and system software in Swift, evaluating AI/ML algorithms for on-device integration, collaborating on user experience design, debugging complex cross-device interactions, and contributing to a culture of shipping high-quality production code. Beyond software engineering, you will apply strong UX intuition to identify and shape experience opportunities, iterate with design, and define technical requirements that drive development.
Requirements
BS/MS/PhD in Computer Science or equivalent experience
5+ years in software development
Excellent programming skills in any programming language (preferably at least one of Swift, Objective-C, or C++) and a strong understanding of data structures, memory management, and concurrency
Skilled at debugging and triaging software defects, including race conditions, deadlocks, and synchronization issues in multi-threaded environments
Systems thinking, including the ability to break down ambiguous problems and drive clarity on critical details
Strong intuition for user experience and the ability to translate customer needs into technical requirements
Solid cross-functional collaboration and technical communication skills
Preferred qualifications
Shipping customer-facing features or products to production at scale
Developing for iOS/macOS and/or using Apple system frameworks such as SwiftUI and ARKit
Building and shipping features using LLMs and/or machine learning algorithms, including on-device inference, data-driven validation, requirement definition, and collaboration with algorithm teams
Processing sensor data (e.g., image/video, audio, motion), such as using computer vision or signal processing, working with vector transforms, and/or developing AR applications
Developing and validating personalization features, including working with sensitive data sources such as conversation transcripts or Health data
Developing under strict privacy and security constraints, including techniques such as secure data processing and/or privacy-by-design principles
Developing system software or frameworks, including API definition and performance optimization (particularly for resource-constrained or real-time systems)