Scientific Fellow, AI Safety, R&D Data Science and Digital Health
Job description
We are seeking a highly technical leader in AI safety for our Research & Development Data Science & Digital Health (DSDH) organization. Reporting directly to the Vice President of AI/ML & Digital Health, this role is responsible for embedding AI safety, robustness, and observability into the design, evaluation, and deployment of advanced AI systems across the DSDH portfolio and R&D use cases. These systems span foundation and predictive AI models, generative AI, and autonomous agentic systems supporting discovery, development, clinical, and regulatory workflows.
This is a hands-on, deeply technical and scientific fellow role, focused on shaping model and AI system design and evaluation while contributing to policy, compliance, and enterprise governance. The Scientific Fellow will work closely with AI scientists, engineers, AI Quality & Optimization, Global Regulatory Affairs, Quantitative Scientists, and Johnson & Johnson Technology (JJT) to ensure AI systems deployed in R&D workflows are safe, trustworthy, and fit-for-purpose as AI capability and autonomy scale.
Strategic direction and research priorities
- Shape DSDH and IM R&D strategy for safe and trustworthy AI by defining multi-year research priorities, capability roadmaps, and investment recommendations for AI safety across discovery, development, clinical, and regulatory workflows.
- Represent AI safety as a senior scientific voice in function- and enterprise-level councils and working groups; set standards and priorities for safe scaling of GenAI and agentic systems, and provide technical leadership on safety principles and implementation for agentic and autonomous systems.
- During all design phases, partner directly with AI and quantitative scientists across IM R&D, as well as with technical leads in JJT, to:
  - Identify potential failure modes, risks, and appropriate levels of autonomy and human oversight;
  - Define safety-relevant observability signals, acceptable failure envelopes, and mitigation strategies tailored to different R&D contexts (research, clinical, regulatory);
  - Ensure monitoring captures unsafe behaviors, not only performance drift.
- Design and execute safety-focused models and evaluations, including but not limited to stress testing for hallucinations, edge cases, and failure propagation in multi-step reasoning and agent workflows.
- Provide technical leadership for AI safety in regulated environments, covering use cases such as regulatory documentation for AI-enabled R&D processes and submissions, and autonomous agents in GxP environments.
- Influence internal policy and external best practices by contributing to guidance documents/points-to-consider for safe GenAI and agentic systems in pharma R&D, including participation in expert working groups and advisory panels.
- Track emerging risks, research, and best practices in AI safety and translate them into practical guidance for internal teams.
- Develop business cases to secure investment for AI safety capabilities, and lead execution of funded initiatives.
- Drive J&J innovation in the field, leading to high-visibility publications in top-tier AI conferences and journals, and to patents on AI safety in generative AI, reasoning, and multi-agent systems.
- Serve as an external ambassador for J&J IM R&D AI safety: invited talks and keynotes, conference leadership roles (area chair, workshop organizer), and participation in cross-industry consortia and standards bodies.
- Establish and lead strategic external collaborations with academic, industry, and governmental partners focused on AI safety in high-stakes biomedical and regulatory contexts.
Requirements
- PhD or equivalent advanced degree in Computer Science, Artificial Intelligence, Machine Learning, Data Science, or a related field.
- Minimum of 10 years of post-academic industry experience.
- Proven track record and strong hands-on experience with modern AI systems, including foundation models, multimodal generative AI, large reasoning models, or agentic systems.
- Extensive experience with AI safety, robustness, reliability, or evaluation in high-impact or high-stakes domains.
- Demonstrated ability to reason about system-level behavior, failure modes, and risk, beyond model accuracy and robustness alone.
- Excellent coding and software development capabilities.
- Experience working in highly interdisciplinary and matrixed environments spanning AI, data science, engineering, and life science.
- Strong communication skills and ability to influence AI model and systems design without formal authority.
- Experience in the Life Sciences, Healthcare, Pharmaceutical, or Medical Tech sector is preferred.
Preferred skills
Artificial Intelligence (AI), Cognitive Computing, Compliance Management, Curious Mindset, Data Structures, Developing Others, Disruptive Innovations, Human-Computer Interaction (HCI), Human-Computer Relationships, Inclusive Leadership, Leadership, Machine Learning (ML), Process Improvements, Relationship Building, Research and Development, SAP Product Lifecycle Management, Scripting Languages, Tactical Planning