Torsten Stiller

Kill Switch or Moral Compass: Who Programs AI’s Conscience?

What if the most dangerous bug isn't in the code, but in the AI's moral compass? This talk shows how to program one.

Kill Switch or Moral Compass: Who Programs AI’s Conscience?
#1about 5 minutes

Learning from science fiction and early AI failures

Isaac Asimov's Three Laws of Robotics and an early e-commerce failure illustrate the long-standing challenge of embedding a moral compass in AI.

#2about 2 minutes

Why developers are the programmers of AI's conscience

Every design choice, from data selection to objective functions, embeds human values and biases into AI systems, making developers ethically responsible.

#3about 7 minutes

Real-world examples of AI causing unintended harm

Case studies from Google, Amazon, and Microsoft demonstrate how AI can amplify bias, generate hate speech, and create privacy nightmares without ethical guardrails.

#4about 7 minutes

Seven core principles for building responsible AI systems

A framework for ethical AI development includes fairness, transparency, accountability, privacy, safety, societal benefit, and maintaining human control.

#5about 4 minutes

A practical playbook for integrating ethics into development

Use techniques like pre-mortems, ethical user stories, red teaming, and the 'veil of ignorance' to proactively identify and mitigate ethical risks.

#6about 2 minutes

How the EU AI Act makes ethics a compliance issue

Upcoming regulations like the EU AI Act are transforming ethical AI from an ideal into a legal requirement, especially for high-risk systems.

#7about 3 minutes

Balancing the kill switch with a built-in moral compass

The ultimate goal is to build AI with an inherent moral compass so that the emergency kill switch is a last resort, not the primary safety strategy.

Related jobs
Jobs that call for the skills explored in this talk.

Featured Partners

From learning to earning

Jobs that call for the skills explored in this talk.