Machine Learning: Promising, but Perilous
Nura Kawa - 2 years ago
The promise of Machine Learning (ML) to solve data-driven problems at scale has created growing interest in incorporating ML components into software systems. However, deploying ML models opens the door to additional security vulnerabilities, such as poisoning, privacy, and adversarial attacks. A successful attack can have severe consequences, especially in safety-critical applications. In traditional software development there exists a plethora of security guidelines and principles. Their demonstrated effectiveness leads us to ask: how can we leverage these principles to develop secure and robust ML systems? The challenge is that, unlike traditional software, ML is deployed in variable settings; the security of ML systems must therefore adapt to environmental changes. This talk gives practitioners an overview of the ML security landscape and introduces best practices to secure an ML system against potential attacks.
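As a small illustration of one of the attack classes mentioned above, the sketch below shows an adversarial (evasion) attack using the Fast Gradient Sign Method on a toy classifier. It is not material from the talk itself; it assumes PyTorch, and the model, input, and epsilon value are placeholders chosen only for demonstration.

import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    # Perturb input x in the direction that increases the loss,
    # bounded elementwise by epsilon, to induce misclassification.
    x = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()

# Toy usage: a linear classifier on a random 28x28 "image" (placeholder data).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)   # clean input
y = torch.tensor([3])          # its true label
x_adv = fgsm_attack(model, x, y)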