Dependable and Secure Machine Learning

More Information

UBC Calendar

Credits

EECE 571J

Objective:

This course is meant for graduate students to acquire a sound understanding of the security and reliability of machine learning (ML). Machine learning is seeing growing adoption in safety-critical domains such as autonomous vehicles (AVs) and unmanned aerial vehicles, which poses new challenges for the security and reliability of ML systems. In particular, it is well known that hardware faults are growing in frequency and can have catastrophic consequences in such safety-critical applications (e.g., causing an AV to miss obstacles on its path). Similarly, several security vulnerabilities have been identified in ML systems. In this course, we will investigate security attacks, defenses, and reliability techniques relevant to the ML domain.

This course takes an in-depth, hands-on approach to learning about security and reliability techniques in ML. Students will read papers on both classical and modern techniques in ML security and dependability, gaining in-depth insight into the security and reliability challenges of ML systems. They will complete a final project at the end of the course in which they propose techniques to advance the security and/or reliability of ML systems. Students are encouraged, though not required, to integrate their own research ideas into the project.

 

Organization:

Students will be assigned weekly readings of research articles and textbook chapters, and will need to summarize their understanding of the material in the form of written reports. They will meet with the instructor once a week to check their understanding of the material and to ensure they are making adequate progress. An important component of the course is the design of security/reliability techniques for ML systems. The exact techniques to be implemented will be decided upon by the instructor and the student based on the student’s interests. We will emphasize novel insight into the research problem as well as good software engineering principles for implementing the techniques.

 

Topics Covered:

The following is a tentative list of topics that will be covered.

1. White-box adversarial attack in ML

2. Poisoning attack in ML

3. Testing of ML systems

4. Fault injection in ML systems

5. Error-resilience techniques in ML systems

6. Black-box adversarial attack

7. Security defense for ML
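As a taste of topic 1, a white-box adversarial attack can be as simple as the Fast Gradient Sign Method (FGSM): perturb the input in the direction of the loss gradient. Below is a minimal, hedged sketch on a toy logistic-regression "model" using only numpy; the weights, input, and function names are illustrative, not from any course assignment.

```python
import numpy as np

# Toy pre-trained logistic-regression model (weights are made up).
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    """Return P(label = 1) for input x."""
    return sigmoid(w @ x + b)

def fgsm(x, y, eps):
    """Fast Gradient Sign Method: step eps in the sign of the
    gradient of the cross-entropy loss w.r.t. the input. For
    logistic regression this gradient is (p - y) * w in closed form."""
    p = predict(x)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

x = np.array([0.5, -0.5, 1.0])   # clean input with true label 1
y = 1.0
x_adv = fgsm(x, y, eps=0.5)

print(predict(x), predict(x_adv))  # the model's confidence drops
```

With these toy numbers, the model's confidence in the correct label falls from roughly 0.89 to roughly 0.59 after a single gradient step; deep networks replace the closed-form gradient with backpropagation, but the attack principle is the same.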

 

Evaluation:

Grades will be based on three main components:

(1) Weekly summaries of the readings submitted by the student (20%),

(2) Mini-project implementing an attack/fault injection technique in ML applications (20%),

(3) Final project implementing a technique for investigating/enhancing the security/reliability of ML (60%).
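For the fault-injection mini-project, a common starting point is the single-bit-flip fault model: corrupt one bit of a model parameter and observe the effect on the output. A minimal sketch using only numpy (the function name and values are illustrative assumptions, not a prescribed interface):

```python
import numpy as np

def flip_bit(value, bit):
    """Flip one bit of a float32 value by reinterpreting its bytes
    as a uint32 (a simple model of a hardware bit-flip fault)."""
    as_int = np.float32(value).view(np.uint32)
    flipped = as_int ^ np.uint32(1 << bit)
    return flipped.view(np.float32)

w = np.float32(0.75)
w_faulty = flip_bit(w, 30)   # flip the most significant exponent bit

print(w, w_faulty)  # the weight jumps from 0.75 to about 2.55e38
```

Flipping a high exponent bit turns a small weight into an astronomically large one, which is why exponent-bit faults tend to dominate silent data corruption in ML inference; flipping the same bit twice restores the original value, which makes the fault model easy to test.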

 

Recommended Textbooks and Readings:

1. Adversarial Machine Learning, Joseph AD, Nelson B, Rubinstein BI, Tygar JD.

2. Deep Learning, Ian Goodfellow and Yoshua Bengio and Aaron Courville.

3. Conference proceedings of the following conferences:

a. IEEE Symposium on Security and Privacy (Oakland)

b. USENIX Security Symposium (SEC)

c. ACM Conference on Computer and Communications Security (CCS)

d. Conference on Computer Vision and Pattern Recognition (CVPR)

e. Intl. Conference on Machine Learning (ICML)

f. Conference on Neural Information Processing Systems (NeurIPS)

g. IEEE/IFIP Intl. Conf. Dependable Systems and Networks (DSN)

h. IEEE Intl. Symposium on Software Reliability Engineering (ISSRE)

 

Professor: 


