[TOC]

  1. Title: Adversarial Attacks on Neural Network Policies
  2. Author: Sandy Huang et al.
  3. Publish Date: 8 Feb 2017
  4. Review Date: Wed, Dec 28, 2022

Summary of paper

Motivation

Contribution

Limitation

Some key terms

adversarial example crafting with the fast gradient sign method
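The fast gradient sign method (FGSM) perturbs an input by a small step of size ε in the direction of the sign of the loss gradient with respect to the input: x_adv = x + ε · sign(∇ₓ J(θ, x, y)). A minimal numpy sketch follows; the linear softmax classifier and its analytic cross-entropy gradient are illustrative stand-ins for a trained network, not the paper's actual models.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm(x, y, W, b, eps):
    """FGSM for a linear softmax classifier (illustrative stand-in).

    Computes x_adv = x + eps * sign(grad_x J(theta, x, y)), where J is
    the cross-entropy loss of the classifier's prediction against label y.
    """
    p = softmax(W @ x + b)            # predicted class probabilities
    onehot = np.zeros_like(p)
    onehot[y] = 1.0
    grad_x = W.T @ (p - onehot)       # analytic gradient of CE loss w.r.t. x
    return x + eps * np.sign(grad_x)
```

The sign operation makes the perturbation an L∞-ball step: every input dimension moves by exactly ±ε, which keeps the change visually small while maximizing the first-order increase in loss.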

Applying FGSM to Policies
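For a policy, the attacker has no ground-truth label, so a common choice (used by Huang et al.) is to treat the action the clean policy would take as the label and perturb the observation to make that action less likely. A hedged numpy sketch of one attacked timestep, again using a linear softmax policy as a stand-in for a trained network:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def attack_policy_step(obs, W, b, eps):
    """One attacked timestep against a linear softmax policy (stand-in).

    Uses the policy's own preferred action as the FGSM label, so the
    perturbed observation pushes the policy away from that action.
    Returns the adversarial observation, the clean action, and the
    action taken under attack.
    """
    p = softmax(W @ obs + b)
    a_star = int(np.argmax(p))        # action the clean policy would take
    onehot = np.zeros_like(p)
    onehot[a_star] = 1.0
    grad_obs = W.T @ (p - onehot)     # gradient of CE loss w.r.t. observation
    adv_obs = obs + eps * np.sign(grad_obs)
    adv_action = int(np.argmax(W @ adv_obs + b))
    return adv_obs, a_star, adv_action
```

In an episode, this step would be applied to every observation before it reaches the policy; the perturbation stays within ε per dimension, yet repeated small nudges can steer the agent into low-reward behavior.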

Results

Vulnerability to white-box attacks