Intelligent and adaptive attackers may exploit specific vulnerabilities exposed by machine learning techniques to violate system security. Research in adversarial learning not only investigates the security properties of learning algorithms against well-crafted attacks, but also focuses on the development of more secure learning algorithms. We attempt to establish a baseline understanding of the vulnerabilities of machine learning classification models (Support Vector Machines and Decision Trees) by studying the impact that introducing perturbations to images has on their classification error. We believe that this project will form a basis for investigating the susceptibility of machine learning models to a range of adversarial attacks and encourage the development of more secure and robust learning algorithms.
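To make the experimental idea concrete, here is a minimal sketch of the kind of perturbation study described above: train both model types, then add uniform random noise of increasing magnitude to the test images and record the accuracy drop. It uses scikit-learn's digits dataset as a stand-in for the traffic sign images; the perturbation magnitudes and model hyperparameters are illustrative assumptions, not this project's actual settings.

```python
# Sketch: measure how additive noise perturbation degrades SVM and
# Decision Tree accuracy. The digits dataset stands in for the traffic
# sign images, which require access to the private dataset.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)
X = X / X.max()  # scale pixel values to [0, 1]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

models = {"SVM": SVC(kernel="rbf"),
          "Decision Tree": DecisionTreeClassifier(random_state=0)}

for name, model in models.items():
    model.fit(X_train, y_train)
    for eps in (0.0, 0.05, 0.1, 0.2):  # assumed perturbation magnitudes
        rng = np.random.default_rng(0)
        noise = rng.uniform(-eps, eps, size=X_test.shape)
        X_pert = np.clip(X_test + noise, 0.0, 1.0)  # keep valid pixel range
        print(f"{name}  eps={eps:.2f}  accuracy={model.score(X_pert, y_test):.3f}")
```

Larger values of `eps` should produce larger classification-error increases, which is exactly the relationship this project sets out to measure on the traffic sign data.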
- Dataset preparation: The dataset for this project consists of traffic sign images. We use four classes of traffic signs that a trained classification model should categorize correctly (a loading sketch follows the list below). The four traffic sign classes are:
- Crosswalk
- Speedlimit25
- Stop
- Yield
Please contact the owner of this repository to request access to the dataset.
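Once access is granted, the images can be loaded along the lines of the sketch below. The per-class folder layout, PNG file format, and 32x32 grayscale resize are assumptions for illustration, not the repository's confirmed structure.

```python
# Sketch of dataset preparation, assuming images are stored in per-class
# folders (data/crosswalk, data/speedlimit25, data/stop, data/yield).
# Folder names, file format, and image size are assumptions.
from pathlib import Path
import numpy as np
from PIL import Image

CLASSES = ["crosswalk", "speedlimit25", "stop", "yield"]
IMG_SIZE = (32, 32)  # assumed resize target

def load_dataset(root="data"):
    X, y = [], []
    for label, cls in enumerate(CLASSES):
        for path in sorted(Path(root, cls).glob("*.png")):
            # Grayscale + resize so every image flattens to the same length
            img = Image.open(path).convert("L").resize(IMG_SIZE)
            X.append(np.asarray(img, dtype=np.float32).ravel() / 255.0)
            y.append(label)
    return np.stack(X), np.array(y)

X, y = load_dataset()
print(X.shape, y.shape)
```

Flattening each image into a fixed-length vector is what lets the same feature matrix feed both the SVM and the Decision Tree classifiers.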