TY - JOUR
TI - Adversarial attack vulnerability of medical image analysis systems: Unexplored factors
AU -
AB - Adversarial attacks are considered a potentially serious security threat for machine learning systems. Medical image analysis (MedIA) systems have recently been argued to be vulnerable to adversarial attacks due to strong financial incentives and the associated technological infrastructure. In this paper, we study previously unexplored factors affecting adversarial attack vulnerability of deep learning MedIA systems in three medical domains: ophthalmology, radiology, and pathology. We focus on adversarial black-box settings, in which the attacker does not have full access to the target model and usually uses another model, commonly referred to as surrogate model, to craft adversarial examples that are then transferred to the target model. We consider this to be the most realistic scenario for MedIA systems. Firstly, we study the effect of weight initialization (pre-training on ImageNet or random initialization) on the transferability of adversarial attacks from the surrogate model to the target model, i.e., how effective attacks crafted using the surrogate model are on the target model. Secondly, we study the influence of differences in development (training and
KW - Adversarial attacks
KW - Medical imaging
KW - Deep learning
KW - Cybersecurity
N1 - Article history: Received 17 October 2020; Revised 10 June 2021; Accepted 17 June 2021; Available online 18 June 2021
JF - Medical Image Analysis
DO - 10.1016/j.media.2021.102141
DA - 2021-10-01
UR - https://www.deepdyve.com/lp/unpaywall/adversarial-attack-vulnerability-of-medical-image-analysis-systems-Hx8bfTElrU
DP - DeepDyve
ER -