Computer science
Robustness (evolution)
Adversarial system
Inference
Cloud computing
Artificial intelligence
Computation
Deep neural network
Deep learning
Distributed computing
Machine learning
Block (permutation group theory)
Algorithm
Biochemistry
Chemistry
Geometry
Mathematics
Gene
Operating system
Authors
Mina Amiri,Mohammad Hossein Rohban,Shaahin Hessabi
Identifiers
DOI:10.1109/tmc.2023.3346877
Abstract
Deep Neural Networks (DNNs) are highly resource-demanding at inference time; hence, offloading model execution to the cloud is a natural solution. The difficulty is that the same model must then serve both the resource-constrained device and the cloud. At the same time, adversarial robustness is a key requirement in many real-world applications, such as autonomous driving, where the model must remain stable under imperceptible but adversarial input perturbations. However, adversarial training (AT) requires access to the actual model architecture and weights during training. In our setup, two different deep models (one suited to each side) are broken into several blocks. At inference time, we select a combination of blocks that satisfies the current constraints, and each block is executed on its respective side. Moreover, we propose a novel modified AT method that can virtually train all the mentioned blocks collectively. Rigorous evaluations on CIFAR-10 and CIFAR-100 show that the proposed AT effectively makes the models robust under various offloading scenarios. Furthermore, we show that the more blocks of the large network are present in the selected model, the higher the final accuracy. To the best of our knowledge, our method is the first in which a heterogeneous offloading scheme under adversarial robustness is investigated.
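The block-wise offloading idea described in the abstract can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: it assumes two models split into the same number of stages, with a placement choice per stage deciding whether the cloud-side (large-model) or device-side (small-model) block runs; the function and variable names are hypothetical.

```python
import random

def make_block(scale):
    """Toy stand-in for a network block: multiplies its input by a constant."""
    return lambda x: x * scale

# Assumed setup: 3 stages; large-model blocks run on the cloud,
# small-model blocks run on the device.
cloud_blocks = [make_block(2.0) for _ in range(3)]
device_blocks = [make_block(1.0) for _ in range(3)]

def hybrid_forward(x, placement):
    """Run the hybrid model; placement[i] is 'cloud' or 'device' for stage i.
    In the real system each chosen block executes on its respective side."""
    for i, side in enumerate(placement):
        block = cloud_blocks[i] if side == "cloud" else device_blocks[i]
        x = block(x)
    return x

def random_placement(n_stages=3):
    """Sampling a block combination per training step is one plausible way
    all combinations could be trained collectively (an assumption here,
    not the paper's exact AT procedure)."""
    return [random.choice(["cloud", "device"]) for _ in range(n_stages)]

out = hybrid_forward(1.0, ["cloud", "device", "cloud"])  # 1.0 * 2 * 1 * 2 = 4.0
```

Each placement yields a different effective network, which matches the abstract's observation that using more large-network blocks tends to raise final accuracy.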