Authors

Adam Brzeski, Kamil Grinholc, Kamil Nowodworski, Adam Przybylek

Read online

https://link.springer.com/chapter/10.1007/978-3-030-30278-8_33

Abstract

As modern convolutional neural networks become increasingly deeper, they also become slower and require high computational resources beyond the capabilities of many mobile and embedded platforms. To address this challenge, much of the recent research has focused on reducing model size and computational complexity. In this paper, we propose a novel residual depth-separable convolution block, which is an improvement of the basic building block of MobileNets. We modified the original block by adding an identity shortcut connection (with zero-padding for increasing dimensions) from the input to the output. We demonstrated that the modified architecture with the width multiplier α set to 0.92 slightly outperforms the baseline MobileNet (α=1) in both accuracy and inference time on the challenging Places365 dataset, while reducing the number of parameters by 14%.
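
The modified block described in the abstract can be sketched as follows. This is a minimal, illustrative reconstruction assuming TensorFlow/Keras, not the authors' implementation: the exact layer ordering, the handling of strided blocks (average pooling on the shortcut is assumed here), and all hyperparameters are assumptions for illustration. The zero-padded identity shortcut and the width multiplier α follow the description above.

```python
import tensorflow as tf
from tensorflow.keras import layers


def residual_depthwise_separable_block(x, filters, stride=1, alpha=1.0):
    """MobileNet-style depthwise separable conv block with a zero-padded
    identity shortcut from the block input to its output (illustrative sketch)."""
    filters = int(filters * alpha)  # width multiplier scales the channel count
    shortcut = x

    # Original MobileNet building block: 3x3 depthwise conv + 1x1 pointwise conv,
    # each followed by batch normalization and ReLU.
    y = layers.DepthwiseConv2D(3, strides=stride, padding="same", use_bias=False)(x)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv2D(filters, 1, padding="same", use_bias=False)(y)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)

    # Identity shortcut. When the block downsamples spatially, the shortcut is
    # downsampled as well (average pooling is an assumption); when the block
    # increases the channel count, the shortcut is zero-padded along channels.
    if stride != 1:
        shortcut = layers.AveragePooling2D(stride, strides=stride,
                                           padding="same")(shortcut)
    in_channels = int(x.shape[-1])
    if in_channels < filters:
        ch_pad = filters - in_channels
        shortcut = layers.Lambda(
            lambda t, p=ch_pad: tf.pad(t, [[0, 0], [0, 0], [0, 0], [0, p]])
        )(shortcut)

    return layers.Add()([y, shortcut])


# Usage sketch with the width multiplier set to 0.92, as in the paper's experiments.
inputs = tf.keras.Input(shape=(224, 224, 3))
x = layers.Conv2D(int(32 * 0.92), 3, strides=2, padding="same", use_bias=False)(inputs)
x = residual_depthwise_separable_block(x, 64, stride=1, alpha=0.92)
x = residual_depthwise_separable_block(x, 128, stride=2, alpha=0.92)
model = tf.keras.Model(inputs, x)
```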