Global-connected network with generalized ReLU activation
Abstract
Recent progress has shown that exploiting hidden-layer neurons in convolutional neural networks (CNNs), combined with a carefully designed activation function, can yield better classification results in computer vision. This paper first introduces a novel deep learning (DL) architecture that mitigates the vanishing-gradient problem by connecting earlier hidden layers directly to the last hidden layer, whose output is fed into the softmax layer for classification. We then design a generalized rectified linear function as the activation function, which can approximate arbitrarily complex functions by training its parameters. We show that our design achieves comparable performance on a number of object recognition and video action benchmarks, including MNIST, CIFAR-10/100, SVHN, Fashion-MNIST, STL-10, and the UCF YouTube Action video dataset, with significantly fewer parameters and a shallower network architecture. This reduces the computational burden and memory usage of training, and also makes the model applicable to low-computation, low-memory mobile scenarios for inference.
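To make the two ideas in the abstract concrete, the sketch below gives a minimal PyTorch rendering of (i) a piecewise-linear rectifier with trainable slopes and (ii) a small network whose earlier hidden layers are concatenated directly into the representation fed to the softmax classifier. The names, layer sizes, and the exact form of the activation (fixed breakpoints, one trainable slope per piece) are illustrative assumptions, not the authors' published implementation.

# Hypothetical sketch of the abstract's two ideas; details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeneralizedReLU(nn.Module):
    """Piecewise-linear rectifier with trainable slopes, so training can
    shape the activation into more complex functions than a fixed ReLU."""
    def __init__(self, num_pieces=3):
        super().__init__()
        # One trainable slope per linear piece, initialized near standard ReLU.
        self.slopes = nn.Parameter(torch.ones(num_pieces))
        # Fixed breakpoints partitioning the input range (an assumption here;
        # they could also be made trainable).
        self.register_buffer("thresholds", torch.linspace(0.0, 2.0, num_pieces))

    def forward(self, x):
        # Sum of shifted ReLU ramps: each piece contributes slope_i * relu(x - t_i).
        out = torch.zeros_like(x)
        for slope, t in zip(self.slopes, self.thresholds):
            out = out + slope * F.relu(x - t)
        return out

class GlobalConnectedNet(nn.Module):
    """Toy fully-connected network in which every earlier hidden layer is
    also wired directly into the final representation before the softmax
    classifier, giving gradients a short path back to early layers."""
    def __init__(self, in_dim=784, hidden=64, num_classes=10, depth=3):
        super().__init__()
        dims = [in_dim] + [hidden] * depth
        self.layers = nn.ModuleList(
            nn.Linear(dims[i], dims[i + 1]) for i in range(depth)
        )
        self.act = GeneralizedReLU()
        # The classifier sees the concatenation of all hidden layers.
        self.classifier = nn.Linear(hidden * depth, num_classes)

    def forward(self, x):
        hiddens = []
        for layer in self.layers:
            x = self.act(layer(x))
            hiddens.append(x)
        return self.classifier(torch.cat(hiddens, dim=1))

if __name__ == "__main__":
    net = GlobalConnectedNet()
    logits = net(torch.randn(8, 784))   # batch of 8 flattened 28x28 images
    probs = F.softmax(logits, dim=1)    # class probabilities
    print(probs.shape)                  # torch.Size([8, 10])

Because the classifier reads every hidden layer directly, the loss gradient reaches the earliest layers without passing through the full stack, which is one common way to counter vanishing gradients in deeper models.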
Cite this version of the work
Zhi Chen, Pin-Han Ho (2019). Global-connected network with generalized ReLU activation. UWSpace. http://hdl.handle.net/10012/15277