Show simple item record

dc.contributor.author: Chen, Zhi
dc.contributor.author: Ho, Pin-Han
dc.date.accessioned: 2019-12-04 17:35:09 (GMT)
dc.date.available: 2019-12-04 17:35:09 (GMT)
dc.date.issued: 2019-12
dc.identifier.uri: https://doi.org/10.1016/j.patcog.2019.07.006
dc.identifier.uri: http://hdl.handle.net/10012/15277
dc.description: The final publication is available at Elsevier via https://doi.org/10.1016/j.patcog.2019.07.006. © 2019. This manuscript version is made available under the CC BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.description.abstract: Recent progress has shown that exploiting hidden-layer neurons in convolutional neural networks (CNNs), combined with a carefully designed activation function, can yield better classification results in computer vision. This paper first introduces a novel deep learning (DL) architecture aimed at mitigating the gradient-vanishing problem, in which earlier hidden-layer neurons can be directly connected to the last hidden layer and fed into the softmax layer for classification. We then design a generalized linear rectifier function as the activation function, which can approximate arbitrarily complex functions through training of its parameters. We show that our design achieves similar performance on a number of object recognition and video action benchmarks, such as MNIST, CIFAR-10/100, SVHN, Fashion-MNIST, STL-10, and the UCF YouTube Action Video dataset, with significantly fewer parameters and a shallower network architecture, which is not only promising for training in terms of computation burden and memory usage, but is also applicable to low-computation, low-memory mobile scenarios for inference.
dc.language.iso: en
dc.publisher: Elsevier
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: computer vision
dc.subject: deep learning
dc.subject: activation
dc.subject: convolutional neural network
dc.title: Global-connected network with generalized ReLU activation
dc.type: Article
dcterms.bibliographicCitation: Zhi Chen, Pin-Han Ho, Global-Connected Network With Generalized ReLU Activation, Pattern Recognition (2019), doi: https://doi.org/10.1016/j.patcog.2019.07.006
uws.contributor.affiliation1: Faculty of Engineering
uws.contributor.affiliation2: Electrical and Computer Engineering
uws.typeOfResource: Text
uws.peerReviewStatus: Reviewed
uws.scholarLevel: Faculty
uws.scholarLevel: Post-Doctorate
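The abstract describes two ingredients: a "global connection" that feeds earlier hidden layers directly into the last hidden layer before the softmax, and a generalized (trainable piecewise-linear) rectifier. The NumPy sketch below illustrates one plausible reading of these ideas; the function names, the hinge-sum parameterization, and the default parameter values are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def apl_activation(x, a=(-0.2,), b=(0.0,)):
    """Sketch of a generalized ReLU as a piecewise-linear unit:
        f(x) = max(0, x) + sum_s a_s * max(0, b_s - x)
    where the slopes a_s and breakpoints b_s would be learned during
    training. With a=(-0.2,), b=(0.0,) it gives a Leaky-ReLU-like shape
    (slope 0.2 on the negative side)."""
    x = np.asarray(x, dtype=float)
    out = np.maximum(0.0, x)
    for a_s, b_s in zip(a, b):
        out = out + a_s * np.maximum(0.0, b_s - x)
    return out

def global_connect_logits(early_feats, last_feats, W, bias):
    """Sketch of the 'global connection': concatenate earlier-layer
    features with the last hidden layer, then apply the final linear
    projection whose output would be fed to a softmax."""
    z = np.concatenate([early_feats, last_feats], axis=-1)
    return z @ W + bias
```

Because the earlier layers reach the classifier through a short path, their gradients bypass the deep stack, which is the mechanism the abstract credits for mitigating gradient vanishing.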


Files in this item


This item appears in the following Collection(s)


Except where otherwise noted, this item's license is described as Attribution-NonCommercial-NoDerivatives 4.0 International

UWSpace

University of Waterloo Library
200 University Avenue West
Waterloo, Ontario, Canada N2L 3G1
519 888 4883

All items in UWSpace are protected by copyright, with all rights reserved.
