| Big Transfer (BiT): General Visual Representation Learning (Dec 2019) |
99.37% |
| Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, Neil Houlsby Transfer of pre-trained representations improves sample efficiency and simplifies hyperparameter tuning when training deep neural networks for vision. We revisit the paradigm of pre-training on large supervised datasets and fine-tuning the model on a target task. We scale up pre-training, and propose a simple recipe that we call Big Transfer (BiT). By combining a few carefully selected components, and transferring using a simple heuristic, we achieve strong performance on over 20 datasets. BiT performs well across a surprisingly wide range of data regimes -- from 1 example per class to 1M total examples. BiT achieves 87.5% top-1 accuracy on ILSVRC-2012, 99.4% on CIFAR-10, and 76.3% on the 19 task Visual Task Adaptation Benchmark (VTAB). On small datasets, BiT attains 76.8% on ILSVRC-2012 with 10 examples per class, and 97.0% on CIFAR-10 with 10 examples per class. We conduct detailed analysis of the main components that lead to high transfer performance. |
|
| GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism (Nov 2018, arXiv 2018) |
99.00% |
| Yanping Huang, Youlong Cheng, Ankur Bapna, Orhan Firat, Mia Xu Chen, Dehao Chen, HyoukJoong Lee, Jiquan Ngiam, Quoc V. Le, Yonghui Wu, Zhifeng Chen Scaling up deep neural network capacity has been known as an effective approach to improving model quality for several different machine learning tasks. In many cases, increasing model capacity beyond the memory limit of a single accelerator has required developing special algorithms or infrastructure. These solutions are often architecture-specific and do not transfer to other tasks. To address the need for efficient and task-independent model parallelism, we introduce GPipe, a pipeline parallelism library that allows scaling any network that can be expressed as a sequence of layers. By pipelining different sub-sequences of layers on separate accelerators, GPipe provides the flexibility of scaling a variety of different networks to gigantic sizes efficiently. Moreover, GPipe utilizes a novel batch-splitting pipelining algorithm, resulting in almost linear speedup when a model is partitioned across multiple accelerators. We demonstrate the advantages of GPipe by training large-scale neural networks on two different tasks with distinct network architectures: (i) Image Classification: We train a 557-million-parameter AmoebaNet model and attain a top-1 accuracy of 84.4% on ImageNet-2012, (ii) Multilingual Neural Machine Translation: We train a single 6-billion-parameter, 128-layer Transformer model on a corpus spanning over 100 languages and achieve better quality than all bilingual models. |
|
| EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks (May 2019, arXiv 2019) |
98.90% |
| Mingxing Tan, Quoc V. Le Convolutional Neural Networks (ConvNets) are commonly developed at a fixed resource budget, and then scaled up for better accuracy if more resources are available. In this paper, we systematically study model scaling and identify that carefully balancing network depth, width, and resolution can lead to better performance. Based on this observation, we propose a new scaling method that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient. We demonstrate the effectiveness of this method on scaling up MobileNets and ResNet. To go even further, we use neural architecture search to design a new baseline network and scale it up to obtain a family of models, called EfficientNets, which achieve much better accuracy and efficiency than previous ConvNets. In particular, our EfficientNet-B7 achieves state-of-the-art 84.3% top-1 accuracy on ImageNet, while being 8.4x smaller and 6.1x faster on inference than the best existing ConvNet. Our EfficientNets also transfer well and achieve state-of-the-art accuracy on CIFAR-100 (91.7%), Flowers (98.8%), and 3 other transfer learning datasets, with an order of magnitude fewer parameters. Source code is at https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet. |
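
The compound-scaling rule the abstract describes can be captured in a few lines. The sketch below uses the α=1.2, β=1.1, γ=1.15 base coefficients reported in the paper; the mapping from a given φ to a specific B-variant is only approximate here.

```python
def compound_scaling(phi: float, alpha: float = 1.2, beta: float = 1.1, gamma: float = 1.15):
    """Scale depth, width and resolution together with one compound coefficient phi.

    The paper constrains alpha * beta**2 * gamma**2 ~= 2 so that total FLOPs grow
    roughly by 2**phi.
    """
    depth_multiplier = alpha ** phi        # number of layers
    width_multiplier = beta ** phi         # number of channels
    resolution_multiplier = gamma ** phi   # input image resolution
    return depth_multiplier, width_multiplier, resolution_multiplier

# Example: phi = 3 gives the approximate multipliers in the EfficientNet-B3 regime.
print(compound_scaling(3))
```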
|
| A Survey on Neural Architecture Search (May 2019, arXiv 2019) |
98.67% |
| Martin Wistuba, Ambrish Rawat, Tejaswini Pedapati The growing interest in both the automation of machine learning and deep learning has inevitably led to the development of a wide variety of automated methods for neural architecture search. The choice of the network architecture has proven to be critical, and many advances in deep learning spring from its immediate improvements. However, deep learning techniques are computationally intensive and their application requires a high level of domain knowledge. Therefore, even partial automation of this process helps to make deep learning more accessible to both researchers and practitioners. With this survey, we provide a formalism which unifies and categorizes the landscape of existing methods along with a detailed analysis that compares and contrasts the different approaches. We achieve this via a comprehensive discussion of the commonly adopted architecture search spaces and architecture optimization algorithms based on principles of reinforcement learning and evolutionary algorithms along with approaches that incorporate surrogate and one-shot models. Additionally, we address the new research directions which include constrained and multi-objective architecture search as well as automated data augmentation, optimizer and activation function search. |
|
| AutoAugment: Learning Augmentation Policies from Data (May 2018, arXiv 2018) |
98.52% |
| Ekin D. Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, Quoc V. Le Data augmentation is an effective technique for improving the accuracy of modern image classifiers. However, current data augmentation implementations are manually designed. In this paper, we describe a simple procedure called AutoAugment to automatically search for improved data augmentation policies. In our implementation, we have designed a search space where a policy consists of many sub-policies, one of which is randomly chosen for each image in each mini-batch. A sub-policy consists of two operations, each operation being an image processing function such as translation, rotation, or shearing, and the probabilities and magnitudes with which the functions are applied. We use a search algorithm to find the best policy such that the neural network yields the highest validation accuracy on a target dataset. Our method achieves state-of-the-art accuracy on CIFAR-10, CIFAR-100, SVHN, and ImageNet (without additional data). On ImageNet, we attain a Top-1 accuracy of 83.5% which is 0.4% better than the previous record of 83.1%. On CIFAR-10, we achieve an error rate of 1.5%, which is 0.6% better than the previous state-of-the-art. Augmentation policies we find are transferable between datasets. The policy learned on ImageNet transfers well to achieve significant improvements on other datasets, such as Oxford Flowers, Caltech-101, Oxford-IIT Pets, FGVC Aircraft, and Stanford Cars. |
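
A sub-policy in the sense described above is just a short list of (operation, probability, magnitude) triples. The sketch below is a hypothetical example using Pillow; the operations and values are illustrative, not a policy found by the paper's search.

```python
import random
from PIL import Image, ImageOps, ImageEnhance

# Hypothetical two-operation sub-policy: (operation, probability, magnitude).
SUB_POLICY = [("rotate", 0.7, 15), ("solarize", 0.4, 128)]

def apply_sub_policy(img: Image.Image, sub_policy=SUB_POLICY) -> Image.Image:
    """Apply each operation of one sub-policy with its own probability and magnitude."""
    for op, prob, magnitude in sub_policy:
        if random.random() > prob:
            continue
        if op == "rotate":
            img = img.rotate(magnitude)                        # magnitude = degrees
        elif op == "solarize":
            img = ImageOps.solarize(img, threshold=magnitude)  # magnitude = pixel threshold
        elif op == "contrast":
            img = ImageEnhance.Contrast(img).enhance(magnitude)
    return img
```

During training, one sub-policy would be drawn at random from the learned policy for each image in a mini-batch.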
|
| XNAS: Neural Architecture Search with Expert Advice (Jun 2019, arXiv 2019) |
98.40% |
| Niv Nayman, Asaf Noy, Tal Ridnik, Itamar Friedman, Rong Jin, Lihi Zelnik-Manor This paper introduces a novel optimization method for differentiable neural architecture search, based on the theory of prediction with expert advice. Its optimization criterion is well suited to architecture selection, i.e., it minimizes the regret incurred by a sub-optimal selection of operations. Unlike previous search relaxations that require hard pruning of architectures, our method is designed to dynamically wipe out inferior architectures and enhance superior ones. It achieves an optimal worst-case regret bound and suggests the use of multiple learning rates, based on the amount of information carried by the backward gradients. Experiments show that our algorithm achieves strong performance on several image classification datasets. Specifically, it obtains an error rate of 1.6% on CIFAR-10 and 24% on ImageNet under mobile settings, and achieves state-of-the-art results on three additional datasets. |
|
| Rethinking Recurrent Neural Networks and other Improvements for Image Classification (Jul 2020, arXiv 2020) |
98.36% |
| Nguyen Huu Phong, Bernardete Ribeiro Over the several-decade history of machine learning, Recurrent Neural Networks (RNNs) have mainly been used for sequential data, time series, and other 1D information. Even in the rare studies that apply them to 2D images, the networks merely learn and generate data sequentially rather than recognize images. In this research, we propose integrating an RNN as an additional layer when designing image recognition models. Moreover, we develop End-to-End Ensemble Multi-models that are able to learn experts' predictions from several models. In addition, we extend the training strategy and introduce softmax pruning, which overall leads our designs to perform comparably to top models on several datasets. The source code of the methods provided in this article is available at https://github.com/leonlha/e2e-3m and http://nguyenhuuphong.me. |
|
| ShakeDrop Regularization (Feb 2018, ICLR 2018) |
97.69% |
| Yoshihiro Yamada, Masakazu Iwamura, Takuya Akiba, Koichi Kise Overfitting is a crucial problem in deep neural networks, even in the latest network architectures. In this paper, to relieve the overfitting effect of ResNet and its improvements (i.e., Wide ResNet, PyramidNet, and ResNeXt), we propose a new regularization method called ShakeDrop regularization. ShakeDrop is inspired by Shake-Shake, which is an effective regularization method, but can be applied to ResNeXt only. ShakeDrop is more effective than Shake-Shake and can be applied not only to ResNeXt but also ResNet, Wide ResNet, and PyramidNet. An important key is to achieve stability of training. Because effective regularization often causes unstable training, we introduce a training stabilizer, which is an unusual use of an existing regularizer. Through experiments under various conditions, we demonstrate the conditions under which ShakeDrop works well. |
|
| Improved Regularization of Convolutional Neural Networks with Cutout (Aug 2017, arXiv 2017) |
97.44% |
| Terrance DeVries, Graham W. Taylor Convolutional neural networks are capable of learning powerful representational spaces, which are necessary for tackling complex learning tasks. However, due to the model capacity required to capture such representations, they are often susceptible to overfitting and therefore require proper regularization in order to generalize well. In this paper, we show that the simple regularization technique of randomly masking out square regions of input during training, which we call cutout, can be used to improve the robustness and overall performance of convolutional neural networks. Not only is this method extremely easy to implement, but we also demonstrate that it can be used in conjunction with existing forms of data augmentation and other regularizers to further improve model performance. We evaluate this method by applying it to current state-of-the-art architectures on the CIFAR-10, CIFAR-100, and SVHN datasets, yielding new state-of-the-art results of 2.56%, 15.20%, and 1.30% test error respectively. Code is available at https://github.com/uoguelph-mlrg/Cutout |
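
Cutout as described is a few lines of array manipulation. A minimal NumPy sketch, assuming zero fill and a fixed patch size (the paper also lets the patch extend past the image border, which the clipping below allows):

```python
import numpy as np

def cutout(image: np.ndarray, size: int = 16) -> np.ndarray:
    """Mask out one square region of an HxWxC image with zeros."""
    h, w = image.shape[:2]
    cy, cx = np.random.randint(h), np.random.randint(w)      # patch centre
    y1, y2 = max(0, cy - size // 2), min(h, cy + size // 2)  # clip to the image
    x1, x2 = max(0, cx - size // 2), min(w, cx + size // 2)
    out = image.copy()
    out[y1:y2, x1:x2] = 0
    return out
```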
|
| Random Erasing Data Augmentation (Aug 2017, arXiv 2017) |
96.92% |
| Zhun Zhong, Liang Zheng, Guoliang Kang, Shaozi Li, Yi Yang In this paper, we introduce Random Erasing, a new data augmentation method for training the convolutional neural network (CNN). In training, Random Erasing randomly selects a rectangle region in an image and erases its pixels with random values. In this process, training images with various levels of occlusion are generated, which reduces the risk of over-fitting and makes the model robust to occlusion. Random Erasing is parameter learning free, easy to implement, and can be integrated with most of the CNN-based recognition models. Albeit simple, Random Erasing is complementary to commonly used data augmentation techniques such as random cropping and flipping, and yields consistent improvement over strong baselines in image classification, object detection and person re-identification. Code is available at: https://github.com/zhunzhong07/Random-Erasing. |
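
The procedure differs from Cutout mainly in using a rectangle of random size and aspect ratio and filling it with random values. A NumPy sketch with illustrative hyperparameter ranges:

```python
import numpy as np

def random_erasing(image, p=0.5, area_range=(0.02, 0.4), aspect_range=(0.3, 3.3)):
    """With probability p, erase one random rectangle and fill it with random values."""
    if np.random.rand() > p:
        return image
    h, w, c = image.shape
    for _ in range(10):                                    # retry until the rectangle fits
        target_area = np.random.uniform(*area_range) * h * w
        aspect = np.random.uniform(*aspect_range)
        eh = int(round(np.sqrt(target_area * aspect)))
        ew = int(round(np.sqrt(target_area / aspect)))
        if 0 < eh < h and 0 < ew < w:
            y, x = np.random.randint(h - eh), np.random.randint(w - ew)
            out = image.copy()
            out[y:y + eh, x:x + ew] = np.random.uniform(0, 255, (eh, ew, c))
            return out
    return image
```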
|
| Drop-Activation: Implicit Parameter Reduction and Harmonic Regularization (Nov 2018, arXiv 2018) |
96.55% |
| Senwei Liang, Yuehaw Khoo, Haizhao Yang Overfitting frequently occurs in deep learning. In this paper, we propose a novel regularization method called Drop-Activation to reduce overfitting and improve generalization. The key idea is to drop nonlinear activation functions by setting them to be identity functions randomly during training time. During testing, we use a deterministic network with a new activation function to encode the average effect of dropping activations randomly. Our theoretical analyses support the regularization effect of Drop-Activation as implicit parameter reduction and verify its capability to be used together with Batch Normalization (Ioffe and Szegedy 2015). The experimental results on CIFAR-10, CIFAR-100, SVHN, EMNIST, and ImageNet show that Drop-Activation generally improves the performance of popular neural network architectures for the image classification task. Furthermore, as a regularizer Drop-Activation can be used in harmony with standard training and regularization techniques such as Batch Normalization and Auto Augment (Cubuk et al. 2019). The code is available at \url{ https://github.com/LeungSamWai/Drop-Activation}. |
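
For a ReLU network, the idea reduces to the small module below: during training the nonlinearity is kept with some probability and replaced by the identity otherwise, and at test time the deterministic average of the two is used. This PyTorch sketch follows our reading of the abstract; the keep probability and the exact test-time form are assumptions.

```python
import torch
import torch.nn as nn

class DropActivation(nn.Module):
    def __init__(self, keep_prob: float = 0.95):
        super().__init__()
        self.keep_prob = keep_prob  # probability of keeping the ReLU nonlinearity

    def forward(self, x):
        if self.training:
            keep = (torch.rand_like(x) < self.keep_prob).float()
            return keep * torch.relu(x) + (1.0 - keep) * x   # identity where dropped
        # Deterministic test-time activation: average effect of random dropping.
        return self.keep_prob * torch.relu(x) + (1.0 - self.keep_prob) * x
```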
|
| Densely Connected Convolutional Networks (Aug 2016, arXiv 2016) |
96.54% |
| Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output. In this paper, we embrace this observation and introduce the Dense Convolutional Network (DenseNet), which connects each layer to every other layer in a feed-forward fashion. Whereas traditional convolutional networks with L layers have L connections - one between each layer and its subsequent layer - our network has L(L+1)/2 direct connections. For each layer, the feature-maps of all preceding layers are used as inputs, and its own feature-maps are used as inputs into all subsequent layers. DenseNets have several compelling advantages: they alleviate the vanishing-gradient problem, strengthen feature propagation, encourage feature reuse, and substantially reduce the number of parameters. We evaluate our proposed architecture on four highly competitive object recognition benchmark tasks (CIFAR-10, CIFAR-100, SVHN, and ImageNet). DenseNets obtain significant improvements over the state-of-the-art on most of them, whilst requiring less computation to achieve high performance. Code and pre-trained models are available at https://github.com/liuzhuang13/DenseNet . |
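
The dense connectivity pattern is easiest to see in code: every layer receives the concatenation of all earlier feature maps. A minimal PyTorch dense block (BN-ReLU-Conv composite; the growth rate and depth here are arbitrary choices, not the paper's configurations):

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_channels: int, growth_rate: int = 12, num_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1, bias=False),
            ))
            channels += growth_rate          # each layer adds growth_rate new feature maps

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))  # input = all previous maps
        return torch.cat(features, dim=1)
```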
|
| Fractional Max-Pooling (Dec 2014) |
96.53% |
| Benjamin Graham Convolutional networks almost always incorporate some form of spatial pooling, and very often it is α×α max-pooling with α=2. Max-pooling acts on the hidden layers of the network, reducing their size by an integer multiplicative factor α. The amazing by-product of discarding 75% of your data is that you build into the network a degree of invariance with respect to translations and elastic distortions. However, if you simply alternate convolutional layers with max-pooling layers, performance is limited due to the rapid reduction in spatial size and the disjoint nature of the pooling regions. We have formulated a fractional version of max-pooling where α is allowed to take non-integer values. Our version of max-pooling is stochastic, as there are many different ways of constructing suitable pooling regions. We find that our form of fractional max-pooling reduces overfitting on a variety of datasets: for instance, we improve on the state of the art for CIFAR-100 without even using dropout. |
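
PyTorch ships a fractional max-pooling layer, so the effect of a non-integer reduction factor can be seen directly; the output ratio below is an arbitrary example, not a setting from the paper.

```python
import torch
import torch.nn as nn

# Reduce spatial size by a factor of roughly 1/0.7 (about 1.43) instead of exactly 2.
pool = nn.FractionalMaxPool2d(kernel_size=2, output_ratio=(0.7, 0.7))
x = torch.randn(1, 3, 32, 32)
print(pool(x).shape)   # torch.Size([1, 3, 22, 22]); pooling regions are chosen stochastically
```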
|
| Residual Networks of Residual Networks: Multilevel Residual Networks (Aug 2016, arXiv 2017) |
96.23% |
| Ke Zhang, Miao Sun, Tony X. Han, Xingfang Yuan, Liru Guo, Tao Liu The residual-network family, with hundreds or even thousands of layers, dominates major image recognition tasks, but building a network by simply stacking residual blocks inevitably limits its optimization ability. This paper proposes a novel residual-network architecture, Residual networks of Residual networks (RoR), to better exploit the optimization ability of residual networks. Instead of optimizing the original residual mapping, RoR optimizes a residual mapping of residual mappings. In particular, RoR adds level-wise shortcut connections on top of the original residual network to promote its learning capability. More importantly, RoR can be applied to various kinds of residual networks (ResNets, Pre-ResNets and WRN) and significantly boosts their performance. Our experiments demonstrate the effectiveness and versatility of RoR, which achieves the best performance among all residual-network-like structures. Our RoR-3-WRN58-4+SD models achieve new state-of-the-art results on CIFAR-10, CIFAR-100 and SVHN, with test errors of 3.77%, 19.73% and 1.59%, respectively. RoR-3 models also achieve state-of-the-art results compared to ResNets on the ImageNet dataset. |
|
| Wide Residual Networks (May 2016, arXiv 2017) |
96.20% |
| Sergey Zagoruyko, Nikos Komodakis Deep residual networks were shown to be able to scale up to thousands of layers and still have improving performance. However, each fraction of a percent of improved accuracy costs nearly doubling the number of layers, and so training very deep residual networks has a problem of diminishing feature reuse, which makes these networks very slow to train. To tackle these problems, in this paper we conduct a detailed experimental study on the architecture of ResNet blocks, based on which we propose a novel architecture where we decrease depth and increase width of residual networks. We call the resulting network structures wide residual networks (WRNs) and show that these are far superior over their commonly used thin and very deep counterparts. For example, we demonstrate that even a simple 16-layer-deep wide residual network outperforms in accuracy and efficiency all previous deep residual networks, including thousand-layer-deep networks, achieving new state-of-the-art results on CIFAR, SVHN, COCO, and significant improvements on ImageNet. Our code and models are available at https://github.com/szagoruyko/wide-residual-networks |
|
| Residual Attention Network for Image Classification (Apr 2017, arXiv 2017) |
96.10% |
| Fei Wang, Mengqing Jiang, Chen Qian, Shuo Yang, Cheng Li, Honggang Zhang, Xiaogang Wang, Xiaoou Tang In this work, we propose the "Residual Attention Network", a convolutional neural network using an attention mechanism that can be incorporated into state-of-the-art feed-forward network architectures in an end-to-end training fashion. Our Residual Attention Network is built by stacking Attention Modules that generate attention-aware features. The attention-aware features from different modules change adaptively as the layers go deeper. Inside each Attention Module, a bottom-up top-down feedforward structure is used to unfold the feedforward and feedback attention process into a single feedforward process. Importantly, we propose attention residual learning to train very deep Residual Attention Networks that can easily be scaled up to hundreds of layers. Extensive analyses are conducted on the CIFAR-10 and CIFAR-100 datasets to verify the effectiveness of every module mentioned above. Our Residual Attention Network achieves state-of-the-art object recognition performance on three benchmark datasets: CIFAR-10 (3.90% error), CIFAR-100 (20.45% error) and ImageNet (4.8% single-model, single-crop top-5 error). Notably, our method achieves a 0.6% top-1 accuracy improvement with 46% of the trunk depth and 69% of the forward FLOPs compared to ResNet-200. The experiments also demonstrate that our network is robust against noisy labels. |
|
| Striving for Simplicity: The All Convolutional Net (Dec 2014, ICLR 2015) |
95.59% |
| Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, Martin Riedmiller Most modern convolutional neural networks (CNNs) used for object recognition are built using the same principles: Alternating convolution and max-pooling layers followed by a small number of fully connected layers. We re-evaluate the state of the art for object recognition from small images with convolutional networks, questioning the necessity of different components in the pipeline. We find that max-pooling can simply be replaced by a convolutional layer with increased stride without loss in accuracy on several image recognition benchmarks. Following this finding -- and building on other recent work for finding simple network structures -- we propose a new architecture that consists solely of convolutional layers and yields competitive or state of the art performance on several object recognition datasets (CIFAR-10, CIFAR-100, ImageNet). To analyze the network we introduce a new variant of the "deconvolution approach" for visualizing features learned by CNNs, which can be applied to a broader range of network structures than existing approaches. |
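
The core substitution the abstract describes, max-pooling replaced by a convolution with increased stride, looks like this in PyTorch (channel counts are arbitrary, not the paper's exact configuration):

```python
import torch.nn as nn

# Conventional block: convolution followed by 2x2 max-pooling.
conv_plus_pool = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=2, stride=2),
)

# All-convolutional replacement: the pooling layer becomes a stride-2 convolution,
# so the downsampling itself is learned.
all_conv = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(96, 96, kernel_size=3, stride=2, padding=1), nn.ReLU(),
)
```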
|
| All you need is a good init (Nov 2015, ICLR 2016) |
94.16% |
| Dmytro Mishkin, Jiri Matas Layer-sequential unit-variance (LSUV) initialization - a simple method for weight initialization in deep net learning - is proposed. The method consists of two steps. First, pre-initialize the weights of each convolution or inner-product layer with orthonormal matrices. Second, proceed from the first to the final layer, normalizing the variance of each layer's output to one. Experiments with different activation functions (maxout, ReLU-family, tanh) show that the proposed initialization leads to learning of very deep nets that (i) produces networks with test accuracy better than or equal to standard methods and (ii) is at least as fast as the complex schemes proposed specifically for very deep nets, such as FitNets (Romero et al. (2015)) and Highway (Srivastava et al. (2015)). Performance is evaluated on GoogLeNet, CaffeNet, FitNets and Residual nets, and the state of the art, or very close to it, is achieved on the MNIST, CIFAR-10/100 and ImageNet datasets. |
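
The two steps in the abstract translate almost directly into code. A PyTorch sketch for a plain nn.Sequential model; the tolerance and iteration cap are our own choices, not values from the paper.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def lsuv_init(model: nn.Sequential, batch: torch.Tensor, tol: float = 0.02, max_iter: int = 10):
    """Layer-sequential unit-variance init: orthonormal weights, then unit output variance."""
    for i, layer in enumerate(model):
        if not isinstance(layer, (nn.Conv2d, nn.Linear)):
            continue
        nn.init.orthogonal_(layer.weight)            # step 1: orthonormal pre-initialization
        for _ in range(max_iter):                    # step 2: rescale until output std ~= 1
            std = model[: i + 1](batch).std().item()
            if abs(std - 1.0) < tol or std == 0.0:
                break
            layer.weight /= std
    return model
```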
|
| Generalizing Pooling Functions in Convolutional Neural Networks: Mixed, Gated, and Tree (Sep 2015, AISTATS 2016) |
93.95% |
| Chen-Yu Lee, Patrick W. Gallagher, Zhuowen Tu We seek to improve deep neural networks by generalizing the pooling operations that play a central role in current architectures. We pursue a careful exploration of approaches to allow pooling to learn and to adapt to complex and variable patterns. The two primary directions lie in (1) learning a pooling function via (two strategies of) combining of max and average pooling, and (2) learning a pooling function in the form of a tree-structured fusion of pooling filters that are themselves learned. In our experiments every generalized pooling operation we explore improves performance when used in place of average or max pooling. We experimentally demonstrate that the proposed pooling operations provide a boost in invariance properties relative to conventional pooling and set the state of the art on several widely adopted benchmark datasets; they are also easy to implement, and can be applied within various deep neural network architectures. These benefits come with only a light increase in computational overhead during training and a very modest increase in the number of model parameters. |
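
The first of the two directions, learning a combination of max and average pooling, has a particularly compact form. A PyTorch sketch of a "mixed" pooling layer with a single learned mixing weight; the gated and tree-structured variants in the paper are more elaborate.

```python
import torch
import torch.nn as nn

class MixedPool2d(nn.Module):
    """out = a * max_pool(x) + (1 - a) * avg_pool(x), with the mixing weight a learned."""
    def __init__(self, kernel_size: int = 2, stride: int = 2):
        super().__init__()
        self.max_pool = nn.MaxPool2d(kernel_size, stride)
        self.avg_pool = nn.AvgPool2d(kernel_size, stride)
        self.mix_logit = nn.Parameter(torch.zeros(1))   # sigmoid keeps the mix in [0, 1]

    def forward(self, x):
        a = torch.sigmoid(self.mix_logit)
        return a * self.max_pool(x) + (1.0 - a) * self.avg_pool(x)
```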
|
| Spatially-sparse convolutional neural networks (Sep 2014) |
93.72% |
| Benjamin Graham Convolutional neural networks (CNNs) perform well on problems such as handwriting recognition and image classification. However, the performance of the networks is often limited by budget and time constraints, particularly when trying to train deep networks. Motivated by the problem of online handwriting recognition, we developed a CNN for processing spatially-sparse inputs; a character drawn with a one-pixel-wide pen on a high-resolution grid looks like a sparse matrix. Taking advantage of the sparsity allowed us to train and test large, deep CNNs more efficiently. On the CASIA-OLHWDB1.1 dataset containing 3755 character classes we get a test error of 3.82%. Although pictures are not sparse, they can be made sparse by adding padding. Applying a deep convolutional network using sparsity has resulted in a substantial reduction in test error on the CIFAR small-picture datasets: 6.28% on CIFAR-10 and 24.30% on CIFAR-100. |
|
| Scalable Bayesian Optimization Using Deep Neural Networks (Feb 2015, ICML 2015) |
93.63% |
| Jasper Snoek, Oren Rippel, Kevin Swersky, Ryan Kiros, Nadathur Satish, Narayanan Sundaram, Md. Mostofa Ali Patwary, Prabhat, Ryan P. Adams Bayesian optimization is an effective methodology for the global optimization of functions with expensive evaluations. It relies on querying a distribution over functions defined by a relatively cheap surrogate model. An accurate model for this distribution over functions is critical to the effectiveness of the approach, and is typically fit using Gaussian processes (GPs). However, since GPs scale cubically with the number of observations, it has been challenging to handle objectives whose optimization requires many evaluations, and as such, massively parallelizing the optimization. In this work, we explore the use of neural networks as an alternative to GPs to model distributions over functions. We show that performing adaptive basis function regression with a neural network as the parametric form performs competitively with state-of-the-art GP-based approaches, but scales linearly with the number of data rather than cubically. This allows us to achieve a previously intractable degree of parallelism, which we apply to large scale hyperparameter optimization, rapidly finding competitive models on benchmark object recognition tasks using convolutional networks, and image caption generation using neural language models. |
|
| Deep Residual Learning for Image Recognition (Dec 2015) |
93.57% |
| Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. |
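
The reformulation, learn a residual F(x) and add the input back through an identity shortcut, fits in one short module. A minimal PyTorch basic block for the equal-dimension case; projection shortcuts for changing dimensions are omitted.

```python
import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """y = relu(F(x) + x), where F is two 3x3 conv layers with batch normalization."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = self.bn2(self.conv2(torch.relu(self.bn1(self.conv1(x)))))
        return torch.relu(out + x)          # identity shortcut adds the input back
```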
|
| Fast and Accurate Deep Network Learning by Exponential Linear Units (Nov 2015) |
93.45% |
| Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter We introduce the "exponential linear unit" (ELU) which speeds up learning in deep neural networks and leads to higher classification accuracies. Like rectified linear units (ReLUs), leaky ReLUs (LReLUs) and parametrized ReLUs (PReLUs), ELUs alleviate the vanishing gradient problem via the identity for positive values. However, ELUs have improved learning characteristics compared to the units with other activation functions. In contrast to ReLUs, ELUs have negative values which allows them to push mean unit activations closer to zero like batch normalization but with lower computational complexity. Mean shifts toward zero speed up learning by bringing the normal gradient closer to the unit natural gradient because of a reduced bias shift effect. While LReLUs and PReLUs have negative values, too, they do not ensure a noise-robust deactivation state. ELUs saturate to a negative value with smaller inputs and thereby decrease the forward propagated variation and information. Therefore, ELUs code the degree of presence of particular phenomena in the input, while they do not quantitatively model the degree of their absence. In experiments, ELUs lead not only to faster learning, but also to significantly better generalization performance than ReLUs and LReLUs on networks with more than 5 layers. On CIFAR-100 ELUs networks significantly outperform ReLU networks with batch normalization while batch normalization does not improve ELU networks. ELU networks are among the top 10 reported CIFAR-10 results and yield the best published result on CIFAR-100, without resorting to multi-view evaluation or model averaging. On ImageNet, ELU networks considerably speed up learning compared to a ReLU network with the same architecture, obtaining less than 10% classification error for a single crop, single model network. |
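
The activation itself is a one-liner. A NumPy sketch of the usual ELU definition: identity for positive inputs, a saturating exponential for negative ones.

```python
import numpy as np

def elu(x: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """ELU(x) = x for x > 0, alpha * (exp(x) - 1) otherwise (saturates to -alpha)."""
    return np.where(x > 0, x, alpha * np.expm1(np.minimum(x, 0.0)))

print(elu(np.array([-3.0, -0.5, 0.0, 2.0])))   # negative inputs are squashed toward -alpha
```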
|
| Universum Prescription: Regularization using Unlabeled Data (Nov 2015) |
93.34% |
| Xiang Zhang, Yann LeCun This paper shows that simply prescribing "none of the above" labels to unlabeled data has a beneficial regularization effect to supervised learning. We call it universum prescription by the fact that the prescribed labels cannot be one of the supervised labels. In spite of its simplicity, universum prescription obtained competitive results in training deep convolutional networks for CIFAR-10, CIFAR-100, STL-10 and ImageNet datasets. A qualitative justification of these approaches using Rademacher complexity is presented. The effect of a regularization parameter -- probability of sampling from unlabeled data -- is also studied empirically. |
|
| Batch-normalized Maxout Network in Network (Nov 2015) |
93.25% |
| Jia-Ren Chang, Yong-Sheng Chen This paper reports a novel deep architecture referred to as Maxout network In Network (MIN), which can enhance model discriminability and facilitate the process of information abstraction within the receptive field. The proposed network adopts the framework of the recently developed Network In Network structure, which slides a universal approximator, a multilayer perceptron (MLP) with rectifier units, to extract features. Instead of MLP, we employ maxout MLP to learn a variety of piecewise linear activation functions and to mitigate the problem of vanishing gradients that can occur when using rectifier units. Moreover, batch normalization is applied to reduce the saturation of maxout units by pre-conditioning the model, and dropout is applied to prevent overfitting. Finally, average pooling is used in all pooling layers to regularize maxout MLP in order to facilitate information abstraction in every receptive field while tolerating the change of object position. Because average pooling preserves all features in the local patch, the proposed MIN model can enforce the suppression of irrelevant information during training. Our experiments demonstrated state-of-the-art classification performance when the MIN model was applied to the MNIST, CIFAR-10, and CIFAR-100 datasets and comparable performance on the SVHN dataset. |
|
| Competitive Multi-scale Convolution (Nov 2015) |
93.13% |
| Zhibin Liao, Gustavo Carneiro In this paper, we introduce a new deep convolutional neural network (ConvNet) module that promotes competition among a set of multi-scale convolutional filters. This new module is inspired by the inception module, where we replace the original collaborative pooling stage (consisting of a concatenation of the multi-scale filter outputs) by a competitive pooling represented by a maxout activation unit. This extension has the following two objectives: 1) the selection of the maximum response among the multi-scale filters prevents filter co-adaptation and allows the formation of multiple sub-networks within the same model, which has been shown to facilitate the training of complex learning problems; and 2) the maxout unit reduces the dimensionality of the outputs from the multi-scale filters. We show that the use of our proposed module in typical deep ConvNets produces classification results that are either better than or comparable to the state of the art on the following benchmark datasets: MNIST, CIFAR-10, CIFAR-100 and SVHN. |
|
| Recurrent Convolutional Neural Network for Object Recognition (CVPR 2015) |
92.91% |
| Ming Liang, Xiaolin Hu In recent years, the convolutional neural network (CNN) has achieved great success in many computer vision tasks. Partially inspired by neuroscience, CNN shares many properties with the visual system of the brain. A prominent difference is that CNN is typically a feed-forward architecture while in the visual system recurrent connections are abundant. Inspired by this fact, we propose a recurrent CNN (RCNN) for object recognition by incorporating recurrent connections into each convolutional layer. Though the input is static, the activities of RCNN units evolve over time so that the activity of each unit is modulated by the activities of its neighboring units. This property enhances the ability of the model to integrate the context information, which is important for object recognition. Like other recurrent neural networks, unfolding the RCNN through time can result in an arbitrarily deep network with a fixed number of parameters. Furthermore, the unfolded network has multiple paths, which can facilitate the learning process. The model is tested on four benchmark object recognition datasets: CIFAR-10, CIFAR-100, MNIST and SVHN. With fewer trainable parameters, RCNN outperforms the state-of-the-art models on all of these datasets. Increasing the number of parameters leads to even better performance. These results demonstrate the advantage of the recurrent structure over purely feed-forward structure for object recognition. |
|
| HyperNetworks (Sep 2016, arXiv 2016) |
92.77% |
| David Ha, Andrew Dai, Quoc V. Le This work explores hypernetworks: an approach of using one network, also known as a hypernetwork, to generate the weights for another network. Hypernetworks provide an abstraction that is similar to what is found in nature: the relationship between a genotype - the hypernetwork - and a phenotype - the main network. Though they are also reminiscent of HyperNEAT in evolution, our hypernetworks are trained end-to-end with backpropagation and thus are usually faster. The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as a relaxed form of weight-sharing across layers. Our main result is that hypernetworks can generate non-shared weights for LSTMs and achieve near state-of-the-art results on a variety of sequence modelling tasks including character-level language modelling, handwriting generation and neural machine translation, challenging the weight-sharing paradigm for recurrent networks. Our results also show that hypernetworks applied to convolutional networks still achieve respectable results for image recognition tasks compared to state-of-the-art baseline models while requiring fewer learnable parameters. |
|
| Learning Activation Functions to Improve Deep Neural Networks (Dec 2014, ICLR 2015) |
92.49% |
| Forest Agostinelli, Matthew Hoffman, Peter Sadowski, Pierre Baldi Artificial neural networks typically have a fixed, non-linear activation function at each neuron. We have designed a novel form of piecewise linear activation function that is learned independently for each neuron using gradient descent. With this adaptive activation function, we are able to improve upon deep neural network architectures composed of static rectified linear units, achieving state-of-the-art performance on CIFAR-10 (7.51%), CIFAR-100 (30.83%), and a benchmark from high-energy physics involving Higgs boson decay modes. |
|
| cifar.torch (unpublished 2015) |
92.45% |
| Sergey Zagoruyko |
|
| Training Very Deep Networks (Jul 2015, NIPS 2015) |
92.40% |
| Rupesh Kumar Srivastava, Klaus Greff, Jürgen Schmidhuber Theoretical and empirical evidence indicates that the depth of neural networks is crucial for their success. However, training becomes more difficult as depth increases, and training of very deep networks remains an open problem. Here we introduce a new architecture designed to overcome this. Our so-called highway networks allow unimpeded information flow across many layers on information highways. They are inspired by Long Short-Term Memory recurrent networks and use adaptive gating units to regulate the information flow. Even with hundreds of layers, highway networks can be trained directly through simple gradient descent. This enables the study of extremely deep and efficient architectures. |
|
| Stacked What-Where Auto-encoders (Jun 2015) |
92.23% |
| Junbo Zhao, Michael Mathieu, Ross Goroshin, Yann LeCun We present a novel architecture, the "stacked what-where auto-encoders" (SWWAE), which integrates discriminative and generative pathways and provides a unified approach to supervised, semi-supervised and unsupervised learning without relying on sampling during training. An instantiation of SWWAE uses a convolutional net (Convnet) (LeCun et al. (1998)) to encode the input, and employs a deconvolutional net (Deconvnet) (Zeiler et al. (2010)) to produce the reconstruction. The objective function includes reconstruction terms that induce the hidden states in the Deconvnet to be similar to those of the Convnet. Each pooling layer produces two sets of variables: the "what" which are fed to the next layer, and its complementary variable "where" that are fed to the corresponding layer in the generative decoder. |
|
| Multi-Loss Regularized Deep Neural Network (CSVT 2015) |
91.88% |
| Chunyan Xu, Canyi Lu, Xiaodan Liang, Junbin Gao, Wei Zheng, Tianjiang Wang, Shuicheng Yan A proper strategy to alleviate overfitting is critical to a deep neural network (DNN). In this paper, we introduce cross-loss-function regularization for boosting the generalization capability of the DNN, which results in the multi-loss regularized DNN (ML-DNN) framework. For a particular learning task, e.g., image classification, only a single loss function is used in all previous DNNs, and the intuition behind the multi-loss framework is that the extra loss functions with different theoretical motivations (e.g., pairwise loss and LambdaRank loss) may drag the algorithm away from overfitting to one particular single loss function (e.g., softmax loss). In the training stage, we pretrain the model with the single-core-loss function and then warm-start the whole ML-DNN with the convolutional parameters transferred from the pretrained model. In the testing stage, the outputs of the ML-DNN from the different loss functions are fused with average pooling to produce the ultimate prediction. The experiments conducted on several benchmark datasets (CIFAR-10, CIFAR-100, MNIST, and SVHN) demonstrate that the proposed ML-DNN framework, instantiated by the recently proposed network in network, considerably outperforms all other state-of-the-art methods. |
|
| Deeply-Supervised Nets (Sep 2014) |
91.78% |
| Chen-Yu Lee, Saining Xie, Patrick Gallagher, Zhengyou Zhang, Zhuowen Tu Our proposed deeply-supervised nets (DSN) method simultaneously minimizes classification error while making the learning process of hidden layers direct and transparent. We make an attempt to boost the classification performance by studying a new formulation in deep networks. Three aspects in convolutional neural networks (CNN) style architectures are being looked at: (1) transparency of the intermediate layers to the overall classification; (2) discriminativeness and robustness of learned features, especially in the early layers; (3) effectiveness in training due to the presence of the exploding and vanishing gradients. We introduce "companion objective" to the individual hidden layers, in addition to the overall objective at the output layer (a different strategy to layer-wise pre-training). We extend techniques from stochastic gradient methods to analyze our algorithm. The advantage of our method is evident and our experimental result on benchmark datasets shows significant performance gain over existing methods (e.g. all state-of-the-art results on MNIST, CIFAR-10, CIFAR-100, and SVHN). |
|
| BinaryConnect: Training Deep Neural Networks with binary weights during propagations (Nov 2015, NIPS 2015) |
91.73% |
| Matthieu Courbariaux, Yoshua Bengio, Jean-Pierre David Deep Neural Networks (DNN) have achieved state-of-the-art results in a wide range of tasks, with the best results obtained with large training sets and large models. In the past, GPUs enabled these breakthroughs because of their greater computational speed. In the future, faster computation at both training and test time is likely to be crucial for further progress and for consumer applications on low-power devices. As a result, there is much interest in research and development of dedicated hardware for Deep Learning (DL). Binary weights, i.e., weights which are constrained to only two possible values (e.g. -1 or 1), would bring great benefits to specialized DL hardware by replacing many multiply-accumulate operations by simple accumulations, as multipliers are the most space and power-hungry components of the digital implementation of neural networks. We introduce BinaryConnect, a method which consists in training a DNN with binary weights during the forward and backward propagations, while retaining precision of the stored weights in which gradients are accumulated. Like other dropout schemes, we show that BinaryConnect acts as regularizer and we obtain near state-of-the-art results with BinaryConnect on the permutation-invariant MNIST, CIFAR-10 and SVHN. |
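
The central trick, binarize the weights for the forward and backward passes while accumulating gradients in the stored real-valued weights, can be sketched with a straight-through estimator in PyTorch. This is our reading of the abstract, not the authors' code; the paper also describes a stochastic binarization variant not shown here.

```python
import torch
import torch.nn as nn

class BinaryConnectLinear(nn.Module):
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        w_bin = torch.sign(self.weight)                      # +/-1 weights used in the pass
        # Straight-through estimator: forward uses w_bin, gradients flow to self.weight.
        w = (w_bin - self.weight).detach() + self.weight
        return x @ w.t() + self.bias
```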
|
| On the Importance of Normalisation Layers in Deep Learning with Piecewise Linear Activation Units (Aug 2015) |
91.48% |
| Zhibin Liao, Gustavo Carneiro Deep feedforward neural networks with piecewise linear activations are currently producing the state-of-the-art results in several public datasets. The combination of deep learning models and piecewise linear activation functions allows for the estimation of exponentially complex functions with the use of a large number of subnetworks specialized in the classification of similar input examples. During the training process, these subnetworks avoid overfitting with an implicit regularization scheme based on the fact that they must share their parameters with other subnetworks. Using this framework, we have made an empirical observation that can improve even more the performance of such models. We notice that these models assume a balanced initial distribution of data points with respect to the domain of the piecewise linear activation function. If that assumption is violated, then the piecewise linear activation units can degenerate into purely linear activation units, which can result in a significant reduction of their capacity to learn complex functions. Furthermore, as the number of model layers increases, this unbalanced initial distribution makes the model ill-conditioned. Therefore, we propose the introduction of batch normalisation units into deep feedforward neural networks with piecewise linear activations, which drives a more balanced use of these activation units, where each region of the activation function is trained with a relatively large proportion of training samples. Also, this batch normalisation promotes the pre-conditioning of very deep learning models. We show that by introducing maxout and batch normalisation units to the network in network model results in a model that produces classification results that are better than or comparable to the current state of the art in CIFAR-10, CIFAR-100, MNIST, and SVHN datasets. |
|
| Spectral Representations for Convolutional Neural Networks (Jun 2015, NIPS 2015) |
91.40% |
| Oren Rippel, Jasper Snoek, Ryan P. Adams Discrete Fourier transforms provide a significant speedup in the computation of convolutions in deep learning. In this work, we demonstrate that, beyond its advantages for efficient computation, the spectral domain also provides a powerful representation in which to model and train convolutional neural networks (CNNs). We employ spectral representations to introduce a number of innovations to CNN design. First, we propose spectral pooling, which performs dimensionality reduction by truncating the representation in the frequency domain. This approach preserves considerably more information per parameter than other pooling strategies and enables flexibility in the choice of pooling output dimensionality. This representation also enables a new form of stochastic regularization by randomized modification of resolution. We show that these methods achieve competitive results on classification and approximation tasks, without using any dropout or max-pooling. Finally, we demonstrate the effectiveness of complex-coefficient spectral parameterization of convolutional filters. While this leaves the underlying model unchanged, it results in a representation that greatly facilitates optimization. We observe on a variety of popular CNN configurations that this leads to significantly faster convergence during training. |
|
| Network in Network (Dec 2013, ICLR 2014) |
91.2% |
| Min Lin, Qiang Chen, Shuicheng Yan We propose a novel deep network structure called "Network In Network" (NIN) to enhance model discriminability for local patches within the receptive field. The conventional convolutional layer uses linear filters followed by a nonlinear activation function to scan the input. Instead, we build micro neural networks with more complex structures to abstract the data within the receptive field. We instantiate the micro neural network with a multilayer perceptron, which is a potent function approximator. The feature maps are obtained by sliding the micro networks over the input in a similar manner as in CNNs; they are then fed into the next layer. Deep NIN can be implemented by stacking multiple of the above-described structures. With enhanced local modeling via the micro network, we are able to utilize global average pooling over feature maps in the classification layer, which is easier to interpret and less prone to overfitting than traditional fully connected layers. We demonstrate state-of-the-art classification performance with NIN on CIFAR-10 and CIFAR-100, and reasonable performance on the SVHN and MNIST datasets. |
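
One way to read the architecture: each "mlpconv" layer is a conventional convolution followed by 1x1 convolutions (the sliding MLP), and the classifier is global average pooling over per-class feature maps. A single-block PyTorch sketch with arbitrary channel counts; the full model stacks several such blocks.

```python
import torch.nn as nn

mlpconv_then_gap = nn.Sequential(
    nn.Conv2d(3, 192, kernel_size=5, padding=2), nn.ReLU(),   # conventional convolution
    nn.Conv2d(192, 160, kernel_size=1), nn.ReLU(),            # 1x1 convs = MLP at each position
    nn.Conv2d(160, 10, kernel_size=1), nn.ReLU(),             # one feature map per class (CIFAR-10)
    nn.AdaptiveAvgPool2d(1),                                  # global average pooling
    nn.Flatten(),                                             # -> class scores, no FC classifier
)
```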
|
| Speeding up Automatic Hyperparameter Optimization of Deep Neural Networks by Extrapolation of Learning Curves (IJCAI 2015) |
91.19% |
| Tobias Domhan, Jost Tobias Springenberg, Frank Hutter Deep neural networks (DNNs) show very strong performance on many machine learning problems, but they are very sensitive to the setting of their hyperparameters. Automated hyperparameter optimization methods have recently been shown to yield settings competitive with those found by human experts, but their widespread adoption is hampered by the fact that they require more computational resources than human experts. Humans have one advantage: when they evaluate a poor hyperparameter setting they can quickly detect (after a few steps of stochastic gradient descent) that the resulting network performs poorly and terminate the corresponding evaluation to save time. In this paper, we mimic the early termination of bad runs using a probabilistic model that extrapolates the performance from the first part of a learning curve. Experiments with a broad range of neural network architectures on various prominent object recognition benchmarks show that our resulting approach speeds up state-of-the-art hyperparameter optimization methods for DNNs roughly twofold, enabling them to find DNN settings that yield better performance than those chosen by human experts. |
|
| Deep Networks with Internal Selective Attention through Feedback Connections (Jul 2014, NIPS 2014) |
90.78% |
| Marijn Stollenga, Jonathan Masci, Faustino Gomez, Juergen Schmidhuber Traditional convolutional neural networks (CNN) are stationary and feedforward. They neither change their parameters during evaluation nor use feedback from higher to lower layers. Real brains, however, do. So does our Deep Attention Selective Network (dasNet) architecture. DasNets feedback structure can dynamically alter its convolutional filter sensitivities during classification. It harnesses the power of sequential processing to improve classification performance, by allowing the network to iteratively focus its internal attention on some of its convolutional filters. Feedback is trained through direct policy search in a huge million-dimensional parameter space, through scalable natural evolution strategies (SNES). On the CIFAR-10 and CIFAR-100 datasets, dasNet outperforms the previous state-of-the-art model. |
|
| Regularization of Neural Networks using DropConnect (ICML 2013) |
90.68% |
| Li Wan, Matthew Zeiler, Sixin Zhang, Yann LeCun, Rob Fergus We introduce DropConnect, a generalization of Dropout (Hinton et al., 2012), for regularizing large fully-connected layers within neural networks. When training with Dropout, a randomly selected subset of activations is set to zero within each layer. DropConnect instead sets a randomly selected subset of weights within the network to zero. Each unit thus receives input from a random subset of units in the previous layer. We derive a bound on the generalization performance of both Dropout and DropConnect. We then evaluate DropConnect on a range of datasets, comparing to Dropout, and show state-of-the-art results on several image recognition benchmarks by aggregating multiple DropConnect-trained models. |
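
The distinction from Dropout is clearest in code: the random mask is applied to the weights rather than the activations. A PyTorch training-time sketch of a DropConnect linear layer; the paper's inference procedure approximates the output distribution by sampling, whereas the simplified version below just uses all weights at test time.

```python
import torch
import torch.nn as nn

class DropConnectLinear(nn.Module):
    def __init__(self, in_features: int, out_features: int, drop_prob: float = 0.5):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.drop_prob = drop_prob

    def forward(self, x):
        if self.training:
            # Zero a random subset of WEIGHTS (not activations) on every forward pass.
            mask = (torch.rand_like(self.linear.weight) >= self.drop_prob).float()
            return x @ (self.linear.weight * mask).t() + self.linear.bias
        return self.linear(x)   # simplified inference; see the paper for the sampling scheme
```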
|
| Maxout Networks (Feb 2013, ICML 2013) |
90.65% |
| Ian J. Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, Yoshua Bengio We consider the problem of designing models to leverage a recently introduced approximate model averaging technique called dropout. We define a simple new model called maxout (so named because its output is the max of a set of inputs, and because it is a natural companion to dropout) designed to both facilitate optimization by dropout and improve the accuracy of dropout's fast approximate model averaging technique. We empirically verify that the model successfully accomplishes both of these tasks. We use maxout and dropout to demonstrate state of the art classification performance on four benchmark datasets: MNIST, CIFAR-10, CIFAR-100, and SVHN. |
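
A maxout unit outputs the elementwise maximum over k affine pieces of its input. A PyTorch sketch of a maxout layer (k = 2 pieces is a common choice):

```python
import torch.nn as nn

class MaxoutLinear(nn.Module):
    """Each output is the max over k learned affine functions of the input."""
    def __init__(self, in_features: int, out_features: int, k: int = 2):
        super().__init__()
        self.k = k
        self.linear = nn.Linear(in_features, out_features * k)

    def forward(self, x):
        z = self.linear(x)                                # (..., out_features * k)
        z = z.view(*z.shape[:-1], -1, self.k)             # (..., out_features, k)
        return z.max(dim=-1).values
```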
|
| Improving Deep Neural Networks with Probabilistic Maxout Units (Dec 2013, ICLR 2014) |
90.61% |
| Jost Tobias Springenberg, Martin Riedmiller We present a probabilistic variant of the recently introduced maxout unit. The success of deep neural networks utilizing maxout can partly be attributed to favorable performance under dropout, when compared to rectified linear units. It however also depends on the fact that each maxout unit performs a pooling operation over a group of linear transformations and is thus partially invariant to changes in its input. Starting from this observation we ask the question: Can the desirable properties of maxout units be preserved while improving their invariance properties ? We argue that our probabilistic maxout (probout) units successfully achieve this balance. We quantitatively verify this claim and report classification performance matching or exceeding the current state of the art on three challenging image classification benchmarks (CIFAR-10, CIFAR-100 and SVHN). |
|
| Practical Bayesian Optimization of Machine Learning Algorithms (Jun 2012, NIPS 2012) |
90.5% |
| Jasper Snoek, Hugo Larochelle, Ryan P. Adams Machine learning algorithms frequently require careful tuning of model hyperparameters, regularization terms, and optimization parameters. Unfortunately, this tuning is often a "black art" that requires expert experience, unwritten rules of thumb, or sometimes brute-force search. Much more appealing is the idea of developing automatic approaches which can optimize the performance of a given learning algorithm to the task at hand. In this work, we consider the automatic tuning problem within the framework of Bayesian optimization, in which a learning algorithm's generalization performance is modeled as a sample from a Gaussian process (GP). The tractable posterior distribution induced by the GP leads to efficient use of the information gathered by previous experiments, enabling optimal choices about what parameters to try next. Here we show how the effects of the Gaussian process prior and the associated inference procedure can have a large impact on the success or failure of Bayesian optimization. We show that thoughtful choices can lead to results that exceed expert-level performance in tuning machine learning algorithms. We also describe new algorithms that take into account the variable cost (duration) of learning experiments and that can leverage the presence of multiple cores for parallel experimentation. We show that these proposed algorithms improve on previous automatic procedures and can reach or surpass human expert-level optimization on a diverse set of contemporary algorithms including latent Dirichlet allocation, structured SVMs and convolutional neural networks. |
|
| APAC: Augmented PAttern Classification with Neural Networks (May 2015) |
89.67% |
| Ikuro Sato, Hiroki Nishimura, Kensuke Yokoi Deep neural networks have been exhibiting splendid accuracies on many visual pattern classification problems. Many of the state-of-the-art methods employ a technique known as data augmentation at the training stage. This paper addresses the issue of the decision rule for classifiers trained with augmented data. Our method is named APAC: Augmented PAttern Classification, a way of classifying using the optimal decision rule for augmented-data learning. Discussion of methods of data augmentation is not our primary focus. We show clear evidence in several experiments that APAC gives far better generalization performance than the traditional way of class prediction. Our convolutional neural network model with APAC achieved state-of-the-art accuracy on the MNIST dataset among non-ensemble classifiers. Even our multilayer perceptron model beats some of the convolutional models with recently invented stochastic regularization techniques on the CIFAR-10 dataset. |
|
| Deep Convolutional Neural Networks as Generic Feature Extractors (IJCNN 2015) |
89.14% |
| Lars Hertel, Erhardt Barth, Thomas Käster, Thomas Martinetz Recognizing objects in natural images is an intricate problem involving multiple conflicting objectives. Deep convolutional neural networks, trained on large datasets, achieve convincing results and are currently the state-of-the-art approach for this task. However, the long time needed to train such deep networks is a major drawback. We tackled this problem by reusing a previously trained network. For this purpose, we first trained a deep convolutional network on the ILSVRC-12 dataset. We then maintained the learned convolution kernels and only retrained the classification part on different datasets. Using this approach, we achieved an accuracy of 67.68% on CIFAR-100, compared to the previous state-of-the-art result of 65.43%. Furthermore, our findings indicate that convolutional networks are able to learn generic feature extractors that can be used for different tasks. |
|
| ImageNet Classification with Deep Convolutional Neural Networks (NIPS 2012) |
89% |
| Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton We trained a large, deep convolutional neural network to classify the 1.3 million high-resolution images in the LSVRC-2010 ImageNet training set into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 39.7% and 18.9% which is considerably better than the previous state-of-the-art results. The neural network, which has 60 million parameters and 500,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and two globally connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of convolutional nets. To reduce overfitting in the globally connected layers we employed a new regularization method that proved to be very effective. |
|
| Empirical Evaluation of Rectified Activations in Convolution Network (May 2015, ICML workshop 2015) |
88.80% |
| Bing Xu, Naiyan Wang, Tianqi Chen, Mu Li In this paper we investigate the performance of different types of rectified activation functions in convolutional neural networks: the standard rectified linear unit (ReLU), leaky rectified linear unit (Leaky ReLU), parametric rectified linear unit (PReLU), and a new randomized leaky rectified linear unit (RReLU). We evaluate these activation functions on standard image classification tasks. Our experiments suggest that incorporating a non-zero slope for the negative part in rectified activation units could consistently improve the results. Our findings thus challenge the common belief that sparsity is the key to good performance in ReLU. Moreover, on small-scale datasets, using a deterministic negative slope or learning it are both prone to overfitting; they are not as effective as using a randomized counterpart. By using RReLU, we achieved 75.68% accuracy on the CIFAR-100 test set without multiple tests or ensembles. |
|
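For concreteness, here is a small numpy sketch of the four activation variants compared in the entry above. The train/test behaviour of RReLU follows the usual description (a random slope per element at training time, the average slope at test time); the slope bounds are common defaults and are meant as illustrative values.

```python
# Sketch of ReLU, Leaky ReLU, PReLU and RReLU forward passes (numpy).
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def leaky_relu(x, slope=0.01):
    return np.where(x >= 0, x, slope * x)

def prelu(x, learned_slope):
    # PReLU: the negative slope is a learned parameter (updated by backprop elsewhere).
    return np.where(x >= 0, x, learned_slope * x)

def rrelu(x, lower=1.0 / 8, upper=1.0 / 3, training=True, rng=None):
    # RReLU: at training time, sample a slope per element from U(lower, upper);
    # at test time, use the fixed average slope (lower + upper) / 2.
    if training:
        rng = rng or np.random.default_rng()
        slopes = rng.uniform(lower, upper, size=x.shape)
        return np.where(x >= 0, x, slopes * x)
    return np.where(x >= 0, x, 0.5 * (lower + upper) * x)

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(relu(x), leaky_relu(x), prelu(x, 0.25), rrelu(x, training=False))
```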
| Multi-Column Deep Neural Networks for Image Classification (Feb 2012, CVPR 2012) |
88.79% |
| Dan Cireşan, Ueli Meier, Juergen Schmidhuber Traditional methods of computer vision and machine learning cannot match human performance on tasks such as the recognition of handwritten digits or traffic signs. Our biologically plausible deep artificial neural network architectures can. Small (often minimal) receptive fields of convolutional winner-take-all neurons yield large network depth, resulting in roughly as many sparsely connected neural layers as found in mammals between retina and visual cortex. Only winner neurons are trained. Several deep neural columns become experts on inputs preprocessed in different ways; their predictions are averaged. Graphics cards allow for fast training. On the very competitive MNIST handwriting benchmark, our method is the first to achieve near-human performance. On a traffic sign recognition benchmark it outperforms humans by a factor of two. We also improve the state-of-the-art on a plethora of common image classification benchmarks. |
|
| ReNet: A Recurrent Neural Network Based Alternative to Convolutional Networks (May 2015) |
87.65% |
| Francesco Visin, Kyle Kastner, Kyunghyun Cho, Matteo Matteucci, Aaron Courville, Yoshua Bengio In this paper, we propose a deep neural network architecture for object recognition based on recurrent neural networks. The proposed network, called ReNet, replaces the ubiquitous convolution+pooling layer of the deep convolutional neural network with four recurrent neural networks that sweep horizontally and vertically in both directions across the image. We evaluate the proposed ReNet on three widely-used benchmark datasets: MNIST, CIFAR-10 and SVHN. The results suggest that ReNet is a viable alternative to the deep convolutional neural network, and that further investigation is needed. |
|
| An Analysis of Unsupervised Pre-training in Light of Recent Advances (Dec 2014, ICLR 2015) |
86.70% |
| Tom Le Paine, Pooya Khorrami, Wei Han, Thomas S. Huang Convolutional neural networks perform well on object recognition because of a number of recent advances: rectified linear units (ReLUs), data augmentation, dropout, and large labelled datasets. Unsupervised data has been proposed as another way to improve performance. Unfortunately, unsupervised pre-training is not used by state-of-the-art methods, leading to the following question: Is unsupervised pre-training still useful given recent advances? If so, when? We answer this in three parts: we 1) develop an unsupervised method that incorporates ReLUs and recent unsupervised regularization techniques, 2) analyze the benefits of unsupervised pre-training compared to data augmentation and dropout on CIFAR-10 while varying the ratio of unsupervised to supervised samples, 3) verify our findings on STL-10. We discover that unsupervised pre-training, as expected, helps when the ratio of unsupervised to supervised samples is high, and, surprisingly, hurts when the ratio is low. We also use unsupervised pre-training with additional color augmentation to achieve near state-of-the-art performance on STL-10. |
|
| Stochastic Pooling for Regularization of Deep Convolutional Neural Networks (Jan 2013) |
84.87% |
| Matthew D. Zeiler, Rob Fergus We introduce a simple and effective method for regularizing large convolutional neural networks. We replace the conventional deterministic pooling operations with a stochastic procedure, randomly picking the activation within each pooling region according to a multinomial distribution, given by the activities within the pooling region. The approach is hyper-parameter free and can be combined with other regularization approaches, such as dropout and data augmentation. We achieve state-of-the-art performance on four image datasets, relative to other approaches that do not utilize data augmentation. |
|
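A minimal numpy sketch of the stochastic pooling idea described above: within each pooling region, the non-negative activations are normalized into a multinomial distribution and one activation is sampled; at test time a probability-weighted average is commonly used instead. The 2x2 region size and the test-time rule here follow the usual description and are illustrative.

```python
# Sketch of stochastic pooling over non-overlapping 2x2 regions (numpy).
import numpy as np

def stochastic_pool2x2(activations, training=True, rng=None):
    # activations: 2D map of non-negative values (e.g. post-ReLU), H and W even.
    rng = rng or np.random.default_rng()
    h, w = activations.shape
    out = np.empty((h // 2, w // 2))
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            region = activations[i:i + 2, j:j + 2].ravel()
            total = region.sum()
            if total == 0:
                out[i // 2, j // 2] = 0.0
                continue
            probs = region / total
            if training:
                # Sample one activation according to the multinomial given by probs.
                out[i // 2, j // 2] = region[rng.choice(4, p=probs)]
            else:
                # Test time: probability-weighted average of the activations.
                out[i // 2, j // 2] = float(probs @ region)
    return out

rng = np.random.default_rng(0)
fmap = np.maximum(rng.standard_normal((4, 4)), 0.0)
print(stochastic_pool2x2(fmap, training=True, rng=rng))
print(stochastic_pool2x2(fmap, training=False))
```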
| Improving neural networks by preventing co-adaptation of feature detectors (Jul 2012) |
84.4% |
| Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, Ruslan R. Salakhutdinov When a large feedforward neural network is trained on a small training set, it typically performs poorly on held-out test data. This "overfitting" is greatly reduced by randomly omitting half of the feature detectors on each training case. This prevents complex co-adaptations in which a feature detector is only helpful in the context of several other specific feature detectors. Instead, each neuron learns to detect a feature that is generally helpful for producing the correct answer given the combinatorially large variety of internal contexts in which it must operate. Random "dropout" gives big improvements on many benchmark tasks and sets new records for speech and object recognition. |
|
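The regularizer described above amounts to randomly omitting a fraction of hidden units on each training case. Below is a tiny numpy sketch of one hidden layer with dropout, using the common "inverted dropout" scaling rather than the paper's halve-the-weights-at-test-time formulation; the layer sizes are arbitrary.

```python
# Sketch of a hidden layer with dropout (inverted-dropout variant).
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((100, 50)) * 0.1
b = np.zeros(50)

def hidden_layer(x, drop_prob=0.5, training=True):
    h = np.maximum(x @ W + b, 0.0)                 # ReLU hidden activations
    if training:
        # Randomly omit each unit with probability drop_prob; rescale the rest
        # so the expected activation matches test time (inverted dropout).
        mask = rng.random(h.shape) >= drop_prob
        return h * mask / (1.0 - drop_prob)
    return h                                        # no scaling needed at test time

x = rng.standard_normal((4, 100))
print(hidden_layer(x, training=True).mean(), hidden_layer(x, training=False).mean())
```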
| Discriminative Unsupervised Feature Learning with Exemplar Convolutional Neural Networks (Jun 2014, arXiv 2015) |
84.3% |
| Alexey Dosovitskiy, Philipp Fischer, Jost Tobias Springenberg, Martin Riedmiller, Thomas Brox Deep convolutional networks have proven to be very successful in learning task-specific features that allow for unprecedented performance on various computer vision tasks. Training of such networks follows mostly the supervised learning paradigm, where sufficiently many input-output pairs are required for training. Acquisition of large training sets is one of the key challenges when approaching a new task. In this paper, we aim for generic feature learning and present an approach for training a convolutional network using only unlabeled data. To this end, we train the network to discriminate between a set of surrogate classes. Each surrogate class is formed by applying a variety of transformations to a randomly sampled 'seed' image patch. In contrast to supervised network training, the resulting feature representation is not class specific. It rather provides robustness to the transformations that have been applied during training. This generic feature representation allows for classification results that outperform the state of the art for unsupervised learning on several popular datasets (STL-10, CIFAR-10, Caltech-101, Caltech-256). While such generic features cannot compete with class-specific features from supervised training on a classification task, we show that they are advantageous on geometric matching problems, where they also outperform the SIFT descriptor. |
|
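The surrogate-class construction described above can be sketched in a few lines: sample a "seed" patch, apply several random transformations to it, and treat every transformed copy as a member of one class. The patch size, transformation family, and counts below are illustrative choices, not the paper's exact settings.

```python
# Sketch of building surrogate classes from random transformations of seed patches.
import numpy as np

rng = np.random.default_rng(0)

def random_transform(patch):
    # A small, illustrative transformation family: flips, shifts, contrast scaling.
    if rng.random() < 0.5:
        patch = patch[:, ::-1]
    patch = np.roll(patch, rng.integers(-3, 4), axis=rng.integers(0, 2))
    return np.clip(patch * rng.uniform(0.7, 1.3), 0.0, 1.0)

def make_surrogate_dataset(images, n_classes=8, copies_per_class=16, patch=24):
    data, labels = [], []
    for cls in range(n_classes):
        img = images[rng.integers(0, len(images))]
        top = rng.integers(0, img.shape[0] - patch + 1)
        left = rng.integers(0, img.shape[1] - patch + 1)
        seed = img[top:top + patch, left:left + patch]       # the 'seed' patch
        for _ in range(copies_per_class):
            data.append(random_transform(seed))
            labels.append(cls)                               # one class per seed
    return np.stack(data), np.array(labels)

unlabeled = rng.random((100, 32, 32))
X, y = make_surrogate_dataset(unlabeled)
print(X.shape, np.bincount(y))
```

A network would then be trained to classify these surrogate labels, and its intermediate representation reused as a generic feature extractor.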
| Discriminative Learning of Sum-Product Networks (NIPS 2012) |
83.96% |
| Robert Gens, Pedro Domingos Sum-product networks are a new deep architecture that can perform fast, exact inference on high-treewidth models. Only generative methods for training SPNs have been proposed to date. In this paper, we present the first discriminative training algorithms for SPNs, combining the high accuracy of the former with the representational power and tractability of the latter. We show that the class of tractable discriminative SPNs is broader than the class of tractable generative ones, and propose an efficient backpropagation-style algorithm for computing the gradient of the conditional log likelihood. Standard gradient descent suffers from the diffusion problem, but networks with many layers can be learned reliably using "hard" gradient descent, where marginal inference is replaced by MPE inference (i.e., inferring the most probable state of the non-evidence variables). The resulting updates have a simple and intuitive form. We test discriminative SPNs on standard image classification tasks. We obtain the best results to date on the CIFAR-10 dataset, using fewer features than prior methods with an SPN architecture that learns local image structure discriminatively. We also report the highest published test accuracy on STL-10 even though we only use the labeled portion of the dataset. |
|
| Unsupervised Learning using Pretrained CNN and Associative Memory Bank (May 2018, arXiv 2018) |
83.1% |
| Qun Liu, Supratik Mukhopadhyay Deep convolutional features extracted from a comprehensive labeled dataset contain substantial representations that can be effectively used in a new domain. Although generic features have achieved good results in many visual tasks, fine-tuning is required for pretrained deep CNN models to be more effective and provide state-of-the-art performance. Fine-tuning using the backpropagation algorithm in a supervised setting is a time- and resource-consuming process. In this paper, we present a new architecture and an approach for unsupervised object recognition that addresses the fine-tuning problem associated with pretrained CNN-based supervised deep learning approaches while allowing automated feature extraction. Unlike existing works, our approach is applicable to general object recognition tasks. It uses a CNN model pretrained on a related domain for automated feature extraction, pipelined with a Hopfield-network-based associative memory bank that stores patterns for classification purposes. The use of an associative memory bank in our framework eliminates backpropagation while providing competitive performance on an unseen dataset. |
|
| Stable and Efficient Representation Learning with Nonnegativity Constraints (ICML 2014) |
82.9% |
| Tsung-Han Lin, H. T. Kung Orthogonal matching pursuit (OMP) is an efficient approximation algorithm for computing sparse representations. However, prior research has shown that the representations computed by OMP may be of inferior quality, as they deliver suboptimal classification accuracy on several image datasets. We have found that this problem is caused by OMP's relatively weak stability under data variations, which leads to unreliability in supervised classifier training. We show that by imposing a simple nonnegativity constraint, this nonnegative variant of OMP (NOMP) can mitigate OMP's stability issue and is resistant to noise overfitting. In this work, we provide extensive analysis and experimental results to examine and validate the stability advantage of NOMP. In our experiments, we use a multi-layer deep architecture for representation learning, where we use K-means for feature learning and NOMP for representation encoding. The resulting learning framework is not only efficient and scalable to large feature dictionaries, but also is robust against input noise. This framework achieves the state-of-the-art accuracy on the STL-10 dataset. |
|
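As an illustration of the nonnegative variant described above, here is a small sketch of a greedy OMP-style encoder with a nonnegativity constraint: atoms are selected by their positive correlation with the residual, and the coefficients on the selected support are re-fit with nonnegative least squares. This is a plain reading of the idea under those assumptions, not the authors' implementation.

```python
# Sketch of nonnegative orthogonal matching pursuit (NOMP-style encoding).
import numpy as np
from scipy.optimize import nnls

def nomp_encode(D, x, n_nonzero=5):
    """Greedy sparse code of x over dictionary D (columns = atoms), coeffs >= 0."""
    residual = x.copy()
    support = []
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        correlations = D.T @ residual
        correlations[support] = -np.inf            # do not reselect atoms
        k = int(np.argmax(correlations))
        if correlations[k] <= 0:                   # no atom improves the fit
            break
        support.append(k)
        # Re-fit all selected coefficients under a nonnegativity constraint.
        sol, _ = nnls(D[:, support], x)
        coeffs[:] = 0.0
        coeffs[support] = sol
        residual = x - D[:, support] @ sol
    return coeffs

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)                     # unit-norm atoms
x = np.abs(D[:, [3, 40, 99]] @ np.array([1.0, 0.5, 0.2]))
code = nomp_encode(D, x)
print("nonzero atoms:", np.nonzero(code)[0], "all coeffs >= 0:", bool((code >= 0).all()))
```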
| Learning Invariant Representations with Local Transformations (Jun 2012, ICML 2012) |
82.2% |
| Kihyuk Sohn, Honglak Lee Learning invariant representations is an important problem in machine learning and pattern recognition. In this paper, we present a novel framework of transformation-invariant feature learning by incorporating linear transformations into the feature learning algorithms. For example, we present the transformation-invariant restricted Boltzmann machine that compactly represents data by its weights and their transformations, which achieves invariance of the feature representation via probabilistic max pooling. In addition, we show that our transformation-invariant feature learning framework can also be extended to other unsupervised learning methods, such as autoencoders or sparse coding. We evaluate our method on several image classification benchmark datasets, such as MNIST variations, CIFAR-10, and STL-10, and show competitive or superior classification performance when compared to the state-of-the-art. Furthermore, our method achieves state-of-the-art performance on phone classification tasks with the TIMIT dataset, which demonstrates wide applicability of our proposed algorithms to other domains. |
|
| Convolutional Kernel Networks (Jun 2014) |
82.18% |
| Julien Mairal, Piotr Koniusz, Zaid Harchaoui, Cordelia Schmid An important goal in visual recognition is to devise image representations that are invariant to particular transformations. In this paper, we address this goal with a new type of convolutional neural network (CNN) whose invariance is encoded by a reproducing kernel. Unlike traditional approaches where neural networks are learned either to represent data or for solving a classification task, our network learns to approximate the kernel feature map on training data. Such an approach enjoys several benefits over classical ones. First, by teaching CNNs to be invariant, we obtain simple network architectures that achieve a similar accuracy to more complex ones, while being easy to train and robust to overfitting. Second, we bridge a gap between the neural network literature and kernels, which are natural tools to model invariance. We evaluate our methodology on visual recognition tasks where CNNs have proven to perform well, e.g., digit recognition with the MNIST dataset, and the more challenging CIFAR-10 and STL-10 datasets, where our accuracy is competitive with the state of the art. |
|
| Discriminative Unsupervised Feature Learning with Convolutional Neural Networks (NIPS 2014) |
82% |
| Alexey Dosovitskiy, Jost Tobias Springenberg, Martin Riedmiller, Thomas Brox Current methods for training convolutional neural networks depend on large amounts of labeled samples for supervised training. In this paper we present an approach for training a convolutional neural network using only unlabeled data. We train the network to discriminate between a set of surrogate classes. Each surrogate class is formed by applying a variety of transformations to a randomly sampled 'seed' image patch. We find that this simple feature learning algorithm is surprisingly successful when applied to visual object recognition. The feature representation learned by our algorithm achieves classification results matching or outperforming the current state-of-the-art for unsupervised learning on several popular datasets (STL-10, CIFAR-10, Caltech-101). |
|
| Selecting Receptive Fields in Deep Networks (NIPS 2011) |
82.0% |
| Adam Coates, Andrew Y. Ng Recent deep learning and unsupervised feature learning systems that learn from unlabeled data have achieved high performance in benchmarks by using extremely large architectures with many features (hidden units) at each layer. Unfortunately, for such large architectures the number of parameters can grow quadratically in the width of the network, thus necessitating hand-coded "local receptive fields" that limit the number of connections from lower level features to higher ones (e.g., based on spatial locality). In this paper we propose a fast method to choose these connections that may be incorporated into a wide variety of unsupervised training methods. Specifically, we choose local receptive fields that group together those low-level features that are most similar to each other according to a pairwise similarity metric. This approach allows us to harness the advantages of local receptive fields (such as improved scalability, and reduced data requirements) when we do not know how to specify such receptive fields by hand or where our unsupervised training algorithm has no obvious generalization to a topographic setting. We produce results showing how this method allows us to use even simple unsupervised training algorithms to train successful multi-layered networks that achieve state-of-the-art results on CIFAR and STL datasets: 82.0% and 60.1% accuracy, respectively. |
|
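A rough sketch of the grouping idea in the entry above: compute a pairwise similarity between learned low-level features (here, squared correlation of their responses, used as a simple stand-in for the paper's metric), then build each receptive field from a random seed feature and its most similar neighbours. The field count and size below are arbitrary.

```python
# Sketch of choosing receptive fields by grouping similar low-level features.
import numpy as np

def select_receptive_fields(responses, n_fields=4, field_size=16, rng=None):
    """responses: (n_samples, n_features) matrix of feature activations."""
    rng = rng or np.random.default_rng()
    # Squared correlation between features as a simple pairwise similarity.
    corr = np.corrcoef(responses, rowvar=False)
    similarity = corr ** 2
    fields = []
    for _ in range(n_fields):
        seed = rng.integers(0, responses.shape[1])
        # The seed feature plus its most similar features form one receptive field.
        neighbours = np.argsort(similarity[seed])[::-1][:field_size]
        fields.append(np.sort(neighbours))
    return fields

rng = np.random.default_rng(0)
feature_responses = rng.standard_normal((1000, 64))
for i, field in enumerate(select_receptive_fields(feature_responses, rng=rng)):
    print(f"field {i}: {field[:8]} ...")
```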
| Learning Smooth Pooling Regions for Visual Recognition (BMVC 2013) |
80.02% |
| Mateusz Malinowski, Mario Fritz From the early HMAX model to Spatial Pyramid Matching, spatial pooling has played an important role in visual recognition pipelines. By aggregating local statistics, it equips the recognition pipelines with a certain degree of robustness to translation and deformation while preserving spatial information. Despite its predominance in current recognition systems, we have seen little progress to fully adapt the pooling strategy to the task at hand. In this paper, we propose a flexible parameterization of the spatial pooling step and learn the pooling regions together with the classifier. We investigate a smoothness regularization term that, in conjunction with an efficient learning scheme, makes learning scalable. Our framework can work with both popular pooling operators: sum-pooling and max-pooling. Finally, we show benefits of our approach for object recognition tasks based on visual words and higher level event recognition tasks based on object-bank features. In both cases, we improve over the hand-crafted spatial pooling step, showing the importance of its adaptation to the task. |
|
| Object Recognition with Hierarchical Kernel Descriptors (CVPR 2011) |
80% |
| Liefeng Bo, Kevin Lai, Xiaofeng Ren, Dieter Fox Kernel descriptors provide a unified way to generate rich visual feature sets by turning pixel attributes into patch-level features, and yield impressive results on many object recognition tasks. However, best results with kernel descriptors are achieved using efficient match kernels in conjunction with nonlinear SVMs, which makes it impractical for large-scale problems. In this paper, we propose hierarchical kernel descriptors that apply kernel descriptors recursively to form image-level features and thus provide a conceptually simple and consistent way to generate image-level features from pixel attributes. More importantly, hierarchical kernel descriptors allow linear SVMs to yield state-of-the-art accuracy while being scalable to large datasets. They can also be naturally extended to extract features over depth images. We evaluate hierarchical kernel descriptors both on the CIFAR10 dataset and the new RGB-D Object Dataset consisting of segmented RGB and depth images of 300 everyday objects. |
|
| Learning with Recursive Perceptual Representations (NIPS 2012) |
79.7% |
| Oriol Vinyals, Yangqing Jia, Li Deng, Trevor Darrell Linear Support Vector Machines (SVMs) have become very popular in vision as part of state-of-the-art object recognition and other classification tasks but require high dimensional feature spaces for good performance. Deep learning methods can find more compact representations but current methods employ multilayer perceptrons that require solving a difficult, non-convex optimization problem. We propose a deep non-linear classifier whose layers are SVMs and which incorporates random projection as its core stacking element. Our method learns layers of linear SVMs recursively transforming the original data manifold through a random projection of the weak prediction computed from each layer. Our method scales as linear SVMs, does not rely on any kernel computations or nonconvex optimization, and exhibits better generalization ability than kernel-based SVMs. This is especially true when the number of training samples is smaller than the dimensionality of data, a common scenario in many real-world applications. The use of random projections is key to our method, as we show in the experiments section, in which we observe a consistent improvement over previous (often more complicated) methods on several vision and speech benchmarks. |
|
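A simplified sketch of the stacking idea above, assuming scikit-learn's LinearSVC: each layer trains a linear SVM on the current inputs, randomly projects its class scores, and feeds that projection back onto the original features to form the next layer's inputs. The exact recursion used here is an illustrative reading of the idea, not the paper's precise update rule, and the data is synthetic.

```python
# Sketch of a recursive stack of linear SVMs with random projections.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1000, n_features=40, n_informative=10,
                           n_classes=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

n_layers, beta = 3, 0.5
cur_train, cur_test = X_train.copy(), X_test.copy()
for layer in range(n_layers):
    svm = LinearSVC(max_iter=5000).fit(cur_train, y_train)   # one linear SVM per layer
    print(f"layer {layer} test accuracy: {svm.score(cur_test, y_test):.3f}")
    # Random-project the per-class scores and add them back onto the original inputs.
    scores_train = svm.decision_function(cur_train)
    scores_test = svm.decision_function(cur_test)
    W = rng.standard_normal((scores_train.shape[1], X_train.shape[1]))
    cur_train = X_train + beta * np.tanh(scores_train) @ W
    cur_test = X_test + beta * np.tanh(scores_test) @ W
```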
| An Analysis of Single-Layer Networks in Unsupervised Feature Learning (AISTATS 2011) |
79.6% |
| Adam Coates, Honglak Lee, Andrew Y. Ng A great deal of research has focused on algorithms for learning features from unlabeled data. Indeed, much progress has been made on benchmark datasets like NORB and CIFAR by employing increasingly complex unsupervised learning algorithms and deep models. In this paper, however, we show that several simple factors, such as the number of hidden nodes in the model, may be more important to achieving high performance than the learning algorithm or the depth of the model. Specifically, we will apply several off-the-shelf feature learning algorithms (sparse auto-encoders, sparse RBMs, K-means clustering, and Gaussian mixtures) to CIFAR, NORB, and STL datasets using only single-layer networks. We then present a detailed analysis of the effect of changes in the model setup: the receptive field size, number of hidden nodes (features), the step-size ("stride") between extracted features, and the effect of whitening. Our results show that large numbers of hidden nodes and dense feature extraction are critical to achieving high performance; so critical, in fact, that when these parameters are pushed to their limits, we achieve state-of-the-art performance on both CIFAR-10 and NORB using only a single layer of features. More surprisingly, our best performance is based on K-means clustering, which is extremely fast, has no hyperparameters to tune beyond the model structure itself, and is very easy to implement. Despite the simplicity of our system, we achieve accuracy beyond all previously published results on the CIFAR-10 and NORB datasets (79.6% and 97.2% respectively). |
|
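The single-layer pipeline analysed above is concrete enough to sketch end to end: sample random patches, whiten them, run K-means, and encode with the soft "triangle" activation. The sketch below uses scikit-learn's KMeans on toy data; patch sizes, counts, and the whitening regularizer are illustrative choices, and the final quadrant sum-pooling step of the full pipeline is omitted for brevity.

```python
# Sketch of single-layer K-means feature learning with "triangle" encoding.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
images = rng.random((200, 32, 32))            # toy grayscale "dataset"
patch_size, n_patches, n_centroids = 6, 5000, 64

# 1) Sample random patches and normalize each one.
patches = np.empty((n_patches, patch_size * patch_size))
for i in range(n_patches):
    img = images[rng.integers(0, len(images))]
    r, c = rng.integers(0, 32 - patch_size, size=2)
    p = img[r:r + patch_size, c:c + patch_size].ravel()
    patches[i] = (p - p.mean()) / (p.std() + 1e-8)

# 2) ZCA-style whitening of the patch covariance.
cov = np.cov(patches, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
whiten = eigvec @ np.diag(1.0 / np.sqrt(eigval + 0.1)) @ eigvec.T
patches_w = patches @ whiten

# 3) Learn centroids with K-means.
kmeans = KMeans(n_clusters=n_centroids, n_init=3, random_state=0).fit(patches_w)
centroids = kmeans.cluster_centers_

def triangle_encode(patch_batch):
    # f_k = max(0, mean(z) - z_k), where z_k is the distance to centroid k.
    z = np.linalg.norm(patch_batch[:, None, :] - centroids[None, :, :], axis=2)
    return np.maximum(0.0, z.mean(axis=1, keepdims=True) - z)

features = triangle_encode(patches_w[:10])    # encode a few whitened patches
print(features.shape)                         # (10, n_centroids)
```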
| PCANet: A Simple Deep Learning Baseline for Image Classification? (Apr 2014) |
78.67% |
| Tsung-Han Chan, Kui Jia, Shenghua Gao, Jiwen Lu, Zinan Zeng, Yi Ma In this work, we propose a very simple deep learning network for image classification which comprises only the very basic data processing components: cascaded principal component analysis (PCA), binary hashing, and block-wise histograms. In the proposed architecture, PCA is employed to learn multistage filter banks. It is followed by simple binary hashing and block histograms for indexing and pooling. This architecture is thus named the PCA network (PCANet) and can be designed and learned extremely easily and efficiently. For comparison and better understanding, we also introduce and study two simple variations of the PCANet, namely the RandNet and LDANet. They share the same topology as PCANet, but their cascaded filters are either selected randomly or learned from LDA. We have tested these basic networks extensively on many benchmark visual datasets for different tasks, such as LFW for face verification; MultiPIE, Extended Yale B, AR, and FERET for face recognition; and MNIST for handwritten digit recognition. Surprisingly, for all tasks, such a seemingly naive PCANet model is on par with state-of-the-art features, whether pre-fixed, highly hand-crafted, or carefully learned (by DNNs). Even more surprisingly, it sets new records for many classification tasks on the Extended Yale B, AR, and FERET datasets, and on MNIST variations. Additional experiments on other public datasets also demonstrate the potential of PCANet serving as a simple but highly competitive baseline for texture classification and object recognition. |
|
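To make the cascaded-PCA idea above concrete, here is a sketch of learning one stage of PCANet-style filters: collect mean-removed patches from the training images, take the top principal components of the patch matrix, and reshape them into convolution filters. The binary hashing and block-histogram stages are omitted, and all sizes are illustrative.

```python
# Sketch of learning stage-1 PCANet filters from image patches.
import numpy as np

rng = np.random.default_rng(0)
images = rng.random((100, 28, 28))       # toy training images
k, n_filters = 5, 8                      # patch size and number of PCA filters

# Collect mean-removed k x k patches from every image (dense sampling, stride 4).
patches = []
for img in images:
    for r in range(0, 28 - k + 1, 4):
        for c in range(0, 28 - k + 1, 4):
            p = img[r:r + k, c:c + k].ravel()
            patches.append(p - p.mean())          # remove the patch mean
patches = np.array(patches)                       # shape: (n_patches, k*k)

# PCA filters = leading eigenvectors of the patch scatter matrix.
scatter = patches.T @ patches
eigval, eigvec = np.linalg.eigh(scatter)          # ascending eigenvalues
filters = eigvec[:, ::-1][:, :n_filters].T.reshape(n_filters, k, k)

print("filter bank shape:", filters.shape)        # (n_filters, k, k)
```

A second stage would repeat the same procedure on the filter responses of the first stage before the hashing and histogram steps.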
| Enhanced Image Classification With a Fast-Learning Shallow Convolutional Neural Network (Mar 2015) |
75.86% |
| Mark D. McDonnell, Tony Vladusich We present a neural network architecture and training method designed to enable very rapid training and low implementation complexity. Due to its training speed and very few tunable parameters, the method has strong potential for applications requiring frequent retraining or online training. The approach is characterized by (a) convolutional filters based on biologically inspired visual processing filters, (b) randomly-valued classifier-stage input weights, (c) use of least squares regression to train the classifier output weights in a single batch, and (d) linear classifier-stage output units. We demonstrate the efficacy of the method by applying it to image classification. Our results match existing state-of-the-art results on the MNIST (0.37% error) and NORB-small (2.2% error) image classification databases, but with very fast training times compared to standard deep network approaches. The network's performance on the Google Street View House Numbers (SVHN) database (4% error) is also competitive with state-of-the-art methods. |
|