3 articles found
1. Multi-Exposure Motion Estimation Based on Deep Convolutional Networks (Cited: 1)
Authors: Zhi-Feng Xie, Yu-Chen Guo, Shu-Han Zhang, Wen-Jun Zhang, Li-Zhuang Ma. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2018, No. 3, pp. 487-501 (15 pages).
In motion estimation, illumination change is a persistent obstacle that often causes severe performance degradation in optical flow computation. The essential reason is that most estimation methods fail to formalize a unified definition in the color or gradient domain for diverse environmental changes. In this paper, we propose a new solution based on deep convolutional networks to address this key issue. Our idea is to train deep convolutional networks to represent the complex motion features under illumination change, and further predict the final optical flow fields. To this end, we construct a training dataset of multi-exposure image pairs by performing a series of non-linear adjustments on traditional optical flow estimation datasets. Our multi-exposure flow network (MEFNet) model consists of three main components: a low-level feature network, a fusion feature network, and a motion estimation network. The former two components form the contracting part of our model, which extracts and represents the multi-exposure motion features; the third component is the expanding part, which learns and predicts the high-quality optical flow. Compared with many state-of-the-art methods, our motion estimation method can eliminate the obstacle of illumination change and yields optical flow results with competitive accuracy and time efficiency. Moreover, the good performance of our model is also demonstrated in multi-exposure video applications such as HDR (high dynamic range) composition and flicker removal.
Keywords: motion estimation, optical flow, convolutional neural network, multi-exposure
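The abstract describes building training data by applying non-linear exposure adjustments to existing optical flow datasets. A minimal sketch of how such a multi-exposure image pair might be synthesized; the gain/gamma values and function names here are illustrative assumptions, not the paper's actual parameters:

```python
import numpy as np

def simulate_exposure(image, gain, gamma):
    """Apply a simple non-linear exposure adjustment to a [0, 1] float image."""
    return np.clip(gain * image, 0.0, 1.0) ** gamma

def make_multi_exposure_pair(image):
    """Produce an under-/over-exposed pair from one source frame."""
    under = simulate_exposure(image, gain=0.5, gamma=1.8)  # darker, steeper tone curve
    over = simulate_exposure(image, gain=2.0, gamma=0.6)   # brighter, compressed highlights
    return under, over

# Example: a synthetic 4x4 grayscale frame with a linear intensity ramp.
frame = np.linspace(0.0, 1.0, 16).reshape(4, 4)
under, over = make_multi_exposure_pair(frame)
```

Pairs like this, aligned with the ground-truth flow of the original frames, would expose the network to the illumination variation the paper targets.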
2. A Novel Fine-Grained Method for Vehicle Type Recognition Based on the Locally Enhanced PCANet Neural Network (Cited: 4)
Authors: Qian Wang, You-Dong Ding. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2018, No. 2, pp. 335-350 (16 pages).
In this paper, we propose a locally enhanced PCANet neural network for fine-grained classification of vehicles. The proposed method adopts the unsupervised PCANet with fewer layers and simpler parameters than the majority of state-of-the-art machine learning methods. It simplifies calculation steps and manual labeling, and enables vehicle types to be recognized without time-consuming training. Experimental results show that, compared with traditional pattern recognition methods and multi-layer CNN methods, the proposed method achieves an optimal balance across varying sample library scales, angle deviations, and training speed. They also indicate that introducing appropriate local features, at scales different from the general feature, is instrumental in improving the recognition rate. The 7-angle in 180° (12-angle in 360°) classification modeling scheme proves effective: it mitigates the drop in recognition rate caused by angle deviations and improves recognition accuracy in practice.
Keywords: fine-grained classification, PCANet, local enhancement, vehicle type recognition
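PCANet's core idea is that each convolutional stage's filter bank is learned without backpropagation, by taking principal components of mean-removed image patches. A hedged sketch of one such stage; the patch size and filter count are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def pcanet_stage_filters(images, patch_size=5, num_filters=4):
    """Learn one PCANet stage: leading PCA eigenvectors of mean-removed
    patches become the stage's convolution filters."""
    k = patch_size
    patches = []
    for img in images:
        h, w = img.shape
        for i in range(h - k + 1):
            for j in range(w - k + 1):
                p = img[i:i + k, j:j + k].ravel()
                patches.append(p - p.mean())     # remove the patch mean, as PCANet does
    X = np.stack(patches, axis=1)                # shape (k*k, num_patches)
    cov = X @ X.T / X.shape[1]                   # patch covariance
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    top = eigvecs[:, ::-1][:, :num_filters]      # keep the leading components
    return top.T.reshape(num_filters, k, k)

# Example: learn a small filter bank from a few random "images".
rng = np.random.default_rng(0)
imgs = [rng.random((12, 12)) for _ in range(3)]
filters = pcanet_stage_filters(imgs, patch_size=5, num_filters=4)
```

Because the filters come from an eigendecomposition rather than gradient descent, the stage "trains" in one pass, which is the source of the speed advantage the abstract claims.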
3. Photographic Appearance Enhancement via Detail-Based Dictionary Learning (Cited: 2)
Authors: Zhi-Feng Xie, Shi Tang, Dong-Jin Huang, You-Dong Ding, Li-Zhuang Ma. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2017, No. 3, pp. 417-429 (13 pages).
A number of edge-aware filters can efficiently boost the appearance of an image through detail decomposition and enhancement. However, they often fail to produce a photographic enhanced appearance because of visible artifacts, especially noise, halos, and unnatural contrast. The essential reason is that the guidance and constraints toward high-quality appearance are insufficient during enhancement. Our idea is therefore to train a detail dictionary from a large set of high-quality patches in order to constrain and control the entire appearance enhancement. In this paper, we propose a novel learning-based enhancement method for photographic appearance, which includes two main stages: dictionary training and sparse reconstruction. In the training stage, we construct a training set of detail patches extracted from high-quality photos, and then train an overcomplete detail dictionary by iteratively minimizing an ℓ1-norm energy function. In the reconstruction stage, we employ the trained dictionary to reconstruct the boosted detail layer, and further formalize a gradient-guided optimization function to improve the local coherence between patches. Moreover, we propose two evaluation metrics to measure the performance of appearance enhancement. The final experimental results demonstrate the effectiveness of our learning-based enhancement method.
Keywords: image enhancement, dictionary learning, edge-aware filter
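The reconstruction stage poses an ℓ1-regularized sparse coding problem over the trained dictionary. A generic ISTA (iterative shrinkage-thresholding) solver sketch for that kind of objective; the dictionary, λ, and step count below are illustrative assumptions, and the paper's actual dictionary training and gradient-guided optimization are not reproduced here:

```python
import numpy as np

def ista_sparse_code(D, y, lam=0.05, steps=200):
    """Minimize 0.5 * ||D a - y||^2 + lam * ||a||_1 by iterative
    shrinkage-thresholding (ISTA)."""
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the smooth term
    a = np.zeros(D.shape[1])
    for _ in range(steps):
        grad = D.T @ (D @ a - y)               # gradient of the quadratic data term
        z = a - grad / L                       # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return a

# Example: code a detail patch against a random overcomplete dictionary.
rng = np.random.default_rng(1)
D = rng.standard_normal((25, 50))
D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
a_true = np.zeros(50)
a_true[[3, 17]] = [1.0, -0.5]                  # a sparse ground-truth code
y = D @ a_true                                 # the observed patch
a = ista_sparse_code(D, y)
```

Reconstructing each boosted detail patch as `D @ a` with a sparse `a` is what lets the high-quality dictionary constrain the enhancement, suppressing the halos and noise the abstract mentions.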