D5802 PPT
1. Data Augmentation
3. Object Detection
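The augmentation step is not detailed on the slides; a minimal sketch of common flip and rotation augmentations for an image array (the `augment` helper is illustrative, not from the project code) could look like:

```python
import numpy as np

def augment(image):
    """Return simple augmented copies of an image array:
    the original, horizontal/vertical flips, and 90-degree rotations."""
    return [image,
            np.fliplr(image),    # horizontal flip
            np.flipud(image),    # vertical flip
            np.rot90(image, 1),  # rotate 90 degrees
            np.rot90(image, 2),  # rotate 180 degrees
            np.rot90(image, 3)]  # rotate 270 degrees

# Usage: a 4x4 single-channel "image"
img = np.arange(16).reshape(4, 4)
aug = augment(img)
print(len(aug))  # 6 variants per input image
```

Each labelled training image yields several variants, which enlarges the training set without new annotation effort.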
LITERATURE SURVEY
• Lars Sommer et al. [6], "Search Area Reduction Fast-RCNN for Fast Vehicle Detection in Large Aerial Imagery": it requires an object proposal method to generate candidate objects, and for remote sensing images (RSI) the object proposal step in Fast R-CNN is expensive.
• Z. Deng et al. [10], "An enhanced deep convolutional neural network for densely packed objects detection in remote sensing images": this method requires every object in a training image to be annotated; otherwise, unlabeled objects may act as negative samples when training the detection model.
ALGORITHM
Multi-Scale Image Block-level Fully Convolutional Neural Network
(MIF-CNN)
• It mainly combines a multi-scale structure with an image block-level F-CNN.
• Objects of the same class can appear at different sizes.
• It is used to detect multi-class objects of different sizes.
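The slides do not give MIF-CNN's exact tiling scheme; a sketch of the block-extraction idea, assuming square blocks at a few fixed scales with 50% overlap (the scale values and helper names are illustrative, not the paper's actual parameters), might look like:

```python
import numpy as np

def image_blocks(image, block_size, stride):
    """Yield (row, col, block) tiles of a 2-D image array."""
    h, w = image.shape[:2]
    for r in range(0, h - block_size + 1, stride):
        for c in range(0, w - block_size + 1, stride):
            yield r, c, image[r:r + block_size, c:c + block_size]

def multi_scale_blocks(image, scales=(32, 64, 128)):
    """Collect blocks at several scales so that objects of different
    sizes fit inside at least one block size."""
    all_blocks = []
    for s in scales:
        stride = s // 2  # 50% overlap between neighbouring blocks
        for r, c, block in image_blocks(image, s, stride):
            # In MIF-CNN each block would be passed to the F-CNN here.
            all_blocks.append((s, r, c, block))
    return all_blocks

img = np.zeros((128, 128))
blocks = multi_scale_blocks(img)
print(len(blocks))
```

Running the F-CNN on blocks from every scale is what lets a single detector handle both small and large objects in the same scene.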
CONVOLUTIONAL NEURAL NETWORK
SYSTEM REQUIREMENTS
Software Requirements:
• Language used - Python
• Development Tool - Spyder3
• Operating System - Windows 10
Hardware Requirements:
• Processor - i3
• Hard Disk Capacity - 250GB
• RAM Capacity - 4GB
SYSTEM ARCHITECTURE
RESULTS
Accuracy for 60% Training and 40% Testing
Accuracy for 70% Training and 30% Testing
[Chart: accuracy (%) vs. number of images (500, 600, 700) for the 60% training / 40% testing split]
Accuracy Results (accuracy in %)

No. of Images   Training 60% /   Training 70% /   Training 80% /
                Testing 40%      Testing 30%      Testing 20%
500                  25               35               51
600                  31               40               55
700                  37               53               71
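The accuracy figures above come from different train/test splits of the image set; a minimal sketch of how such splits can be produced (the `split_dataset` helper is illustrative, not the project's actual code) could be:

```python
import random

def split_dataset(samples, train_frac, seed=0):
    """Shuffle and split a list of samples into train/test subsets."""
    rng = random.Random(seed)   # fixed seed for reproducible splits
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

data = list(range(500))  # placeholder for 500 labelled images
for frac in (0.6, 0.7, 0.8):
    train, test = split_dataset(data, frac)
    print(frac, len(train), len(test))  # 0.6 -> 300 train / 200 test
```

A fixed random seed makes the three splits comparable across runs, so accuracy differences reflect the split ratio rather than which images landed in the test set.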
CONCLUSION
• Using multi-scale and block-level images, objects were detected faster than with the existing system.
FUTURE SCOPE
• In the future, more accurate detection results can be obtained by improving the training data generation strategy and the bounding box generation strategy.
REFERENCES
[1] C. Tao, Y. Tan, H. Cai, and J. Tian, "Airport detection from large IKONOS images using clustered SIFT keypoints and region information," IEEE Geoscience and Remote Sensing Letters, vol. 8, no. 1, pp. 128–132, 2011.
[2] G. Cheng and J. Han, "A survey on object detection in optical remote sensing images," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 117, pp. 11–28, 2016.
[3] G. Cheng, J. Han, P. Zhou, and L. Guo, "Multi-class geospatial object detection and geographic image classification based on collection of part detectors," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 98, no. 1, pp. 119–132, 2014.
[4] G. Zhang, X. Jia, and J. Hu, "Superpixel-based graphical model for remote sensing image mapping," IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 11, pp. 5861–5871, 2015.
[5] J. Han, P. Zhou, D. Zhang, G. Cheng, L. Guo, Z. Liu, S. Bu, and J. Wu, "Efficient, simultaneous detection of multi-class geospatial targets based on visual saliency modeling and discriminative learning of sparse coding," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 89, no. 1, pp. 37–48, 2014.
[6] L. Sommer, N. Schmidt, A. Schumann, and J. Beyerer, "Search Area Reduction Fast-RCNN for Fast Vehicle Detection in Large Aerial Imagery," IEEE Geoscience and Remote Sensing Letters, vol. 11, no. 2, pp. 424–428, 2018.
[7] M. A. Hossain, X. Jia, and M. Pickering, "Subspace detection using a mutual information measure for hyperspectral image classification," IEEE Geoscience and Remote Sensing Letters, vol. 11, no. 2, pp. 424–428, 2014.
[8] Y. Cao, X. Niu, and Y. Dou, "Region-based convolutional neural networks for object detection in very high resolution remote sensing images," in 2016 12th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD), pp. 548–554, IEEE, 2016.
[9] Y. Long, Y. Gong, Z. Xiao, and Q. Liu, "Accurate object localization in remote sensing images based on convolutional neural networks," IEEE Transactions on Geoscience and Remote Sensing, vol. 55, no. 5, pp. 2486–2498, 2017.
[10] Z. Deng, L. Lei, H. Sun, H. Zou, S. Zhou, and J. Zhao, "An enhanced deep convolutional neural network for densely packed objects detection in remote sensing images," International Workshop on Remote Sensing with Intelligent Processing, pp. 1–4, 2017.