Abstract
Conventional Raman spectroscopy, used for the qualitative and quantitative determination of substances, has been widely applied in industrial manufacturing and academic research. Traditional Raman analysis, however, relies heavily on human experience. Because the spectrograms of media at different concentrations contain large amounts of similar information, the extraction of feature peaks is especially crucial. Although manual extraction of feature peaks can reduce signal dimensionality to some extent, it may also cause loss of spectral information, misclassification, and omission of feature peaks. This research addresses the problem with a feature dimensionality reduction method based on an auto-encoder and an attention mechanism: a deep learning approach extracts features from the spectrograms, and the features are fed into a neural network for concentration prediction. In rigorous testing, the model predicts concentration to a resolution of 0.01 units with a 13% error, providing a reliable aid for timely, manual replenishment of the culture medium. Extensive comparison experiments further show that the auto-encoder-based dimensionality reduction method is more accurate than machine learning methods. The research demonstrates that applying deep learning to Raman spectroscopy can produce positive outcomes and has great potential.
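The pipeline described above (attention-weighted auto-encoder for dimensionality reduction, followed by a regression network) can be sketched in minimal form. The sketch below is illustrative only: the layer sizes, the single-layer attention gate over wavenumber bins, and the random toy spectra are assumptions, not the authors' actual architecture or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for Raman spectra: 64 samples x 200 wavenumber bins.
X = rng.normal(size=(64, 200))


def relu(z):
    return np.maximum(z, 0.0)


def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)


class AttentionAutoEncoder:
    """Minimal auto-encoder with a soft attention gate over input bins.

    The attention weights re-scale the wavenumber bins so that the
    encoder can focus on informative feature peaks before compressing
    the spectrum into a low-dimensional latent code.
    """

    def __init__(self, n_in, n_latent, seed=0):
        r = np.random.default_rng(seed)
        s = 0.1
        self.W_att = r.normal(scale=s, size=(n_in, n_in))    # attention scores
        self.W_enc = r.normal(scale=s, size=(n_in, n_latent))  # compression
        self.W_dec = r.normal(scale=s, size=(n_latent, n_in))  # reconstruction

    def attention(self, x):
        # One softmax-normalised weight per wavenumber bin, per sample.
        return softmax(x @ self.W_att)

    def encode(self, x):
        return relu((x * self.attention(x)) @ self.W_enc)

    def forward(self, x):
        # Reconstruction from the latent code; training would minimise
        # the reconstruction error (omitted here for brevity).
        return self.encode(x) @ self.W_dec


model = AttentionAutoEncoder(n_in=200, n_latent=16)
Z = model.encode(X)       # compressed features for the downstream regressor
X_hat = model.forward(X)  # reconstructed spectra
```

In such a scheme, the latent codes `Z` (rather than the raw 200-bin spectra) would be passed to a small regression network that outputs the predicted concentration.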
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Bai, Y., Xu, M., Qian, P. (2022). Application of Auto-encoder and Attention Mechanism in Raman Spectroscopy. In: Huang, DS., Jo, KH., Jing, J., Premaratne, P., Bevilacqua, V., Hussain, A. (eds) Intelligent Computing Methodologies. ICIC 2022. Lecture Notes in Computer Science(), vol 13395. Springer, Cham. https://doi.org/10.1007/978-3-031-13832-4_57
Print ISBN: 978-3-031-13831-7
Online ISBN: 978-3-031-13832-4