A NEURAL NETWORK MODEL FOR HUMAN FACE CONTOUR DETECTION
1. Adithya, U., Nagaraju, C. (2021). Object Motion Direction Detection and Tracking for Automatic Video Surveillance. International Journal of Education and Management Engineering (IJEME), 11, 2, 32–39. DOI: 10.5815/ijeme.2021.02.04.
2. Alpatov, B., Babayan, P. (2008). Selection of moving objects in a sequence of multispectral images in the presence of geometrically distorted ones. Herald of RGRTU, 23, 18–25.
3. Badrinarayanan, V., Kendall, A., Cipolla, R. (2017). SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. https://arxiv.org/pdf/1511.00561v2.pdf.
4. Bochkovskiy, A., Wang, C., Liao, H. (2020). YOLOv4: Optimal Speed and Accuracy of Object Detection. https://arxiv.org/pdf/2004.10934.pdf.
5. Deepa, I., Sharma, A. (2022). Multi-Module Convolutional Neural Network Based Optimal Face Recognition with Minibatch Optimization. International Journal of Image, Graphics and Signal Processing (IJIGSP), 14, 3, 32–46. DOI: 10.5815/ijigsp.2022.03.04.
6. Diwakar, D., Raj, D. (2022). Recent Object Detection Techniques: A Survey. International Journal of Image, Graphics and Signal Processing (IJIGSP), 14, 2, 47–60. DOI: 10.5815/ijigsp.2022.02.05.
7. Zhao, H., Shi, J., Qi, X., Wang, X., Jia, J. (2017). Pyramid Scene Parsing Network. https://arxiv.org/pdf/1612.01105.pdf.
8. Hoai Viet, V., Nhat Duy, H. (2022). Object Tracking: An Experimental and Comprehensive Study on Vehicle Object in Video. International Journal of Image, Graphics and Signal Processing (IJIGSP), 14, 1, 64–81. DOI: 10.5815/ijigsp.2022.01.06.
9. Horytov, A., Yakovchenko, S. (2017). Selection of parametrically defined objects on a low-resolution image. Management, computing and informatics, 2, 88–90.
10. Hu, Z., Tereykovskiy, I., Zorin, Y., Tereykovska, L., Zhibek, A. (2019). Optimization of convolutional neural network structure for biometric authentication by face geometry. Advances in Intelligent Systems and Computing, 754, 567–577.
11. Kong, T., Sun, F., Liu, H., Jiang, Y., Li, L., Shi, J. (2020). FoveaBox: Beyound Anchor-Based Object Detection, IEEE Trans. Image Process, 29, 7389–7398. https://doi.org/10.1109/TIP.2020.3002345.
12. Farabet, C., Couprie, C., Najman, L., LeCun, Y. (2013). Learning Hierarchical Features for Scene Labeling. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8), 1915–1929. URL: http://yann.lecun.com/exdb/publis/pdf/farabet-pami-13.pdf (access date: 02/02/2017).
13. Muraviev, V. (2010). Models and algorithms of image processing and analysis for systems of automatic tracking of aerial objects. PhD thesis: special. 05.13.01 – system analysis, management and processing. Ryazan, 17.
14. Oniskiv, P., Lytvynenko, Y. (2019). Analysis of image segmentation methods. Theoretical and applied aspects of radio engineering, instrument engineering and computer technologies: materials of IV all-Ukrainian. Science and technology conf., pp. 48–49.
15. Perfil'ev, D. (2018). Segmentation Object Strategy on Digital Image. Journal of Siberian Federal University. Engineering & Technologies, 11(2), 213–220.
16. Ronneberger, O., Fischer, P., Brox, T. (2015). U-Net: Convolutional Networks for Biomedical Image Segmentation. https://arxiv.org/abs/1505.04597.
17. Shen, J. (2004). Motion detection in colour image sequence and shadow elimination. Visual Communications and Image Processing, 5308, 731–740.
18. Shkurat, O. (2020). Methods and information technology of processing archival medical images. PhD Thesis: 05.13.06. Kyiv, 211.
19. Stulov, N. (2006). Algorithms for the selection of basic features and methods of formation invariant to rotation, transfer, and rescaling of features of objects. PhD thesis: special. 05.13.01 – system analysis, management and processing. Vladimir, 16.
20. Tereikovskyi, O. (2022). The method of neural network selection of objects on raster images: master's thesis: 123 Computer Engineering. Kyiv, 104.
21. Toliupa, S., Kulakov, Y., Tereikovskyi, I., Tereikovskyi, O., Tereikovska, L., Nakonechnyi, V. (2020). Keyboard Dynamic Analysis by Alexnet Type Neural Network. IEEE 15th International Conference on Advanced Trends in Radioelectronics, Telecommunications and Computer Engineering, pp. 416–420.
22. Toliupa, S., Tereikovskiy, I., Dychka, I., Tereikovska, L., & Trush, A. (2019). The Method of Using Production Rules in Neural Network Recognition of Emotions by Facial Geometry. 2019 3rd International Conference on Advanced Information and Communications Technologies (AICT), pp. 323–327, doi: 10.1109/AIACT.2019.8847847.
23. Wang, H., Wang, X., Yu, L., & Zhong, F. (2019). Design of Mean Shift Tracking Algorithm Based on Target Position Prediction. 2019 IEEE International Conference on Mechatronics and Automation (ICMA), pp. 1114–1119, doi: 10.1109/ICMA.2019.8816295.
24. Yudin, O., Toliupa, S., Korchenko, O., Tereikovska, L., Tereikovskyi, I., Tereikovskyi, O. (2020). Determination of Signs of Information and Psychological Influence in the Tone of Sound Sequences. IEEE 2nd International Conference on Advanced Trends in Information Theory, 276–280.
25. Zhang, S., Wen, L., Bian, X., Lei, Z., Li, S. Z. (2018). Single-Shot Refinement Neural Network for Object Detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4203–4212.