Abstract
Machine learning is the subject of numerous scientific and applied research projects and an important component of systems used in medicine, banking, computer security, communications, and many other domains. It is one of the most active areas of research, with constant progress, the development of new algorithms and approaches, and improvements to existing methods. The performance of a machine learning model is strongly influenced by the dataset on which it is trained, that is, by the quality of the data, the balance of its value distribution, and the size of the set. This poses a potential problem for machine learning methods that require previously labelled data, because data collection can be extremely complex, expensive, and time-consuming. In such cases, a classical machine learning model is unlikely to perform well. One approach to this problem is transfer learning, in which the model uses data not only from the target domain but also from another, ideally related, domain. In this paper, conditions of limited dataset availability are simulated, and the performance of three neural-network-based models is analysed, one of which builds on a pretrained model. The procedure for creating the training sets is described, and the results of the analysis of the three models for different dataset sizes are presented.
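As a rough illustration of the transfer-learning setup outlined above, the sketch below assumes a TensorFlow/Keras pipeline in which a pretrained MobileNetV2 backbone is frozen and used as a feature extractor, while a small classification head is trained on the limited target dataset. The class count, image size, and dataset variables are hypothetical placeholders, not the paper's actual configuration.

    # Minimal, hypothetical sketch of transfer learning with a frozen pretrained backbone.
    import tensorflow as tf

    NUM_CLASSES = 5          # assumed number of target classes
    IMG_SIZE = (224, 224)    # assumed input resolution

    # Pretrained backbone: weights learned on the large source domain (ImageNet).
    base = tf.keras.applications.MobileNetV2(
        input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
    base.trainable = False   # freeze the backbone; only the new head is trained

    # Small classification head trained on the limited target dataset.
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(small_train_ds, validation_data=val_ds, epochs=10)  # placeholder datasets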