REFERENCES

1. D. Jiang, ``Systems design of commodity image recognition based on convolution neural network,'' Journal of Beijing Polytechnic College, vol. 20, no. 3, pp. 4-6, 2021.

2. L. Jiao, M. Wang, and Y. Huo, ``A traffic classification and recognition method based on multimodal deep learning,'' Radio Communications Technology, vol. 2, no. 2, pp. 13-17, 2021.

3. C. Zhang and X. Zhang, ``Label embedding based multimodal multi-label emotion recognition,'' Cyber Security and Data Governance, vol. 7, pp. 41-44, 2022.

4. P. Li, X. Wan, and S. Li, ``Image caption of space science experiment based on multi-modal learning,'' Optics and Precision Engineering, vol. 29, no. 12, pp. 12-16, 2021.

5. S. Sun, B. Guo, and X. Yang, ``Embedding consensus autoencoder for cross-modal semantic analysis,'' Computer Science, vol. 48, no. 7, pp. 93-98, 2021.

6. W. Liu and S. Jiang, ``Product identification method based on unlabeled semi-supervised learning,'' Computer Applications and Software, vol. 2022, no. 7, pp. 39-44, 2022.

7. Y. Wang, ``E-commerce commodity entity recognition algorithm based on big data,'' Microcomputer Applications, vol. 37, no. 6, pp. 80-83, 2021.

8. M. J. A. Patwary, W. Cao, Z.-Z. Wang, and M. A. Haque, ``Fuzziness based semi-supervised multimodal learning for patient's activity recognition using RGBDT videos,'' Applied Soft Computing, vol. 120, pp. 120-129, 2022.

9. S. Praharaj, M. Scheffel, H. Drachsler, and M. Specht, ``Literature review on co-located collaboration modeling using multimodal learning analytics—Can we go the whole nine yards?'' IEEE Transactions on Learning Technologies, vol. 14, no. 3, pp. 367-385, 2021.

10. L. Yu, C. Liu, J. Y. H. Yang, and P. Yang, ``Ensemble deep learning of embeddings for clustering multimodal single-cell omics data,'' Bioinformatics, vol. 39, no. 6, pp. 10-18, 2023.

11. C. Mi, T. Wang, and X. Yang, ``An efficient hybrid reliability analysis method based on active learning Kriging model and multimodal-optimization-based importance sampling,'' International Journal for Numerical Methods in Engineering, vol. 122, no. 24, pp. 7664-7682, 2021.

12. A. Rahate, R. Walambe, S. Ramanna, and K. Kotecha, ``Multimodal co-learning: Challenges, applications with datasets, recent advances and future directions,'' Information Fusion, vol. 81, pp. 203-239, 2022.

13. B. Bardak and M. Tan, ``Improving clinical outcome predictions using convolution over medical entities with multimodal learning,'' Artificial Intelligence in Medicine, vol. 117, pp. 102-112, 2021.

14. J. Xiong, F. Li, and X. Zhang, ``Re: Xiong et al.: Multimodal machine learning using visual fields and peripapillary circular OCT scans in detection of glaucomatous optic neuropathy (Ophthalmology. 2022; 129:171-180) reply,'' Ophthalmology, vol. 129, no. 4, pp. 129-139, 2022.

15. E. A. Smith, N. T. Hill, T. Gelb, et al., ``Identification of natural product modulators of Merkel cell carcinoma cell growth and survival,'' Scientific Reports, vol. 11, no. 1, 13597, 2021.

16. Y. Pan, A. Braun, and I. Brilakis, ``Enriching geometric digital twins of buildings with small objects by fusing laser scanning and AI-based image recognition,'' Automation in Construction, vol. 140, 106633, 2022.

17. J. Qin, C. Wang, X. Ran, S. Yang, and B. Chen, ``A robust framework combined saliency detection and image recognition for garbage classification,'' Waste Management, vol. 140, pp. 193-203, 2022.

18. F. Long, ``Simulation of English text recognition model based on ant colony algorithm and genetic algorithm,'' Journal of Intelligent and Fuzzy Systems, vol. 40, no. 4, pp. 1-12, 2021.

19. B. Lu and Z. Chen, ``Live streaming commerce and consumers' purchase intention: An uncertainty reduction perspective,'' Information & Management, vol. 58, 103509, 2021.

20. C.-D. Chen, Q. Zhao, and J.-L. Wang, ``How livestreaming increases product sales: Role of trust transfer and elaboration likelihood model,'' Behaviour & Information Technology, vol. 41, no. 3, pp. 558-573, 2022.