REFERENCES

1. Li B., Yan J., Wu W., Zhu Z., Hu X., Jun. 2018, High performance visual tracking with Siamese region proposal network, in Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit., pp. 8971-8980
2. Li B., Wu W., Wang Q., Zhang F., Xing J., Yan J., Jun. 2019, SiamRPN++: evolution of Siamese visual tracking with very deep networks, in Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit., pp. 4282-4291
3. Yu Y., Xiong Y., Huang W., Scott M. R., Jun. 2020, Deformable Siamese attention networks for visual object tracking, in Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit., pp. 6728-6737
4. Bolme D. S., Beveridge J. R., Draper B. A., Lui Y. M., Jun. 2010, Visual object tracking using adaptive correlation filters, in Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit., pp. 2544-2550
5. Henriques J. F., Caseiro R., Martins P., Batista J., Mar. 2015, High-speed tracking with kernelized correlation filters, IEEE Trans. Pattern Anal. Mach. Intell., Vol. 37, No. 3, pp. 583-596
6. Danelljan M., Robinson A., Khan F. S., Felsberg M., Oct. 2016, Beyond correlation filters: learning continuous convolution operators for visual tracking, in Proc. Eur. Conf. Comput. Vis., pp. 1-16
7. Danelljan M., Bhat G., Khan F. S., Felsberg M., Jun. 2017, ECO: efficient convolution operators for tracking, in Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit., pp. 6638-6646
8. Bertinetto L., Valmadre J., Henriques J. F., Vedaldi A., Torr P. H. S., Nov. 2016, Fully-convolutional Siamese networks for object tracking, in Proc. Eur. Conf. Comput. Vis., pp. 850-865
9. Ren S., He K., Girshick R., Sun J., Jun. 2017, Faster R-CNN: towards real-time object detection with region proposal networks, IEEE Trans. Pattern Anal. Mach. Intell., Vol. 39, No. 6, pp. 1137-1149
10. Zhu Z., Wang Q., Li B., Wu W., Yan J., Hu W., Sep. 2018, Distractor-aware Siamese networks for visual object tracking, in Proc. Eur. Conf. Comput. Vis., pp. 101-117
11. Li Z., Yang J., Liu Z., Yang X., Jeon G., Wu W., Jun. 2019, Feedback network for image super-resolution, in Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit., pp. 3862-3871
12. Kim J., Kim W., Dec. 2020, Attentive feedback feature pyramid network for shadow detection, IEEE Signal Process. Lett., Vol. 27, pp. 1964-1968
13. Zhao T., Wu X., Jun. 2019, Pyramid feature attention network for saliency detection, in Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit., pp. 3085-3094
14. Hu J., Shen L., Sun G., Jun. 2018, Squeeze-and-excitation networks, in Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit., pp. 7132-7141
15. Lin T.-Y., Maire M., Belongie S., Bourdev L., Girshick R., Hays J., Perona P., Ramanan D., Zitnick C. L., Dollar P., Sep. 2014, Microsoft COCO: common objects in context, in Proc. Eur. Conf. Comput. Vis., pp. 740-755
16. Russakovsky O., Deng J., Su H., Krause J., Satheesh S., Ma S., Huang Z., Karpathy A., Khosla A., Bernstein M., Berg A. C., Fei-Fei L., 2015, ImageNet large scale visual recognition challenge, Int. J. Comput. Vis., Vol. 115, No. 3, pp. 211-252
17. Xu N., Yang L., Fan Y., Yang J., Yue D., Liang Y., Price B., Cohen S., Huang T., Sep. 2018, YouTube-VOS: sequence-to-sequence video object segmentation, in Proc. Eur. Conf. Comput. Vis., pp. 585-601
18. Real E., Shlens J., Mazzocchi S., Pan X., Vanhoucke V., Jul. 2017, YouTube-BoundingBoxes: a large high-precision human-annotated data set for object detection in video, in Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit., pp. 5296-5305
19. Kristan M., et al., Oct. 2016, The visual object tracking VOT2016 challenge results, in Proc. Eur. Conf. Comput. Vis.
20. Kristan M., et al., Sep. 2018, The sixth visual object tracking VOT2018 challenge results, in Proc. Eur. Conf. Comput. Vis.
21. Wang Q., Zhang L., Bertinetto L., Hu W., Torr P. H. S., Jun. 2019, Fast online object tracking and segmentation: a unifying approach, in Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit., pp. 1328-1338
22. Yao H., Zhu D.-L., Jiang B., Yu P., Oct. 2019, Negative log likelihood ratio loss for deep neural network classification, in Proc. Future Tech. Conf., pp. 276-282
23. Tan H., Zhang X., Zhang Z., Lan L., Zhang W., Luo Z., 2021, Nocal-Siam: refining visual features and response with advanced non-local blocks for real-time Siamese tracking, IEEE Trans. Image Process., Vol. 30, pp. 2656-2668
24. Wang X., Girshick R., Gupta A., He K., Jun. 2018, Non-local neural networks, in Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit., pp. 7794-7803
25. Zhou W., Wen L., Zhang L., Du D., Luo T., Wu Y., 2021, SiamCAN: real-time visual tracking based on Siamese center-aware network, IEEE Trans. Image Process., Vol. 30, pp. 3597-3609
26. Jiang M., Zhao Y., Kong J., Aug. 2021, Mutual learning and feature fusion Siamese networks for visual object tracking, IEEE Trans. Circuits Syst. Video Technol., Vol. 31, No. 8, pp. 3154-3167
27. Li Q., Li Z., Lu L., Jeon G., Liu K., Yang X., Sep. 2019, Gated multiple feedback network for image super-resolution, in Proc. Brit. Mach. Vis. Conf., pp. 1-12
28. He K., Zhang X., Ren S., Sun J., Jun. 2016, Deep residual learning for image recognition, in Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit., pp. 770-778