Test-Time Adaptation for Video Frame Interpolation via Meta-Learning.

IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 44 (12): 14 – Nov 8, 2022

Abstract

Video frame interpolation is a challenging problem that spans widely varying scenarios depending on foreground and background motion, frame rate, and occlusion. Generalizing across such different scenes is therefore difficult for a single network with fixed parameters. Ideally, one could use a different network for each scenario, but this would be computationally infeasible in practical applications. In this work, we propose MetaVFI, an adaptive video frame interpolation algorithm that exploits additional information readily available at test time but not used in previous works. We first show the benefits of test-time adaptation through simple fine-tuning of a network, and then greatly improve its efficiency by incorporating meta-learning. As a result, we obtain significant performance gains with only a single gradient update and without introducing any additional parameters. Moreover, the proposed MetaVFI algorithm is model-agnostic and can easily be combined with any video frame interpolation network. We show that our adaptive framework substantially improves the performance of baseline video frame interpolation networks on multiple benchmark datasets.
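
The abstract describes adaptation with a single gradient update using information already available at test time. The sketch below (PyTorch) illustrates one plausible form of such an inner-loop update for a generic two-frame interpolation model; the model interface, the L1 loss, the learning rate, and the function name are assumptions for illustration, not details taken from the paper.

# Minimal sketch (assumed details, not the paper's exact code): one-step
# test-time adaptation of a meta-trained frame interpolation model.
import copy
import torch
import torch.nn.functional as F

def test_time_adapt(model, frames, inner_lr=1e-4):
    # frames: list of consecutive video frames, each a (1, C, H, W) tensor.
    # The model is assumed to map a pair of frames to their middle frame:
    # model(I_prev, I_next) -> I_mid. This interface is an assumption.
    adapted = copy.deepcopy(model)
    optimizer = torch.optim.SGD(adapted.parameters(), lr=inner_lr)

    # Triplets of existing frames provide free supervision at test time:
    # the middle frame of each triplet is already present in the input video.
    loss = 0.0
    for t in range(1, len(frames) - 1):
        pred_mid = adapted(frames[t - 1], frames[t + 1])
        loss = loss + F.l1_loss(pred_mid, frames[t])

    # One inner-loop gradient step; meta-training of the initial parameters
    # is what makes a single update sufficient.
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return adapted

# Usage (hypothetical model and frame list):
# adapted = test_time_adapt(vfi_model, frames)
# with torch.no_grad():
#     mid = adapted(frames[k], frames[k + 1])  # new in-between frame

The adapted copy is then used only for the current video, so no extra parameters are added to the base network.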

 

eISSN: 1939-3539
DOI: 10.1109/TPAMI.2021.3129819
PMID: 34813468


Journal

IEEE Transactions on Pattern Analysis and Machine Intelligence (PubMed)

Published: Nov 8, 2022