# Frame Interpolation

In the current version, we sample 1 frame out of every 3 frames in the video. Although we plan to use a VAE to avoid frame loss, for now we provide a frame interpolation tool to restore the sampled videos. The tool is based on AMT.

Interpolation can be useful for scenery videos, but it may not be suitable for videos with fast motion.
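The 1-in-3 sampling described above amounts to a simple index selection. A minimal sketch (`sample_indices` is a hypothetical helper for illustration, not part of this repository):

```python
def sample_indices(n_frames: int, stride: int = 3) -> list[int]:
    """Indices of the frames kept when sampling 1 frame out of every `stride`."""
    return list(range(0, n_frames, stride))

# Keeping 1 of every 3 frames from a 9-frame clip keeps frames 0, 3, and 6.
print(sample_indices(9))  # -> [0, 3, 6]
```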
## Requirement

```shell
conda install -c conda-forge opencv
pip install imageio
```
## Model

We use AMT as our frame interpolation model. After sampling, you can use it to interpolate your video smoothly.
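As a toy illustration of how iterative interpolation grows the frame count, each pass inserts one new frame between every pair of neighbors. Here a linear blend of numbers stands in for the learned AMT network (real frames would be images):

```python
def interpolate_once(frames):
    """Insert a midpoint between every pair of neighbors, doubling density."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        out.append((a + b) / 2)  # stand-in for the learned interpolation model
    out.append(frames[-1])
    return out

frames = [0.0, 3.0, 6.0]       # the 3 frames kept after 1-in-3 sampling
for _ in range(2):             # 2 passes: 2^2 * (3 - 1) + 1 = 9 frames
    frames = interpolate_once(frames)
print(len(frames))  # -> 9
```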
## Usage

The checkpoint file is downloaded automatically to the user's `.cache` directory. You can apply frame interpolation to a single video file or to a folder of videos.
- Process a video file

```shell
python -m tools.frame_interpolation.interpolation your_video.mp4
```

- Process all video files in a target directory

```shell
python -m tools.frame_interpolation.interpolation your_video_dir --output_path samples/interpolation
```
The output video is stored at `output_path`. Its duration equals the total number of frames after interpolation divided by the frame rate.
## Command Line Arguments

- `input`: Path of the input video. A video file path, or a folder path (with `--folder`).
- `--ckpt`: Pretrained AMT model. Default path: `~/.cache/amt-g.pth`.
- `--niter`: Number of interpolation iterations. With `m` input frames, `--niter=n` corresponds to `2^n * (m - 1) + 1` output frames.
- `--fps`: Frame rate of the input video. (Default: 8)
- `--output_path`: Folder path of the output video.
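The relationship between `--niter`, the output frame count, and the output duration can be checked with a small helper (hypothetical, for illustration only; not part of the tool):

```python
def output_frames(m: int, n_iter: int) -> int:
    """Frames produced from m input frames after n_iter interpolation passes."""
    return 2 ** n_iter * (m - 1) + 1

def output_duration(m: int, n_iter: int, fps: float = 8.0) -> float:
    """Duration in seconds: total output frames divided by the frame rate."""
    return output_frames(m, n_iter) / fps

print(output_frames(8, 2))    # 2^2 * 7 + 1 = 29 frames
print(output_duration(8, 2))  # 29 / 8 = 3.625 seconds
```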