update video stats

xyupeng 2024-06-15 12:03:42 +08:00
parent 54889b53ae
commit cba233cabb


@@ -97,16 +97,21 @@ The training mainly happens on 360p and 480p. We train the model for 23k steps,
### Third stage
In this stage, we collect ~2M video clips with a total length of 5K hours from all kinds of sources, including:
- Free-license videos from Pexels, Pixabay, Mixkit, etc.
- [MiraData](https://github.com/mira-space/MiraData): a high-quality dataset with long videos, mainly from games and city/scenic exploration.
- [Vript](https://github.com/mutonix/Vript/tree/main): a densely annotated dataset.
- And some other datasets.
While MiraData and Vript have captions from GPT, we use [PLLaVA](https://github.com/magic-research/PLLaVA) to caption the remaining ones. Compared with LLaVA, which is only capable of single-frame/image captioning, PLLaVA is specially designed and trained for video captioning. The accelerated PLLaVA is released in our tools. In practice, we use the pretrained PLLaVA 13B model and select 4 frames from each video for captioning.
Some statistics of the video data used in this stage are shown below. We present basic statistics of duration and resolution, as well as the aesthetic score and optical flow score distributions. We also extract tags for objects and actions from the video captions and count their frequencies.
![stats](/assets/readme/report-03_video_stats.png)
![object_count](/assets/readme/report-03_object_count.png)
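As an illustration, counting object/action tag frequencies from captions might look like the following minimal sketch (the tag lists and any extraction step are hypothetical; the report does not specify the tooling used):

```python
from collections import Counter

# Hypothetical object/action tags already extracted from three video captions;
# in practice an NLP step (e.g. part-of-speech tagging) would produce these.
captions_tags = [
    ["person", "walking", "street"],
    ["car", "driving", "street"],
    ["person", "running", "park"],
]

# Flatten the per-caption tag lists and count how often each tag appears.
tag_counts = Counter(tag for tags in captions_tags for tag in tags)
print(tag_counts.most_common(3))  # most frequent tags first
```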
We mainly train on 720p and 1080p videos in this stage, aiming to extend the model's ability to higher resolutions. We use a mask ratio of 25% during training. The training config is located in [stage3.py](/configs/opensora-v1-2/train/stage3.py). We train the model for 15k steps, which is approximately 2 epochs.
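The 4-frame selection used for captioning above could be done, for example, by picking evenly spaced frame indices. This is a hypothetical sketch, not PLLaVA's actual preprocessing:

```python
def sample_frame_indices(num_frames: int, num_samples: int = 4) -> list[int]:
    """Pick `num_samples` evenly spaced frame indices from a clip.

    A hypothetical sketch of 4-frame selection; PLLaVA's real
    preprocessing may choose frames differently.
    """
    if num_samples >= num_frames:
        return list(range(num_frames))
    step = num_frames / num_samples
    # Center each sampled index within its segment of the clip.
    return [int(step * i + step / 2) for i in range(num_samples)]

print(sample_frame_indices(100))  # 4 indices spread across a 100-frame clip
```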
## Easy and effective model conditioning