# Open-Sora v1 Technical Report
OpenAI's Sora excels at generating one-minute high-quality videos, yet it has revealed almost no details about how it works. To make AI more "open", we are dedicated to building an open-source version of Sora. This report describes our first attempt at training a Transformer-based video diffusion model.
## Choosing an Efficient Architecture
To reduce the computational cost, we want to build on an existing VAE. Sora uses a spatial-temporal VAE to compress the time dimension, but we found no open-source, high-quality spatial-temporal VAE. [MAGVIT](https://github.com/google-research/magvit)'s 4x4x4 VAE is not open-sourced, and [VideoGPT](https://wilson1yan.github.io/videogpt/index.html)'s 2x4x4 VAE produced low-quality results in our experiments. We therefore use a 2D VAE (from [Stability-AI](https://huggingface.co/stabilityai/sd-vae-ft-mse-original)) in our first version.
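Since a 2D VAE has no temporal layers, it is applied to a video frame by frame; folding the time axis into the batch axis is the usual trick. A minimal shape-level sketch follows — `toy_encode` is a stand-in mimicking sd-vae-ft-mse's 8x spatial downsampling and 4 latent channels, not the real model, and the helper names are illustrative:

```python
import numpy as np

def encode_framewise(video, encode_image):
    """Apply a per-image encoder to every frame of a video by folding the
    time axis into the batch axis -- the standard trick for reusing a 2D
    VAE on video, since it has no temporal layers.
    video: (B, T, C, H, W); encode_image: maps (N, C, H, W) -> (N, c, h, w)."""
    B, T, C, H, W = video.shape
    flat = video.reshape(B * T, C, H, W)
    z = encode_image(flat)                   # encode all frames at once
    return z.reshape(B, T, *z.shape[1:])     # restore the time axis

def toy_encode(x):
    """Stand-in 'VAE': 8x average pooling per spatial axis, 4 output
    channels -- matches the shapes of sd-vae-ft-mse, nothing more."""
    N, C, H, W = x.shape
    pooled = x.reshape(N, C, H // 8, 8, W // 8, 8).mean(axis=(3, 5))
    return np.repeat(pooled[:, :1], 4, axis=1)
```

Because each frame is encoded independently, the latents carry no temporal compression — which is exactly why the token count below stays large.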
Video training involves a huge number of tokens. A one-minute video at 24 fps has 1440 frames. With the VAE downsampling each spatial axis 8x and a patch size of 2, a 512x512 frame becomes 32x32 = 1024 tokens, for 1440x1024 ≈ 1.5M tokens in total. Full attention over 1.5M tokens is prohibitively expensive, so, following [Latte](https://github.com/Vchitect/Latte), we use spatial-temporal attention to reduce the cost.
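As a sanity check on the numbers above, the token count under a 2D VAE plus patch embedding (function name and default factors are illustrative):

```python
def video_token_count(seconds, fps, height, width, vae_down=8, patch=2):
    """Token count when only space is compressed: the 2D VAE leaves the
    time axis intact, so every frame contributes a full token grid."""
    frames = seconds * fps                       # 60 * 24 = 1440
    per_axis = vae_down * patch                  # 8 * 2 = 16x per spatial axis
    tokens_per_frame = (height // per_axis) * (width // per_axis)
    return frames * tokens_per_frame
```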
As shown in the figure, in STDiT (ST for spatial-temporal) we insert a temporal attention layer right after each spatial attention layer. This is similar to variant 3 in the Latte paper, although we do not control the variants to have a similar number of parameters. While the Latte paper claims its variant is better than variant 3, our experiments on 16x256x256 videos show that, at the same number of iterations, the quality ranks DiT (full) > STDiT (sequential) > STDiT (parallel) ≈ Latte. For efficiency, we therefore chose STDiT (sequential). A speed benchmark is provided [here](/docs/acceleration.md#efficient-stdit).
![Architecture Comparison](https://github.com/hpcaitech/Open-Sora-Demo/blob/main/readme/report_arch_comp.png)
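The sequential STDiT layout can be sketched as two axis-folded self-attentions. This is a minimal, projection-free sketch, not the repo's implementation: the point is the reshaping, which turns one O((T·S)²) full attention into O(T·S²) spatial plus O(S·T²) temporal attention.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    """Minimal self-attention over the middle axis; x: (batch, tokens, dim).
    Real blocks add QKV/output projections, heads, and residuals."""
    scores = softmax(x @ x.transpose(0, 2, 1) / np.sqrt(x.shape[-1]))
    return scores @ x

def stdit_block(x):
    """One STDiT block in the 'sequential' layout.
    x: (B, T, S, D) -- batch, frames, spatial tokens, channels."""
    B, T, S, D = x.shape
    # spatial attention: attend within each frame -> fold T into the batch
    x = self_attention(x.reshape(B * T, S, D)).reshape(B, T, S, D)
    # temporal attention: attend across frames per location -> fold S into batch
    x = x.transpose(0, 2, 1, 3).reshape(B * S, T, D)
    x = self_attention(x).reshape(B, S, T, D).transpose(0, 2, 1, 3)
    return x
```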
To focus on video generation, we want to start from a strong image-generation model. [PixArt-α](https://github.com/PixArt-alpha/PixArt-alpha) is an efficiently trained, high-quality image model with a T5-conditioned DiT architecture. We initialize our model with PixArt-α and initialize the projection layer of each inserted temporal attention to zero. This initialization preserves the model's image-generation ability at the start of training, which Latte's architecture cannot. The inserted attention layers increase the parameter count from 580M to 724M.
![Architecture](https://github.com/hpcaitech/Open-Sora-Demo/blob/main/readme/report_arch.jpg)
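The zero-initialization trick can be sketched as follows: with the residual branch's output projection set to zero, the inserted temporal attention is an exact identity at initialization, so the pretrained PixArt-α image weights are preserved. The attention body is elided here and the names are illustrative:

```python
import numpy as np

def make_temporal_branch(dim):
    """Residual temporal-attention branch whose output projection is
    initialized to zero, so at step 0 the block is an exact identity."""
    W_out = np.zeros((dim, dim))  # zero-init: branch contributes nothing yet
    def branch(x, temporal_attn=lambda h: h):  # attn body is a placeholder
        return x + temporal_attn(x) @ W_out    # residual; equals x at init
    return branch
```

As `W_out` receives gradients, the temporal branch gradually turns on without ever perturbing the image prior at the start.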
Following the success of PixArt-α and Stable Video Diffusion, we also adopt a progressive training strategy: 16x256x256 training on a 366K-clip pretraining dataset, then 16x256x256, 16x512x512, and 64x512x512 training on a 20K-clip dataset. With extended position embeddings, this strategy greatly reduces the computational cost.
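One common way to "extend position embeddings" to a longer sequence or larger grid is to interpolate the trained table. A hedged sketch of that idea for a 1-D table — this may differ from the repo's exact scheme:

```python
import numpy as np

def extend_pos_embed(pe, new_len):
    """Stretch a learned position-embedding table (old_len, dim) to
    new_len positions by per-channel linear interpolation, so weights
    trained at one length can warm-start a longer setting."""
    old_len, dim = pe.shape
    old_pos = np.linspace(0.0, 1.0, old_len)
    new_pos = np.linspace(0.0, 1.0, new_len)
    return np.stack(
        [np.interp(new_pos, old_pos, pe[:, d]) for d in range(dim)], axis=1
    )
```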
We also tried the 3D patch embedder in DiT; however, with 2x downsampling on the temporal dimension, the generated videos were of low quality. We therefore leave temporal downsampling to a temporal VAE in the next release. For now, we sample every 3rd frame for 16-frame training and every 2nd frame for 64-frame training.
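The strided frame sampling can be sketched as follows (helper name is illustrative):

```python
def sample_frame_indices(total_frames, num_frames, interval, start=0):
    """Indices for strided clip sampling: every `interval`-th frame,
    e.g. interval=3 for 16-frame training, interval=2 for 64 frames."""
    span = (num_frames - 1) * interval + 1  # frames the clip must cover
    if total_frames < start + span:
        raise ValueError("clip too short for this (num_frames, interval)")
    return [start + i * interval for i in range(num_frames)]
```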
## Data Is Central to Training High-Quality Models
We find that the quantity and quality of data have a huge impact on the quality of generated videos — even more than the model architecture and training strategy. For now, we prepared only a first batch of 366K video clips from [HD-VG-130M](https://github.com/daooshee/HD-VG-130M). The quality of these videos is uneven and the captions are not accurate enough. We therefore collected a further 20K relatively high-quality videos from [Pexels](https://www.pexels.com/), which provides free-license videos. We caption each video with LLaVA, an image-captioning model, using three frames and a carefully designed prompt; with this prompt, LLaVA generates high-quality captions.
![Caption](https://github.com/hpcaitech/Open-Sora-Demo/blob/main/readme/report_caption.png)
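Extracting the three frames for captioning might look like the following, assuming OpenCV; the helper names are illustrative and the LLaVA call itself is omitted:

```python
def evenly_spaced_indices(total_frames, k=3):
    """k frame indices spread across the clip (first, middle(s), last)."""
    if total_frames < k:
        raise ValueError("not enough frames")
    return [round(i * (total_frames - 1) / (k - 1)) for i in range(k)]

def grab_frames(path, k=3):
    """Read k evenly spaced frames with OpenCV; these are what an image
    captioner such as LLaVA would see for one video."""
    import cv2  # imported lazily so the index helper stays dependency-free
    cap = cv2.VideoCapture(path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frames = []
    for idx in evenly_spaced_indices(total, k):
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    cap.release()
    return frames
```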
Since we place more emphasis on data quality, we plan to collect more data and build a video preprocessing pipeline in the next release.
## Training Details
Within a limited training budget, we could only explore so much. We found a learning rate of 1e-4 too large and reduced it to 2e-5. When training with a large batch size, we found `fp16` less stable than `bf16`, occasionally causing generation to fail, so we switched to `bf16` for 64x512x512 training. For the other hyperparameters, we follow prior work.
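The choices above, collected as an illustrative config fragment — the keys are not the repo's actual config names:

```python
# Hedged summary of the hyperparameters described in this section.
train_cfg = {
    "lr": 2e-5,       # 1e-4 proved too large; lowered to 2e-5
    "dtype": "bf16",  # fp16 was unstable at large batch; bf16 for 64x512x512
    "stages": [       # progressive training schedule (frames x H x W, data)
        ("pretrain", "16x256x256", "366K clips"),
        ("finetune", "16x256x256", "20K HQ clips"),
        ("finetune", "16x512x512", "20K HQ clips"),
        ("finetune", "64x512x512", "20K HQ clips"),
    ],
}
```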
## Loss Curves
16x256x256 Pretraining Loss Curve
![16x256x256 Pretraining Loss Curve](https://github.com/hpcaitech/Open-Sora-Demo/blob/main/readme/report_loss_curve_1.png)
16x256x256 HQ Training Loss Curve
![16x256x256 HQ Training Loss Curve](https://github.com/hpcaitech/Open-Sora-Demo/blob/main/readme/report_loss_curve_2.png)
16x512x512 HQ Training Loss Curve
![16x512x512 HQ Training Loss Curve](https://github.com/hpcaitech/Open-Sora-Demo/blob/main/readme/report_loss_curve_3.png)