mirror of https://github.com/hpcaitech/Open-Sora.git (synced 2026-04-10 04:37:45 +02:00)
Fix typos in README.md (#654)
- Changed "Incoporate" to "Incorporate"
- Changed "seperately" to "separately"
- Changed "applicaiton" to "application"
- Corrected two instances of "infomation" to "information"
This commit is contained in:
parent
ee25f847f9
commit
a82687c11e
@@ -141,7 +141,7 @@ see [here](/assets/texts/t2v_samples.txt) for full prompts.
 - [x] Training Video-VAE and adapt our model to new VAE.
 - [x] Scaling model parameters and dataset size.
-- [x] Incoporate a better scheduler (rectified flow).
+- [x] Incorporate a better scheduler (rectified flow).
 - [x] Evaluation pipeline.
 - [x] Complete the data processing pipeline (including dense optical flow, aesthetics scores, text-image similarity, etc.). See [the dataset](/docs/datasets.md) for more information
 - [x] Support image and video conditioning.
@@ -242,7 +242,7 @@ docker run -ti --gpus all -v .:/workspace/Open-Sora opensora
 | Diffusion | 1.1B | 30M | 70k | Dynamic | [:link:](https://huggingface.co/hpcai-tech/OpenSora-STDiT-v3) |
 | VAE | 384M | 3M | 1M | 8 | [:link:](https://huggingface.co/hpcai-tech/OpenSora-VAE-v1.2) |

-See our **[report 1.2](docs/report_03.md)** for more infomation. Weight will be automatically downloaded when you run the inference script.
+See our **[report 1.2](docs/report_03.md)** for more information. Weight will be automatically downloaded when you run the inference script.

 > For users from mainland China, try `export HF_ENDPOINT=https://hf-mirror.com` to successfully download the weights.
@@ -256,7 +256,7 @@ See our **[report 1.2](docs/report_03.md)** for more infomation. Weight will be
 | mainly 144p & 240p | 700M | 10M videos + 2M images | 100k | [dynamic](/configs/opensora-v1-1/train/stage2.py) | [:link:](https://huggingface.co/hpcai-tech/OpenSora-STDiT-v2-stage2) |
 | 144p to 720p | 700M | 500K HQ videos + 1M images | 4k | [dynamic](/configs/opensora-v1-1/train/stage3.py) | [:link:](https://huggingface.co/hpcai-tech/OpenSora-STDiT-v2-stage3) |

-See our **[report 1.1](docs/report_02.md)** for more infomation.
+See our **[report 1.1](docs/report_02.md)** for more information.

 :warning: **LIMITATION**: This version contains known issues which we are going to fix in the next version (as we save computation resource for the next release). In addition, the video generation may fail for long duration, and high resolution will have noisy results due to this problem.
@@ -298,7 +298,7 @@ pip install gradio spaces
 python gradio/app.py
 ```

-This will launch a Gradio application on your localhost. If you want to know more about the Gradio applicaiton, you can refer to the [Gradio README](./gradio/README.md).
+This will launch a Gradio application on your localhost. If you want to know more about the Gradio application, you can refer to the [Gradio README](./gradio/README.md).

 To enable prompt enhancement and other language input (e.g., 中文输入), you need to set the `OPENAI_API_KEY` in the environment. Check [OpenAI's documentation](https://platform.openai.com/docs/quickstart) to get your API key.
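The environment variables mentioned in the README text above (the HF mirror endpoint and the OpenAI key for prompt enhancement) can be combined into a minimal pre-launch setup; this is only a sketch, and the API key value is a placeholder, not a real credential:

```shell
# Optional: route Hugging Face weight downloads through a mirror
# (the README suggests this for users in mainland China)
export HF_ENDPOINT=https://hf-mirror.com

# Required for prompt enhancement and non-English (e.g. Chinese) input
# in the Gradio app; replace the placeholder with your own key
export OPENAI_API_KEY="your-api-key-here"
```

With both variables exported, the `python gradio/app.py` command shown in the hunk above picks them up from the environment when it starts.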