diff --git a/docs/commands.md b/docs/commands.md
index 2c948de..d982db8 100644
--- a/docs/commands.md
+++ b/docs/commands.md
@@ -1,5 +1,6 @@
 # Commands
 
+- [Config](#config)
 - [Inference](#inference)
   - [Inference with Open-Sora 1.2](#inference-with-open-sora-12)
   - [Inference with Open-Sora 1.1](#inference-with-open-sora-11)
@@ -12,6 +13,35 @@
   - [Training Hyperparameters](#training-hyperparameters)
   - [Search batch size for buckets](#search-batch-size-for-buckets)
 
+## Config
+Note that our model loading for the VAE and the diffusion model currently supports two sources:
+
+* loading from a local file path
+* loading from HuggingFace
+
+Our config loads from HuggingFace by default.
+If you wish to load from a local path, you need to set `force_huggingface=True`, for instance:
+
+```python
+# for vae
+vae = dict(
+    type="OpenSoraVAE_V1_2",
+    from_pretrained="/root/commonData/OpenSora-VAE-v1.2",
+    micro_frame_size=17,
+    micro_batch_size=4,
+    force_huggingface=True,  # NOTE: set here
+)
+# for diffusion model
+model = dict(
+    type="STDiT3-XL/2",
+    from_pretrained="/root/commonData/OpenSora-STDiT-v3",
+    qk_norm=True,
+    enable_flash_attn=True,
+    enable_layernorm_kernel=True,
+    force_huggingface=True,  # NOTE: set here
+)
+```
 
 ## Inference
 
 You can modify corresponding config files to change the inference settings. See more details [here](/docs/structure.md#inference-config-demos).
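
Since the added section couples local-path loading to `force_huggingface=True`, a small sanity check can catch configs that forget the flag. This is a hypothetical sketch, not part of Open-Sora's codebase; the `needs_force_huggingface` helper and its leading-path heuristic are assumptions:

```python
# Hypothetical helper (not part of Open-Sora) mirroring the rule in the diff:
# when `from_pretrained` points at a local path, `force_huggingface=True`
# should be set alongside it.
def needs_force_huggingface(cfg: dict) -> bool:
    """Return True when the config points at a local checkpoint but
    `force_huggingface` has not been set."""
    path = cfg.get("from_pretrained", "")
    # Heuristic: treat absolute and relative filesystem paths as local;
    # anything else (e.g. an "org/repo" id) as a HuggingFace reference.
    is_local_path = path.startswith(("/", "./", "../"))
    return is_local_path and not cfg.get("force_huggingface", False)

# The vae dict from the diff above, with the flag omitted on purpose:
vae = dict(
    type="OpenSoraVAE_V1_2",
    from_pretrained="/root/commonData/OpenSora-VAE-v1.2",
    micro_frame_size=17,
    micro_batch_size=4,
)
print(needs_force_huggingface(vae))  # True: add force_huggingface=True
```

A check like this could run once at startup over both the `vae` and `model` dicts before any weights are fetched.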