diff --git a/README.md b/README.md
index 29a5755..3e7bd4a 100644
--- a/README.md
+++ b/README.md
@@ -267,14 +267,14 @@ torchrun --nproc_per_node 1 --standalone scripts/diffusion/inference.py configs/
We implemented an inference-time scaling sampling method inspired by [Inference-Time Scaling for Diffusion Models beyond Scaling Denoising Steps](https://inference-scale-diffusion.github.io). You can spend more computational resources to get better results. Use it by specifying the sampling option:
```
-torchrun --nproc_per_node 4 --standalone scripts/diffusion/inference.py configs/diffusion/inference/768px_t2i2v_inference_scaling.py --save-dir samples --dataset.data-path assets/texts/sora.csv
+torchrun --nproc_per_node 4 --standalone scripts/diffusion/inference.py configs/diffusion/inference/t2i2v_768px_inference_scaling.py --save-dir samples --dataset.data-path assets/texts/sora.csv
```
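The options named in the table below (`num_subtree`, `num_scaling_steps`, `num_noise`) can be pictured as a search over sampling noise: at each scaling step, every frontier sample is expanded with several noise candidates and only the best-scoring ones are kept. The sketch below is a minimal illustration of that idea only; `verifier_score` and `denoise_step` are hypothetical stand-ins, not this repository's actual implementation.

```python
import random


def verifier_score(sample):
    # Hypothetical stand-in: in practice this would be a learned
    # reward/verifier model scoring the generated video.
    return -abs(sample)


def denoise_step(sample, noise):
    # Hypothetical stand-in for one noise-seeded denoising step.
    return 0.5 * sample + 0.5 * noise


def tree_search_sample(init_sample, num_subtree=3, num_scaling_steps=5, num_noise=1):
    """Sketch of subtree search over noise candidates: expand each
    frontier sample with num_subtree * num_noise candidates per step,
    then keep the top num_subtree candidates."""
    frontier = [init_sample]
    for _ in range(num_scaling_steps):
        candidates = []
        for s in frontier:
            for _ in range(num_subtree * num_noise):
                noise = random.gauss(0.0, 1.0)
                candidates.append(denoise_step(s, noise))
        # Keep only the best-scoring candidates for the next step.
        frontier = sorted(candidates, key=verifier_score, reverse=True)[:num_subtree]
    return max(frontier, key=verifier_score)
```

Larger `num_subtree` and `num_scaling_steps` widen and lengthen the search, which is why the 7-subtree / 8-step setting below takes roughly 1h versus 16min for the 3-subtree / 5-step setting.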
-| Orignal | num_subtree=3<br>num_scaling_steps=5<br>num_noise=1<br>time=16min | num_subtree=7<br>num_scaling_steps=8<br>num_noise=1<br>time=1h |
+| Original | num_subtree=3<br>num_scaling_steps=5<br>num_noise=1<br>time=16min | num_subtree=7<br>num_scaling_steps=8<br>num_noise=1<br>time=1h |
|----------------------|----------------------------------------------------------------|----------------------------------------------------------------|
-| [Video Placeholder 1] | [Video Placeholder 2] | [Video Placeholder 3] |
-| [Video Placeholder 1] | [Video Placeholder 2] | [Video Placeholder 3] |
-
+| [video] | [video] | [video] |
+| [video] | [video] | [video] |
+| [video] | [video] | [video] |
### Reproducibility
@@ -297,7 +297,7 @@ We test the computational efficiency of text-to-video on H100/H800 GPU. For 256x
## Evaluation
-On [VBench](https://huggingface.co/spaces/Vchitect/VBench_Leaderboard), Open-Sora 2.0 significantly narrows the gap with OpenAI’s Sora, reducing it from 4.52% → 0.69% compared to Open-Sora 1.2.
+On [VBench](https://huggingface.co/spaces/Vchitect/VBench_Leaderboard), Open-Sora 2.0 significantly narrows the gap with OpenAI's Sora, reducing it from 4.52% → 0.69% compared to Open-Sora 1.2.
