diff --git a/docs/installation.md b/docs/installation.md
index e17d0ef..bf7c711 100644
--- a/docs/installation.md
+++ b/docs/installation.md
@@ -82,7 +82,7 @@ export PATH="/path/to/vbench:$PATH"
 You need to install VBench mannually by:
 ```bash
 # first clone their repo
-cd ..
+cd .. # assumes you are in the Open-Sora root folder; you may install elsewhere, but make sure the soft-link paths below are correct
 git clone https://github.com/Vchitect/VBench.git
 cd VBench
 git checkout v0.1.2
@@ -92,10 +92,10 @@ vim vbench2_beta_i2v/utils.py
 # find `image_root` in the `load_i2v_dimension_info` function, change it to point to your appropriate image folder
 
 # last, create softlinks
-cd Open-Sora # or `cd Open-Sora-dev` for development
-ln -s ../VBench/vbench vbench
-ln -s ../VBench/vbench2_beta_i2v vbench2_beta_i2v
-# later you need to make sure to call evaluatio from your Open-Sora folder, else vbench, vbench2_beta_i2v cannot be found
+cd ../Open-Sora # or `cd ../Open-Sora-dev` for development
+ln -s ../VBench/vbench vbench # change ../VBench/vbench to your corresponding path if needed
+ln -s ../VBench/vbench2_beta_i2v vbench2_beta_i2v # change ../VBench/vbench2_beta_i2v to your corresponding path if needed
+# later, make sure to run evaluation from your Open-Sora folder, else vbench and vbench2_beta_i2v cannot be found
 ```
diff --git a/eval/README.md b/eval/README.md
index 9bc70d0..7ae6e32 100644
--- a/eval/README.md
+++ b/eval/README.md
@@ -46,9 +46,9 @@ python eval/loss/tabulate_rl_loss.py --log_dir path/to/log/dir
 First, generate the relevant videos with the following commands:
 
 ```bash
-# vbench tasks (4a 4b 4c ...)
-bash eval/sample.sh /path/to/ckpt num_frames model_name_for_log -4a
-# launch 8 jobs at once (you must read the script to understand the details)
+# vbench tasks; to evaluate all prompts, set start_index to 0 and end_index to 2000
+bash eval/sample.sh /path/to/ckpt num_frames model_name_for_log -4 start_index end_index
+# Alternatively, launch 8 jobs at once (you must read the script to understand the details)
 bash eval/vbench/launch.sh /path/to/ckpt num_frames model_name
 ```
@@ -70,13 +70,13 @@ python eval/vbench/tabulate_vbench_scores.py --score_dir path/to/score/dir
 ## VBench-i2v
 
 [VBench-i2v](https://github.com/Vchitect/VBench/tree/master/vbench2_beta_i2v) is a benchmark for short image to video generation (beta version).
-
+Similarly, install the VBench package following the "Evaluation Dependencies" section of our [installation](../docs/installation.md) guide.
 ```bash
 # Step 1: generate the relevant videos
-# vbench i2v tasks (5a 5b 5c ...)
-bash eval/sample.sh /path/to/ckpt num_frames model_name_for_log -5a
-# launch 8 jobs at once
+# vbench i2v tasks; to evaluate all prompts, set start_index to 0 and end_index to 2000
+bash eval/sample.sh /path/to/ckpt num_frames model_name_for_log -5 start_index end_index
+# Alternatively, launch 8 jobs at once
 bash eval/vbench_i2v/launch.sh /path/to/ckpt num_frames model_name
 
 # Step 2: run vbench to evaluate the generated samples
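The soft-link step in the installation patch can be sketched in isolation. This is a minimal, self-contained sketch using hypothetical directories under `/tmp` in place of a real VBench checkout and Open-Sora clone; only the link names and relative targets come from the docs, everything else is illustrative:

```shell
#!/bin/sh
# Sketch of the soft-link layout the installation docs describe, using
# hypothetical /tmp paths instead of real VBench / Open-Sora checkouts.
mkdir -p /tmp/vbench_demo/VBench/vbench
mkdir -p /tmp/vbench_demo/VBench/vbench2_beta_i2v
mkdir -p /tmp/vbench_demo/Open-Sora
cd /tmp/vbench_demo/Open-Sora

# relative links, as in the docs; -f makes the sketch safe to re-run
ln -sfn ../VBench/vbench vbench
ln -sfn ../VBench/vbench2_beta_i2v vbench2_beta_i2v

# because the targets are relative, the links only resolve from inside
# the Open-Sora folder -- which is why evaluation must be run from there
readlink vbench
```

Note the design consequence: relative symlinks keep the pair of checkouts relocatable as a unit, but they are the reason the docs insist evaluation is run from the Open-Sora folder.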
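The `start_index`/`end_index` arguments introduced by this patch imply that the 8-job launch splits the 2000 prompt indices into contiguous chunks. A minimal sketch of that splitting, assuming 2000 prompts and 8 jobs (the chunk size of 250 and the loop itself are illustrations, not read from `launch.sh` — read the script for the real details):

```shell
#!/bin/sh
# Sketch: split 2000 VBench prompt indices into 8 contiguous
# (start_index, end_index) chunks, one per sampling job.
total=2000   # assumed total number of prompts
jobs=8       # the README says 8 jobs are launched at once
chunk=$(( (total + jobs - 1) / jobs ))  # ceiling division

i=0
while [ "$i" -lt "$jobs" ]; do
  start=$(( i * chunk ))
  end=$(( start + chunk ))
  [ "$end" -gt "$total" ] && end=$total  # clamp the last chunk
  echo "job $i: start_index=$start end_index=$end"
  # each job would then run something like:
  # bash eval/sample.sh /path/to/ckpt num_frames model_name_for_log -4 "$start" "$end"
  i=$(( i + 1 ))
done
```

Running the whole range in one job is the `start_index=0 end_index=2000` case from the patch; the chunked form trades that single long job for 8 parallel ones.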