
# Open-Sora Gallery

We rewrote [nerfies](https://github.com/google/nerfies) with React and Next.js so that people without HTML knowledge can edit the web content by simply changing the data files.

## 🚀 Deploy

You can follow the steps below to deploy this website to GitHub Pages.


Wait for about one minute, then visit the GitHub Pages site again.

## ✏️ Edit Examples

The website displays the demo examples given in the `data/examples.js` file. This file exports an `examples` variable whose structure follows this format:

```js
[
  // video examples
  {
    title: "Text To Video",
    items: [
      {
        prompt: "some prompt",
        inputs: [],
        output: {
          link: "link to a video on Streamable",
        },
      },
    ],
  },

  // another group of examples
  {
    title: "Animating Image",
    items: [
      {
        prompt: "some prompt",
        inputs: [
          {
            link: "link to a video on Streamable",
          },
        ],
        output: {
          link: "link to a video on Streamable",
        },
      },
    ],
  },
];
```

If you wish to add another video, append an object like the one below to the `items` field of a group. This defines a single example; the fields are explained below.

```js
{
    prompt: "some prompt",
    inputs: [],
    output: {
        link: "link to a video on Streamable",
    },
}
```

- `prompt`: the prompt used to generate the video
- `inputs`: a list of objects in the format `{ link: "link to a video on Streamable" }`; these are the reference images/videos used to generate the final video. Streamable can display images as well.
- `output`: an object in the format `{ link: "link to a video on Streamable" }`; this is the final video
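Concretely, appending a new entry could look like the sketch below. The prompts and Streamable links are placeholders, and the export style is an assumption (the actual file may use `export default` instead of CommonJS):

```javascript
// data/examples.js (illustrative sketch)
const examples = [
  {
    title: "Text To Video",
    items: [
      {
        prompt: "some prompt", // placeholder prompt
        inputs: [],
        output: { link: "https://streamable.com/abc123" }, // placeholder link
      },
    ],
  },
];

// Append another text-to-video example to the first group
examples[0].items.push({
  prompt: "another prompt", // placeholder prompt
  inputs: [],
  output: { link: "https://streamable.com/def456" }, // placeholder link
});

module.exports = examples; // assumption; the repo may use an ES module export
```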

Some examples for different generation cases are given below:

1. Text to Video

   ```js
   {
       prompt: "some prompt",
       inputs: [],
       output: {
           link: "link to a video on Streamable",
       },
   }
   ```
2. Image to Video

   ```js
   {
       prompt: "some prompt",
       inputs: [
           {
               link: "link to an image on Streamable",
           },
       ],
       output: {
           link: "link to a video on Streamable",
       },
   }
   ```
3. Image Connecting

   ```js
   {
       prompt: "some prompt",
       inputs: [
           {
               link: "link to an image on Streamable",
           },
           {
               link: "link to an image on Streamable",
           },
       ],
       output: {
           link: "link to a video on Streamable",
       },
   }
   ```
4. Video Connecting

   ```js
   {
       prompt: "some prompt",
       inputs: [
           {
               link: "link to a video on Streamable",
           },
           {
               link: "link to a video on Streamable",
           },
       ],
       output: {
           link: "link to a video on Streamable",
       },
   }
   ```
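When editing `data/examples.js` by hand, it is easy to drop a required field. A small check like the following can catch mistakes before deploying; the `validateExamples` helper is hypothetical and not part of this repository:

```javascript
// Hypothetical helper: verify every example group matches the expected shape
// described above (title, items, prompt, inputs, output.link).
function validateExamples(examples) {
  const errors = [];
  examples.forEach((group, g) => {
    if (typeof group.title !== "string") errors.push(`group ${g}: missing title`);
    (group.items || []).forEach((item, i) => {
      const where = `group ${g}, item ${i}`;
      if (typeof item.prompt !== "string") errors.push(`${where}: missing prompt`);
      if (!Array.isArray(item.inputs)) errors.push(`${where}: inputs must be an array`);
      if (!item.output || typeof item.output.link !== "string")
        errors.push(`${where}: output.link is required`);
    });
  });
  return errors;
}

// Usage sketch: one well-formed group and one with missing fields
const ok = validateExamples([
  { title: "Text To Video", items: [{ prompt: "p", inputs: [], output: { link: "l" } }] },
]);
const bad = validateExamples([{ title: "Animating Image", items: [{ inputs: [] }] }]);
```

Running such a check with `node` before pushing keeps a malformed entry from silently breaking the gallery page.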