
Text to Video: The New Frontier of AI Artistry

[Written by ChatGPT. Main image: “text to video,” SDXL]

At Neural Imaginarium, we’re always on the hunt for the most exciting innovations in the world of AI-generated art. Today, we’re stepping into the domain of AI video generators—tools that promise to morph textual prompts into vibrant video sequences. For a fair and uniform evaluation, we employed the same set of prompts for each of the tools we tested.

The first tool we put to the test was Genmo, which allots users 100 units of “Fuel” (its usage metric) per month on the free plan. Our initial prompt was “A lonely robot wandering through an abandoned industrial setting, inspired by the works of H.R. Giger.” Genmo’s interpretation was rather appealing, generating a series of images that stayed relatively true to the described scenario. However, when we tried to guide the AI to make the robot turn and move away, the changes appeared as fading transitions between frames rather than distinct movements.

Next, we presented Genmo with Neural Imaginarium’s “About” image and the prompt “make it seem like the person is about to be washed away by the wave.” In response, we received an abstract animation in which a figure appeared out of nowhere.

Changing the prompt to “add motion” brought forth very similar results, indicating that Genmo had some difficulty interpreting our instructions.

The next platform we visited was Neural Frames. This service provides ten seconds of video generation per month to users of its free account. We started again with the same Giger-inspired robot prompt, and the “OpenJourney” model presented us with a gradual morphing of the robot across frames, similar to Genmo.

However, submitting our ‘About’ image and its corresponding prompts took us on a surprising detour. The command to “make it seem like the person is about to be washed away by the wave” led to a startling transformation: the previously dark figure in our image was replaced, seemingly out of nowhere, with a highly detailed human face.

Switching our prompt to “add motion” while using the ‘About’ image produced a different outcome. While the modification technique remained similar, creating variations of the image, the mysterious face did not reappear, underscoring how unpredictable and idiosyncratic each AI’s interpretation of our prompts can be.

Lastly, we experimented with ModelScope, a free project available on Hugging Face. Its take on our familiar robot prompt was markedly different: the robot actually moved, with far more explicit motion than the previous platforms produced. Interestingly, many of ModelScope’s outputs carried an unexpected Shutterstock watermark.

As ModelScope doesn’t allow image uploads, we combined the original image’s prompt (“An abstract pattern, Neural Imaginarium”) with the modification prompts. The AI did its best to fulfill them, though the results were less impressive than its robot clip.
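If you’d like to tinker with ModelScope yourself rather than queue up on the Hugging Face Space, its weights are also published on the Hub and can be run locally. The snippet below is a minimal sketch using the diffusers library and the damo-vilab/text-to-video-ms-1.7b checkpoint on a CUDA GPU; it illustrates one way to generate clips like the ones described above, not the exact setup we used for this post.

```python
# Minimal sketch: running the ModelScope text-to-video checkpoint locally
# with Hugging Face diffusers. Assumes a CUDA GPU; model id and settings
# are illustrative, not the exact configuration used in this post.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

# Load the ModelScope text-to-video weights in half precision to save memory.
pipe = DiffusionPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe = pipe.to("cuda")

# The same robot prompt we used throughout this post.
prompt = (
    "A lonely robot wandering through an abandoned industrial setting, "
    "inspired by the works of H.R. Giger"
)

# Generate a short clip and write the frames out as an .mp4 file.
video_frames = pipe(prompt, num_inference_steps=25).frames[0]
video_path = export_to_video(video_frames, output_video_path="robot.mp4")
print(f"Saved video to {video_path}")
```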

In conclusion, AI video generators offer an exciting glimpse into the future of AI creativity. However, it’s evident that they’re still in an embryonic stage of development. The current results are intriguing, but also underscore the need for significant refinement and improvement. As these tools continue to evolve, we’ll be right here, eagerly observing their journey towards maturity.
