WIDELENS

Multimodal search with AI

How to find anything in your videos with Imaginario AI’s multimodal search

As a former critic and journalist, I have my own growing collection of classic and evergreen films and TV shows. When I have time to kill, I tend to sift through it and re-edit famous scenes to create inspiring clips and build a deeper connection with my audience (if and when I decide to publish these videos on TikTok 😎).

For example, when discussing themes of entrepreneurship, resilience, and managing a startup, I often find inspiration in scenes from films such as “The Founder” (2016) or “The Pursuit of Happyness” (2006), or from inspiring figures like Muhammad Ali, Elon Musk, Bruce Lee, Warren Buffett, and Marc Andreessen. Similarly, producers and filmmakers in the process of crafting a new script or re-editing scenes for social media are on the lookout for recurring themes, motifs, and artistic styles.

Searching and rewatching an extensive video catalog to find inspiration in pre-production, recycle approved B-roll for new projects, locate relevant scenes in dailies, or create high-quality compilations for social can take hours, even days. This is where advances in video search and analysis technology come into play.

With Imaginario AI, creative teams and production companies have access to an end-to-end cognitive media platform with robust video search capabilities that understand context the way a person does: across senses (vision, speech, sound) and over the passage of time. With a simple text or image query, Imaginario AI can locate the exact moments needed for a project in seconds, without labels or technical complexity.

For example, you might be looking for a specific scene, such as Tom Cruise running shirtless on the beach, or perhaps finding the moment when the underdog football team wins a match. It doesn’t matter: we can find it. Imaginario AI is not only streamlining the creative process but also opening up new possibilities in post-production.

Use cases creators and production companies are tackling with Imaginario AI

  • Finding people, objects, actions, emotions, sounds, topics, keywords, and more across your entire library or a subset of it (specific folders, collections and videos)
  • Locating and recycling B-roll content for new projects (and saving money on stock footage in the process!)
  • Adding subtitles and converting horizontal videos into TikToks, IG Reels, and more in minutes. You can also export your clips to Adobe Premiere for further customization
  • Locating any logos and text on screen to monitor brand exposure
  • Discovering contextual ad inventory to optimize campaigns
  • Identifying sensitive scenes to adapt video content to different markets, platforms, and audiences (e.g. violence, adult scenes, drug abuse, etc.)
  • Moodboarding in pre-production

Just tell me how it works

  1. Create an account here. Use a Chrome or Chromium browser to log in.
  2. Video upload: Upload or ingest your videos into Imaginario AI via AWS S3 private buckets or your Google Drive account (see the staging sketch after this list), and watch as Imaginario AI indexes them in minutes. Say goodbye to pre-defined, inflexible labels, complex queries, and begging your technical team to update your video search tags and systems for more accurate results.
  3. Search: Go to the Search section and select the type of query you want to run: visual, audio, or speech. You can type any text query, or upload an image to search for specific faces, objects, and more in your videos.
  4. Save your clips or add them to a collection
  5. Download or re-edit clips. You can download them to your drive, re-edit them in Imaginario’s Clip Studio (captions, resizing), or export your project to Adobe Premiere.
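
If your footage lives on a local drive, step 2 assumes it is reachable from an AWS S3 private bucket or Google Drive. As a rough illustration (not Imaginario AI’s own tooling), here is a minimal Python sketch that uses boto3 to stage local video files in a private S3 bucket before ingestion; the bucket name, prefix, and folder path are placeholders for your own setup.

```python
# Minimal sketch: stage local videos in a private S3 bucket for ingestion.
# Assumes AWS credentials are already configured (e.g. via `aws configure`).
from pathlib import Path

import boto3

s3 = boto3.client("s3")
BUCKET = "my-footage-bucket"   # placeholder: your private bucket
PREFIX = "dailies/"            # placeholder: key prefix to ingest from

for video in Path("./footage").glob("*.mp4"):
    key = PREFIX + video.name
    # Server-side encryption keeps the staged files private at rest.
    s3.upload_file(
        str(video),
        BUCKET,
        key,
        ExtraArgs={"ServerSideEncryption": "AES256"},
    )
    print(f"Uploaded {video.name} -> s3://{BUCKET}/{key}")
```

Once the files are in the bucket, you can point Imaginario AI at it in step 2 and let indexing run from there.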


Article credits

With image generation from OpenAI and TikTok creation from Imaginario AI.