Image generated with DALL-E. Prompt: Envision a video editor working in a modern room, its plain white walls bathed in the bright, natural light of day entering from the left side. The room is alive with the vibrancy of daytime, highlighting the editor's desk and the two large monitors before them. From our vantage point behind the editor, we see the left monitor displaying a vivid underwater scene, full of colorful corals and fish, while the right monitor shows a video editing software interface, detailed and arranged as one would expect from a professional using Adobe Premiere. To the editor's right, a smaller screen captures our attention with its rotating circle, symbolizing ongoing work. This daylight-filled environment, enhanced by a single lush houseplant, not only accentuates the editor's professionalism but also the serene and focused ambiance conducive to creative endeavors.

How personal AI assistants will supercharge productivity

There’s no denying artificial intelligence is evolving at an astonishing pace. Across so many disciplines – the written word, photography, audio, video, 3D rendering, workflow automation – AI can do things that were unthinkable just a few years ago. A combination of fantastic academic research, powerful infrastructure, entrepreneurial vigor, billions of dollars invested and – importantly – a huge amount of training data has supercharged the field.

However, for all of the amazing things AI can do, the results can be undeniably generic. It makes sense – when you train a large language model on all of the written material you can find, it will tend to produce something that’s the average of all its inputs. If you combine all of your brightest paint colors, you’ll always end up with a shade of brown.

But despite this limitation, it’s undeniable that the current generation of AI tools is the genesis of something truly exciting. The question for those of us working in the industry is: where are we going next? How can we focus our resources and research in the right direction to make sure that the tools we develop are truly useful to people and not just technical showcases?

As you can imagine, this is something we spend quite a lot of time thinking about, and I wanted to lay out our thoughts on where we go next. How can we take this nascent technology and turn it into a productivity multiplier – something that people want to use every day?

From general purpose to genuinely personal

In most fields it’s accepted that you will go through a period of entry-level training to find your feet before specializing in a specific area. You have to know how to do the basics before you can perform more advanced tasks. Right now, AI is in its 101 phase. It’s learning how to answer questions like a human, draw pictures and, in our case, understand videos.

But pretty soon it’s going to be time to start specializing, and developing AI systems that can perform very specific tasks with the nuance and care of a human. And because every human has their own nuances, we believe the future is highly specialized. Personal assistants will actively learn from you in real time and update as new information is published online (or offline in private databases). They will understand how you work, when you work and who you work with, and can take care of the manual tasks in the background while you do the stuff that humans are best at – idea generation, human connection and finding new ways to solve problems.

Let me give you an example. If you edit a lot of video you probably have a workflow that you follow. Record your podcast, find b-roll footage, sync audio tracks, assemble a rough cut in Adobe Premiere, tidy up the dialogue to remove pauses and stutters, resize for mobile, add overlays or captions, then export. With an AI that already understands the fundamentals of video search and editing, your personal workflow becomes extra training data to create an intelligent assistant that saves you – specifically you – time.
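To make the idea concrete, here is a minimal sketch in Python of how a recorded workflow like the one above could be captured as structured data for a personal assistant to learn from. Everything here is hypothetical and invented for illustration – none of these names are a real Imaginario or Premiere API:

```python
from dataclasses import dataclass, field

@dataclass
class WorkflowStep:
    """One action the editor performed, with the tool and settings used."""
    action: str
    tool: str
    params: dict = field(default_factory=dict)

# The podcast workflow from the paragraph above, captured as structured data.
podcast_workflow = [
    WorkflowStep("record", "podcast_rig"),
    WorkflowStep("find_broll", "media_library", {"query": "topic keywords"}),
    WorkflowStep("sync_audio", "audio_tool"),
    WorkflowStep("rough_cut", "Adobe Premiere"),
    WorkflowStep("clean_dialogue", "Adobe Premiere", {"remove": ["pauses", "stutters"]}),
    WorkflowStep("resize", "export_tool", {"aspect": "9:16"}),
    WorkflowStep("add_overlays", "export_tool", {"captions": True}),
    WorkflowStep("export", "export_tool", {"format": "mp4"}),
]

# An assistant that has seen this sequence many times can propose the
# next step whenever it recognizes where you are in the workflow.
def suggest_next(history: list, workflow: list):
    return workflow[len(history)] if len(history) < len(workflow) else None

# After recording, finding b-roll and syncing audio, the assistant
# proposes the rough cut as the next step.
next_step = suggest_next(podcast_workflow[:3], podcast_workflow)
print(next_step.action)  # → rough_cut
```

The point of the sketch is that your habits become data: once the sequence is captured, even trivial logic can anticipate your next move, and a real assistant would go further and execute the step for you.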

At Imaginario AI, we strongly believe that we are moving from tool-based workflows to task-based ones with personal agents orchestrating these apps and recommending more efficient solutions. We call this Accelerant AI. The future will also be hybrid (cloud and on-prem storage and systems) with teams collaborating virtually (in 2D and 3D) from different parts of the world; they will be searching, creating and transforming content out of multiple systems and tools.

This AI could equally apply to other fields, and in writing (by a long way the area where AI is most advanced) we’re already seeing the beginnings of this trend. Custom LLMs and third-party plugins like those offered by OpenAI and Google’s Bard can be used to come up with article outlines and talking points, and even do background research for you. It’s very easy to imagine a writing app with a built-in AI that watches you work and, over time, offers help with your most frequently performed tasks. Your assistant does the busywork, and you’re left with the fun part: the creativity!

Embedded TikTok from @imaginarioai: @Jozpug had a chat with @Joey Daoud about where we see the future of AI productivity. Check out the full interview on VP Land – https://youtu.be/4WOb5Y1Qcp0

Training data needs to be ringfenced

We’re big advocates of AI, but we’re also big advocates of privacy and copyright, especially in the Media and Creator space. That’s why we never use videos uploaded to our platform to train generative models or anything that could “leak” proprietary information. And if we’re going to embrace a future of hyper-personal AI assistants, we need to take the same approach.

How I edit videos is different from how you edit videos, and the style we’re looking for in the finished product is probably very different. So quite apart from privacy concerns, an AI video editing assistant that knows my style probably won’t be of much use to you. And, to take things full circle, if all of our editing styles are used as training data for one omnipotent AI editing assistant, all our videos will end up looking the same. We’ll all be using brown paint.

However – big detour here – that does raise the possibility of high-profile creators opening access to their personal assistants for others to learn how to imitate their style. Want to produce features like Christopher Nolan or shorts like Casey Neistat? Maybe you’ll be able to license their assistant, which will highlight a certain cut and say “Casey would have done it like this”.

With the new release of Sora, OpenAI’s text-to-video model, we are at the dawn of a new era in personalized content creation. Once this model becomes customizable (with an opt-out for data training), you will be able to re-create content you have produced in the past from a simple text prompt – or at the very least modify it for a fraction of the cost (goodbye VFX and green screens). Who knows? You might even license your style to other creators.

What’s this going to look like in real life?

As with all great technologies, in time these features will recede into the background and will become a normal part of your workflows. Remember when we moved from having to manually hit “save” every so often to auto-saves happening in the background? Same thing.

Imagine you want to make a new TikTok on baking the absolute best banana bread. You shoot your cooking shots, a selfie video explaining your method, and some B-roll of your dog helping you out (because who doesn’t love some bonus dog content?). You import all of your footage into Imaginario AI, and tell your personal assistant you’re making a TikTok about baking banana bread.

Your assistant will analyze your clips and work out the correct order (perhaps by querying a separate GPT to obtain a template banana bread recipe), and quickly build a rough cut. Because you’re making a TikTok, it will understand that your selfie video is a cutout and overlay it on top of the action. Naturally, it will add captions. Granted, it might not understand where to put the dog.
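The assembly flow described above can be sketched in a few lines of Python. All of the names and data here are hypothetical, invented purely for illustration – this is not a real Imaginario API, just the ordering logic made visible:

```python
def order_clips(clips, recipe_steps):
    """Match each clip's label to a recipe step and sort accordingly.
    Clips with no matching step (like the dog) fall to the end."""
    rank = {step: i for i, step in enumerate(recipe_steps)}
    return sorted(clips, key=lambda c: rank.get(c["label"], len(rank)))

# A template recipe the assistant might fetch from a separate GPT.
recipe = ["mash_bananas", "mix_batter", "bake", "serve"]

# Labeled clips as the assistant's video analysis might produce them.
clips = [
    {"label": "bake", "file": "oven.mp4"},
    {"label": "mash_bananas", "file": "mashing.mp4"},
    {"label": "serve", "file": "slice.mp4"},
    {"label": "mix_batter", "file": "mixing.mp4"},
    {"label": "dog", "file": "dog.mp4"},  # no matching step: lands at the end
]

rough_cut = order_clips(clips, recipe)
print([c["file"] for c in rough_cut])
# → ['mashing.mp4', 'mixing.mp4', 'oven.mp4', 'slice.mp4', 'dog.mp4']
```

A real assistant would of course do far more (cutout detection, captioning, pacing), but the core move is the same: external knowledge (the recipe) supplies a target structure, and your footage is mapped onto it automatically.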

Then your job becomes more akin to a director. All the busywork has been done for you, so you can spend your time finessing the final edit to fit your vision. Maybe you want to up the pace here, add some sound effects there, add a spontaneous cutaway somewhere else.

In short, what we’re talking about is a world where the annoying, fiddly, time-consuming parts of your workflow are largely automated in a way that respects your curation criteria and editing style, and with the context of the end product you’re looking for. Your first cut will be assembled in seconds. What’s left for you is humor, emotion, plot twists – the stuff humans are best at.

Article credits

Originally published on

With image generation from DALL-E

And TikTok creation from Imaginario AI