AI is changing the way we make, watch and interact with videos


The catch, however, was that Khan never actually spoke about these stores. An artificial intelligence (AI) algorithm took a single video of Khan and altered his lip movements, while reproducing his tonal qualities, to make him appear to say the names of these neighbourhood stores in subsequent videos.

Mondelez is not the only company to try something like this. The use of AI in video is changing how firms look at everything from advertising campaigns to corporate learning material. In fact, Ashray Malhotra, co-founder and chief executive of Rephrase.ai, the company behind Mondelez's ad, envisions a future where videos will be completely machine-driven.

To create Khan's video, Rephrase used a branch of AI called Generative Adversarial Networks, or GANs, which can create new videos from existing footage and text documents. Malhotra said Rephrase is working with a top crypto firm in India to convert its blog posts into videos, while brands are also using the technology to turn internal company documents into video-led corporate learning material for employees.

Though the term GAN isn't as common in everyday parlance as AI, the chances that you have interacted with one are high. The technology has a rather infamous use that the general public is more aware of: deepfakes. While companies like Rephrase.ai are using GANs for business and functional purposes, the fact that these algorithms can create 'artificial humans' and videos out of thin air is well documented.

In fact, in 2020, STAR Labs, a subsidiary of South Korea's Samsung, announced a project called Neon, which was specifically meant to explore this. The project, then led by Indian technologist Pranav Mistry, created artificial humans that brands could use for marketing, advertising and interactive purposes.

GANs pit two machine learning (ML) models against each other: a generator and a discriminator. The generator produces an artificial video, while the discriminator's job is to tell whether it is fake. The two are trained in tandem, and once the generator's output can consistently fool the discriminator, that output becomes the result of the algorithm.
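
To make the idea concrete, here is a minimal toy sketch of the adversarial training loop described above, written with PyTorch on simple one-dimensional data rather than video. It is purely illustrative and is not Rephrase's system; the network sizes, learning rates and data are arbitrary assumptions.

```python
# Toy GAN: the generator learns to mimic samples from N(3, 1),
# while the discriminator learns to separate real samples from fakes.
import torch
import torch.nn as nn

latent_dim = 8
generator = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) + 3.0                # "real" data: samples from N(3, 1)
    fake = generator(torch.randn(64, latent_dim))  # the generator's attempt at faking it

    # Train the discriminator: label real data 1, generated data 0.
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the generator: try to make the discriminator output 1 for fakes.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# After training, generated samples should cluster around 3.0.
print(generator(torch.randn(1000, latent_dim)).mean().item())
```

A video GAN works on the same principle, only with far larger networks and frames of pixels in place of single numbers.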

According to Rephrase's estimates, the market for personalized videos across sales, human resources, marketing and more stands at over $90 billion worldwide. Malhotra noted that brands see the value of this because it allows them to achieve a scale that they couldn't otherwise imagine. Shooting a hundred separate videos with Khan would certainly have cost Mondelez far more, and taken much longer, than what the company could do with AI.

But GANs, too, have their limitations. For instance, Malhotra said that while Rephrase's system can create videos in virtually any language right now, it cannot make the characters on screen move like a real human. Essentially, these algorithms manipulate the movement of the lips and aren't great at dealing with other body parts yet, though Rephrase intends to research that in the future too.

Personalized video is not just about artificially created footage, either. Homegrown Hyperstate Technologies, for instance, is taking a different route. Its no-code platform, called Kappa, allows brands to enhance their existing videos and make them interactive.


Imagine an e-commerce firm shooting models showcasing new clothes. At the moment, companies put these videos out into the wild and hope users will come to their website. With Kappa, they can put a 'buy' or 'learn more' button right inside the video. Hyperstate has worked with companies such as Standard Chartered Bank and Mercedes to create personalized videos.

For banks, the company can create an interactive video chatbot. Instead of the mundane text-based chatbots that have run rampant since the pandemic, firms can use Kappa to replace the bot with a video of a real person answering questions. Users watch someone talking naturally and can click on questions at key points in the video.

Auto brands like Mercedes, on the other hand, have used Kappa to combine a host of videos into one. As a result, a user can watch, say, the Mercedes S-Class driving across a scenic landscape, decide to change the car's colour midway, or zoom into the vehicle's interiors.
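
Under the hood, this kind of interactive video can be thought of as a graph of clips, with clickable choices deciding which clip plays next. The sketch below is purely illustrative and is not Hyperstate's actual Kappa format; the clip names and choices are invented for the example.

```python
# Hypothetical representation of a branching, interactive video.
# Each node is a clip; the choices shown at its end jump to other clips.
from dataclasses import dataclass, field

@dataclass
class Clip:
    video_url: str
    choices: dict = field(default_factory=dict)  # choice label -> next clip id

clips = {
    "intro": Clip("s_class_scenic_drive.mp4",
                  {"Change colour to red": "red_drive",
                   "Zoom into the interiors": "interior_tour"}),
    "red_drive": Clip("s_class_scenic_drive_red.mp4",
                      {"Zoom into the interiors": "interior_tour"}),
    "interior_tour": Clip("s_class_interior.mp4",
                          {"Book a test drive": "cta"}),
    "cta": Clip("book_test_drive_form.mp4"),
}

def play(clip_id: str) -> None:
    """Walk the graph, letting the viewer pick a branch after each clip."""
    while True:
        clip = clips[clip_id]
        print(f"Playing {clip.video_url}")
        if not clip.choices:
            return
        for i, label in enumerate(clip.choices, 1):
            print(f"  [{i}] {label}")
        choice = int(input("Choose an option: ")) - 1
        clip_id = list(clip.choices.values())[choice]

play("intro")
```

A real platform would render the choices as buttons overlaid on the video player, but the branching logic is essentially the same.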

Prashanto Das, one of the founders of the Pune-based firm, envisions this as a 'metaverse' technology, in the sense that the metaverse companies like Meta (formerly Facebook) and Microsoft envision is all about making digital spaces and experiences interactive. It's not hard to see how interactive video fits in.

There is a bridge between firms like these, too. While neither Rephrase nor Hyperstate is working on fully artificial videos just yet, their technologies could easily go hand in hand. A firm like Mercedes could, in theory, create artificial humans using GANs and run them through a platform like Kappa to produce fully personalized, and literally artificial, videos that look and feel real.

That future is still a few years away, but AI is already changing the videos we watch. The idea that viewers could make choices that altered the plot of Netflix's 2018 film Bandersnatch seemed novel at the time, but AI-enhanced videos are here to stay on Indian television and the web.
