Beyond the glamorous red carpets and jubilant acceptance speeches, film industries worldwide hide a dirty little secret: everyone’s secretly using AI. From a producer generating ideas on ChatGPT, to a set designer mocking up concepts, to cases I’ve heard of where almost an entire film was made as previz (pre-visualisation) before actually being shot, everyone is quietly pushing buttons on existing and rapidly emerging AI tools. But ask them to go on record about it, and they’ll turn as lively as a corpse that has been in the ground for a decade.
Why the hush-hush? Because in Hollywood, Bollywood and their global and regional cousins, using AI isn't the simple productivity boost it is in other fields; it is a moral minefield. Why? Because for almost every AI company, the very data that powers their AI genies was often gathered, scraped, or outright stolen without the permission of creators, leading to multiple courtroom dramas across the world, each worthy of its own web series.
So, how does an industry built entirely on the foundation of creativity, like India's media and entertainment sector, navigate this ethically compromised new world order whose rules are being written and rewritten every day? I went looking for answers at a fiery, no-holds-barred FICCI Frames roundtable on October 7 titled “AI & the Creator Economy – Adapting to the New Normal.”
No surprises here, but the roundtable revealed an industry split right down the middle. Some were in favour of letting AI unlock India's creative potential, letting the kid (AI) run before we put any shackles on its use. On the other side were those who advocated caution. Even the examples from global content aggregators, as highlighted by those gathered, reflected this great AI divide.
Thus, on one hand, we have a company like Netflix, which has emerged as the cool, progressive kid on the block. It has proudly published an AI rulebook that basically says, "Go for it, but you gotta tell us!" This approach permits AI-generated content but makes transparency, in the form of disclosing where AI was used, mandatory.
YouTube, on the other hand, is behaving like a strict class teacher, blocking monetisation for fully AI-generated videos. But wait: isn't its parent company, Google, the king of AI tools? How is this family feud allowed? As other panellists pointed out, the move is meant to stop the flood of soulless, AI-generated "listicle" content; creative uses of AI would still be allowed to monetise. That is actually a nuanced approach to handling AI-generated videos.
These two, among a few others, are the forward thinkers. The rest are still trying to figure out their AI policy, leading to a Wild West scenario with no standardisation, where whatever the platform says goes. That creates not just a grey area of conflicting rules, but also a vacuum of clarity for content creators.
This is the midpoint where the plot thickens, because the question on everyone's lips was: if you create something with AI, who really owns it?
Naturally, the saga of Disney vs. Midjourney was bound to be brought up. When users started generating images of the likes of Darth Vader and the Minions, Disney didn't just get mad; it got its humongous legal team involved, slapping Midjourney not just with a lawsuit but also with labels like "copyright free-rider" and "bottomless pit of plagiarism." This, as you'd expect, has sent a chill down every creator's spine. The million, perhaps billion-dollar question is this: if you create a character using Midjourney and monetise it, can someone like Disney come after you?
The fear is all too real for veterans like Munjal Shroff, the roundtable's convener, co-founder of Graphiti Multimedia, and one of India’s finest animators. He has seen AI replicate his iconic animated characters with 95% accuracy, leaving him feeling helpless. As one creator vented about AI companies using his work: "You will use my footage to train your engine and make money based on my data for perpetuity. How do you compensate me for that?”
So many conundrums and pitfalls, both moral and legal, have left the industry squeezed from both sides: on one side, they see their work plagiarised by AI; on the other, they are scared of being sued if they use generative AI. A potential solution comes from Google, which is offering legal indemnity to enterprise Workspace customers using Gemini. Translation: if you get sued for what you create with their AI, Google's lawyers have your back. It's a game-changing move that addresses a massive fear for corporations, yet it does not answer the question of these LLMs plagiarising creators' work.
Amidst the chaos, there's potential for genuine magic as well. A music producer shared how AI "opened doors which was sealed shut for years," empowering indie artists to create without gatekeepers. But tread softly, because when you use AI, you tread on someone else's dreams and work. When big studios use B2B tools for de-ageing A-list actors or dubbing films like War 2, they operate with clear consent and contracts. It's clean, legal and slick.
The picture reverses with the consumer-facing, generic AI tools that you and I use every day, like ChatGPT, Perplexity, Claude, Gemini or DeepSeek, which are trained on a murky pool of uncompensated data, making them, as I mentioned earlier, a legal liability waiting to happen.
This part was the most interesting for me, as it highlighted where India is lagging. One participant in the roundtable described how, when asked to create an ‘Indian woman’, an image generator produced a Native American woman instead. These AI models, trained on skewed global datasets, are failing at basic cultural representation. This glaring error highlights both a gap in the AI landscape and a massive opportunity: the urgent need to build local models trained on India’s rich and diverse culture, our content, our images and our videos. Thankfully, DeepSeek’s sudden ascent in January forced the government of India to rethink its position in the global AI race, and thousands of crores have been allocated under different initiatives to defibrillate the Indian AI ecosystem.
The FICCI Frames roundtable was just the opening act. The industry is screaming for a guidebook, a rulebook of sorts, that addresses consent, ownership and transparency in the use of AI. Some felt that instead of imposing shackles, it would be best to let people figure things out for themselves; for them, a guidebook that encourages the use of AI and shows how to approach it is the right way forward.
Having said all that, the message at the end of the session was clear: AI is not the villain in this story, but a powerful, untamed force. The challenge is to create frameworks that protect the creator without stifling creation. Thanks to FICCI, the conversation has begun, and every day the next scene in this AI soap opera is being written. And, the next day, rewritten.