Not too long ago, global audiences had to limit themselves to content within their geographical boundaries. In the age of the internet, however, video-on-demand platforms have taken over the media industry, and the content we access is no longer constrained by where we live.

In fact, our viewing experience is now excitingly international: we watch content produced in, say, Brazil, Germany, Korea, and the USA with ease, irrespective of where we live. For example, Money Heist, the Spanish crime drama television series, is touted as the most-watched non-English series on Netflix, with a global viewership of 44 million.

However, content moderation is of paramount importance in international distribution. Content distributors must adhere to a pre-determined set of laws and contextual practices that determine what kind of content can be portrayed in each location.

What makes content moderation challenging for media enterprises?

The global content moderation solutions market is expanding by 10% every year, and Media & Entertainment accounts for over one-third of the total market.

Content needs to be moderated based on the cultural and contextual norms of a particular geographical location. For example, a 60-minute season finale of the US TV show Game of Thrones had to be cut in half to conform to censorship norms in India.

From UA in India to TV-PG in the USA and MA15+ in Australia, the standards that define Television Content Rating systems across the world are painfully complex and diverse.

For example, in the US, television ratings are divided into TV-PG, TV-14, and TV-MA. Parental guidance is recommended for TV-PG programs; TV-14 programs may be unsuitable for children under the age of 14; and TV-MA programs are meant for mature audiences and may be inappropriate for children under the age of 17.

Good luck following that.

Above: A representative comparison of current television content rating systems between four countries with the horizontal axis indicating age.
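As an illustration only, a few of the rating labels mentioned above can be normalized into a single minimum-age lookup. The US and Australian floors follow the ages stated in the text and the label itself; the Indian U/A floor of 12 is an assumption for demonstration, and real rating systems carry many more tiers and content descriptors than a single number.

```python
# Illustrative sketch: rating labels normalized to approximate age floors.
# These values are simplified for demonstration, not an authoritative map.
RATING_AGE_FLOOR = {
    ("US", "TV-PG"): 0,    # parental guidance suggested, no hard floor
    ("US", "TV-14"): 14,   # may be unsuitable under 14
    ("US", "TV-MA"): 17,   # mature audiences; may be unsuitable under 17
    ("AU", "MA15+"): 15,   # the age is part of the label
    ("IN", "U/A"): 12,     # assumed guidance threshold for illustration
}

def meets_floor(age: int, country: str, rating: str) -> bool:
    """True if a viewer of this age meets the rating's age floor."""
    return age >= RATING_AGE_FLOOR[(country, rating)]

print(meets_floor(15, "AU", "MA15+"))  # True
print(meets_floor(13, "US", "TV-14"))  # False
```

Even this toy table shows why compliance is hard: the same title can need a different cut, or a different warning, in every market it ships to.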

Regulating TV/movie content to follow the censorship regimes and standards of different countries has always been a strenuous task. A level of profanity or violence considered normal in one country may be offensive to audiences in another. Considerable effort is spent making content comply with various international censorship laws, which places an immense financial and logistical burden on broadcasters, since these laws are often complex and don't follow a common pattern.

Why is it challenging for content moderators?

Due to increasing pressure to adhere to compliance regulations, many broadcasters hire content moderators to thoroughly monitor content and check for objectionable material.

However, when these moderators are exposed to large volumes of hateful, explicit content daily, it leads to stress and mental health issues.

Additionally, human moderators cannot go through every content asset, given the large volume of content generated on each platform. For platforms that rely on user-generated content, this is even harder to achieve.

These challenges make it imperative for broadcasters to search for alternative methods of moderating content, such as Artificial Intelligence solutions.

How Artificial Intelligence is a much-needed step in augmenting the existing content moderation workflows

Content is typically moderated before it is made available to the audience. However, in a few cases there are misses, which are flagged by users. The cost of non-compliance is high, and the margin of error is very low.

The flagged video instances would then require a second level of review. With the help of AI, most of this work can now be largely automated, greatly reducing the need for time-consuming manual effort.

The presence of restricted content such as nudity, profanity, violence, smoking, and alcohol consumption can be detected and tagged at scale by new-age AI models.

These models, once trained, can identify not only the presence of restricted content but also its severity level, as pre-defined by various censorship standards. Machine learning then enables the technology to pinpoint the exact moments in a given video where further editing is required. Finally, the production software can automatically use this tagged data to perform corrective actions such as:

  • Blurring the image or bleeping the word
  • Completely removing or cropping the image
  • Displaying a warning sign on the screen
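The tagging-to-action flow described above might be sketched as follows. This is a hypothetical illustration, not any specific vendor's API: the detection labels, severity thresholds, and per-market policy are all assumptions for demonstration.

```python
# Hypothetical sketch of mapping model detections to corrective edits.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str       # e.g. "nudity", "profanity", "smoking"
    start_s: float   # segment start, in seconds
    end_s: float     # segment end, in seconds
    severity: float  # model-assigned severity score, 0.0 to 1.0

# Assumed per-market policy: label -> (max allowed severity, action).
# Thresholds here are invented for illustration.
POLICY_EXAMPLE = {
    "nudity":    (0.0, "crop"),
    "profanity": (0.2, "bleep"),
    "smoking":   (0.5, "warning"),
}

def corrective_actions(detections, policy):
    """Turn over-threshold detections into (start, end, action) edits."""
    actions = []
    for d in detections:
        rule = policy.get(d.label)
        if rule and d.severity > rule[0]:
            actions.append((d.start_s, d.end_s, rule[1]))
    return sorted(actions)

tagged = [
    Detection("profanity", 12.0, 12.8, 0.9),
    Detection("smoking", 301.5, 320.0, 0.3),  # below threshold: untouched
]
print(corrective_actions(tagged, POLICY_EXAMPLE))  # [(12.0, 12.8, 'bleep')]
```

In a real workflow the sorted edit list would be handed to the production software, which applies the blur, bleep, crop, or on-screen warning at the tagged timestamps.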

Image Credits – poster for Sin City: A Dame to Kill For (2014)

The near-total automation of the censorship compliance process that AthenasOwl offers can not only hasten the delivery of existing international content to our screens, but will also serve as an incentive for movie/TV studios to actively produce media for a wider audience.

To learn about other ways AI can assist in operational automation for media, read The Broadcaster's Ultimate Guide to Post-Production.