The Indian microblogging app Koo is safe to use, and the company is taking steps to make it safer still. The app, introduced in March 2020, has become the second-largest multilingual microblogging platform accessible worldwide. A pioneer in language-based microblogging, Koo empowers users to freely share their views and express themselves in a language of their choice through its immersive language experiences, democratizing people’s voices.

According to its website, Koo currently supports more than 20 languages. More than 7,500 prominent figures in politics, sports, media, entertainment, and other fields actively use it to engage with their followers in multiple languages. The app has crossed 50 million downloads, and Koo’s web and app platforms are used in more than 100 countries.

Koo seeks to improve communication between people, let them express themselves in the language of their choice, give their voice a platform, and remove the linguistic hurdles that stand in the way of cross-lingual interaction.

[Image: Koo app logo. Source: 1000 Logos]

The Indian microblogging platform Koo is working to develop a platform that uses machine learning and artificial intelligence (AI) to improve its content regulation.

At a time when fake news and AI-generated media are rampant on social media, tech companies are trying to find effective ways to curb the dangers posed by misinformation, impersonation, pornography, and violent graphic content.

To ensure that its content moderation tactics are effective and that the platform remains a safe place for all stakeholders, Koo has unveiled features designed to help and protect users. Although the social network’s interface resembles Twitter’s, the company insists that Koo stands out for its dedication to creating a secure and egalitarian environment.


The Features of Koo

On nudity and pornography: If a user uploads a nude image to their Koo account, they will immediately receive a warning that reads, “This Koo has been deleted due to GRAPHIC, OBSCENE OR SEXUAL CONTENT.”

The company says the entire procedure is automated and begins just seconds after the picture is posted. Once the image has been deleted, the user receives a second message explaining why it was taken down and inviting them to submit an appeal through the redressal form if they believe a mistake was made. These notifications are displayed in the user’s preferred language.

Similarly, if a user uploads a video containing explicit sexual content or nudity, it is removed in about five seconds, depending on the video’s length and processing time, and Koo notifies the user once the deletion is done. A nude display picture is removed through the same process.
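The flow described above — automatic detection, removal, a notification in the user’s preferred language, and an option to appeal — could be sketched roughly as follows. The classifier, function names, and message catalogue here are hypothetical illustrations for this article, not Koo’s actual implementation.

```python
# Hypothetical sketch of an automated image-moderation flow:
# classify the upload, delete on violation, notify the user in
# their preferred language, and offer an appeal path.

REMOVAL_NOTICE = {
    # Localized variants for other supported languages would live here.
    "en": "This Koo has been deleted due to GRAPHIC, OBSCENE OR SEXUAL CONTENT.",
}

def classify_image(image_bytes):
    """Stand-in for an ML nudity/obscenity classifier (returns a score)."""
    # A real system would run a trained model; here we fake the score.
    return 0.99 if b"nude" in image_bytes else 0.01

def moderate_upload(image_bytes, user_lang="en", threshold=0.9):
    """Return (allowed, notice) for an uploaded image."""
    score = classify_image(image_bytes)
    if score >= threshold:
        notice = REMOVAL_NOTICE.get(user_lang, REMOVAL_NOTICE["en"])
        # In the flow described above, the user could now appeal
        # via a redressal form if they believe this was a mistake.
        return False, notice
    return True, None

allowed, notice = moderate_upload(b"nude-image-bytes")
```

The same gating logic would apply to videos and display pictures, just with a longer processing step before classification.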

“In countries like India and many others around the world, pornography is illegal. Someone posting nude pictures or pornography from an Indian IP is an illegal act, and platforms should take these off. But, with platforms from global companies, these things happen and we notice such content existing for a long time.”

Fake news: The platform says it can remove fake news quickly by running a detection cycle every half-hour. When a user spreads fake news, the dashboard recognizes it and traces the content’s origins, giving the moderator enough detail to act right away. The flagged post is accompanied by a notification that reads, in part: “Unverified or False Information: Reviews by a Fact Checker.”
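A half-hourly detection cycle like the one described could look something like the sketch below. The matching against known debunked claims is a deliberate oversimplification, and all names here are assumptions, not Koo’s architecture.

```python
# Illustrative sketch of a periodic fake-news detection pass.
# In production this would run on a scheduler every 30 minutes
# (e.g. a cron job or a loop sleeping 30 * 60 seconds).

FLAG_LABEL = "Unverified or False Information: Reviews by a Fact Checker"

def detect_fake_news(posts, known_fakes):
    """Label posts whose text matches known debunked claims."""
    flagged = []
    for post in posts:
        if post["text"] in known_fakes:
            post["label"] = FLAG_LABEL  # shown alongside the post
            flagged.append(post)
    return flagged

posts = [{"text": "Debunked claim X"}, {"text": "Ordinary post"}]
flagged = detect_fake_news(posts, known_fakes={"Debunked claim X"})
```

A real pipeline would rely on fact-checker inputs and provenance tracing rather than exact string matching, as the article’s description of origin tracing suggests.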

Users who believe their content is authentic can also request a review. There is a clear intent behind these features. “We are a thoughts and opinions platform and we want people to come and engage with each other in a healthy way,” Rajneesh Jaswal, Head of Legal & Policy at Koo, told The Indian Express.

During the demonstration, Rahul Satyakam, Senior Manager, Operations, contrasted Koo with Twitter. Satyakam published identical content on the Elon Musk-owned site to show that it took no action against such posts. Similarly, he showed that an offensive post he had shared on Twitter a few days earlier was still visible to everyone.

Regarding posts containing violence: Whenever a user publishes a picture with gore or graphic violence, Koo approves the post but adds an extra layer of caution. The image appears blurred, overlaid with the warning “This content may not be appropriate for all users.” Behind that warning, users still have the option to view, like, or comment on the photograph.
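This blur-behind-a-warning behavior is simple enough to sketch. Again, the scorer, field names, and threshold below are illustrative assumptions, not Koo’s real code.

```python
# Sketch: graphic-violence posts stay up but are wrapped in a
# warning overlay instead of being deleted.

WARNING = "This content may not be appropriate for all users"

def render_post(post, graphic_score, threshold=0.8):
    """Decide how a post should be displayed to viewers."""
    if graphic_score >= threshold:
        # Post remains visible, blurred behind a warning; the viewer
        # can still tap through to view, like, or comment.
        return {"visible": True, "blurred": True, "warning": WARNING,
                "actions": ["view", "like", "comment"]}
    return {"visible": True, "blurred": False, "warning": None,
            "actions": ["view", "like", "comment"]}

gory = render_post({"id": 1}, graphic_score=0.95)
```

The key design choice, per the article, is that the post is never removed — only gated — which distinguishes this path from the deletion flow used for obscene content.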

For such content, Koo has developed a more nuanced approach, moving away from the sweeping delete mechanism it applies to obscene content.

Impersonation: According to the platform’s developers, machine learning is getting better at spotting impersonation. AI supports the detection process, while human moderators still handle the majority of the manual review.

The platform’s impersonation dashboard, which is intended only for company employees, provides key details about the user and the VIP being impersonated. One of its features, Soft Delete, removes all information that could be construed as impersonation, including the name and display picture.
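A "soft delete" of this kind — blanking the impersonation-prone fields while leaving the account itself intact — might be sketched as below. The field names are assumptions for the example.

```python
# Illustrative soft delete for impersonation: strip the fields that
# could be mistaken for the impersonated VIP (name, display picture)
# without deleting the account outright.

IMPERSONATION_FIELDS = ("name", "display_picture")

def soft_delete(profile):
    """Blank impersonation-prone fields; keep the rest of the account."""
    cleaned = dict(profile)  # copy rather than mutate the original
    for field in IMPERSONATION_FIELDS:
        cleaned[field] = None
    return cleaned

fake = {"name": "Famous Person", "display_picture": "vip.jpg",
        "handle": "user123", "bio": "hello"}
cleaned = soft_delete(fake)
```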

Toxic Comments: For the demonstration, a user left an offensive comment on a post. Koo recognizes such comments and hides them; to see them, users must tap the Hidden Comments button. This function operates almost instantly.
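The hide-rather-than-block behavior amounts to partitioning comments into a visible list and a list behind the Hidden Comments control. The keyword-based toxicity check below is a trivial stand-in — a real system would use an ML toxicity scorer — and everything named here is an assumption for illustration.

```python
# Sketch of comment hiding: toxic comments are kept, but tucked
# behind a "Hidden Comments" control rather than blocked outright.

TOXIC_WORDS = {"offensive", "abusive"}

def is_toxic(text):
    """Trivial stand-in for an ML toxicity classifier."""
    return any(word in text.lower() for word in TOXIC_WORDS)

def partition_comments(comments):
    """Split comments into (visible, hidden) lists."""
    visible, hidden = [], []
    for comment in comments:
        (hidden if is_toxic(comment) else visible).append(comment)
    return visible, hidden

visible, hidden = partition_comments(
    ["Nice post!", "This is offensive nonsense"])
```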

According to the company, permitting comments while hiding the toxic ones is Koo’s way of letting people express their opinions; blocking comments outright would amount to curtailing freedom of expression.

In addition to these safety checks, Koo has integrated ChatGPT for its select Yellow Tick users, who can use the AI chatbot’s prompts to create posts on any topic.