AI ‘Slop’ Channels Thrive on YouTube Despite Crackdown, Indian Channel ‘Bandar Apna Dost’ Leads the Surge
Despite YouTube’s repeated assurances that it is tightening rules around low-quality, mass-produced AI content, new research suggests such videos continue to flourish on the platform, and to make serious money. A recent study by video-editing company Kapwing highlights how YouTube’s recommendation engine is still actively surfacing what it terms “AI slop”, even to first-time users.
According to Kapwing’s analysis of content from 15,000 of the world’s most popular YouTube channels, more than one in five videos recommended to newly created accounts fall into this category. These videos are typically cheap to produce, highly repetitive, and optimised to maximise clicks and watch time rather than deliver meaningful value. The findings raise questions about how effective YouTube’s current enforcement mechanisms really are, particularly as generative AI tools become more accessible.
The scale of the phenomenon is striking. Kapwing identified 278 channels that upload nothing but AI slop. Collectively, these channels account for an estimated 63 billion views and around 221 million subscribers. While YouTube’s policies say such low-quality AI-generated videos are not eligible for monetisation, the report estimates these channels may still be earning nearly $117 million annually through indirect revenue streams.
India features prominently in the study’s findings. The most-viewed AI slop channel flagged by Kapwing is ‘Bandar Apna Dost’, which is believed to be based in India. The channel has already crossed 2.4 billion views in just a few months. Its videos usually feature an AI-generated rhesus monkey with human-like behaviour, often alongside a Hulk-like muscular character, locked in repetitive battles against demons. These dramatic, formulaic storylines are designed to keep viewers watching for longer.
Kapwing estimates that ‘Bandar Apna Dost’ alone could be generating about $4.25 million a year (roughly Rs 38 crore), even without official monetisation under YouTube’s standard ad policies.
To better understand how this content reaches audiences, the researchers also tested YouTube’s recommendation system directly. After creating a brand-new account, they tracked the first 500 videos shown on its home feed. Of these, 104 were labelled as AI slop. Around one-third of the remaining recommendations were classified as “brain rot”, a term used to describe low-effort, highly repetitive videos created primarily to exploit the platform’s algorithms.
The report describes a rapidly expanding ecosystem built around generative AI. On one side are creators churning out dozens of AI-generated videos daily using free or inexpensive tools. On the other are individuals operating in grey areas, selling courses and “guaranteed viral” strategies that teach others how to replicate this engagement-farming model.
YouTube has taken some visible action. Earlier this month, the platform reportedly blocked two large channels uploading fake AI-generated movie trailers. However, critics argue these steps lag behind the pace at which such content is spreading.
Platforms themselves appear divided on AI content. During an earnings call in October, Meta CEO Mark Zuckerberg said AI would help create “yet another huge corpus of content” for Facebook and Instagram. YouTube, too, is investing heavily in AI, recently integrating Veo 3 into Shorts to enable in-app AI video creation.
Responding to Kapwing’s study, YouTube defended its stance, saying: “Generative AI is a tool, and like any tool it can be used to make both high- and low-quality content.” The company added that it remains focused on connecting users with high-quality videos and that all content must comply with community guidelines, regardless of how it is produced.