By: Fajeera Asif
Social media platforms claim to be neutral spaces for free expression. Yet a recent study by feminist researchers on tackling marginalization in online spaces found that tech companies consistently fail to protect marginalized users from digital violence in Pakistan. While these platforms comply with state censorship demands, they rarely intervene when organized hate campaigns target women, transgender individuals, and religious minorities. This failure not only allows harassment to escalate but also enables real-world harm, making it clear that stronger regulatory enforcement is necessary. But can these tech giants be forced to act? The research suggests that without legal and financial consequences, corporate accountability will remain elusive.
How Platforms Enable Hate and Harassment
The research highlights that social media platforms’ moderation systems are structurally biased against marginalized users, largely because they fail to regulate hate speech in Urdu and regional languages. While content moderation policies claim to tackle misinformation and harassment, the study found that hate speech in local languages targeting women, trans persons, and religious minorities spreads unchecked.
One of the starkest findings is the selective enforcement of content removal policies. Meta (Facebook & Instagram) complies with 82.3% of takedown requests from the Pakistani government, but these requests often target political dissent rather than hate speech against vulnerable groups. TikTok removes 94.4% of content flagged for “immorality” yet provides minimal user data to the Federal Investigation Agency (FIA), making it difficult to track and prosecute organized harassment networks. Meanwhile, X (formerly Twitter) remains largely unresponsive to local abuse reports, allowing misogynistic, transphobic, and sectarian content to remain online indefinitely.
This failure to regulate hate speech allows orchestrated online violence to escalate into real-world threats. The research documents cases where viral hate campaigns led to physical assaults, arrests, and even forced displacement of religious minorities—all facilitated by platforms that failed to act despite clear policy violations. Doxxing, blasphemy accusations, and mass reporting attacks remain key digital weapons used against marginalized communities, yet platforms provide no meaningful protection mechanisms.
Why Tech Companies Refuse to Act
The research identifies three primary reasons social media companies fail to protect marginalized users in Pakistan.
First, profit-driven moderation means that ranking algorithms prioritize high-engagement content, including inflammatory and hateful posts. Since outrage drives clicks, shares, and ad revenue, platforms let harmful content remain online longer, increasing its visibility and engagement.
Second, fear of government retaliation discourages companies from regulating hate speech that aligns with state-backed ideologies. The research found that platforms quickly remove dissenting voices critical of the state but hesitate to act against digital violence fueled by political or religious factions.
Third, a lack of regional moderation capacity means companies rely on AI-based moderation that is not trained to detect hate speech in Urdu and regional languages. This gap allows coordinated hate campaigns to operate freely, as harmful content fails to trigger automated detection and remains online until significant damage is done.
Even when survivors report harassment and threats, the research shows platforms rarely respond effectively. Takedown requests are often ignored, appeals from marginalized users are deprioritized, and repeat offenders continue using platforms without consequence. This forces many women, trans persons, and religious minorities into self-censorship or withdrawal from digital spaces.
Can Tech Companies Be Forced to Act? The Case for Regulation
The research makes it clear: self-regulation by social media companies has failed. Without legal, financial, and regulatory pressure, these platforms will keep prioritizing engagement metrics over user safety. The study calls for a combination of domestic legislation, international oversight, and grassroots activism to hold tech companies accountable.
One key recommendation is to mandate local-language content moderation. Platforms must be legally required to invest in Urdu and regional-language moderation teams, ensuring hate speech targeting marginalized groups is removed as effectively as English-language abuse. Without this, AI-driven moderation will keep failing Pakistani users.
Additionally, transparency requirements must be strengthened. The research proposes quarterly reports from social media companies detailing hate speech removal rates for Urdu and regional languages, response times for harassment complaints from Pakistani users, and a breakdown of content moderation actions, so that marginalized groups receive the same level of protection as other users.
Another critical recommendation is creating a fast-track reporting system for marginalized groups. Current complaint mechanisms are slow and ineffective, leaving survivors vulnerable to escalating abuse while waiting for a response. A dedicated rapid-response abuse reporting pathway would ensure immediate takedown of content inciting violence against women, trans persons, and religious minorities.
Regulation Is the Only Path Forward
The research demonstrates that social media companies cannot be trusted to protect marginalized users voluntarily. As long as hate speech and digital harassment remain profitable, tech giants will continue ignoring their role in amplifying online violence. Only through regulatory enforcement can these platforms be forced to act.