In a recent update on their website, Facebook-owned WhatsApp, arguably now the most popular messaging platform in the world, announced that as of December 7, 2019, they will begin taking strict measures against spam on their platform.
This action will not be limited to blocking or deleting spam messages or accounts, but will include taking legal action against anybody who sends bulk or non-personal messages through the system, and anybody who helps them do so.
Even companies that publicly claim the ability to send bulk messages on WhatsApp could be targeted.
The message on the site read, in part, “This serves as notice that we will take legal action against companies for which we only have off-platform evidence of abuse if that abuse continues beyond December 7, 2019, or if those companies are linked to on-platform evidence of abuse before that date.”
Take Note – Private Messages Only
Although WhatsApp did not say why that particular date was chosen, companies currently sending out bulk messages, including verified WhatsApp Business Solution Providers, who have access to the system for business purposes, now have six months to make the changes needed to protect themselves from the anti-spam initiative.
Businesses are still welcome to use the platform, but only to talk to one customer at a time; broadcast messages will no longer be allowed.
Even before this happens, though, WhatsApp is cutting the limit on forwarded messages: where previously a message could be forwarded to up to 20 recipients, the limit is now being reduced to five.
Fighting Spam
The company explained that WhatsApp was built for private messages, to help people chat with friends and family, conduct business, or talk confidentially to a doctor, and not to build audiences or share things widely. Other platforms exist for that.
In a recently published white paper, the company provided some details of the systems it will use to identify and fight spam. These include monitoring IP addresses previously linked to suspicious behaviour and detecting automated use of the platform, such as flagging messages sent faster than they could plausibly have been typed, accounts that quickly create large numbers of groups, and accounts that add thousands of users to groups in a short space of time.
Newly registered accounts that send large numbers of messages in quick succession, and high-intensity behaviour that does not conform to typical usage patterns, will also be flagged or banned.
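To make the idea concrete, here is a minimal sketch of the kind of rate-based heuristics described above. It is purely illustrative and not WhatsApp's actual implementation: the thresholds, field names, and the AccountActivity structure are all hypothetical.

```python
# Illustrative sketch of rate-based spam heuristics like those described
# in the white paper summary above. All thresholds and names are invented.
from dataclasses import dataclass


@dataclass
class AccountActivity:
    account_age_hours: float       # time since registration
    messages_last_hour: int        # messages sent in the past hour
    groups_created_last_hour: int  # groups created in the past hour
    users_added_last_hour: int     # users added to groups in the past hour
    min_send_interval_secs: float  # shortest gap observed between sends


def spam_flags(a: AccountActivity) -> list[str]:
    """Return the reasons this account looks automated or spammy."""
    flags = []
    # New accounts that immediately send large volumes of messages
    if a.account_age_hours < 24 and a.messages_last_hour > 100:
        flags.append("new account, high message volume")
    # Rapid group creation or mass-adding users in a short space of time
    if a.groups_created_last_hour > 20:
        flags.append("rapid group creation")
    if a.users_added_last_hour > 1000:
        flags.append("mass user additions")
    # Messages sent faster than a human could plausibly type them
    if a.min_send_interval_secs < 0.5 and a.messages_last_hour > 10:
        flags.append("implausibly fast sending")
    return flags


# Example: a half-day-old account blasting out messages every few milliseconds
suspect = AccountActivity(12, 500, 30, 5000, 0.05)
print(spam_flags(suspect))
```

A real system would combine many more signals, such as the IP-address reputation mentioned above, rather than relying on a handful of fixed thresholds.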
In the last year, more than two million accounts were banned, 75% of which were flagged internally, rather than as a result of reports or complaints by users.