Facebook has published the latest version of its Community Standards Enforcement Report, which outlines all of the policy and rule violations that Facebook has taken action against, on both Facebook and Instagram, over the past three months.
The report now covers 12 policy areas on Facebook and 10 on Instagram, with this latest version providing more information on Instagram content in particular.
As explained by Facebook:
“The report presents Instagram data in four problematic areas: hate speech, adult nudity and sexual activity, violent and graphic content, and bullying and harassment. For the first time, we are also sharing data on the number of appeals people are making about content we’ve taken action on in Instagram, and the number of decisions we’re reversing, either based on those appeals or when we identify the issue ourselves.”
Ideally, those two datasets would be shown on the same chart to illustrate appeals versus reinstatements (it looks like Instagram has restored about a quarter of the content it initially removed following an appeal), but even so, the information is valuable, and helps provide some context as to the level of activity Facebook's moderators are facing.
In terms of upward trends, on Facebook there appears to have been a slight increase in its enforcement against drug-related posts in recent months:
This is partly due to better detection, but also reflects increased user activity.
The data also shows that hate speech removals have increased – though Facebook largely attributes this to improved detection technology, which, among other things, has also enabled it to detect hate speech in more languages.
Also of note – Facebook says that fake accounts still represent around 5% of its monthly active users worldwide, despite improvements to its detection and removal processes in recent months. That equates to approximately 130 million fake profiles active on the platform.
On Instagram, Facebook reported an increase in actions against nudity and content related to child exploitation, while it also saw a significant increase in the removal of terrorism-related posts.
Again, it's important to note that these charts represent actions taken, which may be the result of improved processes as much as an increase in relative activity. Still, it looks like a significant jump – could Instagram be becoming a new target for this kind of material?
Instagram has also improved its detection of suicide and self-harm content, which has led to an increase in removals on this front, while bullying on the platform – a key focus area for the app – remained stable from the previous quarter.
In addition to policy enforcement, Facebook also reported that government requests for user data increased 9.5% in the last six months of 2019, in line with the ongoing trend of governments seeking greater insight into Facebook data.
Government agencies and their affiliates are taking Facebook more seriously, with an evolving understanding of the value of Facebook data for various purposes. The lower trendline on the chart above shows the proportion of those requests for which Facebook provided some level of data in response, indicating that Facebook has remained consistent in its approach. Inevitably, though, this does mean that more Facebook data is being handed over, overall, in this respect.
The United States continues to submit the largest number of requests, followed by India, the United Kingdom, Germany and France.
Overall, the trends largely suggest that Facebook’s systems are improving, and with those improved detection measures in play, it’s difficult to determine which elements are seeing genuine increases in activity, as opposed to Facebook simply getting better at finding them. Some of the upward trends are concerning, but ideally, Facebook is removing more of this type of content – and certainly, the specific notes around removing more self-harm content on Instagram, for example, are a positive sign.