Some forms of abuse require platform-level views and metrics to identify. Platforms have access to many metrics on individual pieces of content that regular users have no insight into, as well as a view of how those metrics typically range across the full platform. In most cases, platforms keep this information to themselves: disclosing it risks revealing too much about platform operations and mechanisms, and raises privacy concerns as well.
However, in some cases, platforms should consider letting users surface these metrics to help contextualize information on the platform. For example:
Importantly, in each of these cases, the context the platform provides is purely technical, drawn from data that most major platforms already collect. Adding the option for users to investigate questions of this kind doesn't require the platform to become an arbiter of truth, only a steward of technical metadata.
This broad category of intervention would give users more visibility into the metadata that platforms collect, and could additionally let them crowdsource or highlight that information for other users. Similar to how Crowdsourced Annotations can work, users could pull out pieces of metadata that help them better situate an idea in context.
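To make the shape of this intervention concrete, here is a minimal sketch of what exposing such metadata might look like. All names here (`ContentMetrics`, `MetricBaseline`, `highlightMetric`, and the specific metrics chosen) are hypothetical illustrations, not any real platform's API; the assumption is simply that the platform holds per-content metrics plus a platform-wide "typical range" for each one.

```ts
// Hypothetical shapes for the kind of technical metadata a platform
// already collects. Metric names are illustrative only.
interface ContentMetrics {
  contentId: string;
  sharesPerHour: number; // spread velocity of the content
  accountAgeDays: number; // age of the posting account
  uniqueSharerCount: number; // distinct accounts amplifying the content
}

// A platform-wide "standard range" for one metric, e.g. the
// 5th-95th percentile band computed across all content.
interface MetricBaseline {
  metric: keyof Omit<ContentMetrics, "contentId">;
  p5: number;
  p95: number;
}

// A user-facing highlight: one piece of metadata pulled out to
// contextualize a post, analogous to a crowdsourced annotation.
interface MetadataHighlight {
  contentId: string;
  metric: string;
  value: number;
  note: "below typical range" | "within typical range" | "above typical range";
}

// Compare a post's metric against the platform baseline. The platform
// reports where the value falls; it makes no claim about truth.
function highlightMetric(
  metrics: ContentMetrics,
  baseline: MetricBaseline
): MetadataHighlight {
  const value = metrics[baseline.metric];
  const note =
    value < baseline.p5
      ? "below typical range"
      : value > baseline.p95
      ? "above typical range"
      : "within typical range";
  return { contentId: metrics.contentId, metric: baseline.metric, value, note };
}

// Example: a user asks how fast a post is spreading relative to the norm.
const post: ContentMetrics = {
  contentId: "abc123",
  sharesPerHour: 4200,
  accountAgeDays: 3,
  uniqueSharerCount: 180,
};
const shareBaseline: MetricBaseline = { metric: "sharesPerHour", p5: 0, p95: 350 };
console.log(highlightMetric(post, shareBaseline));
// -> { contentId: "abc123", metric: "sharesPerHour", value: 4200, note: "above typical range" }
```

The design point the sketch illustrates is that the output is purely descriptive: the platform surfaces a measurement and where it sits relative to the norm, and users decide what, if anything, to highlight for others.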