For most content-based harms, the magnitude of harm typically stems not from the content itself but scales with the number of people who access the content, since the harm is done by people consuming the content rather than by the content simply existing. As an example, the financial harm that accrues to copyright holders is approximately linear in the number of people who gain access to the content illegally: this is the number of potential customers the IP holder has missed out on. This type of thinking can be applied to a wide range of (but not all) content-based harms, including such diverse harms as copyright infringement, malware, violent extremism, deepfakes, and health misinformation.
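As a rough illustration of that linear relationship, here is a minimal sketch; the displacement-rate parameter and the example numbers are assumptions invented for the example, not figures from any study.

```python
def estimated_copyright_harm(illegal_accesses: int,
                             unit_price: float,
                             displacement_rate: float) -> float:
    """Rough sketch: financial harm to a rights holder grows roughly
    linearly with the number of people who access the content illegally.
    `displacement_rate` is the assumed fraction of illegal accesses that
    would otherwise have been paid sales."""
    return illegal_accesses * displacement_rate * unit_price

# Example: 100,000 illegal downloads of a $10 work, assuming 1 in 20
# downloaders would otherwise have bought it.
print(estimated_copyright_harm(100_000, 10.0, 0.05))  # -> 50000.0
```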
Platforms provide users the capacity to share content with one another, but they do so under different paradigms of content sharing that fall along a spectrum of visibility.
Which of these sharing modalities a platform supports is a design choice, so tweaking which of these features are enabled is a way to dramatically reduce the potential for harm on a platform.
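To make the spectrum concrete, here is a minimal sketch in Python; the specific visibility levels and the subset a hypothetical platform chooses to enable are assumptions made up for illustration, not a taxonomy from the text.

```python
from enum import IntEnum

class Visibility(IntEnum):
    """Illustrative spectrum of content-sharing paradigms,
    ordered from least to most visible."""
    DIRECT_MESSAGE = 1   # one-to-one sharing
    PRIVATE_GROUP = 2    # shared with a closed group
    FOLLOWERS_ONLY = 3   # shared with an opted-in audience
    PUBLIC = 4           # visible to anyone on the platform
    AMPLIFIED = 5        # eligible for recommendation / viral distribution

# Part of a platform's product decision is which of these paradigms it
# supports at all. A hypothetical platform might simply never enable the
# most visible ones:
ENABLED_VISIBILITIES = {
    Visibility.DIRECT_MESSAGE,
    Visibility.PRIVATE_GROUP,
    Visibility.FOLLOWERS_ONLY,
}

def is_sharing_allowed(requested: Visibility) -> bool:
    """Content can only be shared at visibility levels the platform enables."""
    return requested in ENABLED_VISIBILITIES
```

Turning off the most visible paradigms entirely is the bluntest form of this lever; the principle that follows refines it by tying visibility to trust.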
This leads us to a general principle for the design of content-sharing systems: place limits on the visibility of a user's content that are aligned with their level of trust.
This provides a flexible way of thinking about product restrictions that limit users' capacity to use the platform to generate harm, with a simple mad-lib formulation: "only allow <visibility feature> when a user has <signal of trust>".
Some examples of how this could be manifested as product rules or features are sketched below.
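The rules in this sketch are illustrative assumptions: the specific features, trust signals, and thresholds are made up to show the shape of the mad-lib, not drawn from any particular platform's policy.

```python
from dataclasses import dataclass

@dataclass
class User:
    # Hypothetical trust signals a platform might track.
    account_age_days: int
    phone_verified: bool
    recent_policy_strikes: int

def can_use_feature(user: User, feature: str) -> bool:
    """Rules of the form "only allow <visibility feature> when a user has
    <signal of trust>". Every rule here is an invented example."""
    if feature == "livestream":
        # Only allow livestreaming when the user has a verified phone
        # number and no recent policy strikes.
        return user.phone_verified and user.recent_policy_strikes == 0
    if feature == "public_post":
        # Only allow fully public posts once the account has some tenure.
        return user.account_age_days >= 7
    if feature == "mass_message":
        # Only allow messaging many strangers for accounts with an
        # established, strike-free history.
        return user.account_age_days >= 30 and user.recent_policy_strikes == 0
    # Low-visibility sharing (e.g., one-to-one messages) stays available.
    return True

# Example: a brand-new account with a verified phone number.
new_user = User(account_age_days=1, phone_verified=True, recent_policy_strikes=0)
print(can_use_feature(new_user, "livestream"))   # True
print(can_use_feature(new_user, "public_post"))  # False
```

Each branch instantiates the mad-lib with a different visibility feature and trust signal; the exact thresholds matter far less than the pattern of gating higher-visibility features behind stronger signals of trust.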
While we can often responsibly use trust signals to restrict visibility features, sometimes there is a legitimate question as to whether a platform should support a given visibility level at all.
As always, the most surefire way to mitigate harm is to think carefully about what the bounds of expected and acceptable behavior should be for an application, and to expressly constrain users' on-platform behavior to what is expected.