While in some cases "more speech" can answer inaccurate or harmful speech, by default the structure of information display on a platform doesn't put "counter speech" in direct conversation with the speech it aims to address.
Some attempts to change this have been made in the realm of crowdsourced annotations: pieces of context that users can attach to primary content, which then appear alongside that content for anyone who views it. Attachment is typically gated behind a companion mechanism of aggregated feedback, which allows the context to be attached only if there is broad consensus about its usefulness.
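To make the gating concrete, here is a minimal sketch of how such a consensus threshold might work. Everything in it is an illustrative assumption, not any platform's actual algorithm: the data model, the thresholds, and the cross-group check are hypothetical.

```python
# Hypothetical sketch of a consensus gate for crowdsourced annotations.
# All names and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Rating:
    rater_id: str
    helpful: bool  # did this rater find the annotation useful?
    cluster: str   # coarse rater group inferred from past behavior, e.g. "A" or "B"

def should_attach(ratings: list[Rating],
                  min_ratings: int = 10,
                  min_helpful_share: float = 0.8,
                  min_cross_group_share: float = 0.3) -> bool:
    """Attach the annotation only if feedback is both broadly positive
    and positive across otherwise-disagreeing rater groups."""
    if len(ratings) < min_ratings:
        return False  # not enough aggregated feedback yet

    helpful = [r for r in ratings if r.helpful]
    if len(helpful) / len(ratings) < min_helpful_share:
        return False  # no broad consensus on utility

    # Require meaningful support within each rater cluster, so a single
    # like-minded bloc can't push an annotation through on its own.
    for group in {r.cluster for r in ratings}:
        group_ratings = [r for r in ratings if r.cluster == group]
        group_helpful = [r for r in group_ratings if r.helpful]
        if len(group_helpful) / len(group_ratings) < min_cross_group_share:
            return False
    return True
```

The cross-group requirement is the interesting design choice: it approximates "broad consensus" by demanding agreement from raters who usually disagree, rather than a raw majority that one faction could supply by itself.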
These systems have the potential to be partial antidotes to misinformation on platforms, though they can't solve the thornier varieties of the problem. They're reactive, they can be abused, they demand effort from contributors, and they outsource the determination of accuracy to the consumers of the content, who are unlikely to be objective, neutral jurors. Despite these flaws, systems like this are starting to gain traction, and where they work, they offer the best of all worlds: an approach that is content neutral, that scales, and that helps users better sort truth from fiction.