Thanks for reading, G! You raise some good points and I appreciate the time you took to reference several things.
The article certainly doesn’t call for active censorship, but a couple of responses have raised that concern, which suggests it could be clearer. Here’s an attempt to explain: the graf on the Sedition Act was cited as an extreme response in 1918 re: potential antiwar sentiment, and as a reason why concerns about curtailing free speech/censorship are valid (Wilson was a bit of a control freak, to put it lightly). The Clear and Present Danger test was one way to *measure* the limits of free speech during a crisis. Was it perfect? Of course not! But it’s cited to describe how Holmes (and others) were *thinking* about measurement. There are standards in op-ed editorial meetings, but they tend to be amorphous and subjective, which is to say not really standards at all. This is why we don’t usually read extremely hateful views, or views that call for mass harm (harm by commission, at least). Often, though, there isn’t a rubric to measure the validity of a piece. Here I’m getting more at scientific validity. Of *course* we need contrarian views: the Kuhn graf, and the quote from the Hopkins professor in Carson’s book, speak to science evolving via contrarian views. We definitely do *not* want groupthink.
That said, the key question/aim of this piece was to ask: is there a better way to judge the validity of these views? Specifically: i) a way to judge the evidence incorporated (strong vs. weak, etc.); ii) if the op-ed passes the validity test, does it then pass an imaginary test of ‘harm’? (This is less of a concern if the evidence is strong; data-driven positions should prevail, and if they don’t, that is clearly censorship.) But if the evidence is weak AND the position is harmful, that suggests a view that just isn’t up to the standard *for the popular press*, at least when the press is ALSO serving as the main source of public health information. This last bit is super key, and it’s similar to arguments about shoddy traditional reporting: triangulate and use data effectively, be honest about sources, etc. So I’m not getting at opinions in general or “thought policing” by any means, but specifically at providing a platform for opinions about an active pandemic *when those opinions are not backed by evidence, or are backed by low-quality evidence.* Note that neither of these points gets directly at “who” is writing these op-eds. Expertise is tricky, as I mentioned, but much of the criticism thus far tends to home in on the ‘who’ (i.e., making it personal), whereas it seems more productive to pan out. Why? Because it’s often far easier to blame an individual and make them a scapegoat than to look at the other, often more interesting, systemic factors at play.
Does this make sense? Perhaps the article could be clearer with a sentence or two cementing the above! I appreciate the time you took to outline your concerns here, as it’s provided an opportunity to clarify a few key points for other readers wondering the same thing. Thanks for reading, G. :)