andrewla · 3 years ago
Unfortunately I was not able to locate a preprint for the paper itself, so all we have is this article summarizing it.

First I'll say that without preregistration of the methodology, there's a lot that is immediately suspicious.

> The researchers built an index based on 15 relevant questions in the NCHA, in which students were asked about their mental health in the past year

Why these 15? What was the criterion for "relevance"?

To their credit, they don't just look at a single summary metric of "mental health", which would be kind of absurd since the relative weighting of the questions is also arbitrary (although that summary metric does appear to be the basis of the main conclusion). The article here notes several axes on which significant differences were found. Why these axes? What about other "mental health" metrics? Did they improve, stay neutral, or simply show no detectable effect?

Without preregistration it's almost impossible to determine exactly how cherry-picked these differences are: with a large enough set of potential questions to choose from, you will find statistically significant trends on some of them by random chance alone.
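
To make that concrete, here is a quick simulation (my own sketch, nothing from the paper): with 15 independent null tests at p < 0.05, you should expect at least one spurious "finding" roughly half the time, since 1 - 0.95^15 ≈ 0.54.

    # Pure-noise data: any "significant" difference below is a false positive.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_students, n_questions, alpha = 500, 15, 0.05

    hits = 0
    for _ in range(1000):
        a = rng.normal(size=(n_students, n_questions))  # "no Facebook" schools
        b = rng.normal(size=(n_students, n_questions))  # "Facebook" schools
        pvals = stats.ttest_ind(a, b).pvalue            # one t-test per question
        hits += (pvals < alpha).any()

    print(hits / 1000)  # roughly 0.54, matching 1 - 0.95**15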

The core methodology is to track the spread of Facebook to different colleges and compare mental health between schools that had Facebook and schools that did not yet have it. This is surprisingly not terrible, but without insight into how the study controlled for the time axis, and for confounders arising from the non-random selection of schools in the rollout, it's difficult to say more.
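
For reference, the textbook way to exploit a staggered rollout like this is a difference-in-differences regression with school and time fixed effects. I don't know whether that is what the authors actually did; the sketch below is purely hypothetical, and all the column names (mh_index, has_facebook, school, semester) are made up:

    # Hypothetical two-way fixed-effects sketch; assumes a panel with one row
    # per school-semester. None of these names come from the actual paper.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("ncha_panel.csv")  # made-up file name

    # School effects absorb fixed differences between campuses; semester
    # effects absorb nationwide time trends. The has_facebook coefficient is
    # then identified only by the timing of the rollout across schools.
    fit = smf.ols(
        "mh_index ~ has_facebook + C(school) + C(semester)", data=df
    ).fit(cov_type="cluster", cov_kwds={"groups": df["school"]})
    print(fit.params["has_facebook"], fit.bse["has_facebook"])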

andrewla · 3 years ago
I hate that I was baited into taking a closer look at this rather than just sticking with my trite dismissal. I did locate a preprint of the paper [1], but have not yet looked at it to determine whether any of my criticisms above hold water.

Nonetheless I remain blithely confident that this study is not going to be the one to break the mold.

[1] https://www.econstor.eu/bitstream/10419/256787/1/1801812535.pdf

rajup → andrewla · 3 years ago
So it is a low-effort shallow dismissal then?
lcnPylGDnU4H9OF → rajup · 3 years ago
Certainly a dismissal, but at this point it seems rather disingenuous to call it low-effort and shallow.

(Also, please consider this friendly piece of advice: check yourself!)

Tainnor → lcnPylGDnU4H9OF · 3 years ago
The follow-up comment is not low-effort and shallow; the original one was.

Not sure why OP considers themselves to have been "baited" when the conversation has, IMHO, been greatly improved by them substantiating their criticism (which may well have merit).

lcnPylGDnU4H9OF → Tainnor · 3 years ago
Fair points!

The comment I responded to seemed to attribute those qualities to OP's later comments, which would be unfair. The dismissal of the dismissal still comes across as low-effort and shallow.

maxbond → lcnPylGDnU4H9OF · 3 years ago
I think their point was more that, after having obtained the preprint, they still weren't able to produce anything specific about the article, which demonstrates that their dismissal was low-effort and unfair (and I think it's fair to call on them to admit that). The amount of effort required to dig into the methodology is high (and it's understandable if one doesn't want to spend their leisure time that way), but that's exactly why we can't go around spreading bullshit: it's so simple to do and takes so much effort to remediate, and often the damage is already done.

As always, it's better to go to another thread if a topic doesn't interest you, rather than disrespect people's time & energy by attacking the validity of the topic itself.

lcnPylGDnU4H9OF → maxbond · 3 years ago
I dunno, I guess we just disagree. By the time I made my comment, we had this: https://news.ycombinator.com/item?id=32942901

I think neither that comment nor the commenter's self-reply should be considered lacking in effort. (Arguably the attitude that comes across when they complain about being "baited" isn't great, but their intended meaning seems fine.)

I do not take offense to the response calling out OP's first comment as low-effort and shallow because it was both of those things. I just can't see the comment I responded to as defensible with such a strong combination of irony and infelicity.

nequo · 3 years ago
> Without preregistration it's almost impossible to determine exactly how cherry-picked these differences were

It is hard to credibly preregister studies that use observational data. It also seems hard to design an experiment around the rollout of a social-media service that we know ahead of time will be successful.

Instead, what is usually done on observational data is (1) making clear what the statistical assumptions are that are required to establish causality, (2) testing possible violations of the assumptions, and (3) testing whether the data is consistent with alternative explanations.

So in such papers, results don't come for free. We need to think seriously about what reasonable theories we can have, and whether the data matches each theory.
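
To make (2) concrete, one common check is a placebo test: pretend the rollout happened a couple of semesters earlier than it did, and verify that the estimated "effect" is indistinguishable from zero. A hypothetical sketch (all column names made up, not taken from the paper):

    # Placebo test sketch: shift treatment two semesters early and estimate
    # the "effect" on not-yet-treated observations only.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("ncha_panel.csv").sort_values(["school", "semester"])
    df["placebo"] = df.groupby("school")["has_facebook"].shift(-2).fillna(0)

    pre = df[df["has_facebook"] == 0]  # drop post-rollout observations
    fit = smf.ols("mh_index ~ placebo + C(school) + C(semester)", data=pre).fit()
    print(fit.params["placebo"], fit.pvalues["placebo"])  # want ~0, large p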

> without insight into how the study controlled for the time axis and potential confounding variables about the non-random selection of schools for the rollout, it's difficult to say more.

The paper does use alternative assumptions that lead to alternative statistical specifications, and it also looks at various intermediate outcomes to see whether they are consistent with the proposed narrative. Such defensive writing is what blows the PDF up to almost 80 pages.
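
One example of what such an alternative specification can look like (again hypothetical, not the paper's actual code) is an event study: replace the single treatment dummy with dummies for time relative to each school's rollout, and check that the pre-rollout coefficients are flat.

    # Event-study sketch with made-up column names; semester_num is assumed
    # to be an integer encoding of the semester.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("ncha_panel.csv")
    first = df[df["has_facebook"] == 1].groupby("school")["semester_num"].min()
    df["event_time"] = df["semester_num"] - df["school"].map(first)

    es = df.dropna(subset=["event_time"]).astype({"event_time": int})
    fit = smf.ols(
        "mh_index ~ C(event_time, Treatment(reference=-1))"
        " + C(school) + C(semester)",
        data=es,
    ).fit()
    print(fit.params.filter(like="event_time"))  # pre-rollout terms should be ~0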

FollowingTheDao · 3 years ago
It is a working paper, and you can find the full text here:

https://www.econstor.eu/handle/10419/256787