• Gaywallet (they/it)@beehaw.org · 1 year ago

    I think what’s surprising about it is that this isn’t a laundry list of shitty journals. High-quality journals have a fairly rigorous review process meant to surface and deal with exactly this kind of thing. The bigger journals are quite good at spotting simple techniques like omitting data or p-hacking, but at least historically they appear to have been less resistant to image manipulation. That said, I’ve never been a prolific researcher going through the submission process at a journal with the prestige of Science or Nature, and it’s very possible that they relax the process for high-profile people or frequent submitters. Either way, I’m sure many journals are watching this unfold quite closely, as there will be much to learn about making review processes more resilient to issues like this.

    • RBG@discuss.tchncs.de · 1 year ago

      quite good at spotting simple techniques like omitting data or p-hacking

      I don’t know about that. Spotting omitted data would only work if a key experiment is obviously missing, or if a reviewer suggests a control experiment that was actually done but not shown. Or what do you mean?

      And how would you spot p-hacking? That would only work if you could see all the underlying raw data. Otherwise, especially in high-impact journals, the p-values are always excellent when they need to be.
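
      To make that concrete, here’s a minimal Python sketch (entirely hypothetical, pure-noise data) of why the published number alone can’t reveal p-hacking: run enough exploratory tests on noise and one of them will look significant, and that one is all the reviewer ever sees.

      ```python
      # Minimal sketch: "p-hacking" by running many subgroup tests on pure
      # noise and reporting only the best p-value. All data is simulated.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      p_values = []
      for _ in range(20):  # 20 "exploratory" subgroup analyses
          a = rng.normal(size=30)  # "control" group: pure noise
          b = rng.normal(size=30)  # "treatment" group: same distribution
          p_values.append(stats.ttest_ind(a, b).pvalue)

      print(f"best p-value out of 20 tries: {min(p_values):.3f}")
      # With 20 independent null tests, the chance that at least one lands
      # below 0.05 is 1 - 0.95**20, roughly 64%. Without the raw data and
      # the full list of tests run, that one p-value looks legitimate.
      ```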

      • Pigeon@beehaw.org · 1 year ago

        Not to mention these peer review processes rely on unpaid labor from professionals who are heavily incentivized to use their time for basically anything else. They skim.

        The replication crisis does not at all exclude highly regarded journals, unfortunately.

      • jarfil@beehaw.org · 1 year ago

        That would only work if you’d be able to see all underlying raw data.

        A paper without the underlying raw data is like a bicycle without wheels: you know it might’ve been useful at some point, but it isn’t anymore.

        Very few papers publish both the raw data and the analysis tools used on it, so that everyone can verify the results.

        The rest are no different from a 4th grader who writes down an answer and, when the teacher asks them to “show your work”, comes back with “no, trust me, my peers agree I’m right, you do your own work”.
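
        For contrast, “showing your work” can be as simple as shipping the raw data file together with the exact script that produces the reported statistic. A minimal hypothetical sketch (the filename and column names are made-up placeholders):

        ```python
        # Minimal sketch of a verifiable analysis: the raw data file is
        # published alongside the paper, and this script reproduces the
        # reported statistic. "raw_measurements.csv", "group", and "value"
        # are hypothetical placeholder names.
        import pandas as pd
        from scipy import stats

        df = pd.read_csv("raw_measurements.csv")
        control = df.loc[df["group"] == "control", "value"]
        treated = df.loc[df["group"] == "treated", "value"]

        result = stats.ttest_ind(control, treated)
        print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
        # Anyone with the CSV can rerun this and check the paper's numbers.
        ```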

        It’s extra sad when you contact a researcher directly for the data and get “it got lost in the last lab move”, “I’m only giving it to you if you show me how you’re going to process it first”, or some clearly spotty data backfilled from the paper’s conclusions.