Of course, not Tomi Lahren though…

    • the_stormcrow@lemmy.ml · 9 months ago

      Hmm, not seeing that in the link provided…maybe it’s been updated, or an issue on my end?

        • the_stormcrow@lemmy.ml · 9 months ago

          Thanks kind internet stranger! Also, am I just obtuse, or did the last questions not have response data?

          As an unrelated aside, who the hell has that kind of time for a survey?!

    • Gorilladrums@sh.itjust.works · edited · 9 months ago

      Okay, mild progress… But where’s the data for it?

      Also, this is from the methodology pdf:

      We used the following sources to recruit respondents:
      ● targeted advertisements using the Meta advertising platform
      ● SMS text messages

      Regardless of which of these sources a respondent came from, they were directed to a survey hosted on SurveyMonkey’s website.

      Ads placed on social media targeted likely voters nationwide. Those who indicated that they were not registered to vote were terminated. Those who indicated they were over the age of 34 were terminated. As the survey fielded, Change Research used dynamic online sampling: adjusting ad budgets, lowering budgets for ads targeting groups that were overrepresented, raising budgets for ads targeting groups that were underrepresented. The survey was conducted in English.

      So this is a self-reported online poll with 84 oddly phrased questions, advertised primarily on Instagram and Facebook and hosted on a site respondents were redirected to. The methodology seems dodgy, and even the people conducting it agree:

      We adopt The Pew Research Center’s convention for the term “modeled margin of error”(1) (mMOE) to indicate that our surveys are not simple random samples in the pure sense, similar to any survey that has either non-response bias or for which the general population was not invited at random. A common, if imperfect, convention for reporting survey results is to use a single, survey-level mMOE based on a normal approximation. This is a poor approximation for proportion estimates close to 0 or 1. However, it is a useful communication tool in many settings and is reasonable in places where the proportion of interest is close to 50%. We report this normal approximation for our surveys assuming a proportion estimate of 50%.
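
      For reference, the normal-approximation margin of error that quote describes is easy to compute yourself; here's a minimal sketch (the 95% z-score of 1.96 and the sample size of 1,000 are illustrative assumptions on my part, not figures from this poll):

      ```python
      import math

      def modeled_moe(n, p=0.5, z=1.96):
          """Normal-approximation margin of error: z * sqrt(p * (1 - p) / n).

          Uses p = 0.5, the worst case and the convention the methodology
          quote describes, with z = 1.96 for a 95% confidence level.
          """
          return z * math.sqrt(p * (1 - p) / n)

      # For an assumed sample of 1,000 respondents:
      print(round(100 * modeled_moe(1000), 1))  # margin of error in percentage points
      ```

      The point of the quote is that this single number is only honest near p = 50%; for answers where nearly everyone agreed or disagreed, it overstates the uncertainty.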

      Thanks for the link regardless