When one says something like “most scholars think x” or “the theory of y has not convinced many experts”, how is that actually determined? Are there polls conducted regarding different theories?

  • TurboTurbo@feddit.nl
    1 year ago

    Systematic reviews and meta-analyses combine information from multiple studies. In the former, those studies are interpreted together to see what the overall conclusion is. In the latter, the data from multiple studies are actually reanalyzed to get an objective overall outcome. Some of these reviews combine information from tens or hundreds of studies. Generally, scientists treat the conclusions of these review papers as the current state of knowledge in the field.
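
    To give a feel for the "reanalyzed together" part, here is a minimal sketch of one common pooling approach (a fixed-effect, inverse-variance weighted average; the effect estimates and standard errors below are made up for illustration):

    ```python
    # Fixed-effect meta-analysis sketch: pool effect estimates from
    # several studies, weighting each by the inverse of its variance,
    # so more precise (usually larger) studies count more.
    # All numbers are hypothetical.
    studies = [
        # (effect_estimate, standard_error)
        (0.30, 0.10),
        (0.25, 0.05),
        (0.45, 0.20),
    ]

    weights = [1 / se**2 for _, se in studies]
    pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5

    print(f"pooled effect = {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI)")
    ```

    Note how the single outlying estimate (0.45) barely moves the pooled result, because its large standard error gives it little weight. Real meta-analyses also check heterogeneity and often use random-effects models instead.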

  • NetHandle@kbin.social
    1 year ago

    Scholarly articles have ‘impact’ measurements, i.e. the impact they have on their field. My understanding is that it’s a combination of the number of times an article has been cited and the number of times it has been downloaded/read, with a heavier weighting toward citations. You can filter articles by ‘impact’ in many library databases.
    A theory that is not well accepted will be cited less. Even if a paper is cited only to be debunked, those citations still count as impact; however, an article with much greater impact will be cited significantly more, which suggests its theory is more compelling.

    As far as my understanding goes.

    • TurboTurbo@feddit.nl
      1 year ago

      You are mixing up the journal impact factor (how often articles from a particular journal are cited, generally over the last two years) and an article’s number of citations (the total number of times an article has been referred to by other articles).

      Journals with a high JIF are generally harder to publish in, but this metric is quite misleading as it also depends on the size of the field. Ophthalmology journals have overall smaller JIFs than neurology journals.
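
      For the curious, the JIF arithmetic itself is simple (the figures below are hypothetical; the real calculation is done over Clarivate’s citation database):

      ```python
      # Journal impact factor for year Y:
      #   citations received in year Y by items the journal published
      #   in years Y-1 and Y-2, divided by the number of citable items
      #   it published in Y-1 and Y-2.
      # All figures are made up for illustration.
      citations_in_2024_to_2022_2023_items = 1500
      citable_items_2022 = 250
      citable_items_2023 = 250

      jif_2024 = citations_in_2024_to_2022_2023_items / (
          citable_items_2022 + citable_items_2023
      )
      print(jif_2024)  # 3.0
      ```

      This also shows why field size matters: a small field simply has fewer papers around to generate citations in that two-year window.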

      Over the last ~5 years the JIF has fallen out of favor for various reasons (read about them here), but unfortunately it is still being used as a measure of ‘how good your research is’.

      More importantly, (good) scientists don’t base their opinions on single studies, even large ones. Certainly, when there is only a single publication on a topic (e.g., right when the COVID pandemic started, any new research on the topic received a lot of attention), it may have a large impact on their current beliefs and may spark new research. More commonly, though, there are multiple studies on a single topic (e.g., there are now hundreds of studies on the effect of COVID on cognitive function, isolation, etc.), and a good scientist will try to keep up with all of them to form a considered opinion that weighs all these studies. It is common that at some point a review article is written up that summarizes the current knowledge from multiple studies (see my other comment).

  • Tangent@lemmy.world
    1 year ago

    In modern media it pretty much just means they found two people who think that. If they want to get “official” they can arrange for polls to be done but those are very easily crafted to get the results they want.

  • Jo@readit.buzz
    1 year ago

    Professional bodies or academics do sometimes survey their fields, especially when it’s politically important to make a point, e.g.:

    Two thirds of economists say Coalition austerity harmed the economy

    Top economists warn ending social distancing too soon would only hurt the economy

    Rival schools of thought often organise letters implying that their stance is the ‘consensus’ (whether or not that claim is reasonable). Or a campaign to establish a new consensus is launched in an academic paper.

    For some fields, like medicine, various organisations produce guidelines, which are increasingly evidence-based rather than opinion-based (i.e. they look at the evidence rather than surveying professional opinion). The guidelines are not necessarily the consensus, but if there are substantial errors or omissions these are likely to be protested and, where appropriate, corrected. Consensus groups are sometimes convened to produce statements with some weight, but they are vulnerable to manipulation; I know of one which reconvened after new data were available, and the chair (who was well-funded by the drug company) simply expelled everyone who’d changed their minds.

    So, there are some formal and informal mechanisms but it’s really very difficult to discern what ‘the’ consensus is from outside of a field (or even from outside of a very specific niche within a field). The sorts of claims you cite in your OP are often quite reasonable but they’re often also misleading (and quite difficult to prove either way). If anything important rests on the claim, you need to dig a bit (lot) deeper to find out if it’s reasonable. And, of course, bear in mind that facts change and today’s minority might be tomorrow’s majority.

  • lolola@lemmy.world
    1 year ago

    I don’t think it’s an objective metric. Based on my experience, they talk amongst each other at research institutions, conferences, and through journal articles. If someone claims “most experts think x” when in reality most experts do not, then most experts hearing it will probably speak up about how wrong it is, shoot it down during peer review, or publish scathing critiques in response to it.

    A “most experts” proclamation that aligns with reality will also cite several prior publications that have also been read and cited widely, which shows the idea has kinda stood the test of time.

    Source: I’ve been in the game a while, despite several attempts to escape. I do wonder whether other fields have more objective approaches.