med-mastodon.com is one of the many independent Mastodon servers you can use to participate in the fediverse.
Medical community on Mastodon


#OpenAlex


It took ten years, but HAL has started implementing a part of it (ccsd.cnrs.fr/en/2023/12/hal-s- ). Unlike @pintoch (antonin.delpeuch.eu/posts/retr ), I consider this a success for Dissemin.

But we need more. Every repository should do it, but given the effort, it is more realistically done by consortial, national, or international bodies. Some parts of the service, like #OpenAlex / #Unpaywall and #InternetArchiveScholar, are so expensive that we can probably afford just one instance, shared by users worldwide.

www.ccsd.cnrs.fr: HAL’s new service: deposit suggestions for automatic import of Open Access publications
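The kind of deposit suggestion HAL describes can be prototyped against the public OpenAlex API: query an author's open-access works, then offer their DOIs as import candidates. A minimal sketch, assuming this flow; the author ID `A5000000001` and the sample response below are invented placeholders for illustration:

```python
from urllib.parse import urlencode

OPENALEX_API = "https://api.openalex.org/works"

def build_oa_works_url(author_id: str, per_page: int = 25) -> str:
    """Build an OpenAlex query for one author's open-access works."""
    params = {
        "filter": f"authorships.author.id:{author_id},is_oa:true",
        "per-page": per_page,
    }
    return f"{OPENALEX_API}?{urlencode(params)}"

def deposit_candidates(response: dict) -> list[str]:
    """From a parsed OpenAlex response, pick DOIs of open-access
    works that a repository might suggest for deposit."""
    return [
        w["doi"]
        for w in response.get("results", [])
        if w.get("doi") and w.get("open_access", {}).get("is_oa")
    ]
```

A repository-side job could fetch `build_oa_works_url("A5000000001")`, diff `deposit_candidates(...)` against its own holdings, and notify the author about the missing items.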

Last year, we found that @OpenAlex was mistakenly marking some papers as retracted, misleading researchers. Together with @hauschke, we reported the issue, posted a preprint in March 2024, and… #OpenAlex fixed it almost instantly! 👏

doi.org/10.1177/01655515251322

Meanwhile, our peer-reviewed paper on this? Published today - over a year later. See the difference? Preprints matter. Open science works.
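OpenAlex exposes retraction status as a boolean flag on each work record, which is what made the mislabeling visible in the first place. A minimal sketch of the kind of check involved, assuming records shaped like the API's work objects; the sample records are invented for illustration:

```python
def flag_retracted(works: list[dict]) -> list[str]:
    """Return OpenAlex IDs of works whose record carries
    the `is_retracted` boolean set to True."""
    return [w["id"] for w in works if w.get("is_retracted")]

# Invented sample records mirroring the relevant fields:
sample = [
    {"id": "https://openalex.org/W1", "is_retracted": False},
    {"id": "https://openalex.org/W2", "is_retracted": True},
]
```

Auditing a corpus this way (and cross-checking the flagged IDs against an independent retraction source) is how spurious retraction labels surface.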

New #preprint 📢 - Can #OpenAlex compete with #Scopus in bibliometric analysis?

👉 arxiv.org/abs/2502.18427

@OpenAlex has broader coverage and shows higher correlation with certain expert assessments.

At the same time, it has issues with metadata completeness and document classification.

❗ Most intriguingly: it turns out that raw #citation counts perform just as well as, and in some cases even better than, normalized indicators, which have long been considered the standard in #scientometrics.

arXiv.org: "Is OpenAlex Suitable for Research Quality Evaluation and Which Citation Indicator is Best?"

This article compares (1) citation analysis with OpenAlex and Scopus, testing their citation counts, document type/coverage and subject classifications and (2) three citation-based indicators: raw counts, (field and year) Normalised Citation Scores (NCS) and Normalised Log-transformed Citation Scores (NLCS).

Methods (1&2): The indicators calculated from 28.6 million articles were compared through 8,704 correlations on two gold standards for 97,816 UK Research Excellence Framework (REF) 2021 articles. The primary gold standard is ChatGPT scores, and the secondary is the average REF2021 expert review score for the department submitting the article.

Results: (1) OpenAlex provides better citation counts than Scopus and its inclusive document classification/scope does not seem to cause substantial field normalisation problems. The broadest OpenAlex classification scheme provides the best indicators. (2) Counterintuitively, raw citation counts are at least as good as nearly all field normalised indicators, and better for single years, and NCS is better than NLCS. (1&2) There are substantial field differences. Thus, (1) OpenAlex is suitable for citation analysis in most fields and (2) the major citation-based indicators seem to work counterintuitively compared to quality judgements. Field normalisation seems ineffective because more cited fields tend to produce higher quality work, affecting interdisciplinary research or within-field topic differences.
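The field-and-year Normalised Citation Score the abstract refers to divides a paper's raw citation count by the mean count of all papers in the same (field, year) group. A minimal sketch of that normalization; the sample papers are invented for illustration:

```python
from collections import defaultdict
from statistics import mean

def ncs(papers: list[dict]) -> dict[str, float]:
    """Field-and-year Normalised Citation Score: raw citations
    divided by the mean citations of the paper's (field, year) group."""
    groups: dict[tuple, list[int]] = defaultdict(list)
    for p in papers:
        groups[(p["field"], p["year"])].append(p["citations"])
    group_mean = {k: mean(v) for k, v in groups.items()}
    return {
        p["id"]: p["citations"] / group_mean[(p["field"], p["year"])]
        for p in papers
    }

# Invented sample: a high-citation field (bio) vs a low-citation one (math).
papers = [
    {"id": "a", "field": "bio",  "year": 2020, "citations": 30},
    {"id": "b", "field": "bio",  "year": 2020, "citations": 10},
    {"id": "c", "field": "math", "year": 2020, "citations": 3},
    {"id": "d", "field": "math", "year": 2020, "citations": 1},
]
```

Here papers "a" and "c" end up with the same NCS despite very different raw counts; the paper's counterintuitive finding is that this leveling does not improve agreement with quality judgements, because more-cited fields may genuinely produce higher-quality work.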

I made a small change to my list of academic profile links at fietkau.science/publications earlier this month. Kicked out some Elsevier stuff in favor of OpenAlex. Their profile of me was already pretty good; all I had to do was let them know that the Julian Fietkau from TU Berlin is someone else.

If you'd like to see me through the lens of the academic system, fietkau.science is hopefully your best bet, but any of these places will offer some usable slice of the picture.