Mark Dingemanse

I generally dislike puns in article titles, but "Careless Whisper", about the algorithmic harms of hallucinations in OpenAI's Whisper tool, is pretty good: https://doi.org/10.1145/3630106.3658996 #facct24

Shocking findings: 38% of hallucinations include explicit harms (violence, inaccurate associations, false assertions of authority), and they are more likely to occur in e.g. aphasic speech, so really this is a bias amplifier. #ASR #LanguageTechnology