#lrm

Agustin V. Startari:

🚨 New article: Protocol Without Prognosis

We introduce HCC & RLI, two novel indicators of hedging collapse and responsibility leakage in diagnostic AI.

📘 https://papers.ssrn.com/abstract=5348251
🔗 https://zenodo.org/records/15864937
🔗 https://www.agustinvstartari.com/article-protocolwithoutprognosis

#LLM #ClinicalAI #MedicalNLP #AIethics #ArtificialIntelligence #Linguistics #medical #health #healthcare #legal #technology #tech #ai #LRM #finance #business #agustinvstartari #AIgovernance #LawFedi #lawstodon #politics #NLP

Ramin Honary:

#LLM and statistical methods of #AI still cannot do symbolic computation

Richard Speed at The Register (https://www.theregister.com/2025/07/01/microsoft_copilot_joins_chatgpt_at/) reports that someone named Robert Caruso twice had a modern #LRM (a retrieval-augmented, #LLM-based #AI), ChatGPT and Microsoft Copilot, play a game of chess against the Atari 2600 (https://en.wikipedia.org/wiki/Atari_2600), a gaming console first released in 1977. The Atari 2600 won.

Now, it is possible that an LRM could be optimized for chess by including a large number of chess games in its training data; that would probably improve its ability to win.

But this should be another blow against the argument that simply increasing the computing power (https://www.cs.utexas.edu/~eunsol/courses/data/bitter_lesson.pdf) of these AI models is going to result in a "general intelligence" or #AGI. The way these AI systems "learn" is by copying humans; they still have not been designed to synthesize new ideas far removed from the ideas expressed in the training data on which they were built.

I maintain that a better AI will be a smaller LLM cleverly combined with symbolic computation techniques, such as proof assistants like Lean (https://lean-lang.org/download/) or relational logic programming languages like miniKanren (https://minikanren.org/). I also maintain that trying to create a general intelligence, one that can think for humans, is a really bad idea that will only teach people to become too dependent on technology. Using AI to think is more like an addiction than a superpower.

#tech #AI #RetroComputing #Atari2600 #Chess

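[Editor's note] As an illustration of the "small LLM plus symbolic checker" pattern argued for above, here is a minimal Python sketch. It is not taken from the post and does not use Lean or miniKanren: the propose_solution() stub is a hypothetical stand-in for a language model's free-form guess, and SymPy plays the role of the symbolic verifier that accepts or rejects it exactly rather than statistically.

```python
# Toy hybrid pattern: a model proposes, a symbolic system verifies.
import sympy as sp

x = sp.symbols("x")

def propose_solution() -> sp.Expr:
    # Hypothetical placeholder for an LLM's proposal:
    # a candidate antiderivative of x*cos(x).
    return x * sp.sin(x) + sp.cos(x)

def verified(candidate: sp.Expr, integrand: sp.Expr) -> bool:
    # Symbolic check: differentiate the proposal and compare it
    # to the integrand exactly.
    return sp.simplify(sp.diff(candidate, x) - integrand) == 0

if __name__ == "__main__":
    integrand = x * sp.cos(x)
    guess = propose_solution()
    print("accepted" if verified(guess, integrand) else "rejected")
```
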
Ramin Honary:

There are rumors going around

Apparently there is some kind of failure mode triggered in #AI when you mention a secret word out of context in your post. I have not seen any elaboration on this. How does it work? In what situations is the failure mode triggered? What does the AI agent do when it fails? Is it an attack on the training data, or do you have to enter it into an AI agent by hand?

- Gravy

So I am going to assume this is just a nonsensical rumor until someone can explain it to me. But just in case it turns out to be a real problem, I am making this post about it.

The source of the rumor (as far as I can tell) is "AJ Sadauskas" on Bluesky, shared by Brian Krebs here: https://infosec.exchange/@briankrebs/114777986932318938 @briankrebs

#tech #AI #LLM #LRM #Gravy

RichardInSandy:

LLMs, LRMs, Complete Computational Collapse. Could be an interesting discussion with the Watson people tomorrow. #ai #llm #lrm #watson

Jeff MacKinnon:

If you are skeptical about LLMs and LRMs, this is a post for you. It is written by a guy who has seen some things with regard to technology and machine learning.

https://sourcinginnovation.com/wordpress/2025/06/18/got-a-headache-dont-take-an-aspirin-or-query-a-llm/

#AI #LLM #LRM #science #technology #neuralnetworks #machinelearning

Marcel SIneM(S)US:

#Apple :apple_inc: paper: Why #Reasoning models probably do not think | Mac & i https://www.heise.de/news/Apple-Paper-Warum-Reasoning-Modelle-wohl-nicht-denken-10437814.html #LRM #LargeReasoningModel #LargeReasoningModels #ArtificialIntelligence #Wissenschaft #science

Wulfy:

The educator panic over AI is real, and rational.
I've been there myself. The difference is I moved past denial to a more pragmatic question: since AI regulation seems unlikely (with both camps refusing to engage), how do we actually work with these systems?

The "AI will kill critical thinking" crowd has a point, but they're missing context.
Critical reasoning wasn't exactly thriving before AI arrived: just look around. The real question isn't whether AI threatens thinking skills, but whether we can leverage it the same way we leverage other cognitive tools.

We don't hunt our own food or walk everywhere anymore.
We use supermarkets and cars. Most of us Google instead of visiting libraries. Each of those tool trade-offs changed how we think and which skills matter. AI is the next step in this progression, if we're smart about it.

The key is learning to think with AI rather than being replaced by it.
That means understanding both its capabilities and our irreplaceable human advantages.

1/3

#AI #Education #FutureOfEducation #AIinEducation #LLM #ChatGPT #Claude #EdAI #CriticalThinking #CognitiveScience #Metacognition #HigherOrderThinking #Reasoning #Vygotsky #Hutchins #Sweller #LearningScience #EducationalPsychology #SocialLearning #TechforGood #EticalAI #AILiteracy #PromptEngineering #AISkills #DigitalLiteracy #FutureSkills #LRM #AIResearch #AILimitations #SystemsThinking #AIEvaluation #MentalModels #LifelongLearning #AIEthics #HumanCenteredAI #DigitalTransformation #AIRegulation #ResponsibleAI #Philosophy

Wulfy:

AI isn't going anywhere. Time to get strategic:
Instead of mourning lost critical thinking skills, let's build on them through cognitive delegation: using AI as a thinking partner, not a replacement.

This isn't some Silicon Valley fantasy:
Three decades of cognitive research already mapped out how this works.

Cognitive Load Theory:
Our brains can only juggle so much at once. Let AI handle the grunt work while you focus on making meaningful connections.

Distributed Cognition:
Naval crews don't navigate by individual genius; they spread thinking across people, instruments, and procedures. AI becomes another crew member in your cognitive system.

Zone of Proximal Development:
We learn best with expert guidance bridging what we can't quite do alone. AI can serve as that "more knowledgeable other" (though it's still early days).
The table below shows what this looks like in practice:

2/3

#AI #Education #FutureOfEducation #AIinEducation #LLM #ChatGPT #Claude #EdAI #CriticalThinking #CognitiveScience #Metacognition #HigherOrderThinking #Reasoning #Vygotsky #Hutchins #Sweller #LearningScience #EducationalPsychology #SocialLearning #TechforGood #EticalAI #AILiteracy #PromptEngineering #AISkills #DigitalLiteracy #FutureSkills #LRM #AIResearch #AILimitations #SystemsThinking #AIEvaluation #MentalModels #LifelongLearning #AIEthics #HumanCenteredAI #DigitalTransformation #AIRegulation #ResponsibleAI #Philosophy

Wulfy:

Critical reasoning vs cognitive delegation

Old-school focus:

Building internal cognitive capabilities and managing cognitive load independently.

Cognitive delegation focus:

Orchestrating distributed cognitive systems while maintaining quality control over AI-augmented processes.

We can still go for a jog or hunt our own deer, but for reaching the stars we, the apes, do what apes do best: use tools to build on our cognitive abilities. AI is a tool.

3/3

#AI #Education #FutureOfEducation #AIinEducation #LLM #ChatGPT #Claude #EdAI #CriticalThinking #CognitiveScience #Metacognition #HigherOrderThinking #Reasoning #Vygotsky #Hutchins #Sweller #LearningScience #EducationalPsychology #SocialLearning #TechforGood #EticalAI #AILiteracy #PromptEngineering #AISkills #DigitalLiteracy #FutureSkills #LRM #AIResearch #AILimitations #SystemsThinking #AIEvaluation #MentalModels #LifelongLearning #AIEthics #HumanCenteredAI #DigitalTransformation #AIRegulation #ResponsibleAI #Philosophy

Arie van Deursen:

"Through extensive experimentation across diverse puzzles, we show that frontier Large Reasoning Models face a complete accuracy collapse beyond certain complexities.

Moreover, they exhibit a counter-intuitive scaling limit: their reasoning effort increases with problem complexity up to a point, then declines despite having an adequate token budget."

https://machinelearning.apple.com/research/illusion-of-thinking

#llm #lrm

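[Editor's note] For readers who want a feel for the kind of complexity-controlled evaluation the quoted paper describes, here is a minimal Python sketch, not the paper's actual harness: Tower of Hanoi instances are scaled by disk count and a proposed move sequence is validated exactly. solve_with_model() is a hypothetical stand-in for a real LRM call; here it simply returns the textbook solution so the harness runs end to end.

```python
# Sketch of a complexity-scaled puzzle check (illustrative, not Apple's code).
from typing import List, Tuple

Move = Tuple[int, int]  # (from_peg, to_peg)

def reference_solution(n: int, src: int = 0, aux: int = 1, dst: int = 2) -> List[Move]:
    # Classical recursive Tower of Hanoi solution, used as a stand-in answer.
    if n == 0:
        return []
    return (reference_solution(n - 1, src, dst, aux)
            + [(src, dst)]
            + reference_solution(n - 1, aux, src, dst))

def is_valid(n: int, moves: List[Move]) -> bool:
    # Replay the moves, checking legality and the final goal state.
    pegs = [list(range(n, 0, -1)), [], []]
    for src, dst in moves:
        if not pegs[src]:
            return False
        disk = pegs[src].pop()
        if pegs[dst] and pegs[dst][-1] < disk:
            return False
        pegs[dst].append(disk)
    return pegs[2] == list(range(n, 0, -1))

def solve_with_model(n: int) -> List[Move]:
    # Hypothetical model call; replace with an actual LRM query.
    return reference_solution(n)

if __name__ == "__main__":
    for n in range(3, 11):  # increasing problem complexity
        ok = is_valid(n, solve_with_model(n))
        print(f"{n} disks: {'pass' if ok else 'fail'}")
```
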
Dominik Steiger:

"complete accuracy collapse beyond certain complexities"

Interesting paper about the current limits of AI.

#AI #LLM #LRM

https://machinelearning.apple.com/research/illusion-of-thinking?utm_source=perplexity

Ω 🌍 Gus Posey:

Watching people get comfortable with generative AI as a replacement for creative expression is like reaching into a sock drawer and finding something warm and wet.

#BingImageCreator #ChatGPT #DallE #LLM #LRM #Midjourney #StableDiffusion

Natasha Jay:

I'm sorry, but as a veteran of Mechwarrior games since MW2, LRM will always mean Long Range Missiles (not Large Reasoning Models).

#LRM #AI #battletech

Jan Wildeboer 😷:krulorange::

On the limits of LLMs (Large Language Models) and LRMs (Large Reasoning Models). The TL;DR: "Our findings reveal fundamental limitations in current models: despite sophisticated self-reflection mechanisms, these models fail to develop generalizable reasoning capabilities beyond certain complexity thresholds." Meaning: accuracy collapse.

Interesting paper from Apple. https://ml-site.cdn-apple.com/papers/the-illusion-of-thinking.pdf

#AI #LLM #LRM

George Macgregor:

New issue of the #Code4Lib Journal published: https://journal.code4lib.org/issues/issues/issue60

Includes a contribution on the #OpenWEMI vocabulary from Karen Coyle.

OpenWEMI: A Minimally Constrained Vocabulary for Work, Expression, Manifestation, and Item https://journal.code4lib.org/articles/18412 #DCMI #RDF #FRBR #LRM #WEMI

TheDoctor:

Sam Altman sees "Large Reasoning Models" (LRMs) as the next AI breakthrough: more efficient, more precise, and scientifically groundbreaking. OpenAI is also planning a comeback for its open-source strategy. Exciting times! #AI #LRM #OpenSource

François Renaville 🇺🇦🇪🇺:

Your #catalogue in the web 3.0 era. Cataloguing today...

https://orbi.uliege.be/handle/2268/312552

#Catalogues #RDA #LRM #Bibframe #TransitionBibliographique #Marc #Ontologies #Webdedonnées #FRBR

Scientist Rebellion Germany:

On Twitter via nitter.net:
Links.Rechts.Mitte.
@LRM_dietalkshow
Jul 11
#LRM with @mfleischhacker1 on #ServusTV

"The situation around the #Klimakrise is so simple, I am surprised that it is still not understood," says media ethicist @PaganiniClaudia on the topic "Daily #Klima alarm: panic by design?"
https://nitter.net/lrm_dietalkshow/status/1678697134596698112?s=46&t=5K0vBdZGNA8nQliIX9aPDg

#KlimaKatastrophe #KlimaWandel

Jonathan Lenoir:

From the LiDAR point cloud, we first generated the digital terrain model #DTM at 50 cm resolution & then ran a local relief model #LRM on the DTM before digitizing all skid trails that are clearly visible ⬇️ across the entire #forest of #Compiègne 🇨🇵 (14,357 ha)

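[Editor's note] A minimal Python sketch along the lines of the workflow above: a local relief model is obtained by subtracting a smoothed copy of the DTM from the DTM itself, so that fine features such as skid trails stand out. The 0.5 m cell size follows the post; the 15 m smoothing window and the use of a simple uniform filter are illustrative assumptions, not the authors' exact parameters.

```python
# Local relief model sketch: DTM minus a low-pass-filtered terrain trend.
import numpy as np
from scipy.ndimage import uniform_filter

def local_relief_model(dtm: np.ndarray, window_m: float = 15.0,
                       cell_size_m: float = 0.5) -> np.ndarray:
    # Window size in cells, derived from the chosen smoothing radius.
    size = max(3, int(round(window_m / cell_size_m)))
    trend = uniform_filter(dtm.astype(float), size=size)
    # Positive values = local highs, negative = local lows (e.g. trail ruts).
    return dtm - trend

if __name__ == "__main__":
    # Synthetic demo grid standing in for a real 0.5 m DTM raster.
    demo_dtm = np.random.default_rng(0).normal(100.0, 0.2, size=(200, 200))
    lrm = local_relief_model(demo_dtm)
    print(lrm.shape, float(lrm.mean()))
```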