#languagemodels

Europe Says
https://www.europesays.com/2446123/ How South Korea plans to best OpenAI, Google, others with homegrown AI #AiResearch #AITechnologies #FoundationalModels #google #LanguageModels #OpenAI #SKTelecom #SouthKorea #SouthKorean #TechCrunch

Andrew Padilla
#data #database #taxonomy #dashboard #datapipeline #languageModels

Language is political. Good read if you work with data.

https://open.substack.com/pub/jessicatalisman/p/language-is-political?r=du68h&utm_medium=ios

Nick Byrd, Ph.D.
Are #languageModels vulnerable to #anchoring #bias?

Huang et al. generated the #SynAnchors dataset to find out.

Anchoring was more common in shallower layers of models.

A reflective reasoning strategy was usually most helpful.

https://doi.org/10.48550/arXiv.2505.15392

#CogSci #AI #tech #edu

Nick Byrd, Ph.D.
Do smaller #languageModels learn to reason better from longer chains of thought?

Luo et al. found:
- Larger CoT data subsets didn't increase reflective reasoning (Figure 3)
- Accuracy didn't beat baseline until ≈32k CoT tokens (Figure 2)

https://doi.org/10.48550/arXiv.2506.07712

#CogSci #edu

Nick Byrd, Ph.D.
What's #philosophy got to offer fields like #computerScience?

An #AI system was trained via a method championed by a philosopher.

The result wasn't perfect, but it was better than "off-the-shelf neural #languageModels" at learning about human #ethics.

https://doi.org/10.1038/s42256-024-00969-6

The-14
AI might now be as good as humans at detecting emotion, political leaning and sarcasm in online conversations
#Tech #AI #ChatGPT #GPT4 #EmotionDetection #SarcasmDetection #PoliticalBias #LanguageModels #TechEthics #LLM #AIResearch #FutureOfAI
https://the-14.com/ai-might-now-be-as-good-as-humans-at-detecting-emotion-political-leaning-and-sarcasm-in-online-conversations/

Harald Sack
In our #ISE2025 lecture last Wednesday, we learned how n-gram language models, via the Markov assumption and maximum likelihood estimation, predict the probability of a word occurring in a given context (i.e. conditioned on the preceding n−1 words in the sequence).

#NLP #languagemodels #lecture @fizise @tabea @enorouzi @sourisnumerique @fiz_karlsruhe @KIT_Karlsruhe

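As a minimal illustration of that idea (not taken from the lecture; the toy corpus and token handling are assumptions for the example), here is a bigram model whose conditional probabilities are maximum-likelihood estimates, P(w | prev) = count(prev, w) / count(prev):

```python
from collections import Counter

def train_bigram_mle(corpus):
    """Maximum-likelihood bigram estimates: P(w | prev) = count(prev, w) / count(prev)."""
    unigrams, bigrams = Counter(), Counter()
    for sentence in corpus:
        tokens = ["<s>"] + sentence.lower().split() + ["</s>"]
        unigrams.update(tokens[:-1])                  # contexts: every token that has a successor
        bigrams.update(zip(tokens[:-1], tokens[1:]))  # (previous word, next word) pairs
    return {(prev, w): c / unigrams[prev] for (prev, w), c in bigrams.items()}

toy_corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
]
probs = train_bigram_mle(toy_corpus)
print(probs[("the", "cat")])  # 0.25: "the" appears 4 times as a context, "the cat" once
print(probs[("sat", "on")])   # 1.0: "on" always follows "sat" in this toy corpus
```

In practice such raw counts are smoothed (e.g. add-one or Kneser-Ney style discounting), since otherwise any n-gram unseen in training would receive probability zero.
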
Nebraska.Code
Adam Barney is presenting 'Demystifying LLMs: How They Work and Why It Matters' July 24th at Nebraska.Code().

https://nebraskacode.amegala.com/

#Travefy #LargeLanguageModels #LLM #languagemodels #TechTalk #tokenization #embeddings #attentionmechanisms #finetuning #engineering #softwareengineering #softwareengineer #Nebraska #TechnologyConference #lincolnnebraska #devconference

EPFL
🧠 🤖 Researchers from the Natural Language Processing Laboratory and NeuroAI Laboratory have discovered key ‘units’ in large AI models that seem to be important for language, mirroring the brain’s language system. When these specific units were turned off, the models got much worse at language tasks.

#LanguageModels #ArtificialIntelligence #AIResearch

Read more: https://go.epfl.ch/LJx-en

Harald Sack
Beginning in the 1990s, statistical n-gram language models, trained on vast text collections, became the backbone of NLP research. They fueled advancements in nearly all NLP techniques of the era, laying the groundwork for today's AI.

F. Jelinek (1997), Statistical Methods for Speech Recognition, MIT Press, Cambridge, MA

#NLP #LanguageModels #HistoryOfAI #TextProcessing #AI #historyofscience #ISE2025 @fizise @fiz_karlsruhe @tabea @enorouzi @sourisnumerique

Harald Sack
Today, the 2nd lecture of #ISE2025 took place, with an introduction to Natural Language Processing, which will be the subject of our lectures for the next 4 weeks.

#AI #nlp #informationextraction #ocr #ner #linguistics #computationallinguistics #morphology #pos #ambiguity #language @fiz_karlsruhe @fizise @tabea @enorouzi @sourisnumerique #AIart #generativeAI #machinetranslation #languagemodels #llm

Nick Byrd, Ph.D.
Can popular, generalist #LLMs answer questions as specialists?

Adopting each step of #diagnosis into a #ChainOfThought prompt made small and large #languageModels outperform both zero-shot prompting and the fine-tuned OLAPH method on the #MedLFQA benchmark.

https://doi.org/10.48550/arXiv.2503.03194 #AI

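Purely as an illustrative sketch of this prompting pattern (the step list, the wording, and the `generate` placeholder are assumptions, not the paper's actual prompt), folding diagnosis steps into a chain-of-thought prompt might look like:

```python
# Illustrative only: DIAGNOSIS_STEPS and the prompt wording are assumptions,
# not the prompt used in the paper; `generate` stands in for any LLM call.
DIAGNOSIS_STEPS = [
    "Summarize the patient's presenting symptoms and history.",
    "List plausible differential diagnoses.",
    "Note findings that support or rule out each candidate.",
    "State the most likely diagnosis and the reasoning behind it.",
]

def build_cot_prompt(question: str) -> str:
    steps = "\n".join(f"{i}. {step}" for i, step in enumerate(DIAGNOSIS_STEPS, 1))
    return (
        "Answer the medical question below. Work through each step in order "
        "before giving your final answer.\n\n"
        f"Steps:\n{steps}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_cot_prompt("What is the most likely cause of this patient's symptoms?")
# answer = generate(prompt)  # hypothetical call to whichever language model you use
print(prompt)
```
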
Hacker News
Beyond Quacking: Deep Integration of Language Models and RAG into DuckDB

https://arxiv.org/abs/2504.01157

#HackerNews #BeyondQuacking #LanguageModels #RAG #DuckDB #AIIntegration

Hacker News
LLMs Understand Nullability

https://dmodel.ai/nullability-gentle/

#HackerNews #LLMs #Nullability #AI #LanguageModels #Understanding

Hacker News
Circuit Tracing: Revealing Computational Graphs in Language Models (Anthropic)

https://transformer-circuits.pub/2025/attribution-graphs/methods.html

#HackerNews #CircuitTracing #LanguageModels #ComputationalGraphs #Anthropic #AIResearch

Hacker News
AMD Announces "Instella" Open-Source 3B Language Models

https://www.phoronix.com/news/AMD-Intella-Open-Source-LM

#HackerNews #AMD #Instella #OpenSource #LanguageModels #AI #Technology

Matasoft
#Spreadsheet #AISpreadsheets #DataAutomation #BusinessIntelligence #SpreadsheetRevolution #AIProductivity #DataExtraction #AutomatedAnalytics #WorkSmarter #AITools2025 #AITools #LLM #AI #ArtificialIntelligence #BusinessEfficiency #DataProcessing #LanguageModels #FutureOfWork #SpreadsheetAI #DataManagement #ProductivityHack #AIInnovation #BusinessAutomation #DataAnalytics #WorkLifeBalance #LLMTechnology #SmartSpreadsheets #DataDriven #AIAssistant #OfficeAutomation

https://matasoft.hr/qtrendcontrol/index.php/un-perplexed-spready

Leanpub
New 📚 Release! The Hundred-Page Language Models Book: Hands-on with PyTorch by Andriy Burkov #books #ebooks #machinelearning #languagemodels #ai #bestsellers #newreleases

Master language models through mathematics, illustrations, and code, and build your own from scratch!

Find it on Leanpub!

Link: https://leanpub.com/theLMbook

PsyPost
Large language models outperform experts in predicting neuroscience discoveries
https://www.psypost.org/large-language-models-outperform-experts-in-predicting-neuroscience-discoveries/?utm_source=dlvr.it&utm_medium=mastodon
#Neuroscience #ArtificialIntelligence #MachineLearning #LanguageModels #Research

Denny Vrandečić
What do I think about large language models? Let's use two quotes:

"Our languages were made for telling stories, not for representing ideas very accurately." - Alan Kay

"All models are wrong, but some are useful." - George Box

#ai #llm #languagemodels #language #models