#ollama

Well, last night I ran #Ollama on #endeavouros running on my #framework16 and it worked better than expected. Very fast responses. I'm hoping to use it with #playwrightMCP to effectively fuzz my web changes in the background while I work. Just need to speed up the cycles between developing and QA at work.

If anyone has experience here, I'd love some advice on hooking up the MCP server; my first few attempts ran into vexing issues.
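
Not an answer to the MCP part, but before debugging that side it's worth confirming the Ollama half works on its own. A minimal sanity check, assuming Ollama is on its default port and the model named below has already been pulled (both are assumptions, swap in whatever you actually run):

```python
# Sanity check a local Ollama instance before wiring anything else to it.
# Assumes the default port (11434) and that the model named below has
# already been pulled with `ollama pull` -- both are assumptions.
import requests

OLLAMA_URL = "http://localhost:11434"
MODEL = "llama3"  # assumption: substitute whatever model you actually have

resp = requests.post(
    f"{OLLAMA_URL}/api/generate",
    json={
        "model": MODEL,
        "prompt": "Reply with the single word: pong",
        "stream": False,  # one JSON object instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```

If that round-trips quickly, any remaining trouble is likely in the MCP client wiring rather than in Ollama itself.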

Getting closer to my goal of using a local LLM to help in prepping recording scripts for work. It should shave off about 30-50% of prep time, so well worth doing considering the amount of work I've got on at the moment.

As ever, context-window limitations mean that with long prompts it forgets its system prompt and so doesn't do its job. Smaller batches are more successful.

The dream of just loading a full script into an LLM and saying 'do this' is still a dream.

#llm #llms #ai
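
For what it's worth, the smaller-batches approach can be scripted so the system prompt is re-sent with every chunk and never scrolls out of the context window. A rough sketch against a local Ollama instance; the model name, chunk size, and file name are placeholders, not anything from the post above:

```python
# Rough sketch: feed a long script to a local Ollama model in small batches,
# repeating the system prompt for every batch so it cannot be forgotten.
# Model name, chunk size, and file name below are all assumptions.
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"
MODEL = "llama3"            # assumption: any locally pulled model
SYSTEM_PROMPT = "You prepare recording scripts: tighten wording, keep meaning."
CHUNK_LINES = 40            # assumption: tune to the model's context window

def rewrite_chunk(chunk: str) -> str:
    resp = requests.post(
        OLLAMA_URL,
        json={
            "model": MODEL,
            "messages": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": chunk},
            ],
            "stream": False,
        },
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]

with open("script.txt") as f:
    lines = f.readlines()

# Process the script in fixed-size slices and stitch the results back together.
chunks = ["".join(lines[i:i + CHUNK_LINES]) for i in range(0, len(lines), CHUNK_LINES)]
print("\n".join(rewrite_chunk(c) for c in chunks))
```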

"LLM users also struggled to accurately quote their own work. […] Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels."

arxiv.org/abs/2506.08872

arXiv.org · Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task

This study explores the neural and behavioral consequences of LLM-assisted essay writing. Participants were divided into three groups: LLM, Search Engine, and Brain-only (no tools). Each completed three sessions under the same condition. In a fourth session, LLM users were reassigned to Brain-only group (LLM-to-Brain), and Brain-only users were reassigned to LLM condition (Brain-to-LLM). A total of 54 participants took part in Sessions 1-3, with 18 completing session 4. We used electroencephalography (EEG) to assess cognitive load during essay writing, and analyzed essays using NLP, as well as scoring essays with the help from human teachers and an AI judge. Across groups, NERs, n-gram patterns, and topic ontology showed within-group homogeneity. EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. Cognitive activity scaled down in relation to external tool use. In session 4, LLM-to-Brain participants showed reduced alpha and beta connectivity, indicating under-engagement. Brain-to-LLM users exhibited higher memory recall and activation of occipito-parietal and prefrontal areas, similar to Search Engine users. Self-reported ownership of essays was the lowest in the LLM group and the highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.

While I remain skeptical of the current AI hype cycle (AI on everything, in everything), that doesn’t mean I have written off AI as a valid technology with real-world use cases. I like to avoid being polarised (with a few exceptions). See my latest blog post where we use the free and open source Ollama to run an LLM locally on a Linux PC. Like it or not, AI is here to stay, so let’s understand how it works and take back control from Big Tech. #AI #ArtificialIntelligence #Ollama #Linux

mattjhayes.com/2025/06/20/putt

Bits 'n Bytes · Putting the Open back in AI with Ollama
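
As a taste of how local this setup really is: once Ollama is installed, everything is served from a plain HTTP endpoint on your own machine. A small sketch (not from the blog post itself) that lists the models you have pulled, assuming the default port:

```python
# List which models a local Ollama install has pulled, via its HTTP API.
# Assumes the default port (11434); nothing here talks to the internet.
import requests

resp = requests.get("http://localhost:11434/api/tags", timeout=10)
resp.raise_for_status()
for model in resp.json().get("models", []):
    size_gb = model["size"] / 1e9  # size is reported in bytes
    print(f"{model['name']:30s} {size_gb:5.1f} GB")
```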

"LLM users also struggled to accurately quote their own work. […] Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels."

arxiv.org/abs/2506.08872

arXiv logo
arXiv.orgYour Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing TaskThis study explores the neural and behavioral consequences of LLM-assisted essay writing. Participants were divided into three groups: LLM, Search Engine, and Brain-only (no tools). Each completed three sessions under the same condition. In a fourth session, LLM users were reassigned to Brain-only group (LLM-to-Brain), and Brain-only users were reassigned to LLM condition (Brain-to-LLM). A total of 54 participants took part in Sessions 1-3, with 18 completing session 4. We used electroencephalography (EEG) to assess cognitive load during essay writing, and analyzed essays using NLP, as well as scoring essays with the help from human teachers and an AI judge. Across groups, NERs, n-gram patterns, and topic ontology showed within-group homogeneity. EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. Cognitive activity scaled down in relation to external tool use. In session 4, LLM-to-Brain participants showed reduced alpha and beta connectivity, indicating under-engagement. Brain-to-LLM users exhibited higher memory recall and activation of occipito-parietal and prefrontal areas, similar to Search Engine users. Self-reported ownership of essays was the lowest in the LLM group and the highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.

"LLM users also struggled to accurately quote their own work. […] Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels."

arxiv.org/abs/2506.08872

arXiv logo
arXiv.orgYour Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing TaskThis study explores the neural and behavioral consequences of LLM-assisted essay writing. Participants were divided into three groups: LLM, Search Engine, and Brain-only (no tools). Each completed three sessions under the same condition. In a fourth session, LLM users were reassigned to Brain-only group (LLM-to-Brain), and Brain-only users were reassigned to LLM condition (Brain-to-LLM). A total of 54 participants took part in Sessions 1-3, with 18 completing session 4. We used electroencephalography (EEG) to assess cognitive load during essay writing, and analyzed essays using NLP, as well as scoring essays with the help from human teachers and an AI judge. Across groups, NERs, n-gram patterns, and topic ontology showed within-group homogeneity. EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. Cognitive activity scaled down in relation to external tool use. In session 4, LLM-to-Brain participants showed reduced alpha and beta connectivity, indicating under-engagement. Brain-to-LLM users exhibited higher memory recall and activation of occipito-parietal and prefrontal areas, similar to Search Engine users. Self-reported ownership of essays was the lowest in the LLM group and the highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.

"LLM users also struggled to accurately quote their own work. […] Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels."

arxiv.org/abs/2506.08872

arXiv logo
arXiv.orgYour Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing TaskThis study explores the neural and behavioral consequences of LLM-assisted essay writing. Participants were divided into three groups: LLM, Search Engine, and Brain-only (no tools). Each completed three sessions under the same condition. In a fourth session, LLM users were reassigned to Brain-only group (LLM-to-Brain), and Brain-only users were reassigned to LLM condition (Brain-to-LLM). A total of 54 participants took part in Sessions 1-3, with 18 completing session 4. We used electroencephalography (EEG) to assess cognitive load during essay writing, and analyzed essays using NLP, as well as scoring essays with the help from human teachers and an AI judge. Across groups, NERs, n-gram patterns, and topic ontology showed within-group homogeneity. EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. Cognitive activity scaled down in relation to external tool use. In session 4, LLM-to-Brain participants showed reduced alpha and beta connectivity, indicating under-engagement. Brain-to-LLM users exhibited higher memory recall and activation of occipito-parietal and prefrontal areas, similar to Search Engine users. Self-reported ownership of essays was the lowest in the LLM group and the highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.
Replied in thread

@tek

As an experiment, I ran up the medium-sized (27B?) "reasoning" version of #DeepSeek in #Ollama and asked it for help with cryptic crossword clues from the Guardian. It just went round and round, trying the same things over and over again and going off along tracks that were obviously not going to work. It was somehow better than I'd expected, but still useless.

Cryptic crossword clues are an interesting and punishing test, because the answers won't have appeared in the model's training data.
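
For anyone who wants to repeat the experiment against their own local setup, the call is roughly as below. The model tag is an assumption (use whichever DeepSeek reasoning variant you actually pulled), and the clue is left as a placeholder rather than quoting a real Guardian one:

```python
# Rough reproduction sketch: ask a local "reasoning" model for a cryptic
# crossword answer via Ollama's HTTP API. The model tag is an assumption;
# paste your own clue into CLUE.
import requests

MODEL = "deepseek-r1:32b"              # assumption: any local reasoning model
CLUE = "<paste a cryptic clue here> (7)"  # placeholder, not a real clue

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": MODEL,
        "messages": [
            {
                "role": "user",
                "content": (
                    f"Cryptic crossword clue: {CLUE}\n"
                    "Give a single answer of exactly the indicated length, "
                    "then explain the wordplay in one sentence."
                ),
            }
        ],
        "stream": False,
    },
    timeout=600,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```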

"LLM users also struggled to accurately quote their own work. […] Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels."

arxiv.org/abs/2506.08872

arXiv logo
arXiv.orgYour Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing TaskThis study explores the neural and behavioral consequences of LLM-assisted essay writing. Participants were divided into three groups: LLM, Search Engine, and Brain-only (no tools). Each completed three sessions under the same condition. In a fourth session, LLM users were reassigned to Brain-only group (LLM-to-Brain), and Brain-only users were reassigned to LLM condition (Brain-to-LLM). A total of 54 participants took part in Sessions 1-3, with 18 completing session 4. We used electroencephalography (EEG) to assess cognitive load during essay writing, and analyzed essays using NLP, as well as scoring essays with the help from human teachers and an AI judge. Across groups, NERs, n-gram patterns, and topic ontology showed within-group homogeneity. EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. Cognitive activity scaled down in relation to external tool use. In session 4, LLM-to-Brain participants showed reduced alpha and beta connectivity, indicating under-engagement. Brain-to-LLM users exhibited higher memory recall and activation of occipito-parietal and prefrontal areas, similar to Search Engine users. Self-reported ownership of essays was the lowest in the LLM group and the highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.

"LLM users also struggled to accurately quote their own work. […] Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels."

arxiv.org/abs/2506.08872

arXiv logo
arXiv.orgYour Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing TaskThis study explores the neural and behavioral consequences of LLM-assisted essay writing. Participants were divided into three groups: LLM, Search Engine, and Brain-only (no tools). Each completed three sessions under the same condition. In a fourth session, LLM users were reassigned to Brain-only group (LLM-to-Brain), and Brain-only users were reassigned to LLM condition (Brain-to-LLM). A total of 54 participants took part in Sessions 1-3, with 18 completing session 4. We used electroencephalography (EEG) to assess cognitive load during essay writing, and analyzed essays using NLP, as well as scoring essays with the help from human teachers and an AI judge. Across groups, NERs, n-gram patterns, and topic ontology showed within-group homogeneity. EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. Cognitive activity scaled down in relation to external tool use. In session 4, LLM-to-Brain participants showed reduced alpha and beta connectivity, indicating under-engagement. Brain-to-LLM users exhibited higher memory recall and activation of occipito-parietal and prefrontal areas, similar to Search Engine users. Self-reported ownership of essays was the lowest in the LLM group and the highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.

🚀 Use AI locally in @ONLYOFFICE with Ollama! 🤖🔒

Get access to powerful AI models without an internet connection, while keeping your documents completely private! 📂🔐

🔹 No network delays
🔹 Full control over your data
🔹 Support for Llama, Mistral, DeepSeek & many more

📖 Read the complete guide here: onlyoffice.com/blog/el/2025/06

ONLYOFFICE Blog · Using Ollama AI offline in ONLYOFFICE
Learn how to integrate Ollama's AI models into ONLYOFFICE for private, offline work.
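
A practical note that is not from the guide itself: the integration talks to Ollama over its local HTTP API, so it's worth checking that the endpoint answers before configuring the editor. A minimal offline round-trip check, assuming the default port and an already-pulled model (both assumptions):

```python
# Offline round-trip check before pointing an editor integration
# (e.g. ONLYOFFICE) at a local Ollama instance. Assumes the default port
# and that the model below has already been pulled -- both assumptions.
import requests

MODEL = "mistral"  # assumption: any locally pulled model will do

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": MODEL,
        "messages": [{"role": "user", "content": "Say 'ready' if you can hear me."}],
        "stream": False,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```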