med-mastodon.com is one of the many independent Mastodon servers you can use to participate in the fediverse.
Medical community on Mastodon

Server stats: 339 active users

#openai

90 posts · 72 participants · 4 posts today

Hey folks, if you are using #amazon, #Twitter #X #Instagram #Facebook #Meta #openAI #chatGPT #Google #Netflix #Apple and all the other #MAGA things, then:

YOU ARE FINANCING THE WAR IN IRAN.

WITH YOUR MONEY, WITH YOUR DATA.

Think twice. Stop complaining and start looking for alternatives. I am far from perfect, but even so I am only financing Bezos and Google: no #microsoft, no #Meta, no #X for decades. If I can do that, you can. #opensource #linux.

"LLM users also struggled to accurately quote their own work. […] Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels."

arxiv.org/abs/2506.08872

arXiv.org · Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task

This study explores the neural and behavioral consequences of LLM-assisted essay writing. Participants were divided into three groups: LLM, Search Engine, and Brain-only (no tools). Each completed three sessions under the same condition. In a fourth session, LLM users were reassigned to Brain-only group (LLM-to-Brain), and Brain-only users were reassigned to LLM condition (Brain-to-LLM). A total of 54 participants took part in Sessions 1-3, with 18 completing session 4. We used electroencephalography (EEG) to assess cognitive load during essay writing, and analyzed essays using NLP, as well as scoring essays with the help from human teachers and an AI judge. Across groups, NERs, n-gram patterns, and topic ontology showed within-group homogeneity. EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. Cognitive activity scaled down in relation to external tool use. In session 4, LLM-to-Brain participants showed reduced alpha and beta connectivity, indicating under-engagement. Brain-to-LLM users exhibited higher memory recall and activation of occipito-parietal and prefrontal areas, similar to Search Engine users. Self-reported ownership of essays was the lowest in the LLM group and the highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning.

Study: Large AI models resort to blackmail under "stress"

In a test, 16 leading AI models from OpenAI, Google, Meta, xAI & Co. consistently displayed harmful behaviors such as threats and espionage.

heise.de/news/Studie-Grosse-KI

heise online · Study: Large AI models resort to blackmail under "stress" · By Stefan Krempl

Daily dump from the #Fediverse 20250621

- The #MIT #cogscience (?) paper on #AI and cognitive decline; the #method sounds solid: media.mit.edu/publications/you

- How not to #lose your #job to #OpenAI, or just thinking about #work #skills: 80000hours.org/agi/guide/skill

- Does anyone have any useful advice for introducing a 9yo to #linux? #kid #birthday #gift

- Listening to #karenhao on their new book: techwontsave.us/episode/280_we

Soon, ICE will come and escort you to Guantanamo Bay if you use an ad blocker…

US Army appoints Palantir, Meta, OpenAI execs as Lt. Colonels

thegrayzone.com/2025/06/18/pal

The Grayzone · US Army appoints Palantir, Meta, OpenAI execs as Lt. Colonels · News and investigative journalism on empire

“The latest upheaval, brought by #ArtificialIntelligence (AI), is testing the #cockroaches as never before. #Advertising is one of the sectors most radically affected by AI so far. As such, adland offers a postcard from the future for other industries. Three lessons stand out.

The first is that the moat between human workers and chatbot rivals is narrower than most people think.

#CreativeWork is often seen as immune from #automation. Large language models (#LLMs) are designed to predict the most likely answer, which is often the opposite of the most original one. The best ads remain too weird and wonderful for any machine to have dreamt up: consider the campaign that attached step-counters to chickens to advertise free-range eggs.

Yet this week in Cannes #TikTok, #Meta, #Google and other ad platforms showed off AI-powered features that can create passable video or rewrite ad copy at the click of a button. Their output will not win any awards. That does not matter. Most of the $1trn that is spent on ads each year goes towards workmanlike campaigns, rather than Cannes trophy-bait.

#SamAltman’s prediction that AI will one day be able to do 95% of #marketing may sound like boosterism for his firm, #OpenAI. But the inspired human-made content that people present as a counter-argument is firmly within the remaining 5%. Robots will content themselves with the rest.”

#WhiteCollar / #ZeroHourWork <economist.com/leaders/2025/06/> (paywall) / <archive.md/z9IuJ>

The Economist · What the “cockroaches” of the ad world teach about dealing with AI · By The Economist


Via #LLRX - @psuPete Recommends – Weekly highlights on cyber security issues, June 15, 2025. llrx.com/2025/06/pete-recommen
Four highlights from this week: Protect Yourself #Online; Study: #OpenAI Has Been #Breached More Than 1000 Times; #Feds warn: Hang up on #phone #scammers pretending to be #borderpatrol agents; and #Cybercriminals Are Hiding #Malicious #Web Traffic in Plain Sight. #cybercrime #cybersecurity #breaches #internet #malicious

"Bob McGrew, the former chief research officer at OpenAI, said professional software engineers are not going to lose their jobs to vibe coding just yet.

McGrew, who left OpenAI in November, said on the latest episode of Sequoia Capital's "Training Data" podcast that product managers can make "really cool prototypes" with vibe coding. But human engineers will still be brought in to "rewrite it from scratch."

"If you are given a code base that you don't understand — this is a classic software engineering question — is that a liability or is it an asset? Right? And the classic answer is that it's a liability," McGrew said of software made with vibe coding.

"You have to maintain this thing. You don't know how it works, no one knows how it works. That's terrible," he continued.

McGrew said that in the next one or two years, coding will be done by a mix of human engineers working with AI tools like Cursor and AI agents like Devin working in the background.

He added that while the liability that comes with using agents to code has gone down, it is "still, net, a liability.""

businessinsider.com/vibe-codin

Business Insider · Ex-OpenAI research head: Vibe coding won't replace software engineers · By Kwan Wei Kevin Tan