#neuralnetworks

Continued thread

Edit: Why 6 months? Ehh, the short answer is, "because I started 3 years behind everyone else." At the time, #datascience was not well defined as a career direction, and #ai models were still trying to gain traction, i.e., as simple #neuralnetworks that offered a computational advantage over their statistical-model counterparts. So if I had not done at least that much, it would have been in vain.

🧠 Welcome to the Curved Space of Everything
buzzsprout.com/2405788/episode
helioxpodcast.substack.com/p/1

August 06, 2025 • (S5 E11) • 16:12
Heliox: Where Evidence Meets Empathy 🇨🇦

🧠💥 Just discovered how your brain might be hiding explosive secrets in curved spaces. New research reveals why AI suddenly "gets it" - and it's not what you think. The math that's reshaping memory itself. #NeuralNetworks #AI #brainscience

Thanks for listening today!

If you enjoy the show, please give it a rating.
On Apple Podcasts, scroll to the bottom and leave a rating.
On Spotify, head to the show and click the three-dot icon to rate.
⭐⭐⭐⭐⭐
Thank you!

AlphaGo Moment for Model Architecture Discovery

buzzsprout.com/2405788/episode

helioxpodcast.substack.com/pub

August 02, 2025 • (S5 E7) • 17:39
Heliox: Where Evidence Meets Empathy 🇨🇦

We're living through what might be the last era where humans are the limiting factor in AI development. That's not hyperbole—it's the stark conclusion emerging from breakthrough research that should terrify and exhilarate us in equal measure.

________________________________________

In-context learning has been consistently shown to exceed hand-crafted neural learning algorithms across the board.

But it's limited by the length of the context. Even neural architectures that let the context grow without bound come with high costs and scaling problems.

Is there a way to incorporate new knowledge learned in-context back into neural network weights?

Of course there is!

Let's imagine we have a lot of data: sequences of instructions and outputs in which in-context learning happens.

From this data we can produce a synthetic dataset that captures the newly learned knowledge. We can then continually train the model on this dataset.

Of course, this is super slow and inconvenient. But as a result we get a dataset pairing episodes of in-context learning with the old model weights and the new model weights.
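To make that concrete, here's a minimal PyTorch sketch of that heavy pipeline, with a tiny toy network standing in for the large model. The synthetic (input, target) pairs and all names here are placeholders I've invented for illustration, not anything specified in the post:

```python
# Sketch of the "heavy pipeline" (assumed details): fine-tune on synthetic
# data that encodes knowledge learned in-context, and record the weight delta.
import copy

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 16))
old_state = copy.deepcopy(model.state_dict())  # weights before distillation

# Placeholder for synthetic (input, target) pairs expressing knowledge that
# so far existed only in-context; in practice these would be generated from
# the logged instruction/output sequences.
inputs = torch.randn(64, 16)
targets = torch.randn(64, 16)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):  # the continual-training step
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(inputs), targets)
    loss.backward()
    opt.step()

# One training example for the neural programmer: (context, weight delta).
delta = {k: model.state_dict()[k] - old_state[k] for k in old_state}
```

Run over many logged interactions, this yields the (context, old weights, new weights) triples described above.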

We can use this data to train a neural programmer model directly!

That model would take the context as input and, if in-context learning happened in those interactions, predict the changes to the neural network weights that would have resulted from running the long, heavy synthetic-data pipeline.

Instead of running the heavy pipeline, we can use the neural programmer model to directly update the large model's weights based on the in-context learning it experienced, crystallizing the learnings into long-term memory, not unlike what the hippocampus does in the human brain.
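A hedged sketch of what the neural programmer itself might look like, assuming the simplest possible design: the context is already embedded as a fixed-size vector, and the programmer emits one flat delta over all weights. Every architectural choice here is my assumption, not the post's:

```python
# Assumed design: map a context embedding to a flat weight delta and apply it.
import torch
import torch.nn as nn

def flat_params(m: nn.Module) -> torch.Tensor:
    """Concatenate all parameters into one flat vector."""
    return torch.cat([p.detach().flatten() for p in m.parameters()])

target_model = nn.Linear(16, 16)               # stands in for the large model
n_weights = flat_params(target_model).numel()

# The neural programmer: it would be trained on the (context, weight delta)
# pairs produced by the heavy pipeline; here it is untrained and illustrative.
programmer = nn.Sequential(
    nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, n_weights)
)

context_embedding = torch.randn(1, 128)        # stands in for an ICL episode
predicted_delta = programmer(context_embedding).squeeze(0)

# "Crystallize" the in-context learning directly into long-term weights.
with torch.no_grad():
    offset = 0
    for p in target_model.parameters():
        n = p.numel()
        p.add_(predicted_delta[offset:offset + n].view_as(p))
        offset += n
```

For a real LLM the predicted delta would presumably have to be low-rank or otherwise compressed, since emitting billions of raw weight values per update is infeasible; the post doesn't address that choice.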

Your Wi-Fi may know who you are, literally. “WhoFi,” a new system from Rome’s La Sapienza University, identifies people with 95.5% accuracy using signals bouncing off their bodies. No cameras. No lights. Just basic routers and neural networks. It even works through walls. Groundbreaking tech, or surveillance nightmare? The line just got blurrier.

Continued thread

Edit: "But your work is #datascience about #neuralnetworks and #ai and data regulation is #cybersecurity so why would you need to be concerned about that?"

Dude. Bro. Dudebro. Broham. Ham slice. If you do not understand the relationship between cybersecurity, ai, and data processing, then there is nothing I can say to help you. Either you or your company are in some deep shit

Continued thread

Like, I am focusing on #neuralnetworks to do stuff, but it is not as if I am explicitly trying to solve a particular problem yet. So, like, there are a lot of supporting details, but a lot of them end up making me ask more questions instead. Like, I can tell that the ones writing are quite formidable, but I kind of feel like "Are you projecting your skillfulness or..." sometimes.

Gary Marcus is onto something here. Maybe true AGI is not so impossible to reach after all; probably not in the near future, but likely within 20 years.

"For all the efforts that OpenAI and other leaders of deep learning, such as Geoffrey Hinton and Yann LeCun, have put into running neurosymbolic AI, and me personally, down over the last decade, the cutting edge is finally, if quietly and without public acknowledgement, tilting towards neurosymbolic AI.

This essay explains what neurosymbolic AI is, why you should believe it, how deep learning advocates long fought against it, and how in 2025, OpenAI and xAI have accidentally vindicated it.

And it is about why, in 2025, neurosymbolic AI has emerged as the team to beat.

It is also an essay about sociology.

The essential premise of neurosymbolic AI is this: the two most common approaches to AI, neural networks and classical symbolic AI, have complementary strengths and weaknesses. Neural networks are good at learning but weak at generalization; symbolic systems are good at generalization, but not at learning."

garymarcus.substack.com/p/how-

Marcus on AI · How o3 and Grok 4 Accidentally Vindicated Neurosymbolic AI · By Gary Marcus
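To illustrate the quoted premise with a toy of my own (not anything from the essay): the neural part learns perception from data, while the symbolic part supplies exact generalization with no training at all.

```python
# Neurosymbolic toy (illustrative only): neural perception + symbolic reasoning.
import torch
import torch.nn as nn

digit_classifier = nn.Linear(64, 10)  # neural: would be learned from data (untrained here)

def add(a: int, b: int) -> int:       # symbolic: generalizes exactly to all
    return a + b                      # integers, with zero training examples

image_a, image_b = torch.randn(64), torch.randn(64)  # stand-ins for digit images
a = int(digit_classifier(image_a).argmax())          # perceive neurally
b = int(digit_classifier(image_b).argmax())
print(add(a, b))                                     # reason symbolically
```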
Continued thread

If we ever see a real artificial mind, some kind of LLM will probably be a small but significant component of it, but the current wave of machine learning will most likely grind to a halt very soon for lack of cheap training data. The reason all of this is happening now is simple: the technologies behind machine learning have been around for decades, but computers weren't fast enough, and didn't have enough memory, for those tools to become really powerful until the early 2000s. Around the same time, the Internet went mainstream and filled up with all kinds of data that could be mined for training sets. Now there is so much synthetic content out there that automated data mining won't work much longer; you need humans to curate and clean the training data, which makes the process slow and expensive. I expect another decades-long AI winter once the commercial hype is over.

If you look for real intelligence, look at autonomous robots and computer game NPCs. There you can find machine learning and artificial neural networks applied to actual cognitive tasks in which an agent interacts with its environment. Those things may not even be as intelligent as a rat yet, but they are actually intelligent, unlike LLMs.

#llm #LLMs #ai