med-mastodon.com is one of the many independent Mastodon servers you can use to participate in the fediverse.
Medical community on Mastodon

#dynamicalsystems

Andrei A. Klishin<p>re-<a href="https://fediscience.org/tags/introduction" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>introduction</span></a><br>Hi Fediscience! I am an Assistant Professor of Mechanical Engineering at University of Hawaiʻi at Mānoa (Honolulu). I got here starting from Physics training with many scientific detours into data-driven models, complex systems, nanomaterial self-assembly, human learning of complex networks, naval ships, and design problems.<br>I grew up in Belarus and have *opinions* on that region of the world. I've been on Fediverse since late 2022 when *something* happened to our previous cybersocial infrastructure, but the previous server I was on is sunsetting. Please come say hi and recommend cool people to follow here.<br>I have a blog with longer thoughts on science-adjacent topics.<br><a href="https://www.aklishin.science/blog/" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="">aklishin.science/blog/</span><span class="invisible"></span></a><br><a href="https://fediscience.org/tags/ComplexSystems" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ComplexSystems</span></a> <a href="https://fediscience.org/tags/NetworkScience" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>NetworkScience</span></a> <a href="https://fediscience.org/tags/DataScience" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>DataScience</span></a> <a href="https://fediscience.org/tags/DynamicalSystems" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>DynamicalSystems</span></a> <a href="https://fediscience.org/tags/CollectiveBehavior" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CollectiveBehavior</span></a> <a href="https://fediscience.org/tags/StatisticalPhysics" class="mention hashtag" rel="nofollow noopener" 
target="_blank">#<span>StatisticalPhysics</span></a></p>
Khurram Wadee ✅<p>A few days back, I posted some <a href="https://mastodon.org.uk/tags/AnimatedGifs" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AnimatedGifs</span></a> of the exact solution for a large-amplitude undamped, unforced <a href="https://mastodon.org.uk/tags/Pendulum" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Pendulum</span></a>. I then thought to complete the study to include the case when it has been fed enough <a href="https://mastodon.org.uk/tags/energy" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>energy</span></a> to allow it just to undergo <a href="https://mastodon.org.uk/tags/FullRotations" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>FullRotations</span></a>, rather than just <a href="https://mastodon.org.uk/tags/oscillations" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>oscillations</span></a>. Well, it turns out that it is “a bit more complicated than I first expected” but I finally managed it.</p><p><a href="https://mastodon.org.uk/tags/Mathematics" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Mathematics</span></a> <a href="https://mastodon.org.uk/tags/AppliedMathematics" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AppliedMathematics</span></a> <a href="https://mastodon.org.uk/tags/SpecialFunctions" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>SpecialFunctions</span></a> <a href="https://mastodon.org.uk/tags/DynamicalSystems" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>DynamicalSystems</span></a> <a href="https://mastodon.org.uk/tags/NonlinearPhenomena" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>NonlinearPhenomena</span></a></p>
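The exact pendulum solutions mentioned above come out in terms of complete elliptic integrals. As a sketch of the two regimes (using my own normalization, time scaled so small swings have period 2π, which is not necessarily the post's convention), both the oscillation period and the full-rotation period of θ'' + sin θ = 0 reduce to SciPy's `ellipk`:

```python
import numpy as np
from scipy.special import ellipk  # complete elliptic integral K(m), m = k**2

def oscillation_period(theta0):
    """Exact period of theta'' + sin(theta) = 0 for oscillation
    amplitude theta0 < pi (small swings give a period of 2*pi)."""
    m = np.sin(theta0 / 2.0) ** 2
    return 4.0 * ellipk(m)

def rotation_period(theta_dot0):
    """Time per full rotation when launched from theta = 0 with angular
    velocity theta_dot0 > 2, i.e. with energy above the separatrix."""
    m = 4.0 / theta_dot0 ** 2
    return (4.0 / theta_dot0) * ellipk(m)

print(oscillation_period(0.01))  # close to 2*pi for small amplitude
print(rotation_period(10.0))     # fast spinning: close to 2*pi/10
```

The period diverges as the amplitude approaches π (or as the launch velocity approaches 2 from above), which is exactly the separatrix where "a bit more complicated than I first expected" tends to happen.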
DurstewitzLab<p>Can time series (TS) <a href="https://mathstodon.xyz/tags/FoundationModels" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>FoundationModels</span></a> (FM) like Chronos zero-shot generalize to unseen <a href="https://mathstodon.xyz/tags/DynamicalSystems" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>DynamicalSystems</span></a> (DS)? <a href="https://mathstodon.xyz/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a></p><p>No, they cannot!</p><p>But *DynaMix* can, the first TS/DS foundation model based on principles of DS reconstruction, capturing the long-term evolution of out-of-domain DS: <a href="https://arxiv.org/pdf/2505.13192v1" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">arxiv.org/pdf/2505.13192v1</span><span class="invisible"></span></a></p><p>Unlike TS foundation models, DynaMix exhibits <a href="https://mathstodon.xyz/tags/ZeroShotLearning" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ZeroShotLearning</span></a> of long-term stats of unseen DS, incl. attractor geometry &amp; power spectrum, w/o *any* re-training, just from a context signal. <br>It does so with only 0.1% of the parameters of Chronos &amp; 10x faster inference times than the closest competitor.</p><p>It often even outperforms TS FMs on forecasting diverse empirical time series, like weather, traffic, or medical data, typically used to train TS FMs. 
<br>This is surprising, cos DynaMix’ training corpus consists *solely* of simulated limit cycles &amp; chaotic systems, no empirical data at all!</p><p>And no, it’s neither based on Transformers nor Mamba – it’s a new type of mixture-of-experts architecture based on the recently introduced AL-RNN (<a href="https://proceedings.neurips.cc/paper_files/paper/2024/file/40cf27290cc2bd98a428b567ba25075c-Paper-Conference.pdf" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="ellipsis">proceedings.neurips.cc/paper_f</span><span class="invisible">iles/paper/2024/file/40cf27290cc2bd98a428b567ba25075c-Paper-Conference.pdf</span></a>), specifically trained for DS reconstruction.</p><p>Remarkably, DynaMix not only generalizes zero-shot to novel DS, but it can even generalize to new initial conditions and regions of state space not covered by the in-context information.</p><p>We dive a bit into the reasons why current time series FMs not trained for DS reconstruction fail, and conclude that a DS perspective on time series forecasting &amp; models may help to advance the <a href="https://mathstodon.xyz/tags/TimeSeriesAnalysis" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>TimeSeriesAnalysis</span></a> field.</p>
Nosrat<p>(10/n) If you’ve made it this far, you’ll definitely want to check out the full paper. Grab your copy here: <br><a href="https://www.biorxiv.org/content/10.1101/2024.12.17.628339v1" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://www.</span><span class="ellipsis">biorxiv.org/content/10.1101/20</span><span class="invisible">24.12.17.628339v1</span></a><br>📤 Sharing is highly appreciated!<br><a href="https://masto.ai/tags/compneuro" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>compneuro</span></a> <a href="https://masto.ai/tags/neuroscience" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>neuroscience</span></a> <a href="https://masto.ai/tags/NeuroAI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>NeuroAI</span></a> <a href="https://masto.ai/tags/dynamicalsystems" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>dynamicalsystems</span></a></p>
DurstewitzLab<p>Symbolic dynamics builds a bridge from <a href="https://mathstodon.xyz/tags/DynamicalSystems" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>DynamicalSystems</span></a> to computation/ <a href="https://mathstodon.xyz/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a>! </p><p>In our <a href="https://mathstodon.xyz/tags/NeurIPS2024" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>NeurIPS2024</span></a> (@NeurIPSConf) paper we present a new network architecture, Almost-Linear RNNs (Fig. 1), that finds most parsimonious piecewise-linear representations of DS from data: <br><a href="https://arxiv.org/abs/2410.14240" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">arxiv.org/abs/2410.14240</span><span class="invisible"></span></a></p><p>These representations are topologically minimal (Fig. 5,7), profoundly easing interpretation and math. analysis of the underlying data-generating DS. </p><p>The AL-RNN furthermore naturally gives rise to a symbolic encoding that provably preserves important topological properties of the underlying dynamical system.<br>Symbolic dynamics directly links up with computational graphs, finite state machines, formal languages etc. (Fig. 2).</p><p>Spearheaded by Manuel Brenner and Christoph Hemmer, jointly with Zahra Monfared.</p>
DurstewitzLab<p>Interested in interpretable <a href="https://mathstodon.xyz/tags/AI" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AI</span></a> foundation models for <a href="https://mathstodon.xyz/tags/DynamicalSystems" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>DynamicalSystems</span></a> reconstruction?</p><p>In a new paper we move into this direction, training common latent DSR models with system-specific features on data from multiple different dynamical regimes and DS: <a href="https://arxiv.org/pdf/2410.04814" rel="nofollow noopener" translate="no" target="_blank"><span class="invisible">https://</span><span class="">arxiv.org/pdf/2410.04814</span><span class="invisible"></span></a><br>(Fig. 7)</p><p>We show applications like transfer &amp; few-shot learning, but most interestingly perhaps, subject/system-specific features were often linearly related to control parameters of the underlying dynamical system trained on …<br>(Fig. 4)</p><p>This gives rise to an interpretable latent feature space, in which datasets with similar dynamics cluster. Intriguingly, this clustering according to *dynamical systems features* led to much better separation of groups than could be achieved by more traditional time series features.<br>(Fig. 6)</p><p>Fantastic work by the incomparable Manuel Brenner and Elias Weber, together with Georgia Koppe!</p>
Khurram Wadee ✅<p>Just messing about a bit. Here is the famous <a href="https://mastodon.org.uk/tags/LorenzAttractor" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>LorenzAttractor</span></a> plotted using <a href="https://mastodon.org.uk/tags/WxMaxima" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>WxMaxima</span></a>. The three <a href="https://mastodon.org.uk/tags/trajectories" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>trajectories</span></a>, shown in red, green and blue are for three fairly nearby <a href="https://mastodon.org.uk/tags/InitialConditions" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>InitialConditions</span></a>.</p><p><a href="https://mastodon.org.uk/tags/DynamicalSystems" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>DynamicalSystems</span></a> <a href="https://mastodon.org.uk/tags/ChaoticAttractors" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>ChaoticAttractors</span></a> <a href="https://mastodon.org.uk/tags/StrangeAttractors" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>StrangeAttractors</span></a> <a href="https://mastodon.org.uk/tags/NumericalSolutions" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>NumericalSolutions</span></a> <a href="https://mastodon.org.uk/tags/Mathematics" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>Mathematics</span></a> <a href="https://mastodon.org.uk/tags/AppliedMathematics" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>AppliedMathematics</span></a> <a href="https://mastodon.org.uk/tags/CCBYSA" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>CCBYSA</span></a> <a href="https://mastodon.org.uk/tags/FreeSoftware" class="mention hashtag" rel="nofollow noopener" target="_blank">#<span>FreeSoftware</span></a></p>
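For readers without WxMaxima, the same experiment, three trajectories from nearby initial conditions on the Lorenz attractor, is only a few lines of Python. The parameters σ = 10, ρ = 28, β = 8/3 are the classic chaotic values; the initial conditions below are illustrative choices of mine, not the poster's:

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Classic Lorenz system with the standard chaotic parameters."""
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

def trajectory(ic, t_max=20.0, n=2000):
    t_eval = np.linspace(0.0, t_max, n)
    sol = solve_ivp(lorenz, (0.0, t_max), ic, t_eval=t_eval,
                    rtol=1e-9, atol=1e-12)
    return sol.y  # shape (3, n)

# Three fairly nearby initial conditions, as in the post
ics = [[1.0, 1.0, 1.0], [1.0001, 1.0, 1.0], [1.0, 1.0001, 1.0]]
trajs = [trajectory(ic) for ic in ics]

# Sensitive dependence: the tiny initial separation grows enormously
sep0 = np.linalg.norm(np.subtract(ics[0], ics[1]))
sep_end = np.linalg.norm(trajs[0][:, -1] - trajs[1][:, -1])
print(sep0, sep_end)
```

Plotting the three arrays of `trajs` in 3D (e.g. with matplotlib's `plot(x, y, z)`) reproduces the familiar butterfly, with the trajectories visibly separating after a few turns.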

Magnitude-based pruning is a standard #ML #AI technique to produce sparse models, but in our @ICMLConf paper arxiv.org/abs/2406.04934 we find it doesn’t work for #DynamicalSystems reconstruction.
Instead, via geometry-based pruning we find the *network topology* is far more important!

It turns out that even RNN weights of small relative magnitude can have a dramatic impact on system dynamics as measured by attractor agreement. In fact, there is not much difference between small and large magnitude weights in contribution to DS reconstruction quality. (Fig. 1)

Following the lottery ticket hypothesis, we find that large RNNs still contain winning tickets that can be carved out by *geometry-based* pruning, but that these tickets are defined by *graph topology* with initial weight values hardly playing any role. (Fig. 4)

The ‘winning’ graph topology distilled from trained RNNs turns out to exhibit both hub-like and small world features. RNNs initialized with this topology perform significantly better than equally-sized RNNs with random, Watts-Strogatz or Barabási-Albert topology. (Fig. 6)

… and also train much faster. (Fig. 7)

This all makes sense: Natural and engineered DS often bear a specific sparse network topology that is crucial for shaping their dynamics!

Fantastic work led by Christoph Hemmer with Manuel Brenner and Florian Hess!
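For context, the magnitude-based baseline the paper argues against is simple to state: rank weights by |w| and zero out the smallest fraction. A minimal NumPy sketch of that baseline (not the geometry-based method the paper actually proposes):

```python
import numpy as np

def magnitude_prune(W, sparsity):
    """Zero out the fraction `sparsity` of weights with smallest |w|."""
    k = int(sparsity * W.size)
    if k == 0:
        return W.copy(), np.ones(W.shape, dtype=bool)
    # Threshold at the k-th smallest absolute value
    thresh = np.partition(np.abs(W).ravel(), k - 1)[k - 1]
    mask = np.abs(W) > thresh
    return W * mask, mask

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))    # stand-in for an RNN weight matrix
W_pruned, mask = magnitude_prune(W, sparsity=0.8)
print(mask.mean())               # fraction of weights kept, about 0.2
```

The paper's point is that applying this criterion to an RNN trained for DS reconstruction discards weights that matter for the attractor, whereas pruning guided by graph topology preserves it.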

Dear fellow #control #engineers and #systemstheory nerds 🤓 . I want to invite you on my little journey through the interesting lands of #nonlinear #DynamicalSystems.

My goal is to come up with differential equations for dynamical systems that have some not-so-common step responses. How far this will go, I don't know, but it could get interesting.

We start with this little friend: it loads fast but unloads slowly. I call it the "leashed DT1 element".
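I can only guess at the equations behind the "leashed DT1 element", but one plausible way to get "loads fast, unloads slow" is a first-order lag whose time constant switches with the sign of the error. A sketch under that assumption (the tau values are illustrative, not the author's):

```python
import numpy as np

def leashed_element(u, dt=0.001, tau_load=0.05, tau_unload=1.0):
    """First-order lag whose time constant switches with the sign of the
    error: it charges quickly (tau_load) while the input exceeds the
    state and discharges slowly (tau_unload) otherwise. This is a guess
    at 'loads fast, unloads slow', not the author's actual equations."""
    y = np.zeros_like(u, dtype=float)
    for k in range(1, len(u)):
        tau = tau_load if u[k] > y[k - 1] else tau_unload
        y[k] = y[k - 1] + dt * (u[k] - y[k - 1]) / tau  # explicit Euler step
    return y

t = np.arange(0.0, 2.0, 0.001)
u = (t < 1.0).astype(float)  # unit pulse: on for 1 s, then off
y = leashed_element(u)
print(y[999], y[-1])  # nearly 1 at pulse end; still well above 0 a second later
```

Because the switching makes the element nonlinear, its pulse response is asymmetric: a sharp rise followed by a long tail, which matches the "leashed" picture.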

FOUND! thx @lilbatscholar
Dynamics: The Geometry of Behavior, by R. Abraham

I remember one time I ...found a book w/ pdf.
It was about visualizing dynamical systems theory, showing manifolds and so on.
The plots were not made with any programming language; they were actual drawings: pastels, watercolors.
It's the type of book I literally have dreams about, but I think this one actually exists 😅 what was it?

Kindly boost my chances to find it 🙏🏾

Fernando Rosas (unfortunately not on Mastodon) asked on bsky:

"Has anyone figured out what exactly is the relation between the ideas of feedback, recurrence, and self-reference?"

A really interesting question.

He pointed to this paper for ideas: arxiv.org/abs/1711.02456
"Self-referential basis of undecidable dynamics: from The Liar Paradox and The Halting Problem to The Edge of Chaos"

I did some desk research and found this cool paper:
arxiv.org/abs/1112.2141
"Resolving Gödel's Incompleteness Myth: Polynomial Equations and Dynamical Systems for Algebraic Logic"
that argues there is no essential incompleteness in formal reasoning systems if you look closely enough (using a more elaborate formalism, based on polynomial equations, to represent and evaluate logical propositions).
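To make the polynomial-equation idea concrete, here is the classic arithmetization of Boolean connectives over {0, 1}, a standard textbook encoding and not necessarily the exact formalism of that paper:

```python
from itertools import product

# Classic arithmetization of Boolean logic over {0, 1}:
#   NOT x    -> 1 - x
#   x AND y  -> x * y
#   x OR y   -> x + y - x * y
NOT = lambda x: 1 - x
AND = lambda x, y: x * y
OR = lambda x, y: x + y - x * y
IMPLIES = lambda x, y: OR(NOT(x), y)

def is_tautology(f, arity):
    """A proposition is a tautology iff its polynomial is 1 on all 0/1 inputs."""
    return all(f(*vals) == 1 for vals in product((0, 1), repeat=arity))

print(is_tautology(lambda x: OR(x, NOT(x)), 1))  # excluded middle: True
print(is_tautology(lambda x, y: AND(x, y), 2))   # not a tautology: False
```

In this picture, evaluating a proposition becomes solving a polynomial system, which is what lets the paper recast questions about provability as questions about the dynamics of equation solving.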

I wonder if an analogous construction could be created for related theorems, like the halting problem in computability theory.
