https://www.europesays.com/2255431/ AI models with systemic risks given pointers on how to comply with EU AI rules #Europe #EuropeanUnion #FoundationModels #models #SystemicRisks #TheEuropeanCommission
GREmLN: A Cellular Regulatory Network-Aware Transcriptomics Foundation Model
https://www.biorxiv.org/content/10.1101/2025.07.03.663009v1?rss=1
Comments
* Really brilliant work: LLM + transformer architecture
* As a molecular biologist, I have long thought about flux-based analysis and ODE models (but never had the time to learn or program them)
* This type of modeling is foundational to in silico modeling
* Huge proponent of graphs - adding an LLM and attention heads is brilliant!
Kudos
[Science] Researchers claim their AI model simulates the human mind. Others are skeptical
https://www.science.org/content/article/researchers-claim-their-ai-model-simulates-human-mind-others-are-skeptical
[Nature] A foundation model to predict and capture human cognition
https://www.nature.com/articles/s41586-025-09215-4
Hot take: Foundation Models custom tools should be AppIntents
Or we should at least be able to create a tool from an AppIntent or provide app intents to the LLM session.
It’d be like using Siri or the Shortcuts app from within the app, in a prompt-based conversational way.
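For illustration, here is roughly what that bridge looks like by hand today: a minimal sketch assuming the WWDC25 beta APIs, where SearchNotesIntent, SearchNotesTool, and NotesStore are hypothetical names standing in for real app code.

```swift
import AppIntents
import FoundationModels

// Hypothetical shared app logic; stands in for a real on-device data store.
enum NotesStore {
    static func search(matching query: String) -> String {
        "3 notes mention \"\(query)\""   // placeholder result
    }
}

// The App Intent that Siri and Shortcuts already see.
struct SearchNotesIntent: AppIntent {
    static var title: LocalizedStringResource = "Search Notes"

    @Parameter(title: "Query")
    var query: String

    func perform() async throws -> some IntentResult & ReturnsValue<String> {
        .result(value: NotesStore.search(matching: query))
    }
}

// Today this Tool wrapper has to be written by hand so the on-device model can
// reach the same capability; the hot take is that it could be derived from the
// AppIntent automatically.
struct SearchNotesTool: Tool {
    let name = "searchNotes"
    let description = "Searches the user's notes and returns a short summary."

    @Generable
    struct Arguments {
        @Guide(description: "Text to search for")
        var query: String
    }

    func call(arguments: Arguments) async throws -> ToolOutput {
        // Exact tool output types differ between beta seeds; a plain string payload is enough here.
        ToolOutput(NotesStore.search(matching: arguments.query))
    }
}

// Handing the tool to a session lets prompts trigger the same capability, Shortcuts-style.
func makeSession() -> LanguageModelSession {
    LanguageModelSession(tools: [SearchNotesTool()])
}
```

The duplication between the Tool's Arguments and the intent's @Parameter is exactly what an AppIntent-to-Tool bridge would remove.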
Wow, the Foundation Models Framework Deep Dive was amazing! #apple #wwdc25 #foundationModels #frameworks The approach for integration with on-device data is brilliant.
#wwdc25 #FoundationModels #Xcode #LLM
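Part of that on-device integration story is guided generation, where the model fills in a strongly typed Swift value instead of returning free-form text. A minimal sketch, assuming the WWDC25 beta API; TripIdea and suggestTrip are hypothetical names:

```swift
import FoundationModels

// The model is asked to generate this type directly; @Guide hints steer each field.
@Generable
struct TripIdea {
    @Guide(description: "A short, catchy trip title")
    var title: String

    @Guide(description: "Three landmarks to visit")
    var landmarks: [String]
}

func suggestTrip() async throws -> TripIdea {
    let session = LanguageModelSession()
    let response = try await session.respond(
        to: "Suggest a weekend trip for someone who likes museums.",
        generating: TripIdea.self
    )
    return response.content   // already a TripIdea, no JSON parsing needed
}
```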
Has anyone seen the Foundation Models actually working? Am I holding it wrong? I am not getting any responses either in the playground or app, and there is a bunch of this crap in the console when running the app.
Running Xcode 26 beta on macOS 26 beta.
Are there regional restrictions? Xcode AI chat also is “unavailable in my region”.
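One thing worth ruling out before blaming the betas: the on-device model reports its own availability, and sessions only respond when Apple Intelligence is enabled and the model assets have finished downloading on an eligible device (and, apparently, region). A quick check, sketched against the WWDC25 beta API:

```swift
import FoundationModels

// Prints why the on-device model is (not) usable before any session is created.
func checkModelAvailability() {
    switch SystemLanguageModel.default.availability {
    case .available:
        print("Model is ready; sessions should respond.")
    case .unavailable(let reason):
        // Beta reasons include an ineligible device or region, Apple Intelligence
        // being switched off, and the model assets not having finished downloading.
        print("Model unavailable: \(reason)")
    }
}
```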
Making ChatGPT & co. usable for security-relevant fields of application
The @Cyberagentur has launched the research programme #HEGEMON to evaluate and adapt generative #FoundationModels for security-critical applications.
It covers the development of new benchmarks for AI models and their application to complex tasks from the geoinformation domain.
More information on the award procedure: https://t1p.de/qcuq4
#Cyberagentur #HEGEMON #KI #Sicherheit #Benchmarking #PCP #Geoinformation #Forschung #KITransparenz
Can time series (TS) #FoundationModels (FM) like Chronos zero-shot generalize to unseen #DynamicalSystems (DS)? #AI
No, they cannot!
But *DynaMix* can, the first TS/DS foundation model based on principles of DS reconstruction, capturing the long-term evolution of out-of-domain DS: https://arxiv.org/pdf/2505.13192v1
Unlike TS foundation models, DynaMix exhibits #ZeroShotLearning of long-term stats of unseen DS, incl. attractor geometry & power spectrum, w/o *any* re-training, just from a context signal.
It does so with only 0.1% of the parameters of Chronos & 10x faster inference times than the closest competitor.
It often even outperforms TS FMs on forecasting diverse empirical time series, like weather, traffic, or medical data, typically used to train TS FMs.
This is surprising, because DynaMix's training corpus consists *solely* of simulated limit cycles & chaotic systems, no empirical data at all!
And no, it’s neither based on Transformers nor Mamba – it’s a new type of mixture-of-experts architecture based on the recently introduced AL-RNN (https://proceedings.neurips.cc/paper_files/paper/2024/file/40cf27290cc2bd98a428b567ba25075c-Paper-Conference.pdf), specifically trained for DS reconstruction.
Remarkably, DynaMix not only generalizes zero-shot to novel DS, but it can even generalize to new initial conditions and regions of state space not covered by the in-context information.
We dive a bit into the reasons why current time series FMs, which are not trained for DS reconstruction, fail, and conclude that a DS perspective on time series forecasting & models may help to advance the #TimeSeriesAnalysis field.
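To make "capturing long-term statistics" concrete: instead of scoring pointwise forecast error, you compare distributional properties of long trajectories, for example how often the system visits each region of state space. The toy sketch below illustrates that idea in isolation; it is not the paper's exact metric, and the trajectories, bin count, and smoothing constant are illustrative choices.

```swift
import Foundation

/// Histogram of trajectory values over `bins` equal-width bins on [lo, hi].
func occupancy(_ x: [Double], lo: Double, hi: Double, bins: Int) -> [Double] {
    var counts = [Double](repeating: 0, count: bins)
    for v in x {
        let t = (v - lo) / (hi - lo)
        counts[min(bins - 1, max(0, Int(t * Double(bins))))] += 1
    }
    return counts.map { $0 / Double(x.count) }
}

/// Smoothed KL-style divergence between two occupancy histograms.
func klDivergence(_ p: [Double], _ q: [Double], eps: Double = 1e-6) -> Double {
    zip(p, q).reduce(0) { acc, pq in
        let (pi, qi) = (pq.0 + eps, pq.1 + eps)
        return acc + pi * log(pi / qi)
    }
}

// Ground truth: a sine-shaped limit cycle. "Reconstruction": the same cycle with a
// phase shift and a little noise, so pointwise error is large even though the
// long-term behaviour is essentially identical.
let n = 20_000
let truth = (0..<n).map { sin(0.05 * Double($0)) }
let recon = (0..<n).map { sin(0.05 * Double($0) + 1.3) + 0.01 * Double.random(in: -1...1) }

let pTruth = occupancy(truth, lo: -1.2, hi: 1.2, bins: 40)
let pRecon = occupancy(recon, lo: -1.2, hi: 1.2, bins: 40)
print("occupancy divergence:", klDivergence(pTruth, pRecon))   // small despite the phase shift
```

A phase-shifted reconstruction scores badly on mean squared error yet almost perfectly on occupancy statistics, which is exactly the distinction a DS-reconstruction view of forecasting emphasizes.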
The quest for a fair TimeGPT benchmark
At the end of yesterday's #TimeGPT for mobility post, we concluded that TimeGPT's training set probably included a copy of the popular BikeNYC time series dataset and that, therefore, we were not looking at a fair comparison ...
http://anitagraser.com/2025/03/29/the-quest-for-a-fair-timegpt-benchmark/
We're excited to be part of this lighthouse program of @association, (co-)leading three #HFMI projects: AqQua, The Human Radiome Project and the Synergy Unit.
𝗠𝗼𝗿𝗲 HFMI: https://hfmi.helmholtz.de/
AqQua: https://bit.ly/AqQua
The Human Radiome Project & Synergy Unit: https://bit.ly/THRP-SynergyUnit
@Techmeme This is the danger of closed source
These are knowledge models, and they only output what they are fed with
And no, they won’t magically develop ’reasoning skills’ and be able to sift through propaganda. Not when it’s part of the training data.
To think otherwise means you don’t know shit about how they work.
They obey statistics. Training data for #ai #foundationmodels should be subject to public #academic scrutiny. Otherwise the models are bound to fall for flooding attacks.
Check out the new Helmholtz Foundation Model Initiative (#HFMI) website at https://hfmi.helmholtz.de/ #FoundationModels
The Helmholtz Association provides ideal conditions for developing such forward-looking applications: an abundance of data, powerful supercomputers for training the models, and extensive expertise in artificial intelligence.
Our goal is to develop Foundation Models across a wide spectrum of research fields to address the major questions of our time.
How can #AI transform patient care?
Paul Jäger, Helmholtz Imaging's Head of Research at @DKFZ, explores how #FoundationModels could tackle healthcare challenges, from data overload to improved diagnostics.
Watch this @association video now bit.ly/4eiB2oQ
What are foundation models, and what can they do for science? We explain, using health data as an example: https://www.youtube.com/watch?v=Sqtsx6rDkUc #KI #HFMI #FoundationModels
Introducing our #Helmholtz Foundation Model Initiative! https://www.youtube.com/watch?v=enA0Of36PP0 #HFMI
Our goal is to develop #FoundationModels across a wide spectrum of research fields that contribute to solving the major questions of our time.
A new AI foundation model is meant to help us better understand the global carbon cycles and make predictions about future changes. Above all, the project needs one thing: plenty of high-quality data from a variety of sources.
https://www.helmholtz.de/newsroom/artikel/ein-ki-modell-fuer-den-globalen-kohlenstoffkreislauf/ #FoundationModels
The Helmholtz Foundation Model Initiative (HFMI) is soliciting bids for a second round of projects. With this call, we extend our support for highest-impact pilot foundation model projects within the Helmholtz-Gemeinschaft. The call is open to all our researchers.
Apply now: https://www.helmholtz.de/en/research/current-calls-for-applications/article/helmholtz-foundation-model-initiative/
#ai #data #foundationmodels #supercomputing
A challenge at the interface of AI and geodata!
On 31 July 2024, the @Cyberagentur will give first insights into the new research project "HEGEMON" at an online partnering event.
An opportunity to network and get up to speed for this pioneering research.
More information and registration: https://t1p.de/m9vpi
#Cybersicherheit #FoundationModels #GenerativeAI #KI #Benchmarking #LLM #Multimodalitaet #Geodaten #GIS #Sicherheitstechnologie #Forschung #Innovation
Challenge at the AI - geodata interface! 31.07.24 Online event "HEGEMON" of the @CybAgBund. Registration: https://t1p.de/m9vpi
#Research #Innovation #FoundationModels #GenerativeAI #Benchmarking #LLM #Multimodality #Geodata #GIS
https://nachrichten.idw-online.de/2024/07/02/press-release-cyberagentur-analysing-foundation-models-in-the-security-context