#linearalgebra

datatofu
This is called "A Gentle Introduction to the Hessian Matrix"

Hessians are somewhere between #linearalgebra, #calculus, and #rstats, but still a core aspect of #datascience.

All in all, building and deriving things like these is probably only useful when developing a unique solution. For the vast majority of cases, having a general understanding is enough.

... actually, I am pretty sure that there is a #python library for just such an occasion (I have never looked, though, so ymmv)
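For the curious: SymPy is one library that fits the bill, via its built-in `hessian` helper. A minimal sketch (the function and variables below are illustrative, not from the post):

```python
# Computing a Hessian symbolically with SymPy; numdifftools is a
# purely numerical alternative if you only have samples of f.
from sympy import symbols, hessian

x, y = symbols("x y")
f = x**3 + 2*x*y + y**2  # arbitrary example function

H = hessian(f, (x, y))
print(H)  # Matrix([[6*x, 2], [2, 2]])
```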
datatofu
Okay. After that bit of hilarity yesterday, have some stuff on #linearalgebra

Not a formula sheet, but still useful for developing your #datascience intuition
Alex Nelson
Here's a question: let \(M\) be a \(0\times 0\) matrix with entries in the field \(\mathbb{F}\). What is \(\det(M)\)?

#Mathematics #Determinant #LinearAlgebra #Matrix
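For reference, the usual convention is \(\det(M) = 1\): the Leibniz formula sums over the single (empty) permutation of the empty set, and the empty product is 1. NumPy appears to follow the same convention; a quick check:

```python
import numpy as np

# det of the 0x0 matrix: one empty permutation, empty product = 1.
M = np.zeros((0, 0))
print(np.linalg.det(M))  # 1.0 in the NumPy builds I've checked
```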
Giuseppe Bilotta
That first implementation didn't even support the multi-GPU and multi-node features of #GPUSPH (it could only run on a single GPU), but it paved the way for the full version, which took advantage of the whole infrastructure of GPUSPH in multiple ways.

First of all, we didn't have to worry about how to encode the matrix and its sparseness, because we could compute the coefficients on the fly and operate with the same neighbors-list traversal logic used in the rest of the code; this allowed us to minimize memory use and increase code reuse.

Secondly, we gained control over the accuracy of intermediate operations, allowing us to use compensated sums wherever needed.

Thirdly, we could leverage the multi-GPU and multi-node capabilities already present in GPUSPH to distribute computations across all available devices.

And last but not least, we actually found ways to improve the classic #CG and #BiCGSTAB linear solving algorithms to achieve excellent accuracy and convergence even without preconditioners, while making the algorithms themselves more parallel-friendly:

https://doi.org/10.1016/j.jcp.2022.111413

4/n

#LinearAlgebra #NumericalAnalysis
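The "never store the matrix" idea is the classic matrix-free formulation: the solver only needs an operator that applies A to a vector. A minimal sketch of matrix-free CG in that spirit (this is the textbook algorithm, not GPUSPH's code; `apply_A` stands in for their on-the-fly, neighbor-list-based coefficient evaluation):

```python
import numpy as np

def cg_matrix_free(apply_A, b, tol=1e-10, max_iter=1000):
    """Conjugate gradient with A given only as an operator
    apply_A(x) -> A @ x, so the sparse matrix is never assembled."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy usage: a diagonal SPD operator standing in for on-the-fly coefficients.
d = np.array([4.0, 5.0, 6.0])
x = cg_matrix_free(lambda v: d * v, np.array([8.0, 10.0, 18.0]))
print(x)  # [2. 2. 3.]
```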
datatofu
#datascience cheatsheets for #python, #probability, #linearalgebra, #calculus, and #scipy

(Not necessarily in that order)
katch wreck
"Although the term 'matrix' was introduced into mathematical literature by James Joseph Sylvester in 1850, the credit for founding the theory of matrices must be given to Arthur Cayley, since he published the first expository articles on the subject. ... Cayley's introductory paper in matrix theory was written in French and published in a German periodical [in 1855]"

https://mathshistory.st-andrews.ac.uk/Biographies/Cayley/

#history #math #mathHistory #matrix #linearAlgebra #algebra
katch wreck
"Cardan, in Ars Magna (1545), gives a rule for solving a system of two linear equations which he calls regula de modo and which [7] calls mother of rules! This rule gives what essentially is Cramer's rule for solving a 2 × 2 system, although Cardan does not make the final step. Cardan therefore does not reach the definition of a determinant but, with the advantage of hindsight, we can see that his method does lead to the definition."

https://mathshistory.st-andrews.ac.uk/HistTopics/Matrices_and_determinants/

#matrix #linearAlgebra #polynomial
Non-Euclidean Dreamer
Meanwhile I found a bug where an orthogonal basis of a subspace is not normalized. Somehow rotating it then leads to a non-orthogonal basis.

But it looks like that bug was not alone...

#debugging #linearalgebra #creativecoding
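A normalization pass like the one presumably missing here is plain Gram-Schmidt; a sketch (assuming the basis vectors are linearly independent):

```python
import numpy as np

def orthonormalize(vectors):
    """Gram-Schmidt: turn independent vectors into an orthonormal
    basis, so downstream rotations preserve orthogonality."""
    basis = []
    for v in vectors:
        w = v - sum((v @ b) * b for b in basis)  # remove components along basis
        basis.append(w / np.linalg.norm(w))      # the often-forgotten step
    return np.array(basis)
```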
Non-Euclidean Dreamer
The slice thickens.

But I feel there's something wrong, either with the positioning or with the zBuffer...

#debugging #cutandproject #linearalgebra
Dan Drake 🦆
Was paying for a coffee, and the credit card terminal was from eigenpayments.com.

I was hoping that the eigenvalue associated with my eigenpayment would be less than 1 (perhaps even \(\leq 0\)!), but turns out it was 1. Which of course is better than being greater than one.

I'd love to see what my credit card would do with a complex eigenvalue. 🤔🙂

#linearalgebra
Matthew Abbott
I told myself to pick up a relaxing hobby for my evenings 🥲

#chinese #math #linearAlgebra #learningIsFun #language
Patrick Honner
What are some nice ways to think about the fact that if the columns of a square matrix are pairwise orthogonal unit vectors, then the rows of the matrix must also be pairwise orthogonal unit vectors?

#Math #LinearAlgebra
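One standard argument, for reference: orthonormal columns mean \(Q^\top Q = I\), so \(Q^\top\) is a left inverse of \(Q\); for a square matrix a left inverse is also a right inverse, hence \(QQ^\top = I\), and the entries of \(QQ^\top\) are exactly the pairwise dot products of the rows. A quick numerical check (a sketch, using QR only as a convenient way to manufacture orthonormal columns):

```python
import numpy as np

# Build a matrix with orthonormal columns via QR, then test the rows.
Q, _ = np.linalg.qr(np.random.rand(4, 4))
print(np.allclose(Q.T @ Q, np.eye(4)))  # columns orthonormal
print(np.allclose(Q @ Q.T, np.eye(4)))  # rows orthonormal too
```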
S.T. Veje
Also brushing up on my linear algebra. I could never remember which is which of surjective and injective, but I think I've finally found a mnemonic that works for me: surjective comes from the French "sur", meaning on top of or above, so surjective means you can cover one set with the other; and injective as in the Danish "en til en" (one to one ... the pronunciation of "en" and "in" is not 100% identical, but close enough).

See, learning languages is useful!

#LinearAlgebra #Math
ct
https://oden.utexas.edu/news-and-events/news/BLIS-embraced-by-NVIDIA-RISC-V/

#blis #riscv #risc_v #hpc #supercomputing #blas #linearalgebra #statisticalcomputing #scientificcomputing
Methylzero
If you had to do a lot of dense linear algebra (QR, eigenvalues, SVD, linear least squares, etc.) on modern AMD *CPUs*, which library would you choose for maximum performance? #HPC #BLAS #LAPACK #linearalgebra #NumericalSimulation #amd
Patrick Honner
I noticed something interesting (and perhaps obvious!) while playing around with linear equations this morning.

When trying to find the equation of a circle through three points using a system of equations, the strategy is to plug each point into

\[x^2+y^2+Ax+By+C=0\]

and to solve the resulting 3x3 system for \( A, B, \) and \(C\). Suppose two points on the circle are \( (2,3) \) and \( (4,4) \). When you plug these in, you get the two equations

\[ 2A + 3B + C = -13\]
\[ 4A + 4B + C = -32\]

Combining these equations (subtracting the first from the second) gives you

\[2A + B = -19\]

Notice that the slope of the line between the two points, \( \frac{1}{2} \), is encoded in the coefficients of this new equation: it's the coefficient of \(B\) divided by the coefficient of \(A\).

This is one way to see why three collinear points don't determine a circle. When you plug in all three points to get the system of equations, the fact that the slopes between pairs of points are all the same produces a dependence on the left-hand side of the system that isn't consistent with the right-hand side. For example, if the third point is \( (10,7) \) (which is collinear with the points \( (2,3) \) and \( (4,4) \)), the 3x3 system can be reduced to the system

\[2A + B = -19\]
\[6A + 3B = -117 \]

which is inconsistent.

I'm sure this can probably be understood in a cool way by thinking about duality, or the relationship between affine and linear functions, but I found this simple approach satisfying!

#Math #LinearAlgebra #Mathematics
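The inconsistency is easy to see numerically as well: for collinear points the coefficient matrix loses rank while the augmented matrix does not. A sketch (`circle_system` is just a helper name for this illustration):

```python
import numpy as np

def circle_system(points):
    """Set up the 3x3 system for A, B, C in x^2 + y^2 + Ax + By + C = 0."""
    M = np.array([[x, y, 1.0] for x, y in points])
    rhs = np.array([-(x**2 + y**2) for x, y in points], dtype=float)
    return M, rhs

M, rhs = circle_system([(2, 3), (4, 4), (10, 7)])  # three collinear points
print(np.linalg.matrix_rank(M))                           # 2: dependent LHS
print(np.linalg.matrix_rank(np.column_stack([M, rhs])))   # 3: inconsistent system
```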
Eric Maugendre
Data returned by an observation is typically represented as a vector in machine learning.

A neural network can be seen as a large collection of linear models. We may represent the inputs and outputs of each layer as vectors, matrices, and tensors (which are like higher-dimensional matrices).

#algebra #linearAlgebra #vectors #matrices #determinants #singularity #ML #DataScience #math #maths #mathematics #mathStodon #data #dataDon #machineLearning #DeepLearning #neuralNetworks
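As a concrete illustration of "each layer is (almost) a linear model", here is a minimal sketch of one dense layer in plain NumPy, not tied to any particular framework:

```python
import numpy as np

# One dense layer: observation as a vector, weights as a matrix,
# output as a vector; stacking such layers gives the "collection".
rng = np.random.default_rng(0)
x = rng.standard_normal(4)        # input vector (one observation)
W = rng.standard_normal((3, 4))   # layer weights
b = rng.standard_normal(3)        # bias
y = np.tanh(W @ x + b)            # affine map plus nonlinearity
print(y.shape)  # (3,)
```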
Patrick Honner
I especially liked this sneaky way of introducing change of basis early on!

#Math #MathEd #LinearAlgebra #LinAlg
Patrick Honner
This was the start to one of my favorite linear algebra lessons last year. It was just the right mix of confusing and familiar, and it really set the class up to talk about the structure of linear combinations and vector spaces!

#Math #MathEd #LinearAlgebra #LinAlg
Pustam | पुस्तम | পুস্তম🇳🇵
Terence Tao's lecture notes on Linear Algebra:
https://terrytao.wordpress.com/wp-content/uploads/2016/12/linear-algebra-notes.pdf

#LinearAlgebra
@tao