#svm

ANN vs. SVM: When to Choose Artificial Neural Networks Over Support Vector Machines
ANNs excel at modelling complex relationships but can be slow to train; SVMs are faster in high-dimensional settings and less prone to overfitting. Which is better depends on your task and data. #ANN #SVM #MachineLearning #ArtificialIntelligence #DeepLearning #Comparison
tech-champion.com/artificial-i
Comparing ANN and...
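A minimal sketch of that trade-off, assuming scikit-learn's SVC and MLPClassifier as stand-ins for "SVM" and "ANN" (the synthetic dataset, sizes and hyperparameters below are illustrative assumptions, not taken from the linked article):

```python
# Contrast an SVM and a small neural network on the same synthetic,
# high-dimensional data; hyperparameters are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=100,
                           n_informative=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

svm = SVC(kernel="rbf").fit(X_train, y_train)              # often strong in high dimensions
ann = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                    random_state=0).fit(X_train, y_train)  # flexible, but slower to train

print("SVM test accuracy:", svm.score(X_test, y_test))
print("ANN test accuracy:", ann.score(X_test, y_test))
```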

All Bundesliga transfers 2024/25

by Nina Potzel

Who's coming? Who's leaving? Who's extending their contract? Here we continuously update the transfers and contract extensions of the teams in the Women's Bundesliga. As of: 16 September 2024, 13:52.

FC Bayern München

Arrivals: Magou Doucouré (23, defender, from OSC Lille, until 2025), Julia Zigiotti Olme (26, defensive midfield, from Brighton & Hove Albion, until 2026), Ena Mahmutovic (20, goalkeeper, from MSV Duisburg, until 2027), Lena Oberdorf […]

#BundesligaFrauen #DieLiga #FCB #FCC #FCN #FußballDerFrauen #HSV #KOE #LEV #MSV #PDM #RaBa #SCF #SGE #SGS #SpielerInnen #SVM #SVW #Transfers #TSG #TurbinePotsdam #WOB

https://bolztribuene.de/2024/05/07/bundesliga-transfers-2024-25/

Continued thread

Addendum 9

Transformers as Support Vector Machines
arxiv.org/abs/2308.16898
Discussion: news.ycombinator.com/item?id=3

* Support vector machine: en.wikipedia.org/wiki/Support_
* transformer as a "different kind of SVM": an SVM that separates the "good" tokens within each input sequence from the "bad" tokens
* the SVM serves as a good-token selector (a toy sketch of this reading follows below)
* inherently different from a traditional SVM, which assigns a 0/1 label to the input as a whole
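A toy numpy sketch of that "token selector" reading, assuming random token embeddings X and key/query matrices K, Q (all names and shapes here are hypothetical, not from the paper's code):

```python
import numpy as np

# How softmax(X Q K^T X^T) acts as a per-sequence "good-token selector":
# as the scores are scaled up (large-norm W = K Q^T, the regime the SVM
# analysis studies), each row's softmax concentrates on a single token.
rng = np.random.default_rng(0)
n, d = 5, 4                      # n tokens, embedding dimension d (hypothetical)
X = rng.normal(size=(n, d))      # input token sequence
Q = rng.normal(size=(d, d))      # trainable query parameters
K = rng.normal(size=(d, d))      # trainable key parameters

scores = X @ Q @ K.T @ X.T       # pairwise token similarities, shape (n, n)

def softmax(s, axis=-1):
    s = s - s.max(axis=axis, keepdims=True)
    e = np.exp(s)
    return e / e.sum(axis=axis, keepdims=True)

attn_sharp = softmax(10.0 * scores)   # sharpened attention map
print("selected token per row:", attn_sharp.argmax(axis=1))
print("row max weights:", attn_sharp.max(axis=1).round(3))
```

Unlike a classical SVM, nothing here assigns a 0/1 label to the whole sequence; the separation happens between tokens inside each sequence.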

arXiv.org: Transformers as Support Vector Machines
Since its inception in "Attention Is All You Need", transformer architecture has led to revolutionary advancements in NLP. The attention layer within the transformer admits a sequence of input tokens $X$ and makes them interact through pairwise similarities computed as softmax$(XQK^\top X^\top)$, where $(K,Q)$ are the trainable key-query parameters. In this work, we establish a formal equivalence between the optimization geometry of self-attention and a hard-margin SVM problem that separates optimal input tokens from non-optimal tokens using linear constraints on the outer-products of token pairs. This formalism allows us to characterize the implicit bias of 1-layer transformers optimized with gradient descent: (1) Optimizing the attention layer with vanishing regularization, parameterized by $(K,Q)$, converges in direction to an SVM solution minimizing the nuclear norm of the combined parameter $W=KQ^\top$. Instead, directly parameterizing by $W$ minimizes a Frobenius norm objective. We characterize this convergence, highlighting that it can occur toward locally-optimal directions rather than global ones. (2) Complementing this, we prove the local/global directional convergence of gradient descent under suitable geometric conditions. Importantly, we show that over-parameterization catalyzes global convergence by ensuring the feasibility of the SVM problem and by guaranteeing a benign optimization landscape devoid of stationary points. (3) While our theory applies primarily to linear prediction heads, we propose a more general SVM equivalence that predicts the implicit bias with nonlinear heads. Our findings are applicable to arbitrary datasets and their validity is verified via experiments. We also introduce several open problems and research directions. We believe these findings inspire the interpretation of transformers as a hierarchy of SVMs that separates and selects optimal tokens.
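A schematic restatement of the abstract's two implicit-bias claims in display form (this is a paraphrase of the abstract, not the paper's precise theorem; the exact SVM constraint set is only described informally here):

\[
\text{attention scores} = \operatorname{softmax}\!\left(XQK^\top X^\top\right), \qquad W := KQ^\top ,
\]
\[
\text{train } (K,Q)\ \Rightarrow\ \text{direction of } \arg\min_{W}\ \lVert W\rVert_{*}, \qquad
\text{train } W\ \Rightarrow\ \text{direction of } \arg\min_{W}\ \lVert W\rVert_{F},
\]
in both cases subject to margin constraints that separate each input sequence's optimal token from its non-optimal tokens.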