From 33e6771a80ff92d5c63d8226700c6614e05cd4ab Mon Sep 17 00:00:00 2001
From: Ziming Liu
Date: Mon, 19 Aug 2024 23:00:59 -0400
Subject: [PATCH] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index da30966a..e805e13c 100644
--- a/README.md
+++ b/README.md
@@ -2,7 +2,7 @@
 
 # Kolmogorov-Arnold Networks (KANs)
 
-This is the github repo for the paper ["KAN: Kolmogorov-Arnold Networks"](https://arxiv.org/abs/2404.19756) and ["KAN 2.0: Kolmogorov-Arnold Networks Meet Science"]([https://arxiv.org/abs/2404.19756](https://arxiv.org/abs/2408.10205). Find the documentation [here](https://kindxiaoming.github.io/pykan/).
+This is the github repo for the paper ["KAN: Kolmogorov-Arnold Networks"](https://arxiv.org/abs/2404.19756) and ["KAN 2.0: Kolmogorov-Arnold Networks Meet Science"](https://arxiv.org/abs/2408.10205). Find the documentation [here](https://kindxiaoming.github.io/pykan/).
 
 Kolmogorov-Arnold Networks (KANs) are promising alternatives of Multi-Layer Perceptrons (MLPs). KANs have strong mathematical foundations just like MLPs: MLPs are based on the universal approximation theorem, while KANs are based on Kolmogorov-Arnold representation theorem. KANs and MLPs are dual: KANs have activation functions on edges, while MLPs have activation functions on nodes. This simple change makes KANs better (sometimes much better!) than MLPs in terms of both model **accuracy** and **interpretability**. A quick intro of KANs [here](https://kindxiaoming.github.io/pykan/intro.html).
 