Because there is so much information online — such an insane hypertrophy of data, so wildly in excess of what any of us could consume in a lifetime — it’s very easy to forget that vast troves of our accumulated knowledge have yet to be digitized, and are accessible only in printed form on some shelf buried deep in the library stacks.
KAN is a breath of fresh air in the research landscape: finally, no more chasing LLMs, and some fundamental research again. It won't necessarily become as widespread, usable, and general as transformers, but someone is questioning the building blocks of deep learning, and I am happy about it. The PDE experiment looks interesting; can it also model ODEs/SDEs? If yes, what is the role of KANs in diffusion models/flow matching? Given the pace of arXiv, we won't have to wait long to find out.
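For context on what "questioning the building blocks" means here: a KAN layer replaces the usual fixed activation plus linear weights with a learnable univariate function on every edge, whose outputs are summed at each node. Below is a minimal sketch of that structure, assuming a simple Gaussian radial-basis parameterization for the edge functions rather than the B-spline implementation in the KAN paper; all class and parameter names are illustrative, not from the paper's code.

```python
import torch
import torch.nn as nn

class KANLayerSketch(nn.Module):
    """Toy KAN-style layer: each edge (input i -> output j) carries its own
    learnable univariate function, parameterized here by a few Gaussian RBFs
    (an assumption for brevity; the paper uses B-splines plus a base function)."""
    def __init__(self, in_dim, out_dim, num_basis=8, x_range=(-2.0, 2.0)):
        super().__init__()
        centers = torch.linspace(x_range[0], x_range[1], num_basis)
        self.register_buffer("centers", centers)  # shape: (num_basis,)
        # One coefficient vector per edge: (out_dim, in_dim, num_basis)
        self.coeffs = nn.Parameter(torch.randn(out_dim, in_dim, num_basis) * 0.1)

    def forward(self, x):  # x: (batch, in_dim)
        # Evaluate the RBF basis at every input coordinate.
        phi = torch.exp(-((x.unsqueeze(-1) - self.centers) ** 2))  # (batch, in_dim, num_basis)
        # Each edge applies its own univariate function; results are summed over
        # the input dimension, giving the "sum of univariate functions" structure.
        edge_out = torch.einsum("bik,oik->boi", phi, self.coeffs)  # (batch, out_dim, in_dim)
        return edge_out.sum(dim=-1)  # (batch, out_dim)

# Usage sketch: stack two layers and fit a toy 2-D target like sin(x) + y**2.
model = nn.Sequential(KANLayerSketch(2, 5), KANLayerSketch(5, 1))
```

The contrast with an MLP is that the nonlinearity lives on the edges and is itself trained, rather than being a fixed function applied after a dense linear map.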