Lots of interesting links
I am a hoarder. I keep a lot of tabs open in my phone's browser, and I think it's starting to make the browser unstable.
Thus, I will begin the process of offloading my most interesting Chrome tabs into blog posts.
I cannot be bothered to format the links, but I assure you many of them are interesting.
Also, a fun fact: these are in approximately chronological order.
https://yoshuabengio.org/2022/03/05/generative-flow-networks/
https://en.m.wikipedia.org/wiki/Temporal_difference_learning
https://scottaaronson.blog/?p=6244
https://sidbala.com/h-264-is-magic/
https://desystemize.substack.com/p/desystemize-9?s=r
https://en.m.wikipedia.org/wiki/List_of_philosophical_problems
https://davidlowryduda.com/quanta-langlands-viz/
https://transformer-circuits.pub/2022/toy_model/index.html
https://en.m.wikipedia.org/wiki/2022_University_of_Idaho_killings (I went to school here)
https://www.logicmatters.net/2020/11/16/philosophy-of-mathematics-a-reading-list/
https://www.ageofinvention.xyz/p/age-of-invention-why-wasnt-the-steam-76c
https://ics.uci.edu/~eppstein/junkyard/all.html
https://en.m.wikipedia.org/wiki/Unruh_effect
https://arxiv.org/abs/2102.06824 (Generalization of Alcubierre drive)
https://iopscience.iop.org/article/10.1088/1361-6382/abe692
https://uss-la-ca135.org/60/1960Judkins-Knott.html
https://nautil.us/the-kekul-problem-236574/
https://arxiv.org/abs/2211.14347 (The smooth output assumption, and why deep networks are better than wide ones)
https://arxiv.org/abs/2111.14726 (Do Invariances in Deep Neural Networks Align with Human Perception?)
https://www.hup.harvard.edu/books/9780674877481 (Thematic Origins of Scientific Thought)
https://www.nps.gov/articles/william_clark_cartographer.htm
https://www.newyorker.com/magazine/2020/02/10/can-we-have-prosperity-without-growth
https://scottlocklin.wordpress.com/2023/02/13/technological-and-scientific-blind-spots/
https://nonint.com/2022/04/25/tortoise-architectural-design-doc/
https://152334h.github.io/blog/tortoise-fine-tuning/ (I find this article fascinating)
https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/ (My friend keeps telling me to read this and the visualizations keep impressing me, but I haven’t read it yet)
https://www.zeta-alpha.com/post/must-read-the-100-most-cited-ai-papers-in-2022
https://sites.google.com/view/stablediffusion-with-brain/
https://arxiv.org/abs/2210.17323 (GPTQ quantization for LLMs)
https://en.m.wikipedia.org/wiki/Baumol_effect
https://w1mx.mit.edu/flea-at-mit/ (For Boston folks)
https://kaiokendev.github.io/til (Mysterious 4channer who beat Meta to some important innovations with the LLaMA LLM. Vanished shortly after.)
https://www.trentonbricken.com/Tail-Free-Sampling/
https://www.reddit.com/r/LocalLLaMA/comments/14eoh4f/rumor_potential_gpt4_architecture_description/ (Nostalgia about the first time people started thinking about MoE LLMs. This rumor is still unconfirmed.)
https://arxiv.org/abs/2104.09864 (Rotary Position Embedding or RoPE for transformers)
https://arxiv.org/abs/2302.13971 (Original LLaMA paper)
https://huggingface.co/papers/2306.02707 (Orca model, i.e. finetuning on very large amounts of GPT-4 output)
https://en.m.wikipedia.org/wiki/Wikipedia:Unusual_articles
https://archive.org/stream/KingHammurabiOfBabylon/KingHammurabiOfBabylon_djvu.txt
https://en.m.wikipedia.org/wiki/Ancient_Rome
https://en.m.wikipedia.org/wiki/Dirty_bomb
https://en.m.wikipedia.org/wiki/Lia_radiological_accident
https://thephilosophersmeme.com/2021/07/21/we-can-have-retrieval-inference-synthesis
https://www.evanmiller.org/attention-is-off-by-one.html (see the quick sketch after this list)
https://en.m.wikipedia.org/wiki/Kalman_filter
https://en.m.wikipedia.org/wiki/Two_Dogmas_of_Empiricism
https://timdettmers.com/2022/08/17/llm-int8-and-emergent-features/
https://www.feynmanlectures.caltech.edu/I_04.html#Ch4-S2
https://en.m.wikipedia.org/wiki/Magnetism
https://en.m.wikipedia.org/wiki/Geometric_mean_theorem
https://www.bu.edu/astronomy/community/open-night-observatory/faq-and-contact-form/ (Another one for Boston folks)
https://en.m.wikipedia.org/wiki/Gale%E2%80%93Shapley_algorithm
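Since I teased it above: the "attention is off by one" post argues that standard softmax forces every attention head to distribute exactly 1.0 of weight somewhere, so a head can never say "nothing to see here". The proposed fix is to add 1 to the softmax denominator (equivalently, an implicit extra logit fixed at 0), letting all the weights sink toward zero. A minimal numpy sketch of the idea as I understand it; the function names are mine, not from the post:

```python
import numpy as np

def softmax(x):
    # Standard softmax: the weights always sum to exactly 1,
    # so an attention head is forced to attend to *something*.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def softmax_one(x):
    # The post's proposed variant: exp(x_i) / (1 + sum_j exp(x_j)).
    # The "+1" acts like an implicit extra logit fixed at 0; if every
    # real logit is very negative, all weights shrink toward 0 and
    # the head effectively abstains.
    m = max(np.max(x), 0.0)  # include the implicit 0 logit in the max for stability
    e = np.exp(x - m)
    return e / (np.exp(-m) + e.sum())

logits = np.array([-8.0, -9.0, -10.0])  # a head that "wants" to say nothing
print(softmax(logits))      # still sums to 1: ~[0.67, 0.24, 0.09]
print(softmax_one(logits))  # all weights near 0: the head can abstain
```

If I remember the post right, the payoff he claims is fewer extreme outlier activations in transformers, which ties back nicely to the LLM.int8 link above about emergent outlier features and quantization.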