1. From GPT-4 to AGI: Counting the OOMs by Leopold Aschenbrenner at Situational Awareness. A very bullish case for achieving AGI within the coming years, along with the key drivers of improvement at play.
  2. LLM Generality is a Timeline Crux at LessWrong. A response to Aschenbrenner’s article.
  3. You DON’T Need More Protein in Energy Deficit by Menno Henselmans. Menno presents data contradicting the popular belief that you should raise your protein intake during a cut.
  4. Detecting Hallucinations In Large Language Models Using Semantic Entropy by Farquhar et al. (2024). The researchers use semantic entropy to detect confabulations, measuring uncertainty over the meaning of responses rather than over the exact wording, and find that this improves detection (see the sketch after this list).
  5. The New Wave of Concierge Medicine by Nikhil at Out of Pocket. Concierge medical services are on the rise, banking on increased consumer health awareness and overburdened national health services. While they provide improved experiences and offerings, they inevitably create an ethically dubious rich-versus-poor healthcare system.
  6. AI’s $600B Question by David Cahn at Sequoia. The costs of AI (chips, datacenters, energy, etc.) are very real, but while revenue is growing, we have yet to see the kind of value creation (and revenue) that justifies the enormous valuations.
  7. Association Between Dietary Protein Intake And Risk Of Chronic Kidney Disease: A Systematic Review And Meta-Analysis by Cheng et al. (2024). This meta-analysis indicates that higher intake of total, plant, or animal protein (especially fish and seafood) is associated with a lower risk of chronic kidney disease. (n=150k)
  8. For AI Giants, Smaller Is Sometimes Better by Tom Dotan and Deepa Seetharaman at The Wall Street Journal. Small, specialized AI models can outperform larger models at specific tasks. Not only do they cost far less to train and operate, but their tailored expertise makes them more efficient and effective for practical, industry-specific uses.
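To make item 4 concrete, here is a minimal Python sketch of the semantic entropy idea, not the authors’ code: sample several answers to the same prompt, cluster them by bidirectional entailment, and compute the entropy of the cluster distribution. The paper uses an NLI model for entailment; the `entails` stand-in below is a toy string comparison, purely so the sketch runs end to end.

```python
import math

def entails(a: str, b: str) -> bool:
    # Placeholder: a real implementation would query an NLI model,
    # as in the paper. Here, case/whitespace-insensitive equality
    # stands in for entailment so the example is self-contained.
    return a.strip().lower() == b.strip().lower()

def semantic_entropy(answers: list[str]) -> float:
    """Group sampled answers into meaning clusters via bidirectional
    entailment, then compute entropy over cluster probabilities."""
    clusters: list[list[str]] = []
    for ans in answers:
        for cluster in clusters:
            rep = cluster[0]
            # Two answers share a meaning iff each entails the other.
            if entails(ans, rep) and entails(rep, ans):
                cluster.append(ans)
                break
        else:
            clusters.append([ans])
    # Estimate each meaning's probability by its share of samples.
    probs = [len(c) / len(answers) for c in clusters]
    return -sum(p * math.log(p) for p in probs)

# Sample the same question several times; high entropy over meanings
# (rather than over exact strings) suggests the model is confabulating.
samples = ["Paris", "paris", "Lyon", "Paris", "Marseille"]
print(semantic_entropy(samples))  # ~0.95 nats: the answers disagree in meaning
```

Note that "Paris" and "paris" land in one cluster, so rephrasings of the same answer don’t inflate the uncertainty estimate; that is the point of measuring entropy over meanings rather than tokens.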