- The Many Ways that Digital Minds Can Know by Ryan Moulton. This post explains the contrasting views of LLMs: detractors liken them to blurry JPEGs and stochastic parrots, while promoters see sparks of AGI and the ability to learn the generating function behind subjects like multivariable calculus.
- Contra The xAI Alignment Plan by Scott Siskind at Astral Codex Ten. Even though programming AI with ethical behaviour has its issues (what counts as ethical varies across people and times), aiming to make AI “maximally curious” is not the solution. Our curiosity about fruit flies has not benefitted the fruit flies.
- Mustafa Suleyman: My New Turing Test Would See if AI Can Make $1 Million by Mustafa Suleyman at MIT Technology Review. The Modern Turing Test would evaluate an AI’s ability to achieve real-world impact rather than just replicating human-like conversation. The proposed test challenges the AI to make $1 million on a retail web platform with a limited investment. Such a test would require the AI to perform complex tasks, e.g. product research, negotiation and marketing, with minimal human oversight.
- Transformers: The Google Scientists who Pioneered an AI Revolution by Madhumita Murgia at The Financial Times. A great read about the beginning of transformer models and their inventors.
- Patterns for Building LLM-based Systems & Products by Eugene Yan. A guide on integrating large language models into systems and products based on academic research, industry resources, and practitioner know-how.
- The Allure of Chinese Apps: Factors Behind Their Rise in the US by Gideon Ng. US consumers continue to choose popular apps of Chinese origin despite privacy concerns.
August Reading List
August 2, 2023