In this podcast episode, I have a conversation with Steve Omohundro. Steve was one of the first people to point out the potential dangers of advanced AI systems. In this episode, we discuss topics related to AI, mainly personal AI and AGI (Artificial General Intelligence). Hope you enjoy!
Find it here:
Things mentioned in this podcast episode:
- Sapiens: A Brief History of Humankind
- Possibility Research
- The Bitter Lesson
- Safe-AI Scaffolding
- Software as a service (SaaS)
- The Origin of Species
- The Moral Animal: Why We Are the Way We Are: The New Science of Evolutionary Psychology
- Antifragile: Things That Gain from Disorder
- Nonzero: The Logic of Human Destiny
- The Basic AI Drives
- Future of Life Institute
- Future of Humanity Institute
- Human Compatible
- Trolley problem
- Pyro (probabilistic programming language)
- 00:00:00 – 00:01:40 Introduction
- 00:01:40 – 00:06:26 Steve’s experience with startups
- 00:06:26 – 00:10:49 Personal AI
- 00:10:49 – 00:12:28 Steve’s research company
- 00:12:28 – 00:20:37 Combining symbolism and connectionism in AI
- 00:20:37 – 00:25:22 Can GPT-3’s successors eventually build an accurate world model?
- 00:25:22 – 00:30:27 Contributing to AI or AI safety research as an individual?
- 00:30:27 – 00:34:28 Entrepreneurship opportunities for individuals in AI
- 00:34:28 – 00:45:28 Personal AI capabilities
- 00:45:28 – 00:49:14 The outcome of AGI
- 00:49:14 – 00:56:26 The reasoning behind The Basic AI Drives
- 00:56:26 – 01:00:01 Can we mathematically formalize emotions?
- 01:00:01 – 01:03:42 Can we slow down AI progress?
- 01:03:42 – 01:06:35 Next steps for AGI and personal AI
- 01:06:35 – 01:10:39 Ideal educational background for AI researchers?
- 01:10:39 – 01:13:05 How to approach learning math?
- 01:13:05 – 01:14:21 Parting thoughts
Subscribe to my newsletter to keep abreast of what I'm working on. I send it only when there is something worth sharing, which means 0% spam, 100% interesting content.