Daily Productive Sharing 1162 - Thoughts on AGI
One helpful tip per day :)
A month ago, OpenAI released its o3 models, and Will Bryk shared some forward-looking thoughts at the time. Reality has since moved even faster than those projections:
- o3-level models excel at optimizing any task for which a reward function can be defined, and tasks like math and programming are relatively easy to design reward functions for (see the reward-function sketch after this list).
- For those fully embracing large language models (LLMs), by the end of 2025 programming may resemble orchestrating a group of agents to execute a series of smaller tasks (see the orchestration sketch after this list).
- Engineers who design architectures or write code carry extensive organizational context that o4 will lack. o4 will, however, make engineers who do have that context 10x more productive.
- Individual companies might need fewer software engineers, since leaner teams can achieve the same output. Yet global demand for software engineers could still rise, because the world can absolutely benefit from 10x more high-quality software.
- Programming in English opens the field to non-technical users. However, the best builders will still be those adept at switching between different levels of abstraction.
- Since software engineering is fundamentally about understanding and addressing organizational needs through code, fully automating it would transform every organization.
- It’s unclear how much of OpenAI’s progress on its o-level models rests on unique trade secrets, but the pace of improvement suggests algorithmic breakthroughs (easier to replicate) rather than unique data combinations (harder to replicate).
- Regardless, no model moat will last more than a year: labs trade researchers like baseball cards, and, more importantly, researchers across labs socialize and share ideas.
- o-level models make inference more critical than training. Highly optimized inference chips are easier to manufacture than training chips, so Nvidia’s moat here isn’t very deep.
- The only bottleneck might come once coding speed hits its limit and labs still have a long list of experiments to run, leaving them once again constrained by compute.
- Once AI begins generating new scientific theories, the bottleneck for progress will shift to testing and experimentation in the physical world.
- The biggest bottleneck for AI progress will ultimately be humanity itself: regulations, terrorism, and societal breakdown.
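
To make the reward-function point concrete, here is a minimal sketch of what a verifiable reward for a coding task could look like: the candidate code is executed against unit tests, and the reward is the fraction of tests passed. The `solve` entry point, the test harness, and the scoring rule are illustrative assumptions, not OpenAI's actual training setup.

```python
# Hypothetical sketch: a verifiable reward for a code-generation task.
# Reward = fraction of unit tests the candidate solution passes.
# A real harness would sandbox execution; exec() on untrusted code is unsafe.

def coding_reward(candidate_source: str, test_cases: list[tuple]) -> float:
    namespace: dict = {}
    try:
        exec(candidate_source, namespace)   # run the model's code
        solve = namespace["solve"]          # assumed entry point
    except Exception:
        return 0.0                          # unrunnable code scores zero

    passed = 0
    for args, expected in test_cases:
        try:
            if solve(*args) == expected:
                passed += 1
        except Exception:
            pass                            # runtime errors count as failures
    return passed / len(test_cases)

# Example: score a trivial "add two numbers" task.
tests = [((2, 3), 5), ((0, 0), 0), ((-1, 1), 0)]
print(coding_reward("def solve(a, b):\n    return a + b", tests))  # -> 1.0
```

This is why math and programming are tractable targets: the reward signal is cheap, automatic, and unambiguous, unlike tasks such as "write a great essay."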
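
And a toy sketch of the "orchestrating a group of agents" workflow: a feature is split into smaller tasks, fanned out to worker agents in parallel, then checked by a reviewer. `call_agent` is a hypothetical stand-in for whatever LLM or agent API you actually use; the worker/reviewer split is illustrative.

```python
# Hypothetical sketch: one engineer orchestrating several coding agents.
from concurrent.futures import ThreadPoolExecutor

def call_agent(role: str, task: str) -> str:
    """Stand-in for a real LLM/agent API call (hypothetical)."""
    return f"[{role}] done: {task}"

def orchestrate(feature: str, subtasks: list[str]) -> list[str]:
    # Fan the smaller tasks out to worker agents in parallel...
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda t: call_agent("worker", t), subtasks))
    # ...then have a reviewer agent sanity-check the combined result.
    results.append(call_agent("reviewer", f"review all changes for: {feature}"))
    return results

print(orchestrate("add CSV export", [
    "write the serializer",
    "add the API endpoint",
    "cover it with tests",
]))
```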
If you enjoy today's sharing, why not subscribe?
Need a superb CV? Please try our CV Consultation.