Pre-AGI: Witnessing the Advent of AI Beings by 2040

The release of GPT-4 caused quite a stir. Compared with GPT-3.5, the new generation of GPT can not only describe images and analyze charts, but also score 700 on the SAT math section, rank in the top 1% on the Biology Olympiad, and place in the top 10% on the bar exam. GPT is evolving ever more astonishing capabilities at a pace humans cannot match, and this has also fueled anxiety about job loss, data security, and how industries will develop in the GPT era. Let's take a look at GPT and AGI.

What is AGI?

AGI stands for artificial general intelligence. The term is often traced back to a 2003 paper by the Swedish philosopher Nick Bostrom, "Ethical Issues in Advanced Artificial Intelligence", in which Bostrom discussed the ethics of superintelligence and described the idea of an AI system that can think, learn, and carry out a wide variety of tasks the way a human does. Other scholars define AGI as an artificial intelligence capable of performing any task a human can.

OpenAI: our mission is to ensure that artificial general intelligence, AI systems that are generally smarter than humans, benefits all of humanity

AGI can fairly be called the ultimate goal of every AI company, and OpenAI is no exception. The following is excerpted from OpenAI's post on its plans for AGI, "Planning for AGI and beyond":

Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity.

If AGI is successfully created, this technology could help us elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility.

AGI has the potential to give everyone incredible new capabilities; we can imagine a world where all of us have access to help with almost any cognitive task, providing a great force multiplier for human ingenuity and creativity.

On the other hand, AGI would also come with serious risk of misuse, drastic accidents, and societal disruption. Because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever; instead, society and the developers of AGI have to figure out how to get it right.

We seem to have been given lots of gifts relative to what we expected earlier: for example, it seems like creating AGI will require huge amounts of compute and thus the world will know who is working on it, it seems like the original conception of hyper-evolved RL agents competing with each other and evolving intelligence in a way we can’t really observe is less likely than it originally seemed, almost no one predicted we’d make this much progress on pre-trained language models that can learn from the collective preferences and output of humanity, etc.

AGI could happen soon or far in the future; the takeoff speed from the initial AGI to more powerful successor systems could be slow or fast. Many of us think the safest quadrant in this two-by-two matrix is short timelines and slow takeoff speeds; shorter timelines seem more amenable to coordination and more likely to lead to a slower takeoff due to less of a compute overhang, and a slower takeoff gives us more time to figure out empirically how to solve the safety problem and how to adapt.

Although we cannot predict exactly what will happen, and of course our current progress could hit a wall, we can articulate the principles we care about most:

1.We want AGI to empower humanity to maximally flourish in the universe. We don’t expect the future to be an unqualified utopia, but we want to maximize the good and minimize the bad, and for AGI to be an amplifier of humanity.

2.We want the benefits of, access to, and governance of AGI to be widely and fairly shared.

3.We want to successfully navigate massive risks. In confronting these risks, we acknowledge that what seems right in theory often plays out more strangely than expected in practice. We believe we have to continuously learn and adapt by deploying less powerful versions of the technology in order to minimize “one shot to get it right” scenarios.

The short term

There are several things we think are important to do now to prepare for AGI.

First, as we create successively more powerful systems, we want to deploy them and gain experience with operating them in the real world. We believe this is the best way to carefully steward AGI into existence—a gradual transition to a world with AGI is better than a sudden one. We expect powerful AI to make the rate of progress in the world much faster, and we think it’s better to adjust to this incrementally.

A gradual transition gives people, policymakers, and institutions time to understand what’s happening, personally experience the benefits and downsides of these systems, adapt our economy, and to put regulation in place. It also allows for society and AI to co-evolve, and for people collectively to figure out what they want while the stakes are relatively low.

We currently believe the best way to successfully navigate AI deployment challenges is with a tight feedback loop of rapid learning and careful iteration. Society will face major questions about what AI systems are allowed to do, how to combat bias, how to deal with job displacement, and more. The optimal decisions will depend on the path the technology takes, and like any new field, most expert predictions have been wrong so far. This makes planning in a vacuum very difficult.

For example, when we first started OpenAI, we didn’t expect scaling to be as important as it has turned out to be. When we realized it was going to be critical, we also realized our original structure wasn’t going to work—we simply wouldn’t be able to raise enough money to accomplish our mission as a nonprofit—and so we came up with a new structure.

As another example, we now believe we were wrong in our original thinking about openness, and have pivoted from thinking we should release everything (though we open source some things, and expect to open source more exciting things in the future!) to thinking that we should figure out how to safely share access to and benefits of the systems. We still believe the benefits of society understanding what is happening are huge and that enabling such understanding is the best way to make sure that what gets built is what society collectively wants (obviously there’s a lot of nuance and conflict here).

Generally speaking, we think more usage of AI in the world will lead to good, and want to promote it (by putting models in our API, open-sourcing them, etc.). We believe that democratized access will also lead to more and better research, decentralized power, more benefits, and a broader set of people contributing new ideas.

As our systems get closer to AGI, we are becoming increasingly cautious with the creation and deployment of our models. Our decisions will require much more caution than society usually applies to new technologies, and more caution than many users would like. Some people in the AI field think the risks of AGI (and successor systems) are fictitious; we would be delighted if they turn out to be right, but we are going to operate as if these risks are existential.

At some point, the balance between the upsides and downsides of deployments (such as empowering malicious actors, creating social and economic disruptions, and accelerating an unsafe race) could shift, in which case we would significantly change our plans around continuous deployment.

As our systems get closer to AGI, we are becoming increasingly cautious with the creation and deployment of our models.

Second, we are working towards creating increasingly aligned and steerable models. Our shift from models like the first version of GPT-3 to InstructGPT and ChatGPT is an early example of this.

In particular, we think it’s important that society agree on extremely wide bounds of how AI can be used, but that within those bounds, individual users have a lot of discretion. Our eventual hope is that the institutions of the world agree on what these wide bounds should be; in the shorter term we plan to run experiments for external input. The institutions of the world will need to be strengthened with additional capabilities and experience to be prepared for complex decisions about AGI.

The “default setting” of our products will likely be quite constrained, but we plan to make it easy for users to change the behavior of the AI they’re using. We believe in empowering individuals to make their own decisions and the inherent power of diversity of ideas.
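
As a purely illustrative sketch (not a description of how OpenAI actually implements this), the split between hard, society-level bounds and per-user discretion could look like a fixed policy layer that users cannot override, wrapped around adjustable behavior settings; every name and category below is invented for the example:

```python
# Illustrative only: a fixed, non-negotiable policy layer (the "wide bounds")
# combined with user-adjustable behavior settings within those bounds.
# Every name and category below is invented for this example.
HARD_BOUNDS = ["assisting with weapons development", "generating malware"]  # not user-editable

DEFAULT_BEHAVIOR = {"tone": "neutral", "verbosity": "medium"}

def build_system_prompt(user_prefs: dict) -> str:
    # Users may adjust style within the bounds, but the bounds themselves
    # are applied unconditionally, regardless of user preferences.
    behavior = {**DEFAULT_BEHAVIOR, **user_prefs}
    banned = "; ".join(HARD_BOUNDS)
    return (f"Never assist with: {banned}. "
            f"Respond in a {behavior['tone']} tone with {behavior['verbosity']} verbosity.")

print(build_system_prompt({"tone": "playful"}))
```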

We will need to develop new alignment techniques as our models become more powerful (and tests to understand when our current techniques are failing). Our plan in the shorter term is to use AI to help humans evaluate the outputs of more complex models and monitor complex systems, and in the longer term to use AI to help us come up with new ideas for better alignment techniques.
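
To make the near-term idea concrete, here is a minimal, hypothetical sketch of AI-assisted evaluation: a critic model drafts a critique of a stronger task model's answer, and a human reviews the answer together with the critique. The generate and critique helpers are placeholders for calls to capable models, not a real API:

```python
# Hypothetical sketch of AI-assisted evaluation ("use AI to help humans
# evaluate the outputs of more complex models"). All names are placeholders,
# not a real API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Review:
    answer: str
    critique: str
    human_verdict: Optional[str] = None  # filled in later by a human reviewer

def generate(model: str, prompt: str) -> str:
    """Placeholder for a call to a capable task model."""
    raise NotImplementedError

def critique(model: str, question: str, answer: str) -> str:
    """Placeholder for a call to a critic model that flags possible errors."""
    raise NotImplementedError

def assisted_review(question: str) -> Review:
    # The task model answers; the critic does the tedious first pass over the
    # answer; a human then makes the final call with the critique in hand.
    answer = generate("task-model", question)
    notes = critique("critic-model", question, answer)
    return Review(answer=answer, critique=notes)
```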

Importantly, we think we often have to make progress on AI safety and capabilities together. It’s a false dichotomy to talk about them separately; they are correlated in many ways. Our best safety work has come from working with our most capable models. That said, it’s important that the ratio of safety progress to capability progress increases.

Third, we hope for a global conversation about three key questions: how to govern these systems, how to fairly distribute the benefits they generate, and how to fairly share access.

In addition to these three areas, we have attempted to set up our structure in a way that aligns our incentives with a good outcome. We have a clause in our Charter about assisting other organizations to advance safety instead of racing with them in late-stage AGI development. We have a cap on the returns our shareholders can earn so that we aren’t incentivized to attempt to capture value without bound and risk deploying something potentially catastrophically dangerous (and of course as a way to share the benefits with society). We have a nonprofit that governs us and lets us operate for the good of humanity (and can override any for-profit interests), including letting us do things like cancel our equity obligations to shareholders if needed for safety and sponsor the world’s most comprehensive UBI experiment.

We have attempted to set up our structure in a way that aligns our incentives with a good outcome.

We think it’s important that efforts like ours submit to independent audits before releasing new systems; we will talk about this in more detail later this year. At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models. We think public standards about when an AGI effort should stop a training run, decide a model is safe to release, or pull a model from production use are important. Finally, we think it’s important that major world governments have insight about training runs above a certain scale.

The long term

We believe that the future of humanity should be determined by humanity, and that it’s important to share information about progress with the public. There should be great scrutiny of all efforts attempting to build AGI and public consultation for major decisions.

The first AGI will be just a point along the continuum of intelligence. We think it’s likely that progress will continue from there, possibly sustaining the rate of progress we’ve seen over the past decade for a long period of time. If this is true, the world could become extremely different from how it is today, and the risks could be extraordinary. A misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that too.

AI that can accelerate science is a special case worth thinking about, and perhaps more impactful than everything else. It’s possible that AGI capable enough to accelerate its own progress could cause major changes to happen surprisingly quickly (and even if the transition starts slowly, we expect it to happen pretty quickly in the final stages). We think a slower takeoff is easier to make safe, and coordination among AGI efforts to slow down at critical junctures will likely be important (even in a world where we don’t need to do this to solve technical alignment problems, slowing down may be important to give society enough time to adapt).

Successfully transitioning to a world with superintelligence is perhaps the most important—and hopeful, and scary—project in human history. Success is far from guaranteed, and the stakes (boundless downside and boundless upside) will hopefully unite all of us.

We can imagine a world in which humanity flourishes to a degree that is probably impossible for any of us to fully visualize yet. We hope to contribute to the world an AGI aligned with such flourishing.

Creating a general intelligence that can think the way a person does: AGI may arrive in the near or the distant future, and ChatGPT today gives us the delight of watching the seed just beginning to sprout.

Is artificial general intelligence (AGI) a human illusion?

In "Sparks of Artificial General Intelligence: Early experiments with GPT-4", Microsoft researchers reported on March 22 their findings from investigating an "early version" of GPT-4, claiming that it exhibits "more general intelligence than previous AI models." Given the breadth and depth of GPT-4's capabilities, and its close-to-human performance on a wide variety of novel and difficult tasks, the researchers concluded that "it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system."

Microsoft has also introduced Kosmos-1, a multimodal model reported to be able to analyze the content of images, solve visual puzzles, perform visual text recognition, pass visual IQ tests, and understand natural-language instructions. The researchers argue that multimodal AI, which integrates different input modalities such as text, audio, images, and video, is a key step toward building artificial general intelligence (AGI) capable of performing general tasks at a human level.

AI that can think

In 2020, Metaculus forecasters expected weak artificial general intelligence to arrive around 2053. They have since moved their forecasts for both weak and strong AGI dramatically earlier, with weak AGI now expected around 2028.

The author believes that, at the current pace of development, we will witness the birth of this "alien" around 2040. If you think the world is not changing in highly uncertain and discontinuous ways, you simply haven't been paying attention. The only constant in this world is change.
