OpenAI CEO Sam Altman responds to recent safety concerns

Sam Altman and Greg Brockman, two of OpenAI's most senior figures, have written a response to the AI safety concerns Jan Leike raised after resigning from the company this week. The pair say OpenAI is committed to "a very tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and harmony of safety and capabilities."

Altman and Brockman add that the company will keep doing safety research targeting different timescales and will continue collaborating with governments and other stakeholders on safety.

For background, Jan Leike co-led the Superalignment team with Ilya Sutskever. Formed less than a year ago, the team was tasked with finding ways to control superintelligent AI. Both men left the company this week, complaining that OpenAI appeared to be putting safety on the back burner in favor of new advances.

OpenAI's announcement is a bit wordy, and the point it is trying to make is somewhat vague. The final paragraph seems to carry the most weight; it reads:

"在通往 AGI 的道路上,没有行之有效的指南。我们认为,经验性的理解有助于指明前进的道路。我们相信,既要实现巨大的发展前景,也要努力降低严重的风险;我们非常认真地对待我们在这方面的角色,并仔细权衡对我们行动的反馈意见"。

In essence, they seem to be saying that the best way to do safety testing is during actual product development, rather than by trying to anticipate some hypothetical superintelligent AI that may arrive in the future.

Altman and Brockman's full statement reads as follows:

We’re really grateful to Jan for everything he's done for OpenAI, and we know he'll continue to contribute to the mission from outside. In light of the questions his departure has raised, we wanted to explain a bit about how we think about our overall strategy.

First, we have raised awareness of the risks and opportunities of AGI so that the world can better prepare for it. We've repeatedly demonstrated the incredible possibilities from scaling up deep learning and analyzed their implications; called for international governance of AGI before such calls were popular; and helped pioneer the science of assessing AI systems for catastrophic risks.

Second, we have been putting in place the foundations needed for safe deployment of increasingly capable systems. Figuring out how to make a new technology safe for the first time isn't easy. For example, our teams did a great deal of work to bring GPT-4 to the world in a safe way, and since then have continuously improved model behavior and abuse monitoring in response to lessons learned from deployment.

Third, the future is going to be harder than the past. We need to keep elevating our safety work to match the stakes of each new model. We adopted our Preparedness Framework last year to help systematize how we do this.

This seems like as good of a time as any to talk about how we view the future.

As models continue to become much more capable, we expect they'll start being integrated with the world more deeply. Users will increasingly interact with systems — composed of many multimodal models plus tools — which can take actions on their behalf, rather than talking to a single model with just text inputs and outputs.

We think such systems will be incredibly beneficial and helpful to people, and it'll be possible to deliver them safely, but it's going to take an enormous amount of foundational work. This includes thoughtfulness around what they're connected to as they train, solutions to hard problems such as scalable oversight, and other new kinds of safety work. As we build in this direction, we're not sure yet when we’ll reach our safety bar for releases, and it’s ok if that pushes out release timelines.

We know we can't imagine every possible future scenario. So we need to have a very tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and harmony of safety and capabilities. We will keep doing safety research targeting different timescales. We are also continuing to collaborate with governments and many stakeholders on safety.

There's no proven playbook for how to navigate the path to AGI. We think that empirical understanding can help inform the way forward. We believe both in delivering on the tremendous upside and working to mitigate the serious risks; we take our role here very seriously and carefully weigh feedback on our actions.

— Sam and Greg

Source: X | Image via Depositphotos.com
