
Published on Mirror, 2023-05-23

OpenAI Leaders Propose International AI Regulatory Body, with Capability Thresholds to Control Risks

OpenAI CEO Sam Altman, president Greg Brockman, and chief scientist Ilya Sutskever jointly published a blog post titled "Governance of Superintelligence" on the company's official website. The post argues that AI systems will exceed expert skill level in most fields, and that while this could bring a far more prosperous future, the risks must be managed to get there. Given the potential risks, we cannot just be reactive; superintelligence requires special treatment and coordination.

The post proposes a three-pronged approach to addressing these risks:

1. A degree of coordination among the leading development efforts, to ensure that superintelligence is developed in a way that maintains safety and helps these systems integrate smoothly with society. For example, the world's major governments could set up a project that many current efforts become part of, or there could be a collective agreement to limit the growth of frontier AI capability to a certain rate per year (see the sketch after this list).

2. An "International Atomic Energy Agency for AI", that is, a specialized international regulatory body. Any effort above a certain capability threshold would be subject to this international authority, which could inspect systems, require audits, test for compliance with safety standards, and place restrictions on degrees of deployment and levels of security. Such an agency should focus on mitigating existential risk, not on questions that should be left to individual nations, such as defining what an AI should be allowed to say.

3. The technical capability to make a superintelligence safe, which is an open research problem that OpenAI and others are putting a lot of work into.
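To make the two quantitative ideas above concrete — a capability threshold that triggers international oversight, and an agreed cap on year-over-year capability growth — here is a minimal sketch in Python. Neither the post nor this article specifies a metric or any numbers, so training compute (FLOPs) as the capability proxy, the threshold value, and the 2x-per-year cap are all hypothetical assumptions for illustration only.

```python
# Purely illustrative: the metric (training FLOPs), the oversight threshold,
# and the allowed annual growth rate are hypothetical placeholders, not
# values proposed by OpenAI or by this article.

OVERSIGHT_THRESHOLD_FLOPS = 1e26  # hypothetical capability threshold
MAX_ANNUAL_GROWTH = 2.0           # hypothetical cap: at most 2x per year


def needs_international_oversight(training_flops: float) -> bool:
    """Efforts above the threshold would fall under the international authority."""
    return training_flops >= OVERSIGHT_THRESHOLD_FLOPS


def growth_within_limit(last_year_flops: float, this_year_flops: float) -> bool:
    """Checks the collectively agreed cap on frontier capability growth per year."""
    return this_year_flops <= last_year_flops * MAX_ANNUAL_GROWTH


# Example: a frontier run that triples its compute year over year
print(needs_international_oversight(3e26))  # True  -> inspections, audits, safety tests
print(growth_within_limit(1e26, 3e26))      # False -> exceeds the agreed annual rate
```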

OpenAI's executives believe it is important to let companies and open-source projects develop models below a significant capability threshold without regulation through burdensome mechanisms such as licensing or audits. Stopping the creation of superintelligence would be risky and difficult: it would require a global regulatory regime, and even that is not guaranteed to be effective.