Details of UK AI Safety Institute revealed at Bletchley Park summit

2023-11-06
Tech companies and governments from around the world have backed the UK’s plan for an AI Safety Institute after more details of the organisation were revealed at the AI Safety Summit at Bletchley Park.

Prime Minister Rishi Sunak speaks with US VP Kamala Harris at the end of the AI Summit at Bletchley Park. (Picture by Simon Dawson/No 10 Downing Street)

Prime Minister Rishi Sunak announced plans to create the safety institute, which will test new AI models to pinpoint potential safety issues, in a speech last week. Today it was revealed that the new body will build on the work of the UK's Frontier AI Task Force, and will be chaired by Ian Hogarth, the tech investor who has been running the task force since it was created earlier this year.

Partners buy-in to Sunak’s UK AI Safety Institute

According to a brochure released today by the government, the institute will carefully test new types of frontier AI before and after they are released, examining the full range of potentially harmful capabilities, from social harms such as bias and misinformation to more extreme risks such as humanity losing control of AI systems.

Hogarth will chair the organisation, with the Frontier AI Task Force’s advisory board, made up of leading industry figures, moving across to the institute, too. A CEO will be recruited to run the new organisation, which will work closely with the Alan Turing Institute for data science.

At the Bletchley Park summit, which concludes today, the new AI Safety Institute was backed by governments including the US, Japan and Canada, tech heavyweights such as AWS and Microsoft, and AI labs including OpenAI and Anthropic.

Sunak said: “Our AI Safety Institute will act as a global hub on AI safety, leading on vital research into the capabilities and risks of this fast-moving technology.

“It is fantastic to see such support from global partners and the AI companies themselves to work together so we can ensure AI develops safely for the benefit of all our people. This is the right approach for the long-term interests of the UK.”

AI Safety Summit draws to a close

Whether the UK institute will become the global standard bearer for AI safety research is questionable, given that the US government launched its own safety institute earlier this week. The UK says it has agreed partnerships with the US institute, as well as with the government of Singapore, to collaborate on AI safety testing.


The first task for the institute will be to put in place the processes and systems to test new AI models before they launch, including open-source models, the government said.


Governments and tech companies attending the summit agreed to work together on safety testing for AI models. Meanwhile, Yoshua Bengio, a computer scientist who played a key role in the development of deep neural networks, the technology that underpins many AI models, is to produce a report on the state of the science behind artificial intelligence. It is hoped this will help build a shared understanding of the capabilities and risks posed by frontier AI.

Sam Altman, OpenAI CEO, said: "The UK AI Safety Institute is poised to make important contributions in progressing the science of the measurement and evaluation of frontier system risks. Such work is integral to our mission – ensuring that artificial general intelligence is safe and benefits all of humanity – and we look forward to working with the institute in this effort."

The AI Safety Summit programme ended this afternoon, with Sunak holding a series of meetings with political leaders, including European Commission president Ursula von der Leyen. Later this evening he will take part in a question and answer session with Tesla CEO Elon Musk, who has not endorsed the new AI Safety Institute.

As reported by Tech Monitor, yesterday 28 countries, including the UK, US and China, signed the Bletchley Declaration, an agreement to work together on AI safety. The government also announced it is funding a £225m supercomputer, Isambard-AI, at the University of Bristol.

Read more: The UK is building a £225m AI supercomputer
