UK government introduces AI self-assessment tool

The UK government has launched a free self-assessment tool to help businesses responsibly manage their use of artificial intelligence.

The questionnaire is intended for any organization that develops, provides, or uses services that incorporate AI as part of its standard operations, but it is primarily aimed at small businesses and start-ups. The results show decision makers the strengths and weaknesses of their organization’s AI management systems.

How to use AI Management Essentials

Now available, the self-assessment is one of three parts of a tool called “AI Management Essentials” (AIME). The other two parts are a rating system, which provides insight into how well the business manages its AI, and a set of action points and recommendations for organizations to consider. Neither has been published yet.

AIME is based on the ISO/IEC 42001 standard, the NIST framework, and the EU AI Act. The self-assessment questions cover how the business uses AI, manages its risks, and is transparent with stakeholders.
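
Since the rating system has not yet been published, AIME’s actual scoring is unknown. Purely as an illustrative sketch, a questionnaire of this kind could be modelled as sections of yes/no questions whose answers roll up into per-area scores; every name, question, and scoring rule below is hypothetical and not part of AIME.

```python
from dataclasses import dataclass, field

# Hypothetical model of a self-assessment questionnaire. The section
# names mirror the areas AIME reportedly covers (AI use, risk
# management, transparency); the questions and scoring are invented.

@dataclass
class Question:
    text: str
    answer: bool | None = None  # True = practice is in place

@dataclass
class Section:
    name: str
    questions: list[Question] = field(default_factory=list)

    def score(self) -> float:
        """Fraction of answered questions where the practice is in place."""
        answered = [q for q in self.questions if q.answer is not None]
        if not answered:
            return 0.0
        return sum(q.answer for q in answered) / len(answered)

questionnaire = [
    Section("AI use", [Question("Do you keep an inventory of the AI systems you use?")]),
    Section("Risk management", [Question("Are AI risks reviewed on a fixed schedule?")]),
    Section("Transparency", [Question("Do you tell stakeholders when AI informs decisions?")]),
]

# A decision maker answers each question, then reads the per-area
# scores to see the strengths and weaknesses of their AI management.
for section in questionnaire:
    for question in section.questions:
        question.answer = True  # example response
    print(f"{section.name}: {section.score():.0%}")
```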

SEE: Delaying AI deployment in the UK by five years could cost the economy more than £150 billion, says Microsoft report

“The tool is not designed to evaluate AI products or services themselves, but rather to evaluate the organizational processes in place to enable the responsible development and use of these products,” according to the report from the Department for Science, Innovation and Technology (DSIT).

When completing the self-assessment, organizations should seek input from employees with broad technical and business knowledge, such as a CTO or software engineer and an HR business manager.

The government wants to embed the self-assessment in its public procurement policies and frameworks in order to drive assurance practices in the private sector. It also wants to make the tool available to public sector buyers to help them make more informed decisions about AI.

On November 6, the government opened a consultation inviting businesses to provide feedback on the self-assessment, and the results will be used to refine it. The assessment and recommendation parts of the AIME tool will be published after the consultation closes on January 29, 2025.

The self-assessment is one of several government initiatives planned for AI assurance

In a paper released this week, the government said AIME would be one of several resources available on the ‘AI Assurance Platform’ it is seeking to develop. These will help companies conduct impact assessments or review AI data for bias.
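
The platform’s tools have not yet been published, but reviewing AI data for bias typically starts with something as simple as comparing outcome rates across groups. The sketch below is an invented, minimal illustration of that idea and does not represent the government’s tooling.

```python
# Illustrative bias check: compare positive-outcome rates across a
# protected attribute (a simple "demographic parity" gap). The
# records and groups are invented for the example.
records = [
    {"group": "A", "outcome": 1}, {"group": "A", "outcome": 0},
    {"group": "A", "outcome": 1}, {"group": "B", "outcome": 0},
    {"group": "B", "outcome": 0}, {"group": "B", "outcome": 1},
]

def positive_rate(group: str) -> float:
    rows = [r for r in records if r["group"] == group]
    return sum(r["outcome"] for r in rows) / len(rows)

# A large gap between groups flags the dataset (or a model's
# outputs) for closer review.
gap = abs(positive_rate("A") - positive_rate("B"))
print(f"Selection-rate gap: {gap:.2f}")
```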

The government is also creating a Responsible AI Terminology Tool to define and standardize key AI assurance terms to improve cross-border communication and trade, particularly with the United States.

“Over time, we will create a set of accessible tools to enable basic good practices for the responsible development and deployment of AI,” the authors write.

The government says the UK’s AI assurance market, the sector that provides tools for developing or using AI safely and that currently comprises 524 firms, will grow the economy by more than £6.5 billion over the next decade. This growth can be partly attributed to increased public trust in the technology.

The report adds that the government will partner with the AI Safety Institute, launched by former Prime Minister Rishi Sunak at the AI Safety Summit in November 2023, to advance AI assurance in the country. It will also allocate funding to expand the Systemic Safety Grants Scheme, which currently has up to £200,000 available for initiatives that develop the AI assurance ecosystem.

AI safety testing to become legally binding next year

Meanwhile, at the Financial Times Future of AI Summit on Wednesday, Peter Kyle, the UK’s technology secretary, pledged to make the voluntary agreement on AI safety testing legally binding by introducing the AI Bill next year.

At November’s AI Safety Summit, AI companies including OpenAI, Google DeepMind, and Anthropic voluntarily agreed to let governments test the safety of their latest AI models before release. It was first reported that Kyle had expressed his intention to put the voluntary agreements on a statutory footing to executives of major AI companies at a meeting in July.

SEE: OpenAI and Anthropic sign agreements with the US AI Safety Institute, handing over frontier models for testing

He also said the AI Bill will focus on the large, ChatGPT-style foundation models created by a handful of companies, and will transform the AI Safety Institute from a DSIT directorate into an “independent government body”. Kyle reiterated these points at this week’s summit, according to the FT, stressing that he wants to give the Institute “the independence to act fully in the interests of British citizens”.

Additionally, he pledged to invest in advanced computing power to support the development of frontier AI models in the UK, responding to criticism over the government’s £800 million funding cut for the University of Edinburgh’s supercomputer in August.

SEE: UK government announces £32m for AI projects after cutting funding for supercomputers

Kyle said that while the Government cannot invest £100 billion on its own, it will partner with private investors to secure the funding needed for future initiatives.

A year of UK AI safety commitments

Over the past year, the UK has made numerous commitments to developing and using AI responsibly.

On October 30, 2023, the Group of Seven countries, including the United Kingdom, created a voluntary code of conduct on AI comprising 11 principles that “promote safe, secure and trustworthy AI worldwide.”

The AI Safety Summit, at which 28 countries committed to the safe and responsible development and deployment of AI, kicked off a few days later. Later in November, the UK’s National Cyber Security Centre, the US Cybersecurity and Infrastructure Security Agency, and agencies from 16 other countries published guidelines on how to develop new AI models securely.

SEE: UK AI Safety Summit: World powers make ‘historic’ commitment to AI safety

In March, G7 countries signed another agreement, pledging to explore how AI can improve public services and boost economic growth. The agreement also covered the joint development of an AI toolkit to ensure the models in use are safe and trustworthy. The following month, the then-Conservative government signed a memorandum of understanding, agreeing to work with the United States on developing tests for advanced AI models.

In May, the government published Inspect, a free, open-source testing platform that evaluates the safety of new AI models by assessing their core knowledge, reasoning ability, and autonomous capabilities. It also co-hosted another AI safety summit in Seoul, at which the UK agreed to collaborate with other nations on AI safety measures and announced up to £8.5 million in grants for research into protecting society from AI risks.
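
Inspect is distributed as a Python package (inspect-ai). The minimal sketch below follows the pattern of its published quick-start: a one-sample task whose output is scored by exact match against a target. The sample prompt is invented, and the exact API may vary between Inspect versions.

```python
# Minimal Inspect evaluation, following the pattern of the
# framework's quick-start (pip install inspect-ai).
from inspect_ai import Task, task
from inspect_ai.dataset import Sample
from inspect_ai.scorer import exact
from inspect_ai.solver import generate

@task
def hello_inspect():
    # One-sample dataset: the model is asked a question and its
    # answer is compared against the target string.
    return Task(
        dataset=[Sample(input="Reply with exactly the word: Hello", target="Hello")],
        solver=[generate()],  # have the model generate a completion
        scorer=exact(),       # score by exact match with the target
    )
```

Saved as hello_inspect.py, the task would be run with a command along the lines of `inspect eval hello_inspect.py --model openai/gpt-4o` (the model name here is only an example), after which Inspect reports per-sample scores.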

Then, in September, the United Kingdom signed the world’s first international treaty on AI, alongside the EU, the US, and seven other countries, committing signatories to adopt or maintain measures that ensure the use of AI is consistent with human rights, democracy, and the rule of law.

And it’s not over yet: alongside the AIME tool and report, the government announced a new AI safety partnership with Singapore through a Memorandum of Cooperation. It will also be represented at the first meeting of the international network of AI Safety Institutes in San Francisco later this month.

Ian Hogarth, chair of the AI Safety Institute, said: “An effective approach to AI safety requires global collaboration. This is why we place such importance on the international network of AI Safety Institutes, while strengthening our own research partnerships.”

However, the United States has been moving further away from AI collaboration with other countries: a recent directive limits the sharing of AI technologies and imposes protections against foreign access to AI resources.