First Call for Systemic AI Safety Fast Grants to Open Soon
The AI Safety Institute (AISI) is a research organisation within the UK Government’s Department for Science, Innovation and Technology (DSIT). Its mission is to equip governments with an empirical understanding of the safety of advanced artificial intelligence (AI) systems.
AISI, in partnership with UK Research and Innovation (UKRI), is offering Systemic AI Safety Fast Grants to researchers who will collaborate with the UK government to advance systemic approaches to AI safety. As well as partnering with UKRI, AISI will work with the Alan Turing Institute and other AI Safety Institutes worldwide for this programme.
There is a recognised need to assess the risks of AI to people and society in a way that looks beyond the capabilities of individual AI models. AI risk management must include an understanding of how AI affects the systems and infrastructure in which models operate. In this sense, AI safety means safeguarding the societal systems and critical infrastructure into which AI is being deployed, in order to make the world more resilient to AI-related hazards and to enable AI's benefits.
Through this funding opportunity, AISI will support impactful research that takes a systemic approach to AI safety. This call is the first round of seed (exploratory/proof-of-concept) grants and will be followed by future rounds offering more substantial awards. Successful applicants are expected to benefit from ongoing support, computing resources, and access to a community of AI and sector-specific domain experts.
Proposals are requested that directly address systemic AI safety problems or improve understanding of systemic AI safety. Priority will be given to proposals that offer concrete, actionable approaches to significant systemic risks from AI. The aim is to stimulate a research field focused on understanding, and intervening at the level of, the systems and infrastructure in which AI systems operate.
Some examples of the types of research of interest to AISI are as follows (this list is not exhaustive):
- A systems-informed approach to improving trust in authentic digital media and protecting against AI-generated misinformation.
- Targeted interventions that protect critical infrastructure (eg protecting energy or healthcare systems from an AI-enabled cyber attack).
- Projects measuring, modelling or mitigating potentially harmful secondary effects of AI systems that take autonomous actions on digital platforms.
Proposals are welcomed from researchers in any field. Applicants must be associated with a host organisation that can receive the awarded funds, such as a university, business, civil society organisation or part of government. Applicants must be based in the UK, although teams may include international collaborators. Applications are expected from the computer science and AI sectors as well as from those working on digital media, education, cybersecurity, the democratic process, safety science, institution and market design, and the economy.
Through this first call, around 20 exploratory or proof-of-concept grants will be funded. Bids for more substantial proposals to develop research programmes further will be invited in future rounds. In addition, grant recipients will benefit from meetings, workshops and networking opportunities with AISI, other grant holders and stakeholders.