By Jay Kennedy, DSC’s Director of Policy and Research
It’s hard to escape discussions of artificial intelligence, or AI, these days. It’s a rapidly developing, complex field with lots of uncertainty. In the simplest terms, ‘generative AI’ is software that can produce digital content based on prompts from a human operator. There’s a profusion of tools for different purposes and more come along every day.
AI is likely to change the charity sector and society at large in profound ways over the next decade, but the implications are so potentially wide-ranging that it’s hard to grasp their full scope. In the near term at least, it seems likely that AI will increasingly assist us with many administrative or clerical tasks, with the potential to make these more efficient.
This will offer some benefits to a sector that is chronically short of human resources. And although many of the legal, regulatory and moral implications for our sector and society at large remain unresolved and poorly understood, AI could also ‘disrupt’ some existing barriers and power dynamics, allowing new opportunities to emerge.
Potential to disrupt a ‘broken system’
I’ve often heard it said that the way funding is provided in our sector is ‘a broken system’. Personally I don’t think it’s much of a system at all, but if it is, I agree it’s pretty poorly designed. A common complaint is that funders have most of the power in the relationship with grantees and applicants because they have the money, and therefore little incentive to improve processes, service standards, speed or effectiveness.
The pressures from those who receive funding on those who supply it are weak, because there’s rarely any shortage of demand, and grantees are disincentivised from giving feedback to their grant-makers for fear that it could hinder their chances of success. You can be a terrible grant-maker and still have no problem giving money away.
AI has the potential to substantially disrupt the way that grant-making currently functions (or dysfunctions). There was an interesting announcement last year from the Wellcome Trust on behalf of the ‘Research Funders Policy Group’ which shows that some funders are already thinking about the implications of AI for grant-making. Wellcome is definitely not a typical grant-maker: it awards the best part of a billion pounds annually in grants, making it a unique outlier in the data on UK foundation grant-making. And unlike many other grant-makers, it’s a major funder of scientific research, so it perhaps has more interest in the rigour of data and evidence than some others might.
In the statement they refer to the ethical and moral implications of AI in grant applications, research and peer-review processes, and stipulate that ‘any outputs from generative AI tools in funding applications should be acknowledged. Where individual funders wish to apply further specific restrictions, this will be explicitly stated.’
So, it may be that funders will increasingly require grants fundraisers to disclose whether AI has played some role in their applications. Maybe this will just become another tick-box part of the process, but it could be more complicated than that. Will fundraisers tell the truth about whether they’ve used AI? How will the funder even know if they aren’t? We could see the development of digital flags or markers embedded into application forms, for example, which could somehow alert the funder, as the form’s creator, if AI has been used to complete it.
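Nobody knows yet what such flags would look like in practice, but at its simplest, the disclosure side could just be structured data captured by the application form itself. Here’s a purely hypothetical sketch, in Python, of the kind of field a funder’s form schema might add, in the spirit of Wellcome’s ‘should be acknowledged’ stipulation; every name in it is invented for illustration:

```python
# Hypothetical sketch: how a funder's application schema might record
# the AI disclosure that statements like Wellcome's point towards.
# All class, field and function names here are invented for illustration.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class AIDisclosure:
    tools_used: list[str] = field(default_factory=list)       # e.g. ["ChatGPT"]
    sections_affected: list[str] = field(default_factory=list)
    human_reviewed: bool = True  # applicant confirms a human checked the output


@dataclass
class GrantApplication:
    charity_name: str
    project_summary: str
    ai_disclosure: Optional[AIDisclosure] = None  # None = declared no AI use


def flag_for_manual_review(app: GrantApplication) -> bool:
    """A funder-side rule: route AI-assisted applications that weren't
    human-reviewed to a manual check. Verifying that the declaration is
    honest is the hard, unsolved part this sketch deliberately omits."""
    return app.ai_disclosure is not None and not app.ai_disclosure.human_reviewed
```

The declaration is only as good as the applicant’s honesty, of course: reliable automated detection of AI-generated text remains an open problem, which is exactly the enforcement gap described above.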
AI grant application tools already exist
There are already companies like Grantable and Fundwriter.ai that use AI technology to help write your grant applications, and some grants fundraisers are already using ChatGPT to help draft proposals. I don’t know whether these tools are any good for the stated purpose, and I’m not able to recommend them. At this stage, it’s likely that they’ll mainly make the admin of drafting proposals easier, allowing fundraisers to be more efficient by using AI to produce first drafts and human input to correct and improve them.
Using AI tools for drafting applications is likely to be a bit like making sausages: you get out what you put in. The quality of the prompts you set and the data you input will determine what the software does with them and how good the ‘first draft’ is. To further mix metaphors, it’s not going to make silk purses out of sows’ ears (yet).
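To make the ‘you get out what you put in’ point concrete, here’s a minimal sketch of the prompt-driven drafting these tools wrap in friendlier interfaces. It assumes the OpenAI Python SDK with an API key in the environment; the model choice, prompt wording, charity and project details are invented for illustration, not a recommendation of any particular tool:

```python
# Minimal sketch of AI-assisted first-drafting, assuming the OpenAI
# Python SDK (pip install openai) and an OPENAI_API_KEY environment
# variable. The charity, project and figures below are invented examples.
from openai import OpenAI

client = OpenAI()

# The quality of these inputs largely determines the quality of the
# draft: vague facts in, vague boilerplate out.
project_facts = """
Charity: Riverside Youth Trust (hypothetical)
Project: after-school mentoring for 120 young people
Amount sought: £25,000 over 12 months
Evidence of need: 40% of local pupils eligible for free school meals
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You write concise, factual first drafts of UK grant "
                    "applications. Never invent statistics or outcomes."},
        {"role": "user",
         "content": "Draft a 150-word project summary using only these "
                    "facts:\n" + project_facts},
    ],
)

# A first draft for a human fundraiser to check, correct and improve.
print(response.choices[0].message.content)
```

Note the instruction not to invent statistics: left unconstrained, a model will happily generate plausible-sounding figures, which is precisely the kind of ‘improvement’ the human correction step needs to catch.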
And crucially, with any of these things, beware the axiom that applies to any ‘free’ digital technology: if it’s ‘free’, YOU are the product. The use of your data, and sometimes the selling of it (who knows how or to whom), is what allows the company to provide the free version. With social media, that usually means using your data for marketing purposes. With AI, it may be for other reasons, such as training the company’s models.
Either way, DO NOT put any commercially sensitive or confidential information into any free AI programme, as you may lose control of it and get in big trouble. If you’re set on experimenting, use material that isn’t proprietary, or subscribe to the paid-for version, but carefully check the terms and conditions first, particularly around data protection and intellectual property.
It’s early days, and there’s going to be a proliferation of tools; many will disappear or be bought out by major tech firms. The key near-term implication is that these tools could substantially reduce the time it takes to submit an application, allowing grants fundraisers to increase both the quality and the quantity of their applications.
A revolution in funding practice?
In a relatively short period of time, it’s possible that AI could drive a far greater number of applications, and higher-quality applications, to roughly the same number of grant-makers with the same total amount of grant funding available. A more important determinant of success could become access to the software and the ability to use it, rather than expertise in writing applications, assuming funders allow AI-facilitated proposals and/or aren’t able to find ways to control their use.
In this scenario, ineligibility rates could plummet. The gap between the total value of all eligible applications and the total value of all potential grants would widen even further than it already is. This could drive grant-makers to seriously rethink how they make funding decisions, in a way they haven’t for, well, centuries. How do you prioritise who is successful and how best to allocate your limited funds when all your applications are completely eligible and super high quality?
There’s a danger that funders might lean further into the power dynamic, making ever greater demands for evidence of need or impact, especially if they knew that AI made these easier for applicants to meet. A kind of impact-measurement arms race could ensue. Some might stop making open grants altogether, and instead use AI themselves to seek out and support the organisations and projects which most closely matched their objectives and provided the most impact.
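What would ‘using AI to seek out organisations’ actually involve? One plausible building block is semantic matching: turning a funder’s objectives and candidate project descriptions into vectors and ranking by similarity. A minimal sketch, assuming the open-source sentence-transformers library; the model choice and all example texts are invented for illustration:

```python
# Illustrative sketch of funder-side matching: rank candidate projects
# by semantic similarity to a funder's stated objectives.
# Assumes sentence-transformers (pip install sentence-transformers);
# the model and all example texts are invented for illustration.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small, free embedding model

funder_objectives = ("Reducing youth unemployment in coastal towns "
                     "through skills training and mentoring.")

# In practice these might come from charity registers, annual reports
# or past applications; here they are toy examples.
candidate_projects = [
    "Employability workshops for 16-24 year olds in Blackpool.",
    "Restoring a Victorian bandstand in a city-centre park.",
    "Peer mentoring scheme for young jobseekers in Grimsby.",
]

obj_vec = model.encode(funder_objectives, convert_to_tensor=True)
proj_vecs = model.encode(candidate_projects, convert_to_tensor=True)

# Cosine similarity: higher = closer match to the funder's objectives.
scores = util.cos_sim(obj_vec, proj_vecs)[0]

for project, score in sorted(zip(candidate_projects, scores.tolist()),
                             key=lambda pair: pair[1], reverse=True):
    print(f"{score:.2f}  {project}")
```

The mechanics are already cheap and off the shelf; whether ranking charities by vector similarity is a fair way to allocate funding is exactly the kind of unresolved question raised above.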
Others might resist the change by attempting to ban AI-supported applications, or by prioritising applications which were not AI-produced if they viewed these as more authentic and accessible. Plenty might even stick with the old analogue system of ‘write us a hand-written letter and we’ll respond when we get round to it’.
An uncertain future, but with big possibilities
There are clearly big questions that remain unanswered, but we can already see a few contours of generative AI’s future impact on our sector. What does seem certain, though, is that it has the potential to change many systems, broken or not, where there’s a substantial element of inefficient administrative effort. As such, the grant-making ‘system’ would seem ripe for disruption.
Want to learn more about AI and fundraising? AI – Threat or Opportunity? – An Introduction For Charities will be running on Tuesday 16 July. Register here.