This article originally appeared on hubinternational.com 

Non-profits are increasingly incorporating artificial intelligence (AI) into their operations and communication platforms, with adoption outpacing that of their private-sector counterparts 58% to 47%.1

AI enables non-profits to enhance stakeholder engagement and can help them access solutions to the social problems they are working to address. About 70% of non-profits believe generative AI will help them achieve their sustainable development goals by enhancing productivity, improving access to information and increasing awareness to drive policy change.2

But AI also presents risks that could threaten a non-profit financially, reputationally and operationally.

How non-profits are using AI

AI use has surged since 2020, thanks to rapid advances in technology for generating text, images and videos. Non-profits are tapping into generative AI and its large language model (LLM) subset3 to create text from large datasets, enhancing efficiency and expanding their reach. Additionally, non-profits can use AI to automate repetitive tasks, including administrative duties like scheduling meetings, data entry and volunteer management, so they can focus their limited employee and volunteer resources on other important work.4

The risks — and how to combat them

Despite its benefits, AI carries risks, including errors in word choice or tone and potential copyright infringement in AI-generated materials. It is critical that organizations have a process to fact-check AI-generated materials and develop usage rules and policies for employees and volunteers, supported by awareness training. Organizations should also consider media liability insurance against AI content-related claims of personal injury, copyright/trademark infringement and plagiarism.

Cybercrime is another concern. AI has enabled cybercriminals to improve the speed, scale and automation of cyberattacks. The technology can turbocharge schemes like phishing or ransomware and can be used to create “deepfakes” that mimic the voices of real people “authorizing” fraudulent activities. These systems can be targets as well. If a threat actor were able to compromise a language model and poison the information within it, the outputs generated by applications relying on that model could be damaging.

Unfortunately, many non-profits are resource-challenged and increasingly vulnerable to cyber threats. About 68% of non-profits have had at least one data breach in the last three years; 75% don’t actively monitor their networks; and more than 70% don’t run vulnerability assessments.5

Every organization using or considering AI technology needs best practices and policies to protect against the potential risks. Here are some steps to consider:

  • Document AI use policies. Organizations need to determine who can use public tools and for what purpose. For instance, can business or personal email accounts be linked to the programs? How will access be managed — and by whom?
  • Due diligence. Third-party tools that an organization or its vendors buy, license or access cause more than half of all AI failures, including outputs with inaccurate or copyrighted information.6 Organizations must thoroughly evaluate tools and the practices of any potential vendors to ensure they are guarding against threats. Rigorous contractual risk management — including hold-harmless, indemnification and insurance provisions — is a must.
  • Awareness training. All staff should be trained in the use of AI tools and general cybersecurity protocols.
  • Risk management. An experienced broker is an invaluable resource to help organizations assess their cyber risk. Organizations should work with their broker to ensure they have the right insurance for AI-related exposures, such as cyber insurance and intellectual property coverage.

Contact HUB International’s non-profit insurance specialists to learn more about how to protect yourself against AI-related risks and take full advantage of the technology.

Protect your most important asset – your staff

ONPHA has partnered with HUB International to offer our members a customized Employee Benefits Program. 

Learn more and get a quote today. 

1 The NonProfit Times, “Nonprofits’ Use Of AI Exceeds For-Profit Implementation,” September 16, 2024.
2 McKinsey & Co., “AI for social good: Improving lives and protecting the planet,” May 10, 2024.
3 Cloudflare, “What is a large language model?” accessed October 29, 2024.
4 Nonprofit Leadership Alliance, “From Fundraising to Finance: How AI for Nonprofits Can Maximize Impact,” May 15, 2024.
5 Network Depot, “Why Nonprofits Have Become A Popular Target For Cybercriminals And How To Stop Them,” September 17, 2024.
6 MIT Management, “Third-party AI tools pose increasing risks for organizations,” September 21, 2023.