Technology

Establishing Guardrails Around the Use of Generative AI

With so many unknowns still surrounding generative AI, it’s important for organizations to develop smart guidelines around its use. One expert shares how associations can create strong guardrails so that the technology is used both effectively and ethically.

While generative AI and its uses are top of mind for many organizations, concerns over the ethical use and influence of the technology have popped up among association pros, including in recent conversations in ASAE’s Collaborate online community [member log-in required].

A recent Salesforce survey of more than 500 senior IT leaders found that the majority were concerned about the ethical implications of using generative AI, and 30 percent said they need ethical use guidelines to successfully implement the technology in their business.

“Ethical use of AI is more than just using AI legally; it includes a respect for fundamental values,” said Lisa Rau, cofounder and chief growth officer at Fíonta. “To use the technology well, associations need to evaluate how the tool provides strategic value that aligns with their organization and set out principles and policies for its use and purpose.”

Establishing guidelines that include policies on security, implementation of human review, and specifications on authorized and prohibited use can help ensure that associations use the technology ethically.

Secure Confidential Information

According to Rau, one of the biggest issues with generative AI is protecting confidential information. Problems can arise in associations when data is used to refine a large language model, or LLM (a deep-learning model that understands and generates humanlike text), or another underlying model.

That confidential information is vulnerable to misuse, whether through security breaches or because the model provider has not restricted how it can be used. Even nonconfidential information can become confidential when it is combined with other information already in a model.

“A good guideline to put in place for these issues can include prohibiting association data from being uploaded to any model or mandating the use of confidential computing—using an isolated, encrypted environment known as a Trusted Execution Environment (TEE) to run generative AI infrastructure on,” Rau said.

Other confidentiality concerns center on the ownership of AI-generated materials when the output is based on protected sources, as well as the privacy of user inputs, including questions, prompts, or tasks. Rau recommends, at a minimum, alerting users if their questions or tasks will be incorporated into a public-use model.

“Including policies about how to handle confidential information should be part of your transparency effort,” Rau said. “You want to make sure you aren’t taking your constituents’ questions and revealing them back to the technology, so they can be accessed by people outside of the organization.”
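For associations that build generative AI into their own member-facing tools, written policies like these can be reinforced with a simple technical check. The Python sketch below is purely illustrative and is not drawn from Rau’s remarks: the confidential patterns, the user notice, and the function names are all hypothetical placeholders that an association would replace with its own rules.

    import re

    # Hypothetical patterns an association might treat as confidential.
    CONFIDENTIAL_PATTERNS = [
        r"\bmember\s*id\s*\d+",     # member identifiers
        r"\b\d{3}-\d{2}-\d{4}\b",   # U.S. Social Security numbers
        r"\bboard\s+minutes\b",     # internal governance documents
    ]

    USER_NOTICE = ("Note: questions submitted here may be incorporated "
                   "into a public-use model. Please do not include "
                   "confidential information.")

    def redact_confidential(prompt: str) -> str:
        """Scrub anything matching a confidential pattern before the
        text leaves the organization."""
        for pattern in CONFIDENTIAL_PATTERNS:
            prompt = re.sub(pattern, "[REDACTED]", prompt,
                            flags=re.IGNORECASE)
        return prompt

    def prepare_prompt(user_question: str) -> str:
        # Alert the user, per Rau's minimum recommendation, then redact.
        print(USER_NOTICE)
        return redact_confidential(user_question)

A redaction list like this is only a first line of defense; a real deployment would pair it with contractual limits on how the model provider may use submitted data.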

Employ Human Review

Generative AI is useful for producing information on many subjects, but how do you know whether the content is accurate?

“It would be appropriate for your AI guidelines to include requiring human review in between the generation of a response or completion of a task and it being provided to a constituent or used in a work product,” Rau said.

For example, associations may use generative AI for member services, where it suggests answers for staff to choose from—ensuring that there are always human eyes on output before it is provided to members or other external audiences.

When using the technology to create written content, Rau encourages taking care to validate accuracy by using fact-checkers, copy editors, or proofreaders. Guidelines should specify the nature of the review, and associations can consider automated methods of ensuring that such reviews are taking place.
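One way to automate that assurance, if an association routes AI output through its own systems, is a review gate that refuses to release a draft until a named staff member approves it. The sketch below is a minimal illustration under that assumption; the Draft and ReviewQueue names are hypothetical, not a reference to any particular product.

    from dataclasses import dataclass

    @dataclass
    class Draft:
        """An AI-generated draft that must pass human review."""
        text: str
        status: str = "pending"       # pending -> approved or rejected
        reviewer: str | None = None   # requires Python 3.10+

    class ReviewQueue:
        """Holds drafts until a named staff member signs off."""

        def __init__(self) -> None:
            self.drafts: list[Draft] = []

        def submit(self, ai_output: str) -> Draft:
            draft = Draft(text=ai_output)
            self.drafts.append(draft)
            return draft

        def approve(self, draft: Draft, reviewer: str) -> None:
            draft.status = "approved"
            draft.reviewer = reviewer

        def release(self, draft: Draft) -> str:
            # Automated check that human review actually took place.
            if draft.status != "approved" or draft.reviewer is None:
                raise PermissionError("Human review required before release.")
            return draft.text

Because release() raises an error on unreviewed drafts, skipping the human step becomes impossible by accident rather than a matter of trust.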

Establish Authorized Use and Oversight

Rau also says the guidelines should cover how organizations themselves use the technology. Such policies should spell out the situations in which it is appropriate for staff or volunteer leaders to use generative AI, whether for research or for communicating with members and partners.

“You also want to set guidelines around prohibited activity concerning the technology,” Rau said. “Indicate what constitutes harassment or unauthorized use. For example, don’t use the technology to be disrespectful.”

In addition, associations should reiterate their values, practices, and beliefs to constituents when explaining how they intend to use the technology. Staff should also be trained both in how to use the technology and in how to identify potential risks that may arise.

“An ethical approach means being transparent about your values to members and staff,” Rau said. “Providing oversight is key. It’s not enough to say, ‘Here’s the training and policy.’ You want to audit the use of the tools, so you know that people are following the guidelines.”
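What might such an audit look like in practice? One lightweight, hypothetical approach is to log every use of a generative AI tool along with its stated purpose, then flag entries that fall outside policy. The set of authorized purposes and the log format below are assumptions an association would define for itself.

    import csv
    import datetime

    # Hypothetical purposes the written policy authorizes.
    AUTHORIZED_PURPOSES = {"research", "member communication", "drafting"}

    def log_ai_use(user: str, tool: str, purpose: str,
                   logfile: str = "ai_audit_log.csv") -> None:
        """Append one row per use: who, which tool, stated purpose,
        and whether that purpose falls within policy."""
        within_policy = purpose in AUTHORIZED_PURPOSES
        with open(logfile, "a", newline="") as f:
            csv.writer(f).writerow([
                datetime.datetime.now().isoformat(),
                user, tool, purpose, within_policy,
            ])

    # During a periodic audit, rows where within_policy is False get flagged.
    log_ai_use("jdoe", "chat-assistant", "member communication")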


By Hannah Carvalho

Hannah Carvalho is Senior Editor at Associations Now.
