Features

Association Boards and Technological Harm: Part II

By Jeff De Cagna, AIMP FRSA FASAE • June 27, 2023

AUTHOR’S ATTESTATION: This article was written entirely by Jeff De Cagna FRSA FASAE, a human author, without using generative AI.

In Part I of this series, I explained the fundamental dilemma around the use of technology that association boards must confront as they work to safeguard their stakeholders and successors from technological harm. In addition, I shared three specific harms enabled by generative AI (loss of human agency, loss of human contribution, and loss of human empathy) that associations must work to contain. In this Part II column, I will share policy, practice, and programmatic steps that boards and staff partners can take to address current and future concerns around generative AI.

Ethical Decisions Must Be First

Let’s be clear that associations are under no obligation to use generative AI tools. While generative AI advocates may take a different view, the mere fact that these products exist does not demand their immediate implementation. And despite the rapid pace of ChatGPT adoption immediately following its launch, there remains good reason for associations to consider a more measured approach to introducing generative AI. According to a recent Pew Research Center report, 58 percent of US adults are familiar with ChatGPT. At the same time, only 14 percent of all US adults have used it, and only about one in ten US adult paid workers who are aware of ChatGPT has used it for work-related purposes. In other words, there is still room for meaningful dialogue about whether and how associations should adopt generative AI.

The appropriate starting point for that conversation is clear: ethical decisions must be made first. With new technologies, the understandable human tendency is to embrace the hype of cool use cases first and contemplate unintended consequences later, and it is a pattern we keep repeating. As Ezra Klein points out in a recent column in The New York Times, “…I’m skeptical of this early hype. It is measuring A.I.’s potential benefits without considering its likely costs — the same mistake we made with the internet.” (I would add the long-term societal costs of social media and mobile devices to this list.) Hype aside, AI’s growing impact at every level of society demands that associations tackle crucial ethical questions first. For that purpose, I recommend the following three core principles as a starting point:

• Associations must adopt robust ethical standards for implementing all AI technologies—When considering the responsible use of AI, associations must identify and contain both risks and harms. In this context, boards and staff partners can begin by carefully evaluating the association’s current uses of AI to determine where and how the association is 1) putting itself at risk and 2) putting stakeholders and successors in harm’s way. This assessment of current risks and harms will provide a strong foundation for framing the ethical standards and the supporting policies and practices required for the safer implementation of AI products, including generative AI.

• Associations must focus on designing a responsible collaboration between human and machine intelligences—Once again, let’s be clear: my intention in writing this series is not to dissuade associations from using generative AI tools. Instead, I want to motivate boards, CEOs, and staff partners to grapple fully with their dilemma—seeking the benefits afforded by new technologies to build their associations to thrive while knowing that these technologies are causing real-world harms—and use their agency to develop a mutually beneficial approach to human-machine collaboration for their associations, stakeholders, and successors.

• Associations must make every effort to ensure that humans do not come second in an AI-first world—After years of being an enthusiastic and vocal supporter of swift technology implementation in our community, I continue to observe the myriad downsides of excessive techno-optimism. As we look toward the rest of this decade and beyond, I want to help association decision-makers steward their systems away from an AI-centered view of the future, and toward a reaffirmation of their commitment to the primacy of human agency, contribution, and well-being over our collective fascination with machines, no matter how “smart” they might be.

Six Specific Steps for Boards and Staff Partners

With the three ethical principles outlined above in mind, I want to share six steps that association boards and staff partners should consider to address AI’s present and potential risks and harms. Concurrent with these steps, associations must also prepare their boards of directors to engage in focused, fully informed, and meaningful conversations about AI’s short-term/long-term implications for their associations and the fields they serve. For the most part, association boards have been insufficiently prepared to have this conversation and thus are limited in fulfilling their stewardship responsibilities. Boards and CEOs must prioritize intentional learning to build a shared understanding of AI’s beneficial and detrimental impact on human beings, their work, and their lives.

Policy Steps 

• Craft policies to safeguard the association and its stakeholders from AI risks and harms—Association boards and staff partners must collaborate in policy development around AI, especially on potential areas of legal exposure, including data privacy, intellectual property protection and the unauthorized use of copyrighted materials in the training of AI models, and the discriminatory impact of biased AI algorithms. Association policies must also address overall cybersecurity concerns, the inaccuracy/unreliability of AI-generated content, and AI-based misinformation/disinformation, including the harms created by deepfake content. To go further, boards and staff partners should consider how to shape policies and practices that expand human agency, contribution, and empathy in the context of responsible AI use.

• Craft policies around acceptable and unacceptable use cases for generative AI—As part of their policymaking collaboration, boards and staff partners should identify specific situations in which the creation and use of generative AI outputs will be considered unacceptable. By first defining the boundaries outside of which generative AI use is impermissible, associations can better shape policies and practices for more responsible use within clearly defined limits. In addition, boards should seriously consider adopting a moratorium, at least through the end of 2023, on the use of generative AI as part of their associations’ most complex and high-stakes work, including at the board level, to create a space for additional sense-making and meaning-making before deciding on what constitutes acceptable use in these sensitive areas.

Practice Steps

• Design practices to ensure all association content creation operates within established policies—The absence of transparent, thoughtful, and inclusive practices to ensure policy adherence may undermine both internal and external confidence in association content creation (and creators) in the areas of certification and credentialing, professional development, publishing, and standards-setting. By designing and integrating AI-related practices as stakeholder contributions to stewardship rather than blunt enforcement mechanisms, associations can unleash intrinsically motivated care and vigilance among their most steadfast supporters.

• Design practices for using generative AI-enabled products with transparency and responsibility—When making use of generative AI platforms and tools, especially in the absence of more robust regulatory safeguards, every association stakeholder (including staff, voluntary contributors, and third-party partners) should 1) clearly identify and fully disclose their specific acceptable use cases and purposes in a manner consistent with applicable policies, 2) avoid using generative AI to inflict harm in any form, and 3) ensure that all synthetic media products, i.e., generative AI outputs, are identified with verifiable text-based attestations or embedded watermarks. As a matter of policy and practice, it may also be beneficial to encourage stakeholders to rely first on their own creative capabilities and use generative AI technologies only as an option of last resort.

Programmatic Steps

• Provide stakeholders with intentional learning experiences and resources to explain generative AI’s ethical dimensions—Associations must provide their stakeholders with clear guidance on how the board and staff view the ethical considerations of generative AI use. In addition, stakeholders will need access to specific learning opportunities to develop an empathic and holistic understanding of real-world ethical dilemmas and questions. Association stakeholders must nurture a more robust ethical point of view on generative AI’s unintended consequences, both for the association and in other aspects of their lives.

• Provide stakeholders with intentional learning experiences and resources to strengthen their human skills—While generative AI “prompt engineering” is the hot new topic for training and certification offerings, associations will better serve current stakeholders and long-term successors by directing their energies toward building sustainable human skills. These skills include collaboration, communication, cooperation, creativity and imagination, emotional intelligence, ethical decision-making, intentional learning, and reflection. Human beings who develop and sharpen these skills will bring an extraordinary level of capability to the task of working effectively and responsibly with AI.

Next Month

As I write this column, I am still considering the topic for my next two-part series in July and August. You can look forward to the surprise unveiling when Part I is posted next month. Until then, thank you for reading and please stay well.

About The Author

Jeff De Cagna FRSA FASAE, executive advisor for Foresight First LLC in Reston, Virginia, is an association contrarian, foresight practitioner, governing designer, stakeholder and successor advocate, and stewardship catalyst. In August 2019, Jeff became the 32nd recipient of ASAE’s Academy of Leaders Award, the association’s highest individual honor given to consultants or industry partners in recognition of their support of ASAE and the association community.

Jeff can be reached at [email protected], on LinkedIn at jeffonlinkedin.com, or on Twitter @dutyofforesight.

DISCLAIMER: The views expressed in this column belong solely to the author.