Features

Association Boards and Technological Harm: Part I

By Jeff De Cagna, AIMP FRSA FASAE • June 6, 2023

AUTHOR’S ATTESTATION: This article was written entirely by Jeff De Cagna FRSA FASAE, a human author, without the use of generative AI. 

In my November 2022 The Duty of Foresight column, “The Six Toughest Decisions Association Boards Must Make,” I challenged association boards to determine how they will safeguard their stakeholders and successors from technological harm. As I wrote at the time:

“When it comes to the expanding embrace of technology in both their associations and the broader world, however, boards face a fundamental dilemma: technology will be at the core of their organizations’ pursuit of long-term thrivability even as it remains an ongoing source of unintended negative consequences for stakeholders and successors.” [Emphasis in original]

On November 30, 2022, just 15 days after the column was posted online, OpenAI released ChatGPT, and the dimensions of the dilemma increased in complexity and consequence. Depending on your point of view, the last six months of explosive growth in generative AI adoption have been either an exciting time of greatly increased workplace efficiency and productivity or a confusing period of upheaval created by immediate global access to an unreliable and unregulated technology tool.

Regardless of their individual perspectives on the rapid rise of generative AI, association boards and staff partners have a shared stewardship responsibility to collaborate with discipline and focus as they confront the principal dilemma before them and strive to safeguard both current stakeholders and long-term successors from technological harm.

Three AI-Enabled Harms

Last month, I released an open letter outlining serious concerns and critical questions about generative AI and suggesting essential actions the association community can take to address them. In this column, I want to explore three specific AI-enabled harms that associations must strive to contain:

• Loss of human agency: In February 2023, Pew Research Center released a report on the future of human agency, i.e., the ability of human beings to retain their autonomy and make their own decisions. In the report, 56% of experts argued that “by 2035 smart machines, bots, and systems will not be designed to allow humans to easily be in control of most tech-aided decision-making.” [Emphasis in original] Long before the emergence of ChatGPT and other generative AI applications, AI-enabled products were perpetuating bias, contributing to discrimination, undermining privacy, and enabling surveillance. Associations must be vocal in opposing these harms. Moreover, because associations help to advance human agency, boards and staff partners must embrace their unique and crucial responsibility to safeguard stakeholders and successors against its loss.

• Loss of human contribution: Putting aside the hype created by dire predictions of massive AI-related job losses, the swift adoption of generative AI across myriad industries, professions, and fields is raising legitimate concerns about its disruptive impact on the present and future of work. The shift to more remote work during the pandemic lockdown accelerated corporate investment in AI, and many companies were already rethinking human jobs in favor of AI-based alternatives. Another pragmatic consideration is the unauthorized use of human-created intellectual property to train the underlying generative AI models that power applications such as ChatGPT for text or Midjourney for visuals. For boards and staff partners, protecting and strengthening human contribution at work and to the larger world must be an inviolable priority today and going forward.

• Loss of human empathy: Many generative AI advocates describe their interactions with ChatGPT as “conversations,” and others refer to these tools as their “buddies” or “interns.” While machine intelligence does provide benefits to humanity, it is not human. Indeed, it is highly detrimental to ascribe human attributes or feelings to generative AI tools that have no understanding of either the prompts required to extract their synthetic outputs or the outputs themselves. AI is math, and it does not possess compassion or empathy for human beings. Instead of anthropomorphizing generative AI, association boards and staff partners will be better served by working together to nurture an empathic understanding of how human beings have been exploited by AI development and of the potential hazards on the horizon for their stakeholders and successors.

Later This Month

In Part II, I will share policy, practice, and programmatic steps that boards and staff partners can take to address both current and future concerns around generative AI. Until then, thank you for reading and please stay well.

About The Author

Jeff De Cagna FRSA FASAE, executive advisor for Foresight First LLC in Reston, Virginia, is an association contrarian, foresight practitioner, governing designer, stakeholder and successor advocate, and stewardship catalyst. In August 2019, Jeff became the 32nd recipient of ASAE’s Academy of Leaders Award, the association’s highest individual honor given to consultants or industry partners in recognition of their support of ASAE and the association community.

Jeff can be reached at [email protected], on LinkedIn at jeffonlinkedin.com, or on Twitter @dutyofforesight.

DISCLAIMER: The views expressed in this column belong solely to the author.