
Introductory Thoughts on Artificial Intelligence Technology for Boards of Directors

Todd Wallace and Hether Jonna Frayer  |  September 25, 2023

Hether Jonna Frayer interviewed Todd Wallace, Columinate’s board president, about AI.

Hether:

Thanks for agreeing to do this interview on AI technology and how it could be used to support the work of governance! I’m getting the sense from the conversations that I’ve been having that it is a wide-ranging topic. Where would you like to start? 

Todd:

Thanks for asking me to have the conversation. As you know, it is a topic that I’ve been interested in since last year, especially as applications like ChatGPT have garnered more attention and scrutiny in the mainstream media.

Logically, I think a good place to start would be with clarifying exactly what we mean when we say “AI.” Without getting too technical, we are talking about machine systems that perform tasks that typically require human intelligence. Among these systems we can distinguish between narrow or weak AI, meaning systems designed to perform specific tasks or solve specific problems, and general or strong AI, meaning systems whose comprehension and learning are equal to those of human beings. Needless to say, general AI is purely theoretical and does not currently exist outside of science fiction.


Hether:

So, what we are talking about today, with regard to the ways that AI could support governance and the work of boards of directors, is narrow or weak AI. How can that be helpful in the boardroom and with board work in general?

Todd: 

Well, let’s take large language model (LLM) tools like ChatGPT and Bard, which have been talked about constantly in the media of late. These models are trained on vast amounts of text data, mostly scraped from the internet, and their sheer size is made possible by specialized AI accelerator hardware that can process all of that data.

Now, it’s easier for me to show you rather than tell you, but here is an example. Imagine I am a CEO or I am on the board of a large business, and one of the units of that business is struggling to make money. I could ask ChatGPT, “What factors should I consider before I decide to cease the operations of this business unit?” In a matter of seconds, it will generate a list of various factors for me to consider. By the way, when I last did this, it listed twelve in all, including financial performance, long-term viability, and employee impact. Then, I could ask it to give me more detail about just one of the factors, let’s say employee impact, and it can go deeper, mentioning the need for open and transparent communication, consideration of the emotional impact on employees, legal obligations, etc. I can also ask it to re-state everything it just told me in simpler language…or even tell it that I didn’t like the first answer it gave me and to please try again.

So, in this example, I am really using ChatGPT as a “sounding board,” to stimulate my own thinking and help me come up with new ideas and perspectives related to my question. Do you see now how that could be a useful aid in the boardroom? And we are just talking right now about Natural Language Generation, but there are AI tools being developed that can do a host of tasks: create images, analyze complicated data, build visual presentations, and much more. Depending on the specific application and context, AI tools can be used to help stimulate new thinking and ideas, to provide counter-arguments to proposals, to provide summaries of articles or reports, or as a basic training tool. In addition, some early studies have shown that AI can encourage or improve collaboration.
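For readers who want to see the mechanics behind this kind of back-and-forth, here is a minimal sketch of the same “sounding board” exchange using the OpenAI Python client (version 1.0 or later). The model name and the exact prompt wording are illustrative assumptions, not a recommendation of any particular setup.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Keep a running message list so each follow-up builds on the
# answers that came before, just like the conversation Todd describes.
messages = [{
    "role": "user",
    "content": ("What factors should I consider before I decide to "
                "cease the operations of a struggling business unit?"),
}]

reply = client.chat.completions.create(model="gpt-4", messages=messages)
answer = reply.choices[0].message.content
print(answer)  # e.g., a list of factors such as financial performance

# Drill into one factor, and ask for plainer language, by appending
# the model's last answer and a new question to the history.
messages.append({"role": "assistant", "content": answer})
messages.append({
    "role": "user",
    "content": ("Go deeper on the employee impact factor, and restate "
                "your points in simpler language."),
})
reply = client.chat.completions.create(model="gpt-4", messages=messages)
print(reply.choices[0].message.content)

The point of the sketch is the running message list: each follow-up question is answered in the context of everything that came before, which is what makes the drill-down on a single factor, or the “try again” request, possible.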

Hether:

Could you say more about the collaboration aspect?

Todd: 

Sure. I have actually experienced the collaboration component first-hand. I was recently in a conference workshop where I was placed in a group of strangers. We were tasked to come up with a list of potential solutions to a problem using ChatGPT as an aid (so, brainstorming together). We explained the task to the AI and asked it for some options. Now, normally in this kind of context, people can be quite reserved. They might be shy to engage with others’ ideas, as they don’t want to offend someone who they don’t know well by critiquing them too harshly. Or, alternatively, individuals can be overly critical or too dominant, because they can’t “read” the group dynamic well. In our case, though, we skipped right through that awkward stage of having to “feel out” one another, and were able to leap right into the task. Because the ideas were generated by the AI, no one felt like they had to worry about hurting anyone’s feelings if they didn’t like an idea. Also, because the “attention” was more focused on the AI, we all felt free to engage with what it was offering, rather than having to negotiate social niceties. It was a fascinating experience!

Now, I should emphasize that this was only one experience, in a low stakes, artificial situation, so I do not want to overstate the implications of it. I also don’t want to suggest that AI is some kind of panacea to collaboration challenges, but the whole thing piqued my curiosity. The early studies I have seen on this have been done in the areas of business (project management) and education. I am interested to see what the research will show over time as AI applications are utilized more in the workplace.

Hether: 

Okay, we have talked a bit about the possibilities; what about the limitations of the technology?

Todd: 

Yes, it is vitally important to understand the limits! First, AI applications like ChatGPT are definitely not good for fact finding or fact checking. In fact, when you use it (as of the date of this writing), underneath the prompt bar it even says in small letters, “ChatGPT may produce inaccurate information about people, places, or facts.” So, please do not rely on it to check the accuracy of your information.

Secondly, you should not expect the technology to provide non-biased views. Again, using ChatGPT as an example, the application was trained on text databases from the internet (for GPT-3, about 300 billion words from books, web texts, Wikipedia, articles and other sources). These datasets represent only a narrow slice of the vast range of perspectives that exist in the world, and they are themselves rife with conflicting points of view and inaccuracy. Additionally, its own underlying programming encourages responses that are uncontroversial and “safe,” which is another form of inherent bias.

Third, you should not use the technology for analyzing confidential, proprietary, or private data. Anything you enter into a public tool may be retained by the provider and even used to train future models, so treat your prompts as potentially public.

Finally, and perhaps most importantly, we should remember that while these AI tools are powerful (and can wow us with their capabilities), they are not capable of deductive reasoning, or human levels of discernment and empathy. Therefore we should not use them to replace the judgment of empathetic, human decision-makers.

Hether:

Do you see boards using AI currently, and if so, how are they using it?

Todd: 

It obviously depends on the culture of the group and their comfort with technology, but yes, I have started to observe some use of AI in the boardroom. For example, back in February of this year, I facilitated a group discussion where one of the topics raised was about the use of AI generally at the co-op. During the discussion, the CEO shared that he had used ChatGPT to assist in the writing of his strategic plan and the subsequent presentation to the board. (Everyone agreed that it was a great presentation.) I had one board administrator at another co-op share with me that their board used AI to assist them with the writing of their manager’s evaluation. I know of a couple of boards who have experimented with using it to help them write policy.

Hether: 

What are the possibilities for AI use in the future?

Todd:

There are many possible future applications. Just around the corner, I can see AI tools being used to assist with board administration tasks. Also, there is the potential to use AI to organize or present information in various formats, and thereby be more inclusive of the differences in how people process information. Imagine easily creating board packets that are customized to each user’s individual preference for understanding data. Or using AI to perform advanced data analysis that could then be summarized and shared with the board. Or assigning an AI assistant to new board members to help them with onboarding. Could I someday imagine an AI board coach? Yes, absolutely. Perhaps much further into the future, there could even be an AI appointed to the board as a voting member. Yes, I know that sounds far-fetched, but there are already legal theorists speculating on the potential questions and frameworks for this, should legislation eventually allow it.

Hether: 

What advice would you give to boards who are interested in using it?

Todd:

Well, the technology has evolved, and continues to evolve, at incredible speed, so I will keep my suggestions broad and high-level. Anything I say that is too specific could be outdated in a matter of months, if not sooner.

First, if you are a board or board leader who is interested in bringing AI into your governance work, but are new to it, educate yourself about the technology, how it works, how it could help, but also its limitations and risks. This includes learning about the broad societal implications, positive and negative, of the increased use of AI. (In my opinion, we do not often think carefully, as a society, about the impact of our use of powerful technologies, and this type of technology has the potential to bring about great benefits—but that could also come with great costs.) If you are interested in the broader impact of AI on our society, there are numerous experts and thinkers who have been writing about it. To start, you could search for articles and interviews by Sam Altman, Kelsey Piper, Gary Marcus, Cal Newport, Jaron Lanier, and Ted Chiang. They all have compelling (and sometimes conflicting) takes on the broader implications of this technology.

Second, if its use would be a big change or shift away from the way you currently do things, have a conversation as a whole group to surface people’s specific concerns or aspirations, and consider how to incorporate these perspectives into whatever choices you make as you move forward. Remember the group discussion that I referenced earlier? The outcome of that was a shared agreement that they would carefully consider any broad implementation of AI technology, to be assured that their use of it would be consistent with their organizational values.

Third, don’t be afraid to experiment and try new things, but do it in a way that minimizes risk and allows you to easily amplify what works well and diminish what does not.

Finally, I would like to quote and recognize Paul Smith, the founder and CEO of the Future Directors Institute. Based in Australia, he is someone who is incredibly passionate about board and governance excellence, and my views on this topic are highly influenced by his work. On using AI tools in governance, he said, “Think of it as a supplemental resource that can add value and save time. It’s an extra input, not the input,” which I think is sage advice.

About the Authors

Todd Wallace

Thinking Partner & Facilitator

toddwallace@columinate.coop
503-307-8797

Hether Jonna Frayer

Governance & Leadership Development

hetherjonnafrayer@columinate.coop
269-598-6857

