Why GRC Teams Should Take A 'Yes, And' Approach To AI

By Matt Kunkel, CEO, Forbes Technology Council Member

“Yes, and…” is a principle that originated in improv comedy but has become common in other contexts. The idea is simple: telling someone “no” can grind a situation to an awkward and uncomfortable halt, whether that situation is a comedy sketch or a corporate brainstorming session.

It’s better (and more productive) to roll with new ideas as they emerge, working together rather than against one another to build a stronger, more compelling result. That doesn’t mean every idea is a good one, but this approach at least provides the opportunity to fully examine a proposal rather than simply rejecting it. Who knows? Sometimes, talking through a bad idea can lead to something better.

This is particularly true for governance, risk and compliance (GRC) teams, which are often viewed as the “Department of No.” GRC teams look at everything through the lens of risk, and it can be easy to dismiss new ideas because they appear “too risky.” However, as technology like artificial intelligence (AI) becomes increasingly widespread, saying no isn’t always an option.

While AI has its risks, the technology has the potential to revolutionize the way modern businesses operate—which means failing to adopt it might be the biggest risk of all. Rather than saying “no” to new AI tools, GRC teams should instead look for ways to say, “Yes, and…here’s how we can limit the risk.”
