
Kate O’Neill on AI, Risk, & Readiness



Picture this: It’s 2025. Your marketing intern used an AI tool to generate content for your biggest client, accidentally included hallucinated product features, and hit send before anyone could review it.

Gave you a chill, didn’t it?

As the creator economy races to adopt generative AI tools, pausing to build proper content governance should be your next step.

Lucky for us, the author of “What Matters Next” and the founder and CEO of KO Insights, Kate O’Neill, shared her wisdom on navigating the wild west of AI-powered content creation before your organization faces its own content crisis.

This interview is part of G2’s Q&A series. For more content like this, subscribe to G2 Tea, a newsletter with SaaS-y news and entertainment.

To watch the full interview, check out the video below:

Inside the industry with Kate O’Neill

Your latest book, “What Matters Next,” addresses future-ready decision-making. Can you tell us how this applies specifically to content risk management?

I think future-ready decision-making is a concept or a mindset that involves a balance between business objectives and human values. This plays out in tech because the scale and scope of tech decision-making is so huge. And a lot of leaders feel daunted by how complex that decision-making is.

Within content risk management, what we’re looking at is a need for governance and for policies to be put in place. We’re also looking at a proactive approach that goes beyond regulatory compliance.

The key is understanding what matters in your current reality while anticipating what will be important in the future, all guided by a clear understanding of what your organization is trying to accomplish and what defines your values.

I think a focus on developing robust internal frameworks, grounded in purpose and organizational values, will really benefit people when it comes to content risk.


Speaking of content risks, what are the most significant hidden risks in content strategies that organizations typically overlook, and how can they be more conscious of them going forward?

When I worked for a large enterprise on the intranet team, our focus was not just on content dissemination but also on maintaining content integrity, managing regulations, and preventing duplication. For example, different departments often kept their own copies of documents, like the code of conduct. However, updating these documents could lead to inconsistent versions across departments, resulting in “orphaned” or outdated content.

Another classic example I have seen many times is a work process getting instantiated and then codified into documentation. But that document reflects one person’s quirky preferences, which stay ingrained in the documentation even after that person leaves. This leads to maintaining non-essential information without a clear reason. Those are low-key, low-harm risks, although they add up over time.

What we’re seeing in the higher-stakes cases is a lack of clarity or transparency across communications, and an inability to understand which stakeholders are accountable for different pieces of content.

Also, with generative AI being used within organizations, we see a lot of people generating their own versions of content and then sending that out on behalf of the company to clients or to outside-facing media organizations. And those aren’t necessarily sanctioned by the stakeholders within the organization who would like to have some kind of governance over documentation.

A comprehensive content strategy that addresses these issues at regulatory, compliance, and business engagement levels would go a long way toward mitigating these risks.
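To make the “orphaned copies” failure mode from O’Neill’s intranet example concrete, here is a minimal sketch in Python of how a governance team might detect it. The directory layout (an intranet root with per-department folders) and the file name are hypothetical, purely for illustration; the idea is simply to group every copy of a shared policy document by its content hash, so that more than one group signals departments have drifted onto inconsistent versions.

```python
# Minimal sketch: detect divergent copies of a shared policy document.
# Assumes a hypothetical layout like intranet/<department>/.../code_of_conduct.md;
# adapt the root path and file name to your own content repository.

import hashlib
from collections import defaultdict
from pathlib import Path


def find_divergent_copies(root: str, filename: str) -> dict[str, list[Path]]:
    """Group every copy of `filename` under `root` by SHA-256 content hash.

    More than one key in the result means departments hold inconsistent
    versions of what should be a single, canonical document.
    """
    versions: dict[str, list[Path]] = defaultdict(list)
    for path in Path(root).rglob(filename):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        versions[digest].append(path)
    return versions


if __name__ == "__main__":
    copies = find_divergent_copies("intranet", "code_of_conduct.md")
    if len(copies) > 1:
        print(f"Found {len(copies)} divergent versions:")
        for digest, paths in copies.items():
            print(f"  {digest[:12]}: {[str(p) for p in paths]}")
```

A check like this only surfaces the drift; the governance question of which version is canonical, and who owns it, still has to be answered by the accountable stakeholders O’Neill describes.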

With content strategies becoming global, how have regulatory differences across global markets complicated content risk management, particularly with the emergence of generative AI? What specific compliance issues should organizations be most concerned about?

We see this across many fields of AI. Generative AI in particular, because of its widespread use, is clashing with global regulations. Especially in regions like the U.S., where deregulation is prominent, companies face challenges in establishing effective internal governance frameworks. Such frameworks are crucial for resilience in global markets and for preventing issues like the dissemination of unrepresentative content that could misalign with a company’s values or positions, potentially compromising safety and security.

We need to think about resilience and future readiness from a company leadership standpoint. And that means being able to say, “We need the best kind of procedures for us, for our organization.” And that’s probably going to mean being adaptable to any market. If you do business globally, you need to be prepared for your content to be consumed or engaged with by global markets. 

“I think focusing on developing value-driven frameworks that transcend specific regulations is the right way to go.”

Kate O’Neill
Founder and CEO of KO Insights

We need to think proactively about governance so that we can create the kind of competitive advantage and resilience that will help us navigate global markets and changing circumstances. Because as soon as any particular government changes to a different leader, we may see these regulatory regimes fluctuate completely.

So, by focusing on long-term strategies, companies can protect their content, people, and stakeholders and stay prepared for shifts in governmental policies and global market dynamics.

I see that you’re very active on LinkedIn, and you talk about AI capabilities and human values intertwining. So, considering that balance, what framework do you recommend for ensuring that AI-powered content tools align with human-centric values, rather than the other way around?

Contrary to the belief that human-centric or values-driven frameworks stifle innovation, I believe they actually enhance it. Once you understand what your organization is trying to accomplish and how it benefits both internal and external stakeholders, innovation becomes easier within these well-defined guardrails.

I recommend using the “now-next continuum” framework from my book “What Matters Next.” This involves identifying your priorities now, engaging in scenario planning about likely future outcomes, defining your preferred outcomes, and working on closing the gap between likely outcomes and preferred outcomes. 

This exercise, applied through a human-centric lens, is actually the best thing I can think of to facilitate innovation because it really allows you to move quickly but also lets you know that you’re not moving so quickly that you’re harming people. It creates a balance between technological capability and ethical responsibility that benefits both the business and the humans connected to it.

“Think about the balance between technological capability and ethical responsibility and do that in a way that benefits the business and the humans that are in and outside of the business at the same time.”

Kate O’Neill
Founder and CEO of KO Insights

Looking ahead, what skills should content teams develop now to be prepared for future content risks?

Content teams should focus on developing skills that blend technical understanding with ethical considerations until this integration becomes second nature. The other piece is proactive leadership: really thinking about the uncertainty created by geopolitics, climate, AI, and numerous other factors.

And given the uncertainty of this time, I think there’s a tendency to feel very stuck. Instead, this is actually the best time to look ahead and do the integrative work of understanding what matters now and what will matter in the future — from one year to 100 years ahead. 

The key is pulling those future considerations into your current decisions, actions, and priorities. This forward-looking integration is the essence of “What Matters Next” and represents the skills many people need right now.

If you enjoyed this insightful conversation, subscribe to G2 Tea for the latest tech and marketing thought leadership.

Follow Kate O’Neill on LinkedIn to learn more about AI ethics, content governance, and responsible tech.


Edited by Supanna Das


