Discern first.

For many organizations, AI is a tricky context. If you haven’t yet adopted AI strategically, there’s a sense that it’s important to “get it right” — yet the field is changing so quickly, and getting it right can seem to involve deep complexity. In short, it feels daunting.

If you are already in the habit of beginning any consideration from your purpose and your values, you’re likely already approaching AI the way we advocate: discern first. The most basic reason is captured well in the old sayings “measure twice, cut once” and “more haste, less speed”. Simply put, there’s too much at stake to just follow the pack, the consultants’ recommendations, or the rumblings of the Board. Do the work. Here are just a few areas that require real care and attention, for reputational, cost, legal, and ethical reasons:

  • appropriate use (protecting aspects of your work that depend on relational depth, human connection, etc.)
  • the impact on your team’s nervous systems of the new pace at which automated workflows run
  • data privacy
  • bias and equity implications
  • accuracy and reliability
  • over-reliance, and its impact on essential skills within your team

Thankfully, important work is being done in all of these areas. We ground ourselves in it in order to support you to make wise plans for your mission: gaining benefit where it is clear, without sacrificing any of the value you’ve built through relational collaboration.

In many organizations, strong feelings make consideration of these ideas more challenging: fear of adopting a dangerous technology hyped for profit at the expense of the environment, built on exploitative labour, reinforcing and amplifying the very bias and power imbalances we’ve been working so hard to unsettle; fear of seeing our own or our colleagues’ jobs lost to automation during a cost-of-living crisis. These are existential fears. Unaddressed, they create tension, mistrust, and disengagement, sometimes loudly, sometimes stealthily. Facilitated discussion is usually a great start here, and can lead to valuable insights into work design, ethical or reputational safeguards, or learning and development plans. Ideally, it creates the space for you and your team to discern, together, what’s right for your mission and your purpose.

Want to talk?

If any of this sounds familiar — or you’d like to make sure it never does — we can help. Choose the description that best suits your moment and connect with us (no charge) to see if we can support you, in ways that may be quick and simple, may need more time, or may start simple and evolve . . .

“We’re planning adoption and would like to be discerning about the impact on culture and collaboration.”

Let's talk about
Culture-safe AI Adoption

“We’re not yet planning adoption, but want a diligent position to hold in the meantime.”

Let's talk about
diligence in not planning adoption

“We need to explore the ethics of this technology within our organizational values as a starting point.”

Let's talk about
talking about AI as a team

“Some of us are using chat and built-in AI tools, but we haven’t done any strategic thinking about it.”

Let's talk about
strategic use cases for AI in our work