Are you recording the conversation?
AI notetaking can prove useful and drive real value. It can also erode trust.
I got a notification from something called Fathom the other day. The subject line declared that someone wanted to record our upcoming Zoom call.
I opened the email and discovered that Fathom is an AI notetaking platform. The email it sent asked for my consent to be recorded. It even allowed me to ask for a copy of the recording (hello demonstrating value and getting new users).
My team facilitates sessions regularly. I’ve already flagged that AI (or other technology) could prove useful in capturing notes while the facilitator focuses on the other aspects of facilitating. And bonus, we would end up with a word-for-word transcript we could search, summarize, and query with the help of AI.
But as a human being sitting in my well-worn desk chair looking at this email in my inbox, I felt uncomfortable. No, I didn’t want to be recorded, so I clicked the no option.
I trashed the email instinctively and, after a few seconds, pulled it back out and went on a little digging adventure. Fathom touts its security practices, which I appreciate insofar as they keep recordings in the hands of the recorder.
But what if you don’t want them to have a recording of you in the first place?
Additionally, while it sounds like Fathom users can set up a workflow that asks for consent, the language in the review I read suggested they can also choose not to ask.
I’ll spare you an exploration of privacy concerns and instead focus on the part that I think matters more: trust.
My team hasn’t moved forward with recording our sessions because we know that when you tell someone they’re being recorded, they immediately become uncomfortable, whether actively or passively. They’re aware that whatever they say could be brought back up later — in or out of context — and that may or may not be something they want.
When seeking insight into the value your people need or what your team thinks, recording paranoia is a barrier no one needs. Sometimes simply speaking in front of the other people in the room proves challenging. What good is a guarded dialogue where you can hear the eggshells crack?
Cultivating a space where people feel comfortable contributing to the conversation takes work because, among other things, it requires trust.
Trust that you actually want to hear what they have to say, trust that the other people in the room will respect them when they say it, and trust that whatever they say won’t be used against them later. And that says nothing about instances where contributors put trust in you to act on what they share.
Despite these challenges, I see the legitimate value that applying AI to notetaking could provide to my team and, ultimately, to the clients I serve. As I’ve already received a second consent email from Fathom, I anticipate that the number of people looking to leverage this tool will only increase (and in fairness, the concept of recording meetings isn’t new).
We need to raise the bar. As a leader in the value economy, you need to communicate your choices clearly so those conversing with you know what to expect. (Since you’re the person with whom they’re building trust, consider sharing this information yourself rather than leaving it to the robots.)
Start by exploring your own perspective. How do you feel about being recorded? Do you feel differently if you’re recorded the old-fashioned way, if you’re being recorded with video, or if you’re being recorded by an AI bot that’s going to work with the data contained in that recording? How do you want the people with whom you’re building a relationship to feel, and what can you choose to do to help them feel that way?
Then, ensure the platform recording and storing your conversations is secure. What additional steps are you taking to protect this information? If you lead a team, do your team members understand what to save and how to save it? Or what they can and can’t share and how they can and can’t share it? Do you?
Even good security gets breached. You need to know what the platform does in the event of a security breach. You also need to know what you do if, say, a team member puts parts of a conversation into another AI platform and accidentally (or intentionally) makes the information public.
Then you need to understand the platform’s data erasure policies and explore your own. If someone reaches out and says, “Hey, I’m not comfortable having something I said in Friday’s meeting on the record. Can you delete it?” what do you do? How do you respond?
And lastly, how do you decide what gets recorded and what doesn’t? Technology makes it so damn easy to record and store everything. There’s an almost animal urge to hoard data like a dragon hoards gold.
The old way of working fed that urge by assuming that the only reason not to record a meeting is that it’s prohibited by law. In the value economy, just because you can doesn’t mean you should.
Ask instead: What’s the point of recording meetings? What do you want to do with that information? How will you go about doing it? Does what you’re proposing treat people as someone, not something?
Your values provide some guardrails, but when in doubt, ask yourself: Does this choice build trust or erode it?
I’ve written my own message that goes out in advance to the people I meet with. It’s encouraged dialogue, and I’m glad for that.




Extremely thought-provoking post, Katie.
I've been using Fathom AI for a year now and find it to be an invaluable tool, especially when trying to recall what was said in a meeting from two weeks ago. However, you bring up several valid points around value, comfort, and privacy.
Normally I would say the technology itself is agnostic and whether it's used for good or evil depends on the ethics and morals of the user. But I'm not sure if I can honestly say AI technology is agnostic.
Giving this further thought...