
AI Diagnoses 4x More Accurately Than Doctors? You Have My Attention.
by Lesley Dewey
Published August 18, 2025
I came across this Wired article and haven’t stopped thinking about it, or talking about it, since. Microsoft’s new AI system diagnosed medical conditions with four times the accuracy of human doctors. Not in some narrow edge case, but in a test built from REAL case studies pulled from the New England Journal of Medicine.
Let that sink in: AI was 80% accurate. Human doctors were 20% accurate.
That’s not something you simply scroll past.
What’s especially interesting isn’t just the breakthrough, it’s how AI got there. Microsoft’s system, MAI-DxO, doesn’t rely on a single model. It orchestrates multiple LLMs (from OpenAI, Google, Anthropic, and others) in a kind of “chain-of-debate,” mimicking the back-and-forth reasoning a real team of doctors might go through.
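To make the "chain-of-debate" idea concrete, here is a toy sketch of how several models might propose, compare, and revise answers before settling on one. This is purely illustrative: the `chain_of_debate` function, the panelist interface, and the majority-vote rule are my own assumptions, not how MAI-DxO actually works.

```python
# Toy sketch of a "chain-of-debate" orchestrator. The panelist interface
# (a callable from prompt to answer) and the majority-vote rule are
# hypothetical simplifications, not Microsoft's actual design.
from collections import Counter
from typing import Callable

def chain_of_debate(case: str,
                    panelists: dict[str, Callable[[str], str]],
                    rounds: int = 2) -> str:
    """Each panelist proposes an answer, sees the others' answers,
    and may revise; the final answer is the majority vote."""
    answers = {name: ask(case) for name, ask in panelists.items()}
    for _ in range(rounds):
        for name, ask in panelists.items():
            peers = "; ".join(f"{n}: {a}" for n, a in answers.items() if n != name)
            answers[name] = ask(f"{case}\nPeers said: {peers}\nRevise or confirm.")
    return Counter(answers.values()).most_common(1)[0][0]

# Stub "models" standing in for real LLM API calls:
stubs = {
    "model_a": lambda prompt: "anemia",
    "model_b": lambda prompt: "anemia",
    "model_c": lambda prompt: "fatigue",
}
print(chain_of_debate("Patient reports persistent tiredness...", stubs))
```

With the stubs above, the majority answer wins; in a real system each callable would wrap a live model and the debate prompts would be far richer.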
It’s early, and there are plenty of caveats (real-world clinical use is a long road), but the direction is clear: AI isn’t just helping us move faster, it’s starting to think better. And that has implications well beyond healthcare.
Why this matters to all of us
If AI can outperform humans in complex diagnostic reasoning, where else might it start to lead?
- Could it guide decision-making in law, finance or infrastructure?
- What happens when your business no longer needs more data, but better aggregation of it?
- And how should leaders prepare their teams to work alongside tools that don’t just help, but often outperform?
These are the questions I’m asking as I dive deeper into how AI is changing the way we work, think and operate our businesses.
If this article sparks something for you – a curiosity, a concern, an opportunity – let’s talk. My goal is to help professionals explore where AI fits in their world and how to start without being overwhelmed by the options.

"This Call May Be Used to Train a Language Model"
The fine print you never agreed to—and what smart companies are doing about it.
by Lesley Dewey
Published September 8, 2025
What Happened: The Otter.ai Lawsuit
In August 2025, a class action lawsuit was filed against Otter.ai in federal court in California. The allegation? That Otter’s notetaker can auto-join Zoom, Google Meet, or Teams meetings, record conversations without full participant consent, and then use those transcripts to train its AI models.
Key word: allegation. Otter hasn’t had its say in court yet. But regardless of outcome, the case is forcing a very real conversation about AI, privacy, and the silent attendees in your meetings.
Why This Should Worry You (Especially If You're in Leadership)
I learn more about AI every week, and here’s the uncomfortable pattern I see repeating:
- Productivity outruns policy: Someone connects a calendar, and suddenly the bot is in every meeting, capturing sensitive conversations without anyone realizing it.
- Consent is not a checkbox: Just because the host hits “OK” doesn’t mean every participant - especially external clients - has truly agreed.
- Lack of ID doesn’t equal anonymity: Scrubbed names won’t stop strategy, pricing, or trade secrets from being recognizable to a model consuming that data.
- Trust is fragile: When people find out they were recorded without notice, they don’t forget - and they rarely forgive.
What Security Cameras Taught Me About AI Bots
Before I moved toward AI education, I spent years in commercial security – discussing camera and access control systems with businesses, restaurants, churches, multi-family properties, and more.
That meant lots of thorny privacy conversations. Like whether a restaurant manager could legally record audio during employee/customer interactions. (Spoiler: often no.) And those lessons map directly to AI notetakers.
Here’s what I learned:
- Audio is governed differently than video. Many states allow silent video surveillance. Audio recording often requires everyone’s consent.
- Owning the building doesn’t mean you own the people in it. Property rights don’t override labor laws, HR policies, or state wiretap laws.
- Fine print is not real notice. A sign in the break room isn’t consent. And “hidden in the privacy policy” won’t hold up.
- Stored recordings are liabilities, not assets. If you capture it, you’d better secure it - and be prepared for discovery.
Now replace “camera” with “AI bot,” and you’ve got the same risk profile.
If you wouldn’t allow a mic’d-up security camera in a performance review, don’t let an AI notetaker join it either.
The Leadership Test: Can You Pass It?
This isn’t just an Otter problem. It’s a leadership gut check.
If your AI notetaker knows your quarterly numbers, your merger plans, and who’s on a performance plan... and nobody flagged it? That’s not rare. That’s your normal. Time to fix that problem.
8 Practical Things to Do This Week
Here’s the checklist I suggest:
- Turn off auto-join. Require someone to manually invite a notetaker only after consent is secured.
- Add calendar warnings. Make pre-meeting notices automatic. “This meeting will be recorded” should be in every invite.
- Get verbal consent. Use a script. Try: “An AI notetaker is joining to transcribe. This data will be stored by a third party. Do you consent to continue?”
- Define red zones. No bots in legal calls, HR issues, performance reviews, board meetings, M&A, or NDA-protected client calls.
- Ask vendors the hard questions: Does your data train their model? What’s the deletion policy? Can you opt out of training?
- Shut down link sharing. Make all recordings private by default. No more “Oops, we left the link public” regrets.
- Train your team. Teach them how to spot a bot, stop a bot, delete a bot, and report misuse.
- Track the tech. Keep an internal register of which tools are being used, by whom, where they store data, and who’s responsible.
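The last checklist item, an internal register of AI tools, can be as simple as a structured list that policy checks can run against. The sketch below is one way to start; the field names and the example entry (including the "ExampleNotetaker" vendor) are hypothetical.

```python
# Minimal sketch of an internal AI-tool register, per the checklist above.
# Fields and the sample entry are illustrative, not a vendor's real data.
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    tool: str                  # product name
    owner: str                 # person/team responsible internally
    used_by: list[str]         # teams using the tool
    data_location: str         # where recordings/transcripts are stored
    trains_vendor_model: bool  # True if our content trains the vendor's model
    auto_join: bool            # True if the bot can auto-join meetings

register = [
    AIToolRecord(
        tool="ExampleNotetaker",   # hypothetical vendor
        owner="IT Security",
        used_by=["Sales", "Marketing"],
        data_location="Vendor cloud (US)",
        trains_vendor_model=False,
        auto_join=False,
    ),
]

# Flag any entry that violates the policy stated in the script below:
violations = [r.tool for r in register if r.trains_vendor_model or r.auto_join]
print(violations)  # empty list when every tool complies
```

Even a spreadsheet works; the point is that "who uses what, where the data lives, and who owns the risk" is written down and checkable.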
What to Say to Your Team Today
Here’s a suggested script:
“AI notetakers can be used - with consent - for the right meetings. They’re off-limits for sensitive topics. Auto-join is off by default. We will announce recording at the start and stop if anyone objects. We do not allow our content to train vendors’ models. When in doubt, ask before inviting the bot.”
Final Word
Whether Otter wins or loses, this case is already reshaping how AI shows up in the workplace.
Smart leaders won’t wait for a verdict. They’ll take action now - to secure trust, protect IP, and avoid legal headaches later.
The fix is simple. The risk of ignoring it is not.

The Double-Edged Sword of AI
by Lesley Dewey
Published August 25, 2025
I use AI every day to save time and spark ideas. But am I giving up something in return? Like my ability to think deeply and critically?
Fortunately, a recent Diary of a CEO podcast has me doing both. Steven Bartlett interviewed Dr. Daniel Amen, a brain-health expert and psychiatrist, and Dr. Terry Sejnowski, a pioneer in computational neuroscience, and their conversation was both thoughtful and encouraging.
I talk about AI a lot. So much so that I have active “good vs. evil” debates about it with my 15- and 18-year-old children at least once a week. My son believes it has a significant negative impact on human intelligence. My daughter offered the example of a teacher who has used it to create curriculum she obviously didn’t read or proof before attempting to teach the content to the class. They both believe their high school teachers are using it to “cheat” while working hard to ensure their students are unable to do the same.
So which is it? Is AI good or evil? Cheating, or eliminating the busywork and thereby freeing our brains for higher-value tasks? And how do we as parents, educators and business leaders use and teach AI responsibly?
Decades in sales and marketing have taught me the value of critical thinking, creativity and human connection. My recent immersion into AI has shown me both the promise and the pitfalls in all three of those categories.
What I took away from the podcast, and my own experience, is that AI is neither hero nor villain. The danger comes when we let it think for us. The opportunity comes when we use it interactively, treating it as a partner in shaping ideas. AI can’t replace the judgment, ownership or human creativity that come from actively engaging in a topic.
For me, that means letting AI handle the repetitive, time-consuming tasks I dislike so I can focus on higher-value thinking that actually grows my business and helps me build relationships. It isn’t about outsourcing my brain, it’s about buying back time to use it more fully.
I’m exploring these questions daily, both in my work and at home. Here’s what I’ve settled on so far: everyone should be AI literate. Used wisely, these tools allow for an entirely new universe of learning and level the playing field for all of us. AI literacy isn’t optional anymore; it’s a skill none of us can afford to ignore.
What about you? When you use AI, does it feel like a crutch or a catalyst?
Credit: The Diary of a CEO podcast, Steven Bartlett with Dr. Daniel Amen and Dr. Terry Sejnowski.

Why Walmart’s Alliance with OpenAI Is the Most Important Retail Shift Since Amazon Prime
By Lesley Dewey
Published October 26, 2025
On October 14, 2025, Walmart and OpenAI announced a strategic partnership that could permanently reshape e-commerce.
As a consultant focused on AI, specifically in sales and marketing, I believe this moment represents a turning point: the birth of agentic commerce. A shopping experience where customers don’t search; they converse, then buy.
Walmart’s massive product and fulfillment ecosystem, combined with OpenAI’s conversational intelligence (specifically ChatGPT’s new “Instant Checkout” feature), creates a friendly, chat-driven purchase equivalent to the word-of-mouth sale companies crave. One where shopping comes from conversation.
This isn’t a baby step, it’s a giant leap toward an AI-first shopping model merging convenience, personalization and prediction in one interface.
It’s Frictionless: No more clicking through websites – purchases are completed within the chat via Instant Checkout.
It’s Personal: Walmart’s internal AI “Sparky” learns your habits, budget, and preferences and anticipates what you’ll need next.
It’s AI-Driven Discovery: SEO is moving aside for AI Visibility Optimization – brands must now be recommended by the AI, not just found with keywords.
This move directly challenges Amazon’s dominance at a crucial point in the buying process – the moment of intent.
Amazon’s empire is built on owning the sale after the search. Walmart (via ChatGPT) is now positioned to own the conversation before the search even happens.
The retail AI wars have officially begun. Amazon’s response will likely include accelerating its collaboration with Anthropic and rolling out its own “AI Shopping Agents.”
Brick-and-mortar retail will shift once again. The positive for Walmart is that ChatGPT-powered orders will turn physical stores into local hubs for fulfillment and pickup. Unfortunately, smaller, purely physical retailers may lose customers as AI automates routine purchases. Physical stores will be forced to focus on immediacy, discovery and human experience rather than simple, routine transactions.
Logistics and supply chain performance expectations will be redefined by this partnership. Shopping via the convenience of “chatting” will lead to higher frequency, but smaller purchases. The consumer will expect delivery NOW. Carriers and distributors will feel additional pressure to provide real-time tracking, analytics and automation.
Walmart will win - at least temporarily. OpenAI will win - as they gain the most valuable asset: rich, real-time transactional data from one of the world’s largest retailers. Hopefully, consumers will win – provided wrong products aren’t recommended, deliveries aren’t late, and there aren’t any privacy breaches.
Walmart has successfully forced the conversation and disrupted the traditional e-commerce search model, but Amazon is far from a static target and will not give up the agentic AI space easily. The sustainable lead will belong to the player with the best long-term proprietary assets. And the war will ultimately be won by the retailer who best blends AI, fulfillment and customer trust.
