AI in veterinary medicine: safe, cautious, and high-risk uses
Not long ago, I saw a talk show clip with a curious idea: a big red button that could wipe your digital footprint from the internet in an instant. The thought was that young people, when they turn 18, should get the chance to start fresh without carrying the weight of everything they’d shared online before.
It struck me as both relatable and unsettling. When I was younger, social media felt harmless. We started on Hyves, the first real Dutch social network, which now sparks nostalgia. Then came Facebook, and gradually the tone shifted. The internet became personal, data became more valuable, and what you posted began to have real-world consequences.
That reminded me of AI. Not because it’s the same, but because it’s just as easy to dive in without stopping to think: What am I sharing? What choices am I handing over to technology? What risks come with it? AI is moving fast, in veterinary practices too. And just like with social media back then, this is the right moment to pause and ask:
- What do we want?
- What don’t we want (yet)?
- And what only under the right conditions?
How do you know if something is “responsible”?
That’s tricky. What feels like a helpful tool to one person raises doubts for another. To make the discussion easier, it helps to group AI uses into three categories using a simple traffic light model:
🔴 Red – best avoided
🟠 Orange – possible, but with clear rules and oversight
🟢 Green – safe to use in daily practice
It’s not an exact science, but it provides a practical guide to structuring conversations.
🔴 Not everything that’s possible should be done
Some AI applications sound impressive, but just aren’t a good idea in practice. These applications are high risk, hard to control, or damaging to privacy, safety, or trust.
Examples include:
- Facial recognition in the waiting room
- Automatic monitoring of employee behaviour
- AI making diagnoses or prescribing medication without human supervision
Technically, these systems might “work.” But they shift responsibility away from people and erode trust. In healthcare, that’s not an option.
🟠 Useful, but only with guidance
Most applications fall in the “orange” zone: potentially helpful, but dependent on good data, clear boundaries, and human oversight.
Examples include:
- AI that supports triage or risk assessments
- Systems offering treatment advice based on historical data
- Tools predicting pet (and owner) behaviour and suggesting follow-up actions
These tools can be powerful if used carefully. That means:
- Knowing what the system was trained on
- Testing and evaluating it regularly
- Keeping human supervision in place
- Being able to step in if something goes wrong
AI can support decisions. But it should never be the only voice in the room.
🟢 Practical, fast, safe
Some applications come with little risk and significant benefits, primarily by making administrative tasks smarter and faster.
Think of:
- Smarter searches that quickly find patient information
- Text suggestions for consultation notes
- Automatic reminders based on pet owner behaviour
These tools are safe to use if:
- It’s clear what they do and how they work
- Staff know when (and when not) to rely on them
- Everyone has basic knowledge of the system
Small helpers, when well-implemented, can really ease the workload and improve the pet owner’s experience.
Quick solutions, slow consequences
AI is also transforming the way software is developed. Powerful language models make it possible to build new tools at an unprecedented pace, and the software landscape is shifting accordingly: new ideas and features appear all the time.
This is exciting, but speed isn’t the same as reliability. What looks impressive today may be outdated tomorrow or fail to work in the complexity of daily veterinary practice. And when it comes to pet owner records, communication, and medical data, you want stability, transparency, and long-term support.
That’s why it matters not only what a system can do, but also who is behind it. Is there attention to privacy, data quality, and control? Is the technology backed by veterinary expertise? And can you understand how it arrives at its recommendations, or does it remain a “black box”?
AI can significantly reduce errors, save time, and enhance workflow efficiency. But responsible use requires ownership: knowing what you’re using, and why. Technology is not the goal. It’s a tool to help you deliver the kind of care you believe in.
And one more thing
At IDEXX Veterinary Software, Europe, we’ve set a clear goal: every employee earns at least one AI certificate. What that looks like depends on the role. Not everyone needs to code or train a model. However, everyone should understand the role AI plays in our work, the associated risks, and the key questions to ask.
Consider the themes highlighted above: data quality, transparency, and oversight. We believe every colleague should be able to recognise and name these.
When we started, there were lots of questions and uncertainties. That’s normal. AI isn’t a simple on/off switch. However, as we built knowledge together, the subject became less abstract and easier to discuss. That awareness is something we want to keep, both internally and in our work with practices.
Ultimately, it’s not just about adopting AI. It’s about making sure it feels transparent, safe, and supportive for you, your team, and your patients.

About the author
Vincent Willems is responsible for Marketing, Learning and Partnerships at IDEXX Veterinary Software, Europe. He has worked in various veterinary practices in Ireland and the Netherlands.