Industry

Corporate ownership, private equity, and market structure in veterinary teleradiology.

Vet AI Position Statement: 18 Months of Institutional Silence

“In March 2025, the American College of Veterinary Radiology and the European College of Veterinary Diagnostic Imaging published a joint position statement in JAVMA establishing that commercial veterinary AI radiology products do not currently meet the standards required for safe deployment in clinical practice. The statement was the formal, peer-reviewed finding of the field’s specialty colleges that an entire commercial product category fails the threshold for clinical use. In the eighteen months since, the institutions positioned to act on that finding have not done so. The ACVR has continued to host the same AI vendors as official conference partners. The AVMA has issued no policy resolution and modified no corporate-relationship framework. AAHA, the only voluntary accrediting body for companion-animal veterinary hospitals in the United States and Canada, has completed the first comprehensive Standards of Accreditation refresh in its 90-year history without adding any standard that would constrain commercial AI radiology products. The inaction is consistent across all three institutions: the same eighteen-month window, the same documented professional notice, and the same documented corporate-sponsorship architecture connecting each institution to the corporate parents of the AI vendors at issue. This article documents what was said, what was not done, and why the structural pattern of inaction is explicable by examining how veterinary professional self-regulation is funded and organized.”

Read More »

Veterinary AI’s Training-Set Problem — Part Three: The Validation Statistics

“The first two parts of this investigation calculated the labor required to produce the training corpora claimed by SignalPET, Vetology, and Antech RapidRead, and demonstrated that the math does not work — at the simplest annotation step, at the bounding-box step, at the segmentation step, and against structural infrastructure veterinary medicine has never built. This article closes the series by addressing what happens after training is supposedly complete: what the products are required to demonstrate, what they actually demonstrate, and the corporate revenue model that explains why a category of medical-decision-support software operates entirely outside the validation framework that constrains its human-medicine equivalent. The two halves of this article differ in tone — the first is technical and statistical, the second structural and economic — but they answer the same question: why is the foundational accuracy claim of commercial veterinary AI radiology software so consistently weak, and why does it so consistently lack the independent verification the human-side AI category requires as a precondition of going to market?”

Read More »

Veterinary AI’s Training-Set Problem — Part Two: The Bounding-Box Step

“Part One of this investigation calculated the labor required to apply image-level categorical labels to the training corpora claimed by SignalPET, Vetology, and Antech RapidRead at the Stanford CheXNeXt rate of 34.3 seconds per image — the simplest possible AI training task. The math at that simplest step did not work for the larger claims. This article applies the published bounding-box and pixel-segmentation rates from the human medical imaging literature to the same vendor claims, and adds three structural infrastructure questions Part One did not address: the absence of subspecialty fellowship training in veterinary radiology, the scarcity of pathology-confirmed ground-truth datasets, and the breed-specific anatomic variation that prevents direct application of human chest x-ray training methodology to veterinary subjects. The conclusion: the foundational claim is not just unlikely. It is structurally impossible at the scales the marketing presents. The math, the workforce, and the upstream data infrastructure all point the same way.”

Read More »

Veterinary AI’s Training-Set Problem — Part One: The Labeling Step

“SignalPET claims its AI was trained on “over 2 million annotated veterinary radiographs.” Vetology claims “over 300,000 Board Certified veterinary radiologist-reviewed cases.” Antech RapidRead claims “16 million images.” This is Part One of a multi-part investigation into whether those numbers can be reconciled with the documented capacity of the North American board-certified veterinary radiologist workforce. This article focuses on the simplest possible AI training task — image-level categorical labeling, the kind the Stanford CheXNeXt study measured at 34.3 seconds per image in PLOS Medicine — and shows the math does not work for the larger claims even at this most charitable level.”

Read More »
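The labeling arithmetic behind that teaser can be sketched in a few lines. This is an illustrative back-of-envelope only, using the vendor counts and the 34.3-seconds-per-image CheXNeXt rate quoted above; the 2,000-hour annotator-year is an assumption of this sketch, not a figure from the articles.

```python
# Back-of-envelope: annotation hours implied by each vendor's claimed
# training-set size at the CheXNeXt image-level labeling rate of
# 34.3 seconds per image (figures as quoted in the teaser above).
SECONDS_PER_IMAGE = 34.3
HOURS_PER_PERSON_YEAR = 2_000  # assumption: one full-time annotator-year

claims = {
    "SignalPET": 2_000_000,          # "over 2 million annotated radiographs"
    "Vetology": 300_000,             # "over 300,000 radiologist-reviewed cases"
    "Antech RapidRead": 16_000_000,  # "16 million images"
}

for vendor, n_images in claims.items():
    hours = n_images * SECONDS_PER_IMAGE / 3_600
    person_years = hours / HOURS_PER_PERSON_YEAR
    print(f"{vendor}: {hours:,.0f} hours ≈ {person_years:,.1f} person-years")
```

Even at this most charitable per-image rate, the largest claim implies on the order of 150,000 radiologist-hours of labeling alone, before bounding boxes, segmentation, or quality review.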

Veterinary AI Validation Lags Human Radiology by a Decade

“A peer-reviewed Frontiers commentary published in June 2025 by four veterinary AI researchers — including the lead author of the ACVR/ECVDI’s official AI position statement — methodically dismantled the only published external validation study of a major veterinary AI radiology product. Circular ground truth, severe class imbalance, sensitivity of 0.444 in difficult cases, the wrong statistical test, no version traceability. That is the state of validation in commercial veterinary AI. On the human side, by contrast, a model called CheXNet was trained on 112,120 publicly released chest radiographs in 2017, validated against three independent cardiothoracic specialists, published in PLOS Medicine, and then beaten on the public leaderboard by hundreds of subsequent teams. That is what the scientific method looks like in medical AI. The veterinary industry skipped it.”

Read More »

Veterinary AI Radiology: The Regulatory Gap Vendors Exploit

“In human medicine, an AI system is not allowed to issue a diagnostic radiology report to a referring clinician without a licensed physician in the loop. Three separate regulatory layers — FDA device clearance, state medical practice acts, and CMS reimbursement — reinforce one another to make that prohibition operational. In veterinary medicine, none of those layers applies to AI reading of radiographs. Vendors including SignalPET’s SignalSTAT, Vetology’s Virtual AI Radiologist Report, and Antech’s RapidRead are selling AI-generated radiograph interpretations to referring general practitioners with no board-certified veterinary radiologist review — a practice for which, the ACVR and ECVDI have formally stated, no current commercial product meets the standard.”

Read More »

VitalRads: The Full Story

“Brian Poteet didn’t build veterinary teleradiology — he observed how it was done at PetRays, departed, and started his own company. What he built was eventually good. What happened to it is something else: the VITALRADS trademark was transferred to a private equity consolidator in 2018 without any public announcement, the brand now sits on a $1.7 billion distressed debt pile, and independent clinics are being steered to it through a “membership organization” that is operated by the same corporate owner. None of this is disclosed at the point of care.”

Read More »