Industry Basics

Industry News, Industry Basics, Tying and Bundling

Veterinary AI’s Training-Set Problem — Part Three: The Validation Statistics

The first two parts of this investigation calculated the labor required to produce the training corpora claimed by SignalPET, Vetology, and Antech RapidRead, and showed that the math does not work: not at the simplest annotation step, not at the bounding-box step, not at the segmentation step, and not against the structural infrastructure veterinary medicine has never built. This article closes the series by addressing what happens after training is supposedly complete: what the products are required to demonstrate, what they actually demonstrate, and the corporate revenue model that explains why a category of medical-decision-support software exists entirely outside the validation framework that constrains its human-medicine equivalent. The two halves of this article differ in tone: the first is technical and statistical, the second structural and economic. But they answer the same question: why is the foundational accuracy claim of commercial veterinary AI radiology software so consistently weak, and so consistently absent from the kind of independent verification the human-side AI category requires as a precondition of going to market?

Industry News, Industry Basics

Veterinary AI’s Training-Set Problem — Part Two: The Bounding-Box Step

Part One of this investigation calculated the labor required to apply image-level categorical labels to the training corpora claimed by SignalPET, Vetology, and Antech RapidRead at the Stanford CheXNeXt rate of 34.3 seconds per image, the simplest possible AI training task. The math did not work for the larger claims even at that simplest step. This article applies the published bounding-box and pixel-segmentation annotation rates from the human medical-imaging literature to the same vendor claims, and adds three structural infrastructure questions Part One did not address: the absence of subspecialty fellowship training in veterinary radiology, the scarcity of pathology-confirmed ground-truth datasets, and the breed-specific anatomic variation that prevents direct application of human chest x-ray training methodology to veterinary subjects. The conclusion: the foundational claim is not just unlikely; it is structurally impossible at the scales the marketing presents. The math, the workforce, and the upstream data infrastructure all point the same way.

Industry, Education, Industry Basics, Industry News

Veterinary AI’s Training-Set Problem — Part One: The Labeling Step

SignalPET claims its AI was trained on “over 2 million annotated veterinary radiographs.” Vetology claims “over 300,000 Board Certified veterinary radiologist-reviewed cases.” Antech RapidRead claims “16 million images.” This is Part One of a three-part investigation into whether those numbers can be reconciled with the documented capacity of the North American board-certified veterinary radiologist workforce. This article focuses on the simplest possible AI training task, image-level categorical labeling, the kind the Stanford CheXNeXt study measured at 34.3 seconds per image in PLOS Medicine, and shows the math does not work for the larger claims even at this most charitable level.
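The labor arithmetic the excerpt describes is easy to reproduce. A minimal sketch of the back-of-envelope check: the vendor image counts and the 34.3-seconds-per-image rate are the figures quoted above, while the 2,000 annotation-hours per radiologist-year is an illustrative assumption, not a figure from the article.

```python
# Back-of-envelope labor check using the figures quoted in the excerpt above.
# 34.3 s/image is the CheXNeXt image-level labeling rate cited there;
# 2,000 hours/year is an ASSUMED full-time annotation workload for scale.
SECONDS_PER_IMAGE = 34.3
HOURS_PER_RADIOLOGIST_YEAR = 2_000  # assumption for illustration only

claims = {
    "SignalPET": 2_000_000,
    "Vetology": 300_000,
    "Antech RapidRead": 16_000_000,
}

for vendor, images in claims.items():
    hours = images * SECONDS_PER_IMAGE / 3600
    person_years = hours / HOURS_PER_RADIOLOGIST_YEAR
    print(f"{vendor}: {hours:,.0f} hours ≈ {person_years:.1f} radiologist-years")
```

Even under these charitable assumptions, the 2-million-image claim alone implies roughly 19,000 hours of dedicated labeling time, which is the kind of figure the investigation compares against the documented radiologist workforce.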

Compliance, Industry, Industry Basics, Industry News

Veterinary AI Radiology: The Regulatory Gap Vendors Exploit

In human medicine, an AI system cannot issue a diagnostic radiology report to a referring clinician without a licensed physician in the loop. Three separate regulatory layers reinforce each other to make that prohibition operational: FDA device clearance, state medical practice acts, and CMS reimbursement. In veterinary medicine, none of those three layers applies to AI reading of radiographs. Vendors including SignalPET’s SignalSTAT, Vetology’s Virtual AI Radiologist Report, and Antech’s RapidRead sell AI-generated radiograph interpretations to referring general practitioners with no board-certified veterinary radiologist review, a practice for which, the ACVR and ECVDI have formally stated, no current commercial product meets the standard.
