Veterinary AI Radiology: The Regulatory Gap Vendors Exploit

Regulation & Law · Artificial Intelligence

The Safeguards That Don’t Apply Here: How Veterinary AI Radiology Vendors Operate Outside Every Rule That Governs the Human Side — And Why Their Own Professional Society Says No Current Product Meets the Bar

In human medicine, an AI system is not allowed to issue a diagnostic radiology report to a referring clinician without a licensed physician in the loop. Three separate regulatory layers — FDA device clearance, state medical practice acts, and CMS reimbursement — reinforce each other to make that prohibition operational. In veterinary medicine, none of those three layers applies to AI reading of radiographs. Vendors including SignalPET’s SignalSTAT, Vetology’s Virtual AI Radiologist Report, and Antech’s RapidRead are selling AI-generated radiograph interpretations to referring general practitioners with no board-certified veterinary radiologist review — a practice for which, the ACVR and ECVDI have formally stated, no current commercial product meets the required standard.

VeterinaryTeleradiology.com Editorial Staff  ·  April 2026  ·  Estimated read: 24 minutes

The Argument in Miniature

A veterinarian in general practice takes a thoracic radiograph on a dyspneic dog. The images upload to a cloud platform. Within five to ten minutes, a written report returns: findings, conclusions, sometimes differential diagnoses and treatment recommendations. It reads like a radiologist’s consult. The general practitioner relies on it to make a treatment decision that may determine whether the animal lives or dies.

No board-certified veterinary radiologist has seen the study. The report was generated by an artificial intelligence system with no human in the loop. The software vendor is not a licensed veterinarian. There is no FDA clearance to examine. There is no state veterinary board rule specifically authorizing or prohibiting this arrangement. There is no reimbursement gatekeeper that refuses to pay unless a licensed specialist signs the report. The general practitioner owns whatever clinical decisions follow from the interpretation, and the vendor’s liability is capped — in at least one prominently marketed case — at whatever the veterinarian’s AVMA malpractice insurance will pay out on a claim.

This arrangement would be illegal on the human side of medicine in every state in the country. It is operational on the veterinary side today, sold by multiple venture-backed and corporate vendors, and expanding rapidly. The American College of Veterinary Radiology and the European College of Veterinary Diagnostic Imaging jointly published a formal position statement in 2025 declaring that no commercially available AI product for veterinary diagnostic imaging currently meets the bar for transparency, validation, or safety — and that every AI system should be used with a qualified veterinary professional, preferably a board-certified radiologist, in the loop. The vendors selling AI-primary reads have continued selling them. No enforcement authority has acted.

This article examines how we arrived here, what the three regulatory pillars on the human side actually do, why none of them reach the veterinary AI radiology market, and why the “it’s just a test result, not a diagnosis” framing the vendors implicitly rely on collapses the moment their output takes the form of prose clinical impressions delivered to a referring clinician.

Pillar One: FDA Device Clearance — The Gate That Doesn’t Exist in Veterinary Medicine

On the human side, an AI system that interprets medical images is regulated as Software as a Medical Device (SaMD) and must obtain FDA premarket clearance before it can be marketed for clinical use. The FDA has authorized roughly 700 to 750 radiology AI devices under this framework. More instructive than the number, though, is what the FDA has consistently refused to clear: not a single one of those devices has been authorized to operate as a fully autonomous primary reader of diagnostic imaging studies. Every one of them is cleared as a clinician-assistive tool — computer-aided detection, computer-aided triage, worklist prioritization, or similar language — with a labeled intended use that presumes a radiologist reviews the output and signs the final report.

This is not accidental. The FDA’s January 2025 draft guidance on AI-Enabled Device Software Functions emphasizes the need for continuing human oversight, and radiology professional societies have actively lobbied to keep that standard in place. In a widely circulated letter following an FDA workshop on AI integration in medical imaging, the American College of Radiology and the Radiological Society of North America jointly told the agency that it is unlikely the FDA could provide reasonable assurance of the safety and effectiveness of autonomous AI in radiology patient care without more rigorous testing, surveillance, and other oversight mechanisms than currently exist. Their survey data were stark: 95 percent of radiologists who use AI in clinical practice said they would not use AI algorithms without a physician overread.

The result of that regulatory architecture is that AI device vendors on the human side cannot market a product that issues a diagnostic report directly to a non-radiologist physician without being sued, delisted, or criminally referred for selling an unauthorized medical device. The gate is not theoretical. It is operational, and the professional societies watch it like hawks.

On the veterinary side, this gate does not exist. The FDA’s Center for Veterinary Medicine does regulate certain animal drugs and feed, but it does not require premarket clearance for most medical devices intended for animal use, and it has not established a Software as a Medical Device pathway for veterinary AI. A veterinary AI vendor can build a classifier, train it on whatever dataset it can assemble, deploy it through a cloud portal, and sell interpretations to referring general practitioners — all without ever submitting anything to any federal regulator for review.

The ACVR has been explicit about what this means. In an AVMA-published discussion of the ethical and legal implications of veterinary AI, ACVR Executive Director Dr. Tod Drost observed that it is logical that if the FDA provides guidelines and oversight of medical devices used on people, similar measures should be in place for veterinary medical devices to help protect pets, and that the goal is not to stifle innovation but rather to have a neutral third party provide checks and balances. The 2025 ACVR/ECVDI position statement on AI repeats the call: stakeholders from veterinary medical associations, regulatory bodies, and specialist colleges should establish an independent organization dedicated to creating guidelines for labeling veterinary AI products, like those established by the Association of American Feed Control Officials for labeling food products or by the FDA for certifying software as medical devices.

That independent organization does not exist. The AVMA has not created it. The FDA has not claimed jurisdiction. State veterinary boards have not collectively organized one. Until it exists, a veterinary AI radiology product can come to market with no oversight beyond what the vendor chooses to disclose about itself.

What the Cornell/NC State Team Published in 2022

Dr. Eli Cohen of NC State, writing with colleagues in Veterinary Radiology & Ultrasound, laid out the regulatory gap with unusual directness: “The FDA currently has no requirements for pre-market approval of medical devices intended for animal use. This means there are no restrictions to bringing an AI product to the veterinary market, and no safeguards to ensure proper testing, accuracy, or performance.” Cohen observed that nearly everything a veterinarian could diagnose on radiographs has the potential to be medium-to-high risk — meaning leading to changes in medical treatment, surgery, or euthanasia, either from the clinical diagnosis or from client financial constraints — and that this risk level is the threshold the FDA uses in human medicine to determine whether there should be a radiologist in the loop.

His conclusion, delivered four years before this article: “We would be wise as a profession to adopt a similar model. AI is a powerful tool and will change how medicine is practiced, but the best practice going forward will be using it in line with radiologists to improve access to and quality of patient care, as opposed to using it to replace those consultations.” The veterinary AI radiology market has not adopted that model. Multiple vendors have built their businesses on the opposite premise.

Pillar Two: State Practice Acts and the “It’s Just a Test Result” Fiction

On the human side, even where the FDA has not specifically prohibited an AI use case, state medical practice acts function as a second gatekeeper. Every state in the country defines the practice of medicine to include the diagnosis of disease, and every state makes the practice of medicine without a license a crime. An AI system that issues a diagnostic report directly to a patient — or to a referring clinician in a form the clinician relies on without independent review — runs directly into this framework. A reputable analysis of the current legal landscape states the conclusion plainly: “AI cannot legally diagnose patients in the United States. Diagnosis is considered the practice of medicine and must be performed by a licensed healthcare provider. AI tools can support clinical decision-making, but they cannot replace the provider’s judgment.”

Every state veterinary practice act contains an analogous provision. Diagnosis of animal disease is the practice of veterinary medicine, and practicing veterinary medicine without a license is a crime. The Texas Occupations Code, the California Business and Professions Code, New York Education Law Article 135, Florida Statutes Chapter 474 — each of them defines diagnosis as a veterinary act that requires a DVM license. An AI software company is not a licensed veterinarian and cannot become one.

So how are AI radiology vendors operating in this space at all? The answer is a legal fiction — or, stated more charitably, an industry interpretation of the practice act that has never been tested by any state board or court. The fiction runs as follows:

The AI is not diagnosing. It is generating findings. The licensed veterinarian makes the diagnosis by reviewing those findings in clinical context.

Under this framing, an AI-generated report is treated as analogous to a CBC printout, an automated ECG interpretation (“*** ACUTE MI ***”), a Pap smear pre-screening, or a glucometer reading — a measurement or pattern-flagging output produced by an instrument, which a licensed clinician then interprets and acts upon. The clinician is the diagnostician; the AI is the instrument. On this logic, the AI vendor is not practicing veterinary medicine any more than Abaxis is practicing veterinary medicine when a chemistry analyzer reports an elevated ALT.

The framing has real legal pedigree. ECG machines have printed computer-generated rhythm interpretations for decades, and automated Pap smear screeners have held FDA clearance since the mid-1990s. Nobody credibly argues that Philips is practicing medicine when a monitor flags atrial fibrillation. In narrow, well-bounded settings, where the AI output is a single flag or a single measurement and a licensed professional makes the diagnostic call, the “it’s just an instrument” frame works — in human medicine and in veterinary medicine alike.

The fiction begins to collapse when the AI output stops looking like a measurement and starts looking like a consultation.

The Test: Measurement or Consultation?

There is a bright line, and veterinary AI radiology products cross it with striking regularity.

A measurement-style output has a narrow scope: a single finding or a small set of flags, no prose narrative, no clinical recommendations, no differential diagnoses. Think of a VHS number, or a flag indicating “possible cardiomegaly — radiologist review recommended.” That is instrument output.

A consultation-style output has a prose narrative describing what is seen, a list of findings organized by body system or region, a conclusion or impression section synthesizing the findings, and — critically — recommendations for further diagnostics, treatment, or referral. That is what a board-certified veterinary radiologist writes. It is also, increasingly, what AI radiology vendors are shipping to general practitioners.
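The bright-line test above can be sketched as a toy classifier. Everything in this sketch is illustrative: the `ReportOutput` schema, field names, and examples are assumptions of this article, not any vendor’s actual data model or any statute’s language.

```python
# Toy sketch of the measurement-vs-consultation test. Purely illustrative;
# the schema and examples are assumptions, not any vendor's real format.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ReportOutput:
    findings: List[str]                                 # discrete flags or measurements
    narrative: str = ""                                 # prose description, if any
    impression: str = ""                                # synthesized conclusion, if any
    differentials: List[str] = field(default_factory=list)
    recommendations: List[str] = field(default_factory=list)


def classify(output: ReportOutput) -> str:
    """Return 'consultation' if the output shows any hallmark of a
    specialist consult (narrative, impression, differentials, or
    recommendations); otherwise 'measurement'."""
    consult_hallmarks = (
        bool(output.narrative.strip()),
        bool(output.impression.strip()),
        bool(output.differentials),
        bool(output.recommendations),
    )
    return "consultation" if any(consult_hallmarks) else "measurement"


# A VHS number with a review flag stays on the instrument side of the line.
vhs = ReportOutput(findings=["VHS 11.2, possible cardiomegaly, review advised"])

# Findings plus impression, differentials, and next steps reads as a consult.
ai_report = ReportOutput(
    findings=["pleural effusion", "alveolar pattern, right middle lung lobe"],
    impression="Findings most consistent with congestive heart failure.",
    differentials=["CHF", "pneumonia", "neoplasia"],
    recommendations=["echocardiogram", "furosemide trial"],
)
```

On this toy rule, a single flagged measurement classifies as instrument output, while any output carrying a narrative, impression, differential list, or recommendations classifies as a consultation, which is the form the products discussed in this article ship by default.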

The distinction matters because consultation-style output is functionally a radiology opinion. When a general practitioner receives a document titled “Radiology Report” containing findings, a conclusion, and differential diagnoses, and that document was generated by a software product without a licensed veterinarian’s review, the question is not whether the vendor has technically avoided the statutory language of “diagnosis.” The question is whether a reasonable veterinarian using the product treats it as a diagnostic consultation — and whether that treatment is what the product was designed and marketed to produce.

The marketing answers this question on its own. SignalPET’s SignalSTAT product page describes its AI output as mirroring a radiologist report that includes “differentials, recommendations, and next steps.” Vetology describes its product as a “Virtual AI Radiologist Report” generated within five to ten minutes, with the AI delivering “screening results” focused on identifying and characterizing disease. Antech’s RapidRead is described as providing “accurate AI radiology reports using findings, signalment, and clinical observations to deliver insightful assessments.” These are not instrument readouts. They are written diagnostic documents, produced in the form and register of a specialist consult, and marketed as substitutes or quasi-substitutes for such consults.

“An AI product MUST be 100% autonomous to have a valid result. If a human intervenes during any part of the result creation, it’s not artificial intelligence, it’s human intelligence.” — Vetology website, “How to Evaluate AI in Veterinary Radiology,” attributed to CEO Dr. Seth Wallack, DVM, DACVR

The candor of that statement is remarkable. Vetology — whose CEO is a board-certified veterinary radiologist — has publicly taken the position that an AI radiology product is only a valid AI product if it operates autonomously, without human intervention in the creation of the result. That is not a description of a measurement instrument. That is a description of an autonomous diagnostic agent. And if the product is an autonomous diagnostic agent, the “it’s not diagnosis, it’s just a finding” frame becomes very hard to sustain.

Pillar Three: Reimbursement — The Pay Gate Veterinary Medicine Never Built

On the human side, the third pillar reinforcing the human-in-the-loop standard is the reimbursement framework. The Centers for Medicare & Medicaid Services does not pay for a radiology interpretation unless a licensed physician personally reviews the study and signs the report. Private insurers follow Medicare’s lead on this point almost universally. An AI-generated report with no physician signature is not a billable professional service. This means that even if a hospital wanted to deploy autonomous AI reads to cut costs, it could not bill insurers for the interpretations. The economic model does not support it, so the practice does not scale even where the other two gates might theoretically be permeable.

Veterinary medicine has no CMS. There is no centralized payer with a rule about who must sign a report for the interpretation to be reimbursable. Pet insurance exists but does not function as a utilization gatekeeper the way Medicare does. The client pays the clinic directly; the clinic pays the AI vendor a subscription fee or a per-study fee; the AI vendor has no economic relationship with any third-party payer that would require compliance with a professional-review standard.

This is a structural fact about veterinary medicine that has many benefits, including lower administrative overhead and more autonomy for practitioners. It also means that the most powerful enforcement mechanism that keeps human-side AI radiology honest — the refusal of payers to reimburse for interpretations lacking a licensed professional’s signature — simply does not exist on the veterinary side. Whatever restraint the veterinary AI market exhibits has to come from the first two pillars — FDA clearance and state practice acts — and neither of those is currently operational for AI radiology.

The Vendor Landscape: Who Is Doing This, and How

The term “AI-primary read” refers to a veterinary imaging service in which an artificial intelligence system, rather than a board-certified veterinary radiologist, generates the diagnostic interpretation that is delivered to the referring clinician. The report may be later reviewed by a human — or not. The defining feature is that the AI output is the deliverable, and a DACVR overread is either optional, conditional on certain triggers, or not included at all. What follows is a survey of the current U.S. market based on each vendor’s own public descriptions of its products.

SignalPET — The Clearest Case

SignalPET operates a platform called SignalPET 360°, which its own marketing describes as combining AI insights with optional radiologist oversight. Within that platform, the product most directly relevant to this article is SignalSTAT. The SignalSTAT landing page on SignalPET’s own website contains the following representation, reproduced from the company’s terms: “SignalPET’s SignalSTAT service does not include a human (radiologist) review of the SignalPET report or any other materials submitted by the customer.”

That language is unambiguous. SignalSTAT is sold as AI-only. The report is generated by the algorithm, delivered to the referring veterinarian, and is not reviewed by a board-certified radiologist before delivery. SignalPET markets the product at a price point of $60 to $75 per study, with guaranteed turnaround times and the pitch that it “mirrors a radiologist report that includes differentials, recommendations, and next steps.” The company pairs this product with a liability structure unique in the market: SignalPET offers what it describes as a “100% guarantee” against claims arising from practice use of the report, capped at the amount the customer recovers from its AVMA PLIT malpractice insurance policy.

The structure of that indemnity is worth pausing on. The vendor is not saying it will cover losses outright. It is saying it will cover losses up to whatever the veterinarian’s own malpractice insurance pays out. If the claim exceeds policy limits, the vendor is not on the hook for the excess. If the claim is denied by the insurer for any reason, the vendor owes nothing. If the veterinarian’s premiums go up as a result of the claim, that cost flows entirely to the veterinarian. The “100% guarantee” is, on careful reading, a promise that SignalPET will not leave the veterinarian in a worse position than their existing insurance coverage already provides — which is a far weaker commitment than the marketing language implies.

SignalPET reports that its platform is in use at over 2,300 veterinary clinics worldwide and processes approximately 50,000 radiographs per week. The scale is substantial. The model — AI-primary interpretation delivered to referring GPs with no DACVR in the loop for the SignalSTAT product — is operational at industrial scale.
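The company’s disclosed figures imply the following back-of-envelope volumes. The 52-week year and the even spread across clinics are this sketch’s assumptions, not company statements:

```python
# Back-of-envelope scale arithmetic from SignalPET's own disclosures.
# Assumptions (not company statements): a 52-week year and volume
# spread evenly across clinics.
clinics = 2_300       # clinics using the platform, per company disclosure
per_week = 50_000     # radiographs processed weekly, per company disclosure

annual = per_week * 52                    # annualized study volume
per_clinic_weekly = per_week / clinics    # average studies per clinic per week

print(f"~{annual:,} studies/year, ~{per_clinic_weekly:.0f} per clinic per week")
```

Under those assumptions the disclosed rate works out to roughly 2.6 million AI-read studies per year, on the order of twenty studies per clinic per week.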

Vetology — The Autonomous-AI Position

Vetology’s “Virtual AI Radiologist Report” is an autonomous AI interpretation delivered within five to ten minutes of image upload. Vetology offers a separate, optional board-certified teleradiology service, but the Virtual AI product is sold as a standalone AI report. The company’s own website states that its AI platform can generate the automated report “primarily used for canine and feline patients” and that “backed by US and European board-certified radiologists, expert consultation is always just a click away if required.”

That “if required” is doing significant work. The default deliverable is the AI report. The human radiologist is an optional add-on. Vetology markets this arrangement to hundreds of clinics and has distribution partnerships — including through Patterson Veterinary’s “Patterson Teleradiology powered by Vetology AI” — that push the AI-primary product into general-practice workflows at scale.

The most revealing artifact Vetology has published about its own design philosophy is the statement quoted above, that an AI product must be 100 percent autonomous to have a valid result. Vetology’s CEO, Dr. Seth Wallack, is a board-certified veterinary radiologist. He has built and marketed a product on the explicit design principle that human intervention in result creation disqualifies the output from being “real” AI. That design principle is difficult to reconcile with the ACVR/ECVDI position that AI systems should always be used with a qualified veterinary professional in the loop. The two positions cannot both be operationally correct.

Antech RapidRead and RapidRead Dental — The Mars Corporate Model

Antech Imaging Services, a Mars Petcare subsidiary, launched RapidRead in 2024 as an AI radiology interpretation product. Antech’s product page describes it as “a diagnostic support tool that blends the speed and efficiency of artificial intelligence with the expertise of a team of leading veterinary radiologists and data scientists.” The actual workflow, as described by Antech’s own materials, is more specific: the AI generates the report; the report is returned to the clinic in approximately ten minutes; and “certain emergent findings are automatically routed to one of our board-certified veterinary specialists for a STAT review.” The emergent-finding triggers cited in Antech’s marketing include gastric dilatation-volvulus and GI obstruction — findings the algorithm has been trained to flag as potentially life-altering.

For every other case — which is to say, the overwhelming majority of cases — the deliverable is the AI report, without a DACVR overread. If the veterinarian wants a radiologist review, they can request one as an additional service at an additional cost, and the RapidRead fee is credited against the radiologist consultation fee. The base product is AI-primary.

RapidRead Dental, launched in May 2025, operates on a similar model: AI-generated dental radiograph interpretations returned within approximately ten minutes, developed using over 55,000 images and 275,000 teeth, marketed as providing tooth-by-tooth analysis with claimed 98 percent accuracy. The product is described as supporting veterinarians in making “confident dental assessments” while the patient is still under anesthesia — a use case where the AI-generated report drives an immediate treatment decision with no meaningful opportunity for radiologist overread, because the patient is on the table.

What makes the Antech case distinct is the corporate context. Mars Petcare owns Antech, which operates approximately 90 reference laboratories globally and provides diagnostic services across the Mars network of veterinary hospitals — including Banfield, VCA, and BluePearl, which together comprise approximately 2,000 U.S. locations. When Mars deploys RapidRead into its own captive hospital network, the AI-primary read becomes not merely an option available in the market but, effectively, the default diagnostic pathway for a substantial percentage of U.S. veterinary imaging volume. The scale at which this model is operating is, by any measure, enormous.

2,300+: veterinary clinics using SignalPET’s AI platform worldwide, per company disclosure
50,000/wk: radiographs processed weekly by SignalPET’s AI, per company disclosure
~2,000: Mars-owned veterinary hospital locations in the U.S. where Antech RapidRead is a native offering

Radimal and the Hybrid Tier

Radimal occupies a middle position in the market. Its public-facing marketing emphasizes DACVR board-certified specialist consultations as the core deliverable, with AI functioning as a prioritization and flagging layer across incoming cases. The company reports over 50,000 board-certified specialist consultations delivered, STAT turnaround averaging 35 minutes, and an overall positioning as a human-first service with AI augmentation. Based on the public record available, Radimal does not appear to market an AI-primary read as a standalone deliverable in the way SignalPET’s SignalSTAT and Vetology’s Virtual AI Radiologist Report do. This article does not allege that Radimal operates an AI-primary model equivalent to the three vendors discussed above.

IDEXX and the Workflow-AI Model

IDEXX, the largest publicly traded veterinary diagnostics company, has also declined to enter the AI-primary radiology read market. Its AI deployments in imaging are workflow-focused: automated hanging protocols, automatic vertebral heart score calculation with trend tracking, AI-assisted case submission routing, and image quality scoring. Teleradiology interpretations themselves remain services delivered by IDEXX Telemedicine Consultants — a team of board-certified radiologists — rather than by an autonomous AI system. This article does not allege that IDEXX sells an AI-primary radiology read.

The contrast between IDEXX’s position and the AI-primary vendors is itself informative. IDEXX has both the technical capacity and the corporate resources to build and deploy an autonomous AI reader if it chose to; it has not. Whether that restraint reflects a business judgment, a liability calculation, a regulatory assessment, or a clinical one is a question IDEXX has not publicly answered. But the market is not a monolith. Some of the largest players in veterinary diagnostics have chosen not to cross the line that SignalPET, Vetology, and Antech have crossed.

The Asymmetry, Laid Out in Full

The clearest way to see what is happening is to place the human and veterinary regulatory stacks side by side. Each of the three pillars of human-side AI radiology oversight has either no equivalent or a far weaker equivalent on the veterinary side. The cumulative effect of the asymmetry is that an AI product that would be unlawful to market in the human space is lawful to market — or at least unprosecuted — in the veterinary space, for the same clinical task, on a species where the interpretation can drive immediate life-or-death decisions.

FDA premarket clearance
Human side: Required for AI radiology devices; labeled intended use limits output and presumes clinician review; ~700 devices cleared, none as an autonomous primary reader.
Veterinary side: Not required. No SaMD pathway for veterinary AI. No premarket review of training data, validation, or claimed accuracy.

State professional licensure
Human side: Diagnosis is the practice of medicine; unlicensed practice is a crime; AI vendors operate as “decision support” under physician responsibility.
Veterinary side: Diagnosis is the practice of veterinary medicine; unlicensed practice is a crime; AI vendors rely on the untested “findings, not diagnosis” framing.

Payer gatekeeping
Human side: CMS/Medicare refuses to reimburse interpretations not signed by a licensed physician; private payers follow suit.
Veterinary side: Not applicable. No centralized payer; the client pays the clinic directly. No reimbursement-based enforcement exists.

Professional society enforcement
Human side: ACR/RSNA actively lobby the FDA; 95% of member radiologists refuse to use AI without physician overread.
Veterinary side: ACVR/ECVDI published a 2025 position statement; no commercially available product meets the standard; the statement is guidance, not law.

Malpractice/liability
Human side: Physician retains responsibility; well-developed case law on physician-AI interaction; radiologists named in suits.
Veterinary side: Veterinarian retains responsibility; little case law; vendor indemnities capped at the clinic’s existing malpractice policy limits.

Informed consent to AI use
Human side: Increasingly standard in hospital protocols; ACR guidance supports disclosure.
Veterinary side: ACVR position statement recommends disclosure to pet owners; no statutory requirement; practice is inconsistent.

Post-market surveillance
Human side: FDA MDR reporting; manufacturer monitoring; real-world performance data collection.
Veterinary side: Not required. No adverse event reporting system. No central database of veterinary AI errors or patient harms.

Each entry in that comparison represents a safeguard that the human medical system has constructed — sometimes over decades — to protect patients from unvalidated AI diagnostics. The veterinary system has either explicitly declined to build the equivalent safeguard or has never gotten around to it. The AI-primary read vendors have built their products in the resulting gap.

What the ACVR and ECVDI Actually Said — and Why It Has Been Ignored

The 2025 ACVR/ECVDI position statement on artificial intelligence in veterinary diagnostic imaging and radiation oncology, published in JAVMA, is the single most authoritative document on this topic produced by organized veterinary medicine. It is worth reading in its specific terms rather than in paraphrase, because the specificity of the language is important.

The statement says, verbatim: “The ACVR and ECVDI believe that AI systems should always be used with a qualified veterinary professional in the loop. In veterinary diagnostic imaging, board-certified radiologists are best suited to evaluate the output of computer-aided diagnostic tools.” It further states: “Artificial intelligence systems that do not ensure safe and secure handling of patient data; do not provide transparency of their underlying methodology, training, and testing sets; do not allow postimplementation monitoring as defined by good machine learning practices; and do not allow transparency for machine learning–enabled medical devices should not be used in veterinary practice. There is currently no commercially available product for diagnostic imaging that meets these standards.”

That last sentence is categorical. As of the publication of the position statement in 2025, the two specialty colleges representing every board-certified veterinary radiologist in North America and Europe took the formal position that no commercial AI diagnostic imaging product in the veterinary market meets the transparency, validation, and safety standards required for use in practice. The vendors named in this article — SignalPET, Vetology, Antech RapidRead — were all operating at scale at the time the statement was published, and all remain operational. The statement did not prompt any of them to remove their AI-primary products from the market or to convert them into radiologist-in-the-loop services. The specific scientific and methodological reasons no current product meets the bar — the proprietary undisclosed training data, the absence of public benchmark datasets, the methodologically inadequate single published external validation, the “continuously updated and does not have version numbers” software practice that would disqualify any human-side AI device — are documented in the companion engineering analysis, How Human Radiology AI Actually Gets Built — and the Wild West of Veterinary AI Where None of That Exists.

This is not because the vendors rejected the statement publicly. It is because the statement has no enforcement mechanism. The ACVR and ECVDI are professional specialty colleges. They grant board certification to radiologists. They do not license software companies. They do not have subpoena power, cease-and-desist authority, or the ability to levy fines. Their position statements are, in effect, expert testimony about the standard of care — influential in malpractice litigation, persuasive in regulatory proceedings, but directly binding on no one.

“Currently, no commercially available AI products for veterinary diagnostic imaging meet the required standards for transparency, validation, or safety.” — ACVR/ECVDI Position Statement on Artificial Intelligence, JAVMA, June 2025, Vol. 263, Issue 6

The companion 2024 ACVR/ECVDI teleradiology consensus statement, co-authored by Dr. Constance E. De Haan of IDEXX Telemedicine Consultants along with seven other board-certified radiologists, is relevant here for what it chose not to address. The teleradiology statement lays out in detail what a proper teleradiology submission must include, what qualifications a teleradiologist must hold, how image compression must be managed, and what quality control systems must be in place. On AI, the committee wrote: “Finally, although still limited, the use of artificial intelligence/machine learning (AI/ML) to review veterinary images and identify normal and abnormal findings will expand in the next few years. Concerns regarding the use of AI/ML algorithms in veterinary radiology are outside the scope of this publication.”

In other words, the teleradiology committee acknowledged the AI problem, punted to a later position statement, and set an explicit standard for teleradiology practice that requires reports to be issued by specific categories of board-certified or residency-trained veterinarians, with all reports by non-certified individuals reviewed and countersigned by a certified radiologist. Read against the AI-primary products now in the market, the teleradiology consensus statement’s qualification requirements are exactly the standard those products are designed to bypass: a DACVR, DECVDI, Fellow of the ANZCVS, or an equivalent residency-trained individual must either write the report or countersign one written by a non-certified reader. An autonomous AI is none of those things.

The “Just a Test Result” Fiction, Tested Against the Actual Products

Return now to the legal fiction that holds the veterinary AI radiology market together: the AI is not diagnosing, it is generating findings, and the licensed veterinarian makes the diagnosis. Apply this framing to the products as they actually exist.

Test one — narrow output, clear measurement. A product that calculates a vertebral heart score from a thoracic radiograph and returns a number. This is a measurement. The general practitioner interprets it. No practice-act problem; the AI is functioning like any other instrument that measures a parameter. This describes many of IDEXX’s AI-enabled imaging features, and it is not where the problem lies.
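To make the "instrument output" category concrete: a vertebral heart score is a deterministic measurement in which the cardiac long and short axes are transposed onto the thoracic vertebral column (starting at T4) and expressed in vertebral-body units. The following is an illustrative simplification of that arithmetic, not any vendor's actual algorithm; the function name and inputs are hypothetical.

```python
def vertebral_heart_score(long_axis_mm, short_axis_mm, vertebral_lengths_mm):
    """Approximate a Buchanan-style vertebral heart score.

    Each cardiac axis is measured against successive vertebral body
    lengths (T4 first) and converted into a count of vertebrae spanned.
    Illustrative sketch only -- not any commercial product's method.
    """
    def axis_in_vertebrae(axis_mm):
        count, remaining = 0.0, float(axis_mm)
        for v in vertebral_lengths_mm:
            if remaining >= v:
                count += 1.0          # axis spans this whole vertebra
                remaining -= v
            else:
                count += remaining / v  # fractional final vertebra
                break
        return count

    return round(axis_in_vertebrae(long_axis_mm)
                 + axis_in_vertebrae(short_axis_mm), 1)
```

A tool that returns only this number leaves every interpretive step — is the score normal for this breed, does it explain the dyspnea — to the licensed veterinarian, which is exactly why the instrument framing holds at this tier.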

Test two — pattern flag, no narrative. A product that scans a thoracic radiograph and returns one of two outputs: “no acute findings” or “possible abnormality — radiologist review recommended.” This is closer to a triage tool than a diagnostic report. It still requires careful handling — the flag can drive clinical decisions — but the “instrument output” framing is plausible because the tool has not produced a prose interpretation.

Test three — prose narrative with findings, conclusions, and recommendations. A product that returns a document titled “Radiology Report,” containing a findings section describing abnormalities in the thoracic, abdominal, and musculoskeletal regions, a conclusion section synthesizing the findings, and a recommendation section suggesting further diagnostics, treatment plans, or referral. This is not an instrument readout. This is a consultation. It has the form, register, and function of a board-certified radiologist’s written opinion.

SignalPET’s SignalSTAT, Vetology’s Virtual AI Radiologist Report, and Antech’s RapidRead operate at test three. Their marketing materials explicitly compare the output to a radiologist’s report. Their products are priced, positioned, and sold as substitutes for traditional radiologist consultations. The claim that this is merely a “finding” rather than a “diagnosis” survives only so long as nobody presses on it.

When pressed, the argument runs into its limits. Consider the AVMA’s own “Model Veterinary Practice Act” language, which defines the practice of veterinary medicine to include, among other things, “diagnosing, prognosing, or treating any animal disease, deformity, defect, wound, or injury” and “representing, directly or indirectly, publicly or privately, the ability and willingness to do any act described [above].” A software product that ships a document titled “Radiology Report” containing a prose description of abnormalities, a conclusion, and treatment recommendations is — at minimum — representing the ability and willingness to perform diagnostic acts. The statutory exemptions for equipment manufacturers, kennel operators, and owners treating their own animals do not obviously cover a venture-backed SaaS company delivering radiology opinions at $60 per study.

No state veterinary board has yet issued a formal opinion testing this theory. No state attorney general has filed an unauthorized-practice complaint. The reason is not that the theory is weak; it is that veterinary regulatory enforcement is chronically under-resourced, that the injured parties in any individual case are animals who cannot sue, and that the clients who might notice a bad outcome often do not connect the outcome to the AI-generated report they may not even have seen. The absence of enforcement is not the same as the absence of an enforceable theory. It is a regulatory gap waiting for the first motivated enforcer to act.

The Liability Allocation No GP Seems To Have Read Carefully

The liability architecture that AI-primary vendors construct around their products is its own story. The central move is the same across multiple vendors: the legal exposure for a missed diagnosis on an AI-generated report flows to the referring veterinarian, not to the vendor. The vendor positions itself as providing information, not a diagnosis; the veterinarian retains responsibility for the clinical decision made in reliance on that information; and if the outcome is bad, the plaintiff’s lawyer goes after the veterinarian’s AVMA PLIT policy.

SignalPET’s indemnity structure, as set out in the company’s own terms, makes this allocation explicit. The company offers to indemnify customers for losses from third-party claims related to implementation or interpretation of a SignalSTAT report, but the indemnity is capped at the amount the customer recovers from its AVMA PLIT malpractice policy. Translated from the contractual language: the vendor does not pay out of its own pocket; it backstops whatever the veterinarian’s existing malpractice insurance already covers. If a claim is denied by the insurer, the indemnity is zero. If a claim exceeds policy limits, the vendor owes nothing above the limit. If the claim raises the veterinarian’s premiums, that is the veterinarian’s problem.
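The cap arithmetic described above reduces to two lines. What follows is a hypothetical sketch of the allocation as this article characterizes it — the function names and the flat-cap structure are illustrative assumptions, not the vendor's actual contract mechanics.

```python
def vendor_indemnity(claim_amount, insurer_payout):
    """Hypothetical sketch: an indemnity capped at the veterinarian's own
    malpractice-insurance recovery, as described in the article.

    Because the obligation never exceeds what the insurer already paid,
    the vendor adds no coverage of its own:
      - claim denied (insurer pays 0)   -> indemnity is 0
      - claim exceeds the policy limit  -> the excess stays with the vet
    """
    return min(claim_amount, insurer_payout)


def veterinarian_uncovered_loss(claim_amount, insurer_payout):
    # Whatever the insurer does not pay remains the veterinarian's loss;
    # the capped indemnity contributes nothing beyond the payout itself.
    return max(claim_amount - insurer_payout, 0)
```

On these assumptions, a $500,000 judgment against a $300,000 policy leaves the vendor's capped indemnity at the $300,000 the insurer already paid, and the $200,000 excess with the veterinarian; a denied claim leaves the indemnity at zero.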

The 2025 ACVR/ECVDI position statement anticipated exactly this dynamic. It states: “The legal responsibility of decisions made from any AI system has yet to be determined but is likely to have some degree of responsibility for veterinarians themselves rather than developers of the AI alone.” In other words, the specialty colleges expect the liability to land on the vet, not the vendor. The vendors have structured their contracts to confirm that expectation.

A general practitioner signing up for an AI-primary read product is, in effect, agreeing to the following arrangement. The vendor provides an autonomous algorithmic opinion on a diagnostic study. The veterinarian delivers that opinion to the client, treats on the basis of that opinion, and is the legally responsible party for the outcome. If the opinion is wrong and the patient suffers, the veterinarian is sued, the malpractice insurer pays (or denies), and the vendor’s exposure is either zero or bounded by what the insurer already pays. The financial and legal risk flows downhill to the smallest party in the transaction; the economic benefit flows uphill to the vendor.

This is not an accidental outcome. It is the point of the arrangement. AI-primary reads are cheaper than DACVR reads because the vendor has replaced a licensed specialist’s time with algorithmic throughput. The liability reallocation — from vendor to veterinarian — is a necessary component of maintaining that cost advantage. If vendors accepted the full liability for their products’ diagnostic output, the insurance alone would push prices back toward the cost of a traditional specialist read, and the business model would fail.

What the Sophisticated Veterinary Buyer Should Understand

This article is not an argument that AI has no place in veterinary radiology. It does. Triage flagging, worklist prioritization, measurement automation, hanging protocol optimization, image quality scoring — all of these are legitimate applications in which AI augments the work of veterinarians and specialists without attempting to replace the diagnostic act itself. The ACVR/ECVDI position statement explicitly endorses these uses and calls for continued development. The argument is narrower and more specific: when an AI system produces a prose diagnostic report delivered to a referring general practitioner without a board-certified radiologist in the loop, the product has moved from “augmentation” to “substitution,” and the regulatory and professional guardrails that protect patients on the human side are absent on the veterinary side.

For a general practitioner evaluating whether to adopt one of these products, several questions follow.

First, what is the actual scope of the output? If the deliverable is a prose report containing findings, impressions, and recommendations, the practitioner is using a consultation-style service. If it is a discrete measurement or a single flag, the practitioner is using an instrument. These are different risk profiles and should be evaluated differently.

Second, is there a DACVR in the loop at any point, and when? If the AI is primary and radiologist review is optional, the practitioner bears the full diagnostic risk on every case where they do not upgrade. If the AI is paired with mandatory human overread on every case, the risk profile resembles a hybrid service and should be priced accordingly by the vendor.

Third, what is the vendor’s liability exposure, and how is it capped? Contracts that limit vendor indemnity to the veterinarian’s own malpractice policy limits are transferring essentially all of the risk to the veterinarian. This should be understood before signing, not after a claim is filed.

Fourth, what does the vendor disclose about training data, validation, and error rates? The ACVR/ECVDI position statement identifies transparency as a necessary condition for veterinary AI adoption. A vendor that will not disclose confusion matrices, validation methodology, or performance on specific pathology classes is, by the specialty colleges’ own standard, not meeting the bar for use in practice.

Fifth, is the use of AI disclosed to the client? The ACVR position recommends disclosure of AI use to pet owners. In some states, failure to disclose a material aspect of how a diagnostic service is rendered may itself constitute an element of an unfair or deceptive practices claim. A clinic adopting AI-primary reads without client-facing disclosure is both contrary to the specialty colleges’ guidance and potentially exposed on consent grounds.

What the Regulators Could Do — If Any of Them Decided To

There is no shortage of enforcement authorities with plausible jurisdiction over AI-primary veterinary radiology reads. There is a shortage of enforcement authorities that have chosen to act. The following is a non-exhaustive list of who could move and on what legal theory.

State veterinary boards. Every state board has statutory authority to investigate unauthorized practice of veterinary medicine. The argument that an unlicensed software company issuing prose diagnostic reports is practicing veterinary medicine has never been formally tested, but the statutory elements are present in most state practice acts. A single state board opening an investigation into one prominent vendor would alter the market overnight.

State attorneys general. Most states have consumer protection statutes prohibiting unfair or deceptive trade practices. A vendor marketing AI output as equivalent to a board-certified radiologist’s consult — when in fact no board-certified radiologist has reviewed the case — presents a potentially actionable misrepresentation. The AAVSB (Association of American Veterinary State Boards) published its AI guidance white paper in August 2025 partly because state boards and state AGs have begun receiving questions they do not know how to answer.

The Federal Trade Commission. Section 5 of the FTC Act prohibits unfair or deceptive acts in commerce. Vendor marketing claims — accuracy percentages, equivalency to specialist interpretation, liability guarantees that function differently than advertised — are susceptible to FTC scrutiny. The FTC has shown increasing interest in AI marketing claims generally.

The FDA, through jurisdictional assertion. FDA CVM could, if it chose, assert that veterinary AI diagnostic software falls within its authority to regulate medical devices intended for animal use under 21 U.S.C. § 321(h). This would require a policy decision and a rulemaking, but the legal foundation exists.

Congress. Congressional action to require FDA oversight of veterinary AI medical devices — modeled on the existing SaMD framework for human devices — is the solution the ACVR has explicitly called for. Given the pace of congressional action on any topic, this is the slowest of the available paths, but it is also the most durable.

The ACVR/ECVDI position statement closes with a direct call: “Regulatory bodies such as national, state, provincial, or other veterinary certifying boards are encouraged to establish rules and guidelines outlining acceptable roles of AI in practice, to protect veterinary professionals and patients from potential harms of misuse.” That call has been sitting unanswered since 2025. The vendors named in this article continue to operate. The gap persists because no enforcer has chosen to close it.

What Changes When the First Enforcer Moves

The regulatory architecture around veterinary AI radiology is not stable. It is a standoff — vendors selling into a gap, professional societies documenting the gap, regulators watching the gap but not entering it. Historical precedent from adjacent fields suggests this kind of standoff ends abruptly. The first state veterinary board to open a formal investigation into one of these vendors will change the calculus for all of them. The first state AG to file an unfair-trade-practices complaint against an AI-primary product will reset how the category is marketed. The first federal agency to assert jurisdiction will reshape the competitive landscape, eliminating marginal players and forcing the remainder to conform to something like the human-side standard.

None of those moves requires Congress. None of them requires the creation of new law. All of them can be accomplished with existing statutory tools applied to facts that are already in the public record. The ACVR and ECVDI have laid the evidentiary foundation. The question is whether any enforcer reads the position statement as what it is — a professional society formally declaring that no product currently in the market meets the standard of care — and treats it accordingly.

The Uncomfortable Symmetry

One final observation, intended as a challenge rather than a conclusion. The AI-primary vendors and their defenders often frame the regulatory asymmetry between human and veterinary medicine as an advantage of the veterinary field — a freedom from over-regulation, an opportunity to innovate, a chance to bring diagnostic access to underserved practices and regions. There is a version of that argument that is serious, and it deserves engagement on its merits. The shortage of board-certified veterinary radiologists is real. The cost and turnaround time of traditional teleradiology can be prohibitive for some practices. The idea that AI can expand access is not empty.

But the symmetry cuts the other way too. The absence of FDA premarket review does not make veterinary AI products safer; it makes them less scrutinized. The absence of a reimbursement gatekeeper does not mean veterinary clients do not pay for care; it means they pay out of pocket and have no insurer reviewing whether the service they received met professional standards. The absence of state enforcement of practice acts against AI vendors does not mean no one is practicing veterinary medicine without a license; it means no one has bothered to prosecute the ones who are. Every regulatory gap that the AI-primary business model depends on is also a protection that patients and clients lack, compared to their human-medicine counterparts.

The specialty colleges have said, clearly and formally, that no product in the current market meets the standard. The vendors have responded by continuing to sell. The clients, whose animals are the actual patients in this system, have no way to know any of this is happening unless someone tells them. The referring general practitioners, who bear the legal risk for whatever the AI produces, are in most cases relying on vendor marketing and have not read the position statement, the terms of service, or the liability cap language. Whatever happens next in the regulation of this market, the documentary record establishes that the profession’s own experts warned that this was coming, and the warning was specific, timely, and ignored.

The Structural Answer

In human medicine, three overlapping regulatory layers — FDA device clearance, state medical practice acts, and CMS reimbursement — combine to ensure that an AI system cannot issue a diagnostic radiology report to a referring clinician without a board-certified radiologist in the loop. Each layer is imperfect in isolation. Together, they are operational, and the ACR and RSNA actively defend them.

In veterinary medicine, none of those three layers applies to AI radiology reads. The FDA has not asserted jurisdiction. State practice acts have not been enforced against software vendors. No reimbursement gatekeeper exists. The only constraint on AI-primary vendors is the professional authority of the specialty colleges — and the specialty colleges have explicitly stated that no current commercial product meets the standard for use in practice. The vendors selling AI-primary reads are operating in that gap, and they have built their business models around it.

The practice is not analogous to human-medicine AI deployment; it is the opposite of it. The safeguards are not weaker; they are absent. And the ACVR and ECVDI have said so, in formal publication, with authority that any state board or attorney general could rely on today. What is missing is not the evidence. What is missing is the enforcement.


Frequently Asked Questions

What is an AI-primary radiology read in veterinary medicine?

An AI-primary radiology read is a veterinary imaging service in which an artificial intelligence system, rather than a board-certified veterinary radiologist, generates the diagnostic interpretation that is delivered to the referring clinician. The AI output — typically a written report containing findings, conclusions, and in many cases treatment recommendations or differential diagnoses — is the deliverable. A board-certified specialist (DACVR) overread is either optional, conditional on specific algorithmic triggers, or not included at all. The defining characteristic is that the AI’s interpretation is what the referring veterinarian relies upon to make clinical decisions, with no independent specialist review of the case in the standard product workflow. As of the date of this article, three veterinary AI radiology products are sold in the U.S. market under this model: SignalPET’s SignalSTAT, Vetology’s Virtual AI Radiologist Report, and Antech’s RapidRead (and the related RapidRead Dental). Each vendor’s specific product mechanics are documented in the body of this article based on the vendor’s own published marketing and contract language.

Why are AI-primary radiology reads legal in veterinary medicine but not in human medicine?

On the human side of medicine, three overlapping regulatory layers combine to prevent an AI system from issuing a diagnostic radiology report directly to a referring clinician without a licensed physician in the loop. The first is U.S. Food and Drug Administration premarket clearance: an AI system that interprets medical images is regulated as Software as a Medical Device (SaMD) and must obtain FDA clearance before it can be marketed for clinical use. Approximately 700 radiology AI devices have been authorized under this framework, none as fully autonomous primary readers — every clearance specifies a clinician-assistive intended use that presumes a radiologist reviews the output. The second is state medical practice acts, every one of which defines the practice of medicine to include diagnosis and makes the unauthorized practice of medicine a crime. The third is reimbursement: the Centers for Medicare and Medicaid Services does not pay for a radiology interpretation unless a licensed physician personally reviews the study and signs the report, and private insurers follow Medicare’s lead. On the veterinary side, none of these three layers operates against AI radiology. The FDA Center for Veterinary Medicine has not established a Software as a Medical Device pathway for veterinary AI and does not require premarket clearance for these products. State veterinary practice acts have not been formally tested against the legal theory that an AI vendor issuing prose diagnostic reports is engaged in the unauthorized practice of veterinary medicine. There is no centralized payer that refuses to reimburse interpretations lacking a board-certified specialist’s signature. The result is a regulatory gap in which products that would be unlawful to market for human use are operational in the veterinary market for the same clinical task.

Does the FDA regulate AI radiology software for veterinary use?

No. The FDA Center for Veterinary Medicine regulates certain animal drugs and feed but does not require premarket clearance for most medical devices intended for animal use, and it has not established a Software as a Medical Device pathway for veterinary AI radiology products. This stands in direct contrast to the FDA’s regulation of human-side AI radiology, where as of 2024 approximately 700 radiology AI devices have been authorized under the SaMD framework — none as fully autonomous primary readers. Every human-side AI radiology clearance specifies a clinician-assistive intended use, with labeling that presumes a board-certified radiologist reviews and signs the final report. The veterinary regulatory gap was directly identified by Dr. Eli Cohen of NC State, writing with colleagues in the journal Veterinary Radiology & Ultrasound in 2022: “The FDA currently has no requirements for pre-market approval of medical devices intended for animal use. This means there are no restrictions to bringing an AI product to the veterinary market, and no safeguards to ensure proper testing, accuracy, or performance.” The 2025 ACVR/ECVDI position statement and ACVR Executive Director Dr. Tod Drost have called for the establishment of an independent organization to provide oversight equivalent to what the FDA provides on the human side. That independent organization does not currently exist.

What did the ACVR and ECVDI say about commercial veterinary AI radiology products?

The American College of Veterinary Radiology and the European College of Veterinary Diagnostic Imaging jointly published a position statement on artificial intelligence in veterinary diagnostic imaging and radiation oncology in JAVMA in June 2025 (Volume 263, Issue 6, pages 773 to 776; Appleby RB, Difazio M, Cassel N, Hennessey R, Basran PS). The statement establishes the formal position of the two specialty colleges representing every board-certified veterinary radiologist in North America and Europe. It states verbatim: “The ACVR and ECVDI believe that AI systems should always be used with a qualified veterinary professional in the loop. In veterinary diagnostic imaging, board-certified radiologists are best suited to evaluate the output of computer-aided diagnostic tools.” It further states that AI systems that do not ensure safe handling of patient data, do not provide transparency of underlying methodology, training and testing sets, do not allow post-implementation monitoring as defined by good machine learning practices, and do not allow transparency for machine learning–enabled medical devices “should not be used in veterinary practice.” The statement then makes a categorical declaration: “Currently, no commercially available AI products for veterinary diagnostic imaging meet the required standards for transparency, validation, or safety.” For detailed analysis of the engineering and validation gaps that this position statement identifies, see our coverage of how human radiology AI actually gets built and the wild west of veterinary AI where none of that exists.

Which veterinary AI radiology vendors sell AI-primary reads without a board-certified radiologist review?

Based on each vendor’s own publicly available marketing and contract materials, three veterinary AI radiology products are sold in the U.S. market as AI-primary reads. SignalPET’s SignalSTAT product page contains the following representation reproduced from the company’s terms: “SignalPET’s SignalSTAT service does not include a human (radiologist) review of the SignalPET report or any other materials submitted by the customer.” SignalPET reports its broader platform is in use at over 2,300 veterinary clinics worldwide and processes approximately 50,000 radiographs per week. Vetology’s Virtual AI Radiologist Report is described in the company’s own marketing as an autonomous AI interpretation delivered within five to ten minutes, with optional access to a board-certified teleradiologist available “if required.” Vetology’s CEO, Dr. Seth Wallack, has stated that “an AI product MUST be 100% autonomous to have a valid result. If a human intervenes during any part of the result creation, it’s not artificial intelligence, it’s human intelligence.” Antech’s RapidRead, a Mars Petcare product through Antech Imaging Services, generates an AI report returned to the clinic in approximately ten minutes, with “certain emergent findings” such as gastric dilatation-volvulus and GI obstruction automatically routed to a board-certified specialist for STAT review; for every other case, the deliverable is the AI report without DACVR overread. RapidRead Dental, launched in May 2025, operates on a similar model for dental radiograph interpretations. Two other vendors discussed in this article — Radimal and IDEXX — are not in this category based on each company’s own public materials. Radimal markets a DACVR-primary service with AI functioning as a prioritization layer rather than as the diagnostic deliverable. IDEXX has not entered the AI-primary radiology read market; its AI deployments in imaging are workflow-focused and teleradiology interpretations are delivered by IDEXX Telemedicine Consultants, a team of board-certified radiologists.

Who is liable when an AI-primary radiology read leads to a misdiagnosis?

Under the typical contractual structure used by AI-primary veterinary radiology vendors, the legal exposure for a missed or incorrect interpretation flows to the referring veterinarian rather than to the AI vendor. The vendor positions itself as providing information rather than a diagnosis; the veterinarian retains responsibility for the clinical decision made in reliance on that information; and if the outcome is bad, the plaintiff’s claim runs against the veterinarian’s malpractice policy. SignalPET’s published indemnity structure makes this allocation explicit: the company offers what it describes as a “100% guarantee” against claims arising from practice use of the SignalSTAT report, but the indemnity is capped at the amount the customer recovers from the customer’s own AVMA PLIT malpractice insurance policy. Translated from the contract language, the vendor does not pay out of its own pocket; it backstops whatever the veterinarian’s existing malpractice insurance already covers. If a claim is denied by the insurer, the indemnity is zero. If a claim exceeds policy limits, the vendor owes nothing above the limit. If the claim raises the veterinarian’s premiums, that cost flows to the veterinarian. The 2025 ACVR/ECVDI position statement anticipated this dynamic, stating that “the legal responsibility of decisions made from any AI system has yet to be determined but is likely to have some degree of responsibility for veterinarians themselves rather than developers of the AI alone.” The financial and legal risk in the AI-primary read business model flows downhill to the smallest party in the transaction; the economic benefit flows uphill to the vendor.

Is using an AI radiology service the unauthorized practice of veterinary medicine?

Every state veterinary practice act in the United States defines diagnosis as the practice of veterinary medicine and requires a Doctor of Veterinary Medicine license to engage in that practice. The Texas Occupations Code, the California Business and Professions Code, New York Education Law Article 135, Florida Statutes Chapter 474, and analogous statutes in every other state contain this framework. An AI software company is not a licensed veterinarian and cannot become one. The legal theory under which AI-primary read vendors operate is what this article describes as a legal fiction: the AI is not diagnosing, it is generating findings, and the licensed veterinarian makes the diagnosis by reviewing those findings in clinical context. Under this framing, the AI output is treated as analogous to an automated ECG interpretation, a Pap smear pre-screener output, or a glucometer reading — an instrument output that a licensed clinician interprets and acts upon. This framing has real legal pedigree in narrow, well-bounded settings where the AI output is a single flag or a single measurement. The framing becomes legally vulnerable when the AI output takes the form of a prose diagnostic report titled “Radiology Report,” containing a findings section, a conclusion section synthesizing the findings, and recommendations for further diagnostics, treatment, or referral. SignalPET’s SignalSTAT, Vetology’s Virtual AI Radiologist Report, and Antech’s RapidRead each generate output of this consultation form. Whether a state veterinary board, state attorney general, or court would accept the “findings, not diagnosis” theory as a defense to an unauthorized practice claim has not been formally tested in any reported proceeding. For analysis of how state veterinary practice act provisions function in adjacent contexts, see our coverage of the AVMA fee-splitting prohibition and state veterinary practice act provisions.

Should pet owners be told when AI is used to interpret their animal’s radiographs?

The 2025 ACVR/ECVDI position statement on artificial intelligence in veterinary diagnostic imaging recommends that veterinarians disclose AI use to pet owners. No federal statute and no state veterinary practice act currently imposes a binding requirement to disclose AI use to clients before performing or billing for an imaging interpretation generated by an AI system. Practice in the field is inconsistent. Some clinics disclose AI involvement in their consent forms or invoices; others do not. In jurisdictions where state consumer protection statutes prohibit unfair or deceptive trade practices, the question of whether a clinic’s failure to disclose that an AI system rather than a board-certified specialist generated the interpretation constitutes a material non-disclosure has not been formally tested in any reported regulatory proceeding or court decision. The structural question remains open: a clinic that bills a client for a radiology consultation, where the report was produced by an AI system without board-certified radiologist review, may be representing a service different from the one actually rendered. Whether and how that representation is regulated under existing consumer protection or veterinary practice frameworks will depend on enforcement activity that has not yet occurred. Pet owners with questions about how their animal’s diagnostic images are being interpreted may want to ask their veterinarian directly whether AI is involved in the interpretation, whether a board-certified specialist reviews the AI output, and what disclosure practice their clinic follows.

As of the date of this article, which veterinary AI radiology products are sold as AI-only reads without a board-certified radiologist in the loop, and what should clinics consider before adopting them?

As of the date of this article, three veterinary AI radiology products are sold in the U.S. market as AI-primary or AI-only reads — meaning the deliverable to the referring clinician is the AI-generated interpretation, with board-certified radiologist review either absent, optional, or conditional on specific algorithmic triggers.

SignalPET’s SignalSTAT. Per SignalPET’s own product terms, “SignalPET’s SignalSTAT service does not include a human (radiologist) review of the SignalPET report or any other materials submitted by the customer.” The product is marketed at $60 to $75 per study with the pitch that it “mirrors a radiologist report that includes differentials, recommendations, and next steps.”

Vetology’s Virtual AI Radiologist Report. Vetology’s own marketing positions the AI report as the default deliverable, with optional access to a board-certified teleradiologist “if required.” Vetology CEO Dr. Seth Wallack has stated publicly that “an AI product MUST be 100% autonomous to have a valid result. If a human intervenes during any part of the result creation, it’s not artificial intelligence, it’s human intelligence.”

Antech’s RapidRead and RapidRead Dental. Antech’s own materials describe RapidRead as generating an AI report returned to the clinic in approximately ten minutes, with “certain emergent findings” — such as gastric dilatation-volvulus and GI obstruction — automatically routed to a board-certified specialist for STAT review. For every other case, the deliverable is the AI report without DACVR overread; radiologist review is available as an additional service at additional cost. RapidRead Dental, launched in May 2025, operates on a similar model for dental radiograph interpretations.

Two vendors are not in this category. Based on each company’s own public materials, Radimal markets a DACVR-primary service with AI functioning as a prioritization layer rather than as the diagnostic deliverable. IDEXX has not entered the AI-primary radiology read market; its AI deployments in imaging are workflow-focused (automated hanging protocols, vertebral heart score calculation, image quality scoring), and teleradiology interpretations are delivered by IDEXX Telemedicine Consultants — a team of board-certified radiologists.

Factors a clinic should weigh before adopting an AI-primary read product:

Specialty-college consensus. The 2025 ACVR/ECVDI position statement published in JAVMA states verbatim: “Currently, no commercially available AI products for veterinary diagnostic imaging meet the required standards for transparency, validation, or safety.” That statement is the formal position of every board-certified veterinary radiologist in North America and Europe.

No FDA premarket review. Unlike the human-side AI radiology market, where the FDA has authorized roughly 700 radiology AI devices under the Software as a Medical Device pathway — and never as fully autonomous primary readers — veterinary AI radiology products are not reviewed by any federal regulator before they are marketed. Training data, validation methodology, and claimed accuracy figures are not subject to independent agency scrutiny.

Reliability and validation gaps. As Dr. Eli Cohen (NC State) wrote in Veterinary Radiology & Ultrasound in 2022, “There are no restrictions to bringing an AI product to the veterinary market, and no safeguards to ensure proper testing, accuracy, or performance.” False-positive and false-negative rates on specific pathology classes are typically not disclosed by vendors at the level required by the ACVR/ECVDI transparency standard. The companion engineering analysis at How Human Radiology AI Actually Gets Built — and the Wild West of Veterinary AI Where None of That Exists documents the specific methodological gaps in detail.

Liability flows downhill to the clinic. Vendor contracts generally allocate the legal exposure for a missed or incorrect interpretation to the referring veterinarian rather than the AI vendor. SignalPET’s “100% guarantee” against claims is capped at the amount the customer recovers from the customer’s own AVMA PLIT malpractice insurance policy. The 2025 ACVR/ECVDI position statement notes that veterinarians themselves, “rather than developers of the AI alone,” are likely to bear some degree of legal responsibility for decisions made with any AI system.

State practice-act exposure is untested. Every state veterinary practice act defines diagnosis as the practice of veterinary medicine and requires a DVM license. The legal theory under which an AI vendor is not engaged in the practice of veterinary medicine when it ships a prose report titled “Radiology Report” containing findings, conclusions, and treatment recommendations to a referring clinician has not been formally tested in any state board proceeding or court. For analysis of how state practice acts apply to undisclosed clinical service arrangements in veterinary medicine generally, see our coverage of the AVMA fee-splitting prohibition and state veterinary practice act provisions.

Disclosure to clients is inconsistent. The ACVR/ECVDI position statement recommends that AI use be disclosed to pet owners. No statutory disclosure requirement currently exists. A clinic adopting AI-primary reads without informing clients that an algorithm — rather than a board-certified specialist — generated the interpretation may be exposed under state consumer-protection statutes in some jurisdictions.


Vendor Marketing Materials Quoted in This Article

Editorial & Legal Disclaimer. VeterinaryTeleradiology.com is an independent industry publication. This article is based entirely on publicly available and documented sources, each identified by name in the Primary Documents Referenced and Vendor Marketing Materials sections above. Those sources include: the 2025 ACVR/ECVDI position statement on artificial intelligence published in JAVMA; the 2024 ACVR/ECVDI consensus statement on teleradiology published in Veterinary Radiology & Ultrasound; peer-reviewed commentary published by Cohen and Gordon in VRU in 2022; published news coverage and press releases from the American Veterinary Medical Association, the American College of Veterinary Radiology, the American College of Radiology, and the Radiological Society of North America; the AAVSB AI Guidance Whitepaper published August 2025; peer-reviewed analysis published in JAMA Network Open; and publicly accessible vendor marketing materials and terms of service from SignalPET, Vetology, Antech Diagnostics, Radimal, IDEXX, and Patterson Veterinary. No confidential sources, non-public documents, or unverified information is relied upon in this article. Every factual claim is attributable to one or more of the above primary or secondary sources.

This article presents documented facts, structural observations, and questions for reader and regulatory consideration. It does not assert legal conclusions, make criminal accusations, or impute wrongdoing, fraud, or illegal conduct to any individual or entity. References to vendor marketing claims and contract terms are based on those parties’ own publicly posted materials as accessed at the time of this article’s preparation. The legal analysis presented reflects structural observations about applicable law and is not legal advice. This publication is not a law firm and does not provide legal advice to any party. Veterinarians, state regulators, and other readers with specific factual or legal questions should consult qualified counsel.

Where this article characterizes the absence of regulatory action — for example, that no state veterinary board has opened a formal investigation into a named vendor, or that no state attorney general has filed an unfair-trade-practices complaint — that characterization is based on the absence of any such action in public records, trade press archives, and regulatory filings reviewed by this publication. It is not an assertion that no such action exists under any circumstances or will not occur in the future.

Vendors named in this article are characterized based on their own public marketing materials as of the date of preparation. Any vendor whose product offerings, terms of service, or operational model differs from the description above is invited to contact this publication with the specifics, and any corrections supported by documentary evidence will be published in full. This publication extends that invitation directly and without prejudice to SignalPET, Vetology, Antech Diagnostics, Radimal, IDEXX, Patterson Veterinary, and any other vendor whose products are discussed.
