Official Statement
dogAdvisor's response to the EU's AI Act
6th November 2025 | @dogAdvisor.dog


London, United Kingdom, 6th November 2025
Please be aware that this statement contains information about Medical Intelligence - a soon-to-be-released feature on dogAdvisor Max - included here to clarify our obligations under the AI Act. At the time of writing, Medical Intelligence is not yet available to dog owners.
Good Afternoon,
The European Union has introduced the world's first comprehensive AI law - the AI Act, which entered into force on 1st August 2024. This regulation sets harmonised rules for AI systems across all 27 EU member states. As a London-based AI company serving dog owners across Europe and beyond, we fall within the scope of this regulation. Today, we are formally issuing a statement setting out our views on this legislation and how we are staying compliant.
What we think about the AI Act
We welcome the AI Act! We recognise that systems that manipulate users, exploit vulnerabilities, or make life-altering decisions without human oversight and accountability are dangerous. When we started Max, we didn't layer safety on top of our model - we built Max's Foundational Safety Framework before we built Max himself! Today, dogAdvisor has some of the world's strongest AI safety practices: Principle Alignment, the Foundational Safety Framework, the AI Safety Lab, and clear, coherent, and transparent safety reporting.
Max is classified as a limited-risk AI
Max does not perform medical diagnosis, prescribe treatments, or make clinical decisions. While Max includes Medical Intelligence - a feature that uses advanced data sets to provide high-level medical support and answer complex medical questions - he operates strictly within educational boundaries. Medical Intelligence helps owners understand medical information - such as symptoms, conditions, or treatment options - so they can make informed decisions and have informed conversations with their vet. Max consistently reminds users that he cannot replace professional veterinary care and expressly directs them to seek veterinary advice for medical concerns; these disclaimers are built into Medical Intelligence's architecture through our Medical Alignment guidelines. That design choice was intentional - we built Max to empower owners with knowledge, not to act as a substitute for professional veterinary care. Max therefore falls into the limited-risk category.
How dogAdvisor stays compliant
[Obligation 1: Transparency to users]
The law requires that AI systems designed for direct interaction with users must inform users that they are actively interacting with an AI system, unless this is obvious from context.
dogAdvisor AI Engineering

Max formally introduces himself as an AI in every conversation.

We clearly state that Max is the world's first life-saving dog AI.

Max's responses include disclaimers when appropriate, especially when using Medical Intelligence for complex medical questions.

Medical Intelligence includes built-in Medical Alignment guidelines that ensure users are expressly reminded to seek veterinary advice.

We never imply or suggest that Max is a human veterinarian or can replace professional medical care.
[Obligation 2: Document system design & limitations]
The law requires that we (as deployers) maintain documentation explaining how the AI system works, its intended purpose, known limitations, and when users should not rely on it for information or advice.

dogAdvisor publishes public documentation of Max's architecture, including Principle Alignment, the Foundational Safety Framework, Emergency Guidance, Medical Intelligence (with its Medical Alignment guidelines), and safety testing results. We publish this in our Safety Transparency reports, which cover Max's safety performance and detail comparative testing results against competitors.

dogAdvisor publishes Safety Cards for each Max model generation, as well as Safety Cards for Max features such as Emergency Guidance, Medical Intelligence, and other technical features.

We publish annual Oversight Reports detailing the most dangerous or complex queries Max receives and how he responds to them.

Max provides context-aware disclaimers during conversations, especially when discussing medical topics through Medical Intelligence or emergencies.
[Obligation 3: GPAI Compliance]
The law requires that deployers of general-purpose AI (GPAI) models (such as Gemini, Grok, Claude, or ChatGPT) verify that the underlying GPAI provider complies with its obligations, and meet additional obligations of their own.

We use a foundation model provided by an AI lab that is compliant with the AI Act

Our Max Safety page and Max chats clearly explain how Max should be used and when professional advice is needed.

We maintain internal documentation covering how we use our foundation model, our safety layers, and the technologies that enable Max's features. This documentation contains commercially sensitive information, is subject to legal privilege, and is protected as a trade secret; it is treated strictly as an internal document and is never released to the public. We will not willingly disclose it to anyone, including law enforcement, unless a legal obligation of disclosure or a lawful warrant compels us to do so.

All of dogAdvisor's articles are used to train Max. They were created by dogAdvisor with the assistance of generative models to improve phrasing, grammar, and content. We publicly disclose this.
[Obligation 4: Human oversight and monitoring]
While not strictly mandated for limited-risk systems, we choose to comply with this obligation because we believe it is an ethical one: to ensure the systems we release are safe, transparent, accountable, and adequate.

dogAdvisor's AI engineers manually review conversations with Max, analyse dangerous or complex questions, and flag and record any responses that may fail to meet our safety standards.

Engineers publish this information in the Oversight Reports on our Safety page.

Emergency Guidance conversations are prioritised and manually reviewed by engineers.

[Obligation 5: AI Literacy]
All AI engineers must have adequate AI literacy: a general understanding of how AI works, its capabilities, limitations, risks, and ethics.

Our engineers are educated on the foundations of AI, its limitations, and other principles as required by law.

[Obligation 6: Record-Keeping & Incident Reporting]
We must maintain records of AI system operation, safety interventions, and any serious AI incidents.

We have standard steps, practices, and procedures for responding to misalignment and safety incidents, and we record any responses that fail to meet our safety standards.

[Obligation 7: Prohibited Practices]
AI systems may not manipulate users through subliminal or deceptive techniques, may not exploit vulnerabilities such as age, disability, or economic circumstances, may not be used for social scoring or surveillance, and may not be used for real-time biometric identification in public spaces.

We already exceed these requirements and make our compliance public on the dogAdvisor Safety page:

Principle Alignment - Max is constitutionally prohibited from engaging in harmful practices. Our Principle Alignment framework prevents manipulation, deception, exploitation of owner vulnerability or emotions, advice that is harmful to dogs or humans, and the overriding of safety boundaries even when politely requested.

Foundational Safety Framework - Our intent classifiers detect and block attempts to misuse Max for prohibited purposes.

Max does not collect, process, or analyse any biometric information, and does not track users beyond basic context and needs, such as estimated location (for compliance) and basic information about their device.
Preparing for the future
Governments worldwide are watching Europe's approach. The UK is developing its own AI regulation framework. The US, Canada, Australia, and other nations are drafting legislation. The regulatory landscape for AI will only grow more complex. Our approach, though, remains simple: build safety into Max's foundation, not as an afterthought. We will continue to test, invest in, and build safe, accountable, and transparent AI systems.
The AI industry has largely operated without appropriate regulation for years. That era is ending. Companies building AI tools for any purpose - including helping dog owners - will soon face the same scrutiny as dogAdvisor. The difference is that we're ready.
