21 April 2026 Company

dogAdvisor is becoming an a research organisation

How we're building safe, accountable pet intelligence infrastructure and setting new standards for responsible AI deployment

I started dogAdvisor because a friend couldn't find reliable information about caring for his dog. But when I started looking, I realised the problem was much bigger than one confused owner. The entire pet care industry was broken. Product recommendations disguised as advice. Generic articles buried under adverts. Forums full of conflicting opinions. And absolutely nothing you could trust when it actually mattered, when your dog was showing symptoms and you needed answers immediately. From day one, we've been obsessed with three things: safety, accountability, and responsibility. We never stop pushing because we know what happens when you get it wrong. We've seen it. We've tested competitor products that recommend toxic foods. We've watched systems suggest waiting when emergency intervention is needed. We've documented failures that could kill dogs, and we've refused to ship anything that might do the same. We put safety, accountability, and responsibility first in every sense. Not as marketing features. Not as nice-to-haves. As fundamental design constraints that shape everything we build. Forces for change in an industry that treats dog welfare as acceptable collateral damage for growth. Because the future of pet AI doesn't need more chatbots. It needs ones you can actually trust when it matters.

Safety isn't a feature you add at the end. Accountability isn't a page you hide in your footer. Responsibility isn't something you claim in press releases. These are decisions made every single day. Whilst others hide uncomfortable truths about their failures, we publish every safety incident Max has ever had. Whilst competitors ship wrappers in weekends and call it innovation, we spend months building verification systems that might never show up in product features but make everything possible. Whilst the industry races to launch, we race to get it right. We've grown faster than I imagined. Today, we help more than 1600 dog owners a week on dogAdvisor. Max has saved four dogs' lives through documented emergency intervention where owners reported Max's guidance directly led to a material change in the safety of their dog during a life-threatening situation (more information is published on dogAdvisor's website). We've been featured on Forbes, Investopedia, NewsNation in front of twenty million Americans. We became finalists for the UK's Most Innovative Pet Product. Imperial College London invited me to speak about AI safety. Google invited me to discuss AI innovation. This matters because it proves something: you can build AI for high-stakes domains responsibly and still succeed. You can prioritise safety and still grow. You can be transparent about failures and still earn trust.

But that growth has forced us to confront difficult questions. Who are we? What do we value? Where do we see our future? This post is about why. It's about the industry's appalling attitude to accountability. It's about where we've been, where we are, and where we're going. It's about what we're building whilst everyone else ships fast and breaks things.

The pet AI space right now is genuinely dangerous, and I don't think enough people are talking about this honestly. Over the past eighteen months, we've seen an explosion of companies launching "AI-powered pet care" products. Most of them follow the exact same pattern: take a large language model, add a cute interface with paw prints or dog illustrations, write a system prompt that says "you are a helpful pet care assistant," and ship it. That's the entire product. They haven't built safety infrastructure, implemented capabilities appropriate for the risk they bear, educated the model on safe and accurate knowledge, and haven't taken any responsibility for the technology they release into the world. I find that appalling.

The problem is that general-purpose large language models trained on the entire internet make sweeping, catastrophic errors when applied to dog health. I've personally tested competitor products that confidently recommend toxic foods like grapes or chocolate in "small amounts." I've seen them suggest waiting twelve hours when a dog is showing textbook signs of bloat that will kill them in two. I've watched them provide advice that sounds perfectly reasonable and authoritative to someone who doesn't know better, and that advice could genuinely kill someone's dog. These are systematic problems that happen because these models were never designed for high-stakes medical decisions, and no amount of prompt engineering fixes that fundamental mismatch. What makes me furious is that these companies know this. They know their systems make mistakes. They know people trust AI answers because they sound confident and authoritative. And they launch anyway, because moving fast and capturing market share matters more than whether their product might kill animals.

That is, in my view, a serious misalignment between deployment speed and safety requirements for the industry we operate in. When AI mistakes in healthcare or finance or legal advice cause harm, we rightly hold those companies accountable. But somehow in pet care, there's this attitude that it's acceptable to ship something that's "mostly good enough" and iterate later. That's not acceptable to me. That's not what building technology should look like, especially when the stakes are this high and the users are this vulnerable. This isn't just competitor criticism. This is an industry-wide problem that reflects how we think about AI deployment more broadly. The assumption that you can take a general-purpose tool, point it at a specialised high-stakes domain, and hope for the best is fundamentally broken. And in pet care specifically, where most users don't have the background to evaluate medical advice critically, that broken assumption costs lives.

So we are stepping up to build what we think the world deserves.

"Every month, a new AI deployer spins up, slaps a dog logo on a generic model, refuses to spend any real time on safety infrastructure, and then markets it as safe and accountable pet care. That's appalling. So we're deciding to go all in. We're fighting for a future where the intelligence people have access to is actually safe, actually smart, and actually accountable"

- Deni Darenberg

I think about this a lot. When we launched in August 2024, we were a blog. Just articles trying to help dog owners find clear, trustworthy information. Then we built Max, and suddenly we were in a completely different category. We were giving real-time guidance to people when their dog was choking, when they found toxins, when something felt wrong and they needed answers immediately. Max has saved four dogs' lives. Four families got to celebrate birthdays they thought they'd lost. That changes everything about what responsibility means. When Forbes covered us, when Investopedia featured Max, when we went on NewsNation in front of more than 20 Million Americans, I felt this weight I'd never experienced before. Sixteen thousand people use dogAdvisor every single week. That's hundreds of thousand times someone trusted us enough to ask for help with something they genuinely care about. When Imperial College London invited me to speak about AI safety, I realised we'd built something that people were looking at as a model for how to do this right. That trust is absolutely terrifying because I know what happens when you get it wrong.

These challenges, watching competitors ship dangerous products whilst claiming innovation, seeing the systematic failures that happen when you treat the lives of animals as testing grounds, understanding that most dog owners trust the confident answer they're getting even when it could harm their dog, all of this formed the foundation of what dogAdvisor needs to be. We decided dogAdvisor has to be three things: research-driven, accountable, and responsible.

We're helping thousands of dog owners every week. We've been recognised globally for innovation. We've built technology that's genuinely saving lives. And I look at the rest of the pet AI industry and I see companies treating this like a fun product opportunity rather than the profound responsibility it actually is. Someone has to build the infrastructure that should exist. Someone has to prove that you can deploy AI in high-stakes domains safely, transparently, and accountably. I think that's us. I think we have a duty to do this.

So today we're announcing that dogAdvisor is transitioning to a research organisation. Our mission is building safe, trustworthy pet intelligence infrastructure that meets the standards this responsibility demands. This is the actual operational structure of what we're building and how we're building it. We are becoming a research organisation because that is what the work requires, and I believe it's our duty to do this properly even when it's harder and slower than just shipping wrappers.

We've been questioned. We've been copied. We've been criticised for being too slow, too careful, too obsessed with safety whilst competitors ship fast and claim victory. It's pushed us to our limits more times than I can count. But we're not here to follow the rules other set. We're here to set the standard they should be meeting.

"We're not here to follow the rules others set. We're here to set the standard they should be meeting. We say yes to accountability. Yes to safety. Yes to responsibility. Yes to innovation that serves dog owners. But every yes we say is only possible because of everything we refuse. No cut corners. No lies. No hidden failures. No hype. No false promises. That is dogAdvisor"

- Deni Darenberg

You might read this and think we've got very little hope for the AI industry. It couldn't be more opposite. I'm writing this as the youngest person ever invited to Google for an AI innovation event, the youngest person to speak at Imperial College London on AI safety, the youngest person ever to speak at the London AI Hub, and one of the youngest founders ever to appear on Investopedia, Microsoft News, and NewsNation. I'm meeting with some of the brightest minds this industry has to offer, and what I see is incredible potential paired with genuine commitment to getting this right.

I'm humbled by the fact that Max has saved four dogs' lives. That's four families who still have their best friend because we spent the time building safety infrastructure properly. Four dogs who got to celebrate another birthday. Four owners who didn't have to say goodbye.

But here's what I actually believe: we're not just making pet care better. We're proving something much bigger. We're showing that you can build AI for high-stakes domains that's genuinely safe, genuinely accountable, genuinely deserving of trust. Every time Max helps someone in an emergency, every time we publish an incident instead of hiding it, every time we choose rigorous safety work over fast shipping, we're demonstrating that responsible AI deployment isn't just possible - it's better. If we can prove it works here, if we can show the world that transparency builds more trust than perfection, that safety infrastructure is worth the investment, that accountability makes you stronger rather than weaker, then maybe we change how everyone thinks about deploying AI in domains where mistakes matter. Healthcare. Finance. Education. Everywhere the stakes are high and the users are vulnerable.

That's what dogAdvisor is becoming. That's the future we're building toward. And I genuinely believe we're going to change the world.

Building safe, smart, and accountable AI

As of 21st April 2026, the latest Max model is Max Generation 4, and we expect to deliver Max Generation 5 to dog owners later this year. Max Generation 4 is trained on dogAdvisor's own expertly-written dog care articles and a custom health and medical intelligence hub, written entirely by dogAdvisor. In independent safety evaluations, Max Gen 4 is 27% safer and 20% smarter than ChatGPT-4, Grok 3, and Perplexity when answering 1200 dog health queries. Our testing included emergency scenarios where generic models recommend dangerous actions like waiting when immediate vet care is needed, or suggesting toxic foods are safe in small amounts. There are currently no established, industry-wide benchmarks for evaluating safety in pet-specific AI systems, which means most performance and risk assessment has to be developed internally rather than measured against a standard reference point. As a result, we evaluate Max using a combination of large-scale real-world queries from dog owners, structured simulated emergency scenarios, and adversarial red-teaming designed to surface failure modes that wouldn’t appear in static datasets. There are currently no established, industry-wide benchmarks for evaluating safety in pet-specific AI systems. Like OpenAI, Anthropic, Google DeepMind, and other AI deployers operating in specialised domains, we conduct internal safety evaluations using structured testing pipelines validated against real-world deployment behaviour. Our evaluation methodology is grounded in thousands dog health queries tested across emergency scenarios, adversarial red-teaming protocols, and continuous monitoring of production systems. This approach mirrors standard practice across frontier AI development: internal evaluation with published methodology, transparent incident disclosure, and public documentation enabling external verification. We publish our complete testing approach, safety infrastructure design, and failure case analysis because transparency enables replication and critique, which strengthens the entire field's ability to build safer systems. For more specific and technical information, you should see our safety and research pages. You can review more on how we conduct this testing and our general approach to intelligence on our Safety & Accountability and Research page.

Our safety infrastructure works in four layers. First, our Foundational Safety Framework routes every question to the appropriate capability before Max begins formulating an answer, ensuring emergency questions get immediate actionable guidance whilst health questions get technical reasoning. Second, Principle Alignment ensures every response honours welfare, honesty, and professional restraint, with no answer validated unless it's aligned with these core principles. Third, Safety Stop monitors entire conversation histories for repeated attempts to bypass safety systems, issuing escalating warnings and restricting functionality when violations persist. Fourth, Welfare Protection detects mental health crisis indicators and provides compassionate crisis resources developed in partnership with Mind and Samaritans. Every layer is tested systematically through red-teaming and adversarial techniques designed to break protective systems before malicious actors find vulnerabilities. You can read more about our safety approach on our dogAdvisor Safety & Accountability page and we look forward to sharing more information about the result of our safety testing for Max Generation 5 on our Research and Safety when the model is available.

When failures occur, we document them publicly. Every safety incident Max has ever had is published on our incident disclosure page with a unique identifier, date, severity classification, root cause analysis, fix implemented, and current status. We expect Max can and will fail, offer incorrect advice, miss emergencies, or enforce policies incorrectly. Our accountability system means acting when those failures happen rather than hiding them. You can read more about our accountability approach on our dogAdvisor Safety & Accountability page and you can review previous safety incidents and how we responded on our dogAdvisor Safety Transparency page.

As part of our transition to a research organisation, we're expanding our technical documentation beyond what's standard in the industry. We publish detailed model cards for every Max generation documenting capabilities, limitations, known failure modes, training data composition, and safety evaluation results. We publish our safety research methodologies so other teams building pet AI can verify our claims and improve their own systems. You can read more about dogAdvisor Max and how he saved the lives of four dogs at our Stories page. We publish system cards explaining how Max actually behaves in deployment, not just what we hope he does in testing. With Max Generation 5 we will introduce our research publicly on the dogAdvisor Research page, and we're extending these resources as our research capabilities grow.

(Β© Copyright) dogAdvisor 2026

Company Announcements

21 April 2026 | Company