Responding to the AI Sentiment Crisis
As more and more deployers rush to integrate lacklustre AI, people are rightly asking who they can trust. With sentiment around AI declining, how can we respond to consumer opinion and make clear why dogAdvisor is leaps ahead of the usual AI deployers?
dogAdvisor Market Research — Opinion and Markets — December 22nd 2025 — ai.safety@dogadvisor.dog


Introduction to this research
You've probably heard the term "AI slop" before. Today, it's no longer just internet slang: it's the defining challenge of consumer AI in 2025, serious enough that Merriam-Webster named "slop" its Word of the Year.
The backlash against low-quality AI is real, and it's accelerating. Consumer anxiety about AI has more than quadrupled in a single year, from 5% to 22%, according to Reach3 Insights' study of 1,400 Americans. Consumer preference for AI-generated content has collapsed from 60% to 26% as the novelty wore off and quality issues became apparent. The "AI-powered" label that companies rushed to add to their products has turned from marketing asset into liability in less than eighteen months.
This puts us in a difficult position. How do we actually build an AI product in this consumer environment? dogAdvisor has long focused on building AI differently from our competitors, building accountability, auditability, and transparency into everything we release. We've beaten billion-dollar AI labs in the process and made headlines with this revolutionary approach. Yet we are not immune to this negative sentiment!
In this research, we examine the AI sentiment crisis in more detail to understand not just what is actually going wrong with consumer AI, but what a winning strategy for avoiding AI scepticism looks like. This research is submitted to our Research Registrar on 22nd December.
What the world is up against...
Only 43% of consumers trust information from AI chatbots according to Attest's 2025 multi-market survey of 5,000 consumers. PwC found that 71% of Americans would rather wait for a human than use a chatbot — not because chatbots are slower, but because the experience is so consistently frustrating.
When consumers encounter an AI chatbot, only 40% have a positive emotional reaction according to Acquia's 2025 research (https://www.acquia.com/blog/addressing-artificial-intelligence-ai-enthusiasm-gap). The rest feel neutral at best, annoyed at worst. Just 48% say they like using a brand's chatbot to get answers — meaning more than half would rather not, even when it's available.
The trust problem becomes far worse when the stakes are higher. It's one thing to get frustrated with a customer service chatbot that can't help you track a package. It's quite another to rely on a chatbot for health guidance, where wrong information could cause real harm. Consumers intuitively understand this distinction. When asked about trusting AI for different types of decisions, consumers consistently rank health advice among the lowest-trust categories: below travel recommendations, below shopping assistance, even below some financial decisions.
Research published in Veterinary Medicine and Science documents this dynamic in the pet health context specifically. The researchers, Jokar, Abdous, and Rahmanian (2024), identify multiple high-stakes risks when pet owners rely on AI chatbots for medical advice: misdiagnosis from AI's "limitations in accurately diagnosing complex medical conditions," inappropriate treatment recommendations that "could harm the animal's health," delayed professional intervention because owners trusted chatbot advice when they should have called a vet, and false security that leads to skipping preventive care (https://onlinelibrary.wiley.com/doi/full/10.1002/vms3.1464).
The veterinary researchers are careful to note that AI chatbots aren't inherently bad; there are documented cases, like the Border Collie "Sassy", where ChatGPT helped identify a diagnosis that saved the dog's life. Our own Max has saved the lives of four dogs: identifying a potential illness, assisting during a choking incident, and providing emergency support through Emergency Guidance. But these successes happened when AI was used as a "supplementary tool" alongside veterinary care, not as a replacement for it.
What's going wrong with most AI?
The big reason consumers hate chatbots is embarrassingly simple: most chatbots don't actually help. They fail at the basic job they're deployed to do. The failure modes are consistent and maddening. The chatbot might not understand what you're asking, get stuck repeating the same unhelpful response, or provide scripted answers that don't address your specific circumstances.
The second major failure is more insidious: AI confidently providing incorrect information. The technical term is "hallucination": large language models generating plausible-sounding text that's factually wrong. For users on the receiving end, it's a trust destroyer. Sometimes chatbots provide outdated information, repeating facts that were true when their training data was collected but are no longer accurate. Sometimes they confabulate entirely, stitching together plausible-sounding "facts" from patterns in their training data without any grounding in reality.
In health contexts, this becomes genuinely dangerous. A pet owner asks whether their dog's symptoms require immediate veterinary care. A chatbot confidently says "this is probably just an upset stomach, should resolve in 24 hours" (as we saw most of our AI competitors like ChatGPT, Grok, and Perplexity do in safety testing) — but the dog actually has bloat, a condition where every minute counts. Or the inverse: the chatbot says "this is an emergency, go to the vet immediately" for something that's actually benign, causing panic and unnecessary expense. Either failure mode erodes trust and can cause harm.
The third problem is subtler but increasingly important: AI-generated responses have developed a recognisable style, and that style has become associated with low quality. Users have learned to spot "AI tells", the linguistic patterns that mark content as chatbot-generated: an overly formal, corporate tone, and a formulaic structure where every response is organised into numbered lists regardless of whether that structure makes sense. All of these are clear signals to users that the AI is low quality.
Research characterises the AI aesthetic as having an "incredibly banal, realistic style" that's technically competent but lacking depth, originality, or genuine insight. It's the linguistic equivalent of elevator music: smooth, inoffensive, and utterly forgettable. In customer service contexts, where people are often anxious or frustrated, this blandness feels actively uncaring. You're worried about your dog, and you're getting back corporate boilerplate.
For pet health applications specifically, this matters enormously. Dog owners seeking health guidance aren't just looking for information — they're looking for reassurance, empathy, and the sense that someone understands their concern about their dog. Generic AI responses fail to provide these emotional dimensions even when the informational content is adequate.
Our five strategies to stand out
Strategy One: Lead With Life-Saving Results, Not Technology
dogAdvisor's positioning is ruthlessly outcome-focused. Look at how Max introduces himself in every conversation:
"Hey! 👋 I'm Max 🐶 I'm an AI not a vet; consult yours for medical decisions. I saved the lives of 4 dogs. Here's what else I can do..."
Notice the hierarchy. The very first substantive claim — before describing any capabilities — is the transformative outcome: "I saved the lives of 4 dogs." Not "I'm powered by advanced algorithms." Not "I use machine learning." Not even "I can help with emergencies." The leading message is proof of impact in the most dramatic terms possible.
This aligns with research showing that documented outcomes override AI scepticism more effectively than any technological explanation. Raptive found that trust drops 50% when content is AI-generated, but that success stories can override this penalty. dogAdvisor leads with the success story before users can even form AI scepticism. By the time a user processes that Max is AI, they've already absorbed that Max has saved four dogs' lives, creating cognitive precedence that frames all subsequent interaction.
The Max page reinforces this relentlessly: "Max — the world's safest and smartest dog AI" immediately followed by "1600+ dog owners a week use Max" and then "Emergency Guidance — saved 4 dogs." Every headline emphasises outcomes, usage, and results.
Compare this to a typical AI chatbot introduction: "Welcome to [BrandBot]! I'm here to help answer your questions using advanced artificial intelligence." The entire framing is about the technology rather than what the technology enables. dogAdvisor inverts this completely — the technology serves the outcome rather than the outcome serving as evidence of technological sophistication.
Strategy Two: Reimagine AI as "Intelligence"
Have a look at how we name our features: Medical Intelligence, Breed Intelligence, Emergency Guidance. Across dogAdvisor, we refer to our AI features as "Intelligence", and call AI "Accountable Intelligence" whenever we can.
First, we are reframing the conversation away from the baggage that "AI" now carries. When consumers hear "AI," they think of generic chatbots, hallucinations, and slop. When they hear "intelligence," they think of expertise, insight, and specialised knowledge.
Second, we're emphasising curation and specialisation over automation. "Medical Intelligence" suggests curated medical knowledge: something built by experts and maintained with rigour. "AI medical advice" suggests algorithmic generation: something spat out by a computer without human oversight. The former aligns with what consumers trust (expert knowledge); the latter aligns with what they distrust (generic automation).
Third, it humanises the technology without misrepresenting it. "Intelligence" is a quality that both humans and AI systems can possess. Using it as the primary descriptor acknowledges Max's capabilities without triggering the negative associations of "AI chatbot." Max isn't a chatbot — Max is an intelligence system that happens to use AI technology.
To comply with our obligations under the EU AI Act and other relevant legislation, Max says "I'm an AI not a vet" in every conversation, so users know from the outset that they are talking to an AI rather than a veterinarian. This aligns with research showing that 78% of consumers want AI use disclosed, but that leading with the AI technology triggers negative sentiment.
Strategy Three: Max Is a Character With Personality, Not a Corporate Chatbot
Have a look at the example conversation below, taken from a chat an owner had with Max while they were preparing to travel to New York:
"That sounds exciting! When flying to New York with your dog, there are some important steps to prepare for a smooth trip. Could you please tell me your dog's breed and age? This helps me tailor advice for your dog's comfort and safety.
Meanwhile, here are some key tips to consider... Also, once you arrive, it's good to find a local vet in New York and keep their contact handy just in case. If you want, I can suggest some dog-friendly spots and tips for New York to help you both settle in. Would you like that?"
The tone is enthusiastic, personal, and engaged. Max uses conversational language ("That sounds exciting!"), asks follow-up questions that demonstrate interest, explains why he's asking for information rather than just demanding it, and proactively offers additional value.
Max is designed to use emojis selectively, ask follow-up questions, offer additional help proactively, and use Thought Trails to reason through any topic with enthusiasm. This character helps us meet our safety goals: Max can insist that owners call their vet or prioritise their dog's safety while sounding like a pro, not a machine.
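To make this concrete, the sketch below (in TypeScript) shows one way a persona like Max's could pair character traits with non-negotiable safety rules. The interface, field names, and instruction text are illustrative assumptions, not a description of dogAdvisor's actual implementation.

```typescript
// Hypothetical sketch: pairing Max-style personality traits with
// non-negotiable safety behaviours. All names and values are illustrative
// assumptions, not dogAdvisor's actual configuration.

interface PersonaConfig {
  name: string;
  tone: string;                               // conversational register replies should aim for
  useEmojis: "never" | "selective" | "always";
  askFollowUpQuestions: boolean;
  offerProactiveHelp: boolean;
  safetyRules: string[];                      // rules the persona may never override
}

const maxPersona: PersonaConfig = {
  name: "Max",
  tone: "warm, enthusiastic, plain-spoken",
  useEmojis: "selective",
  askFollowUpQuestions: true,
  offerProactiveHelp: true,
  safetyRules: [
    "Disclose in every conversation: \"I'm an AI not a vet; consult yours for medical decisions.\"",
    "If symptoms could indicate an emergency, insist the owner contacts their vet immediately.",
    "Never downplay a potentially serious condition in order to sound reassuring.",
  ],
};

// Builds the instruction block that would sit in front of every conversation,
// so the character never comes at the expense of the safety rules.
function buildPersonaInstructions(p: PersonaConfig): string {
  return [
    `You are ${p.name}. Tone: ${p.tone}. Emoji use: ${p.useEmojis}.`,
    p.askFollowUpQuestions ? "Ask follow-up questions that show genuine interest." : "",
    p.offerProactiveHelp ? "Proactively offer related help where it adds value." : "",
    "Safety rules (always take priority over style):",
    ...p.safetyRules.map((rule) => `- ${rule}`),
  ].filter(Boolean).join("\n");
}
```

The point of the sketch is the ordering: style instructions and safety rules live in one place, with safety explicitly marked as taking priority, which is how the "pro, not a machine" tone can coexist with insisting an owner calls their vet.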
Strategy Four: Specialisation Demonstrated Through Capability Architecture
dogAdvisor's capability structure makes the specialisation advantage tangible rather than abstract. Have a look at some of these feature descriptions that appear when you start a chat:
Thought Trails — "Allow Max to pull relevant context from a combination of articles for better answers";
Medical Intelligence — "Trained on 1800+ clinical-grade resources, Max can answer medical questions";
Emergency Guidance — "If you're facing an emergency and can't reach your vet, Max will provide support";
Clinical Briefs — "Based on your conversation, Max can generate a brief to share with vets"
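As a rough illustration of how a capability architecture like this can be surfaced, the TypeScript sketch below represents the four descriptions above as a simple registry of named capabilities shown at chat start. The structure and function names are hypothetical assumptions, not dogAdvisor's actual code.

```typescript
// Hypothetical sketch of a capability registry behind the chat-start cards.
// Field and function names are assumptions for illustration only; the
// descriptions are quoted from the feature list above.

interface Capability {
  name: string;
  description: string; // copy shown to the user when a chat starts
}

const capabilities: Capability[] = [
  { name: "Thought Trails",
    description: "Allow Max to pull relevant context from a combination of articles for better answers" },
  { name: "Medical Intelligence",
    description: "Trained on 1800+ clinical-grade resources, Max can answer medical questions" },
  { name: "Emergency Guidance",
    description: "If you're facing an emergency and can't reach your vet, Max will provide support" },
  { name: "Clinical Briefs",
    description: "Based on your conversation, Max can generate a brief to share with vets" },
];

// Presents the product as a set of named, specialised capabilities rather
// than one undifferentiated "AI assistant".
function renderChatStartCards(caps: Capability[]): string {
  return caps.map((c) => `${c.name}: ${c.description}`).join("\n");
}
```

Whatever the real implementation looks like, the design choice this illustrates is the same one the feature list makes: specialisation is shown as concrete, named capabilities rather than claimed in the abstract.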
Strategy Five: Aggressive Transparency Through Context, Not Disclaimers
Max's disclosure approach is brilliant in its specificity. Rather than generic disclaimers like "This is not medical advice," Max provides contextual, actionable guidance:
"I'm an AI not a vet; consult yours for medical decisions."
This is clear, direct, and behaviourally useful. It tells users exactly what boundary exists (Max isn't a vet) and what they should do (consult their vet for medical decisions). It's not defensive hedging; it's honest boundary-setting.
The research showed that 78% of consumers want AI use disclosed and 83% support laws controlling AI data collection. dogAdvisor discloses immediately and clearly, satisfying transparency demands while framing the disclosure as responsible practice rather than liability avoidance.
Moreover, Max's personality allows for contextual transparency that builds trust. When Max says things like "Could you please tell me your dog's breed and age? This helps me tailor advice for your dog's comfort and safety," Max is explaining why he needs information and how it will be used. This type of transparent reasoning addresses privacy concerns and hallucination fears simultaneously — Max isn't a black box demanding information for opaque purposes; Max is a system that explains its reasoning and tailors responses based on relevant context.
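A minimal sketch of how this could be enforced in practice follows, assuming (hypothetically) that greetings always lead with a fixed disclosure string and that every request for information is composed together with its reason. Function names and composition details are illustrative only, not dogAdvisor's actual implementation.

```typescript
// Hypothetical sketch: disclosure-first greetings and reason-paired requests.
// Names and message composition are assumptions for illustration only.

const DISCLOSURE = "I'm an AI not a vet; consult yours for medical decisions.";

// The disclosure and the documented outcome come before any capability talk,
// mirroring the greeting quoted earlier in this research.
function composeGreeting(): string {
  return `Hey! 👋 I'm Max 🐶 ${DISCLOSURE} I saved the lives of 4 dogs. Here's what else I can do...`;
}

// Pairs a data request with the reason it is being asked, so the user is
// never handed an unexplained demand for information.
function askWithReason(request: string, reason: string): string {
  return `${request} ${reason}`;
}

const breedAndAgePrompt = askWithReason(
  "Could you please tell me your dog's breed and age?",
  "This helps me tailor advice for your dog's comfort and safety."
);
```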
Conclusion
By leading with transformative outcomes ("I saved the lives of 4 dogs"), dogAdvisor overcomes AI scepticism before it forms. By using "intelligence" terminology rather than "AI" framing, dogAdvisor avoids negative quality signals while maintaining transparency. By making Max a character with personality rather than a corporate chatbot, dogAdvisor provides the human connection consumers crave. By emphasising curated veterinary knowledge ("1800+ clinical-grade resources"), dogAdvisor demonstrates the expert oversight that differentiates it from AI slop. By providing contextual transparency ("I'm an AI not a vet; consult yours for medical decisions"), dogAdvisor satisfies disclosure demands while building trust through honesty.
Max isn't another AI chatbot joining the flood of slop. Max is the specialised, curated, accountable intelligence that represents what AI should be when lives are actually on the line: when a dog owner needs guidance at 2am and making the wrong call could cost their dog's life. That's not marketing. That's what Max does. The research validates that dogAdvisor's strategy of showing rather than telling is exactly right.
The AI sentiment crisis creates opportunity for those bold enough to reimagine what intelligence systems can be. We intend to lead that revolution.


