Delphium Labs · Applied AI Research · London · 2026

Perspective · Mar 2026

From Research to Product: Why We Built FindingFin

By Delphium Labs

It started as a spreadsheet

FindingFin did not start as a product idea. It started as a spreadsheet.

In mid-2024, our research team at Delphium Labs began running a straightforward experiment. We asked ChatGPT, Perplexity, and Gemini to recommend hotels, restaurants, and venues across dozens of UK locations. We recorded every property mentioned, checked what signals those properties had in common, and documented it all in a shared spreadsheet that grew more unwieldy every week.

By the time we had logged 500 queries and audited over 200 individual hospitality businesses, two things were clear. First, independent hospitality businesses were systematically invisible to AI engines. Second, the reasons were specific, measurable, and fixable. The problem was that no tool existed to help operators understand those reasons or act on them.

That gap between research insight and practical action is where FindingFin was born.

The research phase

Delphium Labs is an applied AI research company. We do not start with product concepts and work backwards. We start with questions and follow the data.

The question that launched this work was straightforward: how do AI engines decide which hospitality businesses to recommend? Not in theory. In practice. When a traveller types "best boutique hotel in the Cotswolds" into ChatGPT, what determines which properties appear in the response?

We designed a study to find out. Over the course of several months, we ran hundreds of queries across ChatGPT, Perplexity, and Gemini, covering every major hospitality category. Hotels, restaurants, wedding venues, tour operators, serviced apartments, glamping sites. We varied the query types: direct searches, comparison queries, qualifier queries, location-specific questions, experience-based prompts.

For every query, we recorded every business mentioned by name. Then we audited those businesses. We checked their websites for structured data and schema markup. We assessed their content depth. We counted their reviews across platforms. We evaluated their Google Business Profiles. We examined their presence on OTAs, travel publications, and social platforms.
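The audit dimensions above can be sketched as a simple record. This is a minimal illustration only; the field names, thresholds, and flags are hypothetical, not the actual rubric used in the Delphium Labs study.

```python
from dataclasses import dataclass

@dataclass
class PropertyAudit:
    """One row of the audit: visibility signals checked per business.

    All fields and thresholds are illustrative, not the study's real rubric.
    """
    name: str
    has_schema_markup: bool      # structured data present and valid
    content_depth_score: float   # 0.0-1.0, e.g. room/location detail
    review_count: int            # totalled across platforms
    gbp_complete: bool           # Google Business Profile filled out
    ota_listings: int            # presence on OTAs / travel publications

    def signal_gaps(self) -> dict:
        """Flag the gaps most likely to hurt AI visibility."""
        return {
            "missing_schema": not self.has_schema_markup,
            "thin_content": self.content_depth_score < 0.5,
            "few_reviews": self.review_count < 50,
            "incomplete_gbp": not self.gbp_complete,
        }

audit = PropertyAudit("Example Inn", False, 0.3, 24, True, 2)
print(audit.signal_gaps())
```

A row like this makes the pattern in the next paragraph visible at a glance: a well-reviewed, well-run property can still trip every technical flag.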

We did the same for businesses that did not appear but, by any reasonable measure, should have. Properties with strong reputations, excellent guest reviews, and compelling offerings that were simply invisible to AI engines.

The data told a consistent story. The factors that determined AI visibility were not the same factors that made a business good at what it does. They were technical, structural, and content-related. And almost no independent hospitality businesses were getting them right.

The gap we found

The core finding of our research was a structural mismatch. Most hospitality businesses have good websites designed for human visitors. Clean design, attractive photography, a booking widget, some copy about the property. For a human browsing the site, this works well enough.

But AI engines do not browse websites the way humans do. They extract structured data. They parse content for specific, detailed information. They look for signals that indicate authority, completeness, and relevance to a specific query.

Almost none of the independent hospitality businesses we audited had structured data implemented correctly. Most lacked schema markup entirely. Room descriptions were minimal - often just a room type name and a price. Location content was thin or non-existent. FAQ pages targeting real traveller questions were rare.

The result was predictable. AI engines recommended chain hotels and OTA-heavy properties by default. Chains have corporate web teams that implement schema at scale. OTAs have massive content footprints and domain authority that AI engines draw from. Independent businesses, which often offer the best and most distinctive experiences, were invisible. Not because they are worse. Because they present themselves online in ways that humans understand but AI engines do not.

Our 500-query study quantified this: chain hotels appeared in 72% of AI-generated recommendations. Independent properties accounted for 28%. The split was consistent across query types and engines, though each engine showed its own patterns.

This was the gap. Not a knowledge gap - hospitality operators know how to run great businesses. A visibility gap driven by technical factors that most operators have never been told about and would not know how to fix even if they had.

Why existing tools did not solve this

When we started sharing our research, the first question we heard was: "Is there a tool that fixes this?"

We looked at what was available. The answer was no - not in any meaningful sense.

SEO tools are not built for AI visibility. Traditional SEO platforms are designed for Google Search rankings: keyword positions, backlink profiles, page authority. AI visibility operates differently. A property can rank on page one of Google for its brand name and still be completely absent from ChatGPT recommendations. The signals that drive AI visibility overlap with SEO in some areas (structured data, content depth) but diverge in others (answer-readiness, contextual relevance, multi-engine presence). Trying to optimise for AI visibility using SEO tools is like navigating with a road map when you need a nautical chart. Some landmarks are the same, but the terrain is different.

AI monitoring tools show scores but not solutions. A small wave of AI visibility monitoring tools began appearing in 2025. Most of them do essentially the same thing: they check whether your brand is mentioned in AI engine responses and give you a score. This is useful as a baseline, but it stops at measurement. You know your score. You do not know why it is what it is, which queries you are missing from, or what to change. We have written separately about why monitoring alone is not enough.

No tool was specifically designed for hospitality's unique challenges. Even among the emerging AI visibility tools, none were built for hospitality. Hotels, restaurants, and venues have specific schema types, specific query patterns, specific competitive dynamics, and specific content needs. A generic AI visibility tool does not know that a hotel should implement HotelRoom schema with room-type-level detail, or that a restaurant's FAQ page should answer "do they have a tasting menu" because that is a query travellers actually ask AI engines.
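For concreteness, the hospitality-specific markup described above looks roughly like this. It is a minimal JSON-LD sketch built from standard schema.org types (Hotel, HotelRoom, FAQPage); the property and all its details are invented for illustration.

```python
import json

# A hotel with room-type-level detail, expressed as schema.org JSON-LD.
# The property name, room, and answer text are invented for illustration.
hotel = {
    "@context": "https://schema.org",
    "@type": "Hotel",
    "name": "The Example Arms",
    "address": {
        "@type": "PostalAddress",
        "addressLocality": "Bristol",
        "addressCountry": "GB",
    },
    "containsPlace": [
        {
            "@type": "HotelRoom",
            "name": "Garden Suite",
            "description": "King room overlooking the walled garden, "
                           "with a freestanding bath and private terrace.",
            "occupancy": {"@type": "QuantitativeValue", "maxValue": 2},
        }
    ],
}

# An FAQ entry answering a question travellers actually put to AI engines.
faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "Do you have a tasting menu?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Yes, a seven-course tasting menu is served "
                        "Thursday to Saturday.",
            },
        }
    ],
}

# Each object would be embedded in the page inside a
# <script type="application/ld+json"> tag.
print(json.dumps(hotel, indent=2))
```

The point is the granularity: a generic tool might emit a bare Hotel type, but the room-level and question-level detail is what gives an AI engine something specific to extract.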

The gap was clear. What the market needed was not another monitoring dashboard. It needed a tool that combined AI visibility measurement with hospitality-specific intelligence, gap analysis, and actionable recommendations.

What FindingFin does

FindingFin is the product we built to close that gap. Every feature traces back to something we learned in our research.

Multi-engine visibility check. FindingFin shows you how your property appears across ChatGPT, Perplexity, and Gemini. Not a single aggregated score, but engine-by-engine visibility, because each engine behaves differently and the actions you take to improve visibility on each one are different. Our research showed that Perplexity favours recent, detailed web content. Gemini leans heavily on Google Business Profile data. ChatGPT correlates with editorial mentions and brand recognition. FindingFin shows you where you stand on each.

Query-level detail. FindingFin does not just tell you whether AI engines mention your brand. It shows you which specific traveller questions surface your property and which do not. "Boutique hotel in Bristol" might return your property. "Romantic weekend break near Bath" might not. That query-level granularity is essential for understanding where to focus your content and optimisation effort.

Gap analysis. This is the feature that emerged most directly from our research methodology. When your property does not appear for a query but a competitor does, FindingFin examines what the visible property has that you lack. Better schema markup. More detailed room descriptions. A local area guide. More Google reviews. The gap analysis turns an absence into a specific, actionable diagnosis.

Prioritised recommendations. Based on 18 months of Delphium Labs research data, FindingFin ranks recommendations by expected impact. Schema markup implementation consistently ranks as the highest-impact technical change. Content depth improvements follow. Review strategy is a longer-term play with compounding returns. FindingFin puts the highest-impact items at the top, so operators with limited time and resources know exactly where to start.
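The prioritisation described above amounts to an impact-weighted ranking. A minimal sketch follows; the actions and weights are invented for illustration and are not FindingFin's actual scoring model.

```python
# Rank recommended fixes by expected impact so operators with limited
# time know where to start. Weights are illustrative, not real scores.
recommendations = [
    {"action": "Add FAQ page for traveller questions", "impact": 0.4},
    {"action": "Implement hotel schema markup", "impact": 0.9},
    {"action": "Expand room descriptions", "impact": 0.7},
    {"action": "Grow review volume", "impact": 0.5},
]

ranked = sorted(recommendations, key=lambda r: r["impact"], reverse=True)
for i, rec in enumerate(ranked, 1):
    print(f"{i}. {rec['action']} (impact {rec['impact']:.1f})")
```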

Progress tracking. After you make changes, FindingFin measures the result. Before-and-after comparisons tied to specific actions close the loop between diagnosis and outcome. You do not just hope your changes worked. You see whether they did, and by how much.

The approach behind the product

The principles that guided FindingFin's development reflect who we are as a company.

Research-led. Every recommendation in FindingFin is backed by Delphium Labs data. We do not guess which factors matter for AI visibility. We measure them. When we tell a hotel that schema markup is their highest priority, it is because our data shows a 2.1x correlation between comprehensive schema and AI citation rates. When we recommend detailed room descriptions, it is because we have measured a 3.4x visibility advantage for properties with specific, descriptive room content.

Hospitality-specific. FindingFin is not a generic AI tool repurposed for hotels. It was built from day one for the hospitality industry. The schema types it checks are hospitality schema types. The queries it runs are real traveller queries. The competitive analysis compares you against hospitality businesses, not every business on the internet. The recommendations reflect the specific content and technical needs of hotels, restaurants, venues, and tour operators.

Action-oriented. Knowing your score is step one. Improving it is the point. Every feature in FindingFin is designed to drive a specific action. The gap analysis tells you what to fix. The prioritisation tells you what to fix first. The progress tracking tells you whether the fix worked. We built FindingFin to be the shortest path between "I want better AI visibility" and "here is exactly what to do about it."

Built in London, UK. Delphium Labs is based in London, close to the hospitality businesses we serve. We work with independent hotels, restaurants, and venues across the UK. The properties we study and the operators we talk to are not abstractions. They are businesses we know, in communities we are part of. That proximity shapes how we build.

Still researchers at heart

FindingFin is a product, but Delphium Labs is a research company. That distinction matters.

We are still running studies. We are still auditing properties. We are still logging queries and tracking how AI engine behaviour shifts over time. Every insight that goes into FindingFin comes from ongoing Delphium Labs research, not from a single study we ran once and stopped updating.

The AI visibility landscape is changing fast. Engines update their models, their retrieval methods, their source preferences. Query patterns shift with traveller behaviour. New competitors enter the space. What worked six months ago may not work the same way today.

That is why FindingFin is built on a research foundation rather than a static rulebook. As our data evolves, the product evolves with it. The recommendations you see in FindingFin today are informed by the most recent Delphium Labs analysis, and they will continue to update as we learn more.

We started with a spreadsheet. We built it into a research programme. That research programme revealed a gap in the market that no existing tool addressed. So we built FindingFin to close it.

The spreadsheet still exists, by the way. It has 47 tabs now. But the insights it contains are in a much more useful format.