
We recently began working with a new client who is in the early stages of developing a location-based entertainment center (LBE) in his local community. He had used AI chatbots to create pro forma projections. We reviewed them and found some clear errors.
So, we decided to try using an AI chatbot ourselves to create a pro forma for a social game eatertainment center. It was quite an educational experience.
For example, the center we described to the AI had a 40-game redemption game room. The AI projected $40 per game per week in revenue, at most one-tenth of what games in a decent redemption game room actually generate. It also projected the cost of goods sold (COGS) for redemption prizes unrealistically low. Likewise, it estimated per capita game revenue at just $1.25, which is significantly off. Fortunately, an entrepreneur relying on those projections probably wouldn't develop a roadkill LBE, as the projections are too low to secure funding.
However, in another projection scenario we ran, an LBE funded on the AI's numbers would have failed: the chatbot estimated per capita revenue at $100, about twice what is reasonable.
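A quick back-of-the-envelope check shows how far off both projections are. The benchmark figures in this sketch (roughly $400 per game per week for a decent redemption room, and about $50 as a reasonable per capita revenue) are illustrative assumptions consistent with the error factors described above, not published industry data:

```python
# Back-of-the-envelope sanity check on the AI's projections.
# Benchmark values are illustrative assumptions, not published figures.

GAMES = 40                       # redemption games in the described center
ai_weekly_per_game = 40          # AI's projection: $40 per game per week
benchmark_weekly_per_game = 400  # assumed low end for a decent redemption room

ai_annual = GAMES * ai_weekly_per_game * 52
benchmark_annual = GAMES * benchmark_weekly_per_game * 52
print(f"AI projection: ${ai_annual:,.0f}/year")         # $83,200
print(f"Benchmark:     ${benchmark_annual:,.0f}/year")  # $832,000
print(f"AI projection is {ai_annual / benchmark_annual:.0%} of benchmark")

# The opposite failure mode: per capita revenue projected far too high.
ai_per_cap = 100         # AI's estimate in the second scenario
reasonable_per_cap = 50  # assumed reasonable figure, about half the AI's
print(f"Per capita overstated {ai_per_cap / reasonable_per_cap:.1f}x")
```

Either direction of error is dangerous: projections that are too low kill a viable project's funding, while projections that are too high fund a project that can't survive.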
These are just a few examples of the numerous errors we found while using the AI chatbot to generate LBE revenue and expense projections.
Here's what Claude, one of the popular AIs, warns about its accuracy right on its screen:
[Screenshot: Claude's on-screen accuracy warning]
If you click on the warning, here is what you see:
"In an attempt to be a helpful assistant, Claude can occasionally produce responses that are incorrect or misleading.
"This is known as "hallucinating" information. . . For example, in some subject areas, Claude might not have been trained on the most-up-to-date information and may get confused when prompted about current events. Another example is that Claude can display quotes that may look authoritative or sound convincing, but are not grounded in fact. In other words, Claude can write things that might look correct but are very mistaken.
"Users should not rely on Claude as a singular source of truth and should carefully scrutinize any high-stakes advice given by Claude.
"When working with web search results, users should review Claude's cited sources. Original websites may contain important context or details not included in Claude's synthesis. Additionally, the quality of Claude's responses depends on the underlying sources it references, so checking original content helps you identify any information that might be misinterpreted without the full context."
Besides AI hallucinations and confusion, another limitation of using AI for projections is its very limited access to online data on LBE revenues and expenses. Only public companies are required to disclose their financial details, and even when they do, the data is often not precise. For example, the revenue and expenses Dave & Buster's reports cover a mix of all their LBE operations, including some older, larger stores and some newer, smaller ones. Their financials also combine the Dave & Buster's and Main Event FECs they own, which are two completely different types of LBEs. So their financial data is only an average across their different types of LBEs.
Lucky Strike Entertainment Corp is another example of opaque financial data. The company owns 300 bowling centers, ranging from traditional AMF alleys to Bowlero centers to upscale Lucky Strike locations, so their financial data reflects an average across three different center types.
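To see why such a blended average misleads, consider a minimal sketch with hypothetical numbers (the location counts and per-location revenues below are invented for illustration, not taken from any company's reports):

```python
# Hypothetical illustration of how a blended average hides
# per-type economics. All figures are invented for this sketch.

# (location count, average annual revenue per location) by center type
center_types = {
    "traditional alley": (150, 1_500_000),
    "mid-tier center":   (100, 3_000_000),
    "upscale location":  (50,  6_000_000),
}

total_revenue = sum(count * rev for count, rev in center_types.values())
total_locations = sum(count for count, _ in center_types.values())
blended = total_revenue / total_locations

print(f"Blended average: ${blended:,.0f} per location")
for name, (count, rev) in center_types.items():
    print(f"  {name}: ${rev:,.0f} ({rev / blended:.0%} of blended average)")
```

In this made-up example the blended average works out to $2.75 million per location, yet no single center type actually performs at that level, which is exactly the problem with building a pro forma from a multi-concept operator's consolidated reports.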
The LBE industry has a lot of roadkill: centers that didn't succeed. Don't rely solely on AI chatbots to help you build a successful LBE.