Besides the data I already implemented in the Exploration Report, I decided to add “clues visited” and “total possibilities explored” as well, because the path design is complex and there are multiple routes, including loops, to each ending.
The number of clues visited in one exploration can exceed the number of unique clues explored, since players can use the loops to go back and forth and revisit clues.
To calculate the total number of available paths, I used exhaustive enumeration with the help of AI tools, providing them the specific topology structure. Since my design contains many loops, I only count the first pass through each loop to avoid infinite paths, yet the exact number is still hard to count by hand. After computing with a depth-first search (DFS), I got a total of 227 valid paths to either ending.
Thus, “Total possibilities explored” is calculated as: unique paths explored / total available paths (227).
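As a rough illustration, here is a minimal sketch of the counting idea in Python. The graph, node names, and the “first loop only” rule (approximated as traversing each edge at most once per path) are hypothetical stand-ins, not my actual topology.

```python
# Minimal sketch of the path counting (hypothetical toy graph, not the real topology).
# "Count the first loop only" is approximated by traversing each edge at most once
# per path, which keeps the DFS finite even though the graph contains cycles.
def count_paths(graph, start, endings):
    count = 0

    def dfs(node, used_edges):
        nonlocal count
        if node in endings:
            count += 1
            return
        for nxt in graph.get(node, []):
            edge = (node, nxt)
            if edge not in used_edges:
                dfs(nxt, used_edges | {edge})

    dfs(start, frozenset())
    return count

# Toy topology with one loop back to the start and a single ending.
toy_graph = {
    "start": ["clue_a", "clue_b"],
    "clue_a": ["clue_b", "start"],  # loop back to the start
    "clue_b": ["ending"],
}
total_available = count_paths(toy_graph, "start", {"ending"})  # 3 here; 227 in my design
# "Total possibilities explored" for one player is then
# unique_paths_explored / total_available.
```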
With all the report data in place, I started designing the rubric for the “Human Detector”; a sketch of how the weights combine follows the table:
| Metric | Weight |
| --- | --- |
| Solutions explored | 20% |
| Clues explored | 30% |
| Clues visited | 20% |
| Total possibilities explored | 30% |
| Total | 100% |
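As a sketch of how the weights could be combined, assuming each metric is first normalized to the 0 to 1 range (the metric keys and normalization are my assumptions, not the exact in-game implementation):

```python
# Hypothetical weighting of the four report metrics into a single "human" score.
WEIGHTS = {
    "solutions_explored": 0.20,
    "clues_explored": 0.30,
    "clues_visited": 0.20,
    "total_possibilities_explored": 0.30,
}

def human_score(metrics):
    """Weighted sum of metrics already normalized to 0..1, returned as a percentage."""
    return 100 * sum(WEIGHTS[name] * metrics.get(name, 0.0) for name in WEIGHTS)

# Example: a player who found half the solutions, most clues, but few full paths.
print(human_score({
    "solutions_explored": 0.5,
    "clues_explored": 0.8,
    "clues_visited": 0.6,
    "total_possibilities_explored": 0.1,
}))  # -> 49.0
```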
After determining the rubric for the detector’s human-vs-AI percentage, I also designed different comments based on that percentage, to add to the entertainment of the game while encouraging players to explore more (a selection sketch follows the list):
- human <= 15%: Did you even use your brain? Did you let a monkey choose for you?
- 15% < human <= 25%: Is your strategy to brute-force your way through by wearing out your mouse? – try using your brain.
- 25% < human <= 40%: Did your brain finally wake up, or was that just a lucky accident?
- 40% < human <= 60%: It’s not good yet, but it’s not terrible either. Keep it up!
- 60% < human <= 80%: You meet minimum human standards!
- 80% < human <= 99%: You’ve left 97.42857% of players in the dust!
- human = 100%: No AI, no mods, just pure biological brilliance!
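A minimal sketch of how these tiers could be selected in code, assuming the detector hands back a single percentage; the thresholds mirror the list above, and the small gap between 99% and 100% is folded into the top tier here:

```python
# Tier thresholds and comments, mirroring the list above (the wiring is hypothetical).
TIERS = [
    (15, "Did you even use your brain? Did you let a monkey choose for you?"),
    (25, "Is your strategy to brute-force your way through by wearing out your mouse? Try using your brain."),
    (40, "Did your brain finally wake up, or was that just a lucky accident?"),
    (60, "It's not good yet, but it's not terrible either. Keep it up!"),
    (80, "You meet minimum human standards!"),
    (99, "You've left 97.42857% of players in the dust!"),
    (100, "No AI, no mods, just pure biological brilliance!"),
]

def comment_for(human_pct):
    """Return the comment for the first tier whose upper bound covers the score."""
    for upper, text in TIERS:
        if human_pct <= upper:
            return text
    return TIERS[-1][1]

print(comment_for(72))  # -> "You meet minimum human standards!"
```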
Next, I worked on the tooltip text for each clue: I wrote several differently styled comments for each clue (using both my own ideas and ChatGPT and DeepSeek), and the game randomly displays one of them each time the player interacts. This keeps each visit fresh and makes the internal process of solving a puzzle feel more dynamic by imitating the whimsy of human thinking.
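A small sketch of the random variant selection, with placeholder clue IDs and tooltip strings (the real lines are written per clue, partly by hand and partly with AI assistance):

```python
import random

# Placeholder variants; in the game each clue carries its own hand-written set.
TOOLTIP_VARIANTS = {
    "clue_example": [
        "First tooltip variant for this clue.",
        "Second, differently worded variant.",
        "Third variant with another tone.",
    ],
}

def tooltip_for(clue_id):
    """Pick a random variant so repeated visits read slightly differently."""
    return random.choice(TOOLTIP_VARIANTS[clue_id])

print(tooltip_for("clue_example"))
```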
Here’s the current demo video:
Next Steps
- Finish drawings for the remaining clue icons
- Iterate a little bit based on playtesting feedback
- Work on the achievement system