Alexander M. West
Camera Shy: Heatmapping
When working on puzzle games, a recurring problem is figuring out where players get stuck or where they will struggle. Sometimes players get stuck in spots that are completely unexpected; they can struggle with seeing something that’s right in front of them rather than solving what they were intended to solve.
To reduce these problems, watching people play the game and finding their natural paths is a great technique that can surface solutions before a problem arises. However, watching multiple people test and play a game takes up a lot of time that could be spent doing other things. Rather than individually noting the behavior of multiple players, collecting player data is more effective and less time consuming.
Player data can appear in various forms depending on the information being collected: spreadsheets, graphs, charts or, more intuitively, heatmaps. A heatmap lets a developer log data across multiple play sessions, then look back to find common pain points and see how often something occurs in relation to space.
Heatmapping and data collection are what I looked into and worked on over the last two weeks. Since our game is both a hidden object game and a VR game, the data I wanted to collect was the space(s) the player was most often looking at. This information made it possible to determine both the most interesting parts of the map and which hidden objects were the hardest or quickest to find. Although this information was beneficial to the team, there are hindrances to collecting viewing points:
Frequency doesn’t necessarily mean that the player is actually interacting with that space; they could just be idly looking in that direction.
Alternatively, a player not looking at a space doesn’t mean that they are missing something; a player could very quickly find an object at a specific point and never look there again. That doesn’t mean the object was hard to find; this player just found it easily.
Despite these difficulties, however, this data can genuinely show trends and allow my team to see how and where help is needed in order to draw more attention to specific sections of the game.
Since we are only collecting viewing points on the map, heatmapping is relatively simple. To begin, a fragment shader is needed to display the given points. The shader computes the intensity at a specific fragment from the intensities and radii of every point; essentially, a weighted average is created per fragment, which can then be used to sample a color from a ramp texture. After the data is gathered, it’s simple to apply the shader to every object in the scene and send the data to it, producing a neat little heatmap that I can walk around and view.
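The per-fragment logic can be sketched outside of shader code. Below is an illustrative Python version of the weighted-intensity idea described above; the point format (position, intensity, radius), the linear falloff, and the ramp representation are assumptions for the example, not the project's actual shader.

```python
def fragment_intensity(frag_pos, points):
    """Sum each point's contribution at this fragment, falling off
    linearly to zero at that point's radius."""
    total = 0.0
    for (px, py, pz), intensity, radius in points:
        dx, dy, dz = frag_pos[0] - px, frag_pos[1] - py, frag_pos[2] - pz
        dist = (dx * dx + dy * dy + dz * dz) ** 0.5
        if dist < radius:
            total += intensity * (1.0 - dist / radius)
    return min(total, 1.0)  # clamp before sampling the ramp

def sample_ramp(ramp, t):
    """Map an intensity in [0, 1] to a colour from a 1D ramp
    (here just a list of RGB tuples standing in for a ramp texture)."""
    i = min(int(t * (len(ramp) - 1)), len(ramp) - 1)
    return ramp[i]
```

In a real shader the loop runs per fragment on the GPU and the ramp is an actual texture lookup, but the math is the same.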
If only it were so simple. Several problems arise from this method.
I have the potential of having A LOT of data to send to the fragment shader. Our game is expected to reach roughly a 30-minute play time by the end of the semester. To get data that’s actually useful, I need to gather a bunch of it every second; on a previous project I found that 10 samples a second is good enough for these purposes. That means a potential 18,000 points (10 samples a second * 60 seconds in a minute * 30 minutes in a play session). Each point is actually 4 floats, because Unity only allows sending Vector4s, not Vector3s, to shaders, which means up to 72,000 floats I need to send to the fragment shader. This is so much data, in fact, that Unity legitimately will not allow it to be sent. Beyond Unity’s limit, it’s also unnecessary for every fragment to process 18,000 vectors, especially since most of those points aren’t going to affect the color of the current fragment.
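The arithmetic above, spelled out:

```python
samples_per_second = 10    # sampling rate that worked on a previous project
minutes_per_session = 30   # expected play time by end of semester

points = samples_per_second * 60 * minutes_per_session  # 18,000 points
floats = points * 4        # each point is sent as a Vector4, so 4 floats
```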
To get around this, I had a thought: textures! I have Vector4 points and, as my professor always says, “it’s just data,” so I can pack the points into a texture and read it back in the same fragment shader. This means a 2D texture that is at most 135x135 pixels (upwards of 18,225 Vector4s).
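As a sketch of that packing step, here is how the points could be laid out in a square RGBA "texture." This is plain Python illustrating the layout math (side length, padding), not Unity's Texture2D API; the sentinel fill value is an assumption for the example.

```python
import math

def pack_points(points, fill=(0.0, 0.0, 0.0, 0.0)):
    """Pack a flat list of Vector4 tuples into the smallest square
    'texture' (a list of rows) that fits them, padding the tail
    with a sentinel value the shader can skip."""
    side = math.ceil(math.sqrt(len(points)))
    padded = points + [fill] * (side * side - len(points))
    return [padded[r * side:(r + 1) * side] for r in range(side)]
```

For 18,000 points this yields a 135x135 grid, matching the figure above.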
To get this data out of a texture, I need to sample every pixel. That means I’m still processing just as many points as before, which is still just as slow. It also means processing the data in a nested for loop, and GPUs really don’t like for loops and sequential processing.
That’s really bad, like really really bad.
At this point I wasn’t entirely sure how I wanted to display the data on screen. I had spent so much time debugging shaders and thinking through different solutions that I needed to pick something that would work and move on. I was ready to switch data formats and stuff everything into a spreadsheet with some processing on how often objects were being hit, which isn’t an awful idea, but visuals are better. This led me to Unity Gizmos. With Gizmos, I can draw spheres or wire spheres at each point in the scene to visualize where people are looking. Spaces with more drawn points are looked at more often, whereas spaces with fewer points are less noted by players.
In the end, I wish I had more time to spend on this to do an even deeper dive. There are a ton of other things I wanted to try, but our team only has so much time before the end of the semester.
With that being said, if I were to deep dive I would explore the following options:
Create an actual heatmap. I don’t know if this is feasible with a ton of points, but I might be able to use compute shaders to reduce the number of points, specifically duplicates.
Calculate during runtime. Probably not possible because it’s VR.
Send points to only the material affected and any nearby ones. This may work, but it’s more complicated, and raycasting in VR isn’t fun.
Send chunks of data, save the texture, send more chunks, and layer the textures on top of each other, with potential loss of data.
Currently I’m only saving the point the player is looking at on the map. Since this is VR, there are other things to account for: there are actually two eyes with independent movement rather than a single fixed gaze, people move things with their hands without looking at them (or look around to see what a movement does), people can move around the space with six degrees of freedom, and they can look at things from a distance. This means that where a player is looking often just isn’t enough. I would love to collect the full suite of data from the player: hand transforms, camera transform, play space size, and, if possible, eye tracking (not doable with current VR tech).

With this information I would love to build not just a heatmap but a playback feature that allows scrubbing and speed control. A developer could then watch the player play on the developer’s time instead of the player’s time. This could also ship with the game to collect data from players everywhere instead of just testers in a controlled environment. The biggest drawback is that collecting more data in VR might have significant performance impacts, in an environment where performance is extremely necessary.
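The core of that playback/scrubbing idea can be sketched in a few lines: record timestamped samples, then binary-search for the sample at any requested time. The `Recording` class and its sample payload are hypothetical names for this example, not anything that exists in the project.

```python
import bisect

class Recording:
    """Timestamped play-session samples with O(log n) scrubbing.
    Each sample is (timestamp, payload), where the payload might hold
    head transform, hand transforms, play space size, etc."""

    def __init__(self, samples):
        self.samples = sorted(samples)          # sort by timestamp
        self.times = [t for t, _ in self.samples]

    def at(self, t):
        """Return the payload of the latest sample at or before time t."""
        i = bisect.bisect_right(self.times, t) - 1
        return self.samples[max(i, 0)][1]
```

Playback at any speed is then just advancing `t` faster or slower and calling `at(t)` each frame; interpolating between adjacent samples would smooth it further.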
Look Into Making Heatmap Functional
I would seriously love to explore ways to actually get a heatmap to work with this amount of data. I don’t have a lot of ideas on how to do this, and initial research into the topic doesn’t turn up much. The thought of figuring something out makes me really excited and even more interested. The following three ideas are possible ways to get it to work; although not perfect, they could be good starting points if I have time.
Calculate Heatmap During Runtime
Instead of saving data and calculating it after the fact, I could create the heatmap in real time. This would require a sphere/circle cast instead of just a line cast, sending data to the specific material in Unity. From there, I can change the data on the material and seriously shrink the amount of data the fragment shader needs to process. The heatmap itself could then be saved as a set of textures and reloaded later. The biggest problem comes from VR and spec limitations: raycasting is expensive, and in VR it’s even more so, so doing sphere/circle casts multiple times a second will seriously drop frames.
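The "send data to the specific material" step boils down to bookkeeping: group each cast hit by the object it lands on, so each material only ever receives the points that can affect it. Here is a hypothetical Python outline of that grouping (the `hits` format is an assumption for the example, not Unity's raycast API).

```python
from collections import defaultdict

def bucket_by_object(hits):
    """Group gaze-cast hits by the object they land on, so each
    object's material is only sent its own points instead of all
    18,000. `hits` is a list of (object_id, hit_point) pairs, as a
    sphere/circle cast might report them."""
    per_object = defaultdict(list)
    for obj, point in hits:
        per_object[obj].append(point)
    return per_object
```

Each per-object list stays small enough to fit in a shader array, which is the whole point of the approach.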
Change Material Data Instead of Shader Data
This is basically the same idea as the previous one, except instead of running during gameplay it works from saved starting-point and directional data for casting. The idea is to change the data on each object’s material via sphere/circle casts. This would be done in-editor during “runtime,” so not while a player is playing but when a dev wants to view the results. The dev would load the data and run it in the editor; a bunch of sphere/circle casts would then add the point data to the materials. From there the data lives on each material and the dev could look at an actual heatmap. The largest problem with this approach is how long it would take to actually create that heatmap; raycasting is expensive.
Layer Data (Possible GPGPU)
There’s currently a ton of duplication: points that are identical or so close to other points that they might as well be on top of one another. These points could be averaged together and given a larger intensity at the center, at the expense of precision. This could also be done with general-purpose GPU computing (GPGPU) to speed the process up. The downside is that even after combining points there might still be a ton of points to process, and on top of that there’s a loss of precision. This is the precision vs. speed tradeoff often seen in data analysis.
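A minimal CPU sketch of this merging idea, assuming a simple grid snap (the cell size is an arbitrary choice for the example): points falling in the same cell are averaged into one point whose intensity is the sum of its members. The same reduction maps naturally onto a compute shader.

```python
from collections import defaultdict

def merge_points(points, cell=0.25):
    """Merge points that land in the same grid cell into a single
    averaged point with summed intensity. `points` is a list of
    ((x, y, z), intensity) pairs. Trades precision for far fewer
    points to send to the shader."""
    sums = defaultdict(lambda: [0.0, 0.0, 0.0, 0.0])  # x, y, z, intensity
    counts = defaultdict(int)
    for (x, y, z), intensity in points:
        key = (round(x / cell), round(y / cell), round(z / cell))
        acc = sums[key]
        acc[0] += x; acc[1] += y; acc[2] += z; acc[3] += intensity
        counts[key] += 1
    merged = []
    for key, acc in sums.items():
        n = counts[key]
        merged.append(((acc[0] / n, acc[1] / n, acc[2] / n), acc[3]))
    return merged
```

Shrinking `cell` keeps more precision but merges fewer points; growing it does the opposite, which is exactly the precision vs. speed tradeoff described above.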
This process and exploration was really fun for me. I would’ve loved to spend more time on it, but there are other things on the project that need to get done right now. If I have more time later on, this will definitely be the first thing I come back to and expand on.