Using AI to provide actionable data insights

Design lead
Project management
AI
problem
Userlane is a platform that aids software adoption by letting customers create tailored guidance content for their apps. HEART (Happiness, Engagement, Adoption, Retention, Task Success) is an analytics framework Userlane uses to assess an app's performance. The product, however, did not give a clear answer to what good performance looks like, or any instruction on how to improve specific metrics.

This project was an MVP-style experiment in which we explored how AI can aid our customers by providing clear, actionable insights.
what i achieved
  • I developed a high-level blueprint for insight content and design, and further refined it for each main metric (the H, E, A, R, T letter scores).
  • I created draft prompts and examples of what good final outputs would look like. Working closely with the AI and Data teams, I supported them by revising prompts to add and clarify product and customer context (see the sketch after this list).
  • I pushed for using more contextualised data, such as benchmarks, to make our metrics more meaningful.
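
To give a flavour of that prompt work, here is a minimal sketch of how product and customer context might be injected into an insight prompt. The template structure, field names, and wording are illustrative reconstructions, not the production prompts:

```python
# Hypothetical sketch of an insight prompt template: the structure, field
# names, and wording are illustrative, not the production prompts.

INSIGHT_PROMPT = """\
You are writing a short, actionable insight for a Userlane customer.

Product context: Userlane lets customers create in-app guidance content
(e.g. Guides, Tooltips) for their own applications.

App: {app_name}
Metric: {metric_name} (part of the HEART framework)
Current score: {score}
Industry benchmark: {benchmark}

Write 2-3 sentences: state how the app compares to the benchmark and
recommend one concrete next step using Userlane content.
Render the following terms in bold: {bold_terms}.
"""

def build_insight_prompt(app_name: str, metric_name: str,
                         score: float, benchmark: float) -> str:
    """Fill the template with app-specific context before sending it to the model."""
    return INSIGHT_PROMPT.format(
        app_name=app_name,
        metric_name=metric_name,
        score=score,
        benchmark=benchmark,
        bold_terms="Guides, Tooltips",
    )
```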
challenges and limitations
  • With a major redesign of all HEART dashboards planned, we limited ourselves to minor UI changes, to avoid wasted effort and change fatigue among users. This meant preserving the existing layouts and key components.
  • We couldn't lead users directly into content creation. For example, when an app's Engagement would benefit from creating a Guide, we couldn't link a button to that flow in another part of the platform. This was due to a bigger technical obstacle, and fixing it was out of scope for this project.
learnings
  • Garbage in, garbage out: it quickly became evident where our data painted an incomplete picture. For instance, Task Success effectively only tracked clicks on tagged UI elements, which may or may not correlate with successfully performing a task. Given a low T score, we couldn't pinpoint which tasks on which pages users were actually struggling with, which reinforced our goal of enriching task tracking as a future product and data initiative.
  • Using AI text output directly is zero effort, but poor content design. Scannable layouts need to be thought through, and AI agents need a list of words or phrases to render in a particular style (e.g. bold). Had we had more time and creative freedom, the insights would have taken a more visual form; "users don't read" still applies.
  • When not to use AI? You can still get decent results from "dumb" conditional logic, and it can be faster, cheaper, and more predictable than always defaulting to AI. For the T metric, a simple "if the score is low, suggest using Guides and Tooltips" rule was sufficient (see the first sketch after this list).
  • Reviewing AI output and refining prompts took many rounds. We learned how to tame initially overcomplex prompts by breaking them down across multiple AI agents (see the second sketch below), and how to limit hallucinations. For your first AI projects, budget extra time.
  • Using benchmark data to determine what constitutes a good score was an improvement, but still a fairly blunt tool. Allowing customers to set specific goals for app performance would provide even deeper, individualised context for HEART metrics.
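
A minimal sketch of the rule-based fallback mentioned for the T metric; the threshold, scale, and suggestion texts are assumptions:

```python
# Minimal rule-based fallback for the T (Task Success) metric.
# The threshold, 0-100 scale, and suggestion texts are illustrative assumptions.

LOW_SCORE_THRESHOLD = 50  # assumed cut-off on a 0-100 scale

def task_success_insight(t_score: float) -> str:
    """Return a static suggestion instead of calling an AI model."""
    if t_score < LOW_SCORE_THRESHOLD:
        return ("Task Success is low. Consider creating Guides and "
                "Tooltips for the tasks users struggle with.")
    return "Task Success looks healthy. Keep monitoring tagged interactions."
```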
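And a sketch of the prompt-decomposition pattern: one overloaded prompt split across two agents, with `call_llm` as a placeholder for whichever model API was actually used, and the analyse-then-write split as an illustrative example:

```python
# Sketch of splitting one overloaded prompt across two AI agents.
# `call_llm` is a placeholder for the real model API; the two-step
# split (analyse, then write) is an illustrative decomposition.

def call_llm(prompt: str) -> str:
    """Placeholder for the actual model client."""
    raise NotImplementedError("swap in the real model call here")

def generate_insight(metrics_summary: str) -> str:
    # Agent 1: interpret the numbers only; no writing-style concerns yet.
    analysis = call_llm(
        "Identify the single biggest issue in these HEART metrics, "
        "citing the relevant numbers:\n" + metrics_summary
    )
    # Agent 2: turn the analysis into customer-facing copy.
    return call_llm(
        "Rewrite this analysis as 2-3 scannable sentences with one "
        "concrete recommendation:\n" + analysis
    )
```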
Additional improvements to the HEART overview page

The main HEART page, which offers a concise overview of an app's performance, was enhanced by integrating benchmark-based scores and replacing static content with dynamic, meaningful data.
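
The exact scoring calculation isn't covered here, but as a rough illustration, a benchmark-based rating could map an app's raw metric onto the benchmark distribution like this (the percentile cut-offs are assumptions):

```python
# Illustrative benchmark-based rating: the real scoring logic isn't
# documented here, so this percentile mapping and its cut-offs are assumptions.

from bisect import bisect_right

def benchmark_rating(value: float, benchmark_values: list[float]) -> str:
    """Rate a metric by where it falls among benchmark apps."""
    ranked = sorted(benchmark_values)
    percentile = bisect_right(ranked, value) / len(ranked)
    if percentile >= 0.75:
        return "good"
    if percentile >= 0.25:
        return "average"
    return "needs attention"
```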