Getting the “Brain” of Your Product Right

A framework for validating the logical part of your Product


Photo by Josh Riemer on Unsplash



You opened your Uber App and added your destination. The price was 5X above expectations. What? Why?

Needless to say, you are not very happy! The App is broken, you assume.

Another click away, Google Maps shows you that the regular road is blocked for repair and the diversion would add 5 kilometers.

Ah! Now the 5X makes more sense.

Both Products told you the same fact in different flavors. You were happy with one and disappointed with the other.

All literature remotely related to Product management has established the criticality of user validation.

They recommend that you:

  • Validate user problems

  • Validate your assumptions

  • Validate your proposed solution.

The recommended mechanisms to achieve all things validation are surveys, focus group discussions, interviews, and whatnot.

Validating different aspects, however, requires a well-designed methodology tailored to the intent. It is easy to witness how a user interacts with that button, and whether she has issues with navigation. It is not as easy to measure customer happiness or sadness when the recommended price of an Uber ride slowly increases from Rupees 200 to 500.

It is, however, more critical to know, witness, and validate two other key aspects of your Product with your users. These often-neglected aspects can make or break your Product.

The first aspect is how the user interacts with your corner-case handling. Trust me when I say this: your Product is only as strong as its weakest link.

The second applies more to Products that are powered by an additional hidden layer of algorithms, a complicated set of rules, or Artificial Intelligence: how does the user react to its output?

Why are these aspects so crucial to Validate?


What happens behind the scenes does not stay behind the scenes!

As Product Managers, it is our job to know all the nooks and crannies of our Product's behavior.

We give our engineering teams carefully crafted stories that spell out the expected behavior of a Product feature.

When it comes to the algorithms (Artificial Intelligence or not) backing the Product, however, much of the behavior is conceptualized and brought to life in the Engineering world.

There might be assumptions and decisions taken during implementation that make total sense to us. But do they make sense to the users too?

The contribution of these algorithms, although extensive, often does not demand critical attention from users. That is until something goes wrong.



The magnitude of “Implications of going wrong” here is huge!

A button color gone wrong can be changed. Copy that does not make sense is easy enough to fix.

An algorithm behavior or a logical block takes more time to create or correct.

Any changes to this piece can have a domino effect on basically everything else.

The criticality of getting this piece right is immense.


Our user validation is usually ineffective because…

The effectiveness and correctness of the algorithmic output are often not very easy to validate through our conventional methods due to the following reasons:


We fail to put users in the right shoes


Taking someone to a well-lit, beautiful rack of shoes, pointing one out, and asking “Do you like it?” is not the same as handing it over to them, letting them walk on a cobblestone road, and then asking “Do you like it?”


The trustworthiness of the user's answer is miles apart.


Our biases take over our judgment

Amazon has a sale on office chairs. Steal prices! One of them is very suitably priced and looks great. So what if the Brand is unknown? The ratings look good. Well, I wanted a Blue chair to match my interiors, but Black would do too!

What just happened?

You used the good ratings and the price to convince yourself that the chair was perfect. Factors like Brand and Colour, which might earlier have held some importance, were ignored.

When you want something to be perfect, it's likely that in your mind it already is.

This is the most dangerous bias you can carry to your Product, and we do.


We spend less time and attention on our current points in question

It is easier to validate whether user interaction is smooth. You could validate how users interact with it simply by observing them and asking relevant, appropriately designed questions.

If you ask users questions like:

  1. You just booked your car on Uber and we booked one that’s 20 minutes away. How would you feel about it? or

  2. How would you feel if your Food Delivery charge became 2X?

Most users are going to be unhappy. These answers neither validate anything nor are they valuable.

Never take a nod or a smile for an approval

The bigger question still remains unanswered: how do we validate with our users?


Break down into “User Testable Scenarios”

Your Product is a puzzle where each piece fits into the others perfectly.

The key pieces that hold the focus of our current article are:

  • The corner cases, and

  • The algorithmic output

The very first step is to clearly identify and segregate the pieces to be tested.



The mechanism for validating each piece is highly focused and customized. An algorithmic output cannot be validated the way a UX flow is, at least not effectively.


After you have broken the bigger puzzle into pieces, the second step is to identify the user scenarios you can test user reactions with.



If you are trying to validate your Pricing algorithmic output and how the user reacts to it, 2 “User Testable Scenarios” are:

Question 1: You are booking an Uber, and the price was predicted at 200 Rupees. Happily, you booked the ride, took the cab. On the way, there were several roadblocks and the cab had to take diversions. After reaching your destination, Uber requested an amount of Rupees 350.

Question 2: You always take an Uber to your office and you know it costs you Rupees 200 (plus/minus 20). Today, you were trying to book a cab, and the quoted price was Rupees 350.

The better the story you tell, the more accurate the responses you get.

The third key step is to prioritize the “User Testable Scenarios” and decide the sequence of questions best suited. The sequence of scenarios can lead to biased user reactions. Based on my experiences, I would recommend mixing and matching to avoid user bias or the possibility of leading them to an answer.

What it would look like is: Testable part 1, Scenario 1; Testable part 2, Scenario 1; Testable part 1, Scenario 2…and so on.

Try to constrain each user to a few (maybe unrelated, if you can) scenarios to avoid the risk of bias from other questions.
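
As a rough illustration, the mix-and-match sequencing above could be sketched in code. Everything here is hypothetical: the testable parts, the scenario names, and the `build_interview_plan` helper are invented for this example, not taken from any real Product:

```python
import random

# Hypothetical testable parts, each with a couple of scenarios.
scenarios = {
    "pricing_output": ["price jumps mid-ride", "quote 75% above usual"],
    "corner_cases": ["no drivers available", "payment fails after ride"],
}

def build_interview_plan(scenarios, users, per_user=2, seed=42):
    """Interleave scenarios from different testable parts and cap how
    many scenarios each user sees, so that the sequence of questions
    does not lead users toward an answer."""
    rng = random.Random(seed)
    pool = [(part, s) for part, items in scenarios.items() for s in items]
    plan = {}
    for user in users:
        rng.shuffle(pool)
        picked, parts_seen = [], set()
        # Prefer scenarios from different parts for the same user.
        for part, s in pool:
            if part not in parts_seen:
                picked.append((part, s))
                parts_seen.add(part)
            if len(picked) == per_user:
                break
        plan[user] = picked
    return plan

plan = build_interview_plan(scenarios, ["user_a", "user_b"])
for user, items in plan.items():
    print(user, items)
```

Each user ends up with a small, shuffled slice of scenarios drawn from different testable parts, which is one simple way to spread the bias risk across the user pool.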

Get them into the shoes they're gonna buy

If we were selling shoes, it would have been much easier to validate them. All we needed was to have the user wear them and walk:

  • On concrete

  • On cobblestone

  • On clay

  • Walk fast

  • Wear it for 2 hours

  • You get where I am going……

A Software Product, unfortunately, is not as easy. The key is to effectively get users to feel the scenario, to be in the mindset they will be in when they actually encounter it. You want them to give it some thought.

You will have your own ways to do this based on the context of your Product, your company, your access to users, and so on. Here are some mechanisms that have worked for me in the past:

  1. Weave an effective story that makes them feel it. Add a lot of context and key in the details.

Some of the key details I include (I manage a B2B Product) are:

  • Details of their business day (other activities that you have at hand)

  • The criticality of the response (Is someone waiting for your response? Are you a roadblock?)

  • Identify, empathize and highlight the user’s role in the bigger workflow or process

The above details helped the users get the context, gave them confidence that I understood their scenario, and extracted a thoughtful response.

This technique is better done face to face (or over a video call, in the COVID scenario) because user reactions and unstated observations are essential to success.

2. When users are not easy to reach and are not available in large numbers, we cannot afford to spend only a few scenarios per user.

Another approach that has worked for me (though not as well) is asking users for their expected behavior in certain scenarios through a survey (or an Excel sheet).

The survey would focus on providing the user with the context of scenarios, the outcomes, and ask for user reaction.

This approach is the inverse of setting the context and observing user reactions: you provide the expected outcomes and ask users to react to them.

The two methods can complement each other or replace one another.

Unbias your user responses

Users say what you want to hear, you want to hear what they do not say

When you are talking to someone and especially when asking questions, the storyline is the king.

Aspects of the storyline, including the sequence, the phrasing, and the depth are key to successful validation.

Would you have empathized with Anna Karenina if the story had started from the part where she has an extramarital affair? You are more likely to judge, and that is a normal human tendency.

Would you have sympathized and loved Tyrion Lannister if I had just mentioned that he spends his days visiting brothels?

Users need a complete story to provide their candid reactions. It is your job, as a Product Manager, to identify the key aspects of your (short) story that you want to highlight to prevent biases from users.

No one really wants to disappoint the person they are talking to. Well, not if they can avoid it. And this, again, is a bias.

Key ideas that can help you do some unbiasing:

  1. If you can, avoid telling users that you are closely associated with the Product. There are both kinds of users in the world: the extra kind, and the ones who take the opportunity to bitch about the Product. The first kind is good for your day and bad for your Product.

  2. Let someone else you trust (maybe a salesperson accompanying you) run the show. Take a seat and observe.



Key Objective

To effectively validate the logical parts of our Products with users.

Supporting Goals

  1. Ensure that every response a user provides is to a well-understood scenario

  2. Extract the most honest and natural reactions of users, however bad they are.

Next Steps:

  1. Cluster user reactions into logical groups or ranges (based on your Product context)

  2. Identify the critical failure points if any

  3. Prioritize and get to work with your teams to address them.
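
As a minimal sketch of the first two next steps, here is one way the clustering and failure-point flagging could look. The scenario names, the 1–5 reaction scores, the band thresholds, and the `cluster_reactions` helper are all invented for illustration, not real data or a prescribed method:

```python
from collections import defaultdict

# Hypothetical reactions: (scenario, score on a 1-5 happiness scale).
reactions = [
    ("price jumps mid-ride", 1),
    ("price jumps mid-ride", 2),
    ("quote 75% above usual", 2),
    ("no drivers available", 4),
]

def cluster_reactions(reactions,
                      bands=((1, 2, "negative"),
                             (3, 3, "neutral"),
                             (4, 5, "positive"))):
    """Group reaction scores into labelled ranges, then flag scenarios
    where negative reactions dominate as critical failure points."""
    grouped = defaultdict(lambda: defaultdict(int))
    for scenario, score in reactions:
        for lo, hi, label in bands:
            if lo <= score <= hi:
                grouped[scenario][label] += 1
    critical = [
        scenario for scenario, counts in grouped.items()
        if counts["negative"] > sum(counts.values()) / 2
    ]
    return dict(grouped), critical

grouped, critical = cluster_reactions(reactions)
print(critical)  # scenarios where most reactions fall in the negative band
```

The flagged scenarios then become the prioritized list you take back to your teams.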

For some added context, I have worked primarily on Products that were powered by or included complicated algorithms at their heart. My first Product was an AI-powered Video Analytics Product leveraged for Surveillance and Retail Analytics. The second was an AI-powered chatbot in the BFSI domain, and my current Product includes a customer-exposed modular HCM Appstore.


Share with me your thoughts about this article. Connect with me on LinkedIn here and follow me on Twitter.

This story was originally published on Medium.
