I recently reviewed and reflected on the UX testing process I'm used to: arm yourself with a few Figma prototypes, recruit relevant users, run them through a script, and reflect on the most frequently repeated points. For the most part, I'm questioning the last two steps: how much are interviewees influenced by the questions and tasks the interviewer gives them?
It seems appropriate to call this "task bias": an emotional state an interviewee experiences when they're prompted to deliver feedback. Questions like "What do you like about this screen?" or "What would you do on this screen?" imply that there is certainly something to like or do. In reality, many interfaces are not significant enough to be liked or to prompt an action (apart from "I'd go explore a different screen"). Thus, the answers to such questions may not hold their full face value: it's hard to tell whether a given emotion or thought would have occurred without the interviewer suggesting it.
Still, such prompting questions often bring seemingly valuable results: a weird icon choice, a clear color emphasis, a broken user flow, etc.
The natural subsequent questions to the brain dump above are: would these findings have surfaced without the prompt, and how do you separate genuine reactions from ones the task itself induced?
With both questions being impossible to answer definitively, the only viable course of action seems to be to 1) avoid prompting questions where possible and 2) when avoiding them is not possible, put extra effort into critically assessing the answers you get.