Hey all, I'm facilitating a project to improve the advanced search mechanics of a complex product. Because search functionality is so open ended and can get complex, it must be one of the most difficult things to prototype and test for. I think one reason is that any user might string together a multi-variable search in many different ways, with different keywords and a different understanding of the syntax and mechanics. Trying to account for every one of those possibilities is time-consuming and ultimately impossible.
To give some context on the pain points: users needed to know the structure and hierarchy of the records logged inside the product in order to search for them, the naming of labels and buttons on the search form was confusing, and the overall layout and UI felt unintuitive and clunky.
At the moment I'm planning on running guided tests where I give participants instructions about what to search for, focusing on qualitative feedback and comparing it against the same tasks on the existing product. I don't feel the insights from quantitative KPIs would be very strong, since the guidance I'd have to provide makes the setup very unlike a realistic environment.
Wondering about others' views and insights on how to approach testing search mechanics. Has anyone had success doing this?