Introduction
As we’ve discussed in previous articles, we are embarking on a more sophisticated approach to scientific information discovery with SearchSense. Changing search paradigms has a real impact on users, so getting feedback and validation throughout the process is critical.
At the beginning of our journey to reinvent scientific search, we hypothesized that users want greater flexibility, natural language search input, and improved “intelligence” in the search system. To validate this hypothesis, we’ve invited users to join us on the development journey.
Current process
We have many channels to collect feedback and inform our product roadmap, including:
- Talking directly to users through sales, customer support, and user research sessions.
- In-app feedback mechanisms (emails and surveys).
- Product usage and behavioral analysis.
Our customer success, sales, and customer center teams are key to hearing users’ voices; these hardworking colleagues are in constant communication with our users, relaying highlights, pain points, and ideas for enhancement to the product team. We also reach out to customers one-on-one for user research sessions. Both paths provide rich, high-quality qualitative feedback on features we’ve released and new ideas we’re considering. The direct voice of the customer allows us to go deeper into a topic, ask follow-up questions, and understand how we can improve their experience.
Another way our users engage with us directly is through in-app product surveys or email. This allows them to share feedback through a timely, non-disruptive channel.
We also perform behavioral analyses on the platform itself to better understand how users interact with features and content, how those patterns change over time, and where friction occurs. This helps us validate that users are seeing value in what we build and in the results we deliver.
Why beta testing?
SearchSense evolved from significant user feedback, research sessions, and the ever-growing need to provide solutions that are intuitive and save users’ time. This evolution seemed best supported by systemic upgrades rather than incremental feature delivery. However, building new search infrastructure takes time, and we wanted a way to validate that what we were building was right. To account for this, we invited users ‘into the factory’ and gave them early access to features as they were being developed. This allowed us to quickly gather targeted feedback: to validate that the new features supported customer use cases, to confirm that customers were seeing benefit from them, and to collect suggestions for improvements or enhancements.
How this helps
What have we learned? From simple user interface adjustments to retraining our models, beta testing has given us important insights and actionable outcomes. Key feedback themes include:
- Users want reliable, quality data in their results.
- Users want transparency in search interpretations.
- Users want alternative paths for different search scenarios.
We learned that users embrace the use of AI to improve efficiency but have concerns about hallucinations and opaque query modifications. To alleviate these concerns, the CAS SciFinder team is clearly indicating when AI is leveraged, showing how the query was interpreted or modified, and providing an option to re-run the query without AI modifications (sketched below).

We also noticed that the CASDraw user interface would be improved by a default search type selection, rather than requiring users to select one every time they start a new structure search. In general, users adjusted seamlessly to the new search bar and found the new result type tabs efficient after conducting a search.

Most importantly, we were reassured that users’ familiarity with and trust in CAS content remained intact, and that AI only enhanced it by making discoveries easier and faster. We intentionally took a “best of both worlds” approach when building SearchSense: an intuitive, sophisticated search experience built on trusted, authoritative CAS content.
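To make the transparency pattern above concrete, here is a minimal sketch of what a search response contract could look like. This is purely illustrative: none of these names, fields, or endpoints come from CAS SciFinder, and a real implementation would differ; it simply shows one way to surface AI interpretation alongside an opt-out.

```typescript
// Hypothetical types illustrating AI-transparency metadata in a search response.
// All names and the /api/search endpoint are illustrative, not the SciFinder API.

interface AiInterpretation {
  originalQuery: string;     // the query exactly as the user typed it
  interpretedQuery: string;  // the query after AI rewriting or expansion
  modifications: string[];   // human-readable notes, e.g. "expanded abbreviation"
}

interface SearchResult {
  id: string;
  title: string;
}

interface SearchResponse {
  results: SearchResult[];
  aiUsed: boolean;                   // lets the UI label AI-assisted results
  interpretation?: AiInterpretation; // present only when aiUsed is true
}

// Run a search, with AI assistance toggled by the caller.
async function search(query: string, useAi: boolean): Promise<SearchResponse> {
  const resp = await fetch(
    `/api/search?q=${encodeURIComponent(query)}&ai=${useAi}`
  );
  return resp.json() as Promise<SearchResponse>;
}

// The "re-run without AI" escape hatch: replay the user's original text
// with AI modifications disabled.
async function rerunWithoutAi(prev: AiInterpretation): Promise<SearchResponse> {
  return search(prev.originalQuery, false);
}
```

Keeping the original query in the response payload is what makes the opt-out trivial: the UI never has to reconstruct what the user typed, and the side-by-side original and interpreted queries give users the transparency they asked for.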