As the fields of biomedicine and clinical research continue to grow, so does the need for explainable and comprehensive search engines that help researchers and healthcare professionals find the information they need quickly and easily.
However, there has been a persistent belief that a perfect top ten search result is necessary for a search engine to be considered effective. We would like to challenge that notion, especially in the fields of pharmacovigilance, drug monitoring, drug discovery, and drug development.
🧐 LLMs, Neural Networks, and Relevance.
Even with the rise of large language models (LLMs) such as GPT-3 and GPT-4, and with ongoing improvements in neural networks, the goal of pharmacovigilance is not to rely on a biomedical Google that returns the best top ten results. Rather, the focus is to ensure that all safety signals and adverse effects are captured.
When performing pharmacovigilance or similar activities, you have to explain to your local and governmental authorities who conducted the search, which inclusion and exclusion criteria were used, which results were considered, and when.
Any neural network, however accurate, remains a black box that cannot be deciphered and therefore cannot be explained to authorities during regulatory activities. A more relevant top ten that cannot be explained is a top ten that cannot be used. So what should you do?
🧾 Shift towards comprehensiveness.
Your main priority is comprehensiveness: ensuring that all relevant information is surfaced is a must. A search engine that finds all relevant data for each search serves your use case better than one that relies on a top ten, even if the latter is more accurate.
You should be able to determine quickly whether the results are comprehensive enough:
- Do they match your inclusion criteria? Can this be verified?
- Are they within the specified date range, if one was set?
- Do they reflect synonym expansion through MeSH? Which terms?
- Are clinical trials and pre-prints included? Why?
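The checklist above can be sketched as an auditable filter. This is a minimal illustration, not any product's API: the `Reference` fields, criteria, and publication-type labels are assumptions. The key point is that every exclusion records a reason, which is exactly what regulatory explainability requires.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Reference:
    """Hypothetical search-result record; real field names will vary."""
    title: str
    abstract: str
    published: date
    pub_type: str  # e.g. "journal-article", "preprint", "clinical-trial"

def audit(ref, keywords, date_from, date_to, allowed_types):
    """Check one reference against the inclusion criteria.

    Returns (included, reasons): a reference is included only if no
    exclusion reason was recorded, and the reasons list documents every
    decision so it can be reported to authorities.
    """
    reasons = []
    text = (ref.title + " " + ref.abstract).lower()
    if not any(k.lower() in text for k in keywords):
        reasons.append("no inclusion keyword matched")
    if not (date_from <= ref.published <= date_to):
        reasons.append("outside specified date range")
    if ref.pub_type not in allowed_types:
        reasons.append(f"publication type {ref.pub_type!r} not allowed")
    return (len(reasons) == 0, reasons)
```

Because the output is a list of human-readable reasons rather than an opaque score, each kept or discarded reference can be justified after the fact.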
While comprehensive, this approach leaves you navigating thousands, if not millions, of references.
⚖️ How do you navigate millions of references?
With high-quality search strategies. A search strategy is responsible for explainability, relevancy, the data consulted (inclusion criteria), and the data discarded (exclusion criteria).
Search strategies are not easy to get right, though: 92% of PubMed searches contain errors. A good search strategy should include clear inclusion/exclusion criteria, date filtering, publication-type filtering (preprint, reference, clinical trial), and MeSH terms for synonym expansion.
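A sketch of how those pieces combine into one explainable query string, using standard PubMed field tags (`[MeSH Terms]`, `[tiab]`, `[dp]` for publication date, `[pt]` for publication type). The specific terms in the usage example are illustrative, not a vetted pharmacovigilance strategy.

```python
def build_pubmed_query(mesh_terms, tiab_synonyms, date_from, date_to, exclude_types):
    """Compose a PubMed query from explicit criteria.

    Every criterion appears verbatim in the output string, so the full
    strategy can be archived and shown to authorities as-is.
    """
    # Inclusion: MeSH headings OR free-text synonyms in title/abstract.
    inclusion = [f'"{t}"[MeSH Terms]' for t in mesh_terms]
    inclusion += [f'"{s}"[tiab]' for s in tiab_synonyms]
    query = "(" + " OR ".join(inclusion) + ")"
    # Date filtering on publication date.
    query += f' AND ("{date_from}"[dp] : "{date_to}"[dp])'
    # Exclusion criteria by publication type.
    for pt in exclude_types:
        query += f' NOT "{pt}"[pt]'
    return query

# Example (illustrative terms only):
query = build_pubmed_query(
    mesh_terms=["Drug-Related Side Effects and Adverse Reactions"],
    tiab_synonyms=["adverse drug reaction"],
    date_from="2020/01/01",
    date_to="2023/12/31",
    exclude_types=["editorial"],
)
```

The resulting string can be pasted directly into PubMed, and, unlike a neural ranking, it answers the auditor's questions by construction: the criteria, dates, and exclusions are all legible in the query itself.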
The upside of search strategies is that, when accurate, they return far less information with much higher relevancy.
Biomedical search engines are tools that help professionals find the most relevant data without losing comprehensiveness. Investing in the quality of your search strategies will dramatically improve the accuracy and quality of your results while keeping them comprehensive.
After all, the main enemy is missing evidence.
Everything starts with search.
With a smart suite of search tools that helps you find the information you need, when you need it. Enhance your search experience with PapersHive today! Contact us.