Maximize Your Consensus Experience With These Best Practices

Our product will always be a work in progress, and this is just the first iteration. Not all searches work perfectly, and bugs will exist. We hope this article will help improve your experience.
Additionally, it is important to know that Consensus is not a chatbot. It is a search engine designed to accept research questions, find relevant answers within research papers, and synthesize the results using language model technology.
Focus on the Right Subject Matter:
Consensus only searches through peer-reviewed scientific research to find the most credible answers to your queries. We recommend asking questions about topics that scientists have likely studied.
Consensus has subject matter coverage that ranges from medical research and physics to social sciences and economics.
Examples of queries that perform well and have loads of relevant research include:
- “What are the benefits of mindfulness meditation?”
- “Does spanking impact childhood development?”
- “Does NO3 improve exercise performance?”
- “What predicts success for a start-up founder?”
- “What is the impact of climate change on GDP?”
- “Is creatine safe?”
- “Is social media bad for sleep?”
- “What is the best treatment for restless leg syndrome?”
- “Do direct cash transfers reduce poverty?”
Consensus is NOT meant for questions about basic facts, such as “How many people live in Europe?” or “When is the next leap year?”, as there is likely no research dedicated to investigating these subjects.
Follow Recommended Formats for Your Query: Ask a question!
Although there is no “correct” way to structure a query, we have seen the best results by asking research questions with the following formats:
- Simple, yes/no questions:
  - “Is creatine safe?”
- Ask about the relationship between concepts:
  - “Is social media bad for sleep?”
- Ask about the effects, impact, or benefits of a concept:
  - “What are the benefits of mindfulness meditation?”
Other query formats that can produce strong results but have some limitations include:
- Ask a question that requires a number:
- “How much protein is needed per day for muscle gain?”
- Limitation: very hit or miss, though great when it works!
- Ask about “what is the best…”
- “What is the best treatment for restless leg syndrome?”
- Limitation: most research papers aren’t framed this way, so you will usually get a list of possible best options rather than a single answer
- Ask a question about “how to” do something
- “How do you increase local voter turnout?”
- Limitation: similar to the above, most research papers aren’t framed this way, so you will usually get a list of possible methods
- Two concepts separated by an “and”:
  - “Zinc and depression”
- Limitation: you miss out on our synthesis features, and our models won’t be as accurate in parsing your intent!
- Open-ended phrase:
  - “Avocado health effects”
- Limitation: you miss out on our synthesis features, and our models won’t be as accurate in parsing your intent!
Current Limitations & Future Directions:
We have identified the following limitations and are actively working to address them:
Text issues: While we have done our best to resolve many of the text issues in scientific papers, problems remain in our corpus. Examples include missing spaces between sentences, stray special characters, and typos.
Abstract labels: Oftentimes, scientific abstracts are written with labels separating their respective sections (e.g., Background, Methods, Conclusions). Although we have removed many of the more common labels, we have not removed every permutation, so some findings may still have a label attached.
Lack of context: Some extracted findings lack the context needed to make sense on their own. For example, a sentence may read “It reduces inflammation in the body” rather than “Fish oil reduces inflammation in the body.” We are working on ways to bring in the necessary context so that every sentence stands on its own.
Unnecessary info: Scientists love to write in jargon, complete with add-on qualifiers and commentary. We are working on models that remove any unnecessary information from our findings while also preserving the author’s intention. Until that feature is ready, you’ll have to deal with some long-form jargon!
Imprecise findings: Our models that extract author findings are not perfect! They may sometimes surface background information or other statements that are not what you are looking for. We are continually improving these models.