Local company aims to revolutionize academic peer review using Artificial Intelligence
In the fast-paced world of academia, where the number of published research papers vastly exceeds the pool of available peer reviewers, a new solution is brewing with the help of Artificial Intelligence (AI).
SymbyAI aims to streamline and enhance the peer review process, benefiting researchers, journal editors, and institutions alike.
Michael House, a seasoned programmer and co-creator of SymbyAI, explained the core issue: “We’re facing a deluge of scientific papers, and the traditional peer review process simply can’t keep up. Many researchers wait months for feedback, while journal editors struggle to find qualified reviewers.”
SymbyAI addresses these challenges by using AI to analyze research papers, identify potential issues, and compare findings against a vast database of academic publications.
The AI model behind SymbyAI was designed by Ashia Livaudais, a former colleague of Michael’s who specializes in natural language processing and machine learning. Before venturing into SymbyAI, she was a computational physicist at Fermilab, where she worked on automating particle accelerator operations—a task traditionally handled by teams of physicists.
“My job title was basically, I wrote algorithms for running particle accelerators with no one at the helm,” she explained, highlighting her expertise in applying AI for complex scientific tasks.
When asked about the inception of SymbyAI, Livaudais shared that it began as a side project known as “Hypogen.” Initially, it aimed to generate hypotheses by processing research papers from various open-access repositories.
However, the project’s focus shifted when she recognized the pervasive issue of unvalidated research in the scientific community.
“It was frustrating to see so much funding and time wasted on work that was either poorly designed or, in rare cases, outright fraudulent,” Livaudais noted. This led her to pivot the project towards addressing errors in scientific research, using AI to catch mistakes like conflicts of interest or plagiarism.
Reflecting on SymbyAI’s development, Livaudais emphasized the role of the broader scientific community.
“We had a huge, very supportive community of cross-disciplinary researchers to help make this happen. These domain experts are passionate about improving the quality of science and were crucial to shaping SymbyAI,” said Livaudais.
Her drive, coupled with Michael House’s technical expertise, has been instrumental in the platform’s evolution.
SymbyAI operates in two main ways.
First, it reviews the internal content of a research paper, checking for errors like inconsistencies, missing references, or unclear methodology. Second, it cross-references the paper’s findings with a database of peer-reviewed publications, identifying any conflicting results or missed citations.
“It’s like having a second set of expert eyes on your paper, but those eyes have read thousands of other papers as well,” said Michael.
For example, a researcher in biology might submit a paper on gene editing techniques. SymbyAI would not only flag any methodological issues but also identify if the conclusions contradict existing studies in the field. If discrepancies arise, the AI provides links to those studies, allowing the author to adjust or defend their work.
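The two-stage flow described above can be sketched in a few lines of Python. This is a minimal illustration, not SymbyAI’s actual implementation: the `Finding` structure, the specific checks, and the corpus format are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One issue flagged by the review (hypothetical structure)."""
    stage: str      # "internal" or "cross_reference"
    message: str

def internal_checks(paper: dict) -> list[Finding]:
    """Stage 1: examine the paper's own content for common problems."""
    findings = []
    if not paper.get("references"):
        findings.append(Finding("internal", "No references listed."))
    if "methodology" not in paper.get("sections", []):
        findings.append(Finding("internal", "Methodology section missing."))
    return findings

def cross_reference(paper: dict, corpus: list[dict]) -> list[Finding]:
    """Stage 2: compare the paper's claims against an indexed corpus
    of prior publications, flagging conflicting results."""
    findings = []
    for prior in corpus:
        shared = set(paper["claims"]) & set(prior["contradicted_claims"])
        for claim in shared:
            findings.append(Finding(
                "cross_reference",
                f"Claim '{claim}' conflicts with {prior['title']}."))
    return findings

def review(paper: dict, corpus: list[dict]) -> list[Finding]:
    """Run both stages and return every finding."""
    return internal_checks(paper) + cross_reference(paper, corpus)
```

In the gene-editing example, the first stage would surface the missing methodology section, and the second would link the conflicting claim back to the prior study that contradicts it.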
In fields like data science or engineering, where algorithms and mathematical models are frequently published, SymbyAI goes a step further. The platform can extract formulas and code from the paper, test them for accuracy, and even run simulations when applicable.
“It’s a game-changer for technical fields, where the validity of a paper often hinges on the precision of its computational models,” Michael noted.
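Checking extracted formulas against a paper’s own reported numbers could look roughly like the sketch below. The helper name, the tolerance, and the tabulated-results format are assumptions for illustration; the article does not describe SymbyAI’s actual verification interface.

```python
import math

def check_reported_values(formula, reported, tol=1e-6):
    """Re-evaluate a formula at the inputs a paper reports and compare
    against the paper's stated outputs (hypothetical checker API)."""
    mismatches = []
    for inputs, claimed in reported:
        computed = formula(*inputs)
        if not math.isclose(computed, claimed, rel_tol=tol):
            mismatches.append((inputs, claimed, computed))
    return mismatches

# Example: a paper claims y = x**2 + 1 and tabulates (x, y) pairs.
formula = lambda x: x**2 + 1
reported = [((2,), 5.0), ((3,), 10.0), ((4,), 18.0)]  # last row is wrong
print(check_reported_values(formula, reported))  # → [((4,), 18.0, 17)]
```

Running a full simulation would be more involved, but the principle is the same: re-execute the paper’s computational claims rather than taking them on faith.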
One of the biggest challenges in developing SymbyAI was overcoming the tendency for AI models to “hallucinate” or generate inaccurate information. To mitigate this, SymbyAI assigns confidence scores to its suggestions, helping reviewers assess the reliability of the AI’s feedback.
“Our goal isn’t to replace human reviewers but to assist them, making their jobs easier and more efficient,” Michael clarified.
In practice, this means that researchers and reviewers can see how confident the AI is in its analysis and decide whether to follow up on its suggestions. The system is designed to be collaborative, not authoritative, ensuring that human judgment remains at the forefront.
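A confidence-based triage of AI suggestions might work along these lines. The thresholds and bucket names here are illustrative, not SymbyAI’s actual values.

```python
def triage(suggestions, auto_show=0.9, review_floor=0.5):
    """Sort (text, confidence) suggestion pairs into buckets so human
    reviewers see high-confidence findings first. Thresholds are
    illustrative, not SymbyAI's actual settings."""
    buckets = {"surface": [], "flag_for_review": [], "suppress": []}
    for text, confidence in suggestions:
        if confidence >= auto_show:
            buckets["surface"].append(text)
        elif confidence >= review_floor:
            buckets["flag_for_review"].append(text)
        else:
            buckets["suppress"].append(text)
    return buckets
```

The key design point is that low-confidence output is filtered or demoted rather than presented as fact, which keeps the human reviewer in charge of the final call.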
SymbyAI also sets itself apart by being transparent about its data sources. All of the academic publications it references come from open-access repositories like arXiv, ensuring that the AI is trained on publicly available research. This approach avoids the ethical pitfalls associated with proprietary data scraping, making SymbyAI a responsible tool in the AI and research landscape.
While SymbyAI was developed with academic institutions in mind, its potential applications extend far beyond universities. Any organization that relies on research, from pharmaceutical companies to think tanks, could benefit from the platform’s ability to validate complex studies quickly and accurately. “We’re already in talks with several industries, including defense and biotech, to tailor SymbyAI for their specific research needs,” Michael said.
The team is also exploring ways to offer SymbyAI as an on-premise solution for organizations that handle sensitive or classified information. This would allow businesses and government agencies to use the platform while keeping their data secure.
Dr. William MacKenzie, an Associate Professor of Management at UAH and the Executive Editor for The Journal of Social Psychology, recognizes the potential of AI in academic research, particularly in improving efficiency during the research process.
He sees significant value in AI for literature reviews, explaining that it could help researchers access a broader range of related work, thereby enhancing the quality of their own research.
“AI has the potential to just kind of scour and make sense of the entire body of knowledge that exists in all corners of the world,” MacKenzie stated.
When discussing SymbyAI specifically, Dr. MacKenzie appreciated the idea of verifying claims within a manuscript, as it could prevent authors from making unsupported assertions.
“Having a system in place to verify claims, I think that’s really interesting and a really cool idea,” he said.
He also highlighted the potential for AI to check raw data and validate the accuracy of research methodologies, particularly in fields like social sciences.
However, he pointed out some challenges, such as the need for buy-in from publishing houses rather than universities, and the fact that the peer review process involves a human element that AI cannot replace. Despite these limitations, he believes AI could still offer valuable tools to streamline parts of the academic research process, especially in ensuring data accuracy and transparency.
With a planned launch in December and over $450,000 in pre-signed contracts from five initial customers, SymbyAI is poised to make a significant impact. These early adopters, including several academic journals and research-driven businesses, see the platform as a way to not only speed up the peer review process but also improve the overall quality of published research.
As the team works toward a $2 million fundraising goal to expand operations, SymbyAI’s future looks bright. Michael and Ashia are confident that their platform will not only transform peer review in academia but also set new standards for research validation across multiple industries.