AI-powered reviewing tools must themselves be scrutinized before they are put to use, writes Giorgio F Gilestro. Gilestro explains that peer review serves two purposes: validating routine scientific work and recognizing the rare discoveries that challenge established frameworks. Humans can perform both functions, but AI tools currently cannot capture what human experience brings to a review. Moreover, “algorithmic perfectionism” can lead LLMs to offer irrelevant feedback. Gilestro warns that deploying AI reviewing tools without proper vetting could destroy peer review’s credibility before the tools can be used to build a system “in which algorithms handle the syntax so that humans can handle the significance.”