What Is Peer Review? | Types & Examples

Published on December 17, 2021 by Tegan George. Revised on June 22, 2023.

Peer review, sometimes referred to as refereeing, is the process of evaluating submissions to an academic journal. Using strict criteria, a panel of reviewers in the same subject area decides whether to accept each submission for publication.

Peer-reviewed articles are considered a highly credible source due to the stringent process they go through before publication.

There are various types of peer review. The main difference between them is to what extent the authors, reviewers, and editors know each other’s identities. The most common types are:

  • Single-blind review
  • Double-blind review
  • Triple-blind review
  • Collaborative review
  • Open review

Relatedly, peer assessment is a process where your peers provide you with feedback on something you’ve written, based on a set of criteria or benchmarks from an instructor. They then give constructive feedback, compliments, or guidance to help you improve your draft.

Table of contents

  • What is the purpose of peer review?
  • Types of peer review
  • The peer review process
  • Providing feedback to your peers
  • Peer review example
  • Advantages of peer review
  • Criticisms of peer review
  • Other interesting articles
  • Frequently asked questions about peer review

Many academic fields use peer review, largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the manuscript. For this reason, academic journals are among the most credible sources you can refer to.

However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure.

Peer assessment is often used in the classroom as a pedagogical tool. Both receiving feedback and providing it are thought to enhance the learning process, helping students think critically and collaboratively.


Depending on the journal, there are several types of peer review.

Single-blind peer review

The most common type of peer review is single-blind (or single anonymized) review. Here, the names of the reviewers are not known by the author.

While this gives the reviewers the ability to give feedback without the possibility of interference from the author, there has been substantial criticism of this method in the last few years. Many argue that single-blind reviewing can lead to poaching or intellectual theft or that anonymized comments cause reviewers to be too harsh.

Double-blind peer review

In double-blind (or double anonymized) review, both the author and the reviewers are anonymous.

Arguments for double-blind review highlight that it mitigates any risk of prejudice on the side of the reviewer, while preserving the integrity of the process. In theory, it also leads to manuscripts being published on merit rather than on the reputation of the author.

Triple-blind peer review

While triple-blind (or triple anonymized) review—where the identities of the author, reviewers, and editors are all anonymized—does exist, it is difficult to carry out in practice.

Proponents of adopting triple-blind review for journal submissions argue that it minimizes potential conflicts of interest and biases. However, ensuring anonymity is logistically challenging, and current editing software is not always able to fully anonymize everyone involved in the process.

In collaborative review, authors and reviewers interact with each other directly throughout the process. However, the identity of the reviewer is not known to the author. This gives all parties the opportunity to resolve any inconsistencies or contradictions in real time, and provides them a rich forum for discussion. It can mitigate the need for multiple rounds of editing and minimize back-and-forth.

Collaborative review can be time- and resource-intensive for the journal, however. For these collaborations to occur, there has to be a set system in place, often a technological platform, with staff monitoring and fixing any bugs or glitches.

Lastly, in open review, all parties know each other’s identities throughout the process. Often, open review can also include feedback from a larger audience, such as an online forum, or reviewer feedback included as part of the final published product.

While many argue that greater transparency prevents plagiarism or unnecessary harshness, there is also concern about the quality of future scholarship if reviewers feel they have to censor their comments.

In general, the peer review process includes the following steps:

  • First, the author submits the manuscript to the editor, who decides whether to reject the manuscript and send it back to the author, or send it onward to the selected peer reviewer(s).
  • Next, the peer review process occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made.
  • Lastly, the edited manuscript is sent back to the author. They input the edits and resubmit it to the editor for publication.


In an effort to be transparent, many journals are now disclosing who reviewed each article in the published product. There are also increasing opportunities for collaboration and feedback, with some journals allowing open communication between reviewers and authors.

It can seem daunting at first to conduct a peer review or peer assessment. If you’re not sure where to start, there are several best practices you can use.

Summarize the argument in your own words

Summarizing the main argument helps the author see how their argument is interpreted by readers, and gives you a jumping-off point for providing feedback. If you’re having trouble doing this, it’s a sign that the argument needs to be clearer, more concise, or worded differently.

If the author sees that you’ve interpreted their argument differently than they intended, they have an opportunity to address any misunderstandings when they get the manuscript back.

Separate your feedback into major and minor issues

It can be challenging to keep feedback organized. One strategy is to start out with any major issues and then flow into the more minor points. It’s often helpful to keep your feedback in a numbered list, so the author has concrete points to refer back to.

Major issues typically consist of any problems with the style, flow, or key points of the manuscript. Minor issues include spelling errors, citation errors, or other smaller, easy-to-apply feedback.

Tip: Try not to focus too much on the minor issues. If the manuscript has a lot of typos, consider making a note that the author should address spelling and grammar issues, rather than going through and fixing each one.

The best feedback you can provide is anything that helps them strengthen their argument or resolve major stylistic issues.

Give the type of feedback that you would like to receive

No one likes being criticized, and it can be difficult to give honest feedback without sounding overly harsh or critical. One strategy you can use here is the “compliment sandwich,” where you “sandwich” your constructive criticism between two compliments.

Be sure you are giving concrete, actionable feedback that will help the author submit a successful final draft. While you shouldn’t tell them exactly what they should do, your feedback should help them resolve any issues they may have overlooked.

As a rule of thumb, your feedback should be:

  • Easy to understand
  • Constructive

Below is a brief annotated research example, with peer feedback discussed alongside the highlighted sections.

Influence of phone use on sleep

Studies show that teens from the US are getting less sleep than they were a decade ago (Johnson, 2019). On average, teens only slept for 6 hours a night in 2021, compared to 8 hours a night in 2011. Johnson mentions several potential causes, such as increased anxiety, changed diets, and increased phone use.

The current study focuses on the effect phone use before bedtime has on the number of hours of sleep teens are getting.

For this study, a sample of 300 teens was recruited using social media, such as Facebook, Instagram, and Snapchat. The first week, all teens were allowed to use their phone the way they normally would, in order to obtain a baseline.

The sample was then divided into 3 groups:

  • Group 1 was not allowed to use their phone before bedtime.
  • Group 2 used their phone for 1 hour before bedtime.
  • Group 3 used their phone for 3 hours before bedtime.

All participants were asked to go to sleep around 10 p.m. to control for variation in bedtime. In the morning, their Fitbit showed the number of hours they’d slept. They kept track of these numbers themselves for 1 week.

Two independent t tests were used to compare Group 1 with Group 2, and Group 1 with Group 3. The first t test showed no significant difference (p > .05) between the number of hours for Group 1 (M = 7.8, SD = 0.6) and Group 2 (M = 7.0, SD = 0.8). The second t test showed a significant difference (p < .01) between the number of hours for Group 1 (M = 7.8, SD = 0.6) and Group 3 (M = 6.1, SD = 1.5).

This shows that teens sleep fewer hours a night if they use their phone for over an hour before bedtime, compared to teens who use their phone for an hour or less.
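The comparisons in the example above can be sketched from the summary statistics alone using Welch’s two-sample t test. The minimal pure-Python sketch below is an illustration, not the study’s actual analysis: the per-group sample size of 100 (300 participants split evenly into 3 groups) is an assumption, and p-values computed from summary statistics under an assumed n need not match those a study would report from its raw per-participant data.

```python
from math import sqrt
from statistics import NormalDist


def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's two-sample t test from summary statistics (means, SDs, ns).

    Returns (t, df, p), where p is a two-sided p-value computed with a
    normal approximation to the t distribution (reasonable for large df).
    """
    v1, v2 = s1 ** 2 / n1, s2 ** 2 / n2
    t = (m1 - m2) / sqrt(v1 + v2)
    # Welch-Satterthwaite degrees of freedom
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    p = 2 * (1 - NormalDist().cdf(abs(t)))
    return t, df, p


# Group 1 vs Group 3, assuming n = 100 per group (an assumption; the
# example text does not state per-group sizes explicitly).
t, df, p = welch_t(7.8, 0.6, 100, 6.1, 1.5, 100)
print(f"t = {t:.2f}, df = {df:.0f}, p = {p:.4f}")  # p < .01, consistent with the example
```

With these numbers the Group 1 vs Group 3 difference is clearly significant, matching the reported p < .01.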

Peer review is an established and hallowed process in academia, dating back hundreds of years. It provides various fields of study with metrics, expectations, and guidance to ensure published work is consistent with predetermined standards.

  • Protects the quality of published research

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. Any content that raises red flags for reviewers can be closely examined in the review stage, preventing plagiarized or duplicated research from being published.

  • Gives you access to feedback from experts in your field

Peer review represents an excellent opportunity to get feedback from renowned experts in your field and to improve your writing through their feedback and guidance. Experts with knowledge about your subject matter can give you feedback on both style and content, and they may also suggest avenues for further research that you hadn’t yet considered.

  • Helps you identify any weaknesses in your argument

Peer review acts as a first defense, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process. This way, you’ll end up with a more robust, more cohesive article.

While peer review is a widely accepted metric for credibility, it’s not without its drawbacks.

  • Reviewer bias

The double-blind system, which mitigates this risk, is not yet very common, and reviewers’ knowledge of author identities can lead to bias in reviewing. A common criticism is that an excellent paper by a new researcher may be declined, while an objectively lower-quality submission by an established researcher would be accepted.

  • Delays in publication

The thoroughness of the peer review process can lead to significant delays in publishing time. Research that was current at the time of submission may not be as current by the time it’s published. There is also high risk of publication bias, where journals are more likely to publish studies with positive findings than studies with negative findings.

  • Risk of human error

By its very nature, peer review carries a risk of human error. In particular, falsification often cannot be detected, given that reviewers would have to replicate entire experiments to ensure the validity of results.

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Normal distribution
  • Measures of central tendency
  • Chi square tests
  • Confidence interval
  • Quartiles & Quantiles
  • Cluster sampling
  • Stratified sampling
  • Thematic analysis
  • Discourse analysis
  • Cohort study
  • Ethnography

Research bias

  • Implicit bias
  • Cognitive bias
  • Conformity bias
  • Hawthorne effect
  • Availability heuristic
  • Attrition bias
  • Social desirability bias

Peer review is a process of evaluating submissions to an academic journal. Utilizing rigorous criteria, a panel of reviewers in the same subject area decides whether to accept each submission for publication. For this reason, academic journals are often considered among the most credible sources you can use in a research project, provided that the journal itself is trustworthy and well-regarded.

In general, the peer review process follows these steps:

  • First, the author submits the manuscript to the editor, who decides whether to reject the manuscript and send it back to the author, or send it onward to the selected peer reviewer(s).
  • Next, the peer review process occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made.
  • Lastly, the edited manuscript is sent back to the author. They input the edits and resubmit it to the editor for publication.

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. It also represents an excellent opportunity to get feedback from renowned experts in your field. It acts as a first defense, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process.

Peer-reviewed articles are considered a highly credible source due to the stringent process they go through before publication.

Many academic fields use peer review, largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the published manuscript.

However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure. 

A credible source should pass the CRAAP test and follow these guidelines:

  • The information should be up to date and current.
  • The author and publication should be a trusted authority on the subject you are researching.
  • The sources the author cited should be easy to find, clear, and unbiased.
  • For a web source, the URL and layout should signify that it is trustworthy.


George, T. (2023, June 22). What Is Peer Review? | Types & Examples. Scribbr. Retrieved September 10, 2024, from https://www.scribbr.com/methodology/peer-review/


50 Great Peer Review Examples: Sample Phrases + Scenarios

by Emre Ok, March 16, 2024 (updated August 8, 2024)


Peer review is a concept with multiple applications and definitions; depending on your field, its meaning can change greatly.

In the workplace, peer review or peer feedback simply means the input of a peer or colleague on another peer’s performance, attitude, output, or any other performance metric.

In the academic world, peer review refers to the examination of an academic paper by a fellow scholar in the field.

Even in the American legal system, people are judged in front of a jury made up of their peers.

It is clear as day that peer feedback carries a lot of weight and power. The input of someone who shares the same experience with you day in and day out is, on occasion, more meaningful than feedback from direct reports or from managers.

So here are 50 peer review examples and sample peer feedback phrases that can help you practice peer-to-peer feedback more effectively!

Table of Contents

Peer Feedback Examples: Offering Peers Constructive Criticism


One of the most difficult types of feedback to offer is constructive criticism. Whether you are a chief people officer or a junior employee, offering someone constructive criticism is a tightrope to walk.

When you are offering constructive criticism to a peer, that difficulty is doubled. People can usually take constructive criticism from above or below, but criticism can really sting when it comes from someone at their own level. That is why the peer feedback phrases below can certainly be of help.

Below you will find 10 peer review example phrases that offer constructive feedback to peers:

  • “I really appreciate the effort you’ve put into this project, especially your attention to detail in the design phase. I wonder if considering alternative approaches to the user interface might enhance user engagement. Perhaps we could explore some user feedback or current trends in UI design to guide us.”
  • “Your presentation had some compelling points, particularly the data analysis section. However, I noticed a few instances where the connection between your arguments wasn’t entirely clear. For example, when transitioning from the market analysis to consumer trends, a clearer linkage could help the audience follow your thought process more effectively.”
  • “I see you’ve put a lot of work into developing this marketing strategy, and it shows promise. To address the issue with the target demographic, it might be beneficial to integrate more specific market research data. I can share a few resources on market analysis that could provide some valuable insights for this section.”
  • “You’ve done an excellent job balancing different aspects of the project, but I think there’s an opportunity to enhance the overall impact by integrating some feedback we received in the last review. For instance, incorporating more user testimonials could strengthen our case study section.”
  • “Your report is well-structured and informative. I would suggest revisiting the conclusions section to ensure that it aligns with the data presented earlier. Perhaps adding a summary of key findings before concluding would reinforce the report’s main takeaways.”
  • “In reviewing your work, I’m impressed by your analytical skills. I believe using ‘I’ statements could make your argument even stronger, as it would provide a personal perspective that could resonate more with the audience. For example, saying ‘I observed a notable trend…’ instead of ‘There is a notable trend…’ can add a personal touch.”
  • “Your project proposal is thought-provoking and innovative. To enhance it further, have you considered asking reflective questions at the end of each section? This could encourage the reader to engage more deeply with the material, fostering a more interactive and thought-provoking dialogue.”
  • “I can see the potential in your approach to solving this issue, and I believe with a bit more refinement, it could be very effective. Maybe a bit more focus on the scalability of the solution could highlight its long-term viability, which would be impressive to stakeholders.”
  • “I admire the dedication you’ve shown in tackling this challenging project. If you’re open to it, I would be happy to collaborate on some of the more complex aspects, especially the data analysis. Together, we might uncover some additional insights that could enhance our findings.”
  • “Your timely submission of the project draft is commendable. To make your work even more impactful, I suggest incorporating recent feedback we received on related projects. This could provide a fresh perspective and potentially uncover aspects we might not have considered.”

Sample Peer Review Phrases: Positive Reinforcement


Offering positive feedback to peers as opposed to constructive criticism is on the easier side when it comes to the feedback spectrum.

There are still questions that linger, however, such as: “How do I offer positive feedback professionally?”

To help answer that question and make your life easier when offering positive reinforcements to peers, here are 10 positive peer review examples! Feel free to take any of the peer feedback phrases below and use them in your workplace in the right context!

  • “Your ability to distill complex information into easy-to-understand visuals is exceptional. It greatly enhances the clarity of our reports.”
  • “Congratulations on surpassing this quarter’s sales targets. Your dedication and strategic approach are truly commendable.”
  • “The innovative solution you proposed for our workflow issue was a game-changer. It’s impressive how you think outside the box.”
  • “I really appreciate the effort and enthusiasm you bring to our team meetings. It sets a positive tone that encourages everyone.”
  • “Your continuous improvement in client engagement has not gone unnoticed. Your approach to understanding and addressing their needs is exemplary.”
  • “I’ve noticed significant growth in your project management skills over the past few months. Your ability to keep things on track and communicate effectively is making a big difference.”
  • “Thank you for your proactive approach in the recent project. Your foresight in addressing potential issues was key to our success.”
  • “Your positive attitude, even when faced with challenges, is inspiring. It helps the team maintain momentum and focus.”
  • “Your detailed feedback in the peer review process was incredibly helpful. It’s clear you put a lot of thought into providing meaningful insights.”
  • “The way you facilitated the last workshop was outstanding. Your ability to engage and inspire participants sparked some great ideas.”

Peer Review Examples: Feedback Phrases On Skill Development


Peer review examples on skill development represent one of the most necessary forms of feedback in the workplace.

Feedback should always serve a purpose. Highlighting areas where a peer can improve their skills is a great use of peer review.

Peers have a unique perspective on each other’s daily work and aspirations, and this can quite easily be used to guide each other to fresh avenues of skill development.

So here are 10 sample feedback phrases for peers about developing new skill sets at work:

  • “Considering your interest in data analysis, I think you’d benefit greatly from the advanced Excel course we have access to. It could really enhance your data visualization skills.”
  • “I’ve noticed your enthusiasm for graphic design. Setting a goal to master a new design tool each quarter could significantly expand your creative toolkit.”
  • “Your potential in project management is evident. How about we pair you with a senior project manager for a mentorship? It could be a great way to refine your skills.”
  • “I came across an online course on persuasive communication that seems like a perfect fit for you. It could really elevate your presentation skills.”
  • “Your technical skills are a strong asset to the team. To take it to the next level, how about leading a workshop to share your knowledge? It could be a great way to develop your leadership skills.”
  • “I think you have a knack for writing. Why not take on the challenge of contributing to our monthly newsletter? It would be a great way to hone your writing skills.”
  • “Your progress in learning the new software has been impressive. Continuing to build on this momentum will make you a go-to expert in our team.”
  • “Given your interest in market research, I’d recommend diving into analytics. Understanding data trends could provide valuable insights for our strategy discussions.”
  • “You have a good eye for design. Participating in a collaborative project with our design team could offer a deeper understanding and hands-on experience.”
  • “Your ability to resolve customer issues is commendable. Enhancing your conflict resolution skills could make you even more effective in these situations.”

Peer Review Phrase Examples: Goals And Achievements


Equally important as peer review and feedback is peer recognition. Being recognized and appreciated by one’s peers is one of the best sentiments someone can experience at work.

Peer feedback when it comes to one’s achievements often comes hand in hand with feedback about goals.

One of the best goal-setting techniques is to attach new goals to employee praise . That is why our next 10 peer review phrase examples are all about goals and achievements.

While these peer feedback examples may not directly align with your situation, customizing them according to context is simple enough!

  • “Your goal to increase client engagement has been impactful. Reviewing and aligning these goals quarterly could further enhance our outreach efforts.”
  • “Setting a goal to reduce project delivery times has been a great initiative. Breaking this down into smaller milestones could provide clearer pathways to success.”
  • “Your aim to improve team collaboration is commendable. Identifying specific collaboration tools and practices could make this goal even more attainable.”
  • “I’ve noticed your dedication to personal development. Establishing specific learning goals for each quarter could provide a structured path for your growth.”
  • “Celebrating your achievement in enhancing our customer satisfaction ratings is important. Let’s set new targets to maintain this positive trajectory.”
  • “Your goal to enhance our brand’s social media presence has yielded great results. Next, we could focus on increasing engagement rates to build deeper connections with our audience.”
  • “While striving to increase sales is crucial, ensuring we have measurable and realistic targets will help maintain team morale and focus.”
  • “Your efforts to improve internal communication are showing results. Setting specific objectives for team meetings and feedback sessions could further this progress.”
  • “Achieving certification in your field was a significant milestone. Now, setting a goal to apply this new knowledge in our projects could maximize its impact.”
  • “Your initiative to lead community engagement projects has been inspiring. Let’s set benchmarks to track the positive changes and plan our next steps in community involvement.”

Peer Evaluation Examples: Communication Skills


The last area of peer feedback we will be covering in this post today is peer review examples on communication skills.

Since the simple act of delivering peer review or peer feedback depends heavily on one’s communication skills, it goes without saying that this is a crucial area.

Below you will find 10 sample peer evaluation examples that you can apply to your workplace with ease.

Go over each peer review phrase and select the ones that best reflect the feedback you want to offer to your peers!

  • “Your ability to articulate complex ideas in simple terms has been a great asset. Continuously refining this skill can enhance our team’s understanding and collaboration.”
  • “The strategies you’ve implemented to improve team collaboration have been effective. Encouraging others to share their methods can foster a more collaborative environment.”
  • “Navigating the recent conflict with diplomacy and tact was impressive. Your approach could serve as a model for effective conflict resolution within the team.”
  • “Your active listening during meetings is commendable. It not only shows respect for colleagues but also ensures that all viewpoints are considered, enhancing our decision-making process.”
  • “Your adaptability in adjusting communication styles to different team members is key to our project’s success. This skill is crucial for maintaining effective collaboration across diverse teams.”
  • “The leadership you displayed in coordinating the team project was instrumental in its success. Your ability to align everyone’s efforts towards a common goal is a valuable skill.”
  • “Your presentation skills have significantly improved, effectively engaging and informing the team. Continued focus on this area can make your communication even more impactful.”
  • “Promoting inclusivity in your communication has positively influenced our team’s dynamics. This approach ensures that everyone feels valued and heard.”
  • “Your negotiation skills during the last project were key to reaching a consensus. Developing these skills further can enhance your effectiveness in future discussions.”
  • “The feedback culture you’re fostering is creating a more dynamic and responsive team environment. Encouraging continuous feedback can lead to ongoing improvements and innovation.”

Best Way To Offer Peer Feedback: Using Feedback Software!

If you are offering feedback to peers or conducting peer review, you need a performance management tool that lets you digitize, streamline, and structure those processes effectively.

To help you do just that, let us show you how you can use the best performance management software for Microsoft Teams, Teamflect, to deliver feedback to peers!

While this particular example approaches peer review in the form of direct feedback, Teamflect can also help implement peer reviews inside performance appraisals for a complete peer evaluation.

Step 1: Head over to Teamflect’s Feedback Module

While Teamflect users can exchange feedback without leaving Microsoft Teams chat with the help of customizable feedback templates, the feedback module itself serves as a hub for all the feedback given and received.

Once inside the feedback module, all you have to do is click the “New Feedback” button to start giving structured and effective feedback to your peers!


Step 2: Select a feedback template

Teamflect has an extensive library of customizable feedback templates. You can either directly pick a template that best fits the topic on which you would like to deliver feedback to your peer or create a custom feedback template specifically for peer evaluations.

Once you’ve chosen your template, you can start giving feedback right then and there!


Optional: 360-Degree Feedback

Why stop with peer review? Bring all stakeholders in the performance cycle into the feedback process with one of the most intuitive 360-degree feedback systems out there.


Request feedback about yourself or about someone else from everyone involved in their performance, including managers, direct reports, peers, and external parties.

Optional: Summarize feedback with AI

If you have more feedback on your hands than you can go through, summarize that feedback with the help of Teamflect’s AI assistant!


What Are The Benefits of Implementing Peer Review Systems?

Peer reviews offer plenty of benefits to the individuals delivering the peer review, the ones receiving the peer evaluation, and the organization itself. So here are the key benefits of implementing peer feedback programs organization-wide.

1. Enhanced Learning and Understanding

Peer feedback promotes a deeper engagement with the material or project at hand. When individuals know they will be receiving and providing feedback, they have a brand new incentive to engage more thoroughly with the content.

2. Cultivation of Open Communication and Continuous Improvement

Establishing a norm where feedback is regularly exchanged fosters an environment of open communication. People become more accustomed to giving and receiving constructive criticism, reducing defensiveness and fostering a culture where continuous improvement is the norm.

3. Multiple Perspectives Enhance Quality

Peer feedback introduces multiple viewpoints, which can significantly enhance the quality of work. Different perspectives can uncover blind spots, introduce new ideas, and challenge existing ones, leading to more refined and well-rounded outcomes.

4. Encouragement of Personal and Professional Development

Feedback from peers can play a crucial role in personal and professional growth. It can highlight areas of strength and identify opportunities for development, guiding individuals toward their full potential.


Written by Emre Ok.

Emre is a content writer at Teamflect who aims to share fun and unique insight into the world of performance management.


Peer Review Examples (300 Key Positive, Negative Phrases)

By Status.net Editorial Team on February 4, 2024 — 18 minutes to read

Peer review is a process that helps you evaluate your work and that of others. It can be a valuable tool in ensuring the quality and credibility of any project or piece of research. Engaging in peer review lets you take a fresh look at something you may have become familiar with. You’ll provide constructive criticism to your peers and receive the same in return, allowing everyone to learn and grow.

Finding the right words to provide meaningful feedback can be challenging. This article provides positive and negative phrases to help you conduct more effective peer reviews.

Crafting Positive Feedback

Praising Professionalism

  • Your punctuality is exceptional.
  • You always manage to stay focused under pressure.
  • I appreciate your respect for deadlines.
  • Your attention to detail is outstanding.
  • You exhibit great organizational skills.
  • Your dedication to the task at hand is commendable.
  • I love your professionalism in handling all situations.
  • Your ability to maintain a positive attitude is inspiring.
  • Your commitment to the project shows in the results.
  • I value your ability to think critically and come up with solutions.

Acknowledging Skills

  • Your technical expertise has greatly contributed to our team’s success.
  • Your creative problem-solving skills are impressive.
  • You have an exceptional way of explaining complex ideas.
  • I admire your ability to adapt to change quickly.
  • Your presentation skills are top-notch.
  • You have a unique flair for motivating others.
  • Your negotiation skills have led to wonderful outcomes.
  • Your skillful project management ensured smooth progress.
  • Your research skills have produced invaluable findings.
  • Your knack for diplomacy has fostered great relationships.

Encouraging Teamwork

  • Your ability to collaborate effectively is evident.
  • You consistently go above and beyond to help your teammates.
  • I appreciate your eagerness to support others.
  • You always bring out the best in your team members.
  • You have a gift for uniting people in pursuit of a goal.
  • Your clear communication makes collaboration a breeze.
  • You excel in creating a nurturing atmosphere for the team.
  • Your leadership qualities are incredibly valuable to our team.
  • I admire your respectful attitude towards team members.
  • You have a knack for creating a supportive and inclusive environment.

Highlighting Achievements

  • Your sales performance this quarter has been phenomenal.
  • Your cost-saving initiatives have positively impacted the budget.
  • Your customer satisfaction ratings have reached new heights.
  • Your successful marketing campaign has driven impressive results.
  • You’ve shown a strong improvement in meeting your performance goals.
  • Your efforts have led to a significant increase in our online presence.
  • The success of the event can be traced back to your careful planning.
  • Your project was executed with precision and efficiency.
  • Your innovative product ideas have provided a competitive edge.
  • You’ve made great strides in strengthening our company culture.

Formulating Constructive Criticism

Addressing Areas for Improvement

When providing constructive criticism, try to be specific in your comments and avoid generalizing. Here are 30 example phrases:

  • You might consider revising this sentence for clarity.
  • This section could benefit from more detailed explanations.
  • It appears there may be a discrepancy in your data.
  • This paragraph might need more support from the literature.
  • I suggest reorganizing this section to improve coherence.
  • The introduction can be strengthened by adding context.
  • There may be some inconsistencies that need to be resolved.
  • This hypothesis needs clearer justification.
  • The methodology could benefit from additional details.
  • The conclusion may need a stronger synthesis of the findings.
  • You might want to consider adding examples to illustrate your point.
  • Some of the terminology used here could be clarified.
  • It would be helpful to see more information on your sources.
  • A summary might help tie this section together.
  • You may want to consider rephrasing this question.
  • An elaboration on your methods might help the reader understand your approach.
  • This image could be clearer if it were larger or had labels.
  • Try breaking down this complex idea into smaller parts.
  • You may want to revisit your tone to ensure consistency.
  • The transitions between topics could be smoother.
  • Consider adding citations to support your argument.
  • The tables and figures could benefit from clearer explanations.
  • It might be helpful to revisit your formatting for better readability.
  • This discussion would benefit from additional perspectives.
  • You may want to address any logical gaps in your argument.
  • The literature review might benefit from a more critical analysis.
  • You might want to expand on this point to strengthen your case.
  • The presentation of your results could be more organized.
  • It would be helpful if you elaborated on this connection in your analysis.
  • A more in-depth conclusion may better tie your ideas together.

Offering Specific Recommendations

  • You could revise this sentence to say…
  • To make this section more detailed, consider discussing…
  • To address the data discrepancy, double-check the data at this point.
  • You could add citations from these articles to strengthen your point.
  • To improve coherence, you could move this paragraph to…
  • To add context, consider mentioning…
  • To resolve these inconsistencies, check…
  • To justify your hypothesis, provide evidence from…
  • To add detail to your methodology, describe…
  • To synthesize your findings in the conclusion, mention…
  • To illustrate your point, consider giving an example of…
  • To clarify terminology, you could define…
  • To provide more information on sources, list…
  • To create a summary, touch upon these key points.
  • To rephrase this question, try asking…
  • To expand upon your methods, discuss…
  • To make this image clearer, increase its size or add labels for…
  • To break down this complex idea, consider explaining each part like…
  • To maintain a consistent tone, avoid using…
  • To smooth transitions between topics, use phrases such as…
  • To support your argument, cite sources like…
  • To explain tables and figures, add captions with…
  • To improve readability, use formatting elements like headings, bullet points, etc.
  • To include additional perspectives in your discussion, mention…
  • To address logical gaps, provide reasoning for…
  • To create a more critical analysis in your literature review, critique…
  • To expand on this point, add details about…
  • To present your results in a more organized way, use subheadings, tables, or graphs.
  • To elaborate on connections in your analysis, show how x relates to y by…
  • To provide a more in-depth conclusion, tie together the major findings by…

Highlighting Positive Aspects

When offering constructive criticism, maintaining a friendly and positive tone is important. Encourage improvement by highlighting the positive aspects of the work. For example:

  • Great job on this section!
  • Your writing is clear and easy to follow.
  • I appreciate your attention to detail.
  • Your conclusions are well supported by your research.
  • Your argument is compelling and engaging.
  • I found your analysis to be insightful.
  • The organization of your paper is well thought out.
  • Your use of citations effectively strengthens your claims.
  • Your methodology is well explained and thorough.
  • I’m impressed with the depth of your literature review.
  • Your examples are relevant and informative.
  • You’ve made excellent connections throughout your analysis.
  • Your grasp of the subject matter is impressive.
  • The clarity of your images and figures is commendable.
  • Your transitions between topics are smooth and well-executed.
  • You’ve effectively communicated complex ideas.
  • Your writing style is engaging and appropriate for your target audience.
  • Your presentation of results is easy to understand.
  • Your tone is consistent and professional.
  • Your overall argument is persuasive.
  • Your use of formatting helps guide the reader.
  • Your tables, graphs, and illustrations enhance your argument.
  • Your interpretation of the data is insightful and well-reasoned.
  • Your discussion is balanced and well-rounded.
  • The connections you make throughout your paper are thought-provoking.
  • Your approach to the topic is fresh and innovative.
  • You’ve done a fantastic job synthesizing information from various sources.
  • Your attention to the needs of the reader is commendable.
  • The care you’ve taken in addressing counterarguments is impressive.
  • Your conclusions are well-drawn and thought-provoking.

Balancing Feedback

Combining Positive and Negative Remarks

When providing peer review feedback, it’s important to balance positive and negative comments: this approach allows the reviewer to maintain a friendly tone and helps the recipient feel reassured.

Examples of Positive Remarks:

  • Well-organized
  • Clear and concise
  • Excellent use of examples
  • Thorough research
  • Articulate argument
  • Engaging writing style
  • Thoughtful analysis
  • Strong grasp of the topic
  • Relevant citations
  • Logical structure
  • Smooth transitions
  • Compelling conclusion
  • Original ideas
  • Solid supporting evidence
  • Succinct summary

Examples of Negative Remarks:

  • Unclear thesis
  • Lacks focus
  • Insufficient evidence
  • Overgeneralization
  • Inconsistent argument
  • Redundant phrasing
  • Jargon-filled language
  • Poor formatting
  • Grammatical errors
  • Unconvincing argument
  • Confusing organization
  • Needs more examples
  • Weak citations
  • Unsupported claims
  • Ambiguous phrasing

Ensuring Objectivity

Avoid using emotionally charged language or personal opinions. Instead, base your feedback on facts and evidence.

For example, instead of saying, “I don’t like your choice of examples,” you could say, “Including more diverse examples would strengthen your argument.”

Personalizing Feedback

Tailor your feedback to the individual and their work, avoiding generic or blanket statements. Acknowledge the writer’s strengths and demonstrate an understanding of their perspective. Providing personalized, specific, and constructive comments will enable the recipient to grow and improve their work.

For instance, you might say, “Your writing style is engaging, but consider adding more examples to support your points,” or “I appreciate your thorough research, but be mindful of avoiding overgeneralizations.”

Phrases for Positive Feedback

  • Great job on the presentation, your research was comprehensive.
  • I appreciate your attention to detail in this project.
  • You showed excellent teamwork and communication skills.
  • Impressive progress on the task, keep it up!
  • Your creativity really shined in this project.
  • Thank you for your hard work and dedication.
  • Your problem-solving skills were crucial to the success of this task.
  • I am impressed by your ability to multitask.
  • Your time management in finishing this project was stellar.
  • Excellent initiative in solving the issue.
  • Your work showcases your exceptional analytical skills.
  • Your positive attitude is contagious!
  • You were successful in making a complex subject easier to grasp.
  • Your collaboration skills truly enhanced our team’s effectiveness.
  • You handled the pressure and deadlines admirably.
  • Your written communication is both thorough and concise.
  • Your responsiveness to feedback is commendable.
  • Your flexibility in adapting to new challenges is impressive.
  • Thank you for your consistently accurate work.
  • Your devotion to professional development is inspiring.
  • You display strong leadership qualities.
  • You demonstrate empathy and understanding in handling conflicts.
  • Your active listening skills contribute greatly to our discussions.
  • You consistently take ownership of your tasks.
  • Your resourcefulness was key in overcoming obstacles.
  • You consistently display a can-do attitude.
  • Your presentation skills are top-notch!
  • You are a valuable asset to our team.
  • Your positive energy boosts team morale.
  • Your work displays your tremendous growth in this area.
  • Your ability to stay organized is commendable.
  • You consistently meet or exceed expectations.
  • Your commitment to self-improvement is truly inspiring.
  • Your persistence in tackling challenges is admirable.
  • Your ability to grasp new concepts quickly is impressive.
  • Your critical thinking skills are a valuable contribution to our team.
  • You demonstrate impressive technical expertise in your work.
  • Your contributions make a noticeable difference.
  • You effectively balance multiple priorities.
  • You consistently take the initiative to improve our processes.
  • Your ability to mentor and support others is commendable.
  • You are perceptive and insightful in offering solutions to problems.
  • You actively engage in discussions and share your opinions constructively.
  • Your professionalism is a model for others.
  • Your ability to quickly adapt to changes is commendable.
  • Your work exemplifies your passion for excellence.
  • Your desire to learn and grow is inspirational.
  • Your excellent organizational skills are a valuable asset.
  • You actively seek opportunities to contribute to the team’s success.
  • Your willingness to help others is truly appreciated.
  • Your presentation was both informative and engaging.
  • You exhibit great patience and perseverance in your work.
  • Your ability to navigate complex situations is impressive.
  • Your strategic thinking has contributed to our success.
  • Your accountability in your work is commendable.
  • Your ability to motivate others is admirable.
  • Your reliability has contributed significantly to the team’s success.
  • Your enthusiasm for your work is contagious.
  • Your diplomatic approach to resolving conflict is commendable.
  • Your ability to persevere despite setbacks is truly inspiring.
  • Your ability to build strong relationships with clients is impressive.
  • Your ability to prioritize tasks is invaluable to our team.
  • Your work consistently demonstrates your commitment to quality.
  • Your ability to break down complex information is excellent.
  • Your ability to think on your feet is greatly appreciated.
  • You consistently go above and beyond your job responsibilities.
  • Your attention to detail consistently ensures the accuracy of your work.
  • Your commitment to our team’s success is truly inspiring.
  • Your ability to maintain composure under stress is commendable.
  • Your contributions have made our project a success.
  • Your confidence and conviction in your work is motivating.
  • Thank you for stepping up and taking the lead on this task.
  • Your willingness to learn from mistakes is encouraging.
  • Your decision-making skills contribute greatly to the success of our team.
  • Your communication skills are essential for our team’s effectiveness.
  • Your ability to juggle multiple tasks simultaneously is impressive.
  • Your passion for your work is infectious.
  • Your courage in addressing challenges head-on is remarkable.
  • Your ability to prioritize tasks and manage your own workload is commendable.
  • You consistently demonstrate strong problem-solving skills.
  • Your work reflects your dedication to continuous improvement.
  • Your sense of humor helps lighten the mood during stressful times.
  • Your ability to take constructive feedback on board is impressive.
  • You always find opportunities to learn and develop your skills.
  • Your attention to safety protocols is much appreciated.
  • Your respect for deadlines is commendable.
  • Your focused approach to work is motivating to others.
  • You always search for ways to optimize our processes.
  • Your commitment to maintaining a high standard of work is inspirational.
  • Your excellent customer service skills are a true asset.
  • You demonstrate strong initiative in finding solutions to problems.
  • Your adaptability to new situations is an inspiration.
  • Your ability to manage change effectively is commendable.
  • Your proactive communication is appreciated by the entire team.
  • Your drive for continuous improvement is infectious.
  • Your input consistently elevates the quality of our discussions.
  • Your ability to handle both big picture and detailed tasks is impressive.
  • Your integrity and honesty are commendable.
  • Your ability to take on new responsibilities is truly inspiring.
  • Your strong work ethic is setting a high standard for the entire team.

Phrases for Areas of Improvement

  • You might consider revisiting the structure of your argument.
  • You could work on clarifying your main point.
  • Your presentation would benefit from additional examples.
  • Perhaps try exploring alternative perspectives.
  • It would be helpful to provide more context for your readers.
  • You may want to focus on improving the flow of your writing.
  • Consider incorporating additional evidence to support your claims.
  • You could benefit from refining your writing style.
  • It would be useful to address potential counterarguments.
  • You might want to elaborate on your conclusion.
  • Perhaps consider revisiting your methodology.
  • Consider providing a more in-depth analysis.
  • You may want to strengthen your introduction.
  • Your paper could benefit from additional proofreading.
  • You could work on making your topic more accessible to your readers.
  • Consider tightening your focus on key points.
  • It might be helpful to add more visual aids to your presentation.
  • You could strive for more cohesion between your sections.
  • Your abstract would benefit from a more concise summary.
  • Perhaps try to engage your audience more actively.
  • You may want to improve the organization of your thoughts.
  • It would be useful to cite more reputable sources.
  • Consider emphasizing the relevance of your topic.
  • Your argument could benefit from stronger parallels.
  • You may want to add transitional phrases for improved readability.
  • It might be helpful to provide more concrete examples.
  • You could work on maintaining a consistent tone throughout.
  • Consider employing a more dynamic vocabulary.
  • Your project would benefit from a clearer roadmap.
  • Perhaps explore the limitations of your study.
  • It would be helpful to demonstrate the impact of your research.
  • You could work on the consistency of your formatting.
  • Consider refining your choice of images.
  • You may want to improve the pacing of your presentation.
  • Make an effort to maintain eye contact with your audience.
  • Perhaps adding humor or anecdotes would engage your listeners.
  • You could work on modulating your voice for emphasis.
  • It would be helpful to practice your timing.
  • Consider incorporating more interactive elements.
  • You might want to speak more slowly and clearly.
  • Your project could benefit from additional feedback from experts.
  • You might want to consider the practical implications of your findings.
  • It would be useful to provide a more user-friendly interface.
  • Consider incorporating a more diverse range of sources.
  • You may want to hone your presentation to a specific audience.
  • You could work on the visual design of your slides.
  • Your writing might benefit from improved grammatical accuracy.
  • It would be helpful to reduce jargon for clarity.
  • You might consider refining your data visualization.
  • Perhaps provide a summary of key points for easier comprehension.
  • You may want to develop your skills in a particular area.
  • Consider attending workshops or trainings for continued learning.
  • Your project could benefit from stronger collaboration.
  • It might be helpful to seek guidance from mentors or experts.
  • You could work on managing your time more effectively.
  • It would be useful to set goals and priorities for improvement.
  • You might want to identify areas where you can grow professionally.
  • Consider setting aside time for reflection and self-assessment.
  • Perhaps develop strategies for overcoming challenges.
  • You could work on increasing your confidence in public speaking.
  • Consider collaborating with others for fresh insights.
  • You may want to practice active listening during discussions.
  • Be open to feedback and constructive criticism.
  • It might be helpful to develop empathy for team members’ perspectives.
  • You could work on being more adaptable to change.
  • It would be useful to improve your problem-solving abilities.
  • Perhaps explore opportunities for networking and engagement.
  • You may want to set personal benchmarks for success.
  • You might benefit from being more proactive in seeking opportunities.
  • Consider refining your negotiation and persuasion skills.
  • It would be helpful to enhance your interpersonal communication.
  • You could work on being more organized and detail-oriented.
  • You may want to focus on strengthening leadership qualities.
  • Consider improving your ability to work effectively under pressure.
  • Encourage open dialogue among colleagues to promote a positive work environment.
  • It might be useful to develop a growth mindset.
  • Be open to trying new approaches and techniques.
  • Consider building stronger relationships with colleagues and peers.
  • It would be helpful to manage expectations more effectively.
  • You might want to delegate tasks more efficiently.
  • You could work on your ability to prioritize workload effectively.
  • It would be useful to review and update processes and procedures regularly.
  • Consider creating a more inclusive working environment.
  • You might want to seek opportunities to mentor and support others.
  • Recognize and celebrate the accomplishments of your team members.
  • Consider developing a more strategic approach to decision-making.
  • You may want to establish clear goals and objectives for your team.
  • It would be helpful to provide regular and timely feedback.
  • Consider enhancing your delegation and time-management skills.
  • Be open to learning from your team’s diverse skill sets.
  • You could work on cultivating a collaborative culture.
  • It would be useful to engage in continuous professional development.
  • Consider seeking regular feedback from colleagues and peers.
  • You may want to nurture your own personal resilience.
  • Reflect on areas of improvement and develop an action plan.
  • It might be helpful to share your progress with a mentor or accountability partner.
  • Encourage your team to support one another’s growth and development.
  • Consider celebrating and acknowledging small successes.
  • You could work on cultivating effective communication habits.
  • Be willing to take calculated risks and learn from any setbacks.

Frequently Asked Questions

How can I phrase constructive feedback in peer evaluations?

To give constructive feedback in peer evaluations, try focusing on specific actions or behaviors that can be improved. Use phrases like “I noticed that…” or “You might consider…” to gently introduce your observations. For example, “You might consider asking for help when handling multiple tasks to improve time management.”

What are some examples of positive comments in peer reviews?

  • “Your presentation was engaging and well-organized, making it easy for the team to understand.”
  • “You are a great team player, always willing to help others and contribute to the project’s success.”
  • “Your attention to detail in documentation has made it easier for the whole team to access information quickly.”

Can you suggest ways to highlight strengths in peer appraisals?

Highlighting strengths in peer appraisals can be done by mentioning specific examples of how the individual excelled or went above and beyond expectations. You can also point out how their strengths positively impacted the team. For instance:

  • “Your effective communication skills ensured that everyone was on the same page during the project.”
  • “Your creativity in problem-solving helped resolve a complex issue that benefited the entire team.”

What are helpful phrases to use when noting areas for improvement in a peer review?

When noting areas for improvement in a peer review, try using phrases that encourage growth and development. Some examples include:

  • “To enhance your time management skills, you might try prioritizing tasks or setting deadlines.”
  • “By seeking feedback more often, you can continue to grow and improve in your role.”
  • “Consider collaborating more with team members to benefit from their perspectives and expertise.”

How should I approach writing a peer review for a manager differently?

When writing a peer review for a manager, it’s important to focus on their leadership qualities and how they can better support their team. Some suggestions might include:

  • “Encouraging more open communication can help create a more collaborative team environment.”
  • “By providing clearer expectations or deadlines, you can help reduce confusion and promote productivity.”
  • “Consider offering recognition to team members for their hard work, as this can boost motivation and morale.”

What is a diplomatic way to discuss negative aspects in a peer review?

Discussing negative aspects in a peer review requires tact and empathy. Try focusing on behaviors and actions rather than personal attributes, and use phrases that suggest areas for growth. For example:

  • “While your dedication to the project is admirable, it might be beneficial to delegate some tasks to avoid burnout.”
  • “Improving communication with colleagues can lead to better alignment within the team.”
  • “By asking for feedback, you can identify potential blind spots and continue to grow professionally.”

The Savvy Scientist

Experiences of a London PhD student and beyond

My Complete Guide to Academic Peer Review: Example Comments & How to Make Paper Revisions

Once you’ve submitted your paper to an academic journal you’re in the nerve-racking position of waiting to hear back about the fate of your work. In this post we’ll cover everything from potential responses you could receive from the editor and example peer review comments through to how to submit revisions.

My first first-author paper was reviewed by five (yes, 5!) reviewers, and since then I’ve published several other papers, so now I want to share the insights I’ve gained, which will hopefully help you out!

This post is part of my series to help with writing and publishing your first academic journal paper. You can find the whole series here: Writing an academic journal paper.

The Peer Review Process

An overview of the academic journal peer review process.

When you submit a paper to a journal, the first thing that will happen is one of the editorial team will do an initial assessment of whether or not the article is of interest. They may decide for a number of reasons that the article isn’t suitable for the journal and may reject the submission before even sending it out to reviewers.

If this happens hopefully they’ll have let you know quickly so that you can move on and make a start targeting a different journal instead.

Handy way to check the status – Sign in to the journal’s submission website and have a look at the status of your journal article online. If you can see that the article is under review then you’ve passed that first hurdle!

When your paper is under peer review, the journal will have set out a framework to help the reviewers assess your work. Generally they’ll be deciding whether the work is to a high enough standard.

Interested in reading about what reviewers are looking for? Check out my post on being a reviewer for the first time: Peer-Reviewing Journal Articles: Should You Do It? Sharing What I Learned From My First Experiences.

Once the reviewers have made their assessments, they’ll return their comments and suggestions to the editor who will then decide how the article should proceed.

How Many People Review Each Paper?

The editor ideally wants a clear decision from the reviewers as to whether the paper should be accepted or rejected. If there is no consensus among the reviewers then the editor may send your paper out to more reviewers to better judge whether or not to accept the paper.

If you’ve got a lot of reviewers on your paper, it isn’t necessarily because the reviewers disagreed about accepting it.

You can also end up with lots of reviewers in the following circumstance:

  • The editor asks a certain academic to review the paper but doesn’t get a response from them
  • The editor asks another academic to step in
  • The initial reviewer then responds

Next thing you know your work is being scrutinised by extra pairs of eyes!

As mentioned in the intro, my first paper ended up with five reviewers!

Potential Journal Responses

Assuming that the paper passes the editor’s initial evaluation and is sent out for peer-review, here are the potential decisions you may receive:

  • Reject the paper. Sadly the editor and reviewers decided against publishing your work. Hopefully they’ll have included feedback which you can incorporate into your submission to another journal. I’ve had some rejections and the reviewer comments were genuinely useful.
  • Accept the paper with major revisions. Good news: with some more work your paper could get published. If you make all the changes that the reviewers suggest, and they’re happy with your responses, then it should get accepted. Some people see major revisions as a disappointment but it doesn’t have to be.
  • Accept the paper with minor revisions. This is like getting a major revisions response but better! Generally minor revisions can be addressed quickly and often come down to clarifying things for the reviewers: rewording, addressing minor concerns, etc. They don’t require any more experiments or analysis. You stand a really good chance of getting the paper published if you’ve been given a minor revisions result.
  • Accept the paper with no revisions . I’m not sure that this ever really happens, but it is potentially possible if the reviewers are already completely happy with your paper!

Keen to know more about academic publishing? My series on publishing is now available as a free eBook, which includes my experiences being a peer reviewer.


Example Peer Review Comments & Addressing Reviewer Feedback

If your paper has been accepted but requires revisions, the editor will forward to you the comments and concerns that the reviewers raised. You’ll have to address these points so that the reviewers are satisfied your work is of a publishable standard.

It is extremely important to take this stage seriously. If you don’t do a thorough job then the reviewers won’t recommend that your paper is accepted for publication!

You’ll have to put together a resubmission with your co-authors and there are two crucial things you must do:

  • Make revisions to your manuscript based on the reviewers’ comments
  • Reply to the reviewers, telling them the changes you’ve made and potentially changes you’ve not made in instances where you disagree with them. Read on to see some example peer review comments and how I replied!

Before making any changes to your actual paper, I suggest having a thorough read through the reviewer comments.

Once you’ve read through the comments you might be keen to dive straight in and make the changes in your paper. Instead, I actually suggest firstly drafting your reply to the reviewers.

Why start with the reply to reviewers? In a way it is potentially more important than the changes you’re making in the manuscript.

Imagine when a reviewer receives your response to their comments: you want them to be able to read your reply document and be satisfied that their queries have largely been addressed without even having to open the updated draft of your manuscript. If you do a good job with the replies, the reviewers will be better placed to recommend the paper be accepted!

By starting with your reply to the reviewers you’ll also clarify for yourself what changes actually have to be made to the paper.

So let’s now cover how to reply to the reviewers.

1. Replying to Journal Reviewers

It is so important to make sure you do a solid job addressing your reviewers’ feedback in your reply document. If you leave anything unanswered you’re asking for trouble, which in this case means either a rejection or another round of revisions: though some journals only give you one shot! Therefore make sure you’re thorough, not just with making the changes but demonstrating the changes in your replies.

It’s no good putting in the work to revise your paper but not evidence it in your reply to the reviewers!

Some points the reviewers raise may not appear to necessitate changes to your manuscript, but this is rarely the case. Even where a comment or concern is already addressed in the paper, the fact that a reviewer raised it shows those areas could be clarified or highlighted to ensure that future readers don’t get confused.

How to Reply to Journal Reviewers

Some journals will request a certain format for how you should structure a reply to the reviewers. If so, this will be included in the email you receive from the journal’s editor. If there are no specific requirements, here is what I do:

  • Copy and paste all of the reviewers’ comments into a document.
  • Separate out each point they raise onto a separate line. Often they’ll already be nicely numbered but sometimes they actually still raise separate issues in one block of text. I suggest separating it all out so that each query is addressed separately.
  • Form your reply for each point that they raise. I start by just jotting down notes for roughly how I’ll respond. Once I’m happy with the key message I’ll write it up into a scripted reply.
  • Finally, go through and format it nicely and include line number references for the changes you’ve made in the manuscript.

By the end you’ll have a document that looks something like:

Reviewer 1
Point 1: [Quote the reviewer’s comment]
Response 1: [Address point 1 and say what revisions you’ve made to the paper]
Point 2: [Quote the reviewer’s comment]
Response 2: [Address point 2 and say what revisions you’ve made to the paper]

Then repeat this for all comments by all reviewers!
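If you like to automate the scaffolding, here is a hypothetical Python snippet that assembles a reply document in the Point/Response layout above. The helper name and the example comment text are my own illustrations, not part of any journal’s system:

```python
# Hypothetical helper: assemble a reply-to-reviewers document from
# (comment, response) pairs, following the Point/Response layout above.
def build_reply(reviews):
    """reviews: dict mapping reviewer heading -> list of (comment, response)."""
    lines = []
    for reviewer, points in reviews.items():
        lines.append(reviewer)
        for i, (comment, response) in enumerate(points, start=1):
            lines.append(f"Point {i}: {comment}")
            lines.append(f"Response {i}: {response}")
        lines.append("")  # blank line between reviewers
    return "\n".join(lines)

reply = build_reply({
    "Reviewer 1": [
        ("Please clarify the sample size.",
         "We now state n = 12 (lines 45-47)."),
    ],
})
print(reply)
```

The responses themselves are of course written by you and your co-authors; a script like this only handles the formatting.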

What To Actually Include In Your Reply To Reviewers

For every single point raised by the reviewers, you should do the following:

  • Address their concern: Do you agree or disagree with the reviewer’s comment? Either way, make your position clear and justify any differences of opinion. If the reviewer wants more clarity on an issue, provide it. It is really important that you actually address their concerns in your reply. Don’t just say “Thanks, we’ve changed the text”. Actually include everything they want to know in your reply. Yes this means you’ll be repeating things between your reply and the revisions to the paper but that’s fine.
  • Reference changes to your manuscript in your reply. Once you’ve answered the reviewer’s question, you must show that you’re actually using this feedback to revise the manuscript. The best way to do this is to refer to where the changes have been made throughout the text. I personally do this by including line references. Make sure you save this step right until the end, once you’ve finished making changes!

Example Peer Review Comments & Author Replies

In order to understand how this works in practice I’d suggest reading through a few real-life example peer review comments and replies.

The good news is that published papers often now include peer-review records, including the reviewer comments and authors’ replies. So here are two feedback examples from my own papers:

Example Peer Review: Paper 1

Quantifying 3D Strain in Scaffold Implants for Regenerative Medicine, J. Clark et al. 2020 – Available here

This paper was reviewed by two academics and was given major revisions. The journal gave us only 10 days to get them done, which was a bit stressful!

  • Reviewer Comments
  • My reply to Reviewer 1
  • My reply to Reviewer 2

One round of reviews wasn’t enough for Reviewer 2…

  • My reply to Reviewer 2 – ROUND 2

Thankfully it was accepted after the second round of review, and it actually ended up being selected as one of the journal’s most notable articles, whatever “most notable” means?!


Example Peer Review: Paper 2

Exploratory Full-Field Mechanical Analysis across the Osteochondral Tissue—Biomaterial Interface in an Ovine Model, J. Clark et al. 2020 – Available here

This paper was reviewed by three academics and was given minor revisions.

  • My reply to Reviewer 3

I’m pleased to say it was accepted after the first round of revisions 🙂

Things To Be Aware Of When Replying To Peer Review Comments

  • Generally, try to make a revision to your paper for every comment. No matter what the reviewer’s comment is, you can probably make a change to the paper which will improve your manuscript. For example, if the reviewer seems confused about something, improve the clarity in your paper. If you disagree with the reviewer, include better justification for your choices in the paper. It is far more favourable to take on board the reviewer’s feedback and act on it with actual changes to your draft.
  • Organise your responses. Sometimes journals will request that the reply to each reviewer is sent in a separate document. Unless they ask for it this way, I stick them all together in one document with subheadings, e.g. “Reviewer 1”, “Reviewer 2” etc.
  • Make sure you address each and every question. If you dodge anything then the reviewer will have a valid reason to reject your resubmission. You don’t need to agree with them on every point but you do need to justify your position.
  • Be courteous. No need to go overboard with compliments but stay polite as reviewers are providing constructive feedback. I like to add in “We thank the reviewer for their suggestion” every so often where it genuinely warrants it. Remember that written language doesn’t always carry tone very well, so rather than risk coming off as abrasive if I don’t agree with the reviewer’s suggestion I’d rather be generous with friendliness throughout the reply.

2. How to Make Revisions To Your Paper

Once you’ve drafted your replies to the reviewers, you’ve actually done a lot of the groundwork for making changes to the paper. Remember, you are making changes to the paper based on the reviewer comments, so you should regularly refer back to the comments to ensure you’re not getting sidetracked.

Reviewers could request modifications to any part of your paper. You may need to collect more data, do more analysis, reformat some figures, add in more references or discussion, or make any number of other revisions! I can’t cover every scenario, but here is some general advice:

  • Use tracked-changes. This is so important. The editor and reviewers need to be able to see every single change you’ve made compared to your first submission. Sometimes the journal will want a clean copy too but always start with tracked-changes enabled then just save a clean copy afterwards.
  • Be thorough . Try to not leave any opportunity for the reviewers to not recommend your paper to be published. Any chance you have to satisfy their concerns, take it. For example if the reviewers are concerned about sample size and you have the means to include other experiments, consider doing so. If they want to see more justification or references, be thorough. To be clear again, this doesn’t necessarily mean making changes you don’t believe in. If you don’t want to make a change, you can justify your position to the reviewers. Either way, be thorough.
  • Use your reply to the reviewers as a guide. In your draft reply to the reviewers you should have already included a lot of details which can be incorporated into the text. If they raised a concern, you should be able to go and find references which address it. These references should appear both in your reply and in the manuscript. As mentioned above, I always suggest starting with the reply, then simply adding these details to your manuscript once you know what needs doing.

Putting Together Your Paper Revision Submission

  • Once you’ve drafted your reply to the reviewers and revised manuscript, make sure to give your co-authors sufficient time to give feedback. Also give yourself time afterwards to make changes based on their feedback. I ideally allow a week for the feedback and another few days to make the changes.
  • When you’re satisfied that you’ve addressed the reviewer comments, you can think about submitting it. The journal may ask for another letter to the editor, if not I simply add to the top of the reply to reviewers something like:
“Dear [Editor], We are grateful to the reviewer for their positive and constructive comments that have led to an improved manuscript.  Here, we address their concerns/suggestions and have tracked changes throughout the revised manuscript.”

Once you’re ready to submit:

  • Double check that you’ve done everything that the editor requested in their email
  • Double check that the file names and formats are as required
  • Triple check you’ve addressed the reviewer comments adequately
  • Click submit and bask in relief!

You won’t always get the paper accepted, but if you’re thorough and present your revisions clearly then you’ll put yourself in a really good position. Remember to try as hard as possible to satisfy the reviewers’ concerns to minimise any opportunity for them to not accept your revisions!

Best of luck!

I really hope that this post has been useful to you and that the example peer review section has given you some ideas for how to respond. I know how daunting it can be to reply to reviewers, and it is really important to try to do a good job and give yourself the best chances of success. If you’d like to read other posts in my academic publishing series you can find them here:

Blog post series: Writing an academic journal paper





What Is Peer Review? | Types & Examples

Published on 6 May 2022 by Tegan George . Revised on 2 September 2022.


Peer assessment is often used in the classroom as a pedagogical tool. Both receiving feedback and providing it are thought to enhance the learning process, helping students think critically and collaboratively.


Depending on the journal, there are several types of peer review.

Single-blind peer review

The most common type of peer review is single-blind (or single anonymised) review . Here, the names of the reviewers are not known by the author.

While this gives the reviewers the ability to give feedback without the possibility of interference from the author, there has been substantial criticism of this method in the last few years. Many argue that single-blind reviewing can lead to poaching or intellectual theft or that anonymised comments cause reviewers to be too harsh.

Double-blind peer review

In double-blind (or double anonymised) review , both the author and the reviewers are anonymous.

Arguments for double-blind review highlight that this mitigates any risk of prejudice on the side of the reviewer, while protecting the nature of the process. In theory, it also leads to manuscripts being published on merit rather than on the reputation of the author.

Triple-blind peer review

While triple-blind (or triple anonymised) review – where the identities of the author, reviewers, and editors are all anonymised – does exist, it is difficult to carry out in practice.

Proponents of adopting triple-blind review for journal submissions argue that it minimises potential conflicts of interest and biases. However, ensuring anonymity is logistically challenging, and current editing software is not always able to fully anonymise everyone involved in the process.

In collaborative review , authors and reviewers interact with each other directly throughout the process. However, the identity of the reviewer is not known to the author. This gives all parties the opportunity to resolve any inconsistencies or contradictions in real time, and provides them a rich forum for discussion. It can mitigate the need for multiple rounds of editing and minimise back-and-forth.

Collaborative review can be time- and resource-intensive for the journal, however. For these collaborations to occur, there has to be a set system in place, often a technological platform, with staff monitoring and fixing any bugs or glitches.

Lastly, in open review , all parties know each other’s identities throughout the process. Often, open review can also include feedback from a larger audience, such as an online forum, or reviewer feedback included as part of the final published product.

While many argue that greater transparency prevents plagiarism or unnecessary harshness, there is also concern about the quality of future scholarship if reviewers feel they have to censor their comments.

In general, the peer review process includes the following steps:

  • First, the author submits the manuscript to the editor.
  • The editor then either rejects the manuscript and sends it back to the author, or sends it onward to the selected peer reviewer(s).
  • Next, the peer review process occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made.
  • Lastly, the edited manuscript is sent back to the author. They input the edits and resubmit it to the editor for publication.

The peer review process

In an effort to be transparent, many journals are now disclosing who reviewed each article in the published product. There are also increasing opportunities for collaboration and feedback, with some journals allowing open communication between reviewers and authors.

It can seem daunting at first to conduct a peer review or peer assessment. If you’re not sure where to start, there are several best practices you can use.

Summarise the argument in your own words

Summarising the main argument helps the author see how their argument is interpreted by readers, and gives you a jumping-off point for providing feedback. If you’re having trouble doing this, it’s a sign that the argument needs to be clearer, more concise, or worded differently.

If the author sees that you’ve interpreted their argument differently than they intended, they have an opportunity to address any misunderstandings when they get the manuscript back.

Separate your feedback into major and minor issues

It can be challenging to keep feedback organised. One strategy is to start out with any major issues and then flow into the more minor points. It’s often helpful to keep your feedback in a numbered list, so the author has concrete points to refer back to.

Major issues typically consist of any problems with the style, flow, or key points of the manuscript. Minor issues include spelling errors, citation errors, or other smaller, easy-to-apply feedback.

The best feedback you can provide is anything that helps them strengthen their argument or resolve major stylistic issues.

Give the type of feedback that you would like to receive

No one likes being criticised, and it can be difficult to give honest feedback without sounding overly harsh or critical. One strategy you can use here is the ‘compliment sandwich’, where you ‘sandwich’ your constructive criticism between two compliments.

Be sure you are giving concrete, actionable feedback that will help the author submit a successful final draft. While you shouldn’t tell them exactly what they should do, your feedback should help them resolve any issues they may have overlooked.

As a rule of thumb, your feedback should be:

  • Easy to understand
  • Constructive

Below is a brief research example of the kind of manuscript you might be asked to give peer feedback on.

Influence of phone use on sleep

Studies show that teens from the US are getting less sleep than they were a decade ago (Johnson, 2019) . On average, teens only slept for 6 hours a night in 2021, compared to 8 hours a night in 2011. Johnson mentions several potential causes, such as increased anxiety, changed diets, and increased phone use.

The current study focuses on the effect phone use before bedtime has on the number of hours of sleep teens are getting.

For this study, a sample of 300 teens was recruited using social media, such as Facebook, Instagram, and Snapchat. The first week, all teens were allowed to use their phone the way they normally would, in order to obtain a baseline.

The sample was then divided into 3 groups:

  • Group 1 was not allowed to use their phone before bedtime.
  • Group 2 used their phone for 1 hour before bedtime.
  • Group 3 used their phone for 3 hours before bedtime.

All participants were asked to go to sleep around 10 p.m. to control for variation in bedtime . In the morning, their Fitbit showed the number of hours they’d slept. They kept track of these numbers themselves for 1 week.

Two independent t tests were used in order to compare Group 1 and Group 2, and Group 1 and Group 3. The first t test showed no significant difference ( p > .05) between the number of hours for Group 1 ( M = 7.8, SD = 0.6) and Group 2 ( M = 7.0, SD = 0.8). The second t test showed a significant difference ( p < .01) between the average difference for Group 1 ( M = 7.8, SD = 0.6) and Group 3 ( M = 6.1, SD = 1.5).
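To see how such a comparison works numerically, here is a minimal sketch of a Welch’s t statistic computed from the summary statistics reported above. The per-group sample size is my assumption (100 per group, since 300 teens were split into 3 groups); the example itself doesn’t state it:

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    # Welch's t statistic from group means (m), standard deviations (s),
    # and sample sizes (n) -- no raw data needed.
    return (m1 - m2) / math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)

# Group 1 (M = 7.8, SD = 0.6) vs Group 3 (M = 6.1, SD = 1.5),
# assuming 100 teens per group:
t = welch_t(7.8, 0.6, 100, 6.1, 1.5, 100)
print(round(t, 1))  # -> 10.5, a large t statistic consistent with p < .01
```

In practice the full test (with degrees of freedom and an exact p value) would be run on the raw data in statistical software; this sketch only shows where the test statistic comes from.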

This shows that teens sleep fewer hours a night if they use their phone for over an hour before bedtime, compared to teens who use their phone for 0 to 1 hours.

Peer review is an established and hallowed process in academia, dating back hundreds of years. It provides various fields of study with metrics, expectations, and guidance to ensure published work is consistent with predetermined standards.

  • Protects the quality of published research

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. Any content that raises red flags for reviewers can be closely examined in the review stage, preventing plagiarised or duplicated research from being published.

  • Gives you access to feedback from experts in your field

Peer review represents an excellent opportunity to get feedback from renowned experts in your field and to improve your writing through their feedback and guidance. Experts with knowledge about your subject matter can give you feedback on both style and content, and they may also suggest avenues for further research that you hadn’t yet considered.

  • Helps you identify any weaknesses in your argument

Peer review acts as a first defence, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process. This way, you’ll end up with a more robust, more cohesive article.

While peer review is a widely accepted metric for credibility, it’s not without its drawbacks.

  • Reviewer bias

The more transparent double-blind system is not yet very common, which can lead to bias in reviewing. A common criticism is that an excellent paper by a new researcher may be declined, while an objectively lower-quality submission by an established researcher would be accepted.

  • Delays in publication

The thoroughness of the peer review process can lead to significant delays in publishing time. Research that was current at the time of submission may not be as current by the time it’s published.

  • Risk of human error

By its very nature, peer review carries a risk of human error. In particular, falsification often cannot be detected, given that reviewers would have to replicate entire experiments to ensure the validity of results.

Peer review is a process of evaluating submissions to an academic journal. Utilising rigorous criteria, a panel of reviewers in the same subject area decide whether to accept each submission for publication.

For this reason, academic journals are often considered among the most credible sources you can use in a research project – provided that the journal itself is trustworthy and well regarded.

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. It also represents an excellent opportunity to get feedback from renowned experts in your field.

It acts as a first defence, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process.

Peer-reviewed articles are considered a highly credible source due to the stringent process they go through before publication.

In general, the peer review process follows these steps:

  • First, the author submits the manuscript to the editor.
  • The editor then either rejects the manuscript and sends it back to the author, or sends it onward to the selected peer reviewer(s).
  • Next, the reviewers provide feedback and advise on what edits should be made.
  • Lastly, the edited manuscript is sent back to the author. They input the edits, and resubmit it to the editor for publication.

Many academic fields use peer review , largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the published manuscript.


Broad Institute of MIT and Harvard

Peer Review – Best Practices

“Peer review is broken. But let’s do it as effectively and as conscientiously as possible.” — Rosy Hosking, CommLab

“A thoughtful, well-presented evaluation of a manuscript, with tangible suggestions for improvement and a recommendation that is supported by the comments, is the most valuable contribution that you can make as a reviewer, and such a review is greatly appreciated by both the authors of the manuscript and the editors of the journal.” — ACS Reviewer Lab

Criteria for success

A successful peer review:

  • Contains a brief summary of the entire manuscript. Show the editors and authors what you think the main claims of the paper are, and your assessment of its impact on the field. What did the authors try to show and what did they try to claim?
  • Clearly directs the editor on the path forward. Should this paper be accepted, rejected, or revised?
  • Identifies any major (internal inconsistencies, missing data, etc.) concerns, and clearly locates them within the document. Why do you think that the direction specified is correct? What were the issues you identified that led you to that decision?
  • Lists (if appropriate — i.e. if you are suggesting revision or acceptance) minor concerns to help the authors make the paper watertight (typographical errors, grammatical errors, missing references, unclear explanations of methodology, etc.).
  • Explains how the arguments can be better defended through analysis, experiments, etc.
  • Is reasonable within the original manuscript scope ; does not suggest modifications that would require excessive time or expense, or that could instead be addressed by adjusting the manuscript’s claims .

Structure Diagram

A typical peer review is 1-2 pages long. You can divide your content roughly as follows:


Identify your purpose

The purpose of your pre-publication peer review is two-fold:

  • Evaluate the quality of the science presented, including: scientific integrity (which can be handled with editorial office assistance), the quality of data collection methods and data analysis, and the veracity of conclusions presented in the manuscript.
  • Determine match between the proposed submission and the journal scope (subject matter and potential impact). For example, a paper that holds significance only for a particular subfield of chemical engineering is not appropriate for a broad multidisciplinary journal. Determining match is usually done in partnership with the editor, who can answer questions of journal scope.

Analyze your audience

The audience for your peer review work is unusual compared to most other kinds of communication you will undertake as a scientist. Your primary audience is the journal editor, who will use your feedback to make a decision to accept or reject the manuscript. Your secondary audience is the author, who will use your suggestions to make improvements to the manuscript. Typically, you will be known to the journal editors, but anonymous to the authors of the manuscript. For this reason, it is important that you balance your review between these two parties.

The editors are most interested in hearing your critical feedback on the science that is presented, and whether there are any claims that need to be adjusted. The editors need to know:

  • Your areas of expertise within the manuscript
  • The paper’s significance to your particular field

To help you, most journals will have guidelines for reviewers to follow, which can be found on the journal’s website (e.g., Cell Guidelines ).

The authors are interested in:

  • Understanding what aspects of their logic are not easily understood
  • Other layers of experimentation or discussion that would be necessary to support claims
  • Any additional information they would need to convince you of their arguments

Format Your Document in a Standard Way

Peer review feedback is most easily digested and understood by both editors and authors when it arrives in a clear, logical format. Most commonly the format is (1) Summary, (2) Decision, (3) Major Concerns, and (4) Minor Concerns (see also Structure Diagram above).

There is also often a multiple choice form to “rate” the paper on a number of criteria. This numerical scoring guide may be used by editors to weigh the manuscript against other submissions; think of it mostly as a checklist of topics to cover in your review.

The summary grounds the remainder of your review. You need to demonstrate that you have read and understood the manuscript, which helps the authors see what readers take to be the manuscript’s main claims. This is also an opportunity to demonstrate your own expertise and critical thinking, which makes a positive impression on the editors, who may be influential people in your field.

It is helpful to use the following guidelines:

  • Start with a one-sentence description of the paper’s main point, followed by several sentences summarizing specific important findings that lead to the paper’s logical conclusion.
  • Then, highlight the significance of the important findings that were shown in the paper.
  • Conclude with your overall opinion of what the manuscript does and does not do well.

Your decision must be clearly stated to aid the interpretation of the rest of your comments (see Criteria for Success). State it either as part of the concluding sentence of the summary paragraph, or as a separate sentence after the summary. In general, try to categorize your decision within the following framework:

  • Accept with no revisions
  • Accept with minor revisions
  • Accept with major revisions
  • Reject

Some journals will have specific rules or different wording, so make sure you understand what your options are.

Most reviews also contain the option to provide confidential comments to the editor, which can be used to provide the editor with more detail on the decision. In extreme cases, this can also be where concerns about plagiarism, data manipulation, or other ethical issues can be raised.

The Decision area is also where you can state which aspects of complex manuscripts you feel you have the expertise to comment on.

Major Concerns (where relevant)

Depending on the journal that you are reviewing for, there might be criteria for significance, novelty, industrial relevance, or other field-specific criteria that need to be accounted for in your major concerns. Major concerns, if they are serious, typically lead to decisions that are either “reject” or “accept with major revisions.”

Major concerns include:

  • arguments presented in the paper that are not internally consistent;
  • arguments that contradict well-established understanding in the field without the necessary data to back them up;
  • a lack of key experimental or computational data that are vital to justify the claims made in the paper.

Examples: a study that reports the identity of an unexpected peak in a GC-MS spectrum without accounting for common interferences, or claims pertaining to human health when all the data presented are from a model organism or in vitro.

One of the most important aspects of providing a review with major concerns is your ability to suggest resolutions. For example:

  • If you think that someone’s argument is going against the laws of thermodynamics, what data would they need to show you to convince you otherwise?
  • What types of new statistical analysis would you need to see to believe the claims being made about the clinical trials presented in this work?
  • Are there additional control experiments that are needed to show that this catalyst is actually promoting the reaction along the pathway suggested?

Minor Concerns (optional)

Minor concerns are issues that, if addressed, would improve the clarity of the message but don’t impact the logic of the argument. Most commonly these are:

  • Grammatical errors within the manuscript
  • Typographical errors
  • Missing references
  • Insufficient background or methods information (e.g., an introduction section with only five references)
  • Insufficient or possibly extraneous detail
  • Unclear or poorly worded explanations (e.g., a paragraph in the discussion section that seems to contradict other parts of the paper)
  • Possible options for improving the readability of any graphics (e.g., incorrect labels on a figure)

Minor concerns are sometimes omitted from reviews with many major concerns, but they are almost always included when the decision is an accept or accept with minor revisions.

Offer revisions that are reasonable and in scope

Think about the feasibility of the experiments you suggest to address your concerns. Are you suggesting 3 years’ more work that could form the basis for a whole other publication? If you are suggesting vast amounts of animal work or sequencing, then are the experiments going to be prohibitively expensive? If the paper would stand without this next layer of experimentation, then think seriously about the real value of these additional experiments. One of the major issues with scientific publishing is the length of time taken to get to the finish line. Don’t muddy the water for fellow authors unnecessarily!

As an alternative to more experiments, does the author need to adjust their claims to fit the extent of their evidence rather than the other way round? If they did that, would this still be a good paper for the journal you are reviewing for?

Structure your comments in a way that makes sense to the audience

Formatting choices:

  • Separate each of your concerns clearly with line breaks (or numbering) and organize them in the order they appear in the manuscript.
  • Quote directly from the text, and bold or italicize relevant phrases to illustrate your points.
  • Include page and line/paragraph numbers for easy reference.

Style/Concision:

  • Keep your comments as brief as possible by simply stating the issue and your suggestion for fixing it in a few sentences or less.

Offer feedback that is constructive and professional

Be unbiased and professional.

Although the identities of the authors are sometimes kept anonymous during the review process (this is rare in chemical and biological research), research communities are typically small and you may be tempted to guess who the author is based on the methodology used or the writing style. Regardless, it is important to remain unbiased and professional in your review. Do not assume anything about the paper based on your perception of, for example, the author’s status or the impact their results may have on your own research. If you feel that this might be an issue for you, you must inform the editor that there is a conflict of interest and you should not review this manuscript.

Be polite and diplomatic .

Receiving critical feedback, even when constructive, can be difficult and possibly emotional for the authors. Since you are not anonymous to the editors, being unnecessarily harsh in your feedback will reflect badly on you in the end. Use similar language to what you would use when discussing research at a conference, or when talking with your advisor in a meeting. Manuscript peer review is a good way to practice these “soft” skills, which are important yet often neglected in the science community.

Additional resources about effective peer reviewing

  • American Chemical Society Reviewer Lab
  • Nature Masterclasses peer review training course (for purchase): https://masterclasses.nature.com/courses/205
  • Sense about Science, “Peer Review: The Nuts and Bolts”: http://senseaboutscience.org/activities/peer-review-the-nuts-and-bolts/
  • ASAPbio, “Six essential reads on peer review”: http://asapbio.org/six-essential-reads-on-peer-review

This article was written by Mike Orella (MIT Chem E Comm Lab); edited by Mica Smith (MIT Chem E Comm Lab) and Rosy Hosking (Broad Comm Lab)


Understanding Peer Review in Science

Peer Review Process

Peer review is an essential element of the scientific publishing process that helps ensure that research articles are evaluated, critiqued, and improved before release into the academic community. Take a look at the significance of peer review in scientific publications, the typical steps of the process, and how to approach peer review if you are asked to assess a manuscript.

What Is Peer Review?

Peer review is the evaluation of work by peers, who are people with comparable experience and competency. Peers assess each other’s work in educational settings, in professional settings, and in the publishing world. The goal of peer review is improving quality, defining and maintaining standards, and helping people learn from one another.

In the context of scientific publication, peer review helps editors determine which submissions merit publication and improves the quality of manuscripts prior to their final release.

Types of Peer Review for Manuscripts

There are three main types of peer review:

  • Single-blind review: The reviewers know the identities of the authors, but the authors do not know the identities of the reviewers.
  • Double-blind review: Both the authors and reviewers remain anonymous to each other.
  • Open peer review: The identities of both the authors and reviewers are disclosed, promoting transparency and collaboration.

Each method has advantages and disadvantages. Anonymous review reduces bias but limits collaboration, while open review is more transparent but can increase bias.

Key Elements of Peer Review

Proper selection of a peer group improves the outcome of the process:

  • Expertise : Reviewers should possess adequate knowledge and experience in the relevant field to provide constructive feedback.
  • Objectivity : Reviewers assess the manuscript impartially and without personal bias.
  • Confidentiality : The peer review process maintains confidentiality to protect intellectual property and encourage honest feedback.
  • Timeliness : Reviewers provide feedback within a reasonable timeframe to ensure timely publication.

Steps of the Peer Review Process

The typical peer review process for scientific publications involves the following steps:

  • Submission : Authors submit their manuscript to a journal that aligns with their research topic.
  • Editorial assessment : The journal editor examines the manuscript and determines whether or not it is suitable for publication. If it is not, the manuscript is rejected.
  • Peer review : If it is suitable, the editor sends the article to peer reviewers who are experts in the relevant field.
  • Reviewer feedback : Reviewers provide feedback, critique, and suggestions for improvement.
  • Revision and resubmission : Authors address the feedback and make necessary revisions before resubmitting the manuscript.
  • Final decision : The editor makes a final decision on whether to accept or reject the manuscript based on the revised version and reviewer comments.
  • Publication : If accepted, the manuscript undergoes copyediting and formatting before being published in the journal.

Pros and Cons

While the goal of peer review is improving the quality of published research, the process isn’t without its drawbacks.

Advantages:

  • Quality assurance: Peer review helps ensure the quality and reliability of published research.
  • Error detection: The process identifies errors and flaws that the authors may have overlooked.
  • Credibility: The scientific community generally considers peer-reviewed articles to be more credible.
  • Professional development: Reviewers can learn from the work of others and enhance their own knowledge and understanding.

Drawbacks:

  • Time-consuming: The peer review process can be lengthy, delaying the publication of potentially valuable research.
  • Bias: Reviewers’ personal biases can affect their evaluation of the manuscript.
  • Inconsistency: Different reviewers may provide conflicting feedback, making it challenging for authors to address all concerns.
  • Limited effectiveness: Peer review does not always detect significant errors or misconduct.
  • Poaching: Some reviewers take an idea from a submission and gain publication before the authors of the original research.

Steps for Conducting Peer Review of an Article

Generally, an editor provides guidance when you are asked to provide peer review of a manuscript. Here are typical steps of the process.

  • Accept the right assignment: Accept invitations to review articles that align with your area of expertise to ensure you can provide well-informed feedback.
  • Manage your time: Allocate sufficient time to thoroughly read and evaluate the manuscript, while adhering to the journal’s deadline for providing feedback.
  • Read the manuscript multiple times: First, read the manuscript for an overall understanding of the research. Then, read it more closely to assess the details, methodology, results, and conclusions.
  • Evaluate the structure and organization: Check if the manuscript follows the journal’s guidelines and is structured logically, with clear headings, subheadings, and a coherent flow of information.
  • Assess the quality of the research: Evaluate the research question, study design, methodology, data collection, analysis, and interpretation. Consider whether the methods are appropriate, the results are valid, and the conclusions are supported by the data.
  • Examine the originality and relevance: Determine if the research offers new insights, builds on existing knowledge, and is relevant to the field.
  • Check for clarity and consistency: Review the manuscript for clarity of writing, consistent terminology, and proper formatting of figures, tables, and references.
  • Identify ethical issues: Look for potential ethical concerns, such as plagiarism, data fabrication, or conflicts of interest.
  • Provide constructive feedback: Offer specific, actionable, and objective suggestions for improvement, highlighting both the strengths and weaknesses of the manuscript. Don’t be mean.
  • Organize your review: Structure your review with an overview of your evaluation, followed by detailed comments and suggestions organized by section (e.g., introduction, methods, results, discussion, and conclusion).
  • Be professional and respectful: Maintain a respectful tone in your feedback, avoiding personal criticism or derogatory language.
  • Proofread your review: Before submitting your review, proofread it for typos, grammar, and clarity.


Peer review


A key convention in the publication of research is the peer review process, in which the quality and potential contribution of each manuscript is evaluated by one's peers in the scientific community.

Like other scientific journals, APA journals utilize a peer review process to guide manuscript selection and publication decisions.

Toward the goal of impartiality, the majority of APA journals follow a masked review policy, in which authors' and reviewers' identities are concealed from each other. Reviewer identities are never shared unless the reviewer requests to sign their review.

APA journal reviewers are qualified individuals selected by the action editor (typically, the journal editor or associate editor) to review a manuscript on the basis of their expertise in particular content areas of their field.

The role of a peer reviewer is to highlight unique, original manuscripts that fit within the scope of the journal.

To aid the editor's objectivity, two to three peer reviewers are selected to evaluate a manuscript.

These reviewers should be able to provide fair reviews, free from conflicts of interest, as well as submit the reviews on time.

In addition to technical expertise, criteria for selection of reviewers may include familiarity with a particular controversy or attention to a balance of perspectives (APA, 2010, p. 226).

Whereas the journal editor holds final responsibility for a manuscript, the action editor usually weighs reviewers' inputs heavily.

Authors can expect their manuscripts to be reviewed fairly, in a skilled, conscientious manner. The comments received should be constructive, respectful and specific.

Reviewers must present a clear decision recommendation regarding publication, considering the quality of the manuscript, its scientific contribution, and its appropriateness for the particular journal; support the recommendation with a detailed, comprehensive analysis of the quality and coherence of the study's conceptual basis, methods, results, and interpretations; and offer specific, constructive suggestions to authors.

Journal editors may request that reviewers evaluate manuscripts based on specific criteria, which may vary across journals or for non-empirical article types, such as commentaries or reviews.

The action editor scans the paper to gain an independent view of the work. This "quick read" provides a foundation for the more thorough reading that follows — it by no means determines the final decision, but does parallel how authors can expect many reviewers (and readers) to approach their papers.

First, the editor scans the paper from beginning to end for obvious flaws in the research substance and writing style. If problems show on the surface, a deeper reading is likely to uncover other matters needing attention.

After this initial examination of your manuscript, the action editors, as well as any peer reviewers, will follow these general guidelines:

Read the abstract

Major problems in the abstract often reflect internal flaws.

The major goal in reading the abstract is to understand the research question:

  • Is it clearly defined, relevant, and supported by the methodology?
  • What is the sense of the research question, methodology, findings, and interpretations?

APA publication policy emphasizes conclusion-oriented abstracts: What did the research find, and what do the findings mean?

Examine the full manuscript

If it is more than 35 typed, double-spaced pages (including references, tables, and figures), this could pose a problem for some journals.

  • How long are the Introduction and the Discussion sections relative to other sections of the paper?
  • Does the paper adhere to journal-specific guidelines?

These guidelines can be found on the Manuscript Submission tab of each journal's webpage.

Scan the paper's headings

  • Are they well organized?
  • Does a clear structure emerge?

If not, the author has not achieved coherence.

Scan the references

  • Are they in APA Style?

If not, the author is not using APA publication format.

Scan the tables and figures

  • Do they portray the information clearly?
  • Can they stand alone without captions?
  • Are they well constructed and in APA Style?

A "no" to any of these questions suggests problems in the author's presentation of findings.

  • If the text contains a large number of statistics, could they be more appropriately put into tables or figures?

The editor drafting the decision letter should be synthesizing the input from multiple reviewers into a cohesive list of improvements that should be made to the manuscript. Any comments from the reviewers will be appended to the official decision letter.

These categories constitute the editorial actions that may be taken on a manuscript.

Reject

The flaws that lead to this decision generally center on substantive or methodological issues. A manuscript is usually rejected because it is outside the area of coverage of the journal; it contains serious flaws of design, methodology, analysis, or interpretation; or it is judged to make only a limited novel contribution to the field.

Revise and resubmit

In most cases, manuscripts may have publication potential but are not yet ready for final publication. The study as presented may not merit acceptance as is but may warrant consideration after substantive revision (e.g., reorganizing the conceptual structure, conducting additional experiments, or modifying analyses).

The action editor will give the author an invitation to revise and resubmit for another round of reviews (usually with the same reviewers). An action editor cannot guarantee acceptance of a revised manuscript, but authors who respond flexibly and attend closely to suggested revisions enhance their chances for an acceptance.

Authors must include a detailed cover letter outlining their responses to the revisions. Authors may receive this decision multiple times prior to acceptance.

Accept

In very few cases, a manuscript may be accepted for publication on first reading, with only minor revisions required. More typically, acceptances follow the successful revision of a manuscript previously rejected with an invitation to revise and resubmit.

Once a manuscript is accepted and appropriate paperwork has been obtained, it enters the production phase of publication. At this point, no further changes can be made by the author other than those suggested by the copyeditor.

  • Guidelines for Effective Manuscript Evaluation (from Psychotherapy )
  • Peer review ethics: Six things every author should know (from Division Dialogue , March 2018)
  • Current Peer Review Trends and Standards

If your manuscript is rejected, and if you believe a pertinent point was overlooked or misunderstood by the reviewers, you may appeal the editorial decision by contacting the editor responsible for the journal.

The editor might then decide to send the appeal to the (associate) editor who handled the initial submission.

If you appeal to the editor and are not satisfied with the editor's response, the next step in the APA editorial appeal procedure is to contact the APA chief editorial advisor .

If a satisfactory resolution is still not achieved, and you still believe that the process was unfair, you may appeal to the Publications and Communications (P&C) Board.

An initial review by the journals publisher and P&C Board chair and chair-elect will determine if the appeal will go before the full board for final decision.

Cases in which an appeal might not go before the full board are those in which an author submitted a manuscript against a journal’s policy (e.g., if a rejected submission was revised and submitted as a new submission without invitation to do so).


70 Peer Review Examples: Powerful Phrases You Can Use

By Surabhi · October 30, 2023

The blog is tailored for HR professionals looking to set up and improve peer review feedback within their organization. Share the article with your employees as a guide to help them understand how to craft insightful peer review feedback.

Peer review is a critical part of personal development, allowing colleagues to learn from each other and excel at their job. Crafting meaningful and impactful feedback for peers is an art. It’s not just about highlighting strengths and weaknesses; it’s about doing so in a way that motivates others. 

In this blog post, we will explore some of the most common phrases you can use to give peer feedback. Whether you’re looking to compliment a job well done, offer constructive criticism, or provide balanced and fair feedback, these peer review examples will help you communicate your feedback with clarity and empathy.

Peer review feedback is the practice of colleagues and co-workers assessing and providing meaningful feedback on each other’s performance. It is a valuable instrument that helps organizations foster professional development, teamwork, and continuous improvement.

Peoplebox lets you conduct effective peer reviews within minutes. You can customize feedback, use tailored surveys, and seamlessly integrate it with your collaboration tools. It’s a game-changer for boosting development and collaboration in your team.


Why are Peer Reviews Important?

Here are some compelling reasons why peer review feedback is so vital:

Broader Perspective: Peer feedback offers a well-rounded view of an employee’s performance. Colleagues witness their day-to-day efforts and interactions, providing a more comprehensive evaluation compared to just a supervisor’s perspective.

Skill Enhancement: It serves as a catalyst for skill enhancement. Constructive feedback from peers highlights areas of improvement and offers opportunities for skill development.

Encourages Accountability: Peer review fosters a culture of accountability . Knowing that one’s work is subject to review by peers can motivate individuals to perform at their best consistently.

Team Cohesion: It strengthens team cohesion by promoting open and constructive communication. Teams that actively engage in peer feedback often develop a stronger sense of unity and shared purpose.

Fair and Unbiased Assessment: By involving colleagues, peer review helps ensure a fair and unbiased assessment. It mitigates the potential for supervisor bias and personal favoritism in performance evaluations .

Identifying Blind Spots: Peers can identify blind spots that supervisors may overlook. This means addressing issues at an early stage, preventing them from escalating.

Motivation and Recognition: Positive peer feedback can motivate employees and offer well-deserved recognition for their efforts. Acknowledgment from colleagues can be equally, if not more, rewarding than praise from higher-ups.

Now, let us look at the best practices for giving peer feedback in order to leverage its benefits effectively.

Best practices to follow while giving peer feedback

30 Positive Peer Feedback Examples

Now that we’ve established the importance of peer review feedback, the next step is understanding how to use powerful phrases to make the most of this evaluation process. In this section, we’ll equip you with a variety of phrases to use during peer reviews, helping you and your team approach the process with confidence.

Must Read: 60+ Self-Evaluation Examples That Can Make You Shine

Peer Review Example on Work Quality

When it comes to recognizing excellence, quality work is often the first on the list. Here are some peer review examples highlighting the work quality:

  • “Kudos to Sarah for consistently delivering high-quality reports that never fail to impress both clients and colleagues. Her meticulous attention to detail and creative problem-solving truly set the bar high.”
  • “John’s attention to detail and unwavering commitment to excellence make his work a gold standard for the entire team. His consistently high-quality contributions ensure our projects shine.”
  • “Alexandra’s dedication to maintaining the project’s quality standards sets a commendable benchmark for the entire department. Her willingness to go the extra mile is a testament to her work ethic and quality focus.”
  • “Patrick’s dedication to producing error-free code is a testament to his commitment to work quality. His precise coding and knack for bug spotting make his work truly outstanding.”

Peer Review Examples on Competency and Job-Related Skills

Competency and job-related skills set the stage for excellence. Here’s how you can write a peer review highlighting this particular skill set:

  • “Michael’s extensive knowledge and problem-solving skills have been instrumental in overcoming some of our most challenging technical hurdles. His ability to analyze complex issues and find creative solutions is remarkable. Great job, Michael!”
  • “Emily’s ability to quickly grasp complex concepts and apply them to her work is truly commendable. Her knack for simplifying the intricate is a gift that benefits our entire team.”
  • “Daniel’s expertise in data analysis has significantly improved the efficiency of our decision-making processes. His ability to turn data into actionable insights is an invaluable asset to the team.”
  • “Sophie’s proficiency in graphic design has consistently elevated the visual appeal of our projects. Her creative skills and artistic touch add a unique, compelling dimension to our work.”

Peer Review Sample on Leadership Skills

Leadership ability extends beyond a mere title; it’s a living embodiment of vision and guidance, as seen through these exceptional examples:

  • “Under Lisa’s leadership, our team’s morale and productivity have soared, a testament to her exceptional leadership skills and hard work. Her ability to inspire, guide, and unite the team in the right direction is truly outstanding.”
  • “James’s ability to inspire and lead by example makes him a role model for anyone aspiring to be a great leader. His approachability and strong sense of ethics create an ideal leadership model.”
  • “Rebecca’s effective delegation and strategic vision have been the driving force behind our project’s success. Her ability to set clear objectives, give valuable feedback, and empower team members is truly commendable.”
  • “Victoria’s leadership style fosters an environment of trust and innovation, enabling our team to flourish. Her encouragement of creativity and openness to diverse ideas is truly inspiring.”

Feedback on Teamwork and Collaboration Skills

Teamwork is where individual brilliance becomes collective success. Here are some peer review examples highlighting teamwork:

  • “Mark’s ability to foster a collaborative environment is infectious; his team-building skills unite us all. His open-mindedness and willingness to listen to new ideas create a harmonious workspace.”
  • “Charles’s commitment to teamwork has a ripple effect on the entire department, promoting cooperation and synergy. His ability to bring out the best in the rest of the team is truly remarkable.”
  • “David’s talent for bringing diverse perspectives together enhances the creativity and effectiveness of our group projects. His ability to unite us under a common goal fosters a sense of belonging.”

Peer Review Examples on Professionalism and Work Ethics

Professionalism and ethical conduct define a thriving work culture. Here’s how you can write a peer review highlighting work ethics in performance reviews :

  • “Rachel’s unwavering commitment to deadlines and ethical work practices is a model for us all. Her dedication to punctuality and ethics contributes to a culture of accountability.”
  • “Timothy consistently exhibits the highest level of professionalism, ensuring our clients receive impeccable service. His courtesy and reliability set a standard of excellence.”
  • “Daniel’s punctuality and commitment to deadlines set a standard of professionalism we should all aspire to. His sense of responsibility is an example to us all.”
  • “Olivia’s unwavering dedication to ethical business practices makes her a trustworthy and reliable colleague. Her ethical principles create an atmosphere of trust and respect within our team, leading to a more positive work environment.”

Feedback on Mentoring and Support

Mentoring and support pave the way for future success. Check out these peer review examples focusing on mentoring:

  • “Ben’s dedication to mentoring new team members is commendable; his guidance is invaluable to our junior colleagues. His approachability and patience create an environment where learning flourishes.”
  • “David’s mentorship has been pivotal in nurturing the talents of several team members beyond his direct report, fostering a culture of continuous improvement. His ability to transfer knowledge is truly outstanding.”
  • “Laura’s patient mentorship and continuous support for her colleagues have helped elevate our team’s performance. Her constructive feedback and guidance have made a remarkable difference.”
  • “William’s dedication to knowledge sharing and mentoring is a driving force behind our team’s constant learning and growth. His commitment to others’ development is inspiring.”

Peer Review Examples on Communication Skills

Effective communication is the linchpin of harmonious collaboration. Here are some peer review examples to highlight your peer’s communication skills:

  • “Grace’s exceptional communication skills ensure clarity and cohesion in our team’s objectives. Her ability to articulate complex ideas in a straightforward manner is invaluable.”
  • “Oliver’s ability to convey complex ideas with simplicity greatly enhances our project’s success. His effective communication style fosters a productive exchange of ideas.”
  • “Aiden’s proficiency in cross-team communication ensures that our projects move forward efficiently. His ability to bridge gaps in understanding is truly commendable.”

Peer Review Examples on Time Management and Productivity

Time management and productivity are the engines that drive accomplishments. Here are some peer review examples highlighting time management:

  • “Ella’s time management is nothing short of exemplary; it sets a benchmark for us all. Her efficient task organization keeps our projects on track.”
  • “Robert’s ability to meet deadlines and manage time efficiently significantly contributes to our team’s overall productivity. His time management skills are truly remarkable.”
  • “Sophie’s time management skills are a cornerstone of her impressive productivity, inspiring us all to be more efficient. Her ability to juggle multiple tasks is impressive.”
  • “Liam’s time management skills are key to his consistently high productivity levels. His ability to organize work efficiently is an example for all of us to follow.”

Though these positive feedback examples are valuable, it’s important to recognize that there will be instances when your team needs to convey constructive or negative feedback. In the upcoming section, we’ll present 40 examples of constructive peer review feedback. Keep reading!

40 Constructive Peer Review Feedback Examples

Receiving peer review feedback, whether positive or negative, presents a valuable chance for personal and professional development. Let’s explore some examples your team can employ to provide constructive feedback, even in situations where criticism is necessary, with a focus on maintaining a supportive and growth-oriented atmosphere.

Constructive Peer Review Feedback on Work Quality

  • “I appreciate John’s meticulous attention to detail, which enhances our projects. However, I noticed a few minor typos in his recent report. To maintain an impeccable standard, I’d suggest dedicating more effort to proofreading.”
  • “Sarah’s research is comprehensive, and her insights are invaluable. Nevertheless, for the sake of clarity and brevity, I recommend distilling her conclusions to their most essential points.”
  • “Michael’s coding skills are robust, but for the sake of team collaboration, I’d suggest that he provides more detailed comments within the code to enhance readability and consistency.”
  • “Emma’s creative design concepts are inspiring, yet consistency in her chosen color schemes across projects could further bolster brand recognition.”
  • “David’s analytical skills are thorough and robust, but it might be beneficial to present data in a more reader-friendly format to enhance overall comprehension.”
  • “I’ve observed Megan’s solid technical skills, which are highly proficient. To further her growth, I recommend taking on more challenging projects to expand her expertise.”
  • “Robert’s industry knowledge is extensive and impressive. To become a more well-rounded professional, I’d suggest he focuses on honing his client relationship and communication skills.”
  • “Alice’s project management abilities are impressive, and she’s demonstrated an aptitude for handling complexity. I’d recommend she refines her risk assessment skills to excel further in mitigating potential issues.”
  • “Daniel’s presentation skills are excellent, and his reports are consistently informative. Nevertheless, there is room for improvement in terms of interpreting data and distilling it into actionable insights.”
  • “Laura’s sales techniques are effective, and she consistently meets her targets. I encourage her to invest time in honing her negotiation skills for even greater success in securing deals and partnerships.”

Peer Review Examples on Leadership Skills

  • “I’ve noticed James’s commendable decision-making skills. However, to foster a more inclusive and collaborative environment, I’d suggest he be more open to input from team members during the decision-making process.”
  • “Sophia’s delegation is efficient, and her team trusts her leadership. To further inspire the team, I’d suggest she share credit more generously and acknowledge the collective effort.”
  • “Nathan’s vision and strategic thinking are clear and commendable. Enhancing his conflict resolution skills is suggested to promote a harmonious work environment and maintain team focus.”
  • “Olivia’s accountability is much appreciated. I’d encourage her to strengthen her mentoring approach to develop the team’s potential even further and secure a strong professional legacy.”
  • “Ethan’s adaptability is an asset that brings agility to the team. Cultivating a more motivational leadership style is recommended to uplift team morale and foster a dynamic work environment.”

Peer Review Examples on Teamwork and Collaboration

  • “Ava’s collaboration is essential to the team’s success. She should consider engaging more actively in group discussions to contribute her valuable insights.”
  • “Liam’s teamwork is exemplary, but he could motivate peers further by sharing credit more openly and recognizing their contributions.”
  • “Chloe’s flexibility in teamwork is invaluable. To become an even more effective team player, she might invest in honing her active listening skills.”
  • “William’s contributions to group projects are consistently valuable. To maximize his impact, I suggest participating in inter-departmental collaborations and fostering cross-functional teamwork.”
  • “Zoe’s conflict resolution abilities create a harmonious work environment. Expanding her ability to mediate conflicts and find mutually beneficial solutions is advised to enhance team cohesion.”

Constructive Feedback on Professionalism and Work Ethics

  • “Noah’s punctuality is an asset to the team. To maintain professionalism consistently, he should adhere to deadlines with unwavering dedication, setting a model example for peers.”
  • “Grace’s integrity and ethical standards are admirable. To enhance professionalism further, I’d recommend that she maintain a higher level of discretion in discussing sensitive matters.”
  • “Logan’s work ethics are strong, and his commitment is evident. Striving for better communication with colleagues regarding project updates is suggested, ensuring everyone remains well-informed.”
  • “Sophie’s reliability is appreciated. Maintaining a high level of attention to confidentiality when handling sensitive information would enhance her professionalism.”
  • “Jackson’s organizational skills are top-notch. Upholding professionalism by maintaining a tidy and organized workspace is recommended.”

Peer Review Feedback Examples on Mentoring and Support

  • “Aiden provides invaluable mentoring to junior team members. He should consider investing even more time in offering guidance and support to help them navigate their professional journeys effectively.”
  • “Harper’s commendable support to peers is noteworthy. She should develop coaching skills to maximize their growth, ensuring their development matches their potential.”
  • “Samuel’s patience in teaching is a valuable asset. He should tailor support to individual learning styles to enhance their understanding and retention of key concepts.”
  • “Ella’s mentorship plays a pivotal role in the growth of colleagues. She should expand her role in offering guidance for long-term career development, helping them set and achieve their professional goals.”
  • “Benjamin’s exceptional helpfulness fosters a more supportive atmosphere where everyone can thrive. He should encourage team members to seek assistance when needed.”

Constructive Feedback on Communication Skills

  • “Mia’s communication skills are clear and effective. To cater to different audience types, she should use more varied communication channels to convey her message more comprehensively.”
  • “Lucas’s ability to articulate ideas is commendable, and his verbal communication is strong. He should polish non-verbal communication to ensure that his body language aligns with his spoken message.”
  • “Evelyn’s appreciated active listening skills create strong relationships with colleagues. She should foster stronger negotiation skills for client interactions, ensuring both parties are satisfied with the outcomes.”
  • “Jack’s presentation skills are excellent. He should elevate written communication to match the quality of verbal presentations, offering more comprehensive and well-structured documentation.”
  • “Avery’s clarity in explaining complex concepts is valued by colleagues. She should develop persuasive communication skills to enhance her ability to secure project proposals and buy-in from stakeholders.”

Feedback on Time Management and Productivity

  • “Isabella’s efficient time management skills contribute to the team’s success. She should explore time-tracking tools to further optimize her workflow and maximize her efficiency.”
  • “Henry’s remarkable productivity sets a high standard. He should maintain a balanced approach to tasks to prevent burnout and ensure sustainable long-term performance.”
  • “Luna’s impressive task prioritization and strategic time allocation should be fine-tuned with goal-setting techniques to ensure consistent productivity aligned with objectives.”
  • “Leo’s great deadline adherence is commendable. He should incorporate short breaks into the schedule to enhance productivity and focus, allowing for the consistent meeting of high standards.”
  • “Mila’s multitasking abilities are a valuable skill. She should strive to implement regular time-blocking sessions into the daily routine to further enhance time management capabilities.”

Do’s and Don’ts of Peer Review Feedback

Peer review feedback can be extremely helpful for intellectual growth and professional development. Engaging in this process with thoughtfulness and precision can have a profound impact on both the reviewer and the individual seeking feedback.

However, there are certain do’s and don’ts that must be observed to ensure that the feedback is not only constructive but also conducive to a positive and productive learning environment.

The Do’s of Peer Review Feedback:

Empathize and Relate: Put yourself in the shoes of the person receiving the feedback. Recognize the effort and intention behind their work, and frame your comments with sensitivity.

Ground Feedback in Data: Base your feedback on concrete evidence and specific examples from the work being reviewed. This not only adds credibility to your comments but also helps the recipient understand precisely where improvements are needed.

Write Clearly and Concisely: Express your thoughts in a clear and straightforward manner. Avoid jargon or ambiguous language that may lead to misinterpretation.

Offer Constructive Criticism: Focus on providing feedback that can guide improvement. Instead of simply pointing out flaws, suggest potential solutions or alternatives.

Highlight Strengths: Acknowledge and commend the strengths in the work. Recognizing what’s done well can motivate the individual to build on their existing skills.

The Don’ts of Peer Review Feedback:

Avoid Ambiguity: Vague or overly general comments such as “It’s not good” do not provide actionable guidance. Be specific in your observations.

Refrain from Personal Attacks: Avoid making the feedback personal or overly critical. Concentrate on the work and its improvement, not on the individual.

Steer Clear of Subjective Opinions: Base your feedback on objective criteria and avoid opinions that may not be universally applicable.

Resist Overloading with Suggestions: While offering suggestions for improvement is important, overwhelming the recipient with a laundry list of changes can be counterproductive.

Don’t Skip Follow-Up: Once you’ve provided feedback, don’t leave the process incomplete. Follow up and engage in a constructive dialogue to ensure that the feedback is understood and applied effectively.

Remember that the art of giving peer review feedback is a valuable skill, and when done right, it can foster professional growth, strengthen collaboration, and inspire continuous improvement. This is where performance management software like Peoplebox comes into play.

Start Collecting Peer Review Feedback On Peoplebox 

In a world where the continuous improvement of your workforce is paramount, harnessing the potential of peer review feedback is a game-changer. Peoplebox offers a suite of powerful features that revolutionize performance management, simplifying the alignment of people with business goals and driving success. Want to experience it first hand? Take a quick tour of our product.

Through Peoplebox, you can effortlessly establish peer reviews, customizing key aspects such as:

  • Allowing the reviewee to select their peers
  • Seeking managerial approval for chosen peers to mitigate bias
  • Determining the number of peers eligible for review, and more.

And the best part? Peoplebox lets you do all this from right within Slack.

Peer Review Feedback Template That You Can Use Right Away

Still on the fence about using software for performance reviews? Here’s a quick ready-to-use peer review template you can use to kickstart the peer review process.

Download the Free Peer Review Feedback Form here.

If you ever reconsider and are looking for a more streamlined approach to handle 360 feedback, give Peoplebox a shot!

Frequently Asked Questions

Why is peer review feedback important?

Peer review feedback provides a well-rounded view of employee performance, fosters skill enhancement, encourages accountability, strengthens team cohesion, ensures fair assessment, and identifies blind spots early on.

How does peer review feedback benefit employees?

Peer review feedback offers employees valuable insights for growth, helps them identify areas for improvement, provides recognition for their efforts, and fosters a culture of collaboration and continuous learning.

What are some best practices for giving constructive peer feedback?

Best practices include grounding feedback in specific examples, offering both praise and areas for improvement, focusing on actionable suggestions, maintaining professionalism, and ensuring feedback is clear and respectful.

What role does HR software like Peoplebox play in peer review feedback?

HR software like Peoplebox streamlines the peer review process by allowing customizable feedback, integration with collaboration tools like Slack, easy selection of reviewers, and providing templates and tools for effective feedback.

How can HR professionals promote a culture of feedback and openness in their organization?

HR professionals can promote a feedback culture by leading by example, providing training on giving and receiving feedback, recognizing and rewarding constructive feedback, creating safe spaces for communication, and fostering a culture of continuous improvement.

What is peer review?

A peer review is a collaborative evaluation process where colleagues assess each other’s work. It’s a cornerstone of professional development, enhancing accountability and shared learning. By providing constructive feedback, peers contribute to overall team improvement. Referencing peer review examples can guide effective implementation within your organization.

What should I write in a peer review?

In a peer review, you should focus on providing constructive, balanced feedback. Highlight strengths such as effective communication or leadership, and offer specific suggestions for improvement. The goal is to help peers grow professionally by addressing areas like skill development or performance gaps. Use clear and supportive language to ensure your feedback is actionable. By incorporating peer review examples, you can provide valuable insights to enhance performance.

What are some examples of peer review phrases?

Statements like “Your ability to articulate complex ideas is impressive” or “I recommend focusing on time management to improve project delivery” are examples of peer review phrases. These phrases help peers identify specific strengths and areas for growth. Customizing feedback to fit the context ensures it’s relevant and actionable. Exploring different peer review examples can inspire you to craft impactful feedback that drives growth.

Why is it called peer review?

It’s called peer review because the evaluation is conducted by colleagues or peers who share similar expertise or roles. This ensures that the feedback is relevant and credible, as it comes from individuals who understand the challenges and standards of the work being assessed. Analyzing peer review examples can reveal best practices for implementing this process effectively.

What are the types of peer reviews?

Peer reviews can be formal or informal. Formal reviews are typically structured, documented, and tied to performance evaluation. Informal reviews offer more frequent, real-time feedback. Both types are valuable for development. Exploring peer review examples can help you determine the best approach for your team or organization.

The genesis of this paper is the proposal that genomes containing a low percentage of guanine and cytosine (GC) nucleotide pairs lead to proteomes more prone to aggregation than those encoded by GC-rich genomes. As a consequence, these organisms would also be more dependent on the protein-folding machinery. If true, this interesting hypothesis could establish a direct link between the tendency to aggregate and the genomic code.

In their paper, the authors tested this hypothesis on the genomes of eubacteria using a genome-wide approach based on multiple machine learning models. Eubacteria are an interesting set of organisms whose nucleotide composition varies appreciably, with GC content ranging from 20% to 70%. The authors classified different eubacterial proteomes in terms of their aggregation propensity and chaperone dependence. For this purpose, new classifiers had to be developed, based on carefully curated data. They took into account twenty-four different features, among which are sequence patterns, the pseudo amino acid composition of phenylalanine, aspartic acid, and glutamic acid, the distribution of positively charged amino acids, the FoldIndex score, and hydrophobicity. These classifiers seem altogether more accurate and robust than previous such parameters.
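
As a minimal illustration of the quantities at stake, the sketch below computes GC content from a DNA sequence and a couple of sequence-derived features of the kind the review mentions (fractions of phenylalanine/aspartate/glutamate and of positively charged residues). The helper names and exact feature definitions are mine, for illustration only; they are not the authors' classifiers.

```python
def gc_content(seq: str) -> float:
    """Fraction of G and C bases in a DNA sequence (ignores other symbols)."""
    seq = seq.upper()
    gc = sum(seq.count(b) for b in "GC")
    acgt = sum(seq.count(b) for b in "ACGT")
    return gc / acgt if acgt else 0.0

def charge_features(protein: str) -> dict:
    """Illustrative per-protein features of the kind such classifiers use."""
    protein = protein.upper()
    n = len(protein) or 1
    return {
        # Phe/Asp/Glu fraction (residues singled out in the review)
        "frac_FDE": sum(protein.count(a) for a in "FDE") / n,
        # Positively charged residues (Lys/Arg)
        "frac_positive": sum(protein.count(a) for a in "KR") / n,
    }

print(gc_content("ATGCGC"))  # 4 of 6 bases are G/C
```

In a real pipeline, features like these (plus FoldIndex, hydrophobicity, and sequence patterns) would be fed to a trained classifier; the point here is only the kind of input the models consume.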

The authors found that, contrary to what was expected from the working hypothesis, which would predict a decrease in protein aggregation with increasing GC richness, the aggregation propensity of proteomes increases with GC content; thus the stability of the proteome against aggregation increases as GC content decreases. The work also established a direct correlation between GC-poor proteomes and a lower dependence on GroEL. The authors conclude by proposing that a decrease in eubacterial GC content may have been selected in organisms facing proteostasis problems. One way to test the overall results would be in vitro evolution experiments aimed at determining whether adaptation to low GC content provides a folding advantage.

The main strengths of this paper are that it addresses an interesting and timely question, finds a novel solution based on a carefully selected set of rules, and provides a clear answer. As such, this article represents an excellent and elegant genome-wide bioinformatics study that will almost certainly influence our thinking about protein aggregation and evolution. One weakness is the readability of the text, which at times establishes unclear logical links between concepts.

Another possible criticism could be that, as with any in silico study, it makes strong assumptions about the sequence features that lead to aggregation and relies heavily on the quality of the classifiers used. Even though the classifiers developed here seem more robust than previous such parameters, they remain overall indications that can only support statistical considerations. It could of course be argued that this is good enough to reach meaningful conclusions in this specific case.

The paper by Chevalier et al. analyzed whether the late sodium current (I NaL) can be assessed using an automated patch-clamp device. To this end, the effects on I NaL of ranolazine (a well-known I NaL inhibitor) and veratridine (an I NaL activator) were described. The authors tested the CytoPatch automated patch-clamp equipment and performed whole-cell recordings in HEK293 cells stably transfected with human Nav1.5. Furthermore, they also tested the electrophysiological properties of human induced pluripotent stem cell-derived cardiomyocytes (hiPS) provided by Cellular Dynamics International. The title and abstract are appropriate for the content of the text. Furthermore, the article is well constructed, the experiments were well conducted, and the analysis was well performed.

I NaL is a small current component generated by a fraction of Nav1.5 channels that, instead of entering the inactivated state, rapidly reopen in a burst mode. I NaL critically determines action potential duration (APD), in such a way that both acquired (myocardial ischemia and heart failure, among others) and inherited (long QT type 3) diseases that augment the I NaL magnitude also increase the susceptibility to cardiac arrhythmias. Therefore, I NaL has been recognized as an important target for the development of drugs with either anti-ischemic or antiarrhythmic effects. Unfortunately, accurate measurement of I NaL is time-consuming and technically challenging because of its extremely small density. The automated patch-clamp device tested by Chevalier et al. resolves this problem and allows fast and reliable I NaL measurements.

The results presented here merit some comments and raise some unresolved questions. First, in some experiments (such as experiments B and D in Figure 2), the current recordings obtained before ranolazine perfusion seem quite unstable. Indeed, the amplitude progressively increased to a maximum value that was considered the control value (highlighted with arrows). Can this problem be overcome? Is it a consequence of slow intracellular dialysis? Is it a consequence of a time-dependent shift in the voltage dependence of activation/inactivation? Second, as shown in Figure 2, the intensity of the drug effects seems quite variable. In fact, experiments A, B, C, and D in Figure 2 and panel 2D demonstrate that veratridine augmentation ranged from 0 to 400%. Even allowing for normal biological variability, we wonder whether this broad range of effect intensities can be justified by changes in the perfusion system. Has the automated dispensing system been tested? If not, we suggest testing the effects of several K + concentrations on the inward rectifier currents generated by Kir2.1 channels (I Kir2.1).

The authors demonstrated that the recording quality was so high that the automated device allows differentiation between noise and current, even when measuring currents of less than 5 pA in amplitude. In order to make more precise mechanistic assumptions, the authors performed an elegant estimation of current variance (σ²) and macroscopic current (I) following the procedure described more than 30 years ago by Van Driessche and Lindemann 1. By means of this method, Chevalier et al. concluded that ranolazine acts by reducing the open channel probability, while veratridine increases the number of channels in the burst mode. We respectfully would like to stress that these considerations must be put in context from a pharmacological point of view. We do not doubt that ranolazine acts as an open channel blocker; what seems clear, however, is that its onset block kinetics must be “ultra” slow, otherwise ranolazine would decrease peak I NaL even at low stimulation frequencies. This comment points to the fact that a precise mechanistic study of drugs that modify ionic currents requires analyzing their effects with much more complicated pulse protocols. The questions thus are: does this automated equipment allow analysis of the frequency-, time-, and voltage-dependent effects of drugs? Can versatile and complicated pulse protocols be applied? Does it allow good voltage control even when the generated currents are big and fast? If not, then even with its extraordinary discrimination between current and noise, this automated patch-clamp equipment will only be helpful for rapid screening of I NaL-modifying drugs. Obviously it will also be perfect for testing HERG-blocking drug effects, as demanded by the regulatory authorities.
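
For readers unfamiliar with the variance analysis referred to above, here is a generic sketch of non-stationary noise analysis: for N identical channels with unitary current i, the ensemble variance obeys σ² = iI − I²/N, so fitting a parabola to σ² versus the mean current I recovers i and N. All parameter values below are made up for illustration; this is not the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate an ensemble of sweeps from N identical channels with unitary
# current i_unit; the open probability varies along the sweep.
n_chan, i_unit, n_sweeps = 100, 1.0, 1000
p_open = np.linspace(0.05, 0.6, 50)                  # time course of P(open)
counts = rng.binomial(n_chan, p_open, size=(n_sweeps, p_open.size))
current = i_unit * counts                            # pA, sweeps x time points

mean_I = current.mean(axis=0)
var_I = current.var(axis=0)

# sigma^2 = i*I - I^2/N is a parabola in I: fit it and read off i and N.
a2, a1, _ = np.polyfit(mean_I, var_I, 2)
i_est, n_est = a1, -1.0 / a2

print(f"estimated unitary current: {i_est:.2f} pA (true {i_unit})")
print(f"estimated channel count:   {n_est:.0f} (true {n_chan})")
```

The same fit applied to real ensemble recordings is what lets one distinguish a drug that lowers open probability from one that changes the number of active channels.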

Finally, as cardiac electrophysiologists, we would like to stress that our dream of testing drug effects on human ventricular myocytes seems to be coming true. Indeed, human atrial myocytes are technically, ethically, and logistically difficult to obtain, and human ventricular myocytes are almost impossible to obtain except from hearts explanted from patients at the end stage of cardiac disease. Here the authors demonstrated that ventricular myocytes derived from hiPS generate beautiful action potentials that can be recorded with this automated equipment. The traces shown suggest that there was no alternation in action potential duration. Is this a consistent finding? How long do these stable recordings last? Our only comment is that the resting membrane potential seems somewhat variable. Can this be resolved? Is it an unexpected veratridine effect? Standardization of maturation methods for ventricular myocytes derived from hiPS will be a major achievement for cardiac cellular electrophysiology, which for years was forced to rely on imprecise extrapolation from a combination of several species, none of which was representative of human electrophysiology. The big deal will be the maturation of human atrial myocytes derived from hiPS that fulfil the known characteristics of human atrial cells.

We suggest removing the initial sentence of section 3: we surmise that the results obtained from the experiments described in this section cannot serve to elucidate the role of I NaL in arrhythmogenesis.

1. Van Driessche W, Lindemann B: Concentration dependence of currents through single sodium-selective pores in frog skin. Nature. 1979; 282(5738): 519–520.

The authors have clarified several of the questions I raised in my previous review. Unfortunately, most of the major problems have not been addressed by this revision. As I stated in my previous review, I deem it unlikely that all those issues can be solved merely by a few added paragraphs. Instead there are still some fundamental concerns with the experimental design and, most critically, with the analysis. This means the strong conclusions put forward by this manuscript are not warranted and I cannot approve the manuscript in this form.

  • The greatest concern is that, when I followed the description of the methods in the previous version, it was possible to decode, with almost perfect accuracy, any arbitrary stimulus labels I chose. See https://doi.org/10.6084/m9.figshare.1167456 for examples of this reanalysis. Regardless of whether we pretend that the actual stimulus appeared at a later time or was continuously alternating between signal and silence, the decoding is always close to perfect. This is an indication that the decoding has nothing to do with the actual stimulus heard by the Sender but is opportunistically exploiting some other features in the data. The control analysis the authors performed, reversing the stimulus labels, cannot address this problem because it suffers from the same flaw. Essentially, what the classifier is presumably using is the time that has passed since the recording started.
  • The reason for this is presumably that the authors used non-independent data for training and testing. Assuming I understand correctly (see point 3), randomly sampling one half of the data samples from an EEG trace does not yield independent data. Repeating the analysis five times – the control analysis the authors performed – is not an adequate way to address this concern. Randomly selecting samples from a time series containing slow changes (such as the slow-wave activity that presumably dominates these recordings under these circumstances) will inevitably yield strong temporal correlations between training and test sets. See TemporalCorrelations.jpg in https://doi.org/10.6084/m9.figshare.1185723 for 2D density histograms and a correlation matrix demonstrating this.
  • While the revised methods section provides more detail now, it is still unclear about exactly what data were used. Conventional classification analyses report which data features (usually the columns of the data matrix) and which observations (usually the rows) were used. Anything could be a feature, but typically this might be the different EEG channels or fMRI voxels etc. Observations are usually time points. Here I assume the authors transformed the raw samples into a different space using principal component analysis. It is not stated whether the dimensionality was reduced using the eigenvalues. Either way, I assume the data samples (collected at 128 Hz) were then used as observations and the EEG channels transformed by PCA were used as features. The stimulus labels were assigned as ON or OFF for each sample. A set of 50% of samples (and labels) was then selected at random for training, and the rest was used for testing. Is this correct?
  • A powerful non-linear classifier can capitalise on such correlations to discriminate arbitrary labels. In my own analyses I used both an SVM with an RBF kernel and a k-nearest neighbour classifier, both of which produce excellent decoding of arbitrary stimulus labels (see point 1). Interestingly, linear classifiers or less powerful SVM kernels fare much worse – a clear indication that the classifier learns about the complex non-linear pattern of temporal correlations that can describe the stimulus label. This is further corroborated by the fact that when using stimulus labels chosen completely at random (i.e. with high temporal frequency), decoding does not work.
  • The authors have mostly clarified how the correlation analysis was performed. It is still left unclear, however, how the correlations for individual pairs were averaged. Was Fisher’s z-transformation used, or were the data pooled across pairs? More importantly, it is not entirely surprising that under the experimental conditions there will be some correlation between the EEG signals for different participants, especially in low frequency bands. Again, this further supports the suspicion that the classification utilizes slow frequency signals that are unrelated to the stimulus and the experimental hypothesis. In fact, a quick spot check seems to confirm this suspicion: correlating the time series separately for each channel from the Receiver in pair 1 with those from the Receiver in pair 18 reveals 131 significant (p<0.05, Bonferroni corrected) out of 196 (14×14 channels) correlations… One could perhaps argue that this is not surprising because both these pairs had been exposed to identical stimulus protocols: one minute of initial silence and only one signal period (see point 6). However, it certainly argues strongly against the notion that the decoding is in any way related to the mental connection between the particular Sender and Receiver in a given pair, because it clearly works between Receivers in different pairs! To further control for this possibility, I repeated the same analysis but now compared the Receiver from pair 1 to the Receiver from pair 15. This pair was exposed to a different stimulus paradigm (2 minutes of initial silence and a longer paradigm with three signal periods). I only used the initial 3 minutes for the correlation analysis. Therefore, both recordings would have been exposed to only one signal period, but at different times (at 1 min and 2 min for pairs 1 and 15, respectively). Even though the stimulus protocol was completely different, the time courses for all the channels are highly correlated and 137 out of 196 correlations are significant. Considering that I used the raw data for this analysis, it should not surprise anyone that extracting power from different frequency bands in short time windows will also reveal significant correlations. Crucially, this demonstrates that the correlations between Sender and Receiver are artifactual and trivial.
  • The authors argue in their response and the revision that predictive strategies were unlikely. After having performed these additional analyses I am inclined to agree. The excellent decoding almost certainly has nothing to do with expectation or imagery effects and it is irrelevant whether participants could guess the temporal design of the experiment. Rather, the results are almost entirely an artefact of the analysis. However, this does not mean that predictability is not an issue. The figure StimulusTimecourses.jpg in https://doi.org/10.6084/m9.figshare.1185723 plots the stimulus time courses for all 20 pairs as can be extracted from the newly uploaded data. This confirms what I wrote in my previous review, in fact, with the corrected data sets the problem with predictability is even greater. Out of the 20 pairs, 13 started with 1 min of initial silence. The remaining 7 had 2 minutes of initial silence. Most of the stimulus paradigms are therefore perfectly aligned and thus highly correlated. This also proves incorrect the statement that initial silence periods were 1, 2, or 3 minutes. No pair had 3 min of initial silence. It would therefore have been very easy for any given Receiver to correctly guess the protocol. It should be clear that this is far from optimal for testing such an unorthodox hypothesis. Any future experiments should employ more randomization to decrease predictability. Even if this wasn’t the underlying cause of the present results, this is simply not great experimental design.
  • The authors now acknowledge in their response that all the participants were authors. They say that this is also acknowledged in the methods section, but I did not see any statement to that effect in the revised manuscript. As before, I also find it highly questionable to include only authors in an experiment of this kind. It is not sufficient to claim that Receivers weren’t guessing their stimulus protocol. While I am giving the authors (and thus the participants) the benefit of the doubt that they genuinely believe they weren’t guessing/predicting the stimulus protocols, this does not rule out that they did. It may in fact be possible to make such predictions subconsciously (now, if you ask me, that is an interesting scientific question someone should run an experiment on!), and being familiar with the protocol may well have helped with that. Any future experiments should take steps to prevent this.
  • I do not follow the explanation for the binomial test the authors used. Based on the excessive Bayes Factor of 390,625, it is clear that the authors assumed a chance level of 50% in their binomial test. Because the design is not balanced, this is not correct.
  • In general, the Bayes Factor and the extremely high decoding accuracy should have given the authors reason to pause. Considering the unusual hypothesis, did the authors not at any point wonder whether these results aren’t just far too good to be true? Decoding mental states from brain activity is typically extremely noisy and hardly affords accuracies at the level seen here. Extremely accurate decoding and Bayes Factors in the hundreds of thousands should be a tell-tale sign to check that there isn’t an analytical flaw that makes the result entirely trivial. I believe this is what happened here, and thus I think this experiment serves as a very good demonstration of the pitfalls of applying such analyses without sanity checks. In order to make claims like this, the experimental design must contain control conditions that can rule out these problems. Presumably, recordings without any Sender, and maybe even ones where the “Receiver” is aware of this fact, should produce very similar results.
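The reviewer's core point – that randomly splitting samples of a slowly varying time series into training and test sets lets a classifier "decode" completely arbitrary labels – can be demonstrated with a short simulation. This is an illustrative sketch only, not the reviewer's actual reanalysis: the surrogate "EEG" (14 channels of random-walk noise), the block length, and the 1-nearest-neighbour classifier are all invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_channels = 2000, 14

# Surrogate "EEG": a slow random walk per channel, i.e. strong temporal correlations
X = np.cumsum(rng.normal(size=(n_samples, n_channels)), axis=0)

# Arbitrary block labels (ON/OFF alternating every 200 samples), unrelated to any stimulus
y = (np.arange(n_samples) // 200) % 2

# Random 50/50 split of individual samples: train and test are NOT independent,
# because temporal neighbours of every test sample end up in the training set
idx = rng.permutation(n_samples)
train, test = idx[: n_samples // 2], idx[n_samples // 2 :]

# Even a simple 1-nearest-neighbour classifier exploits the autocorrelation:
# the nearest training sample is almost always a temporal neighbour with the same label
def predict(x_query):
    dists = np.linalg.norm(X[train] - x_query, axis=1)
    return y[train][np.argmin(dists)]

accuracy = np.mean([predict(X[i]) == y[i] for i in test])
print(f"decoding accuracy for arbitrary labels: {accuracy:.2f}")
```

The decoding accuracy comes out close to perfect even though the labels carry no signal at all, mirroring the reviewer's finding that any arbitrary stimulus labels could be decoded.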

Based on all these factors, it is impossible for me to approve this manuscript. I should, however, state that it is laudable that the authors chose to make all the raw data of their experiment publicly available. Without this it would have been impossible for me to carry out the additional analyses, and thus the most fundamental problem in the analysis would have remained unknown. I respect the authors’ patience and professionalism in dealing with what I can only assume is a rather harsh review experience. I am honoured by the request for an adversarial collaboration. I do not rule out such efforts at some point in the future. However, for all of the reasons outlined in this and my previous review, I do not think the time is right for this experiment to proceed to that stage. Fundamental analytical flaws and weaknesses in the design should be ruled out first. An adversarial collaboration only really makes sense to me for paradigms where we can be confident that mundane or trivial factors have been excluded.

This manuscript does an excellent job demonstrating significant strain differences in Buridan’s paradigm. Since each Drosophila lab has its own wild type (usually Canton-S) isolate, this issue of strain differences is actually a very important one for between-lab reproducibility. This work is a good reminder for all geneticists to pay attention to population effects in the background controls and, presumably, in the mutant lines we are comparing.

I was very pleased to see that the within-isolate behavior was consistent in replicate experiments one year apart. The authors further argue that the between-isolate differences in behavior arise from a founder effect, at least for the differences in locomotor behavior between the Paris lines CS_TP and CS_JC. I believe this is a very reasonable and testable hypothesis. It predicts that genetic variability for these traits exists within the populations. It should now be possible to perform selection experiments from the original CS_TP population to replicate the founding event and estimate the heritability of these traits.

Two other things that I liked about this manuscript are the ability to adjust parameters in figure 3 and our ability to download the raw data. After reading the manuscript, I was a little disappointed that the performance of the five strains on each of the 12 behavioral variables wasn’t broken down individually in a table or figure. I thought this might help us readers understand what the principal components were representing. The authors have, at least, made these data readily accessible in a downloadable spreadsheet.

This is an exceptionally good review and balanced assessment of the status of CETP inhibitors and ASCVD from a world authority in the field. The article highlights important data that might have been overlooked when promulgating the clinical value of CETPIs and related trials.

Only 2 areas need revision:

  • Page 3, para 2: the message that these data from Papp et al. convey is critical and needs an explicit sentence or two at the end of the paragraph.
  • Page 4, Conclusion: the assertion concerning the ethics of the two Phase 3 clinical trials needs toning down. Perhaps rephrase to indicate that the value and sense of doing these trials is open to question, with attendant ethical implications, or softer wording to that effect.

The Wiley et al. manuscript describes a beautiful synthesis of contemporary genetic approaches to, with astonishing efficiency, identify lead compounds for therapeutic approaches to a serious human disease. I believe the importance of this paper stems from the applicability of the approach to the several thousand rare human disease genes that next-generation sequencing will uncover in the next few years, and the challenge we will have in figuring out the functions of these genes and their resulting defects. This work presents a paradigm that can be broadly and usefully applied.

In detail, the authors begin with the gene responsible for X-linked spinal muscular atrophy and express both the wild-type version of that human gene and a mutant form of it in S. pombe. The conceptual leap here is that progress in genetics is driven by phenotype, and a yeast with no spine or muscles to atrophy is nevertheless an N-dimensional detector of phenotype.

The study is not without a small measure of luck, in that expression of the wild-type UBA1 gene caused a slow-growth phenotype which the mutant did not. Hence there was something in S. pombe that could feel the impact of this protein. Given this phenotype, the authors then went to work and, using the power of the synthetic genetic array approach pioneered by Boone and colleagues, made a systematic set of double mutants combining the expressed human UBA1 gene with knockout alleles of a plurality of S. pombe genes. They found well over a hundred mutations that either enhanced or suppressed the growth defect of the cells expressing UBA1. Most of these have human orthologs. My hunch is that many human genes expressed in yeast will have some comparably exploitable phenotype; time will tell.

Building on the interaction networks of S. pombe genes already established, augmenting these networks by the protein interaction networks from yeast and from human proteome studies involving these genes, and from the structure of the emerging networks, the authors deduced that an E3 ligase modulated UBA1 and made the leap that it therefore might also impact X-linked Spinal Muscular Atrophy.

Here, the awesome power of the model organism community comes into the picture, as there is a zebrafish model of spinal muscular atrophy. The principle of phenologs articulated by the Marcotte group inspires the recognition of the transitive logic of how phenotypes in one organism relate to phenotypes in another. With this zebrafish model, they were able to confirm that an inhibitor of E3 ligases and of the Nedd8-E1 activating enzyme suppressed the motor axon anomalies, as predicted by the effect of mutations in S. pombe on the phenotypes of UBA1 overexpression.

I believe this is an important paper to teach in intro graduate courses as it illustrates beautifully how important it is to know about and embrace the many new sources of systematic genetic information and apply them broadly.

This paper by Amrhein et al. criticizes a paper by Bradley Efron that discusses Bayesian statistics ( Efron, 2013a ), focusing on a particular example that was also discussed in Efron (2013b) . The example concerns a woman who is carrying twins, both male (as determined by sonogram; we ignore the possibility that the gender has been observed incorrectly). The parents-to-be ask Efron to tell them the probability that the twins are identical.

This is my first open review, so I’m not sure of the protocol. But given that there appear to be errors in both Efron (2013b) and the paper under review, I am sorry to say that my review might actually be longer than the article by Efron (2013a) , the primary focus of the critique, and the critique itself. I apologize in advance for this. To start, I will outline the problem being discussed for the sake of readers.

This problem has various parameters of interest. The primary parameter is the genetic composition of the twins in the mother’s womb. Are they identical (which I describe as the state x = 1) or fraternal twins ( x = 0)? Let y be the data, with y = 1 to indicate the twins are the same gender. Finally, we wish to obtain Pr( x = 1 | y = 1), the probability the twins are identical given they are the same gender 1 . Bayes’ rule gives us an expression for this:

Pr( x = 1 | y = 1) = Pr( x =1) Pr( y = 1 | x = 1) / {Pr( x =1) Pr( y = 1 | x = 1) + Pr( x =0) Pr( y = 1 | x = 0)}

Now we know that Pr( y = 1 | x = 1) = 1; twins must be the same gender if they are identical. Further, Pr( y = 1 | x = 0) = 1/2; if twins are not identical, the probability of them being the same gender is 1/2.

Finally, Pr( x = 1) is the prior probability that the twins are identical. The bone of contention in the Efron papers and the critique by Amrhein et al. revolves around how this prior is treated. One can think of Pr( x = 1) as the population-level proportion of twins that are identical for a mother like the one being considered.

However, if we ignore other forms of twins that are extremely rare (equivalent to ignoring coins finishing on their edges when flipping them), one incontrovertible fact is that Pr( x = 0) = 1 − Pr( x = 1); the probability that the twins are fraternal is the complement of the probability that they are identical.

The above values and expressions for Pr( y = 1 | x = 1), Pr( y = 1 | x = 0), and Pr( x = 0) lead to a simpler expression for the probability that we seek – the probability that the twins are identical given they have the same gender:

Pr( x = 1 | y = 1) = 2 Pr( x =1) / [1 + Pr( x =1)] (1)

We see that the answer depends on the prior probability that the twins are identical, Pr( x =1). The paper by Amrhein et al. points out that this is a mathematical fact. For example, if identical twins were impossible (Pr( x = 1) = 0), then Pr( x = 1| y = 1) = 0. Similarly, if all twins were identical (Pr( x = 1) = 1), then Pr( x = 1| y = 1) = 1. The “true” prior lies somewhere in between. Apparently, the doctor knows that one third of twins are identical 2 . Therefore, if we assume Pr( x = 1) = 1/3, then Pr( x = 1| y = 1) = 1/2.

Now, what would happen if we didn't have the doctor's knowledge? Laplace's “Principle of Insufficient Reason” would suggest that we give equal prior probability to all possibilities, so Pr( x = 1) = 1/2 and Pr( x = 1| y = 1) = 2/3, an answer different from 1/2 that was obtained when using the doctor's prior of 1/3.
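The sensitivity to the prior is easy to check numerically. A quick illustrative sketch (not part of the original review) evaluating equation (1) for the two priors discussed:

```python
def posterior_identical(prior):
    """Equation (1): Pr(x = 1 | y = 1) = 2p / (1 + p), where p = Pr(x = 1)."""
    return 2 * prior / (1 + prior)

print(posterior_identical(1 / 3))  # doctor's prior of 1/3  -> 0.5
print(posterior_identical(1 / 2))  # Laplace's prior of 1/2 -> 0.666...
```

The doctor's prior of 1/3 gives a posterior of 1/2, while Laplace's flat prior of 1/2 gives 2/3, exactly the two answers contrasted above.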

Efron (2013a) highlights this sensitivity to the prior, portraying someone who defines an uninformative prior as a “violator”, with Laplace as the “prime violator”. In contrast, Amrhein et al. correctly points out that the difference in the posterior probabilities is merely a consequence of mathematical logic. No one is violating logic – they are merely expressing ignorance by specifying equal probabilities to all states of nature. Whether this is philosophically valid is debatable ( Colyvan 2008 ), but that is a separate question, and it is well beyond the scope of this review. But setting Pr( x = 1) = 1/2 is not a violation; it is merely an assumption with consequences (and one that in hindsight might be incorrect 2 ).

Alternatively, if we don't know Pr( x = 1), we could describe that probability by its own probability distribution. Now the problem has two aspects that are uncertain. We don’t know the true state x , and we don’t know the prior (except in the case where we use the doctor’s knowledge that Pr( x = 1) = 1/3). Uncertainty in the state of x refers to uncertainty about this particular set of twins. In contrast, uncertainty in Pr( x = 1) reflects uncertainty in the population-level frequency of identical twins. A key point is that the state of one particular set of twins is a different parameter from the frequency of occurrence of identical twins in the population.

Without knowledge about Pr( x = 1), we might use Pr( x = 1) ~ dunif(0, 1), which is consistent with Laplace. Alternatively, Efron (2013b) notes another alternative for an uninformative prior: Pr( x = 1) ~ dbeta(0.5, 0.5), which is the Jeffreys prior for a probability.

Here I disagree with Amrhein et al. ; I think they are confusing the two uncertain parameters. Amrhein et al. state:

“We argue that this example is not only flawed, but useless in illustrating Bayesian data analysis because it does not rely on any data. Although there is one data point (a couple is due to be parents of twin boys, and the twins are fraternal), Efron does not use it to update prior knowledge. Instead, Efron combines different pieces of expert knowledge from the doctor and genetics using Bayes’ theorem.”

This claim might be correct when describing uncertainty in the population-level frequency of identical twins. The data about the twin boys are not useful by themselves for this purpose – they are a biased sample (the data have come to light because their gender is the same; they are not a random sample of twins). Further, a sample of size one, especially if biased, is not a firm basis for inference about a population parameter. While the data are biased, however, the claim by Amrhein et al. that there are no data is incorrect.

However, the data point (the twins have the same gender) is entirely relevant to the question about the state of this particular set of twins. And it does update the prior. This updating of the prior is given by equation (1) above. The doctor’s prior probability that the twins are identical (1/3) becomes the posterior probability (1/2) when using the information that the twins are the same gender. The prior is clearly updated, with Pr( x = 1| y = 1) ≠ Pr( x = 1) in all but trivial cases; Amrhein et al. ’s statement that I quoted above is incorrect in this regard.

This possible confusion between uncertainty about these twins and uncertainty about the population level frequency of identical twins is further suggested by Amrhein et al. ’s statements:

“Second, for the uninformative prior, Efron mentions erroneously that he used a uniform distribution between zero and one, which is clearly different from the value of 0.5 that was used. Third, we find it at least debatable whether a prior can be called an uninformative prior if it has a fixed value of 0.5 given without any measurement of uncertainty.”

Note, if the prior for Pr( x = 1) is specified as 0.5, or dunif(0,1), or dbeta(0.5, 0.5), the posterior probability that these twins are identical is 2/3 in all cases. Efron (2013b) says the different priors lead to different results, but this result is incorrect, and the correct answer (2/3) is given in Efron (2013a) 3 . Nevertheless, a prior that specifies Pr( x = 1) = 0.5 does indicate uncertainty about whether this particular set of twins is identical (but certainty in the population level frequency of twins). And Efron’s (2013a) result is consistent with Pr( x = 1) having a uniform prior. Therefore, both claims in the quote above are incorrect.

It is probably easiest to show the (lack of) influence of the prior using MCMC sampling. Here is WinBUGS code for the case using Pr( x = 1) = 0.5.
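(The code block itself did not survive reproduction here. The following is a minimal reconstruction of what such a WinBUGS model could look like, using the pr_ident_twins variable name from the surrounding text – an assumption on my part, not the reviewer's original code.)

```
model {
    pr_ident_twins <- 0.5          # prior probability that the twins are identical
    x ~ dbern(pr_ident_twins)      # x = 1: identical; x = 0: fraternal
    p_same <- 0.5 + 0.5 * x        # Pr(same gender) = 1 if identical, 0.5 if fraternal
    y ~ dbern(p_same)              # observed data: y = 1 (twins are the same gender)
}
# data: list(y = 1); monitor the node x
```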

Running this model in WinBUGS shows that the posterior mean of x is 2/3; this is the posterior probability that x = 1.

Instead of using pr_ident_twins <- 0.5, we could set this probability as being uncertain and define pr_ident_twins ~ dunif(0,1), or pr_ident_twins ~ dbeta(0.5,0.5). In either case, the posterior mean value of x remains 2/3 (contrary to Efron 2013b , but in accord with the correction in Efron 2013a ).

Note, however, that the value of the population-level parameter pr_ident_twins is different in all three cases. In the first it remains unchanged at 1/2, where it was set. In the cases where the prior distribution for pr_ident_twins is uniform or beta, the posterior distributions remain broad, but they differ depending on the prior (as they should – different priors lead to different posteriors 4 ). However, given the biased sample of size 1, the posterior distribution for this particular parameter is likely to be misleading as an estimate of the population-level frequency of identical twins.

So why doesn’t the choice of prior influence the posterior probability that these twins are identical? Well, for these three priors, the prior probability that any single set of twins is identical is 1/2 (this is essentially the mean of the prior distributions in these three cases).

If, instead, we set the prior as dbeta(1,2), which has a mean of 1/3, then the posterior probability that these twins are identical is 1/2. This is the same result as if we had set Pr( x = 1) = 1/3. In both these cases (choosing dbeta(1,2) or 1/3), the prior probability that a single set of twins is identical is 1/3, so the posterior is the same (1/2) given the data (the twins have the same gender).
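The point that only the prior's mean matters for these twins can be verified with a simple Monte Carlo simulation. This is a Python sketch standing in for the MCMC analysis described above, not the original WinBUGS run; the function and sample-size choices are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400_000

def posterior_identical_mc(sample_prior):
    """Monte Carlo estimate of Pr(x = 1 | y = 1) when Pr(x = 1) has its own prior."""
    p = sample_prior(n)              # draws of the population frequency of identical twins
    x = rng.random(n) < p            # is this particular set of twins identical?
    same = x | (rng.random(n) < 0.5) # same gender: certain if identical, else 50/50
    return x[same].mean()            # condition on the observed data y = 1

# three priors with mean 1/2 -> posterior ~ 2/3 in every case
for prior in (lambda n: np.full(n, 0.5),
              lambda n: rng.uniform(0, 1, n),
              lambda n: rng.beta(0.5, 0.5, n)):
    print(round(posterior_identical_mc(prior), 3))

# a beta(1, 2) prior has mean 1/3 -> posterior ~ 1/2, as with the doctor's prior
print(round(posterior_identical_mc(lambda n: rng.beta(1, 2, n)), 3))
```

All three priors with mean 1/2 give a posterior near 2/3, and the beta(1, 2) prior with mean 1/3 gives a posterior near 1/2, consistent with the analysis above.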

Further, Amrhein et al. also seem to misunderstand the data. They note:

“Although there is one data point (a couple is due to be parents of twin boys, and the twins are fraternal)...”

This is incorrect. The parents simply know that the twins are both male. Whether they are fraternal is unknown (fraternal twins being the complement of identical twins) – that is the question the parents are asking. This error of interpretation makes the calculations in Box 1 and subsequent comments irrelevant.

Box 1 also implies Amrhein et al. are using the data to estimate the population frequency of identical twins rather than the state of this particular set of twins. This is different from the aim of Efron (2013a) and the stated question.

Efron suggests that Bayesian calculations should be checked with frequentist methods when priors are uncertain. However, this is a good example where this cannot be done easily, and Amrhein et al. are correct to point this out. In this case, we are interested in the probability that the hypothesis is true given the data (an inverse probability), not the probabilities that the observed data would be generated given particular hypotheses (frequentist probabilities). If one wants the inverse probability (the probability the twins are identical given they are the same gender), then Bayesian methods (and therefore a prior) are required. A logical answer simply requires that the prior is constructed logically. Whether that answer is “correct” will be, in most cases, only known in hindsight.

However, one possible way to analyse this example using frequentist methods would be to assess the likelihood of obtaining the data under each of the two hypotheses (the twins are identical or fraternal). The likelihood of the twins having the same gender under the hypothesis that they are identical is 1. The likelihood of the twins having the same gender under the hypothesis that they are fraternal is 0.5. Therefore, the weight of evidence in favour of identical twins is twice that of fraternal twins. Scaling these weights so they sum to one ( Burnham and Anderson 2002 ) gives a weight of 2/3 for identical twins and 1/3 for fraternal twins. These scaled weights have the same numerical values as the posterior probabilities based on either a Laplace or Jeffreys prior. Thus, one might argue that the weight of evidence for each hypothesis when using frequentist methods is equivalent to the posterior probabilities derived from an uninformative prior. So, as a final aside in reference to Efron (2013a) , if we are being “violators” when using a uniform prior, are we also being “violators” when using frequentist methods to weigh evidence? Regardless of the answer to this rhetorical question, “checking” the results with frequentist methods doesn’t give any more insight than using uninformative priors (in this case). However, this analysis shows that the question can be analysed using frequentist methods; the single data point is not a problem for this. The claim in Amrhein et al. that a frequentist analysis “is impossible because there is only one data point, and frequentist methods generally cannot handle such situations” is not supported by this example.
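As a sanity check on the arithmetic, the likelihood-weighting calculation can be written out directly (an illustrative sketch, not part of the original review):

```python
# Likelihoods of the observed data (twins are the same gender) under each hypothesis
lik_identical = 1.0   # identical twins are always the same gender
lik_fraternal = 0.5   # fraternal twins are the same gender half the time

# Scale the likelihoods so the evidence weights sum to one (as in Burnham & Anderson 2002)
total = lik_identical + lik_fraternal
w_identical = lik_identical / total
w_fraternal = lik_fraternal / total
print(w_identical, w_fraternal)  # 2/3 and 1/3
```

These weights reproduce the flat-prior posterior probabilities of 2/3 and 1/3 given above.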

In summary, the comment by Amrhein et al. raises some interesting points that seem worth discussing, but it makes important errors in analysis and interpretation, and misrepresents the results of Efron (2013a) . This means the current version should not be approved.

Burnham, K.P. & D.R. Anderson. 2002. Model Selection and Multi-model Inference: a Practical Information-theoretic Approach. Springer-Verlag, New York.

Colyvan, M. 2008. Is Probability the Only Coherent Approach to Uncertainty? Risk Anal. 28: 645-652.

Efron B. (2013a) Bayes’ Theorem in the 21st Century. Science 340(6137): 1177-1178.

Efron B. (2013b) A 250-year argument: Belief, behavior, and the bootstrap. Bull Amer. Math Soc. 50: 129-146.

  • The twins are both male. However, if the twins were both female, the statistical results would be the same, so I will simply use the data that the twins are the same gender.
  • In reality, the frequency of twins that are identical is likely to vary depending on many factors but we will accept 1/3 for now.
  • Efron (2013b) reports the posterior probability for these twins being identical as “a whopping 61.4% with a flat Laplace prior” but as 2/3 in Efron (2013a) . The latter (I assume 2/3 is “even more whopping”!) is the correct answer, which I confirmed via email with Professor Efron. Therefore, Efron (2013b) incorrectly claims the posterior probability is sensitive to the choice between a Jeffreys or Laplace uninformative prior.
  • When the data are very informative relative to the different priors, the posteriors will be similar, although not identical.

I am very glad the authors wrote this essay. It is a well-written, needed, and useful summary of the current status of “data publication” from a certain perspective. The authors, however, need to be bolder and more analytical. This is an opinion piece, yet I see little opinion. A certain view is implied by the organization of the paper and the references chosen, but they could be more explicit.

The paper would be both more compelling and useful to a broad readership if the authors moved beyond providing a simple summary of the landscape and examined why there is controversy in some areas and then use the evidence they have compiled to suggest a path forward. They need to be more forthright in saying what data publication means to them, or what parts of it they do not deal with. Are they satisfied with the Lawrence et al. definition? Do they accept the critique of Parsons and Fox? What is the scope of their essay?

The authors take a rather narrow view of data publication, which I think hinders their analyses. They describe three types of (digital) data publication: Data as a supplement to an article; data as the subject of a paper; and data independent of a paper. The first two types are relatively new and they represent very little of the data actually being published or released today. The last category, which is essentially an “other” category, is rich in its complexity and encompasses the vast majority of data released. I was disappointed that the examples of this type were only the most bare-bones (Zenodo and Figshare). I think a deeper examination of this third category and its complexity would help the authors better characterize the current landscape and suggest paths forward.

Some questions the authors might consider: Are these really the only three models in consideration, or does the publication model overstate a consensus around a certain type of data publication? Why are there different models, and which approach is better for different situations? Do they have different business models or imply different social contracts? Might it also be worthwhile to develop a typology of “publishers” instead of “publications”? For example, do domain repositories vs. institutional repositories vs. publishers address the issues differently? Are these models sustaining models, or just something to get us through the next 5-10 years while we really figure it out?

I think this oversimplification inhibited some deeper analysis in other areas as well. I would like to see more examination of the validation requirement beyond the lens of peer review, and I would like a deeper examination of incentives and credit beyond citation.

I thought the validation section of the paper was very relevant, but somewhat light. I like the choice of the term validation as more accurate than “quality” and it fits quite well with Callaghan’s useful distinction between technical and scientific review, but I think the authors overemphasize the peer-review style approach. The authors rightly argue that “peer-review” is where the publication metaphor leads us, but it may be a false path. They overstate some difficulties of peer-review (No-one looks at every data value? No, they use statistics, visualization, and other techniques.) while not fully considering who is responsible for what. We need a closer examination of different roles and who are appropriate validators (not necessarily conventional peers). The narrowly defined models of data publication may easily allow for a conventional peer-review process, but it is much more complex in the real-world “other” category. The authors discuss some of this in what they call “independent data validation,” but they don’t draw any conclusions.

Only the simplest of research data collections are validated only by the original creators. More often there are teams working together to develop experiments, sampling protocols, algorithms, etc. There are additional teams who assess, calibrate, and revise the data as they are collected and assembled. The authors discuss some of this in their examples like the PDS and tDAR, but I wish they were more analytical and offered an opinion on the way forward. Are there emerging practices or consensus in these team-based schemes? The level of service concept illustrated by Open Context may be one such area. Would formalizing or codifying some of these processes accomplish the same as peer-review or more? What is the role of the curator or data scientist in all of this? Given the authors’ backgrounds, I was surprised this role was not emphasized more. Finally, I think it is a mistake for science review to be the main way to assess reuse value. It has been shown time and again that data end up being used effectively (and valued) in ways that original experts never envisioned or even thought valid.

The discussion of data citation was good and captured the state of the art well, but again I would have liked to see some views on a way forward. Have we solved the basic problem and are now just dealing with edge cases? Is the “just-in-time identifier” the way to go? What are the implications? Will the more basic solutions work in the interim? More critically, are we overemphasizing the role of citation to provide academic credit? I was gratified that the authors referenced the Parsons and Fox paper which questions the whole data publication metaphor, but I was surprised that they only discussed the “data as software” alternative metaphor. That is a useful metaphor, but I think the ecosystem metaphor has broader acceptance. I mention this because the authors critique the software metaphor because “using it to alter or affect the academic reward system is a tricky prospect”. Yet there is little to suggest that data publication and corresponding citation alters that system either. Indeed there is little if any evidence that data publication and citation incentivize data sharing or stewardship. As Christine Borgman suggests, we need to look more closely at who we are trying to incentivize to do what. There is no reason to assume it follows the same model as research literature publication. It may be beyond the scope of this paper to fully examine incentive structures, but it at least needs to be acknowledged that building on the current model doesn’t seem to be working.

Finally, what is the takeaway message from this essay? It ends rather abruptly with no summary, no suggested directions or immediate challenges to overcome, no call to action, no indications of things we should stop trying, and only brief mention of alternative perspectives. What do the authors want us to take away from this paper?

Overall though, this is a timely and needed essay. It is well researched and nicely written with rich metaphor. With modifications addressing the detailed comments below and better recognizing the complexity of the current data publication landscape, this will be a worthwhile review paper. With more significant modification where the authors dig deeper into the complexities and controversies and truly grapple with their implications to suggest a way forward, this could be a very influential paper. It is possible that the definitions of “publication” and “peer-review” need not be just stretched but changed or even rejected.

  • The whole paper needs a quick copy edit. There are a few typos, missing words, and wrong verb tenses. Note the word “data” is a plural noun, e.g., “Data are not software, nor are they literature.” (Also, “NSICD” appears where “NSIDC” is meant.)
  • Page 2, para 2: “citability is addressed by assigning a PID.” This is not true, as the authors discuss on page 4, para 4. Indeed, page 4, para 4 seems to contradict itself. Citation is more than a locator/identifier.
  • In the discussion of “Data independent of any paper” it is worth noting that there may often be linkages between these data and myriad papers. Indeed, a looser concept of a data paper has existed for some time, where researchers request a citation to a paper even though it is not the data nor fully describes the data (e.g. the CRU temp records).
  • Page 4, para 1: I’m not sure it’s entirely true that published data cannot involve requesting permission. In past work with Indigenous knowledge holders, they were willing to publish summary data and then provide the details when satisfied the use was appropriate and not exploitive. I think those data were “published” as best they could be. A nit, perhaps, but it highlights that there are few if any hard and fast rules about data publication.
  • Page 4, para 2: You may also want to mention the WDS certification effort, which is combining with the DSA via an RDA Working Group.
  • Page 4, para 2: The joint declaration of data citation principles involved many more organizations than Force11, CODATA, and DCC. Please credit them all (maybe in a footnote). The glory of the effort was that it was truly a joint effort across many groups. There is no leader. Force11 was primarily a convener.
  • Page 4, para 6: The deep citation approach recommended by ESIP is not just to list variables or a range of data. It is to identify a “structural index” for the data and to use this to reference subsets. In Earth science this structural index is often space and time, but many other indices are possible: location in a gene sequence, file type, variable, bandwidth, viewing angle, etc. It is not just for “straightforward” data sets.
  • Page 5, para 5: I take issue with the statement that few repositories provide scientific review. I can think of a couple dozen that do just off the top of my head, and I bet most domain repositories have some level of science review. The “scientists” may not always be in house, but the repository is a team facilitator. See my general comments.
  • Page 5, para 10: The PDS system is only unusual in that it is well documented and advertised. As mentioned, this team style approach is actually fairly common.
  • Page 6, para 3: Parsons and Fox don’t just argue that the data publication metaphor is limiting. They also say it is misleading. That should be acknowledged at least, if not actively grappled with.
  • Artifact removal: Unfortunately the authors have not updated the paper with a 2x2 table showing guns and smiles by removed data points. This could dispel the criticism that an asymmetrical expectation bias, which has been shown to exist in similar experiments, is driving a bias leading to inappropriate conclusions. This is my strongest criticism of the paper and should be easily addressed as per my previous review comment. The fact that this simple data presentation was not performed to remove a clear potential source of spurious results is disappointing.
  • The authors have added 95% CIs to figures S1 and S2. This clarifies the scope for expectation bias in these data. The addition of error bars permits the authors’ assumption of a linear trend, indicating that the effect of sequences of either guns or smiles may not skew results. Equally, there could be either a downwards or upwards trend fitting within the confidence intervals that could be indicative of a cognitive bias that may violate the assumptions of the authors, leading to spurious results. One way to remove these doubts could be to stratify the analyses by the length of sequences of identical symbols. If the results hold up in each of the strata, this potential bias could be shown to not be present in the data. If the bias is strong, particularly in longer runs, this could indicate that the positive result was due to small numbers of longer identical runs combined with a cognitive bias rather than an ability to predict future events.
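The stratification suggested above could be sketched roughly as follows (a minimal Python illustration; the function and variable names are mine, and the trial data shown are hypothetical, not the study's actual data):

```python
from itertools import groupby

def run_length_strata(outcomes, hits):
    """Group trials by the length of the run of identical symbols they extend,
    then return the mean hit rate per run length.

    outcomes: sequence of trial symbols, e.g. 'G' (gun) or 'S' (smile).
    hits: parallel sequence of 1/0 flags for correct predictions.
    """
    strata = {}
    i = 0
    for _, group in groupby(outcomes):
        run = list(group)
        for j in range(len(run)):
            # The j-th trial within a run sits at run length j + 1.
            strata.setdefault(j + 1, []).append(hits[i])
            i += 1
    # Mean hit rate per stratum; stable rates across strata would suggest
    # long identical runs are not driving the overall result.
    return {length: sum(h) / len(h) for length, h in sorted(strata.items())}
```

If the positive result held up within each stratum, the run-length/cognitive-bias explanation could be ruled out; a hit rate concentrated in the longer-run strata would point the other way.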

Chamberlain and Szöcs present the taxize R package, a set of functions that provides interfaces to several web tools and databases, and simplifies the process of checking, updating, correcting and manipulating taxon names for researchers working with ecological/biological data. A key feature that is repeated throughout is the need for reproducibility of science workflows and taxize provides a means to achieve this within the R software ecosystem for taxonomic search.

The manuscript is well-written and nicely presented, with a good balance of descriptive text and discourse and practical illustration of package usage. A number of examples illustrate the scope of the package, something that is fully expanded upon in the two appendices, which are a welcome addition to the paper.

As to the package, I am not overly fond of long function names; the authors should consider dropping the data source abbreviations from the function names in a future update/revision of the package. Likewise there is some inconsistency in the naming conventions used. For example there is the ’tpl_search()’ function to search The Plant List, but the equivalent function to search uBio is ’ubio_namebank()’. Whilst this may reflect specific aspects of terminology in use at the respective data stores, it does not help the user gain familiarity with the package by having them remember inconsistent function names.

One advantage of taxize is that it draws together a rich selection of data stores to query. A further suggestion for a future update would be to add generic function names that apply to a database connection/information object. The latter would describe the resource the user wants to search and any other required information, such as the API key, etc., for example:
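A sketch of what creating such a connection object might look like (the constructor name ’ubio()’ and its ’key’ argument are hypothetical, not part of the current taxize API):

```r
# Hypothetical: build a connection/information object describing the data
# store to query; the constructor name and arguments are illustrative only.
foo <- ubio(key = "your-api-key")
```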

The user function to search would then be ’search(foo, "Abies")’. Similar generically named functions would provide the primary user-interface, thus promoting a more consistent toolbox at the R level. This will become increasingly relevant as the scope of taxize increases through the addition of new data stores that the package can access.

In terms of presentation in the paper, I really don’t like the way the R code inputs merge with the R outputs. I know the author of Knitr doesn’t like the demarcation of output being polluted by the R prompt, but I do find it difficult parsing the inputs/outputs you show because often there is no space between them and users not familiar with R will have greater difficulties than I. Consider adding in more conventional indications of R outputs, or physically separate input from output by breaking up the chunks of code to have whitespace between the grey-background chunks. Related, in one location I noticed something amiss with the layout; in the first code block at the top of page 5, the printed output looks wrong here. I would expect the attributes to print on their own line and the data in the attribute to also be on its own separate line.

Note also, the inconsistency in the naming of the output object columns. For example, in the two code chunks shown in column 1 of page 4, the first block has an object printed with column names ’matched_name’ and ’data_source_title’, whilst camelCase is used in the outputs shown in the second block. As the package is revised and developed, consider this and other aspects of providing a consistent presentation to the user.

I was a little confused about the example in the section Resolve Taxonomic Names on page 4. Should the taxon name be “Helianthus annuus” or “Helianthus annus” ? In the ‘mynames’ definition you include ‘Helianthus annuus’ in the character vector but the output shown suggests that the submitted name was ‘Helianthus annus’ (1 “u”) in rows with rownames 9 and 10 in the output shown.

Other than that there were the following minor observations:

  • Abstract: replace “easy” with “simple” in “...fashion that’s easy...”, and move the details about availability and the URI to the end of the sentence.
  • Page 2, Column 1, Paragraph 2: You have “In addition, there is no one authoritative taxonomic names source...”, which is a little clumsy to read. How about “In addition, there is no one authoritative source of taxonomic names...”?
  • Pg 2, C1, P2-3: The abbreviated data sources are presented first (in paragraph 2) and subsequently defined (in para 3). Restructure this so that the abbreviated forms are explained upon first usage.
  • Pg 2, C2, P2: Most R packages are “in development” so I would drop the qualifier and reword the opening sentence of the paragraph.
  • Pg 2, C2, P6: Changing “and more can easily be added” to “and more can be easily added” seems to flow better.
  • Pg 5, paragraph above Figure 1: You refer to converting the object to an ape ’phylo’ object and then repeat essentially the same information in the next sentence. Remove the repetition.
  • Pg 6, C1: The header may be better as “Which taxa are children of the taxon of interest”.
  • Pg 6: In the section “IUCN status”, the term “we” is used to refer to both the authors and the user. This is confusing. Reserve “we” for reference to the authors and use something else (“a user” perhaps) for the other instances. Check this throughout the entire manuscript.
  • Pg 6, C2: in the paragraph immediately below the ‘grep()’ for “RAG1”, two consecutive sentences begin with “However”.
  • Pg 7: The first sentence of “Aggregating data....” reads “In biology, one can asks questions...”. It should be “one asks” or “one can ask”.
  • Pg 7, Conclusions: The first sentence reads “information is increasingly sought out by biologists” . I would drop “out” as “sought” is sufficient on its own.
  • Appendices: Should the two figures in the Appendices have a different reference to differentiate them from Figure 1 in the main body of the paper? As it stands, the paper has two Figure 1s, one on page 5 and a second on page 12 in the Appendix.
  • On Appendix Figure 2: The individual points are a little large. Consider reducing the plotting character size. I appreciate the effect you were going for with the transparency indicating density of observation through overplotting, but the effect is weakened by the size of the individual points.
  • Should the phylogenetic trees have some scale to them? I presume the height of the stems is an indication of phylogenetic distance but the figure is hard to calibrate without an associated scale. A quick look at Paradis (2012) Analysis of Phylogenetics and Evolution with R would suggest however that a scale is not consistently applied to these trees. I am happy to be guided by the authors as they will be more familiar with the conventions than I.

Hydbring and Badalian-Very summarize in this review the current status of potential clinical applications based on miRNA biology. The article gives an interesting historical and scientific perspective on a field that has only recently boomed, focusing mostly on the two main products in the pipeline of several biotech companies (in Europe and the USA) that work with miRNA-based agents for disease diagnostics and therapeutics. Interestingly, not only are the specific agents being produced mentioned, but clever insights into the important cellular pathways regulated by key miRNAs are also briefly discussed.

Minor points to consider in subsequent versions:

  • Page 2; paragraph ‘Genomic location and transcription of microRNAs’ : the concept of miRNA clusters and precursors could be a bit better explained.
  • Page 2; paragraph ‘Genomic location and transcription of microRNAs’ : when discussing the paper by the laboratory of Richard Young (reference 16); I think it is important to mention that that particular study refers to stem cells.
  • Page 2; paragraph ‘Processing of microRNAs’ : “Argonate” should be replaced by “Argonaute”.
  • Page 3; paragraph ‘MicroRNAs in disease diagnostics’ : are miR-15a and 16-1 two different miRNAs? I suggest mentioning them as: miR-15a and miR-16-1 and not using a slash sign (/) between them.
  • Page 4; paragraph ‘Circulating microRNAs’ : I am a bit bothered by the description of multiple sclerosis (MS) only as an autoimmune disease. Without being an expert in the field, I believe that there are other hypotheses related to the etiology of MS.
  • Page 5; paragraph ‘Clinical microRNA diagnostics’ : Does ‘hsa’ in hsa-miR-205 mean something?
  • Page 5; paragraph ‘Clinical microRNA diagnostics’ : the authors mention the company Asuragen, Austin, TX, USA but they do not really say anything about their products. I suggest to either remove the reference to that company or to include their current pipeline efforts.
  • Page 6; paragraph ‘MicroRNAs in therapeutics’ : in the first paragraph the authors suggest that miRNAs-based therapeutics should be able to be applied with “minimal side-effects”. Since one miRNA can affect a whole gene program, I found this a bit counterintuitive; I was wondering if any data has been published to support that statement. Also, in the same paragraph, the authors compare miRNAs to protein inhibitors, which are described as more specific and/or selective. I think there are now good indications to think that protein inhibitors are not always that specific and/or selective and that such a property actually could be important for their evidenced therapeutic effects.
  • Page 6; paragraph ‘MicroRNAs in therapeutics’ : I think the concept of “antagomir” is an important one and could be better highlighted in the text.
  • Throughout the text (pages 3, 5, 6, and 7): I am a bit bothered by separating the word “miRNA” or “miRNAs” at the end of a sentence in the following way: “miR-NA” or “miR-NAs”. It is a bit confusing considering the particular nomenclature used for miRNAs. That was probably done during the formatting and editing step of the paper.
  • I was wondering if the authors could develop a bit more the general concept that seems to indicate that in disease (and in particular in cancer) the expression and levels of miRNAs are in general downregulated. Maybe some papers have been published about this phenomenon?

The authors describe their attempt to reproduce a study in which it was claimed that mild acid treatment was sufficient to reprogramme postnatal splenocytes from a mouse expressing GFP in the oct4 locus to pluripotent stem cells. The authors followed a protocol that has recently become available as a technical update of the original publication.

They report obtaining no pluripotent stem cells expressing oct4-driven GFP over the time period of several days described in the original publication. They describe observation of some green fluorescence that they attributed to autofluorescence rather than GFP, since it coincided with PI-positive dead cells. They confirmed the absence of oct4 expression by RT-PCR and also found no evidence for Nanog or Sox2, two further markers of pluripotent stem cells.

The paper appears to be an authentic attempt to reproduce the original study, although the study might have had additional value with more controls: “failure to reproduce” studies need to be particularly well controlled.

Examples that could have been valuable to include are:

  • For the claim of autofluorescence: the emission spectrum of the samples would likely have shown a broad spectrum not coincident with that of GFP.
  • The reprogramming efficiency of postnatal mouse splenocytes using more conventional methods in the hands of the authors would have been useful as a comparison. The same applies to the lung fibroblasts.
  • There are no positive control samples (conventional mESC or miPSC) in the qPCR experiments for pluripotency markers. This would have indicated the biological sensitivity of the assay.
  • Although perhaps a sensitive issue, it might have been helpful if the authors had been able to obtain samples of cells (or their mRNA) from the original authors for simultaneous analysis.

In summary, this is a useful study as it is citable and confirms previous blog reports, but it could have been improved by more controls.

The article is well written, treats an actual problem (the risk of development of valvulopathy after long-term cabergoline treatment in patients with macroprolactinoma) and provides evidence about the reversibility of valvular changes after timely discontinuation of DA treatment.

Title and abstract: The title is appropriate for the content of the article. The abstract is concise and accurately summarizes the essential information of the paper, although it would be better if the authors defined more precisely the anatomic specificity of the valvulopathy – mild mitral regurgitation.

Case report: The clinical case presentation is comprehensive and detailed but there are some minor points that should be clarified:

  • Please clarify the prolactin levels at diagnosis. In the Presentation section (line 3) “At presentation, prolactin level was found to be greater than 1000 ng/ml on diluted testing” but in the section describing the laboratory evaluation at diagnosis (line 7) “Prolactin level was 55 ng/ml”. Was the difference due to so called “hook effect”?
  • Figure 1: In the text the follow-up MR imaging is indicated to be “after 10 months of cabergoline treatment” . However, the figures 1C and 1D represent 2 years post-treatment MR images. Please clarify.
  • Figure 2: Echocardiograms 2A and 2B are defined as baseline but actually they correspond to the follow-up echocardiographic assessment at the 4th year of cabergoline treatment. Did the patient undergo a baseline (prior to dopamine agonist treatment) echocardiographic evaluation? If he did not, it should be mentioned as study limitation in the Discussion section.
  • The mitral valve thickness was mentioned to be normal. Did the echographic examination visualize increased echogenicity (hyperechogenicity) of the mitral cusps?
  • How could you explain the decrease of LV ejection fraction (from 60-65% to 50-55%) after switching from cabergoline to bromocriptine treatment and respectively its increase to 62% after doubling the bromocriptine daily dose? Was LV function estimated always by the same method during the follow-up?
  • Final paragraph: Authors conclude that early discontinuation and management with bromocriptine may be effective in reversing cardiac valvular dysfunction. Even so, regular echocardiographic follow-up should be considered in patients who are expected to be on long-term high-dose bromocriptine treatment, given its partial 5-HT2b agonist activity.

This is an interesting topic: as the authors note, the way that communicators imagine their audiences will shape their output in significant ways. And I enjoyed what clearly has the potential to be a very rich data set. But I have some reservations about the adequacy of that data set, as it currently stands, given the claims the authors make; the relevance of the analytical framework(s) they draw upon; and the extent to which their analysis has offered significant new insights ‐ by which I mean, I would be keen to see the authors push their discussion further. My suggestions are essentially that they extend the data set they are working with to ensure that their analysis is both rigorous and generalisable, and re-consider the analytical frame they use. I will make some more concrete comments below.

With regard to the data: my feeling is that 14 interviews is a rather slim data set, and that this is heightened by the fact that they were all carried out in a single location, and recruited via snowball sampling and personal contacts. What efforts have the authors made to ensure that they are not speaking to a single, small, sub-community in the much wider category of science communicators? ‐ a case study, if you like, of a particular group of science communicators in North Carolina? In addition, though the authors reference grounded theory as a method for analysis, I got little sense of the data reaching saturation. The reliance on one-off quotes, and on the stories and interests of particular individuals, left me unsure as to how representative interview extracts were. I would therefore recommend either that the data set is extended by carrying out more interviews, in a wider variety of locations (e.g. other sites in the US), or that it is redeveloped as a case study of a particular local professional community. (Which would open up some fascinating questions ‐ how many of these people know each other? What spaces, online or offline, do they interact in, and do they share knowledge, for instance about their audiences? Are there certain touchstone events or publics they communally make reference to?)

As a more minor point with regard to the data set and what the authors want it to do, there were some inconsistencies as to how the study was framed. On p.2 they variously describe the purpose as to “understand the experiences and perspectives of science communicators” and the goals as identifying “the basic interests and value orientations attributed to lay audiences by science communicators”. Later, on p.5, they note that the “research is inductive and seeks to build theory rather than generalizable claims”, while in the Discussion they talk again about having identified communicators‘ “personal motivations” (p.12). There are a number of questions left hanging: is the purpose to understand communicator experiences ‐ in which case why focus on perceptions of audiences? Where is theory being built, and in what ways can this be mobilised in future work? The way that the study is framed and argued as a whole needs, I would suggest, to be clarified.

Relatedly, my sense is that some of this confusion is derived from what I find a rather busy analytical framework. I was not convinced of the value of combining inductive and deductive coding: if the ‘human value typology’ the authors use is ‘universal’, then what is added by open coding? Or, alternatively, why let their open coding, and their findings from this, be constrained by an additional, rather rigid, framework? The addition of the considerable literature on news values to the mix makes the discussion more confusing again. I would suggest that the authors either make much more clear the value of combining these different approaches ‐ building new theory outlining how they relate, and can be jointly mobilised in practice ‐ or fix on one. (My preference would be to focus on the findings from the open coding ‐ but that reflects my own disciplinary biases.)

A more minor analytical point: the authors note that their interviewees come from slightly different professions, and communicate through different formats, have different levels of experience, and different educational backgrounds ‐ but as far as I can see there is no comparative analysis based on this. Were there noticeable differences in the interview talk based on these categorisations? Or was the data set too small to identify any potential contrasts or themes? A note explaining this would be useful.

My final point has reference to the potential that this data set has, particularly if it is extended and developed. I would like to encourage the authors to take their analysis further: at the moment, I was not particularly surprised by the ways in which the communicators referenced news values or imagined their audiences. But it seems to me that the analytical work is not yet complete. What does it mean that communicators imagine audience values and preferences in the way that they do ‐ who is included and excluded by these imaginations? One experiment might be to consider what ‘ideal type’ publics are created in the communicators’ talk. What are the characteristics of the audiences constructed in the interviews and ‐ presumably ‐ in the communicative products of interviewees? What would these people look like? There are also some tantalizing hints in the Discussion that are not really discussed in the Findings ‐ of, for instance, the way in which communicator’s personal motivations may combine with their perceptions of audiences to shape their products. How does this happen? These are, of course, suggestions. But my wider point is that the authors need to show more clearly what is original and useful in their findings ‐ what it is, exactly, that will be important to other scholars in the field.

I hope my comments make sense ‐ please do not hesitate to contact me if not.

This is an interesting article and piece of software. I think it contributes towards further alternatives to easily visualize high dimensionality data on the web. It’s simple and easy to embed into other web frameworks or applications.

a) About the software

  • CSV format: It was hard to guess the expected format. The authors need to add a syntax description of the CSV format to the help page.
  • Simple HTML example: It would be easier to test HeatmapViewer (HmV) if you added a simple downloadable example file with the minimum required HTML-JavaScript to set up a HmV (without all the CSV import code).
  • Color scale: HmV only implements a simple three-point linear color scale. For me this is the major weakness of HmV. It would be very convenient if, in the next HmV release, the user could supply as a parameter a function that manages the score-to-color conversion.
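As an illustration of the kind of callback I have in mind (a hypothetical sketch, not part of the current HmV API; the function name and signature are mine), the score-to-color conversion could be supplied as a plain JavaScript function:

```javascript
// Hypothetical user-supplied color scale: maps a numeric score in
// [min, max] to a CSS color string, interpolating hue from blue to red.
// Assumes max > min.
function scoreToColor(score, min, max) {
  // Clamp and normalize the score to [0, 1].
  const t = Math.min(1, Math.max(0, (score - min) / (max - min)));
  // Hue 240 (blue) at the minimum down to 0 (red) at the maximum.
  const hue = 240 * (1 - t);
  return `hsl(${hue}, 100%, 50%)`;
}
```

HmV could then call such a function for each cell instead of its built-in three-point scale, letting users plug in arbitrary palettes.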

b) About the paper

  • http://www.broadinstitute.org/gsea (desktop)
  • http://jheatmap.github.io/jheatmap/ (website)
  • http://www.gitools.org/ (desktop)
  • http://blog.nextgenetics.net/demo/entry0044/ (website)
  • http://docs.scipy.org/doc/numpy/reference/generated/numpy.histogram2d.html (python)
  • http://matplotlib.org/api/pyplot_api.html (python)
  • Predicted protein mutability landscape: The authors say: “Without using a tool such as the HeatmapViewer, we could hardly obtain an overview of the protein mutability landscape”. This paragraph seems to suggest that you can explore the data with HmV. I think that HmV is a good tool to report your data, but not to explore it.
  • Conclusions: The authors say: “... provides a new, powerful way to generate and display matrix data in web presentations and in publications.” To use heat maps in web presentations and publications is nothing new. I think that HmV makes it easier and user-friendly, but it’s not new.

This article addresses the links between habitat condition and an endangered bird species in an important forest reserve (ASF) in eastern Kenya. It addresses an important topic, especially given ongoing anthropogenic pressures on this and similar types of forest reserves in eastern Kenya and throughout the tropics. Despite the rather small temporal and spatial extent of the study, it should make an important contribution to bird and forest conservation. There are a number of issues with the methods and analysis that need to be clarified/addressed, however; furthermore, some of the conclusions overreach the data collected, while other important results are given less emphasis than they warrant. Below are more specific comments by section:

The conclusion that human-driven tree removal is an important contributor to the degradation of ASF is reasonable given the data reported in the article. Elephant damage, while likely a very big contributor to habitat modification in ASF, was not the focus of the study (the authors state clearly in the Discussion that elephant damage was not systematically quantified, and thus no data were analyzed), and thus should only be mentioned in passing here, if at all.

More information about the life history ecology of A. sokokensis would provide welcome context here. A bit more detail about breeding sites as well as dispersal behavior would be helpful, and especially why these and other aspects render the Pipit a good indicator species/proxy for habitat condition. This could be revisited in the Discussion as links are made between habitat conditions and occurrence of the bird (where you discuss the underlying mechanisms for why it thrives in some parts of ASF and not others, and why its abundance correlates strongly with some types of disturbance and not others). Again, you reference other studies that have explored other species in ASF and forest disturbance, but do not really explicitly state why the Pipit is a particularly important indicator of forest condition.

  • Bird Survey: As described, all sightings and calls were recorded and incorporated into distance analysis – but it is not clear here whether or not distances to both auditory and visual encounters were measured the same way (i.e., with the rangefinder). Please clarify.
  • Floor litter sampling: It is not clear here whether litter cover was recorded as a continuous variable (percentage) or a categorical one. If categorical, please describe the percentage “categories” used.
  • Mean litter depth graph (Figure 2) and accompanying text reports the means and sd but no post-hoc comparison test (e.g. Tukey HSD) – need to report the stats on which differences were/were not significant.
  • Figure 3 – you indicate litter depth was a better predictor of bird abundance than litter cover, but the r-squared is higher for litter cover. This needs to be clarified (also indicate why you chose to show only depth values in Figure 3).
  • The linear equation can be put in Figure 3 caption (not necessary to include in text).
  • Figure 4 – stats aren’t presented here; also, the caption states that tree loss and leaf litter are inversely correlated – this might be taken to mean, given discussion (below) about pruning, that there could be a poaching threshold below which poaching may pay dividends to Pipits (and above which Pipits are negatively affected). This warrants further exploration/elaboration.
  • The pruning result is arguably the most important one here – this suggests an intriguing trade-off between poaching and bird conservation (in particular, the suggestion that pruning by poachers may bolster Pipit populations – or at the very least mitigate against other aspects of habitat degradation). Worth highlighting this more in Discussion.
  • Last sentence on p. 7 suggests causality (“That is because…”) – but your data only support correlation (one can imagine that there may have been other extrinsic or intrinsic drivers of population decline).
  • P. 8: discussion of classification of habitat types in ASF is certainly interesting, but could be made much more succinct in keeping with focus of this paper.
  • P. 9, top: first paragraph could be expanded – as noted before, the tradeoff between poaching/pruning and Pipit abundance is worth exploring in more depth. Could your results be taken as a prescription for understory pruning as a conservation tool for the Sokoke Pipit or other threatened species? More detail here would be welcome (and also in the Conclusion); the subsequent paragraph about Pipit foraging behavior and its specific relationship to understory vegetation at varying heights could be incorporated into this discussion. Is there any info about optimal perch height for foraging or for flying through the understory? Linking to results of other studies in ASF, is there potential for positive correlations with optimal habitat conditions for the other important bird species in ASF, in order to make more general conclusions about management?

Bierbach and co-authors investigated the topic of the evolution of the audience effect in live-bearing fishes by applying a comparative method. They specifically focused on the hypothesis that sperm competition risk (SCR), arising from male mate choice copying, and avoidance of aggressive interactions play a key role in driving the evolution of audience-induced changes in male mate choice behavior. The authors found support for their hypothesis of an influence of SCR on the evolution of deceptive behavior, as their findings at the species level showed a positive correlation between mean sexual activity and the occurrence of deceptive behavior. Moreover, they found a positive correlation between mean aggressiveness and sexual activity, but they did not detect a relationship between aggressiveness and audience effects.

The manuscript is certainly well written and attractive, but I have some major concerns about the data analyses that prevent me from endorsing its acceptance at the present stage.

I see three main problems with the statistics that could have led to potentially wrong results and, thus, to completely misleading conclusions.

  • First of all, the authors cannot run an ANCOVA in which there is a significant interaction between the factor and the covariate (Table 2a). Indeed, when the assumption of common slopes is violated (as in their case), all other significant terms are meaningless. They might want to consider alternative statistical procedures, e.g. the Johnson-Neyman method.
  • Second, the authors cannot retain a non-significant interaction term in the model, as this may affect the estimates for the factors (Table 2d). They need to remove the species × treatment interaction (as they did for other non-significant terms; see top left of the same page 7).
  • The third problem I see regards all the GLMs in which species are compared. The authors entered 'species' as a fixed factor when species is clearly a random factor. Entering species as a fixed factor has the effect of badly inflating the denominator degrees of freedom, making the authors’ conclusions far too permissive. They should instead use mixed LMs, in which species is the random factor. They should also take care that the degrees of freedom are approximately equal to the number of species (not the number of trials). To do so, they can enter the interaction between treatment and species as a random factor.

Data need to be re-analyzed relying on the proper statistical procedures to confirm results and conclusions.
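To make the recommended re-analysis concrete, here is a hypothetical sketch (synthetic data; the variable names `response`, `treatment`, and `species` are invented for illustration, not taken from the manuscript) of the switch from a fixed-factor model to a mixed LM with species as the random factor, using statsmodels:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data: 10 species, 30 trials each, a true treatment effect of 0.5,
# plus species-level random variation.
rng = np.random.default_rng(1)
n_species, trials = 10, 30
species = np.repeat([f"sp{i}" for i in range(n_species)], trials)
treatment = np.tile([0, 1], n_species * trials // 2)
species_effect = np.repeat(rng.normal(0.0, 1.0, n_species), trials)
response = 0.5 * treatment + species_effect + rng.normal(0.0, 1.0, n_species * trials)
df = pd.DataFrame({"species": species, "treatment": treatment, "response": response})

# The criticized approach: species as a fixed factor, so inference rests on
# the full trial count (here 300 rows) rather than the 10 species.
fixed = smf.ols("response ~ treatment + C(species)", df).fit()

# The recommended approach: a mixed LM with species as a random factor, so
# species-level variation is modeled rather than consumed as fixed parameters.
mixed = smf.mixedlm("response ~ treatment", df, groups=df["species"]).fit()

# Per the review, a random treatment x species interaction (random slopes)
# would bring the effective df down toward the number of species:
#   smf.mixedlm("response ~ treatment", df, groups=df["species"],
#               re_formula="~treatment")
print(mixed.params["treatment"])  # estimated treatment effect
```

The point of the contrast is not the point estimates, which are similar, but the unit of replication the model assumes when judging significance.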

A more theoretical objection to the authors’ interpretation of the results (supposing that the results are confirmed by the new analyses) could emerge from the idea that male success in mating with the preferred female may reduce the probability of immediate female re-mating, and thus reduce the risk of sperm competition in the short term. As a consequence, it may not be beneficial to significantly increase the risk of losing a high-quality, already inseminated female for a cost that will not be paid with certainty. The authors might want to consider this for discussion as well.

Lastly, I think that the scenario generated from comparative studies at the species level may be explained by phylogenetic factors other than sexual selection. Only the inclusion of phylogeny in the data analyses, which allows one to account for the shared history among species, can lead to unequivocal adaptive explanations for the observed patterns. I see the difficulty in doing this with few species, as is the case in the present study, but I would suggest that the authors also consider this future perspective. Moreover, a phylogenetic comparative study would be aided by the recent development of a well-resolved phylogenetic tree for the genus Poecilia (Meredith 2011).

Page 3: the authors should specify that part of the data on male aggressiveness (3 species from Table 1) also comes from previous studies, as they do for the data on deceptive male mating behavior.

Page 5: since the data on mate choice come from other studies, is it necessary to report such a detailed description of the methods in this section? Perhaps the authors could refer to the already published methods and give only a brief additional description.

Page 6: how do the authors explain the complete absence of aggressive displays between the focal male and the audience male during the mate choice experiments? This is curious, considering that in all the examined species aggressive behaviors and dominance establishment are always observed during dyadic encounters.

In their response to my previous comments, the authors have clarified that only the data from the “Experimental phase” were used to calculate prediction accuracy. However, if I now understand the analysis procedure correctly, there are serious concerns with the approach adopted.

First, let me state what I now understand the analysis procedure to be:

  • For each subject the PD values across the 20 trials were converted to z-scores.
  • For each stimulus, the mean z-score was calculated.
  • The sign of the mean z-score for each stimulus was used to make predictions.
  • For each of the 20 trials, if the sign of the z-score on that trial was the same as the sign of the mean z-score for that stimulus, a hit (correct prediction) was assigned. In contrast, if the sign of the z-score on that trial was the opposite of the sign of the mean z-score for that stimulus, a miss (incorrect prediction) was assigned.
  • For each stimulus, the total hits and misses were calculated.
  • The average hit rate (correct predictions) for each stimulus was calculated across subjects.

If this is a correct description of the procedure, the problem is that the same data were used to determine the sign of the z-score that would be associated with a correct prediction and to determine the actual correct predictions. This will effectively guarantee a correct prediction rate above chance.

To check if this is true, I quickly generated random data and used the analysis procedure as laid out above (see MATLAB code below). Across 10,000 iterations of 100 random subjects, the average “prediction” accuracy was ~57% for each stimulus (standard deviation, 1.1%), remarkably similar to the values reported by the authors in their two studies. In this simulation, I assumed that all subjects contributed 20 trials, but in the actual data analyzed in the study, some subjects contributed fewer than 20 trials due to artifacts in the pupil measurements.
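The reviewer’s MATLAB code is not reproduced in this excerpt. The following Python sketch implements the same circular procedure on purely random data; the split of each subject’s 20 trials into two stimuli of 10 trials each is an assumption, so the exact accuracy will differ from the ~57% quoted, but it stays well above the 50% chance level, which is the reviewer’s point:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_subjects=100, n_stimuli=2, trials_per_stim=10):
    """One iteration of the circular analysis on random 'pupil' data."""
    n_trials = n_stimuli * trials_per_stim
    data = rng.standard_normal((n_subjects, n_trials))  # no real effect exists
    # Step 1: z-score each subject's trials.
    z = (data - data.mean(axis=1, keepdims=True)) / data.std(axis=1, keepdims=True)
    labels = np.repeat(np.arange(n_stimuli), trials_per_stim)
    per_stim_acc = []
    for s in range(n_stimuli):
        zs = z[:, labels == s]
        # Steps 2-3: the sign of the mean z-score defines the "prediction"...
        pred_sign = np.sign(zs.mean(axis=1, keepdims=True))
        # Step 4: ...which is then scored against the very same trials.
        hits = np.sign(zs) == pred_sign
        per_stim_acc.append(hits.mean())  # steps 5-6
    return float(np.mean(per_stim_acc))

accs = [simulate() for _ in range(1000)]
print(np.mean(accs))  # well above 0.5 despite the data being pure noise
```

Because each trial contributes to the mean whose sign it is compared against, agreement above chance is guaranteed even for noise, which is exactly the circularity the review describes.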

If the above description of the analysis procedure is correct, then I think the authors have provided no evidence to support pupil dilation prediction of random events, with the results reflecting circularity in the analysis procedure.

However, if the above description of the procedure is incorrect, the authors need to clarify exactly what the analysis procedure was, perhaps by providing their analysis scripts.

I think this paper is excellent and an important addition to the literature. I really like the conceptualization of a self-replicating cycle, as it illustrates the concept that the “problem” starts with the neuron: due to one or more of a variety of insults, the neuron is negatively impacted and releases H1, which in turn activates microglia with overexpression of cytokines that may, when limited, foster repair, but when activation becomes chronic (as is demonstrated here with the potential of cyclic H1 release) facilitates neurotoxicity. I hope the authors intend to measure cytokine expression soon, especially IL-1 and TNF in both astrocytes and microglia, and S100B in astrocytes.

In more detail, Gilthorpe and colleagues provide novel experimental data that demonstrate a new role for a specific histone protein, the linker histone H1, in neurodegeneration. This study, which was originally designed to identify axonal chemorepellents, actually revealed a previously unknown role for H1, as well as other novel and thought-provoking results. Fortuitously, as sometimes happens, the authors had a pleasant surprise: their results set some old dogmas on their respective ears and opened up new avenues of approach for studying the role of histones in the self-amplification of neurodegenerative cycles. In particular, they show that H1 is not just a nice little partner of nuclear DNA, as previously thought. H1 is released from ‘damaged’ (or leaky) neurons, kills adjacent healthy neurons, and promotes a proinflammatory profile in both microglia and astrocytes.

Interestingly, the authors’ conceptualization of a damaged neuron → H1 release → healthy-neuron-killing cycle does not take into account the H1-mediated proinflammatory glial response. This facet of the study opens a new avenue these investigators may wish to follow: the role of H1 in the stimulation of neuroinflammation with overexpression of cytokines. This is interesting, as neuronal injury has been shown to set in motion an acute phase response that activates glia and increases their expression of cytokines (interleukin-1 and S100B), which, in turn, induce neurons to produce excess Alzheimer-related proteins such as βAPP and ApoE (favoring formation of mature Aβ/ApoE plaques), activated MAPK-p38 and hyperphosphorylated tau (favoring formation of neurofibrillary tangles), and α-synuclein (favoring formation of Lewy bodies). To date, the neuronal response shown to be responsible for stimulating glia is neuronal stress-related release of sAPP, but these H1 results from Gilthorpe and colleagues may contribute to or exacerbate the role of sAPP.


Peer Reviewed Literature


This Guide was created by Carolyn Swidrak (retired).

Research findings are communicated in many ways.  One of the most important ways is through publication in scholarly, peer-reviewed journals.

Research published in scholarly journals is held to a high standard.  It must make a credible and significant contribution to the discipline.  To ensure a very high level of quality, articles that are submitted to scholarly journals undergo a process called peer review.

Once an article has been submitted for publication, it is reviewed by other independent, academic experts (at least two) in the same field as the authors.  These are the peers.  The peers evaluate the research and decide if it is good enough and important enough to publish.  Usually there is a back-and-forth exchange between the reviewers and the authors, including requests for revisions, before an article is published. 

Peer review is a rigorous process but the intensity varies by journal.  Some journals are very prestigious and receive many submissions for publication.  They publish only the very best, most highly regarded research. 

The terms scholarly, academic, peer-reviewed and refereed are sometimes used interchangeably, although there are slight differences.

Scholarly and academic may refer to peer-reviewed articles, but not all scholarly and academic journals are peer-reviewed (although most are).  For example, the Harvard Business Review is an academic journal, but it is editorially reviewed, not peer-reviewed.

Peer-reviewed and refereed are identical terms.

From  Peer Review in 3 Minutes  [Video], by the North Carolina State University Library, 2014, YouTube (https://youtu.be/rOCQZ7QnoN0).

Peer reviewed articles can include:

  • Original research (empirical studies)
  • Review articles
  • Systematic reviews
  • Meta-analyses

There is much excellent, credible information in existence that is NOT peer-reviewed.  Peer-review is simply ONE MEASURE of quality. 

Much of this information is referred to as "gray literature."

Government Agencies

Government websites such as that of the Centers for Disease Control and Prevention (CDC) publish high-level, trustworthy information.  However, most of it is not peer-reviewed.  (Some of their publications are peer-reviewed, however; the journal Emerging Infectious Diseases, published by the CDC, is one example.)

Conference Proceedings

Papers from conference proceedings are not usually peer-reviewed.  They may go on to become published articles in a peer-reviewed journal. 

Dissertations

Dissertations are written by doctoral candidates, and while they are academic they are not peer-reviewed.

Many students like Google Scholar because it is easy to use.  While the results from Google Scholar are generally academic, they are not necessarily peer-reviewed.  Typically, you will find:

  • Peer reviewed journal articles (although they are not identified as peer-reviewed)
  • Unpublished scholarly articles (not peer-reviewed)
  • Masters theses, doctoral dissertations and other degree publications (not peer-reviewed)
  • Book citations and links to some books (not necessarily peer-reviewed)
  • Last Updated: Aug 14, 2024 10:25 AM
  • URL: https://libguides.regiscollege.edu/peer_review


EJIFCC, v.25(3); 2014 Oct

Peer Review in Scientific Publications: Benefits, Critiques, & A Survival Guide

Jacalyn Kelly

1 Clinical Biochemistry, Department of Pediatric Laboratory Medicine, The Hospital for Sick Children, University of Toronto, Toronto, Ontario, Canada

Tara Sadeghieh

Khosrow Adeli

2 Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Canada

3 Chair, Communications and Publications Division (CPD), International Federation of Clinical Chemistry and Laboratory Medicine (IFCC), Milan, Italy

The authors declare no conflicts of interest regarding publication of this article.

Peer review has been defined as a process of subjecting an author’s scholarly work, research or ideas to the scrutiny of others who are experts in the same field. It functions to encourage authors to meet the accepted high standards of their discipline and to control the dissemination of research data to ensure that unwarranted claims, unacceptable interpretations or personal views are not published without prior expert review. Despite its widespread use by most journals, the peer review process has also been widely criticized due to the slowness of the process to publish new findings and due to perceived bias by the editors and/or reviewers. Within the scientific community, peer review has become an essential component of the academic writing process. It helps ensure that papers published in scientific journals answer meaningful research questions and draw accurate conclusions based on professionally executed experimentation. Submission of low-quality manuscripts has become increasingly prevalent, and peer review acts as a filter to prevent this work from reaching the scientific community. The major advantage of a peer review process is that peer-reviewed articles provide a trusted form of scientific communication. Since scientific knowledge is cumulative and builds on itself, this trust is particularly important. Despite the positive impacts of peer review, critics argue that the peer review process stifles innovation in experimentation and acts as a poor screen against plagiarism. Despite its downfalls, no foolproof system has yet been developed to take the place of peer review; however, researchers have been looking into electronic means of improving the peer review process. Unfortunately, the recent explosion in online-only/electronic journals has led to mass publication of a large number of scientific articles with little or no peer review. This poses significant risk to advances in scientific knowledge and its future potential.
The current article summarizes the peer review process, highlights the pros and cons associated with different types of peer review, and describes new methods for improving peer review.

WHAT IS PEER REVIEW AND WHAT IS ITS PURPOSE?

Peer Review is defined as “a process of subjecting an author’s scholarly work, research or ideas to the scrutiny of others who are experts in the same field” ( 1 ). Peer review is intended to serve two primary purposes. Firstly, it acts as a filter to ensure that only high quality research is published, especially in reputable journals, by determining the validity, significance and originality of the study. Secondly, peer review is intended to improve the quality of manuscripts that are deemed suitable for publication. Peer reviewers provide suggestions to authors on how to improve the quality of their manuscripts, and also identify any errors that need correcting before publication.

HISTORY OF PEER REVIEW

The concept of peer review was developed long before the scholarly journal. In fact, the peer review process is thought to have been used as a method of evaluating written work since ancient Greece ( 2 ). The peer review process was first described by a physician named Ishaq bin Ali al-Rahwi of Syria, who lived from 854-931 CE, in his book Ethics of the Physician ( 2 ). There, he stated that physicians must take notes describing the state of their patients’ medical conditions upon each visit. Following treatment, the notes were scrutinized by a local medical council to determine whether the physician had met the required standards of medical care. If the medical council deemed that the appropriate standards were not met, the physician in question could receive a lawsuit from the maltreated patient ( 2 ).

The invention of the printing press in 1453 allowed written documents to be distributed to the general public ( 3 ). At this time, it became more important to regulate the quality of the written material that became publicly available, and editing by peers increased in prevalence. In 1620, Francis Bacon wrote the work Novum Organum, where he described what eventually became known as the first universal method for generating and assessing new science ( 3 ). His work was instrumental in shaping the Scientific Method ( 3 ). In 1665, the French Journal des sçavans and the English Philosophical Transactions of the Royal Society were the first scientific journals to systematically publish research results ( 4 ). Philosophical Transactions of the Royal Society is thought to be the first journal to formalize the peer review process in 1665 ( 5 ), however, it is important to note that peer review was initially introduced to help editors decide which manuscripts to publish in their journals, and at that time it did not serve to ensure the validity of the research ( 6 ). It did not take long for the peer review process to evolve, and shortly thereafter papers were distributed to reviewers with the intent of authenticating the integrity of the research study before publication. The Royal Society of Edinburgh adhered to the following peer review process, published in their Medical Essays and Observations in 1731: “Memoirs sent by correspondence are distributed according to the subject matter to those members who are most versed in these matters. The report of their identity is not known to the author.” ( 7 ). The Royal Society of London adopted this review procedure in 1752 and developed the “Committee on Papers” to review manuscripts before they were published in Philosophical Transactions ( 6 ).

Peer review in the systematized and institutionalized form has developed immensely since the Second World War, at least partly due to the large increase in scientific research during this period ( 7 ). It is now used not only to ensure that a scientific manuscript is experimentally and ethically sound, but also to determine which papers sufficiently meet the journal’s standards of quality and originality before publication. Peer review is now standard practice by most credible scientific journals, and is an essential part of determining the credibility and quality of work submitted.

IMPACT OF THE PEER REVIEW PROCESS

Peer review has become the foundation of the scholarly publication system because it effectively subjects an author’s work to the scrutiny of other experts in the field. Thus, it encourages authors to strive to produce high quality research that will advance the field. Peer review also supports and maintains integrity and authenticity in the advancement of science. A scientific hypothesis or statement is generally not accepted by the academic community unless it has been published in a peer-reviewed journal ( 8 ). The Institute for Scientific Information ( ISI ) only considers journals that are peer-reviewed as candidates to receive Impact Factors. Peer review is a well-established process which has been a formal part of scientific communication for over 300 years.

OVERVIEW OF THE PEER REVIEW PROCESS

The peer review process begins when a scientist completes a research study and writes a manuscript that describes the purpose, experimental design, results, and conclusions of the study. The scientist then submits this paper to a suitable journal that specializes in a relevant research field, a step referred to as pre-submission. The editors of the journal will review the paper to ensure that the subject matter is in line with that of the journal, and that it fits with the editorial platform. Very few papers pass this initial evaluation. If the journal editors feel the paper sufficiently meets these requirements and is written by a credible source, they will send the paper to accomplished researchers in the field for a formal peer review. Peer reviewers are also known as referees (this process is summarized in Figure 1 ). The role of the editor is to select the most appropriate manuscripts for the journal, and to implement and monitor the peer review process. Editors must ensure that peer reviews are conducted fairly, and in an effective and timely manner. They must also ensure that there are no conflicts of interest involved in the peer review process.

Figure 1: Overview of the review process

When a reviewer is provided with a paper, he or she reads it carefully and scrutinizes it to evaluate the validity of the science, the quality of the experimental design, and the appropriateness of the methods used. The reviewer also assesses the significance of the research, and judges whether the work will contribute to advancement in the field by evaluating the importance of the findings, and determining the originality of the research. Additionally, reviewers identify any scientific errors and references that are missing or incorrect. Peer reviewers give recommendations to the editor regarding whether the paper should be accepted, rejected, or improved before publication in the journal. The editor will mediate author-referee discussion in order to clarify the priority of certain referee requests, suggest areas that can be strengthened, and overrule reviewer recommendations that are beyond the study’s scope ( 9 ). If the paper is accepted, as per suggestion by the peer reviewer, the paper goes into the production stage, where it is tweaked and formatted by the editors, and finally published in the scientific journal. An overview of the review process is presented in Figure 1 .

WHO CONDUCTS REVIEWS?

Peer reviews are conducted by scientific experts with specialized knowledge on the content of the manuscript, as well as by scientists with a more general knowledge base. Peer reviewers can be anyone who has competence and expertise in the subject areas that the journal covers. Reviewers can range from young and up-and-coming researchers to old masters in the field. Often, the young reviewers are the most responsive and deliver the best quality reviews, though this is not always the case. On average, a reviewer will conduct approximately eight reviews per year, according to a study on peer review by the Publishing Research Consortium (PRC) ( 7 ). Journals will often have a pool of reviewers with diverse backgrounds to allow for many different perspectives. They will also keep a rather large reviewer bank, so that reviewers do not get burnt out, overwhelmed or time constrained from reviewing multiple articles simultaneously.

WHY DO REVIEWERS REVIEW?

Referees are typically not paid to conduct peer reviews and the process takes considerable effort, so the question is raised as to what incentive referees have to review at all. Some feel an academic duty to perform reviews, and are of the mentality that if their peers are expected to review their papers, then they should review the work of their peers as well. Reviewers may also have personal contacts with editors, and may want to assist as much as possible. Others review to keep up-to-date with the latest developments in their field, and reading new scientific papers is an effective way to do so. Some scientists use peer review as an opportunity to advance their own research as it stimulates new ideas and allows them to read about new experimental techniques. Other reviewers are keen on building associations with prestigious journals and editors and becoming part of their community, as sometimes reviewers who show dedication to the journal are later hired as editors. Some scientists see peer review as a chance to become aware of the latest research before their peers, and thus be first to develop new insights from the material. Finally, in terms of career development, peer reviewing can be desirable as it is often noted on one’s resume or CV. Many institutions consider a researcher’s involvement in peer review when assessing their performance for promotions ( 11 ). Peer reviewing can also be an effective way for a scientist to show their superiors that they are committed to their scientific field ( 5 ).

ARE REVIEWERS KEEN TO REVIEW?

A 2009 international survey of 4000 peer reviewers conducted by the charity Sense About Science at the British Science Festival at the University of Surrey, found that 90% of reviewers were keen to peer review ( 12 ). One third of respondents to the survey said they were happy to review up to five papers per year, and an additional one third of respondents were happy to review up to ten.

HOW LONG DOES IT TAKE TO REVIEW ONE PAPER?

On average, it takes approximately six hours to review one paper ( 12 ); however, this number varies greatly depending on the content of the paper and the individual reviewer. One in every 100 participants in the Sense About Science survey claimed to have taken more than 100 hours to review their last paper ( 12 ).

HOW TO DETERMINE IF A JOURNAL IS PEER REVIEWED

Ulrichsweb is a directory that provides information on over 300,000 periodicals, including whether a journal is peer reviewed ( 13 ). After logging into the system with an institutional login (e.g., from the University of Toronto), search terms, journal titles, or ISSN numbers can be entered into the search bar. The database provides the title, publisher, and country of origin of the journal, and indicates whether the journal is still actively publishing. A black book symbol (labelled 'refereed') indicates that the journal is peer reviewed.

THE EVALUATION CRITERIA FOR PEER REVIEW OF SCIENTIFIC PAPERS

As previously mentioned, when a reviewer receives a scientific manuscript, he/she will first determine if the subject matter is well suited for the content of the journal. The reviewer will then consider whether the research question is important and original, a process which may be aided by a literature scan of review articles.

Scientific papers submitted for peer review usually follow a specific structure that begins with the title, followed by the abstract, introduction, methodology, results, discussion, conclusions, and references. The title must be descriptive and include the concept and organism investigated, and potentially the variable manipulated and the systems used in the study. The peer reviewer evaluates if the title is descriptive enough, and ensures that it is clear and concise. A study by the National Association of Realtors (NAR) published by the Oxford University Press in 2006 indicated that the title of a manuscript plays a significant role in determining reader interest, as 72% of respondents said they could usually judge whether an article will be of interest to them based on the title and the author, while 13% of respondents claimed to always be able to do so ( 14 ).

The abstract is a summary of the paper, which briefly mentions the background or purpose, methods, key results, and major conclusions of the study. The peer reviewer assesses whether the abstract is sufficiently informative and if the content of the abstract is consistent with the rest of the paper. The NAR study indicated that 40% of respondents could determine whether an article would be of interest to them based on the abstract alone 60-80% of the time, while 32% could judge an article based on the abstract 80-100% of the time ( 14 ). This demonstrates that the abstract alone is often used to assess the value of an article.

The introduction of a scientific paper presents the research question in the context of what is already known about the topic, in order to identify why the question being studied is of interest to the scientific community, and what gap in knowledge the study aims to fill ( 15 ). The introduction identifies the study’s purpose and scope, briefly describes the general methods of investigation, and outlines the hypothesis and predictions ( 15 ). The peer reviewer determines whether the introduction provides sufficient background information on the research topic, and ensures that the research question and hypothesis are clearly identifiable.

The methods section describes the experimental procedures and explains why each experiment was conducted. It also lists the equipment and reagents used in the investigation. The methods section should be detailed enough that others can use it to repeat the experiment ( 15 ). Methods are written in the past tense and in the active voice. The peer reviewer assesses whether the appropriate methods were used to answer the research question, and whether they were described in sufficient detail. If information is missing from the methods section, it is the peer reviewer's job to identify what details need to be added.

The results section is where the outcomes of the experiment and trends in the data are explained without judgement, bias or interpretation ( 15 ). This section can include statistical tests performed on the data, as well as figures and tables in addition to the text. The peer reviewer ensures that the results are described with sufficient detail, and determines their credibility. Reviewers also confirm that the text is consistent with the information presented in tables and figures, and that all figures and tables included are important and relevant ( 15 ). The peer reviewer will also make sure that table and figure captions are appropriate both contextually and in length, and that tables and figures present the data accurately.

The discussion section is where the data is analyzed. Here, the results are interpreted and related to past studies ( 15 ). The discussion describes the meaning and significance of the results in terms of the research question and hypothesis, and states whether the hypothesis was supported or rejected. This section may also provide possible explanations for unusual results and suggestions for future research ( 15 ). The discussion should end with a conclusions section that summarizes the major findings of the investigation. The peer reviewer determines whether the discussion is clear and focused, and whether the conclusions are an appropriate interpretation of the results. Reviewers also ensure that the discussion addresses the limitations of the study, any anomalies in the results, the relationship of the study to previous research, and the theoretical implications and practical applications of the study.

The references are found at the end of the paper, and list all of the information sources cited in the text to describe the background, methods, and/or interpret results. Depending on the citation method used, the references are listed in alphabetical order according to author last name, or numbered according to the order in which they appear in the paper. The peer reviewer ensures that references are used appropriately, cited accurately, formatted correctly, and that none are missing.

Finally, the peer reviewer determines whether the paper is clearly written and if the content seems logical. After thoroughly reading through the entire manuscript, they determine whether it meets the journal's standards for publication, and whether it falls within the top 25% of papers in its field ( 16 ) to determine priority for publication. An overview of what a peer reviewer looks for when evaluating a manuscript, in order of importance, is presented in Figure 2.

Figure 2. How a peer reviewer evaluates a manuscript

To increase the chance of success in the peer review process, the author must ensure that the paper fully complies with the journal guidelines before submission. The author must also be open to criticism and suggested revisions, and learn from mistakes made in previous submissions.

ADVANTAGES AND DISADVANTAGES OF THE DIFFERENT TYPES OF PEER REVIEW

The peer review process is generally conducted in one of three ways: open review, single-blind review, or double-blind review. In open review, the author of the paper and the peer reviewer each know the other's identity. In single-blind review, the reviewer's identity is kept private, but the author's identity is revealed to the reviewer. In double-blind review, the identities of both the reviewer and the author are kept anonymous. Open peer review is advantageous in that it discourages the reviewer from leaving malicious comments, being careless, or procrastinating on completing the review ( 2 ). It encourages reviewers to be open and honest without being disrespectful. Open reviewing also discourages plagiarism amongst authors ( 2 ). On the other hand, open peer review can prevent reviewers from being honest for fear of developing poor rapport with the author; the reviewer may withhold or tone down their criticisms in order to be polite ( 2 ). This is especially true when a younger reviewer is given a more esteemed author's work and may hesitate to provide criticism for fear that it will damage their relationship with a superior ( 2 ). According to the Sense About Science survey, editors find that completely open reviewing decreases the number of people willing to participate and leads to reviews of little value ( 12 ). In the aforementioned PRC study, only 23% of authors surveyed had experience with open peer review ( 7 ).

Single-blind peer review is by far the most common. In the PRC study, 85% of authors surveyed had experience with single-blind peer review ( 7 ). This method is advantageous as the reviewer is more likely to provide honest feedback when their identity is concealed ( 2 ). This allows the reviewer to make independent decisions without the influence of the author ( 2 ). The main disadvantage of reviewer anonymity, however, is that reviewers who receive manuscripts on subjects similar to their own research may be tempted to delay completing the review in order to publish their own data first ( 2 ).

Double-blind peer review is advantageous as it prevents the reviewer from being biased against the author based on their country of origin or previous work ( 2 ). This allows the paper to be judged based on the quality of the content, rather than the reputation of the author. The Sense About Science survey indicates that 76% of researchers think double-blind peer review is a good idea ( 12 ), and the PRC survey indicates that 45% of authors have had experience with double-blind peer review ( 7 ). The disadvantage of double-blind peer review is that, especially in niche areas of research, it can sometimes be easy for the reviewer to determine the identity of the author based on writing style, subject matter or self-citation, and thus, impart bias ( 2 ).

Masking the author’s identity from peer reviewers, as is the case in double-blind review, is generally thought to minimize bias and maintain review quality. A study by Justice et al. in 1998 investigated whether masking author identity affected the quality of the review ( 17 ). One hundred and eighteen manuscripts were randomized; 26 were peer reviewed as normal, and 92 were moved into the ‘intervention’ arm, where editor quality assessments were completed for 77 manuscripts and author quality assessments were completed for 40 manuscripts ( 17 ). There was no perceived difference in quality between the masked and unmasked reviews. Additionally, the masking itself was often unsuccessful, especially with well-known authors ( 17 ). However, a previous study conducted by McNutt et al. had different results ( 18 ). In this case, blinding was successful 73% of the time, and they found that when author identity was masked, the quality of review was slightly higher ( 18 ). Although Justice et al. argued that this difference was too small to be consequential, their study targeted only biomedical journals, and the results cannot be generalized to journals of a different subject matter ( 17 ). Additionally, there were problems masking the identities of well-known authors, introducing a flaw in the methods. Regardless, Justice et al. concluded that masking author identity from reviewers may not improve review quality ( 17 ).

In addition to open, single-blind and double-blind peer review, there are two experimental forms of peer review. In some cases, following publication, papers may be subjected to post-publication peer review. As many papers are now published online, the scientific community has the opportunity to comment on these papers, engage in online discussions, and post formal reviews. For example, online publishers PLOS and BioMed Central have enabled scientists to post comments on published papers if they are registered users of the site ( 10 ). Philica is another journal launched with this experimental form of peer review. Only 8% of authors surveyed in the PRC study had experience with post-publication review ( 7 ). Another experimental form, dynamic peer review, has also emerged. Dynamic peer review is conducted on websites such as Naboj, which allow scientists to review preprint articles ( 19 ). The review is conducted on repositories and is a continuous process, which allows the public to see both the article and the reviews as the article is being developed ( 19 ). Dynamic peer review helps prevent plagiarism, as the scientific community will already be familiar with the work before the peer-reviewed version appears in print ( 19 ). Dynamic review also reduces the time lag between manuscript submission and publication. An example of a preprint server is arXiv, developed by Paul Ginsparg in 1991 and used primarily by physicists ( 19 ). These alternative forms of peer review are experimental and not yet established, whereas traditional peer review is time-tested and still widely used. All methods of peer review have their advantages and deficiencies, and all are prone to error.

PEER REVIEW OF OPEN ACCESS JOURNALS

Open access (OA) journals are becoming increasingly popular as they allow the potential for widespread distribution of publications in a timely manner ( 20 ). Nevertheless, there can be issues regarding the peer review process of open access journals. In a study published in Science in 2013, John Bohannon submitted 304 slightly different versions of a fictional scientific paper (written by a fake author working out of a non-existent institution) to a selected group of OA journals, in order to determine whether papers submitted to OA journals are properly reviewed before publication in comparison to subscription-based journals. The journals in this study were selected from the Directory of Open Access Journals (DOAJ) and Beall's List, a list of journals that are potentially predatory, and all required a fee for publishing ( 21 ). Of the 304 journals, 157 accepted the fake paper, suggesting that acceptance was based on financial interest rather than the quality of the article itself, while 98 journals promptly rejected it ( 21 ). Although this study highlights useful information on the problems associated with lower-quality publishers that do not have an effective peer review system in place, the article also generalizes the results to all OA journals, which can be detrimental to the general perception of OA journals. Two limitations of the study made it impossible to accurately determine the relationship between peer review and OA journals: 1) there was no control group of subscription-based journals, and 2) the fake papers were sent to a non-randomized selection of journals, resulting in bias.

JOURNAL ACCEPTANCE RATES

Based on a recent survey, the average acceptance rate for papers submitted to scientific journals is about 50% ( 7 ). Twenty percent of the submitted manuscripts that are not accepted are rejected prior to review, and 30% are rejected following review ( 7 ). Of the 50% accepted, 41% are accepted with the condition of revision, while only 9% are accepted without the request for revision ( 7 ).
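These figures partition all submissions into two halves; a minimal arithmetic check (a Python sketch using only the survey percentages quoted above, with hypothetical variable names) confirms that the reported categories add up:

```python
# Reported PRC survey figures ( 7 ), as percentages of all submissions.
rejected_before_review = 20   # rejected prior to peer review
rejected_after_review = 30    # rejected following peer review
accepted_with_revision = 41   # accepted on condition of revision
accepted_as_is = 9            # accepted without request for revision

rejected = rejected_before_review + rejected_after_review
accepted = accepted_with_revision + accepted_as_is

print(rejected)              # 50
print(accepted)              # 50
print(rejected + accepted)   # 100
```

The rejected and accepted halves each sum to 50%, and together account for all submissions, so no category of outcomes is missing from the survey breakdown.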

SATISFACTION WITH THE PEER REVIEW SYSTEM

Based on a recent survey by the PRC, 64% of academics are satisfied with the current system of peer review, and only 12% claimed to be ‘dissatisfied’ ( 7 ). The large majority, 85%, agreed with the statement that ‘scientific communication is greatly helped by peer review’ ( 7 ). There was a similarly high level of support (83%) for the idea that peer review ‘provides control in scientific communication’ ( 7 ).

HOW TO PEER REVIEW EFFECTIVELY

The following are ten tips on how to be an effective peer reviewer as indicated by Brian Lucey, an expert on the subject ( 22 ):

1) Be professional

Peer review is a mutual responsibility among fellow scientists, and scientists are expected, as part of the academic community, to take part in peer review. If one is to expect others to review their work, they should commit to reviewing the work of others as well, and put effort into it.

2) Be pleasant

If the paper is of low quality, suggest that it be rejected, but do not leave ad hominem comments. There is no benefit to being ruthless.

3) Read the invite

When a journal emails a scientist to ask them to conduct a peer review, it will usually provide a link to either accept or reject the invitation. Do not reply to the email; respond via the link.

4) Be helpful

Suggest how the authors can overcome the shortcomings in their paper. A review should guide the author on what is good and what needs work from the reviewer’s perspective.

5) Be scientific

The peer reviewer plays the role of a scientific peer, not an editor for proofreading or decision-making. Don't fill a review with comments on editorial and typographic issues; instead, focus on adding value with scientific knowledge and commenting on the credibility of the research conducted and the conclusions drawn. If the paper has many typographical errors, suggest that it be professionally proofread as part of the review.

6) Be timely

Stick to the timeline given when conducting a peer review. Editors track who is reviewing what and when and will know if someone is late on completing a review. It is important to be timely both out of respect for the journal and the author, as well as to not develop a reputation of being late for review deadlines.

7) Be realistic

The peer reviewer must be realistic about the work presented, the changes they suggest, and their own role. Peer reviewers can set the bar too high for the paper they are reviewing by proposing overly ambitious changes, which editors must then override.

8) Be empathetic

Ensure that the review is scientific, helpful and courteous. Be sensitive and respectful with word choice and tone in a review.

9) Be open

Remember that both specialists and generalists can provide valuable insight when peer reviewing. Editors will try to get both specialised and general reviewers for any particular paper to allow for different perspectives. If someone is asked to review, the editor has determined they have a valid and useful role to play, even if the paper is not in their area of expertise.

10) Be organised

A review requires structure and logical flow. A reviewer should proofread their review before submitting it for structural, grammatical and spelling errors as well as for clarity. Most publishers provide short guides on structuring a peer review on their website. Begin with an overview of the proposed improvements; then provide feedback on the paper structure, the quality of data sources and methods of investigation used, the logical flow of argument, and the validity of conclusions drawn. Then provide feedback on style, voice and lexical concerns, with suggestions on how to improve.

In addition, the American Physiological Society (APS) recommends in its Peer Review 101 Handout that peer reviewers put themselves in both the editor's and the author's shoes, to ensure that they provide what both need and expect ( 11 ). To please the editor, the reviewer should ensure that the peer review is completed on time and that it provides clear explanations to back up recommendations. To be helpful to the author, the reviewer must ensure that their feedback is constructive. It is suggested that the reviewer take time to think about the paper: read it once, wait at least a day, and then re-read it before writing the review ( 11 ). The APS also suggests that graduate students and researchers pay attention to how peer reviewers edit their work, and to which edits they find helpful, in order to learn how to peer review effectively ( 11 ). Additionally, it is suggested that graduate students practice reviewing by editing their peers' papers and asking a faculty member for feedback on their efforts. It is recommended that young scientists offer to peer review as often as possible in order to become skilled at the process ( 11 ). The majority of students, fellows and trainees do not get formal training in peer review, but rather learn by observing their mentors. According to the APS, one acquires experience through networking and referrals, and should therefore try to strengthen relationships with journal editors by offering to review manuscripts ( 11 ). The APS also suggests that experienced reviewers provide constructive feedback to students and junior colleagues on their peer review efforts, and encourage them to peer review in order to demonstrate the importance of this process in improving science ( 11 ).

The peer reviewer should only comment on areas of the manuscript that they are knowledgeable about ( 23 ). If there is any section of the manuscript they do not feel qualified to review, they should mention this in their comments and not provide further feedback on that section. The peer reviewer is not permitted to share any part of the manuscript with a colleague (even one more knowledgeable in the subject matter) without first obtaining permission from the editor ( 23 ). If a peer reviewer comes across something they are unsure of in the paper, they can consult the literature to gain insight. It is important for scientists to remember that if a paper can be improved by the expertise of one of their colleagues, the journal must be informed of the colleague's help, and approval must be obtained for the colleague to read the protected document. Additionally, the colleague must be identified in the confidential comments to the editor, to ensure that he/she is appropriately credited for any contributions ( 23 ). It is the reviewer's job to make sure that the assisting colleague is aware of the confidentiality of the peer review process ( 23 ). Once the review is complete, the manuscript must be destroyed and cannot be saved electronically by the reviewers ( 23 ).

COMMON ERRORS IN SCIENTIFIC PAPERS

When performing a peer review, there are some common scientific errors to look out for. Most are violations of logic and common sense: contradicting statements, unwarranted conclusions, suggestions of causation when there is only support for correlation, inappropriate extrapolation, circular reasoning, or pursuit of a trivial question ( 24 ). It is also common for authors to suggest that two variables are different because the effects of one variable are statistically significant while the effects of the other are not, rather than directly comparing the two variables ( 24 ). Authors sometimes overlook a confounding variable and fail to control for it, or forget to include important details on how their experiments were controlled or on the physical state of the organisms studied ( 24 ). Another common fault is the author's failure to define terms or to use words with precision, as these practices can mislead readers ( 24 ). Jargon and/or misused terms can be a serious problem in papers. Inaccurate statements about specific citations are also a common occurrence ( 24 ). Additionally, many studies produce knowledge that can be applied to areas of science outside the scope of the original study; it is therefore better for reviewers to look at the novelty of the idea, conclusions, data, and methodology, rather than scrutinize whether or not the paper answered the specific question at hand ( 24 ). Although it is important to recognize these points, when performing a review it is generally better practice for the peer reviewer not to focus on a checklist of things that could be wrong, but rather to carefully identify the problems specific to each paper and continuously ask themselves whether anything is missing ( 24 ). An extremely detailed description of how to conduct peer review effectively is presented in the paper How I Review an Original Scientific Article, written by Frederic G. Hoppin, Jr. It can be accessed through the American Physiological Society website under the Peer Review Resources section.

CRITICISM OF PEER REVIEW

A major criticism of peer review is that there is little evidence that the process actually works: that it is an effective screen for good-quality scientific work, or that it improves the quality of the scientific literature. As a 2002 study published in the Journal of the American Medical Association concluded, 'Editorial peer review, although widely used, is largely untested and its effects are uncertain' ( 25 ). Critics also argue that peer review is not effective at detecting errors. Highlighting this point, an experiment by Godlee et al. published in the British Medical Journal (BMJ) inserted eight deliberate errors into a paper that was nearly ready for publication, and then sent the paper to 420 potential reviewers ( 7 ). Of the 420 reviewers who received the paper, 221 (53%) responded. The average number of errors spotted by reviewers was two, no reviewer spotted more than five, and 35 reviewers (16%) did not spot any.
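Restated as rates (a minimal Python sketch using only the figures quoted above; the variable names and percentage rounding are assumptions for illustration), the experiment implies that responding reviewers caught, on average, only a quarter of the seeded errors:

```python
# Figures from the Godlee et al. error-seeding experiment ( 7 ).
papers_sent = 420        # papers sent to potential reviewers
responses = 221          # reviews actually returned
errors_inserted = 8      # deliberate errors seeded into the paper
avg_errors_spotted = 2   # mean errors found per responding reviewer
spotted_none = 35        # responding reviewers who found no errors at all

response_rate = round(100 * responses / papers_sent)                  # 53 (%)
mean_detection_rate = round(100 * avg_errors_spotted / errors_inserted)  # 25 (%)
share_spotting_none = round(100 * spotted_none / responses)           # 16 (%)

print(response_rate, mean_detection_rate, share_spotting_none)  # 53 25 16
```

Note that the 16% figure is computed against the 221 reviewers who responded, not the 420 who received the paper, which is consistent with the percentages reported in the study.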

Another criticism is that peer review is not conducted thoroughly by scientific conferences whose goal is to obtain large numbers of submitted papers. Such conferences often accept any paper sent in, regardless of its credibility or the prevalence of errors, because the more papers they accept, the more money they can make from author registration fees ( 26 ). This misconduct was exposed using SCIgen, a simple computer program developed in 2005 by three MIT graduate students, Jeremy Stribling, Dan Aguayo and Maxwell Krohn, that generates nonsense papers and presents them as scientific papers ( 26 ). A nonsense SCIgen paper submitted to a conference was promptly accepted. Nature reported in 2014 that French researcher Cyril Labbé had discovered sixteen SCIgen nonsense papers published by the German academic publisher Springer ( 26 ). Over 100 nonsense papers generated by SCIgen were published by the US Institute of Electrical and Electronics Engineers (IEEE) ( 26 ). Both organisations have been working to remove the papers. Labbé developed a program to detect SCIgen papers and has made it freely available so that publishers and conference organizers can avoid accepting nonsense work in the future. It is available at this link: http://scigendetect.on.imag.fr/main.php ( 26 ).

Additionally, peer review is often criticized for being unable to reliably detect plagiarism. However, many believe that detecting plagiarism cannot practically be included as a component of peer review. As explained by Alice Tuff, development manager at Sense About Science, 'The vast majority of authors and reviewers think peer review should detect plagiarism (81%) but only a minority (38%) think it is capable. The academic time involved in detecting plagiarism through peer review would cause the system to grind to a halt' ( 27 ). Publishing house Elsevier began developing electronic plagiarism tools with the help of journal editors in 2009 to help address this issue ( 27 ).

It has also been argued that peer review lowers research quality by limiting creativity among researchers. Proponents of this view claim that peer review discourages scientists from pursuing innovative research ideas and bold research questions with the potential to produce major advances and paradigm shifts, because they believe such work is likely to be rejected by their peers upon review ( 28 ). Indeed, in some cases peer review may result in the rejection of innovative research, as some studies may not seem particularly strong initially, yet may yield very interesting and useful developments when examined under different circumstances, or in the light of new information ( 28 ). Scientists who do not believe in peer review argue that the process stifles the development of ingenious ideas, and thus the release of fresh knowledge and new developments into the scientific community.

Peer review is also criticized because the number of people competent to conduct reviews is limited compared to the vast number of papers that need reviewing. An enormous number of papers are published (1.3 million papers in 23,750 journals in 2006), far more than the available pool of competent peer reviewers could possibly review ( 29 ). Thus, people who lack the required expertise to analyze the quality of a research paper are conducting reviews, and weak papers are being accepted as a result. It is now possible to publish any paper in an obscure journal that claims to be peer-reviewed, though the paper or journal itself may be substandard ( 29 ). On a similar note, the US National Library of Medicine indexes 39 journals that specialize in alternative medicine, and though they all identify themselves as 'peer-reviewed', they rarely publish high-quality research ( 29 ). This highlights the fact that peer review of more controversial or specialized work is typically performed by people who are interested in it and hold views or opinions similar to the author's, which can bias their review. For instance, a paper on homeopathy is likely to be reviewed by fellow practicing homeopaths, and thus is likely to be accepted as credible, though other scientists may find the paper to be nonsense ( 29 ). In some cases, papers are initially published but their credibility is challenged at a later date, and they are subsequently retracted. Retraction Watch is a website dedicated to revealing papers that have been retracted after publishing, potentially due to improper peer review ( 30 ).

Additionally, despite its many positive outcomes, peer review is criticized for delaying the dissemination of new knowledge into the scientific community, and as an unpaid activity that takes scientists' time away from activities they would otherwise prioritize and are paid for, such as research and teaching ( 31 ). As described by Eva Amsen, Outreach Director for F1000Research, peer review was originally developed as a means of helping editors choose which papers to publish when journals had to limit the number of papers they could print in one issue ( 32 ). Nowadays, however, most journals are available online, either exclusively or in addition to print, and many have very limited print runs ( 32 ). Since journals no longer face page limits, any good work can and should be published. Consequently, being selective for the purpose of saving space in a journal is no longer a valid excuse for reviewers to reject a paper ( 32 ). Some reviewers, however, have used this excuse when they have ulterior personal motives, such as getting their own research published first.

RECENT INITIATIVES TOWARDS IMPROVING PEER REVIEW

F1000Research was launched in January 2013 by Faculty of 1000 as an open access journal that immediately publishes papers (after an initial check to ensure that the paper is in fact produced by a scientist and has not been plagiarised), and then conducts transparent post-publication peer review ( 32 ). F1000Research aims to prevent delays in new science reaching the academic community that are caused by prolonged publication times ( 32 ). It also aims to make peer reviewing more fair by eliminating any anonymity, which prevents reviewers from delaying the completion of a review so they can publish their own similar work first ( 32 ). F1000Research offers completely open peer review, where everything is published, including the name of the reviewers, their review reports, and the editorial decision letters ( 32 ).

PeerJ was founded by Jason Hoyt and Peter Binfield in June 2012 as an open access, peer-reviewed scholarly journal for the biological and medical sciences ( 33 ). PeerJ selects articles to publish based only on scientific and methodological soundness, not on subjective determinants of 'impact', 'novelty' or 'interest' ( 34 ). It works on a 'lifetime publishing plan' model, which charges scientists for publishing plans that give them lifetime rights to publish with PeerJ, rather than charging them per publication ( 34 ). PeerJ also encourages open peer review, and authors are given the option to post the full peer review history of their submission with their published article ( 34 ). PeerJ also offers a preprint review service called PeerJ Preprints, in which paper drafts are reviewed before being sent to PeerJ to publish ( 34 ).

Rubriq is an independent peer review service designed by Shashi Mudunuri and Keith Collier to improve the peer review system ( 35 ). Rubriq is intended to decrease redundancy in the peer review process so that the time lost in redundant reviewing can be put back into research ( 35 ). According to Keith Collier, over 15 million hours are lost each year to redundant peer review, as papers rejected from one journal are subsequently submitted to a less prestigious journal, where they are reviewed again ( 35 ). Authors often have to submit their manuscript to multiple journals and are often rejected multiple times before they find the right match, a process that can take months or even years ( 35 ). Rubriq makes peer review portable in order to help authors choose the journal best suited for their manuscript from the beginning, thus reducing the time before their paper is published ( 35 ). Rubriq operates under an author-pay model, in which the author pays a fee and their manuscript undergoes double-blind peer review by three expert academic reviewers using a standardized scorecard ( 35 ). The majority of the author’s fee goes towards a reviewer honorarium ( 35 ). The papers are also screened for plagiarism using iThenticate ( 35 ). Once the manuscript has been reviewed by the three experts, the most appropriate journal for submission is determined based on the topic and quality of the paper ( 35 ). The paper is returned to the author in 1-2 weeks with the Rubriq Report ( 35 ). The author can then submit their paper to the suggested journal with the Rubriq Report attached. The Rubriq Report gives journal editors a much stronger incentive to consider the paper, as it shows that three experts have recommended it to them ( 35 ). Rubriq also has benefits for reviewers: the Rubriq scorecard gives structure to the peer review process, making it consistent and efficient, which decreases time and stress for the reviewer. Reviewers also receive feedback on their reviews and, most significantly, are compensated for their time ( 35 ). Journals benefit as well, since they receive pre-screened papers, reducing the number of papers sent to their own reviewers that often end up rejected ( 35 ). This can reduce reviewer fatigue and allow only higher-quality articles to be sent to their peer reviewers ( 35 ).

According to Eva Amsen, peer review and scientific publishing are moving in a new direction, in which all papers will be posted online, and a post-publication peer review will take place that is independent of specific journal criteria and solely focused on improving paper quality ( 32 ). Journals will then choose papers that they find relevant based on the peer reviews and publish those papers as a collection ( 32 ). In this process, peer review and individual journals are uncoupled ( 32 ). In Keith Collier’s opinion, post-publication peer review is likely to become more prevalent as a complement to pre-publication peer review, but not as a replacement ( 35 ). Post-publication peer review will not serve to identify errors and fraud but will provide an additional measurement of impact ( 35 ). Collier also believes that as journals and publishers consolidate into larger systems, there will be stronger potential for “cascading” and shared peer review ( 35 ).

CONCLUDING REMARKS

Peer review has become fundamental in assisting editors in selecting credible, high-quality, novel, and interesting research papers to publish in scientific journals and in ensuring the correction of any errors or issues present in submitted papers. Though the peer review process still has some flaws and deficiencies, a more suitable screening method for scientific papers has not yet been proposed or developed. Researchers have begun, and must continue, to look for ways of addressing the current issues with peer review to make it a foolproof system that allows only quality research papers to be released into the scientific community.


Peer Review Examples (+14 Phrases to Use)


‍ Table of Contents:

  • Peer review feedback examples
  • What are the benefits of peer review feedback examples?
  • What are peer review feedback examples?
  • 5 key parts of good peer review examples
  • 14 examples of performance review phrases
  • How do you give peer review feedback to remote teams?
  • The benefits of a feedback culture
  • How to implement a strong feedback culture

A peer review is a type of evaluative feedback. It focuses on the strengths and areas of improvement for yourself, your team members, and even the organization as a whole. This form of evaluation can benefit all parties involved, helping to build self-awareness and grow in new ways that we might not have realized before. Of course, the best examples of peer review feedback are those that are well-received and effective in the workplace, which we will go over in the next section.

As mentioned, peer review feedback is a great way to identify your strengths and weaknesses and those of others. The benefits are two-fold: it helps you grow in new ways that may have been difficult for you before, while also making sure everyone involved feels confident about their abilities moving forward.

For instance, organizations with robust feedback cultures can close gaps that hinder their performance and seize business opportunities whenever they present themselves. This dual benefit gives them a competitive advantage and a more positive workplace. Leading companies that enjoy these advantages include Cargill, Netflix, and Google. Peer review feedback can also be a great tool for conducting your annual performance reviews. It gives managers visibility and insights that might not be possible otherwise. The feedback can help you better understand how your employees view their performance, as well as what they think the company's expectations are of them. This is especially helpful for those who work remotely, as it allows managers to see things that might otherwise be missed.

For example, if an employee works from home often or telecommutes frequently, it can be more difficult for managers to get a sense of how they are doing. This is where peer review feedback comes in—if their peers notice issues that need attention, this provides the manager with valuable insights that might otherwise have gone unnoticed. Everyone must be on the same page about what exactly it is they want from these sessions and how their employees will benefit from receiving them.

A Gallup poll revealed that organizations that give their employees regular feedback have turnover rates almost 15% lower than those whose employees receive none. This statistic indicates that regular reviews, including peer reviews, are important. However, so is giving the right kind of peer review feedback.

As such, when you have a peer review session, think about some good examples of the type of feedback that might be beneficial for both parties. These would be the relevant peer review examples you want to use for your organization.

One example would be to discuss ways in which the employee’s performance may have been exemplary when you give them their peer review feedback forms . This conversation gives the person being reviewed an idea about how well they're doing and where their strengths lie in the form of positive feedback. 

On the other hand, it also helps them know there is room for improvement where they may not have realized it before in the form of negative feedback.

Another example would be to discuss how you might improve how the person being reviewed conducts themselves on a day-to-day basis. Again, this action can help someone realize how their performance can be improved and provide them with suggestions that they might not have thought of before.

For example, you may notice that a team member tends to talk more than is necessary during meetings or wastes time by doing unnecessary tasks when other pressing matters are at hand. This type of negative feedback would allow the person receiving it to know what areas they need to work on and how they can improve themselves.

As mentioned previously, peer reviews are a great way of giving an employee concrete suggestions for the areas in which they need improvement, as well as those where their performance is exemplary.

To ensure that your team feels valued and confident moving forward, you should give them the best examples of peer review feedback possible. The following are five examples of what constitutes good peer review feedback:

1. Use anonymity. Keeping reviews anonymous makes workers feel comfortable with the content and confident that no bias has entered the review process.

2. Schedule them frequently enough. A good employee experience with peer reviews involves scheduling them often enough that no one has an unwelcome surprise come annual or biannual performance appraisal time.

3. Keep them objective and constructive. Your goal is to help the peers you're reviewing improve so they can continue to do an even better job than before.

4. Have key points to work on. Ask questions such as: What is the goal? What does the company want people to get out of each session?

5. Choose the right people to give the review. Personnel familiar with the employee's work should be the ones evaluating their performance and providing peer feedback.

You can use the following positive performance appraisal phrases to recognize and coach your employees for anything from regularly scheduled peer reviews to biannual and annual appraisals:

  • "I can always count on you to..." ‍
  • "You are a dependable employee who meets all deadlines." ‍
  • "Your customer service is excellent. You make everyone feel welcome and comfortable, no matter how busy things get." ‍
  • "The accounting work that you do for our team helps us out in the long run." ‍
  • "I appreciate your helpfulness when it comes to training new employees. You always seem willing to take some time out of your day, even though you're busy with other tasks, to show them how we do things here at [COMPANY]." ‍
  • "It's so nice to see you staying on top of your work. You never miss a deadline, and that is very important here at [COMPANY]." ‍
  • "I can always count on you when I need something done immediately." ‍
  • "Your communication skills are exceptional, and I appreciate the way you always get your point across clearly." ‍
  • "You are always willing to lend an ear if someone needs help or has a question about something. You're great at being the go-to person when people need advice." ‍
  • "I appreciate your ability to anticipate our customers' needs."

Negative performance review phrases can be helpful if handled the right way and often contribute to improving the employee's performance. 

Here are some examples of effective negative performance review phrases you can use:

  • "You seem to struggle with following the company's processes. I would like to see you get better at staying on top of what needs to be done and getting it done on time." ‍
  • "I'm concerned that your work quality has slipped lately. You're still meeting deadlines, but some of your work seems rushed or incomplete. I want to make sure that you're giving everything the attention it deserves." ‍
  • "I noticed that you've been getting a lot of customer complaints lately. Is there anything going on? Maybe we can work together and come up with some solutions for how things could be better handled in the future?" ‍
  • "You seem overwhelmed right now, and it's affecting your work quality. I want to help you figure out how we can better distribute the workload so that you're not feeling like this anymore."

When giving peer review feedback to remote teams, it is essential for everyone involved that the employee being reviewed feels comfortable and respected. And whether a peer or direct report gives the remote employee a review, the most effective way to ensure this happens is by providing open communication and constructive feedback throughout the process.

When you work remotely, however, it can be difficult to get the opportunity for peer feedback. There are still ways of ensuring that the process is beneficial and productive.

The following are some examples of how to go about giving effective peer review feedback when working virtually:

  • Take advantage of webcams or video conferencing to make sure that you can see the employee's facial expressions and monitor body language during a performance review, remote or otherwise. ‍
  • Just like with any in-person performance review, it's critical to schedule a regular time for sessions so they don't catch anyone by surprise. ‍
  • Make the overall goal clear on both your end and theirs; this helps them prepare ahead of time and ensures there are no surprises. ‍
  • Ensure that you keep the feedback objective with constructive criticism, as this is what will allow them to improve their performance in a way that they can take advantage of immediately. Include all these key points in your company peer review templates also. ‍
  • Be prepared for these sessions by having a list of key points you want to cover with your peer reviewer—this helps guide the conversation while ensuring no important points are overlooked.

When employees enjoy their work, understand their goals, and know the values and competencies of the job, job satisfaction increases, along with their performance. In addition, the link between productivity and effective feedback is well established. For instance, 69% of workers said they would work harder if their efforts were recognized, according to LinkedIn.

Continuous and regularly scheduled performance appraisal feedback helps with employee development, clarifies expectations, aligns goals, and motivates staff (check out our article Peer Review Feedback to find out why peer feedback is so essential), establishing a positive workplace. Lastly, a workplace that dedicates itself to motivating people to be better will improve employee engagement and the levels of performance.

If you haven't implemented a culture for using feedback yet, there are several effective ways to go about it. One good way to kick things off is to first identify teams or some other similar organizational unit and have them experiment with the social feedback system.

While peer reviews should be given every three to four weeks, or even at the end of a project sprint, the cycles for building a strong feedback culture can be quarterly or monthly, depending on your preferences and operations.

After the three cycles are finalized, you typically have built up enough feedback information to start the organization on its path to a strong feedback culture.

Knowing these peer review feedback examples and tips on giving them to remote teams will help you become more comfortable with this type of evaluative discussion. It can be difficult at first, but remember that the benefits are worth it! And remember: when giving peer review feedback, make sure you keep each session objective. This helps ensure they're constructive and that both parties walk away feeling as though they've learned a lot from them.


Major revisions: Sample peer review comments and examples

‘Major revisions’ is one of the most common peer review decisions. It means that the peer reviewer considers a manuscript suitable for publication if the authors rectify some major shortcomings. As a peer reviewer, it is useful to learn about common reasons for a ‘major revision’ verdict. Furthermore, you can get inspired by sample peer review comments and examples which reflect this verdict appropriately.

Common reasons for a ‘major revisions’ decision

Knowing how to react to a ‘major revisions’ verdict on your own manuscript is important. Yet, it is different from evaluating someone else’s manuscript.

As a manuscript reviewer, you decide on a ‘major revisions’ verdict if you think that the manuscript is good, but that the authors have to address some significant issues before it can be published.

Sample peer review comments for a ‘major revisions’ verdict

“The manuscript shows a lot of promise, but some major issues need to be addressed before it can be published.”

“I enjoyed reading this manuscript, and believe that it is very promising. At the same time, I identified several issues that require the authors’ attention.”

“The key argument needs to be worked out and formulated much more clearly.”

“The empirical evidence is at times insufficient to support the authors’ claims. For instance, in section…”

“I believe that the manuscript addresses a relevant topic and includes a timely discussion. However, I struggled to understand section 3.1.”

“The line of argumentation should be improved by dividing the manuscript into clear sections with subheadings.”


How to perform a peer review

You’ve received or accepted an invitation to review an article. Now the work begins. Here are some guidelines and a step-by-step guide to help you conduct your peer review.

  • General and Ethical Guidelines
  • Step by Step Guide to Reviewing a Manuscript
  • Top Tips for Peer Reviewers
  • Working with Editors
  • Reviewing Revised Manuscripts
  • Tips for Reviewing a Clinical Manuscript
  • Reviewing Registered Reports
  • Tips for Reviewing Rich Media
  • Reviewing for Sound Science


Peer review examples: 50+ effective phrases for your next review

Are you struggling to write effective reviews for your peers? Learn the dos and don'ts and get inspired by 50+ peer review examples for coworkers.

Let's face it: giving feedback can be challenging, especially when it comes to peer reviews.

As a peer, you're in a unique position to provide constructive feedback to your colleagues. You want to help them grow and develop. But finding the right words to use is no walk in the park.

🙋 We're here to help you ensure your feedback is effective and actionable.

We collected a comprehensive peer review sample: 50+ effective review phrases to use in your next performance or skill review, helping you provide feedback that's supportive, constructive, and inspiring. You'll find peer review phrases for positive performance and constructive peer review feedback examples.

Plus, we've also included tips for giving peer review feedback (and how not to do it), supported by multiple peer feedback examples.


❓ What are peer review feedback examples?

Peer review feedback is part of an employee's development and performance process and an essential component of 360 feedback.

Performance reviews are a key part of 360-degree feedback systems and can be the difference between a happy employee and one who is just going through the motions.

Think of peer reviews as a thermometer that measures an employee's performance, skills, abilities, or attitudes as rated by their fellow co-workers and team members.


As part of a wider performance management system , peer reviews help an organization in the following ways:

  • 🎯 Can be used as a goal-setting opportunity.
  • 🔎 Peer feedback helps identify the strengths and weaknesses of individual employees, teams, and the company as a whole.
  • 🌱 Suggestions from peers can help employees and team members develop personally and professionally .
  • 🔗 Boost employee motivation and satisfaction and strengthen trust and collaboration within the team .
  • 📈 Through peer reviews, employees can receive constructive criticism and solutions on how they can work to meet the company's expectations and contribute to its growth.

🌟 33 Positive peer review feedback examples

We structured these positive feedback samples into competency-specific examples and job-performance-specific examples.

🗣️ Communication skills

  • "You effectively communicate with colleagues, customers, vendors, supervisors, and partners. You are a key driver of our high customer satisfaction scores."
  • "You are an excellent communicator, and you are adept at discussing difficult issues effectively and straight to the point."
  • "Tom has excellent communication skills and always keeps the team up-to-date on his progress, ensuring the team is always on the same page."
  • "John is an excellent mentor who is always willing to share his knowledge and experience with others, providing guidance and support when needed."
  • "Your approach to giving peer feedback is exemplary. You have a knack for delivering constructive insights in a manner that fosters growth and understanding. Your peers, including myself, value the way you phrase your feedback to be actionable and uplifting."

🤝 Teamwork & collaboration

  • "I appreciate the way you collaborate with your team and cross-functionally to find solutions to problems."
  • "You're an effective team member, as demonstrated by your willingness to help out and contribute as required."
  • "Sarah is a true team player who always helps out her colleagues. She consistently meets deadlines and produces work of a high standard."
  • "Bob is an excellent collaborator and has built strong relationships with his colleagues. He actively seeks out opportunities to share knowledge and support others on the team."

🤗 Mentoring & support

  • "I appreciate that you never make your team members feel belittled even when they ask the simplest questions. You're eager to help, and you're exceptional at mentoring when people need advice."
  • "I appreciate how Julie is always willing to share her knowledge and expertise with others. She is an excellent resource for the team and is always happy to help out when someone needs guidance."

😊 Positivity & attitude

  • "I appreciate how Sarah always brings a positive attitude to the team. She is always willing to help out and support others, and her enthusiasm is infectious."
  • "I appreciate how Maria always takes the time to build relationships with her colleagues. She is friendly and approachable, and she has a talent for bringing people together."
  • "I appreciate how you remain calm under pressure and greet customers with a smile."


🙏 Professionalism & work ethics

  • "I admire how you uphold organizational standards for inclusion, diversity, and ethics."
  • "I appreciate how John builds relationships with clients and colleagues. He is always professional and courteous, and he has a natural talent for making people feel comfortable and valued."
  • "I appreciate how David always takes a thoughtful and considered approach to his work. He is always looking for ways to improve his performance and is never satisfied with simply meeting the bare minimum."

⭐ Quality of work & performance

  • "Your copy-editing skills are excellent. You always ensure that all articles published by the content marketing team are thoroughly edited and proofed, which is very important here at (COMPANY)."
  • "You've improved XX by XYZ%, and you've streamlined the work process by doing XYZ."
  • "John has a great eye for detail and consistently produces high work quality. I appreciate the way he is always happy to lend a hand to others when needed and proactively offers ideas to improve processes."
  • "Karen is a fast learner and has a keen eye for detail, making her a valuable asset to the team."
  • "I can always count on you to give our customers the best customer experience, and I appreciate the way you go over and beyond for them."

🚀 Innovation & initiative

  • "You are always suggesting new ideas in meetings and during projects. Well done!"
  • "You constantly show initiative by developing new ways of thinking to improve projects and overall company success."
  • "Jane has been doing an excellent job with her projects, and her creativity and innovative ideas have helped move the team forward."
  • "Samantha has a creative approach to problem-solving, and I have noticed that she often comes up with unique and innovative solutions to complex challenges."

🌱 Self-improvement & learning

  • "You are constantly open to learning and ask for more training when you don't understand XYZ processes."
  • "You accept coaching when things aren't clear and apply what you learned to improve XYZ ability."
  • "David is a role model for the rest of the team with his continuous self-improvement mindset and focus on developing his skills and expertise."
  • "I appreciate how Karen is always looking for ways to improve her work and is never satisfied with the status quo. She is a great role model for the rest of the team."

💼 Leadership skills

  • "You show great leadership signs by owning up to mistakes and errors, fixing them, and communicating with others (quickly) when you're unable to meet a deadline."
  • "During our recent project, I noticed how effectively you led the team. Your ability to listen to everyone's input, make decisions promptly, and delegate tasks was truly commendable. The team felt both supported and empowered under your guidance."
  • "Your leadership during challenging times is admirable. You remain calm, focused, and provide clarity when most needed. This not only keeps the team aligned but also instills a sense of trust and security amongst us."


😥 23 Negative performance peer review examples

All of the above are peer review examples of positive performance.

But it's not always that we only have good things to share.

So, what happens when you want to give negative feedback in cases of low or disappointing performance?

If handled right, negative feedback can improve an employee's performance. The key is to give criticism constructively.

📉 Overall employee performance

  • "While your presentations are always well-researched and insightful, they can sometimes run longer than scheduled, which affects subsequent agenda items. For future projects, consider practicing time management during meetings or working on summarizing key points more concisely."
  • "I've noticed that you often work late hours to meet deadlines. While your commitment is commendable, it's crucial to balance workload and ensure that tasks are spread out adequately. Perhaps adopting a more structured approach to project management or seeking delegation opportunities could help prevent last-minute rushes."
  • "I've observed that while you excel in your core tasks, there's occasionally a delay in responding to emails or returning calls. This sometimes causes minor setbacks in our project timelines. It might be beneficial to set aside dedicated times during the day for communication or using a tool to manage and prioritize your inbox."

🧠 Mindset & perspective

  • "You seem to focus more on what can't be done instead of offering solutions. I would like to see you develop an open mindset and work alongside our teammates on brainstorming solutions."
  • "Jane has strong ideas but could work on being more open-minded and considering the perspectives of others to create a more collaborative work environment. I highly encourage her to actively listen to others' ideas and provide constructive feedback. As a result, I think she will become a better collaborator."
  • "Lisa seems to stick to familiar routines and processes and be resistant to change. I think that she could benefit from being more open to change and new ways of doing things to encourage growth and innovation for the team. For a concrete suggestion, I would recommend for her to exchange ideas with new team members with different backgrounds or skill sets to broaden her perspective and challenge her existing ideas."
  • "I think your ideas are really creative and valuable, but I've noticed that you sometimes struggle to communicate them effectively in meetings. I think it would be helpful for you to practice presenting your ideas to a smaller group or one-on-one, and to ask for feedback from your colleagues on how you can improve your communication skills."
  • "Greg tends to be unclear or vague in his messaging, causing confusion and misunderstandings. I encourage him to practice active listening techniques such as asking questions to clarify understanding, and summarizing the conversation."
  • "I've observed challenges in your approach to communicating with remote workers. At times, there seems to be a disconnect or delay in relaying vital information, which has led to inefficiencies and misunderstandings. It might be beneficial to revisit your communication tools and strategies to ensure that everyone, regardless of their location, stays informed and aligned."
  • "I appreciate your attention to detail and your commitment to producing high-quality work, but I've noticed that you sometimes struggle to take feedback or suggestions from others. I think it would be helpful for you to practice being more open to feedback and to work on developing your collaboration skills."
  • "Frank often puts his personal goals above the team's objectives, causing conflict and tension in the workplace. He could work on being more of a team player and prioritizing the team's objectives over personal goals to avoid conflict and tension and help the team meet our goals faster. For example, I would like him to attend our team-building activities or events to help build stronger relationships within our team."

⏰ Time management & meeting deadlines

  • "I've noticed that you're having difficulty meeting your deadlines. I think it would be helpful for you to break down your tasks into smaller, more manageable pieces, and to communicate with fellow colleagues if you need more time or support to complete your work."
  • "Alex could benefit from developing better time management skills to prioritize tasks effectively and avoid delays and missed deadlines. I think that with the right time management training and resources, he will discover time saving processes."

🛠️ Task execution & quality

  • "I noticed you aren't meeting your targets. Let's get on a call in two days to go over your cold email strategy. Perhaps you can use an email verification tool to validate prospects' addresses."
  • ‍ "Jim could benefit from working on his organization skills and prioritizing his workload to avoid missed deadlines and inconvenience for the team. He could work on creating a system to better manage his workload and set reminders for important deadlines."
  • "Although he is very fast at handling customer requests, Tim is not detail-oriented and often overlooks important aspects of a project, leading to mistakes and oversights. One idea for improving his attention to detail while maintaining his fast response time could be to implement a system of double-checking or quality control." ‍
📈 Explore 45 performance review phrases and extra tips and tricks for giving better performance feedback.

💼 Professionalism & attitude ‍

  • ‍ "Peter could benefit from improving his professionalism in the workplace and avoiding negative or gossipy conversations that create tension. I think that focusing on more positive and constructive interactions with colleagues could help create a better work environment and work relationships."
  • "Samantha can be confrontational and abrasive, making it difficult for others to work with her. She could work on being more approachable and collaborative. One way to do so is by practicing active listening and being more mindful of how she communicates with others."

🌱 Personal development & growth

  • "I appreciate the effort you're putting in, but I've noticed that you're struggling with certain tasks. I think it would be helpful for you to receive additional training or guidance in those areas."
  • "Sarah has great potential but there is room for improvement, especially with regards to seeking out opportunities to contribute and taking initiative on tasks. I think she could benefit from setting goals and creating a plan to take more ownership of her work."

💬 Giving & receiving feedback

  • "During team meetings, it would be beneficial if you could encourage other team members (especially quiet ones) to voice their opinions. When a few individuals dominate the discussions, it might be stifling innovative ideas from others."
  • "I've noticed you generally give feedback in group settings. It would be more effective and respectful to provide constructive criticism in private to avoid any unnecessary embarrassment or tension amongst the team."
  • "When receiving feedback, I've observed that you sometimes become defensive or dismissive. Truly embracing feedback can catalyze growth and development. It might be beneficial to explore methods or strategies that foster a more open and accepting attitude towards feedback."
🌱 Use your peer's feedback to create a development plan and set the path for growth. First, set concrete professional development goals. Then, define the steps that will make those goals a reality.


📝 How do you write a peer review: Dos & don'ts for giving feedback to peers

How to write a peer review

The following steps will help you learn how to write a peer review for your co-workers.

For each step, we included positive peer feedback examples and negative peer feedback examples.

By following these guidelines, giving quality feedback should no longer feel like an intimidating task.

1. Think about their work

Before writing your peer review, think about your colleagues' contribution to the workplace.

Then, to get you started, ask yourself the following questions:

  • What are their strengths? What are their weaknesses?
  • How can they improve?
  • What are their latest accomplishments?
  • What do I like or appreciate about them?
  • What do I wish they did less? What do I want them to do more?
  • What are their expected competencies? (In case your company uses a competency model.)

🔴 DO NOT make the peer review personal. Try to avoid "I" statements such as "I don't like..." or "I'm not comfortable with..." when giving constructive feedback.

🟢 DO tie your comments to the goal of the peer review, not your personal preferences.

👎 "I don't really pay attention to what John does, so I can't say much about his work."

This peer review example is not helpful or constructive feedback because it doesn't provide any specific information or insights about John's work or his abilities. The feedback is vague and non-specific.

This kind of feedback is not only unhelpful, but it can also be demotivating and discouraging for John. He may feel that his contributions are not valued or recognized.

Recognition is something people need to stay motivated and engaged. The last thing you want is to disengage and demotivate your peers.

👍 "John has a great eye for detail and consistently produces high-quality work. I appreciate his ability to prioritize tasks and his willingness to help others when needed."

This peer review sample is a good peer review example. It acknowledges John's strengths and provides specific examples of his skills and abilities.

The reviewer highlights John's ability to produce high-quality work, his attention to detail, and his willingness to help others, which are all positive attributes that contribute to the team's success.

👎 "I don't like the way that Mary interacts with others on the team. She can be really abrasive and confrontational, which makes it difficult to work with her."

This peer review example is overly negative and vague, providing no specific information or insights that could help the colleague improve. It also uses emotionally charged language that can be interpreted as a personal attack rather than constructive feedback.

A more constructive version would be specific and actionable, helping Mary improve while preserving a positive work environment.

👍 "I've noticed that Mary sometimes comes across as confrontational or abrasive in team meetings, which can create tension and make it difficult to collaborate effectively. I think it would be helpful for Mary to work on developing more positive and collaborative communication skills, such as active listening and empathy, to build more positive relationships with her colleagues."

This is another good peer review example because it acknowledges the issue while providing specific and actionable feedback on areas for improvement.

By focusing on specific behaviors that Mary can improve, such as active listening and empathy, the feedback is constructive and helpful. It also gives her concrete strategies for growth and development in her role, which can help her continue to excel in her work.

Overall, this kind of feedback can be a powerful tool for helping colleagues to grow and develop in their roles, and for promoting a more collaborative work environment.

2. Be mindful of your colleague's feelings

While it's okay to give constructive feedback and share your honest thoughts on a peer review, you should communicate your opinions professionally without being rude or insulting.

Also, instead of constantly reiterating their weaknesses, let their strengths shine and think of solutions that could motivate them to do better.

🟢 DO be mindful of the tone of your feedback. Using harsh or judgmental language can damage relationships and create a negative work environment.

🔴 DO NOT  use condescending language when evaluating your colleague's performance.

Let's look at some peer feedback examples.

👎 "I don't believe my colleague can function effectively in this job."

👎 "I'm not really sure what Mary does around here. She seems to just be coasting and not really contributing much to the team."

👎 " Mary's work is consistently subpar and it's frustrating to work with her. She needs to work harder."

These are poor examples of peer feedback because they are overly negative and do not provide any actionable steps for the person receiving the feedback to improve their performance.

Words like "subpar" and "frustrating" can be hurtful and demotivating, and don't give any specific information on what exactly Mary needs to improve on or how to do so.

👍 "While there's room for improvement, I appreciate the effort Mary puts into her work. I think she could benefit from more training and guidance on how to prioritize tasks."

👍 "I think Mary has the potential to be a great team member, but she could benefit from improving her communication skills. I would suggest that she work on being more clear and direct in her interactions with others."

These are better examples of constructive peer feedback because they acknowledge Mary's effort and provide specific steps for improvement. The reviewer uses more positive language to acknowledge that Mary is trying, and suggests that training and guidance could help her prioritize tasks more effectively and communicate more clearly.

The positive examples are more specific, actionable, and solution-focused, and are more likely to lead to improved performance and a more positive work environment.

By focusing on specific areas for improvement and suggesting a way forward, the feedback provides Mary with a clear path to success and encourages her to continue working hard to improve her skills.

3. Explain in detail

When given a peer review form, focusing solely on one particular area of your co-worker's performance won't help them in the long run.

🟢 DO share a comprehensive review. It helps your manager identify areas of improvement and helps your colleague understand how others view their overall performance at work.

🔴 DO NOT focus on a single event or project. Discuss how they operate daily and their attitudes to work.

Do they have excellent communication skills? Are they great at working with people?

How do they approach brainstorming sessions or complex tasks?

🔴 DO NOT critique every tiny detail about your colleague's performance. For example, a colleague's approach to handling a difficult task may differ from yours; they might take some time away from everyone before coming up with answers.

🟢 DO understand and appreciate that everyone has a different working style that reflects their personality and who they are.

Let's analyze some concrete peer review feedback examples.

👎 "Samantha's work is good."

👎 "Jane is a great teammate. Great work."

For the negative examples of peer review comments, the feedback is too vague. It doesn't provide enough detail to be actionable or meaningful for the recipient.

👍 "Samantha has great communication skills and is always willing to step in and help others. She excels at problem-solving and is able to stay calm under pressure."

👍 "I really appreciate Jane's ability to stay calm under pressure and help us problem-solve when things get tough. She's always willing to pitch in and go above and beyond to make sure the team succeeds, whether it's taking on extra work or providing a listening ear when someone needs to vent."

For the positive examples of peer review comments, the reviewer provides specific examples of the colleague's behavior and how it positively impacts the team. As a result, the feedback is more meaningful; the receiving peer can use it to continue being a great teammate in the future.

👎 "I can't believe how poorly Tom handled the client meeting last week. He was disorganized and unprepared, and it was clear that the client was not impressed."

This example of peer review feedback is overly negative and strictly refers to a single event. There is no indication that Tom always displays the same behavior. It also does not acknowledge any strengths or positive attributes that Tom may possess, which can make the feedback feel overly harsh and unfair.

👍 "I think Tom has a lot of potential, but I have noticed that he tends to struggle with giving presentations. I think it would be helpful for him to work on his preparation and public speaking skills, perhaps by attending a workshop or training session. With some additional support and training, I believe Tom could continue to grow in his role and make a positive impact on the team."

In this example, the reviewer does not refer to a single event but to a recurring behavior. By providing specific feedback and actionable steps for improvement, the feedback is more constructive and helpful for the colleague. It also focuses on growth and development rather than criticism and negativity.

This is what we call effective peer review.

4. Write clearly

Summarize what you've noticed about your co-worker's performance.

🟢 DO mention areas of improvement you've noticed and highlight areas you hope to see them work on in the future.

🔴 DO NOT beat around the bush with your answers during peer reviews.

Ensure your answers are clear, concise, and easy to understand.

👎 "Brian is fine, I guess."

This peer review doesn't provide any specific information or insights about Brian's work or his abilities.

This is a clear example of ineffective feedback. There is nothing actionable for Brian. Even if, on the surface, the reviewer did not share anything negative, there is no takeaway for the reviewee.

👍 "I've noticed that Brian has been taking on more responsibilities lately and doing a great job. I think he could benefit from more opportunities to showcase his leadership skills and contribute to larger projects."

This peer review sample is a good example of constructive feedback. It acknowledges Brian's growth and contributions to the team, and suggests opportunities for him to further develop his skills and take on more responsibility.

By acknowledging that Brian has been taking on more responsibilities and doing a great job, the feedback is specific and provides actionable steps for Brian to continue to excel in his role.

👎 "I think Mary is a good worker overall, but there are some things she could improve on. Maybe she could be more organized or something."

👍 "I have noticed that Mary tends to struggle with prioritizing her tasks and meeting deadlines. To help her improve in these areas, I think it would be helpful for her to work on creating more detailed to-do lists or setting reminders for herself. Additionally, I think Mary could benefit from some additional training or support in project management skills."

📜 Templates you can use for you next peer review

Employee peer review templates for annual performance reviews

While there are different ways to create a peer review template, we recommend using Google Docs or Microsoft Word. Not only are they easier to use, but they are free too. With these two online document creation tools, you can say goodbye to purchasing expensive peer review templates or downloading special software.

Here is our free Google Forms template you can give colleagues to send each other meaningful feedback.

Peer review feedback form

  • 🌱 Make it as easy as possible for people to give each other meaningful feedback.
  • 🧩 It's 100% customizable so you can truly make it your own.
➡️ Download your free peer review feedback form here.


You could also use Zavvy's feedback tool to collect peer reviews.

With Zavvy, you can create peer review forms that are relevant to the department or job.

For example, peer review forms for sales representatives, customer support specialists, or receptionists should focus on soft skills. In contrast, forms for a cybersecurity engineer or software developer might focus on technical skills.

This means that you'll need some kind of sheet that outlines your peer's competencies.


Don't forget to leave blank spaces on your peer review forms to allow the reviewers to add important yet overlooked topics.

If you're using Zavvy, you can either have reviewees choose their peers themselves - or have managers do it for them.


➡️ Facilitate feedback and growth with Zavvy

When implemented right, peer reviews can offer insights you might never have otherwise discovered and increase an employee's performance.

Zavvy makes collecting feedback a breeze . With just a few clicks, you will have recurring feedback cycles.

  • Select the types of feedback you want to collect - any combination of self-review , downward, upward feedback, or peer reviews.
  • Customize the survey forms for each feedback type (Or use one of our ready-to-use templates ).
  • Define your anonymity settings (Should all feedback be anonymous ?)
  • Decide if you want to include a performance calibration step .
  • Select the participants for your review cycle (For example, Taktile automates feedback cycles for their new hires at the 6-week, 12-week, and 18-week marks of their new hire journeys).
  • Define the timeline for nomination, writing, and feedback-sharing tasks.
  • Double-check all the details and activate your cycle 🏁 .


But, it's one thing to collect peer review feedback, and it's a different ballgame to use it to propel employee growth.

Don't leave your employees wondering what comes next.

Instead, roll out learning and development programs to improve their skills and put them on the right career path .

📅 Want to ensure a cycle of continuous development and grow your people? Book a demo  today.


Keke is Zavvy's expert in learning experience. On our blog, she shares insights based on her studies in learning design and her experience with our customers.
