Code Review

Code review (sometimes referred to as peer review) is the systematic examination of computer source code. It is intended to find mistakes overlooked during software development, improving the overall quality of software. Reviews are done in various forms, such as pair programming, informal walkthroughs, and formal inspections.[1]

Introduction

Simple Definition

A code review is a process in which two or more developers visually inspect a set of program code, typically several times. The code under review can be a method, a class, or an entire program. The main code-review objectives are:

  1. Best Practice ~ A more efficient, less error-prone, or more elegant way to accomplish a given task.
  2. Error Detection ~ Discovering logical or transitional errors.
  3. Vulnerability Exposure ~ Identifying and averting common vulnerabilities like Cross-Site Scripting [XSS], Injection, Buffer Overflow, Excessive Disclosure, etc. Although many controls are inapplicable and can be ignored, a STIG [e.g., Application Security STIG 4.3] provides an excellent vulnerability checklist.
  4. Malware Discovery ~ This often-overlooked and very special code-review objective looks for segments of code that appear extraneous, questionable, or flat-out weird. The intent is to discover back doors, Trojans, and time bombs. This objective is often overlooked because the very idea of malware and malicious intent may ring overly dramatic to some developers. However, particularly in today's peril-ridden world, malevolent code is a very real threat and should not be overlooked, especially by US government agencies and departments such as the DoD.

Of the four objectives, malware discovery is the only one that requires human detection. A program containing an obvious back door can be scanned using a tool like Fortify and come out looking as pure as the driven snow.

This is not to disparage Fortify and similar scanning tools. They are built to discover and highlight vulnerabilities, and they do that job well. They are not built to discern malicious program code. That task remains - at least for now - the exclusive domain of human programmers.
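As a concrete illustration of the vulnerability-exposure objective, the following Python sketch shows the kind of injection flaw a reviewer would flag, alongside the parameterized fix a reviewer would suggest. The function names and schema are hypothetical, chosen only for the example:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Flagged in review: string interpolation lets a crafted username
    # inject arbitrary SQL (e.g. "x' OR '1'='1" matches every row).
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Reviewer's suggested fix: a parameterized query treats the
    # input strictly as data, never as SQL syntax.
    query = "SELECT id, name FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()
```

A scanner would likely flag the first function too; the reviewer's added value is confirming the flaw is reachable and proposing the idiomatic fix.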

Artifacts

The most important byproduct of a properly conducted code review is a written record describing:

  • Who ~ Names of those involved in the Review.
  • When ~ Date and time the Review was conducted.
  • Why ~ Best-Practice, Error Detection, Vulnerability Exposure, Malware Discovery or a combination.
  • Where ~ Office number or other location identifier.
  • What ~ Name of the class, method, or program, plus line ranges and other particulars specific to the reviewed code.
  • Result ~ What was disclosed during the course of the Review.
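The written record described above maps naturally onto a simple data structure. A minimal sketch in Python, where the class and the sample values are hypothetical and the field names follow the list:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodeReviewRecord:
    who: list[str]    # names of those involved in the review
    when: datetime    # date and time the review was conducted
    why: list[str]    # objectives: best practice, error detection, etc.
    where: str        # office number or other location identifier
    what: str         # class/method/program name, plus line ranges
    result: str       # what was disclosed during the review

# A hypothetical filled-in record:
record = CodeReviewRecord(
    who=["A. Reviewer", "B. Author"],
    when=datetime(2015, 10, 9, 14, 0),
    why=["Error Detection", "Vulnerability Exposure"],
    where="Room 214",
    what="billing.py, lines 40-120",
    result="Off-by-one in proration loop; unparameterized SQL query.",
)
```

Keeping the record structured rather than free-form makes it straightforward to archive reviews and report on them later.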

Details

Code reviews can often find and remove common vulnerabilities such as format string exploits, race conditions, memory leaks and buffer overflows, thereby improving software security. Online software repositories based on Subversion (with Redmine or Trac), Mercurial, Git or others allow groups of individuals to collaboratively review code. Additionally, specific tools for collaborative code review can facilitate the code review process.
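Race conditions, one of the defect classes mentioned above, are a classic review catch because they rarely surface in casual testing. A minimal sketch in Python, using a hypothetical counter class to show the unsafe pattern a reviewer would flag and the locked fix:

```python
import threading

class Counter:
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def increment_unsafe(self):
        # Flagged in review: this read-modify-write is not atomic,
        # so two threads can read the same value and lose an update.
        self.value += 1

    def increment_safe(self):
        # Suggested fix: guard the read-modify-write with a lock so
        # only one thread updates the value at a time.
        with self._lock:
            self.value += 1
```

The unsafe version may pass every single-threaded test, which is exactly why a second pair of eyes reading the code is valuable.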

Automated code-reviewing software reduces the burden on developers of reviewing large chunks of code by systematically checking source code for known vulnerabilities. A 2012 study by VDC Research reports that 17.6% of the embedded software engineers surveyed currently use automated tools for peer code review and 23.7% expect to use them within 2 years.[2]
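At their simplest, such tools match source text against a catalogue of known-risky constructs. A toy sketch of the idea; the rule list is illustrative, and real tools use full parsers and data-flow analysis rather than regular expressions:

```python
import re

# Toy rules mapping a risky pattern to a finding message.
RULES = {
    r"\beval\(": "use of eval() on potentially untrusted input",
    r"\bos\.system\(": "shell invocation; prefer subprocess with a list argv",
}

def scan(source: str):
    """Return (line_number, message) pairs for each rule match."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RULES.items():
            if re.search(pattern, line):
                findings.append((lineno, message))
    return findings
```

A scan of this kind catches mechanically detectable issues, leaving reviewers free to focus on logic, design, and intent.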

Capers Jones' ongoing analysis of over 12,000 software development projects showed that the latent defect discovery rate of formal inspection is in the 60-65% range. For informal inspection, the figure is less than 50%. The latent defect discovery rate for most forms of testing is about 30%.[3]

Code review rates should be between 200 and 400 lines of code per hour.[4][5][6][7] Inspecting and reviewing more than a few hundred lines of code per hour for critical software (such as safety critical embedded software) may be too fast to find errors.[4][8] Industry data indicates that code reviews can accomplish at most an 85% defect removal rate with an average rate of about 65%.[9]
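Under the 200-400 lines-per-hour guideline cited above, the time budget for a review is simple arithmetic. A small helper (the function is hypothetical; the rates are the figures from the text):

```python
def review_hours(lines_of_code, rate_low=200, rate_high=400):
    """Return the (min, max) hours a review should take under the
    commonly cited 200-400 lines-of-code-per-hour guideline."""
    return lines_of_code / rate_high, lines_of_code / rate_low

# A 1,000-line change should take roughly 2.5 to 5 hours to review;
# finishing much faster suggests defects are being missed.
low, high = review_hours(1000)
```

Scheduling reviews against such a budget gives teams an objective check that a review was thorough rather than a rubber stamp.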

The types of defects detected in code reviews have also been studied. Empirical studies provided evidence that up to 75% of code review defects affect software evolvability rather than functionality,[10][11][12] making code reviews an excellent tool for software companies with long product or system life cycles.[13]

Types

Code review practices fall into two main categories: formal code review and lightweight code review.[1]

Formal code review, such as a Fagan inspection, involves a careful and detailed process with multiple participants and multiple phases. Formal code reviews are the traditional method of review, in which software developers attend a series of meetings and review code line by line, usually using printed copies of the material. Formal inspections are extremely thorough and have been proven effective at finding defects in the code under review.

Lightweight code review typically requires less overhead than formal code inspections, though it can be equally effective when done properly. Lightweight reviews are often conducted as part of the normal development process:

  • Over-the-shoulder - one developer looks over the author's shoulder as the latter walks through the code.
  • Email pass-around - the source code management system automatically emails code to reviewers after check-in.
  • Pair programming - two authors develop code together at the same workstation, as is common in Extreme Programming.
  • Tool-assisted code review - authors and reviewers use software tools, informal ones such as pastebins and IRC, or specialized tools designed for peer code review.

Some of these are also known as walkthrough (informal) or "critique" (fast and informal) code review types.

Many teams that eschew traditional, formal code review use one of the above forms of lightweight review as part of their normal development process. A code review case study published in the book Best Kept Secrets of Peer Code Review found that lightweight reviews uncovered as many bugs as formal reviews, but were faster and more cost-effective.

Criticism

Historically, formal code reviews have required a considerable investment in preparation for the review event and execution time. Use of code analysis tools can support this activity; tools that work within the IDE are especially valuable, as they give developers direct feedback on coding-standard compliance.

See also

References

  1. ^ a b Kolawa, Adam; Huizinga, Dorota (2007). Automated Defect Prevention: Best Practices in Software Management. Wiley-IEEE Computer Society Press. p. 260. ISBN 0-470-04212-5. 
  2. ^ VDC Research (2012-02-01). "Automated Defect Prevention for Embedded Software Quality". VDC Research. Retrieved . 
  3. ^ Jones, Capers; Ebert, Christof (April 2009). "Embedded Software: Facts, Figures, and Future". IEEE Computer Society. Retrieved . 
  4. ^ a b Kemerer, C.F.; Paulk, M.C. (2009-04-17). "The Impact of Design and Code Reviews on Software Quality: An Empirical Study Based on PSP Data". IEEE Transactions on Software Engineering. 35 (4): 534-550. doi:10.1109/TSE.2009.27. Archived from the original on 2015-10-09. Retrieved 2015. 
  5. ^ "Code Review Metrics". Open Web Application Security Project. Open Web Application Security Project. Archived from the original on 2015-10-09. Retrieved 2015. 
  6. ^ "Best Practices for Peer Code Review". Smart Bear. Smart Bear Software. Archived from the original on 2015-10-09. Retrieved 2015. 
  7. ^ Bisant, David B. (October 1989). "A Two-Person Inspection Method to Improve Programming Productivity". IEEE Transactions on Software Engineering. 15 (10): 1294-1304. doi:10.1109/TSE.1989.559782. Retrieved 2015. 
  8. ^ Ganssle, Jack (February 2010). "A Guide to Code Inspections" (PDF). The Ganssle Group. Retrieved . 
  9. ^ Jones, Capers (June 2008). "Measuring Defect Potentials and Defect Removal Efficiency" (PDF). Crosstalk, The Journal of Defense Software Engineering. Retrieved . 
  10. ^ Mantyla, M.V.; Lassenius, C (May-June 2009). "What Types of Defects Are Really Discovered in Code Reviews?" (PDF). IEEE Transactions on Software Engineering. Retrieved . 
  11. ^ Bacchelli, A; Bird, C (May 2013). "Expectations, outcomes, and challenges of modern code review" (PDF). Proceedings of the 35th IEEE/ACM International Conference On Software Engineering (ICSE 2013). Retrieved . 
  12. ^ Beller, M; Bacchelli, A; Zaidman, A; Juergens, E (May 2014). "Modern code reviews in open-source projects: which problems do they fix?" (PDF). Proceedings of the 11th Working Conference on Mining Software Repositories (MSR 2014). Retrieved . 
  13. ^ Siy, Harvey; Votta, Lawrence (2004-12-01). "Does the Modern Code Inspection Have Value?" (PDF). unomaha.edu. Retrieved . 

Further reading

  • Jason Cohen (2006). Best Kept Secrets of Peer Code Review (Modern Approach. Practical Advice.). Smart Bear Inc. ISBN 1-59916-067-6. 

  This article uses material from the Wikipedia article "Code review". It is released under the Creative Commons Attribution-Share-Alike License 3.0.

