
@title User Guide: Review vs Audit
@group userguide

Discusses the differences between "review" and "audit" workflows.

Overview
========

Phabricator supports two similar but separate code review workflows: "review"
and "audit".

Review occurs in **Differential**, before changes are published. You can learn
more in @{article:Differential User Guide}.

Audit occurs in **Diffusion**, after changes are published. You can learn more
in @{article:Audit User Guide}.

When this documentation discusses "unpublished changes", it refers to changes
which are still subject to being reworked in response to feedback. In many
workflows, these changes will only exist locally on the developer's machine,
but some workflows push tentative or temporary changes into remotes. The step
that "publishes" changes might be either pushing or merging them, depending on
your workflow.

Both the audit and review workflows are lightweight, asynchronous web-based
workflows where reviewers or auditors inspect code independently, from their
own machines -- not synchronous review sessions where authors and reviewers
meet in person to discuss changes.

Broadly, review is normally a //blocking// workflow: in review workflows,
authors usually cannot publish changes until review completes and reviewers
are satisfied.

In contrast, audit is normally a //nonblocking// workflow: in audit workflows,
changes usually move forward by default.

Advantages of Review
====================

Pre-publish review is significantly more powerful than post-publish auditing.
You gain these advantages by requiring review //before// changes may be
published:

  - Authors have a strong incentive to craft small, well-formed changes that
    will be readily understood, to explain them adequately, and to provide
    appropriate test plans, test coverage and context.
  - Reviewers have a real opportunity to make significant suggestions about
    architecture or approach in review. Such suggestions are less likely to be
    adopted when raised in audit, and may be much more difficult to adopt if
    significant time has passed between publication and audit.
  - Authors have a strong incentive to fix problems and respond to feedback
    received during review because it blocks them. Authors have a much weaker
    incentive to promptly address problems raised during audit.
  - Authors can ask reviewers to apply and verify fixes before they are
    published.
  - Authors can easily pursue feedback early, and get course corrections on
    approach or direction.
  - Reviewers are better prepared to support a given change once it is in
    production, having already had a chance to become familiar with and reason
    through the code.
  - Reviewers are able to catch problems which automated tests may have
    difficulty detecting. For example, human reviewers are able to reason about
    performance problems that tests can easily miss because they run on
    small datasets and stub out service calls.
  - Communicating about changes //before// they happen generally leads to better
    preparation for their effects.

The theoretical cost of review is that it slows down development by introducing
a blocking step into the process and generally wastes developer time that could
be better spent developing. In practice, this cost is lower than it appears,
because it is small to begin with and pays for itself in other ways:

  - Differential is fast and provides a lightweight process for submitting
    code for review and for performing review.
  - Authors are free to pursue other changes while code is being reviewed. With
    appropriate change management (like local branching in Git) they can even
    pursue dependent changes easily. Authors should rarely if ever be blocked on
    review, even though an individual change is blocked until it is approved.
  - The workflow as a whole is lightweight and, with skillful reviewers,
    effective at identifying bugs. It is generally faster to fix bugs in review
    than in production.
  - More importantly, it is effective at identifying problems with architecture
    and approach. These are free to fix in review ("don't do this, it is a bad
    idea") and may be very time consuming to fix in production. No matter how
    good your test suite is, it can't identify solutions which are poor because
    of missing context, or miscommunication, or which are simply bad ideas.
  - Changes which are too large or too complicated to be reviewed quickly are
    often //too large and too complicated, period//. Nearly all large changes
    can be split apart into small, independent pieces which are easier to
    understand and test. Review tends to encourage smaller and better-factored
    changes.
  - Review can be integrated with static analysis, which can detect (and, in
    many cases, correct) mechanical problems with code like syntax, formatting,
    naming conventions, style problems, misspellings, and some program errors
    (see the configuration sketch after this list). This reduces the amount of
    time it takes to review code, and lets reviewers focus on actual problems
    with the code rather than minor stylistic issues.
  - Review creates a permanent record of context and intent which explains why
    a change was made, generally with much more information than commit messages
    alone (authors have an incentive to properly explain a change when sending
    it for review). This makes it easier to understand code later, and to
    respond appropriately when it breaks.
  - With `arc patch`, it is roughly as easy to pull a change out of
    Differential as it is to pull it out of the remote (see the example after
    this list).
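
For example, a reviewer who wants to build or test a pending change locally can
apply it with `arc patch` and the revision's ID (`D123` below is a hypothetical
ID, not a real revision):

```
$ arc patch D123
```

This applies the revision's changes to the local working copy, so they can be
inspected, built, and tested before the revision is approved.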
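
The static analysis integration mentioned above is typically configured with a
`.arclint` file at the repository root, which tells `arc` which linters to run
before changes are sent for review. A minimal sketch (the linters chosen and
the Python `include` pattern here are illustrative, not a required
configuration):

```
{
  "linters": {
    "text": {
      "type": "text"
    },
    "spelling": {
      "type": "spelling"
    },
    "python": {
      "type": "pep8",
      "include": "(\\.py$)"
    }
  }
}
```

The linter names above ("text", "spelling", "pep8") refer to linters bundled
with Arcanist; which ones to enable depends on the project.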

Advantages of Audit
===================

Post-publish audit is a less powerful workflow than pre-publish review, but can
supplement review and is better than nothing on its own. If you are unpersuaded
by the arguments above (or work on a team that is unswayed), audits provide
some of the benefits of review with less friction:

  - Audits are driven entirely by Phabricator: users do not need to install
    `arc`.
  - Audits require little adjustment to existing workflows and little training.
  - Audits are completely nonblocking, and send fewer notifications than review.
  - Even if you have review, audits can be useful as a supplement to keep tabs
    on lower-importance changes or raise issues that are discovered after
    review.

Recommendations
===============

Here are super biased recommendations from developers of code review software:

  - If you can do review, do it. Supplement it with audits for less important
    changes as your organization scales.
  - If you can't do review immediately, set up audits and try to transition
    toward review. Some types of changes (like tentative changes or requests
    for feedback about code) are a natural fit for review and can serve as a
    stepping stone toward broader acceptance. Greater familiarity with the
    toolset may also foster wider acceptance of review, and the value of
    review may become more obvious as the organization scales (e.g., once you
    get interns).
  - If you aren't interested in review, just do audits. You can always
    change your mind later. But consider review! It's really good, we promise!

Next Steps
==========

  - Learn more about reviews in @{article:Differential User Guide}; or
  - learn more about audits in @{article:Audit User Guide}.