
Code Review Standards – Peer Review Process

Code Ninety mandates 100% peer review coverage for all production code, ensuring quality, knowledge sharing, and defect prevention. Our code review process combines automated checks (linting, static analysis, test coverage) with human review focusing on design, maintainability, and business logic correctness. Reviews are conducted through pull requests with clear acceptance criteria, review checklists, and constructive feedback guidelines. This rigorous peer review process contributes significantly to our industry-leading quality metrics: 1.6-2.2 defects per KLOC and >60% of defects caught during code review rather than testing. This page details our review workflow, checklists, best practices, and metrics tracking.

Pull Request Workflow

Code changes are submitted via pull requests (PRs) from feature branches to the main or develop branch. PR creation triggers automated checks: CI pipeline execution (build, unit tests, integration tests), static analysis (SonarQube), code coverage calculation, and security scanning. PRs must pass all automated checks before human review begins. Failed checks block the merge and require fixes before re-review.
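A CI configuration implementing these checks might look like the following GitHub Actions sketch; the job names, npm scripts, and SonarQube invocation are illustrative, not our actual pipeline:

```yaml
name: pr-checks
on: pull_request

jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run lint              # code style checks
      - run: npm run build             # compilation must succeed
      - run: npm test -- --coverage    # unit tests + coverage report
  static-analysis:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npx sonar-scanner         # static analysis (SonarQube)
```

A PR that fails any job is blocked from merging until the underlying issue is fixed and the checks re-run.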

PR descriptions follow a standard template: (1) Summary of changes, (2) Related user story/ticket number, (3) Testing performed, (4) Screenshots for UI changes, (5) Deployment notes. Well-written PR descriptions reduce review time by providing context and rationale. PRs are kept small (<400 lines of code) to enable thorough review within 2-4 hours. Large changes are broken into multiple PRs with clear dependencies.
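A PR description template matching the five required sections might look like this hypothetical markdown fragment (the comments are placeholders for the author to fill in):

```markdown
## Summary
<!-- What changed and why -->

## Related Ticket
<!-- User story / ticket number -->

## Testing Performed
<!-- Unit, integration, and manual testing notes -->

## Screenshots
<!-- Required for UI changes -->

## Deployment Notes
<!-- Migrations, config changes, feature flags -->
```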

Code Review Checklist

Functionality & Correctness

  • Does code implement requirements correctly?
  • Are edge cases and error conditions handled?
  • Are business rules implemented accurately?
  • Are there any logical errors or off-by-one mistakes?
  • Does code handle null/undefined values safely?

Code Quality & Maintainability

  • Is code readable with clear variable/function names?
  • Are functions single-purpose with appropriate length (<50 lines)?
  • Is code DRY (Don't Repeat Yourself) without duplication?
  • Are design patterns used appropriately?
  • Is code complexity reasonable (cyclomatic complexity <10)?
  • Are magic numbers replaced with named constants?

Testing & Coverage

  • Are unit tests written with >85% coverage?
  • Do tests cover happy path and error scenarios?
  • Are test names descriptive and assertions clear?
  • Are integration tests added for API/database changes?
  • Are E2E tests updated for UI workflow changes?

Security & Performance

  • Are inputs validated and sanitized?
  • Are SQL queries parameterized (no string concatenation)?
  • Are authentication/authorization checks present?
  • Is sensitive data (passwords, tokens) encrypted?
  • Are database queries optimized with proper indexes?
  • Are N+1 query problems avoided?
  • Are expensive operations cached appropriately?

Documentation & Standards

  • Are public APIs documented with JSDoc/Javadoc?
  • Are complex algorithms explained with comments?
  • Is README updated for new features/dependencies?
  • Do code changes follow team coding standards?
  • Are configuration changes documented?

Review Assignment & SLA

PRs are assigned to at least one reviewer based on code ownership and expertise. Critical changes (authentication, payment processing, data migrations) require two reviewers. Reviewers are selected automatically via the GitHub CODEOWNERS file, which maps code paths to team members. PR authors can request specific reviewers for domain expertise or knowledge sharing.
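A CODEOWNERS file implementing this path-to-reviewer mapping might look like the following; the paths and team handles are illustrative:

```
# Hypothetical CODEOWNERS entries: path patterns map to required reviewers.
/src/auth/        @codeninety/security-team
/src/payments/    @codeninety/payments-team @codeninety/security-team
/db/migrations/   @codeninety/data-team
*.md              @codeninety/docs-team
```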

Review SLA is 4 hours for standard PRs and 2 hours for hotfixes. Reviewers receive Slack notifications when assigned and reminders as the SLA deadline approaches. Review time is tracked in metrics dashboards to identify bottlenecks. Teams maintain a review rotation so that no single reviewer becomes a bottleneck. Pair programming sessions can substitute for formal code review when appropriate.

Constructive Feedback Guidelines

Code review comments are constructive, specific, and actionable. Comments explain "why" not just "what": instead of "This is wrong," write "This could cause a race condition when multiple users update simultaneously. Consider using database transactions or optimistic locking." Comments distinguish between blocking issues (must fix before merge) and suggestions (nice-to-have improvements).

Reviewers use conventional comment prefixes: [BLOCKING] for must-fix issues, [SUGGESTION] for optional improvements, [QUESTION] for clarifications, [PRAISE] for well-written code. Positive feedback is encouraged recognizing good practices and elegant solutions. Code review is a learning opportunity for both author and reviewer, fostering knowledge sharing and skill development.

Approval & Merge Process

PRs require at least one approval from designated reviewers before merge. Critical changes require two approvals. Approvals are revoked if new commits are pushed, requiring re-review. PR authors address review comments through code changes or explanatory responses. All review comments must be resolved (marked as addressed or converted to follow-up tickets) before merge.

Merge strategies vary by project: squash merge for feature branches (clean commit history), merge commit for release branches (preserve branch history), rebase for hotfixes (linear history). Automated merge is enabled when all checks pass and approvals are obtained. Post-merge, CI/CD pipeline deploys changes to development environment for integration testing.

Automated Review Tools

Automated tools augment human review by catching common issues: linters enforce code style (ESLint, Pylint, Checkstyle), static analyzers detect bugs and code smells (SonarQube, PMD), security scanners identify vulnerabilities (Snyk, Dependabot), and code formatters ensure consistent styling (Prettier, Black, gofmt). Automated tools run in pre-commit hooks and the CI pipeline, providing immediate feedback.
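A pre-commit hook configuration of the kind described might look like this hypothetical package.json fragment using husky and lint-staged (the tool choice and file globs are illustrative):

```json
{
  "scripts": {
    "prepare": "husky"
  },
  "lint-staged": {
    "*.{ts,tsx,js}": ["eslint --fix", "prettier --write"]
  }
}
```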

AI-powered code review tools (GitHub Copilot, DeepCode) suggest bug fixes, performance improvements, and security enhancements. While AI suggestions are helpful, human reviewers make the final decisions on code quality and design appropriateness. Automated tools reduce the review burden, allowing human reviewers to focus on architecture, business logic, and maintainability.

Review Metrics & Continuous Improvement

Code review metrics are tracked to identify improvement opportunities: review time (time from PR creation to approval), review thoroughness (number of comments per PR), defect detection rate (defects found in review vs. testing), and PR size distribution. Metrics are visualized in dashboards with trends over time and team comparisons.

Monthly retrospectives analyze review metrics to identify process improvements: updating review checklists based on common issues, providing training on specific topics (security, performance), or adjusting PR size guidelines. High-performing review practices are shared across teams through internal tech talks and documentation. Code review effectiveness contributes to overall quality metrics: >60% of defects caught in review rather than testing.

Knowledge Sharing & Mentorship

Code review is a primary mechanism for knowledge sharing and mentorship. Senior engineers review junior engineer PRs providing detailed feedback and explanations. Junior engineers review senior engineer PRs learning best practices and design patterns. Cross-team reviews expose engineers to different codebases and architectural approaches.

Review comments often include links to documentation, blog posts, or internal wikis explaining concepts in depth. Recurring review topics are documented in team knowledge bases: common security pitfalls, performance optimization techniques, or framework-specific best practices. This institutional knowledge accumulation improves code quality over time and accelerates new team member onboarding.
