General

  • All SCons software (SCons itself, tests, supporting utilities) will be written to work with Python version 3.5+.
  • SCons will be tested against Python versions 3.5, and all later versions of 3.x.
  • The SCons distribution will be generated by the setuptools package.
  • SCons will not require installation of any additional Python modules or packages. All modules or packages used by SCons must either be part of the standard Python 3.5 release or be part of the SCons distribution. The one exception: SCons 4.0 and later will install pywin32 when installed via pip on a Windows system.
  • At a minimum, SCons will be tested on Linux and Windows. Continuous Integration testing is wired in to the GitHub project: pull requests and any updates to PRs will kick off builds, and the results will be reflected in the page for the PR. Because of this, all tests must be written portably: if a feature cannot work on a given system, detect that and skip the test only in that case.
  • SCons software will be written to a separately-defined set of conventions (variable naming, class naming, etc.). We won't be dogmatic about these, and will use discretion when deciding whether a naming violation is significant enough to require a fix.
  • SCons is being developed using the Git source code control system; the main source tree is kept on GitHub.
  • Tests are written using custom testing infrastructure built on top of unittest:
    • SCons infrastructure module tests are written using PyUnit.
    • Tests of SCons packaging are written using subclasses of the TestCmd module (these are no longer actively used)
    • Tests of full SCons script functionality are written using subclasses of the TestCmd module.

Development philosophy

TL;DR version: Testing, testing, testing.

We're growing a rich set of regression tests incrementally, as SCons evolves. The goal is to produce an exceptionally stable, reliable tool of known, verifiable quality right from the start.

A strong set of tests allows us to guarantee that everything works properly even when we have to refactor internal subsystems, which we expect to have to do fairly often as SCons grows and develops. It's also great positive feedback in the development cycle to make a change, see the test(s) work, make another change, see the test(s) work...

Testing methodology

The specific testing rules we're using for SCons are as follows:

  • Every functional change must have one or more new tests, or modify one or more existing tests. In other words, code touched by a change must be hit by a test.
  • The new or modified test(s) must pass when run against your new code (of course).
  • The new code must also pass all unmodified, checked-in tests (regression tests).
  • The new or modified test(s) must fail when run against the currently checked-in code. This verifies that your new or modified test does, in fact, test what you intend it to. If it doesn't, then either there's a bug in your test, or you're writing code that duplicates functionality that already exists.
  • Changes that don't affect functionality (documentation changes, code cleanup, adding a new test for existing functionality, etc.) can relax these restrictions as appropriate - check with the project maintainer.

The CI infrastructure wired into the GitHub project will run the tests against the new code automatically whenever a commit is pushed to a PR after the PR has been submitted. What it won't do is verify that the new or modified test fails when run against the old code. This suggests following a TDD (test-driven development) approach: write your tests first and make sure they run but fail, thus demonstrating they're able to detect the difference between broken (or unimplemented) code and new code. Then write the new code. runtest.py has support for running a test against a released version, so you can verify in your working tree that the test didn't become invalid during your development.

The SCons testing infrastructure is intended to make writing tests as easy and painless as possible. We will change the infrastructure as needed to continue to make testing even easier, so long as it still does the job. Since the test infrastructure involves some project-specific pieces that may be unfamiliar, please ask for help if you don't find a simple explanation in the docs.

SCons development uses a combination of test harness pieces, covering unit tests, end-to-end functional tests, and test execution:

  • The infrastructure modules (under the SCons subdirectory) all have individual unit tests that use PyUnit, the unit testing framework in the Python standard library. The naming convention is to append "Tests" to the module name. For example, the unit tests for the SCons/Foo.py module can be found in the SCons/FooTests.py file.
  • SCons itself is tested by end-to-end tests that live in the test/ subdirectory and which use the TestCmd.py infrastructure (from testing/framework).
  • Execution of these tests is handled by a script runtest.py, which adds multithreaded execution, reporting capabilities, and more.
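As a concrete illustration, a minimal unit test file in this style might look like the sketch below. The flatten() helper and the test names are hypothetical, not taken from the actual SCons sources; in a real SCons/FooTests.py the function under test would be imported from its module (e.g. from SCons.Foo), and the file would normally end with unittest.main().

```python
import unittest

# Hypothetical example in the style of an SCons/FooTests.py file.
# flatten() stands in for a function that would normally be imported
# from the module under test; it is inlined here so the sketch is
# self-contained.
def flatten(sequence):
    """Flatten arbitrarily nested lists into a single flat list."""
    result = []
    for item in sequence:
        if isinstance(item, list):
            result.extend(flatten(item))
        else:
            result.append(item)
    return result

class FlattenTestCase(unittest.TestCase):
    def test_already_flat(self):
        self.assertEqual(flatten([1, 2, 3]), [1, 2, 3])

    def test_nested(self):
        self.assertEqual(flatten([1, [2, [3, 4]], 5]), [1, 2, 3, 4, 5])

    def test_empty(self):
        self.assertEqual(flatten([]), [])

# A real FooTests.py would typically end with unittest.main(); here the
# suite is run programmatically so the sketch stays self-contained.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(FlattenTestCase)
)
```

Under the naming convention above, a file like this for SCons/Foo.py would be saved as SCons/FooTests.py and picked up by runtest.py.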

The end-to-end tests in the test/ subdirectory are not substitutes for module unit tests. If you modify a module under the SCons subdirectory, you generally must modify its *Tests.py script to validate your change. This can be (and probably should be) in addition to a test/* test of how the modification affects the end-to-end workings of SCons.
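For comparison, an end-to-end test in the style of the test/ scripts might look like the following sketch. The SConstruct contents and file names are illustrative. A real test simply does import TestSCons unconditionally, because runtest.py arranges for testing/framework to be on the module search path; the import guard below only lets the sketch load outside an SCons working tree.

```python
# Sketch of an end-to-end test in the style of test/*.py. A real test
# imports TestSCons directly; runtest.py puts testing/framework on
# sys.path. The guard below is only so this sketch loads standalone.
try:
    import TestSCons
except ImportError:
    TestSCons = None  # not running inside an SCons working tree

if TestSCons is not None:
    test = TestSCons.TestSCons()

    # Create a small build description and an input file in the
    # test's scratch directory.
    test.write('SConstruct', """\
env = Environment()
env.Command('copy.txt', 'input.txt', Copy('$TARGET', '$SOURCE'))
""")
    test.write('input.txt', "hello\n")

    # Run scons on the scratch directory, check the built file's
    # contents, and declare success.
    test.run(arguments='.')
    test.must_match('copy.txt', "hello\n")
    test.pass_test()
```

The TestSCons wrapper handles creating and cleaning up the scratch directory, locating the scons script under test, and reporting PASS/FAIL in the form runtest.py expects.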

General developer requirements

  • All project developers must subscribe to the scons-dev@scons.org mailing list.
  • All project developers should register at GitHub.com and be added to the SCons developer list; this allows tagging developers as owners of bugs.
  • We will accept patches from developers not actually registered on the project, so long as the patches conform to our normal requirements. Preferably, patches should come as pull requests on GitHub.

Using git for SCons development