General

  • All SCons software (SCons itself, tests, supporting utilities) must work with the oldest Python version SCons promises to support, and must not depend on newer features. If a feature that is standard in later Python versions has a backport package providing equivalent behavior on older versions, an exception may be considered. At the moment the floor version is 3.6, but this is deprecated and 3.7 will become the floor in a future release, possibly SCons 4.9.0.
  • SCons should be tested against all supported Python versions. In reality, since "free" CI resources are not infinite, automatic testing triggered by PR submissions only takes place on a representative subset. Developers must be prepared to respond to problems discovered on versions which may not have been tested by the automatic system.
  • The SCons distribution is generated by the setuptools package. Use SCons itself to generate the distribution; a direct call to python setup.py is no longer supported.
  • SCons should not require installation of any additional Python modules or packages. All modules or packages used by SCons must either be part of the standard library in the lowest supported Python release, or be "vendored" into the SCons distribution. Exceptions may be made for optional modules.
  • At a minimum, SCons is tested on Linux and Windows. Continuous Integration testing is wired in to the GitHub project: Pull Requests, and any updates to them, kick off builds (unless these are specifically disabled using GitHub and AppVeyor conventions), and the results are reflected in the page for the PR. Because of this, all tests must be written portably. If a feature cannot work on one system, detect that and skip the test only in that case: unit tests can use the @unittest.skip family of decorators, while e2e tests should test for the feature at runtime.
  • SCons software is written to a separately-defined set of conventions (variable naming, class naming, code formatting, etc.). Some of these have changed since SCons was first written, so usage is not consistent. We won't be dogmatic about this: generally we will suggest that the conventions be applied to any newly-written code, and perhaps to a segment of code being modified, but will not require whole files to be changed to make a discrete change (such refactoring should be split out from the actual code changes anyway, to make reviewing and change tracking more manageable). In the future, use of tools such as black/pylint (or ruff as an alternative) on submitted code may become more strongly encouraged.
  • Adding type annotations where it makes sense in new code is also allowed. An effort to get static type checking right for a new addition should not get in the way of getting the actual work done. We understand that SCons has a fairly complex internal set of types (environments, nodes, etc.), and proper typing support for these is not trivial. Also the (current) minimum Python version does not support deferred annotation evaluation, so import loops can happen very easily when importing to get type definitions.
  • When a new feature is added, or a substantive change is made (like a new parameter added to an existing function), the version and a summary of the change should be added to the documentation. This may need to go in two places: in the docstring of the affected function (use the Sphinx-supported markers .. versionadded::, .. versionchanged::, .. deprecated:: and .. deprecated-removed::) and in the XML documentation (use <emphasis>Added in </emphasis>, etc.). Pull requests should use the placeholder NEXT_VERSION for the version, to be updated when a release including that code is made.
  • SCons is being developed using the Git source code control system: the main source tree is kept on GitHub.
  • Tests are written using custom testing infrastructure built on top of unittest:
    • SCons infrastructure module tests are written using PyUnit.
    • Tests of SCons packaging are written using classes derived from the TestCmd module (these are no longer actively used).
    • Tests of full SCons script functionality are written using classes derived from the TestCmd module.
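The platform-skip convention mentioned above (skip, don't fail, when a feature cannot work on one system) can be sketched as follows. The test case and the check it performs are hypothetical, purely to show the decorator in use:

```python
import os
import sys
import unittest


class PosixOnlyTests(unittest.TestCase):
    """Illustrative unit test that skips, rather than fails, on Windows."""

    @unittest.skipIf(sys.platform == "win32", "os.geteuid() is not available on Windows")
    def test_effective_uid(self):
        # Hypothetical platform-specific check; only meaningful on POSIX.
        self.assertIsInstance(os.geteuid(), int)


if __name__ == "__main__":
    unittest.main()
```

On Windows the test is reported as skipped and the suite still passes; on other platforms the check runs normally.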
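The docstring version-annotation markers can likewise be illustrated with a small, hypothetical function; ensure_suffix() is not a real SCons API, and NEXT_VERSION is the placeholder the guidelines prescribe:

```python
def ensure_suffix(name, suffix=".txt"):
    """Return *name*, appending *suffix* if it is not already present.

    This function is purely illustrative; the markers below show the
    Sphinx conventions described in these guidelines.

    .. versionadded:: NEXT_VERSION

    .. versionchanged:: NEXT_VERSION
       The *suffix* parameter may now be any string, not just an extension.
    """
    return name if name.endswith(suffix) else name + suffix
```

The NEXT_VERSION placeholders would be replaced with the actual version number when a release containing the change is made.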

Realities

  • While not a formal policy, the SCons project is currently able to support only the latest released version. Once a new version is released, fixes will not be backported to previous released versions. If necessary, users can do their own backports - each release is both branched and tagged.
  • Emergency fixes are occasionally released immediately, but in most cases bug fixes accumulate until the release manager determines a new release is appropriate. The project strives to keep the master branch in an always-runnable state, and SCons is designed to be directly runnable from the code without "installation", so if necessary a git checkout containing a fix can be used as an interim measure.
  • For the avoidance of doubt, SCons was originally developed well before the concept of Semantic Versioning was formalized, and does not officially follow Semantic Versioning. Technically API-breaking changes, such as dropping deprecated functionality, may happen in minor versions; however SCons always strives for a very high level of API compatibility for the interfaces described in the reference manual ("manpage"), as noted elsewhere in these guidelines. Major version changes are reserved for large visible changes, like dropping support for Python 2, which affected many people, but in fact did not "break the API".

Development philosophy

TL;DR version: Testing, testing, testing.

We're growing a rich set of regression tests incrementally, as SCons evolves. The goal is to produce an exceptionally stable, reliable tool of known, verifiable quality right from the start.

A strong set of tests allows us to guarantee that everything works properly even when we have to refactor internal subsystems, which we expect to have to do fairly often as SCons grows and develops. It's also great positive feedback in the development cycle to make a change, see the test(s) work, make another change, see the test(s) work...

Testing methodology

The specific testing rules we're using for SCons are as follows:

  • Every functional change must have one or more new tests, or modify one or more existing tests. In other words, code touched by a change must be hit by a test.
  • The new or modified test(s) must pass when run against your new code (of course).
  • The new code must also pass all unmodified, checked-in tests (regression tests).
  • The new or modified test(s) must fail when run against the currently checked-in code. This verifies that your new or modified test does, in fact, test what you intend it to. If it doesn't, then either there's a bug in your test, or you're writing code that duplicates functionality that already exists.
  • Changes that don't affect functionality (documentation changes, code cleanup, adding a new test for existing functionality, etc.) can relax these restrictions as appropriate - check with the project maintainer.

The CI infrastructure wired into the GitHub project runs the tests against the new code automatically when a commit is pushed to a PR after the PR has been submitted. What it won't do is verify that the new test fails when run against the old code.

This suggests following a TDD (test-driven development) approach: write your tests first, making sure they run but fail, thus demonstrating that they can detect the difference between broken (or unimplemented) code and new code. Then write the new code. runtest.py has support for running a test against a released version, so at any point you can check that the test in your working tree has not become invalid during development.
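The fail-first cycle described above can be sketched with a hypothetical split_flags() feature; the function and its test are stand-ins, not real SCons code:

```python
import unittest


def split_flags(flags):
    """Hypothetical new feature: split a flag string into a list of flags."""
    return flags.split()


class SplitFlagsTests(unittest.TestCase):
    """Written first, per TDD: this fails while split_flags() is a stub
    (e.g. `raise NotImplementedError`), proving the test can detect the
    missing feature, and passes once the real implementation lands."""

    def test_basic_split(self):
        self.assertEqual(split_flags("-O2 -g"), ["-O2", "-g"])


if __name__ == "__main__":
    unittest.main()
```

Running the test before writing the implementation gives the "red" result; after implementing, the same unchanged test goes "green".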

The SCons testing infrastructure is intended to make writing tests as easy and painless as possible. We will change the infrastructure as needed to continue to make testing even easier, so long as it still does the job. Since the test infrastructure involves some project-specific infrastructure that may be unfamiliar, please ask for help if you don't find a simple explanation in the docs.

SCons development uses a combination of testing components covering unit tests, end-to-end functional tests, and test execution:

  • Execution of tests is handled by the script runtest.py, which adds multithreaded execution, reporting capabilities, correct path setup for necessary components, and more. It is possible to run an individual test file without using the runner, but it may take extra work.
  • The infrastructure modules (under the SCons subdirectory) all have individual unit tests that use unittest, the unit testing framework in the Python standard library. The naming convention is to append "Tests" to the module name. For example, the unit tests for the SCons/Foo.py module can be found in the SCons/FooTests.py file. The test runner uses this naming convention for unit test discovery, so it is important to follow that style.
  • SCons itself as an application is tested by end-to-end tests that live in the test/ subdirectory and which use the TestCmd.py infrastructure (from testing/framework).
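The module/test pairing described above can be sketched as a minimal FooTests.py skeleton. SCons/Foo.py and its canonicalize() function are hypothetical stand-ins, so the usual `import SCons.Foo` is replaced here by an inline definition to keep the sketch self-contained:

```python
import unittest


# In a real SCons/FooTests.py this would be:  import SCons.Foo
# canonicalize() is hypothetical, defined inline so the sketch runs on its own.
def canonicalize(path):
    """Hypothetical helper that normalizes path separators."""
    return path.replace("\\", "/")


class FooTestCase(unittest.TestCase):
    def test_backslashes_become_slashes(self):
        self.assertEqual(canonicalize("a\\b\\c"), "a/b/c")


if __name__ == "__main__":
    unittest.main()
```

Because the file name ends in "Tests.py", the runtest.py discovery convention would pick it up alongside the module it exercises.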

The end-to-end tests in the test/ subdirectory are not substitutes for module unit tests. If you modify a module under the SCons subdirectory, you normally would modify its *Tests.py script to validate your change at a "unit" level. This can be (and probably should be) in addition to a test/* test of how the modification affects the end-to-end workings of SCons. An exception is tool specification modules in SCons/Tool - since tools are initialized at SCons runtime, it often makes more sense to use the e2e style to test that they indeed set up their variables correctly: most existing tool modules don't have matching unit test files.

General developer requirements

  • All project developers must subscribe to the scons-dev@scons.org mailing list.
  • All project developers should register at GitHub.com and be added to the SCons developer list; this allows tagging developers as owning bugs.
  • We will accept patches from developers not actually registered on the project, so long as the patches conform to our normal requirements. Preferably, patches should come as pull requests on GitHub.

Using git for SCons development