[seul-edu] Programming contest!
Design Competition Rules
Evaluation of Initial Submissions
SC Config: Platform Investigation and Program
SC Build: Dependency Management and Program
SC Test: Regression Testing
SC Track: Issue Tracking
The Software Carpentry project is pleased to announce an open competition to
design a new generation of easy-to-use software engineering tools.
Individuals, teams, and companies are invited to submit short design
outlines in any of the following categories:
1. a platform inspection tool similar to autoconf;
2. a build management tool similar to make;
3. an issue tracking system similar to Gnats or Bugzilla; and
4. a unit and regression testing harness with the functionality of XUnit,
Expect, and DejaGnu.
The competition is open to individuals and teams from any country. Students
and professionals alike are encouraged to enter.
The deadline for initial submissions is March 31, 2000. A principal contact
for each proposal must be clearly identified in each submission. Submissions
must be in English, formatted as a single HTML page, and sent
to firstname.lastname@example.org. (If you intend to submit one or more
designs, please send a note as early as possible to
email@example.com and let us know the category or
categories you intend to enter.)
Entries must be no more than 5000 words long (not including code fragments).
Entries should include Unix-style man pages and mock-ups of graphical user
interfaces as appendices where appropriate; these will not count against
the 5000 word limit. Entries should also include both a simple "Hello,
world" example (to demonstrate how easy it will be for novices to pick up
the tool) and a more realistic example (to demonstrate that the tool will be
able to meet the needs of larger and more complex projects). These examples
will count against the length limit.
Entrants are strongly encouraged to focus on interface and workflow issues,
rather than implementation details. In particular, good initial submissions
should contain very little that is platform- or language-specific, but
should have as much information about how the tool is to be used as one
would expect in a classroom handout. All entries must incorporate the
project's Open Source license, and will be considered interpretive
documentation under its terms.
Note that background material, such as descriptions of the problems the tool
is meant to solve, or analyses of the shortcomings of existing tools,
should be sent to firstname.lastname@example.org for inclusion in this
page and the FAQ, rather than being included in submissions. This will allow
the judges to give entrants early feedback on whether they are addressing
the right problems or not.
Note also that designs based on existing tools, written in any language, are
welcome. Such designs will be judged on the same basis as those written from
scratch.
Evaluation of Initial Submissions
The four best proposals in each category will be announced on April 30,
2000. These proposals will be published on the web, along with commentary
from the competition judges. The author or authors of these first-round
winners will be awarded $2500, and asked to submit a full proposal.
The deadline for full submissions is June 1, 2000. These must also be in
English, and in HTML, but there is no restriction on their overall length.
Submissions must contain a full interface specification, and enough detail
on implementation to permit distributed development and testing. UML should
be used where appropriate for such things as class diagrams.
The overall winners in each category will be announced on June 30, 2000, and
awarded a further $7500. Runners-up will be awarded a further $2500.
Participants in the second round of the competition will have two incentives
to pool their efforts. First, entrants who are in one category will have a
strong incentive to form ties with entrants in other categories, since
inter-operability will strengthen both submissions. Second, the total
amount of prize money in each category in the second round will remain the
same, even if groups coalesce between the first and second rounds. Thus, if
two groups that have designed replacements for autoconf combine their
designs, and win their category, they will receive $10,000 (the first prize
of $7500, plus a runner-up prize of $2500). If that combined entry had not
won, its authors would receive a $5000 (2×$2500) prize.
Once winners have been selected, $200,000 will be provided to support the
implementation, testing, and documentation of these tools. This money will
be available through competitive bidding to any interested party. Preference
will be given to bidders who have demonstrated both their enthusiasm and
their technical skills by contributing software, doing testing, writing
documentation, or participating in the project in any other way of their own
accord. As with the proposals themselves, all work will be public under the
terms of the project's Open Source license.
All tools will be implemented primarily in Python, in tools such as Zope
which are themselves built on top of Python, or using freely-available
tools, such as Open Source relational databases, which have Python
interfaces. (See the FAQ for discussion of this requirement.) Where this is
not feasible (e.g. for performance reasons), those parts which are
implemented in some other way will be required to be fully scriptable using
Python. Entrants are free to base their designs on existing tools, provided
that their designs meet these criteria, and that those tools are freely
available.
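One way to read the scriptability requirement is that a tool's core logic should live in an importable Python module, with the command-line interface as a thin wrapper. The sketch below is purely illustrative; the class and method names are invented here and do not come from any entry.

```python
# A minimal sketch of a tool whose functionality is "fully scriptable
# using Python": the Config class is the real tool, and main() is only
# a thin command-line shim over it. All names here are hypothetical.
import sys

class Config:
    """Core logic lives in an importable class, not in the CLI."""
    def __init__(self):
        self.results = {}

    def check(self, name, probe):
        """Run a feature probe (a callable) and record its outcome."""
        self.results[name] = bool(probe())
        return self.results[name]

def main(argv):
    # The CLI just drives the same API that scripts would use.
    cfg = Config()
    cfg.check("have_sys_platform", lambda: hasattr(sys, "platform"))
    for name, ok in cfg.results.items():
        print(f"{name}: {'yes' if ok else 'no'}")
    return 0

if __name__ == "__main__":
    raise SystemExit(main(sys.argv[1:]))
```

A script can then `import` the module and call `Config` directly, getting identical behavior on Linux and Windows NT without going through the command line.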
All tools are required to operate equivalently on both Linux and Microsoft
Windows NT. In particular, command-line options and GUI interfaces will be
required to be equivalent, and scripting interfaces will be required to be
identical. Tools should be compatible with both Netscape and Internet
Explorer where appropriate.
Guido van Rossum
autoconf (http://sourceware.cygnus.com/autoconf) is used to manage
platform-specific configuration in many Open Source projects. Unlike tools
which depend on configuration tables to specify the capabilities of
hardware, operating systems, and compilers, autoconf examines the output and
behavior of small test programs to determine such things as the names of
header files, or whether particular system calls are supported.
While dynamic inspection makes autoconf much more flexible and robust than
alternative approaches, it is difficult to learn anything more than its
basic functionality, and whatever is learnt is often quickly forgotten, as
it is not reinforced by use in the programmer's day-to-day work. This is in
part due to the fact that autoconf is built as a set of macros for the m4
preprocessor, and may in fact be the last large m4 application in widespread
use.
For more information, see the SC Config home page.
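The dynamic-inspection idea described above can be sketched in a few lines: rather than consulting a capability table, run a small test program and observe whether it succeeds. In this sketch the "test program" is a Python snippet run in a subprocess, but the same pattern applies to compiling and running tiny C files, as autoconf does.

```python
# autoconf-style dynamic inspection, in miniature: probe a platform
# capability by running a small test program and checking whether it
# exits cleanly, instead of looking the answer up in a static table.
import subprocess
import sys

def probe(snippet):
    """Return True if the test snippet runs without error."""
    result = subprocess.run([sys.executable, "-c", snippet],
                            capture_output=True)
    return result.returncode == 0

# Example probes: does this platform provide os.fork?  (It is absent
# on Windows, so the probe fails there and succeeds on Linux.)
has_fork = probe("import os; os.fork")
```

The probe's verdict reflects the actual behavior of the running system, which is what makes this approach more robust than configuration tables that can go stale.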
make (http://www.gnu.org/software/make/make.html) has been used to manage
dependencies between project components for almost a quarter of a century.
While it was a major advance over the hand-written shell scripts that
preceded it, make's semi-declarative syntax is clumsy, and even short make
scripts can be very difficult to debug. In addition, its functionality is
not accessible from other programs without heroic effort, and it provides
little support for common operations such as recursion.
Cons (http://www.dsmit.com/cons/) addressed make's weaknesses by building a
Perl extension module to manage updates and dependencies in large software
projects. Users can describe particular projects by naming files (purely
declarative), overriding dependency rules (mixed declarative and
imperative), or overriding update mechanisms (purely imperative) as
appropriate. In addition, Cons files use Perl's own syntax for both data
structures (such as lists of dependencies) and actions (such as printing
status messages).
For more information, see the SC Build home page.
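The update logic at the heart of a make-like tool is small enough to sketch directly. Here, in the spirit of Cons's "use the host language's own syntax" approach, rules are plain Python data and functions rather than a separate makefile dialect; the names are invented for illustration.

```python
# A toy sketch of make-style dependency management: a target is
# rebuilt when it is missing or older than any of its dependencies,
# and sub-targets are updated recursively first. Rules are ordinary
# Python values, so the full language is available for describing them.
import os

RULES = {}  # target -> (deps, action), filled in by the user

def out_of_date(target, deps):
    """True if target must be rebuilt."""
    if not os.path.exists(target):
        return True
    t = os.path.getmtime(target)
    return any(os.path.getmtime(d) > t for d in deps)

def update(target, deps, action):
    for d in deps:
        if d in RULES:
            update(d, *RULES[d])   # recurse into sub-targets first
    if out_of_date(target, deps):
        action(target, deps)
```

Because rules are plain dictionaries and callables, other programs can inspect or extend them directly, addressing the "not accessible from other programs" complaint made about make above.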
Unit and Regression Testing
Tom Van Vleck
While the open source community has done a good job of producing program
development tools (such as the GNU compiler suite), it has not been as
successful at creating the tools needed to turn those programs into
products. XUnit (http://www.xprogramming.com/software.htm) is a simple
framework for managing unit tests, and Expect (http://expect.nist.gov/) and
DejaGnu (http://www.gnu.org/software/dejagnu/dejagnu.html) can be used to
run tests on interactive and non-interactive applications, but none of these
tools directly supports large-scale regression testing, coverage surveys, and
the like.
For more information, see the SC Test home page.
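To make the xUnit pattern mentioned above concrete, here is a "Hello, world"-scale example using Python's standard unittest module (itself a member of the xUnit family): test cases are methods on a TestCase subclass, and a runner collects and reports the results.

```python
# The xUnit pattern in miniature, using Python's standard unittest
# module: each test_* method is discovered and run independently,
# and assertion helpers report failures with useful messages.
import unittest

def greet(name):
    return f"Hello, {name}!"

class GreetTest(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(greet("world"), "Hello, world!")

    def test_empty_name_still_formats(self):
        self.assertEqual(greet(""), "Hello, !")

if __name__ == "__main__":
    unittest.main()
```

What frameworks of this kind do not provide, and what the SC Test category asks for, is the layer above: scheduling thousands of such suites, collating results across platforms, and measuring coverage.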
Many existing Open Source projects rely on either Gnats
(http://sourceware.cygnus.com/gnats) or more recently Bugzilla
(http://bugzilla.mozilla.org) to track bugs. However, both systems are
difficult to install and configure, and are not well-suited to use as
general workflow management tools. In particular, neither supports threaded
discussions, or is easy to integrate with version control systems, test
suites, and the like.
For more information, see the SC Track home page.
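The threaded-discussion feature that Gnats and Bugzilla lack can be sketched as a small data model: each comment may reply to another comment, so an issue's discussion forms a tree rather than a flat list. The class and field names below are invented for illustration and are not drawn from either system.

```python
# An illustrative data model for an issue tracker with threaded
# discussion: comments nest under other comments, forming a tree
# per issue. All names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Comment:
    author: str
    body: str
    replies: list = field(default_factory=list)

@dataclass
class Issue:
    title: str
    status: str = "open"
    thread: list = field(default_factory=list)  # top-level comments

    def reply(self, parent, author, body):
        """Add a comment, either top-level (parent=None) or nested."""
        c = Comment(author, body)
        (parent.replies if parent else self.thread).append(c)
        return c
```

A model like this also suggests the integration points the text asks for: version-control commits and test-suite failures could be attached as nodes in the same tree.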
The Software Carpentry project is funded by the Advanced Computing
Laboratory at the U.S. Department of Energy's Los Alamos National
Laboratory. These funds are being administered by Code Sourcery, LLC. For
more information, see the project's web site.