Bug 35464 - Add a dry run mode
Summary: Add a dry run mode
Status: NEW
Alias: None
Product: Ant
Classification: Unclassified
Component: Core
Version: 1.6.5
Hardware: Other
OS: All
Importance: P2 enhancement with 1 vote
Target Milestone: ---
Assignee: Ant Notifications List
Depends on:
Reported: 2005-06-22 13:43 UTC by F. Robert
Modified: 2008-11-24 03:58 UTC


Description F. Robert 2005-06-22 13:43:01 UTC
I am new to Ant. We are using it in a Configuration Management context. Here is 
the situation :

Preliminary definition: Targets in a build.xml file can be broadly divided into 
two categories: targets that produce a "deliverable" (i.e. something that will be 
passed to the customer/QA group/etc.; typically, a deliverable is a jar file, 
an executable, a shared object...) and those that don't (deployments, cleanup, 
tests, etc.)

- We have a bunch of distinct build.xml files, which are used to build Java-
based deliverables (for now). Each Ant file typically builds a handful of 
deliverables.
- The deliverables can be built in debug or release "flavor" (or maybe some 
other, like "instrumented"...)
- The building process of all deliverables is spread across all the different 
ant files.
- Deliverables have dependencies on other deliverables, whose target is not 
necessarily located in the same ant file. In other words, the entire build 
process must run all build.xml files, but in a specific order, passing specific 
properties.

The problem is: I need to determine (automatically):
- What deliverables can be produced by all the ant files.
- What target in what file will produce what deliverable(s).
- What targets depend on what other targets in which ant file.
- The "flavor" of each deliverable.

The objective is to automatically create a "master" ant file that can produce 
all the deliverables (by calling <ant> or <subant> against the right file and 
the right targets, with the right expression of dependencies).

To achieve this, a minimum parsing of each build.xml is necessary, but without 
execution (or rather: replacing the execution of most tasks with emitting on 
standard output what would be executed. A few tasks, such as <ant>, <subant>, 
<property> and the like need to be "executed", however). This is what I call a 
"dry run".
Ideally, for my particular need, the "dry run" should span multiple ant files 
(but that may not fit too well with the general philosophy of having Ant 
execute a single file at a time). A dry run mode would also reveal what the 
dependencies of each target are and what deliverables are produced by each 
target (and what their "flavor" is).
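For the target/dependency part, a first cut of such a dry run doesn't even need Ant: the build file can be read as plain XML. Below is a minimal sketch (the build.xml content is invented for illustration; property expansion, <import> and <subant> chains are exactly what this naive reading misses):

```python
import xml.etree.ElementTree as ET

# An invented, minimal build.xml used only to illustrate the idea.
BUILD_XML = """
<project name="demo" default="dist">
  <target name="init"/>
  <target name="compile" depends="init"/>
  <target name="dist" depends="compile">
    <jar destfile="build/demo.jar" basedir="build/classes"/>
  </target>
</project>
"""

def target_graph(build_xml):
    """Map each target name to the list of targets it depends on,
    without executing anything."""
    project = ET.fromstring(build_xml)
    graph = {}
    for target in project.findall("target"):
        depends = target.get("depends", "")
        graph[target.get("name")] = [d.strip() for d in depends.split(",") if d.strip()]
    return graph

print(target_graph(BUILD_XML))
# {'init': [], 'compile': ['init'], 'dist': ['compile']}
```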

I have tried to implement the scenario above and here are some observations I 
gathered during this attempt (for what it's worth).
- I first thought of parsing the Ant file through the Ant API (a bit like the 
Grand tool) but then I am a bit stuck with property resolution and sub-builds.
- A second approach that I then looked into was to redefine core and optional 
tasks with appropriate no-op tasks (except for sub-builds and <property> tasks 
and maybe others). I then thought that such a definition could be done inside each 
task class (with a dry-run flag added or maybe with some OO trick), or that a 
modified class loader pointing to a different set of class files may do the 
trick.
- The definition of "deliverable" is fuzzy. Typically, in any build process, 
some "intermediate compilation products" (static libraries, object files, class 
files, etc.) are also produced. Those usually don't qualify as deliverables 
because they are consumed by some other subsequent build action. Tracking what 
is produced and consumed by the various steps is not easy (and may well be 
impossible with <taskdef>s, unless the task writer somehow hints at what the 
inputs and outputs of the task are) but is essentially what allows one to 
distinguish between "deliverables" and mere temporary files. It also allows one 
to assign a "flavor" to a deliverable: if we track the compilation settings 
(debug, release, instrumented, optimized, whatever...) of intermediate products, 
then you can assign a "flavor" to each deliverable down the chain as well. You 
can also flag situations like mixing different flavors in the same deliverable/
intermediate product (which is most of the time a mistake).
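The second approach (no-op redefinition of most tasks) can be prototyped outside Ant too: walk the task elements of a target, really evaluate only the tasks that affect later evaluation (here just a naive <property>), and merely report the others. A toy sketch, not the real Ant API, with invented target content:

```python
import xml.etree.ElementTree as ET

# An invented target body used only to illustrate the idea.
TARGET_XML = """
<target name="dist">
  <property name="version" value="1.0"/>
  <jar destfile="build/demo-1.0.jar" basedir="build/classes"/>
  <delete dir="build/tmp"/>
</target>
"""

def dry_run(target_xml):
    """'Execute' <property> by recording it; turn every other task
    into a no-op that just reports what would have been run."""
    properties, trace = {}, []
    for task in ET.fromstring(target_xml):
        if task.tag == "property":
            properties[task.get("name")] = task.get("value")
        else:
            attrs = " ".join(f'{k}="{v}"' for k, v in task.attrib.items())
            trace.append(f"would run <{task.tag} {attrs}/>")
    return properties, trace

props, trace = dry_run(TARGET_XML)
print(props)     # {'version': '1.0'}
print(trace[0])  # would run <jar destfile="build/demo-1.0.jar" basedir="build/classes"/>
```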
Comment 1 Steve Loughran 2005-06-22 17:26:05 UTC
I'm not sure that this can be done. It isn't enough to parse the file, you have
to know the output of every single task in the system. In make (i.e. make -n)
the output is there, as it is just walking the dependency chain you specify by
hand.

In Ant, things work themselves out, based on (chained) input files. You
need to know the output of all predecessors before you can predict what the next
components will do, something that is done using the filesystem as a persistent
state communication mechanism.

Now Maven, that does have a more explicit model of deliverables, and you may
want to look at that to see if it works on your (seemingly large/complex)
project. For reference, I use their ant tasks and have sub projects deploy into
the maven repository on a local machine, as a way of decoupling projects from
each other.

I would also encourage looking at the <import> task in Ant 1.6+, which lets you
share stuff across projects. Having many sub-projects does not imply lots of
duplicated code. If you can have everything share the same build files, you will
stay in control.
Comment 2 F. Robert 2005-06-22 18:00:38 UTC
Thanks for your (fast !) answers and suggestions. I will definitely have a look 
at Maven, to see whether it can help me. 

Regarding <import>: can you elaborate a bit more? Are you suggesting to write a 
top-level ant file that calls the others via <import>? Or did I miss the point? (I 
want to avoid writing -and maintaining...- a top-level build file, because it's 
so easy to get it out of sync with the child projects.)

The point is that the projects have no direct knowledge of each other. 
Dependencies are basically expressed indirectly, through the content of the 
"classpath" settings.

Oh, and then there is Eclipse. At some point in the future, I want to be able to 
check that all build.xml files are in sync with the Eclipse projects (same classpath, 
basically...). Admittedly, this has nothing to do directly with a "dry run" 
feature per se. It may however help you frame the business use case I have in 
mind.
Comment 3 F. Robert 2005-06-22 18:32:01 UTC
One more bit of context: the source control system is imposed and is Rational 
ClearCase, which means that sources and project files are typically fetched 
from a remote, special Windows or Unix filesystem (called MultiVersion 
FileSystem or MVFS).
Comment 4 Steve Loughran 2005-06-22 23:40:08 UTC
the way I work in a big project: we have one common file (common.xml) that
contains the base targets for everything. 

child projects <import> this, and override targets only if they need to do
something different. 

This isn't perfect, common.xml is a bit scary now, but it means we can have 10+
projects all in sync. Adding new targets is easy too.

for file sharing, we have targets to put versioned files into the m2 repository
tree, and other build files can pull them out, with .properties files driving
version control. this gives us a unified way of dealing with our own artifacts
as well as remote things. 

here is the target from one project, for example

  <target name="m2-files" depends="m2-tasks">
    <m2-libraries pathID="m2.classpath">
      <dependency groupID="org.ggf" .../>
      <dependency groupID="commons-lang" .../>
      <dependency groupID="commons-logging" .../>
      <dependency groupID="log4j" .../>
      <dependency groupID="org.smartfrog" .../>
      <dependency groupID="xom" .../>
      <dependency groupID="xalan" .../>
    </m2-libraries>
  </target>

regarding clearcase, it scares me. when it works, it is wonderful, but when it
goes wrong, you are truly hosed. Ant mostly works well on it, but if something
like your clean target keeps failing, reboot your box and try again before
filing a bug that <delete> doesn't work. Also note that the filesystem is
slightly case sensitive, even on windows. 
Comment 5 F. Robert 2005-06-27 15:52:58 UTC
Firstly, I had a quick look at Maven. I am not too sure how suitable it is to my 
needs.
In fact, thinking again about my business use case, I now believe that I am 
trying to achieve something closer to "auditing" or "monitoring" (more passive 
behavior) than to "building" (more prescriptive behavior).
The idea is that I could have an overall view of what *can* be built from the 
source, which is a superset of what *will* be built. If something is added or 
removed (even if it should not be built), then I can see that as well.

Also, I thought about Steve Loughran's observation (comment #1) that "you need 
to know the output of all predecessors before you can predict what the next 
components will do, something that is done using the filesystem as a persistent 
state communication mechanism".
This made me think about the fact that merely *simulating* the chaining of 
input and output files on the filesystem could be enough to get a reasonably 
meaningful set of deliverables. What I mean is that it is possible to define for 
many tasks which file name(s) they will consume and which they will emit as a 
result (assuming successful completion). For instance, the <tar> task produces 
what is specified in the "destfile" attribute and consumes whatever files in the 
"basedir" folder whose names match/don't match the include/exclude patterns. 
<javac> produces a set of .class files from a certain set of .java files, etc.
The idea would then be to simulate the lifecycle of files (production/
consumption/deletion), simulate the chaining, and deduce from it what the 
deliverables are.
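That lifecycle simulation can be sketched with a hand-built produced/consumed table (the steps and file names below are invented; a real tool would derive them per task type, e.g. from <javac> or <tar> attributes): a produced file that some step consumes is an intermediate, while one that nothing consumes is a candidate deliverable.

```python
# Each step: (task name, files consumed, files produced). This is an assumed,
# hand-written model of what the build would do; a real dry run would derive
# it from each task's attributes.
STEPS = [
    ("javac", {"src/A.java"}, {"build/A.class"}),
    ("jar",   {"build/A.class"}, {"dist/app.jar"}),
]

def deliverables(steps):
    """Candidate deliverables: produced files that no step consumes."""
    produced, consumed = set(), set()
    for _task, ins, outs in steps:
        consumed |= ins
        produced |= outs
    return produced - consumed

print(sorted(deliverables(STEPS)))
# ['dist/app.jar']
```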

Of course this scenario breaks in various cases :
- <taskdef>s: in the current version of Ant, custom-defined <taskdef>s have 
unpredictable inputs and outputs. If Ant mandated that taskdefs explicitly 
define their inputs and outputs, then this issue would disappear.
- Output filename existence that depends on input file contents (and not merely 
on existence + Ant build file content). An example is linker output located in 
response files. I would suspect that this is truly hard to solve only if the 
input file is dynamically generated during the build. Otherwise, parsing of 
static input files (that exist before the build is started) can do the job 
(admittedly it could be very tedious to cover formats exhaustively, but feasible in 
principle).
- A variant of the previous case is when a task has unpredictable output(s) 
(like <get> when used with wildcards). Maybe a deeper/smarter simulation could 
help hint at the filenames (i.e. in the case of <get>, obtain a directory listing of 
the files and track the filenames so obtained?). Non-Ant scripting may be one 
sticky case where output may be truly unpredictable (<telnet> tasks with 
embedded shell scripting come to mind).

I also have the gut feeling that in such file-lifecycle tracking, outputs are more 
important than inputs: in many cases, inputs are pattern-based, whereas 
outputs are well-defined.

Then there is the fact that simulating all targets (at least all targets without 
predecessors) will cause different execution paths, which will yield different 
file chains, all of which need to be simulated. That may well cause a 
combinatorial explosion of cases (?).

One of the things I don't know is how prevalent in practice the cases are that 
cannot be simulated (I suspect that there may well be more than the three 
mentioned above...).

Comments ?