You might try this thread http://www.monarchforums.com/showthread.php?t=2146
Another possibility (which we do instead) is to have the second process monitor for the name the first process will rename its input file to.
Process 1 processes FirstInput.txt, and renames it to FirstInput_Processed.txt. It kicks off when FirstInput.txt exists.
Process 2 takes SecondInput.txt and renames it to SecondInput_Processed.txt. It kicks off when FirstInput_Processed.txt exists. There is a batch script in the post-process step to rename FirstInput_Processed.txt to FirstInput_FinalProcessed.txt.
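To make the chaining above concrete, here is a rough sketch of the renames in shell commands (in an actual Data Pump setup the post-process step would be a Windows batch file, and the processing itself happens between the renames; file names are the ones from the example):

```shell
#!/bin/sh
# Sketch of the rename chain. Each rename is what signals the next
# process's file monitor.

# Process 1 kicks off when FirstInput.txt exists; after processing,
# it renames its input, which is Process 2's trigger:
mv FirstInput.txt FirstInput_Processed.txt

# Process 2 kicks off when FirstInput_Processed.txt exists; after
# processing, its post-process batch script renames both files so
# neither monitor fires again:
mv SecondInput.txt SecondInput_Processed.txt
mv FirstInput_Processed.txt FirstInput_FinalProcessed.txt
```

The point of the final rename is simply to consume the trigger file so Process 2 does not kick off a second time.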
The challenge with the second option is that not all the projects I need to run afterwards use a report as the input file. Since the input is an Excel file, I'm not able to do an input distribution (to dot old), and without that it won't let me monitor.
I tried to understand the thread you sent me, but then it seems I didn't understand the script options on the Process tab, since it should be as easy as initiating another process when the first one finishes. That is how it was explained to me at the Datawatch conference.
Are there no manuals for those options?
As far as Excel, I think you can have an input distribution on spreadsheets. At least in Data Pump 9 you can.
You might also take a step back and see if you can merge your two processes into one.
Both of these options would keep Data Pump at a level you can understand and support.
If you want to go into scripting, keep it simple, or be prepared to learn VB .Net. I've not ventured into the latter myself.
Bill Watson, in the post I mentioned, would probably be the best one to help with exactly which pieces of his code kick off a process. It seems to be PumpAPI.StartProcess(strCurrent). But you need to catch exceptions and consider how to handle them. I'm also not sure about the references or imports required. You might consider sending Bill a PM if you want to venture down the VB .NET scripting route. It's powerful stuff if you're willing to learn.
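For what it's worth, here is a hedged sketch of what that call might look like with exception handling. Only PumpAPI.StartProcess(strCurrent) comes from Bill's post; the Try/Catch shape and the logging line are assumptions, and as noted above the required references and imports in the Data Pump script editor are an open question:

```vbnet
' Sketch only -- assumes the Data Pump scripting environment exposes
' PumpAPI, and that strCurrent already holds the process to start.
Try
    PumpAPI.StartProcess(strCurrent)
Catch ex As Exception
    ' How you handle a failure is up to you: log it, retry, or alert.
    ' Replace this with whatever logging the script editor provides.
    Console.WriteLine("Failed to start " & strCurrent & ": " & ex.Message)
End Try
```

This is exactly the "catch exceptions, and consider how to handle them" part: without the Try/Catch, a failed StartProcess call could silently kill the post-process script.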