The fact that Data Pump sees a Process as a whole is by design. If the Projects you wish to run are completely independent of each other, you should probably create them as separate Processes, unless you are sure that all prerequisites will be met or are configured correctly.
In the case you mention, where the first Project creates a file that is consumed by the second Project, the correct way to handle this is as you describe: specify the second (and any subsequent) Project's input files as not required. Data Pump must evaluate all Process criteria at the start of a Process in order to decide whether it can go ahead, so a required input file that does not yet exist will fail the whole Process before the first Project ever runs.
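Conceptually, the start-of-Process check behaves like the sketch below. This is a generic illustration with hypothetical names (can_start, the projects structure), not Data Pump's actual implementation: a required-but-missing input file blocks the whole Process up front, while a file flagged not required is simply skipped even though a later Project will only create it at run time.

```python
import os

def can_start(projects):
    """Return True if every *required* input file already exists.

    Files marked not required are skipped, even if another Project
    will only create them while the Process is running.
    """
    for project in projects:
        for path, required in project["inputs"]:
            if required and not os.path.exists(path):
                return False  # a required input is missing up front
    return True

projects = [
    {"name": "first.xprj",  "inputs": []},  # no input files
    # This file is produced by first.xprj, so it is flagged not required:
    {"name": "second.xprj", "inputs": [("first_project_output.csv", False)]},
]
# Passes the up-front check even though the file does not exist yet.
print(can_start(projects))
```

If the same file were flagged as required, the check would fail before the first Project ran, which is exactly the failure described in the quoted question.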
This will work with either the "one Project per job" or the "multiple Projects per job" setting in the Process properties.
In the case you mention, I suggest you consider the "multiple Projects per job" setting, if all the first Project does is create a file that is used as an input for the next and then complete. This spawns only one job instead of two, so you have a single log for the entire Process.
With "one Project per job", a job is spawned for each Project; the jobs run sequentially but are not interrelated. You should have no problems using this setting in the scenario you describe, unless you have also selected the "Allow parallel jobs" setting.
If you enable "Allow parallel jobs", Data Pump processes jobs (in most cases, this means Projects) in parallel, so in your situation both jobs would start at the same time instead of the second waiting for the first to complete. You need to ensure that this setting is not enabled for this kind of scenario.
If this setting is not enabled, then Data Pump will process Projects sequentially in the order they appear in the Process, and will wait for the first Project to complete before starting the second one.
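The difference between the two behaviours can be illustrated with a generic scheduler sketch. All names here (run_job, run_process, allow_parallel_jobs) are hypothetical, chosen to mirror the setting names above; this is not Data Pump's internal code:

```python
from concurrent.futures import ThreadPoolExecutor

def run_job(name, log):
    # Stand-in for actually exporting a Project.
    log.append(f"start {name}")
    log.append(f"finish {name}")

def run_process(jobs, allow_parallel_jobs):
    log = []
    if allow_parallel_jobs:
        # Jobs start together, so a job that consumes another job's
        # output file may run before that file has been created.
        with ThreadPoolExecutor() as pool:
            for future in [pool.submit(run_job, j, log) for j in jobs]:
                future.result()
    else:
        # Jobs run one at a time, in the order they appear in the Process,
        # so the second Project's input file exists before it starts.
        for j in jobs:
            run_job(j, log)
    return log

print(run_process(["first.xprj", "second.xprj"], allow_parallel_jobs=False))
```

With allow_parallel_jobs=False the log always shows the first job finishing before the second starts, which is the ordering the file-dependency scenario relies on; with it set to True the start entries can interleave.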
So to sum up: what you have described will work correctly if the Projects appear in the Process in the correct order and the "Allow parallel jobs" setting is not set. You also need to ensure that for any input file that does not exist at the start of the Process, the input file setting is set to not required.
Additionally, it may be better to use the "multiple Projects per job" setting in the Process properties.
I see 2 problems in trying to use DP8. Using several xprj's in one Process, I attempted to run the job. DP8 looks at the entire Process as a whole instead of looking at each xprj individually. My job failed as a whole, but it really failed on the second xprj. For example, the 2nd xprj in my Process was incorrect, so the first xprj should have completed and only the second one should have failed.
Also, under the input tab/file existence criteria, the item 'not required' does not seem to be working. My Process has 2 xprj's associated with it. The second xprj exports 2 files. My first xprj produces the input file for my second xprj. I did not have the input file existing already, so I flagged it 'not required'. I did this because my Process was failing due to the input file not being present. I finally just ran the Project manually through Monarch Pro to get the input file, and then my Process worked. Your input files should only HAVE to exist for your first xprj, but that is not the case.[/quote]