Thank you for the reply. I was worried that I was not doing a good job of explaining my problem and it seems that I was right.
&[source.name] gets the name of the PDF and works just fine when my job is processing a single input file. But when there are multiple files in my input folder, the exports from the models are all appended to a single file named after the first input file.
If my input files are File1.pdf and File2.pdf, the export file will be File1.txt (but I want two export files, File1.txt and File2.txt).
I'm sorry. I will try to explain one more time and provide additional information so that maybe the issue will be clearer.
The input file (*.pdf) is a report that has four distinct sections. Each section has very different data. This is the reason for the four projects. Each project extracts data from a different section of the same report. I want to combine the outputs of the projects into a single export file. This works fine if there is a single input file. If there are two input files in my folder when I run the process, however, the exported data from both ends up in a single export file and this is a problem.
So, to sum up, this is the problem: if there are multiple input files in the folder when I run the job, I get the data from all of them combined in a single export file. I want multiple export files!
A requirement of the project is that there be one export file for every input file. Since I have literally thousands of files to process, running each one manually is really not an option. On a side note, I am no longer using the &[source.name] macro to name my export file and am now using the &[counter] macro instead. The goal remains the same, however: to get one export file for every input file.
Aha. If you define Name as I suggested above, and make that the hidden first key value of a summary export, and select the option to create distinct export files for each value of the first key of the summary, you should be OK.
If you're currently exporting a table and not a summary, using a calculated field based on Recno() as an Item field will help you avoid any unwanted aggregation.
I can't be exactly sure of the solution, but I'll explain a couple of options that are relevant and see if that helps.
In the General tab, there are two options: "one project per job" and "multiple projects per job".
"one project per job" will create a new job for each project in the process and execute them in parallel up to the number specified in the General Settings.
So, if 10 input files appear, you will get 10 jobs spawned, each with its own job log. Each project in the list is treated as independent of the others, and execution order is not maintained.
"multiple projects per job" will create a single job encompassing all projects and execute them in sequence; a typical use is that the third project in the process reads the combined outputs of the first two.
From your description, I would imagine you need the "multiple projects per job" setting, which would iterate through your list of projects sequentially.
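To make the sequencing idea concrete, here is a rough sketch in Python of what "multiple projects per job" behaves like. This is purely illustrative and not Monarch's actual implementation; the project names and the run_project helper are hypothetical stand-ins.

```python
# Illustrative sketch only: models the "multiple projects per job" behavior.
# Project names and run_project() are hypothetical, not Monarch APIs.

def run_project(name, inputs):
    """Stand-in for executing one project; returns its export lines."""
    return [f"{name} processed {item}" for item in inputs]

def run_job(projects, input_files):
    """One job runs every project in sequence over the same inputs.
    A later project can read the outputs of the earlier ones."""
    outputs = {}
    for project in projects:
        if project == "Combine":  # hypothetical final project reading earlier outputs
            combined = [line for out in outputs.values() for line in out]
            outputs[project] = run_project(project, combined)
        else:
            outputs[project] = run_project(project, input_files)
    return outputs

result = run_job(["Section1", "Section2", "Combine"], ["File1.pdf"])
```

The key point is that everything happens inside one job, in list order, which is what lets a downstream project consume the combined upstream exports.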
In your project, there is also an important input setting called "each in its own job".
This is in the Input tab of XPRJEditor, in the "Grouping of multiple files" section.
The grouping of multiple files works as follows:
"all in a single job"
Typically, you have multiple instances of the same report type (i.e., files that fit the model) that you want to load and aggregate, producing a single export that combines all the files.
"each in its own job"
You have multiple instances of the same report type that you want to load individually and produce an export from each individual input file.
Note that there are some complexities with the interaction of these options in multiple project scenarios, so check out the "using multiple projects in a process" topic in the help.
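To contrast the two grouping modes described above, here is a small illustrative Python sketch. It only models the observable behavior (one combined export versus one export per input file); Monarch's internals are of course different, and the function names are made up.

```python
# Illustrative sketch only: contrasts the two grouping modes.
# Function names are hypothetical; exports are simulated as dicts
# mapping export filename -> list of exported lines.

def export_all_in_one(input_files):
    """'all in a single job': every input is aggregated into one export."""
    combined = [f"data from {f}" for f in input_files]
    return {"combined.txt": combined}

def export_each_own_job(input_files):
    """'each in its own job': one export file per input file."""
    exports = {}
    for f in input_files:
        name = f.rsplit(".", 1)[0] + ".txt"  # File1.pdf -> File1.txt
        exports[name] = [f"data from {f}"]
    return exports

files = ["File1.pdf", "File2.pdf"]
```

With files = ["File1.pdf", "File2.pdf"], the first mode yields a single export, while the second yields File1.txt and File2.txt, which matches the one-export-per-input requirement in this thread.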
Thank you, Gareth!
After specifying the grouping of multiple input files (in the XPRJEditor) as "each in its own job", I am now getting multiple export files. Truth be told, I tried this before but was getting an error message when the job started ("combination of xprjs whose input specifications are incompatible") and gave up. But I think the cause was that I had different input orders specified, and once I fixed that, it all worked like a charm.