OK Roger, I think I've got a solution.
As you've noted, exports don't do much for you when nothing matches the detail template trap, so here's what I propose.
First, build a model with a filter that uses the Advanced tab to limit the data to only the first record. Now set up your Table window to display only one field; a calculated field using RecNo() would be best to ensure that you've got something showing. Under Options->Export and Clipboard..., turn off the option to export the field names. Now build a project export to send the Table window out to "test.txt", overwriting the existing file in all cases.
Next build a model/project that uses your normal footer data as the detail template, and adds the page header to that data.
Tie it all together with a batch file similar to:
rem run the test project built above, which outputs a single field without field names to test.txt
rem (the project name here is just whatever you saved it as)
Monarch.exe /prj:"Test project.xprj" /pxall
rem read the first line of test.txt; testvalue stays empty if nothing was exported
set "testvalue="
set /p testvalue=<test.txt
if "%testvalue%"=="" (
    Monarch.exe /prj:"Special Project to Force Header and Footer data.xprj" /pxall
) else (
    Monarch.exe /prj:"Regular data project.xprj" /pxall
)
If no data was exported, the batch file runs the special header/footer project; otherwise, "regular" data exists and your normal detail template project runs.
Fix up the filenames/paths as necessary, and you should be OK.
Any further ahead?
Thank you for the suggestion, but I ended up taking a different approach:
The statement looked something like this:
CITY ST ZIP
AMT DUE: 34.03
LINE ITEM DATE (spaces) LINE ITEM (spaces) AMOUNT
(up to 30 line items)
AMT DUE: 34.03
Fortunately for me, there were 30 lines of spaces regardless of whether there were line items or not.
So I made "ACCT:" the detail item and made the first "AMT DUE:" the footer portion, stretching it down to the bottom "AMT DUE:". I then took the first line item date, line item, and amount, and did a 30-line trap for each one.
I then made 3 calculated fields, each using a formula. The first one was called "line_item_date_delimited", and its formula looked like this:
trim(textline(line_item_date,1)) + "|" +
trim(textline(line_item_date,2)) + "|" +
trim(textline(line_item_date,3)) + "|" +
trim(textline(line_item_date,4)) + "|" +
trim(textline(line_item_date,5)) + "|" +
all the way to 30.
I did this for each line item set.
The resulting fields looked like this:
pool service|replace filter|new pump| etc...
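(As a quick way to see the idea outside of Monarch: any tool that splits on the delimiter turns that field back into one line per item. A Unix-style shell sketch, with a made-up sample value:)

```shell
# split a pipe-delimited field back into one line per item
echo 'pool service|replace filter|new pump' | tr '|' '\n'
```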
From there, I import the data into a different database using a program I wrote in C#: I call a stored procedure to get the data, fill a DataTable, and loop through each record using Split() like this:
//use the predefined stored proc that gets the data from the database that Datapump put it in
DataTable dtMasterData = dbclass.GetImportedDatapumpData();
for (int i = 0; i < dtMasterData.Rows.Count; i++)
{
    //grab the delimited fields and call .Split(), which takes one parameter: the character to split by
    //the resulting string arrays have each line in its own element
    string[] itemdescs = dtMasterData.Rows[i]["line_item_descs_delimited"].ToString().Split('|');
    string[] itemdates = dtMasterData.Rows[i]["items_delimited"].ToString().Split('|');
    string[] itemtotals = dtMasterData.Rows[i]["amounts_delim"].ToString().Split('|');

    //now we loop through, up to 30 times, and add each iteration to the database
    for (int e = 0; e < 30; e++)
    {
        //if the description is blank, it is a blank line; once one blank line is hit,
        //every line after it will be blank through 30, so just exit the loop
        if (itemdescs[e] == "")
            break;

        //otherwise, call the predefined method I wrote that calls the stored procedure
        //to add this information to a different database
        dbclass.AddToDatabase(DateTime.Now, itemdates[e], itemdescs[e], itemtotals[e]);
    }//end loop 30 times
}//end loop through each record
That C# portion could be used as a "script after" if I wanted, I guess, but it would have taken more work to rewrite the "dbclass" methods and set up the connection strings and whatnot.
I guess my real question was just whether there is some kind of check box in the Monarch software called "Minimum 1 detail per record set" or something that could force at least one blank line item per set of data if there are no details. This approach required some recoding in my program, and the model had to be redefined from scratch.
Maybe this can be thrown on to the list of "some day" features!
Thanks for the help and timely response!
I think the problem is that you really always needed to think of the Acct as the detail record for your data upload. In effect you had two load requirements - not uncommon.
One simple approach is along the lines of creating 2 extracts to the same record format, thus creating a 'dummy' empty record for any zero-value accts (presuming no order lines) and throwing away the non-zero records.
Then a full extract based on your original detail line identification.
Concatenate the two outputs (and maybe sort the file) and you have your input file (or analysis file or whatever you want.)
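As a rough sketch of that concatenation step (Unix-style shell here; on Windows it's copy /b file1 + file2 out.txt - and these filenames and record values are just made up for illustration):

```shell
# made-up sample extracts: one dummy empty record for a zero-value acct,
# plus the full detail extract
printf 'ACCT002|0.00|\n' > zero_accts.txt
printf 'ACCT001|34.03|pool service\n' > detail_lines.txt

# concatenate the two outputs and sort so each acct's records group together
cat zero_accts.txt detail_lines.txt | sort > combined.txt
cat combined.txt
```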
Just another way of approaching the problem.