We are trying to process our models against the interface reports that are created when we interface our Order to Cash system to the General Ledger system, and we are running into a few issues.

In some divisions, the size and complexity result in more than 1,000 reports, and you do not get the same reports every month. Because of variations among the branches within a division, you may get only 10 of the standard 18 reports, since some branches had no entries to populate a particular report. Adding to the variation, almost all of the reports have error reports that can be created when certain criteria are not met. Three of the standard 18 do not have a model of their own, because in most cases there is no data on the report; they only get data if a very specific set of circumstances takes place, and even then there is no need to capture that data for additional reporting.

We are attempting to set up one process that runs the 15 projects we are creating for each division and have it start as a monitored process. This works great for single-branch divisions, but not very well for the multi-branch divisions. Because we are monitoring for all 18 standard reports, because of the delay with which the reports arrive in our monitored file location, and because we do not get all eighteen reports for every branch, we set up the projects to not require file existence. But this is causing issues: a job kicks off every time a file hits, and since some of the files are small and arrive almost simultaneously, we occasionally get an error saying the process is currently in use and cannot be started.

Anyone have any suggestions? We thought about scheduling the run for 12:00 AM the day after the interfaces are processed, so that all the files would be there, but the interfaces always run on the second business day of the month, and I do not see anything in scheduling that would allow us to do that.
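One way around the "process is currently in use" collisions is to stop triggering on every individual file and instead wait for the drop folder to go quiet, then launch the process once with whatever reports actually arrived. Below is a minimal Python sketch of that idea; the folder path, quiet period, and the final trigger step are all assumptions you would replace with your own paths and your actual Data Pump launch mechanism.

```python
import os
import time

def snapshot(path):
    """Map each file name in the folder to its size, so a file that is
    still growing counts as a change, not just a new arrival."""
    return {name: os.path.getsize(os.path.join(path, name))
            for name in os.listdir(path)}

def wait_for_quiet(path, quiet_seconds, poll=10):
    """Block until the folder contents have stopped changing for
    quiet_seconds, then return the sorted list of file names present."""
    last = snapshot(path)
    stable_since = time.time()
    while time.time() - stable_since < quiet_seconds:
        time.sleep(poll)
        current = snapshot(path)
        if current != last:          # a file arrived or grew
            last = current
            stable_since = time.time()  # reset the quiet timer
    return sorted(last)

# Hypothetical usage (path and timing are assumptions):
#   files = wait_for_quiet(r"C:\interface\drop", quiet_seconds=120)
#   -> now launch the Data Pump process once, with all files settled
```

The design choice here is debouncing rather than counting: since you never know whether a branch will send 10 or 18 reports, "nothing new has landed for N minutes" is a more reliable completion signal than "all 18 files exist".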
You don't mention the version of Pump that you're using - I guess 10.5? v12, Automator, has its own Scheduler, rather than relying on the Windows Server scheduler, so it might support more options, but "business day" is a worrying term - as you might need to handle public holidays like 1st January. You can set retry conditions, though.
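If the scheduler you end up with only supports fixed dates, the "second business day" rule is simple enough to compute yourself and feed into whatever kicks off the run. A small Python sketch, assuming you supply your own company holiday calendar (the holiday list below is illustrative, not authoritative):

```python
import datetime

def second_business_day(year, month, holidays=()):
    """Return the date of the second weekday of the given month that is
    not in the supplied holidays collection (Mon-Fri counted only)."""
    day = datetime.date(year, month, 1)
    count = 0
    while True:
        if day.weekday() < 5 and day not in holidays:  # Mon=0 .. Fri=4
            count += 1
            if count == 2:
                return day
        day += datetime.timedelta(days=1)

# Example: January 2024 starts on a Monday that is a public holiday,
# so the second business day shifts from the 2nd to the 3rd:
#   second_business_day(2024, 1, holidays=[datetime.date(2024, 1, 1)])
```

A daily job could call this and only proceed when today matches the computed date, which sidesteps the scheduler's lack of a "business day" option.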
With files being hit "almost simultaneously" I'm a little concerned about the input wildcards for the monitoring. Do you have some overview of the process in a diagram you can share?
Datapump v11.6 and Monarch Pro v11.8
I have never even seen one of these diagrams; the third-party consultants they used to help us with installation and training were less than helpful. I will try to build one, but I am very new to Datapump and believe that I am still in my infancy in my development with Monarch Pro. We have been using Monarch for years, but only as a tool to mine data out of reports to be used in ad hoc analysis. This is our first attempt at automating a significant number of reports, and we are still only trying to create roll-ups of the data at the division and company level. I see many benefits in the future, but we still have a significant amount to learn before we will be able to utilize all the capabilities of the tools we have.
Dean Foods Company
14760 Trinity Blvd
Fort Worth, TX 76155
I'm sorry to hear that the consultants you invested in weren't as helpful as you hoped. You might want to consider asking Monarch Experts to help - details of what we can do are on www.monarchexperts.com. You can order our services directly from Datawatch: just contact your salesperson at Datawatch, or ask for Jamie Menashi, the director of professional services in the US. We can help you with a skills audit, bespoke training, process documentation, and enterprise-level support for your models.