1 Reply — Latest reply: May 15, 2014 10:11 AM by Olly Bond

    Storing Job information in database based on job status

    july13 _

I'm a newbie to DataPump.

As per our DataPump requirement, we need to save the information for each job (job ID, process name, timestamp, etc.) in a separate table in a SQL Server database, based on whether the job completed successfully or not.


We are creating only one process and one project for the input file. We use these tabs, specifying the project file and model file:

a) Input (supplying the input files for processing),

b) Input Distribution (moving the files to another location),

c) Export (converting the input file to a specific format in another location).


      These are my questions.


1) Where should we write the script - in Post Process or Post Export?

2) How do we get the job information (job ID, etc.)? Sample code would be appreciated.

3) Is there an easy way to test the database connection?





        • Storing Job information in database based on job status
          Olly Bond

          Hello July,


The Custodian would let you set up a distribution for the job log files so you don't end up with thousands in one folder, and it keeps the job logs separate from the application if that's what you need.


But another approach would be to use Datawatch to read its own job logs on a periodic basis - say, daily: a DataPump process with one project that opens *.xml in the job logs folder, a model that checks the XML content to see whether the job was successful, and an OLE DB export to a SQL database of your choice. An input distribution can then store the source XML files in an archive.
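The periodic-sweep idea can be sketched outside DataPump too. The snippet below is only an illustration, not the actual model or export: the job-log element names (JobID, Process, Time, Status) are guesses - inspect a real log file for the true schema - and sqlite3 stands in for the SQL Server target.

```python
# Hedged sketch: parse a job-log-style XML file and record the job's outcome.
# Element names below are assumptions, not the documented DataPump schema.
import sqlite3  # stand-in for SQL Server; use an OLE DB/ODBC connection in production
import xml.etree.ElementTree as ET

def parse_job_log(xml_text):
    """Extract the fields the poster asked about (job ID, process, timestamp, status)."""
    root = ET.fromstring(xml_text)
    return {
        "job_id": root.findtext("JobID"),
        "process": root.findtext("Process"),
        "timestamp": root.findtext("Time"),
        "succeeded": root.findtext("Status") == "Completed",
    }

def store_job(conn, job):
    """Insert one job record; succeeded is stored as 0/1."""
    conn.execute(
        "INSERT INTO job_history (job_id, process, ts, succeeded) VALUES (?, ?, ?, ?)",
        (job["job_id"], job["process"], job["timestamp"], int(job["succeeded"])),
    )
    conn.commit()

# Example job log (invented content, for illustration only).
sample = """<JobLog>
  <JobID>1234</JobID>
  <Process>DailyExport</Process>
  <Time>2014-05-15 10:11:00</Time>
  <Status>Completed</Status>
</JobLog>"""

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE job_history (job_id TEXT, process TEXT, ts TEXT, succeeded INTEGER)"
)
store_job(conn, parse_job_log(sample))
```

In a real deployment the sweep would glob the *.xml files in the job logs folder, run each through `parse_job_log`, and let the input distribution archive the source files afterwards, as described above.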


          If you would like a sample model, do feel free to drop me an email.


          Best wishes,