I have a report with inconsistent alignment of data. I trapped the date field (mm/dd/yyyy) using a floating trap but the other fields did not get captured properly. The example below has 2 records. The first record has a debit of $155.65, no credit and no balance. The second record has no debit, a credit of $981.10 and no balance. The debit and credit fields overlapped because of the misalignment. Each record is on one line. Any help will be appreciated.
I am using Monarch V7 on my laptop when I am not in the office. Monarch V7 Pro is on the office server.
Report Column Heading:
10001 APY AP00002632 08/30/2004 8 AP Accruals AP Accruals 155.65
10005 GEN FEPW000100 06/30/2004 6 JPANTEN September FEP JEs BANK CORR DEP-4/21/04ENC 981.10
Hi Ed,
An interesting one!
Is there anything else on the line that indicates if the value is a debit or credit (or balance)?
Can you get everything else you need from the line? You seem to be suggesting that only the fiscal values are a problem, but I thought I had best check.
My initial thought is that you select a large field that covers both the credit and debit values as a text field. (However, when I try to cut and paste the sample posted I tend to get an overlap on more than just the numeric values, so I am not sure if you also have a problem with the descriptions; if so, there might be a benefit in handling that 'problem' the same way.)
You then run a clever assessment of the contents of the field using the IF() function to ascertain whether the field is a debit or credit and populate a calculated field for each according to the result.
One way to analyse the string might be to assess where the decimal point is to be found. If it is in the first xx characters then it must be a debit, but if it falls after that it must be a credit. You can probably be much more precise about the likely positions. The function to use is INSTR().
So a calculated field to give a debit NUMERIC value might be
[font="courier"]IF(INSTR(".",FIELD)>17,VAL(FIELD),0)[/font]
Bear in mind I have not tested those formulae but they should be close to OK and might give you some ideas even if they are not 100% right!
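The same decision can be sketched in Python to make the logic easy to test outside Monarch. This is a hedged illustration only: `classify_amount` is my own name, and both the threshold of 17 (taken from the formula above) and which side of it means debit versus credit are assumptions you would tune against the real column positions in your report.

```python
# A Python restatement of the decimal-point idea; a sketch, not Monarch syntax.

def classify_amount(field: str, threshold: int = 17):
    """Return a (debit, credit) pair from one wide text field that
    spans both columns, based on where the decimal point falls."""
    pos = field.find(".") + 1        # 1-based position, like INSTR(); 0 if absent
    if pos == 0:
        return (0.0, 0.0)            # no amount on this line
    value = float(field.strip())
    if pos <= threshold:
        return (value, 0.0)          # point falls early: debit column
    return (0.0, value)              # point falls late: credit column
```

In Monarch you would build two calculated fields (debit and credit) with mirrored conditions; the sketch simply returns both at once.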
I may have missed some detail that makes the specific suggestion above a lost cause - but the general approach should still be worth considering.
I hope this helps.
Thanks, Grant, for the suggestion. I was unable to use the INSTR function because the long string that would contain the debit and credit fields (the balance field is not on the detail record line; it is only on the period subtotal line) has text from the previous field (Line Description), so it would not convert into a proper number.
I worked on the trap after I posted this message and was able to get the fields up to Per (Accounting Period), but can't get any valid fields after that.
Thanks for taking the time to answer my post.
OK, I suspected that might be the case.
Does the real report look anything like the representation in my post?
Can the text from the description interfere in both credit and debit lines? (If so, I assume that the entire right-hand side of a line may shift according to the length of one or more description fields?)
All is not lost! Not yet anyway.
If the position of the decimal point can be used to work out whether the value is a credit or debit, you still have a chance, though things may get a little more complicated and the order of processing may vary; on a small sample it is not always easy to work out which order gives the most direct results.
What you need to do is remove the text part of the calculated field before converting the rest to a value.
So, let's say your field contains the following string:
[font="courier"]/04ENC 981.10 [/font]
There are 2 problems, one obvious, one less obvious.
The less obvious one is that there are trailing spaces at the end of the string, which may be annoying if we don't have a strategy to get rid of them. The more obvious one is that you have your problem alpha characters.
There are several possible approaches. If you have an appropriate version of Monarch (as you do iirc) you could apply the STRIP function to remove all alpha characters from the field before converting to a numeric field using VAL()
So the formula would be something like
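The exact Monarch formula did not survive in this post, so as a hedged Python illustration of the STRIP-then-VAL idea (my own naming, not Monarch syntax): note that the sample field "/04ENC 981.10" contains digits inside the stray text, so stripping alpha characters alone would not be enough there; this sketch assumes the unwanted text is purely alphabetic.

```python
def strip_alpha_to_value(field: str) -> float:
    """Drop alphabetic characters (the STRIP step), then convert
    what is left to a number (the VAL step). Assumes the unwanted
    text contains no digits or punctuation of its own."""
    cleaned = "".join(ch for ch in field if not ch.isalpha()).strip()
    return float(cleaned) if cleaned else 0.0
```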
Another approach would be to break the field in two using a SPLIT function. In this case an RSPLIT using spaces as the split character seems a likely method and should get fairly close from what I can see.
One problem would be any trailing spaces, as mentioned above. The RSPLIT will use the first space it finds from the right, which in that case may be a trailing space. RTRIM will remove trailing spaces and can be used before the field is processed.
(The above is unchecked and may have errors, but should give the idea of the approach.)
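The RTRIM-then-RSPLIT route can also be sketched in Python (again a hypothetical illustration with my own function name, not the Monarch formula, which is not reproduced in this thread):

```python
def last_token_value(field: str) -> float:
    """RTRIM first so a trailing space is not mistaken for the split
    point, then take the text to the right of the last remaining
    space (the RSPLIT idea) and convert it to a number."""
    trimmed = field.rstrip()                 # RTRIM: drop trailing spaces
    token = trimmed.rsplit(" ", 1)[-1]       # rightmost space-delimited piece
    try:
        return float(token)
    except ValueError:
        return 0.0                           # nothing numeric on the line
```

This tolerates digits inside the description text, because only the rightmost token is ever converted.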
If the position of the field, and the data required from it, can vary randomly and significantly according to the length of descriptions in earlier fields, you may have a more complex problem to resolve. For instance, the decimal point position may not be a useful indicator of the transaction type by its absolute position in the field, and a relative position comparison may be required.
There may be the possibility that a floating trap based on the decimal point position would give you better control over the identification of the other fields but not help with the credit/debit categorisation.
There are still a number of other approaches we can call on if the report will not submit to those above. Is the 2-line representation in the "code" section above an accurate layout for the report you are dealing with? If so, I can experiment with it. If not, it would be good to see something more accurately representing the report. I appreciate that may be difficult to post, but I can provide you with an email contact point if necessary.
Great tips. I am familiar with the INSTR, left trim and right trim functions but never would have thought to use the STRIP and SPLIT functions.
I was wrong in my previous post when I said I was able to get the proper fields up to the Per field. I was using mm/dd/yyyy, blanks and alpha in my floating trap and believed that I had the proper fields. This was only true for about the first 60 pages of the report. When data appears in the Dept field, the trap no longer works. I found this when I was doing a field boundary check.
The debit or credit field only appears once for each record as I can see on the report. There is only one decimal point for the printed field while the absent field is blank.
One thing I discovered is that the debit and credit columns are in alignment with the page column heading. However, this changes from page to page.
If you don't mind, I'd like to email you more records to see if you could point me in the right direction.
Hey Grant, I'm having the same issue with one of my co-workers... Every time he opens a report everything is misaligned... He just received a new PC, and ever since then everything has been messed up.... Help...
Hi MESQ,
That sounds different to Ed's problem. His report really does have data which should be in columns but moves about on the rows very inconsistently.
If your Co-worker's problems have only started since the new PC arrived and reports which were once OK now have problems that is a very different problem.
If the reports are MS Office document types or something similar it might be worth checking that the fonts installed on the new PC match the fonts on the old one and the font originally used for the document. Or that font substitution is suitable.
If they are not Office type reports then some other effect may be the cause.
I think it might be best to open a new thread and keep your question separate from Ed's, to avoid any confusion between what could be two different solutions to two different problems.
A quick update with Ed's permission following some off line discussions.
Ed's report really is rather problematic in that the data can shift left in the row by one, two or three characters, seemingly randomly. By the time we get to deal with the right-hand side of the report the shift can, on a few lines, be even greater than that.
Some fields, mainly to the left of the report lines, seem to map OK as they are. But by the time we get to the middle of the line it is time to think about selecting a block of obviously separate fields as a single field (because the shift can cause a few of them to overlap if we did not) and then use 'slice and dice' ideas (explained in the Monarch Advanced Training courses) to break them back into separate fields.
Using the same ideas for the right-hand columns of the report it was possible (well, almost!) to work out which column the shifted values belonged to, based on the position of the decimal point (period) in the numeric value. This would have been fairly simple until we discovered that some of the Long Descriptions also contained periods, and as they had to be included in the block of data that required slicing, there was no option but to make the formula a little more complex in order to eliminate the text from the equation but include the number of characters in the text as part of the positional calculation.
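One possible reading of that calculation, sketched in hypothetical Python (the real Monarch formula was more involved and is not shown in the thread): isolate the numeric tail of the sliced block, then add the description's length back so that the amount's own decimal point, not any period inside the description, drives the column test. The threshold is a placeholder to tune against the real report.

```python
def amount_decimal_position(field: str) -> int:
    """1-based position of the amount's decimal point within the whole
    sliced block, ignoring any periods inside the description text."""
    trimmed = field.rstrip()
    amount = trimmed.rsplit(" ", 1)[-1]      # numeric tail only
    text_len = len(trimmed) - len(amount)    # description plus the gap
    return text_len + amount.find(".") + 1   # add the text length back

def column_for(field: str, threshold: int = 17) -> str:
    """Classify by where the amount's decimal point falls."""
    return "credit" if amount_decimal_position(field) > threshold else "debit"
```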
Whilst this seemed to work quite well, we ended up discovering a couple of records (there could be more, and of course there is always the risk of others if the process is used for several iterations of the report) where the decimal point position could suggest the value should apply to either column.
That meant we needed to find something else on the line that would help to qualify the value as a debit or credit, and also a cross-check that the summary calculated values from the Monarch output matched the summary values from the report.
If that turned out to be impossible, then 'doubtful' flags would need to be set against the extracted records, with a cross-check of the sub-total values followed by operator review and allocation of the values where necessary.
All of this is quite possible but a little time consuming to set up and test.
At this point Ed identified that the report file we were working with was a text extract from a PDF file. That explains the peculiar and inconsistent offsets of data, but it also suggests that the original PDF might be an excellent target for a Monarch V8 Pro attack; it might well offer up a better interpretation of the PDF than the text export.
So hopefully that will be the next step: test the PDF against Monarch V8 Pro and hope to avoid quite a lot of complex development. Unfortunately, as this is confidential information, the undisguised PDF file cannot be released for external testing. Hopefully Ed will be able to resolve access to V8 Pro soon so that the next-stage options can be worked out.
Further updates expected in the future.