Welcome to the forum!
I think you're looking at a two-pass solution to your challenge.
In the first pass, you'll create a list of items that appear more than once across your reports. To do this, in the Table window, create a filter named Duplicates. On the Advanced tab, select "Duplicated rows" and "all duplicated rows", then check the field you want to monitor in the "Specify keys" list box.
Click OK to close the filter and export the table to an Excel file (if you don't have too many rows - export to an Access table if you do). Add this export to the project exports. Save the model and the project.
Close everything and connect to your exported table as a database (Open Database...). After the records have been imported, create a filter which only shows records where the date is today.
Those will be the records you're after: today's items that also appeared in previous days' activity.
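Outside Monarch, the same two-pass idea can be sketched in plain Python. The record layout and field names below are purely illustrative stand-ins for your report rows:

```python
from collections import Counter
from datetime import date

# Hypothetical records standing in for the report rows; field names are illustrative.
records = [
    {"item": "A100", "date": date(2007, 6, 26)},
    {"item": "A100", "date": date(2007, 6, 27)},
    {"item": "B200", "date": date(2007, 6, 27)},
]

# Pass 1: count occurrences of the key field, then keep "all duplicated rows".
counts = Counter(r["item"] for r in records)
duplicates = [r for r in records if counts[r["item"]] > 1]

# Pass 2: of the duplicated rows, keep only those dated today.
today = date(2007, 6, 27)  # stand-in for Today()
todays_dupes = [r for r in duplicates if r["date"] == today]
```

Pass 1 corresponds to the Duplicates filter and export; pass 2 corresponds to the date filter applied after re-importing the table.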
Edit: A certain Most Honorable Guru suggested to me that a compound filter might eliminate the need for a two-pass process. So, with that in mind, create your Duplicates filter. Then create a Today filter (MyDate = Today()). Both of these are formula-based.
Now create a compound filter joining Duplicates AND Today.
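The compound filter amounts to evaluating both conditions in a single pass. A rough Python equivalent, again with hypothetical field names:

```python
from collections import Counter
from datetime import date

# Illustrative records; field names are assumptions, not Monarch's actual fields.
records = [
    {"item": "A100", "date": date(2007, 6, 26)},
    {"item": "A100", "date": date(2007, 6, 27)},
    {"item": "B200", "date": date(2007, 6, 27)},
]
counts = Counter(r["item"] for r in records)
today = date(2007, 6, 27)  # stand-in for Today()

# Compound filter: Duplicates AND Today, applied together in one step.
result = [r for r in records if counts[r["item"]] > 1 and r["date"] == today]
```

The combined condition yields the same rows as the two-pass approach without the intermediate export.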
Let us know if this works out for you.
[ June 27, 2007, 05:26 PM: Message edited by: Data Kruncher ]
I'm having a similar problem but I'm working with one flat file that is 334MB in size.
I have accounts with the account holder's name and the amount owed. But a customer can have sub-accounts, and I'm trying to filter out just the duplicates.
I've been running the "All Duplicated Rows" function for the last 5 hours with no end in sight.
I have run other filters requesting similar information, and they have taken less time than the duplicate filter does.
Is there something I can do to speed up the process? Am I even using the Duplicated Row feature correctly?
Any help you can give is appreciated.
Sounds like a large file - does that also mean it will produce a large number of records?
As with all such situations, the more records the report produces, the longer it will take; beyond a certain number, the processing time is likely to increase exponentially.
How many duplicates are you likely to be trying to process? How precise will the link be?
Could you try filtering the extraction first so you can test the principle with a smaller working set of data?
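The suggestion above, restricting the input before hunting for duplicates, can be sketched like this. The account fields and the threshold are hypothetical, just to show the shape of the idea:

```python
from collections import Counter

# Hypothetical account records; in practice these would come from the large file.
records = [
    {"account": "1001", "holder": "Smith", "owed": 50.0},
    {"account": "1001", "holder": "Smith", "owed": 25.0},
    {"account": "2002", "holder": "Jones", "owed": 10.0},
    {"account": "3003", "holder": "Lee",   "owed": 99.0},
]

# Pre-filter first (e.g. only balances at or above an assumed threshold)...
subset = [r for r in records if r["owed"] >= 20.0]

# ...then run the duplicate check over the smaller working set.
counts = Counter(r["account"] for r in subset)
dupes = [r for r in subset if counts[r["account"]] > 1]
```

Testing on a reduced set like this lets you confirm the duplicate logic is right before paying the full-file processing cost.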
Just some thoughts that might suggest ideas.
I could filter my file down further than it already is, and I think that would leave about 20 to 30K records.
I have run other filters in the past that haven't taken this long, so I'm thinking that asking for all the duplicates is what's clogging things up.
In response to DC...I have upgraded to v.9.