There may be a simpler way, but I would probably first apply a filter for only the duplicated rows and export that to Excel (or whatever will accommodate the table). I assume you could also export your summary.
Then go back to the original table and take the filter off completely. Do an external lookup to the table you just exported and match using whatever is the key identifier.
The records that don't find a match should be all of the single-occurrence records.
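Monarch handles all of this through the GUI, but for anyone who wants to see the logic spelled out, here is a rough sketch in plain Python (the `Account` field name and sample rows are just placeholders):

```python
from collections import Counter

# Hypothetical source table keyed on an "Account" field;
# any key identifier works the same way.
rows = [
    {"Account": "A001", "Amount": 100},
    {"Account": "A002", "Amount": 50},
    {"Account": "A001", "Amount": 75},
    {"Account": "A003", "Amount": 20},
]

# Step 1: the "filter for only duplicated rows" -- collect the keys
# that occur more than once (this is what gets exported to Excel).
counts = Counter(r["Account"] for r in rows)
duplicated_keys = {k for k, n in counts.items() if n > 1}

# Step 2: back in the unfiltered table, keep the rows whose key does
# NOT match the exported duplicates -- the single-occurrence records.
singles = [r for r in rows if r["Account"] not in duplicated_keys]
```

The "external lookup" in Monarch plays the role of the `not in` test here: rows that fail to match the duplicates table are the singles.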
Hi pmcfeely and welcome to the forum.
Version 9 would give you all you need (as far as I can tell), but in its absence you could try this with V8.
Assuming that your Account field is the primary key in your summary and you have a count measure, set the field properties for the key summary field to MATCH (via the Matching tab) the MEASURE options, and set that to AT MOST with a value of 1.
That should give you all accounts that only have a single occurrence in the source data table.
The rest will be collected under an 'All Others' grouping or groupings at the bottom of the summary.
Hopefully this segregation will be enough for your needs.
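In case it helps to see the effect of that AT MOST 1 setting in isolation, here is a minimal sketch of the segregation it produces (sample account values are made up):

```python
from collections import Counter

# Hypothetical account values from the source data table.
accounts = ["A001", "A002", "A001", "A003"]
counts = Counter(accounts)

# Accounts with at most 1 occurrence keep their own summary line;
# everything else collapses into an 'All Others' grouping.
summary = {k: n for k, n in counts.items() if n <= 1}
summary["All Others"] = sum(n for k, n in counts.items() if n > 1)
```

The single-occurrence accounts appear individually, and the duplicated ones are lumped together at the bottom, which is the segregation described above.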
If you need to automate the process and ensure the 'All Others' values are dropped, a two-stage process might be worth considering. Brad has outlined one approach. Another might be to create a summary with the count by summarised account value included as a field. Export that as a new report/database and then read it into Monarch as a new report, selecting the count field (now read as data rather than calculated) as the filter field.
I would suggest a filter of either =1 or >1. (You have already dealt with duplicates, of course, but this would be an alternative approach.) The process could be automated most easily using project exports and a batch script.
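The two-stage idea can also be sketched in plain Python to show why re-reading the count as data makes the filter trivial (field names are placeholders):

```python
from collections import Counter

# Hypothetical source rows keyed on "Account".
rows = [
    {"Account": "A001"}, {"Account": "A002"},
    {"Account": "A001"}, {"Account": "A003"},
]

# Stage 1: build the summary with the count included as an ordinary
# field, as you would before exporting it from Monarch.
counts = Counter(r["Account"] for r in rows)
summary = [{"Account": k, "Count": n} for k, n in counts.items()]

# Stage 2: treat that export as plain data and filter on the count
# field -- either =1 (singles) or >1 (duplicates).
singles = [s for s in summary if s["Count"] == 1]
duplicates = [s for s in summary if s["Count"] > 1]
```

Because the count arrives as stored data rather than a calculated measure in stage 2, the filter is a simple comparison, which is what makes the batch-script automation straightforward.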