4 Replies Latest reply: May 15, 2014 10:10 AM by Grant Perkins

    How to sort summary based on count

    MonUserCJ _

      Hey all,


      I'm trying to view the records in a report with duplicate names. I made a calculated field of full name with the concatenated last name, first name, and middle initial. Then, I made a summary based on this field.


      I want to only view items from the summary that have a count greater than or equal to 2 (or at least, I'd like the entries with a count greater than one to appear near the top). I'm not sure how to do this. Is there a way? I'd appreciate any help anyone can give.



        • How to sort summary based on count
          Olly Bond



          Yes. For the key field, click on the Matching tab and specify a measure limit requiring Count() to be strictly greater than 1.


          You can also sort by the value of the measure in descending order.
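          The filtering and sorting described above happen inside Monarch's summary dialog, but the underlying logic can be sketched outside Monarch. A minimal Python analogue (the records and name format here are invented for illustration):

```python
from collections import Counter

# Hypothetical records: (last name, first name, middle initial)
records = [
    ("Smith", "John", "A"),
    ("Smith", "John", "A"),
    ("Jones", "Mary", "B"),
    ("Smith", "John", "A"),
    ("Doe", "Jane", "C"),
]

# Build the concatenated full-name key, like the summary's calculated field
counts = Counter(f"{last}, {first} {mi}" for last, first, mi in records)

# Keep only names appearing at least twice, sorted by count descending
duplicates = sorted(
    ((name, n) for name, n in counts.items() if n >= 2),
    key=lambda item: item[1],
    reverse=True,
)
print(duplicates)  # [('Smith, John A', 3)]
```

This mirrors the two steps in the reply: the count filter (n >= 2) and the descending sort on the measure.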





          • How to sort summary based on count
            MonUserCJ _

            Hey all,


            Thanks for the replies. The more that I look into summaries, the more I realize I have a lot to learn.


            I have figured out how to limit summary results to items that appear a certain number of times by count. However, I'm still not clear on how to sort a summary by count. It seems like I'd add count as an item (as opposed to a key or a measure) to do this, but this option is grayed out.


            What I'm trying to do is look for duplicate names in a report of demographic data. The names are separated into first and last. I can create a calculated field combining them and impose the count constraint on that, but I wasn't clear on whether I could instead add last name as the primary key and first name as the secondary key. Does anyone know if there is a way to do this? I'd appreciate any advice anyone can give.
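            The two-key idea above amounts to a composite sort, with last name as the primary key and first name as the secondary key, so duplicate pairs land next to each other. A sketch in Python with made-up data:

```python
from itertools import groupby

# Hypothetical records: (last name, first name)
records = [
    ("Smith", "John"),
    ("Adams", "Zoe"),
    ("Smith", "Alice"),
    ("Adams", "Zoe"),
]

# Sort by last name first, then first name: the composite key
records.sort(key=lambda r: (r[0], r[1]))

# After sorting, duplicate (last, first) pairs are adjacent,
# so grouping adjacent equal rows finds them
dupes = [name for name, grp in groupby(records) if len(list(grp)) >= 2]
print(dupes)  # [('Adams', 'Zoe')]
```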

              • How to sort summary based on count
                Grant Perkins

                I think you are better off making a single key via the LastName+FirstName route (plus another including middle initial(s), maybe) and making sure that any potential data entry anomalies (excess spaces, punctuation, etc.) are dealt with. Add a check for possible anomalies using whatever 'rules' you suspect are necessary. The usable rules may only become clear as you work with the data.


                By doing it this way and keeping it simple, if you get unexpected results (and with name and address databases that is the normal situation, in my experience) you have a way to see the sources quite quickly and clearly.
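                The clean-up rules mentioned above could be sketched as a normalisation step applied before building the key. The specific rules here (uppercase, drop punctuation, collapse whitespace) are just examples; real data will suggest its own:

```python
import re

def normalize(name: str) -> str:
    # Example rules only: uppercase, replace common punctuation
    # with spaces, then collapse runs of whitespace to one space
    name = re.sub(r"[.,'-]", " ", name.upper())
    return " ".join(name.split())

print(normalize("  O'Brien,  J.  "))   # O BRIEN J
print(normalize("Smith- Jones"))       # SMITH JONES
```

With both names passed through the same function, "Smith- Jones" and "SMITH JONES" produce the same key and so get counted together.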