Assuming the PDF-to-text interpretation is working consistently with your source of PDFs, the answer here probably makes no difference: the approach is the same whether you are using a PDF or a text file.
There are a couple of ideas that might work for you, depending on the overall context of what you are working with.
You could consider a floating trap making use of the comma, but I wonder whether you would be better off separating the trapping from the field extraction. Floating traps can be complex for various reasons and are best suited to the sort of log files generated by web activity on computer systems.
Is the data area you wish to extract part of an address? If so, you may find something worthwhile in the Address Block processing feature (but probably NOT the Postal Trap in this case).
If it is not part of an address, then the simpler option would be to trap for the data line or lines you need, 'paint' the fields to a suitable size (in this case probably the maximum length of a city name, though you may have other things to consider as well), and extract whatever is included.
From the resulting Table of data you can create a calculated field, probably using the LSPLIT() function, to break the extracted character string wherever a comma appears and then specify which section of it the new calculated field should contain.
If the field (Myfield) contained, say:
"Exeter, Devon, England."
LSPLIT(Myfield,3,",",1) would return "Exeter".
LSPLIT(Myfield,3,",",2) would return "Devon".
LSPLIT(Myfield,3,",",3) would return "England."
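If it helps to see the logic outside the Monarch formula language, here is a rough Python sketch of what LSPLIT() is doing. This is illustrative only, not Monarch code: `lsplit` is a hypothetical helper that mimics the LSplit(string, count, delimiter, segment) pattern by splitting the string into at most `count` pieces at the delimiter and returning the requested piece (1-based).

```python
def lsplit(value: str, count: int, delimiter: str, segment: int) -> str:
    # Split into at most `count` segments; maxsplit is count - 1 cuts.
    parts = value.split(delimiter, count - 1)
    # Segments are numbered from 1; return "" if the segment doesn't exist.
    if 1 <= segment <= len(parts):
        return parts[segment - 1]
    return ""

myfield = "Exeter, Devon, England."
print(lsplit(myfield, 3, ",", 1))  # "Exeter"
print(lsplit(myfield, 3, ",", 2))  # " Devon" -- note the leading space
print(lsplit(myfield, 3, ",", 3))  # " England."
```

Notice that the second and third segments keep the space that followed each comma, which is exactly why trimming (below) can matter.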
In some circumstances you might also want to use the TRIM() function or one of its variants to remove any spaces that could affect correct data justification in the new fields.
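To illustrate that trimming step (again in Python rather than Monarch syntax), the assumed mapping here is TRIM to strip(), with the left/right variants to lstrip() and rstrip():

```python
# Illustrative equivalents of a TRIM function and its one-sided variants.
segment = " Devon "
print(segment.strip())   # "Devon"  -- both ends trimmed
print(segment.lstrip())  # "Devon " -- leading spaces removed only
print(segment.rstrip())  # " Devon" -- trailing spaces removed only
```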
Does this help?