The part that seems to be missing from this discussion is that duplicate content is not penalized, but it is eventually filtered out. For the most part, filters run separately from the ranking algorithms. That is why a duplicated article can initially rank well but at some point drop out of the rankings.
Avoiding the filter is not always a matter of getting your copy of an article posted first. There is an article out there about stolen content in which Matt Cutts says that Google uses backlinks to determine which copy is the original. He recommended embedding a link back to the original within the article to keep stolen copies from out-ranking it. I have personally seen situations where a stolen article out-ranked the original, and the original got filtered out, because the thief was much better at link building than the author. The authority level of the site can also play a big role in determining which site is viewed as the original.
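In practice, Cutts's suggestion amounts to something like the sketch below. The URLs are placeholders, and the cross-domain rel=canonical tag is an additional option Google supports for syndication, not something Cutts specifically mentioned, so treat it as an assumption on my part:

```html
<!-- Option 1: in the syndicated copy's <head>, a cross-domain canonical
     pointing at the original article (placeholder URL). -->
<link rel="canonical" href="https://example.com/original-article" />

<!-- Option 2: a plain attribution link embedded in the article body,
     which travels with the text when the article is copied. -->
<p>This article originally appeared at
  <a href="https://example.com/original-article">example.com</a>.</p>
```

The body-embedded link is the more robust of the two for stolen content, since scrapers often copy the article text verbatim but never see the head of your page.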
There are numerous situations where autoblogs eventually get de-indexed, or all of their articles are demoted in rank, due to the amount of duplicate content. That was part of what Panda was about, but those filters were out there long before Panda. I don't buy any web sites with duplicate content, because once they have been flagged by Google it is very difficult to get them to rank well.
The best way to avoid problems with syndicated articles, therefore, is to post the article on your own site first and get a few backlinks pointed at it before it appears on other sites. Better yet, don't syndicate the same article that you post on your own site.
"It's inexcusable for scientists to torture animals; let them make their experiments on journalists and politicians." -Henrik Ibsen