Wednesday, August 27, 2008

Statistical drivel in patent law analyses?

The PatentHawk blog has a post concerning bad sampling in statistics, which included the text:

A recent patent reexamination analysis by lawyers is exemplary: a small sample size of biased data, rendering it rather meaningless. But the statistically-challenged authors reported the results as conclusive. Another study, on civil lawsuit settlements, suffers the same flaw. There at least the authors admit the data base as flawed, but regardless paint a brave face on tainted data. [concerning the analysis by Andrew S. Baluch and Stephen B. Maebius of Foley & Lardner.]


LBE posted a comment:

This evokes the completely bogus analysis by Professor John R. Thomas of the (allegedly high) grant rate of the first 100 published patent applications. Thomas neglected to point out that most of the first published applications had antecedent patent family members (i.e., the first 100 were largely not first-filed cases). Biasing one's sample can lead to unusual results.

See IPBiz posts

Do the published applications of 2001 tell us about patent grant rate?
(July 31, 2007)


More on patent grant rate; the USPTO is NOT a rubber stamp
(August 2, 2007)

***Of course, then there is the godfather of bad analysis in the patent grant rate saga: the first paper of Quillen and Webster.
