Tuesday, January 23, 2018

Are patent examiners influenced by other patent examiners?


Michael Risch at WrittenDescription has a post titled Evidence of Peer Group Influence on Patent Examiners about a paper by Michael Frakes and Melissa Wasserman posted on SSRN, in which Risch writes:



I'll admit that I was skeptical upon reading the abstract. After all, I would expect that grant rates would rise and fall together in any given art unit, based on either technology or the trends of the day. Indeed, the effect is not so large as to rule out some other influences.

But by the end, I was convinced. Here are a couple of the findings that were most persuasive (in addition to the fact that I think they specified fixed effects nicely):
- The effect is more present during the early years, and tends to get "locked in" with experience
- The effect is more present with peers than with supervisory examiners
- The effect is more present for examiners who do not telecommute - this, to me, was the best robustness check
- Examiners who do not telecommute tended to behave similarly with respect to obviousness (v. novelty) rejections and also to cite the same prior art (which was not cited as frequently by those who telecommute)
This paper's framing is interesting. I read it, of course, because it is a patent paper, but Frakes & Wasserman open with a more generalized pitch that this is about employment peer effects. I suppose it is about both, really, and it is worth taking a look at if you are interested in either area.



**Frakes and Wasserman had discussed "cohort effects" in 65 Duke L.J. 1601 (2016):



The numerous hiring-year coefficients presented in Table A1 are meant to be interpreted with reference to the omitted hiring-year cohort - that is, the 1993 cohort. The specific hypotheses that we are testing in this Article (beyond the general hypothesis of the presence of cohort effects in the first instance, which can be assessed via the F-tests presented in Table A1) do not necessarily bear on the year-by-year comparisons that the standard errors in Table A1 may be designed to facilitate. Rather, we are seeking to compare grant rates across a coarser divide of hiring-year cohorts, mainly pre-2003-2004 cohorts vs. mid- to late-2000s cohorts, and mid- to late-2000s cohorts vs. post-2010 cohorts. In Table A3, we estimate specifications identical to those estimated above, but we group hiring cohorts into three groups: 1993-2002 cohorts, 2005-2008 cohorts, and 2011-2012 cohorts. To address concerns over how to specify the operable regime when the quality-assurance initiatives driving our delineation of hiring-culture [page 1653] eras are being rolled out, we drop those cohorts from the specification that started at the PTO during the specific years marking the transition across the relevant eras (2003 and 2004, 2009 and 2010), allowing us to make steady-state comparisons across eras. In Column (2) of Table A3, we control for the available individual application covariates at our disposal (entity-size and foreign-priority status).

(...)

The final hypothesis that we test in this Article bears on the effect of moving from a short, centralized training period of two weeks to a robust, PTO-run training program of eight months in 2006, with roughly half of the examiners in the 2006 cohort receiving the [page 1654] new training program and half receiving the old program (with assignment based on technology, which we control for). Rather than just comparing the grant rate of these two particular cohorts, we still estimate an empirical specification on the full set of cohorts and sample years, allowing us to achieve separation between year effects, cohort effects, and experience effects while trying to isolate the inherent granting tendencies of these two particular groups. As such, we estimate specifications that modify the approach taken in Table A3 to break the mid-2000s era into four separate groups: a 2005 cohort (a mid-2000s restrictive cohort purely under the old training regime), a 2006 cohort under the old training regime (the 2006 cohort control group), a 2006 cohort under the new training regime (the 2006 cohort treatment group), and the 2007 and 2008 cohorts (mid-2000s restrictive cohorts purely under the new training regime).

(...)

All else being equal, Tables A3 and A4 suggest a statistically significant decline in mean grant rates between examiner cohorts starting with the PTO in the mid-2000s and cohorts starting in the prior period. They also suggest a statistically significant subsequent increase in granting tendencies for the most recently hired cohorts relative to the prior cohorts (note that Table A4 arguably allows for a better test of this second comparison to the extent it allows for an observation of how things change around the time of transition to the recent permissive regime). Moreover, Table A4 demonstrates that the 2006 treatment cohort that was subjected to the new training program had a lower grant rate relative to the 2006 control cohort that was not subject to the new training program (statistically significant at the 10 percent level or 1 percent level depending on the specification), consistent with expectations that the training would more strongly induce new hires to adopt the prevailing views promulgated by the agency heads at that time.
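**As a rough illustration of the grouped-cohort specification described in the passage above, a minimal sketch in Python might look like the following. This is an assumption-laden illustration, not the authors' actual code: the file name and all column names (hire_year, year, experience_bin, small_entity, foreign_priority, granted, examiner_id) are hypothetical, and the linear-probability form is an assumed simplification.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical application-level data: one row per disposed application.
df = pd.read_csv("examiner_grants.csv")  # hypothetical file

# Drop the transition-year cohorts (2003-04, 2009-10) so the comparison
# is across steady-state hiring eras, as the article describes.
df = df[~df["hire_year"].isin([2003, 2004, 2009, 2010])].copy()

# Coarse cohort groups: 1993-2002, 2005-2008, 2011-2012. The coarse
# grouping (plus binned experience) is what lets year, cohort, and
# experience effects be separately identified; with single-year cohorts,
# calendar year = hire year + experience would be collinear.
def cohort_group(y):
    if 1993 <= y <= 2002:
        return "1993-2002"
    if 2005 <= y <= 2008:
        return "2005-2008"
    if 2011 <= y <= 2012:
        return "2011-2012"
    return None

df["cohort"] = df["hire_year"].map(cohort_group)
df = df.dropna(subset=["cohort"])

# Linear probability model of grant (0/1) on cohort-group dummies
# (1993-2002 omitted, mirroring the omitted 1993 cohort in Table A1),
# year and experience fixed effects, and the two application covariates
# the authors mention (entity size, foreign-priority status).
fit = smf.ols(
    "granted ~ C(cohort, Treatment('1993-2002')) + C(year)"
    " + C(experience_bin) + small_entity + foreign_priority",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["examiner_id"]})
print(fit.summary())

The Table A4 training-regime comparison follows the same template, splitting the mid-2000s group into the four training-regime cohorts described above.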



**In a past article, The Failed Promise of User Fees: Empirical Evidence from the U.S. Patent and Trademark Office, 11 J. Empirical Legal Stud. 602 (2014), Frakes and Wasserman had noted:


However, the Agency's fee structure is relevant for our analysis in a perhaps more fundamental way. We began our discussion with the more critical observation that the PTO's fee structure creates an inherent risk of financial instability that may lead to situations of binding budget constraints (necessitating the above responses) in the first instance. That is, we contend that the inadequacies of the examination fees coupled with the subsidization of the examination process by fees assessed on those who have already successfully navigated that process create a financial risk that the Agency's incoming fee revenue may not be sufficient to meet its examination demands.
This basic observation suggests that in addition to modifying those particularities of the fee structure discussed above--for example, setting technology-specific examination fees in proportion to examination costs for that technology--the patent examination distortions under discussion in this article and in Frakes and Wasserman (2013) would be less paramount under a funding structure that imposed no such inherent financial risk. One such alternative structure would be one where examination fees came closer to covering the costs of patent processing (which constitute the vast majority of the Agency's operational costs), a structure that would reduce the need for cross-subsidization and thus reduce the risk that the parties being used for cross-subsidization purposes would grow out of step with applicants.
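**The arithmetic behind that financial risk is easy to illustrate. In the toy steady-state sketch below, every figure is hypothetical (not actual PTO fees or costs); it simply shows how an agency that prices examination below cost comes to depend on post-allowance fees, and how a budget gap opens when granting tightens while examination demand does not:

# Toy steady-state budget sketch; all figures are hypothetical.
EXAM_COST = 4000            # agency cost to examine one application
EXAM_FEE = 1500             # up-front examination fee
POST_ALLOWANCE_FEES = 5000  # issuance + maintenance fees per granted patent

def surplus(applications: int, grant_rate: float) -> float:
    """Fee revenue minus examination cost, assuming (in steady state)
    that post-allowance fees from granted patents subsidize examination."""
    revenue = applications * (EXAM_FEE + grant_rate * POST_ALLOWANCE_FEES)
    return revenue - applications * EXAM_COST

print(surplus(100_000, 0.5))  # cross-subsidy just covers the exam-fee gap: 0.0
print(surplus(100_000, 0.3))  # grants fall, exam demand unchanged: -100,000,000.0

Raising EXAM_FEE toward EXAM_COST removes the dependence on grant_rate entirely, which is the alternative funding structure the authors describe.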
