May the 4th [paper] be with you: Recapitulation of the discussions around the new ICSE 2017 submission policy

For a couple of weeks now, we have been witnessing heated discussions around the new ICSE 2017 submission policy, which allows each author a maximum of three submissions:

Each individual author is limited to a maximum of three submissions as an author or co-author. This new policy was put in place to address the rapidly increasing imbalance between the amount of reviewing service requested from the conference and the availability of willing and competent volunteer reviewers. Authors of co-authored submissions are jointly responsible for respecting this policy. In case of violations, only the first three submissions (in order of submission number) will be sent for review.

The discussions began, as far as I can tell, with a Facebook post by Tao Xie on the Software Engineering Research Community page, which led to a response by the PC co-chairs saying (condensed):

One of the main reasons for this policy is that every year more people submit more papers, but the pool of qualified reviewers willing to make the necessary commitment does not grow in proportion […] The NIPS Experiment is a disconcerting indication that randomness in paper acceptance decisions is not an exaggeration. At the same time, the very perception of a random review process is an encouragement to submit more papers. We are stuck in a vicious cycle. A key to breaking the cycle is the recognition that the time donated by ICSE reviewers is a valuable and limited resource. […] Our vision is that if authors submit fewer papers and still get a similar (or possibly even higher) number of submissions accepted, we will maintain or even improve the quality of the ICSE program while using reviewing resources in a more efficient, fair, and sustainable way.

As I had gladly accepted to serve the community as a social media co-chair for the event, I hoped it would be of value to the organisers if I tried to observe and somehow channel the ensuing discussion between all parties. So in response to the Facebook discussion, I started a small poll via Twitter to collect thoughts and arguments for and against the submission policy (in addition to picking up further discussions I saw on social media or that were brought to me by email).

[Screenshot of the Twitter poll, 2016-05-05]

All the feedback was sent to the organisers without any filtering and without any opinions of my own added, not least because I myself had no (strong) opinion on the policy itself. My only thought was that hard quantitative constraints would probably not solve problems at the other end of the submission pipeline.

My lack of a strong opinion hasn't changed much since, perhaps also because my naivety keeps me from caring too much about such things. I cannot submit a specific manuscript to venue X? Then I'll submit it to Y. To me, the content matters more than the venue, and I am not willing to waste a second on the thought that co-authorship of a manuscript might be threatened by the desire for a specific venue. But this is of course only me and how I was "academically raised", and you may absolutely disagree here. That is not the point, though: the actual reason it is so difficult for me to pick a side is that so far I haven't seen any argument, for or against the policy change, that really convinces me.

Colleagues have now launched an online petition to abolish the submission policy, which is getting much attention, while I still think it is worth reflecting on whether the policy actually addresses problems worth solving. To be clear, I am neither in favour of the petition nor against it; I just fear that emotions might drown out the actual case. Reason enough to briefly recapitulate the statements and opinions.

The poll itself showed no clear signal, so let's take a look at the arguments.

After the initial response arguing for the policy as an attempt to reduce reviewer load, we saw some (good) suggestions, such as enlarging the PC, and some good counter-arguments. The first counter-argument, as far as I can tell, was that the policy would not demonstrably reduce the reviewer load. Another was that the policy would hamper collaboration, a point made by Christian Kästner and Andreas Zeller. This argument was later picked up by Stefan Wagner, who countered that ICSE is not the only venue.

Further counter-arguments were, essentially, the following:

  • A barrier for students who want to submit with their supervisors (assuming the supervisors have already planned three submissions, that the students can only submit to the ICSE technical track, and that the supervisors have contributed significantly to the work).
  • ICSE is the flagship conference, and having papers placed there is an important factor for hiring and award committees (followed by the counter-argument that it is up to the same community to change the criteria used by such committees).

Arguments in favour of the policy were, inter alia, the following:

  • A limit might also help broaden the community and increase the amount of “new blood” at ICSE.
  • That way, we can slow down the publication pace, and the flagship conferences should be the ones to take the lead.
  • It reduces the shotgun principle / submission number game.

There are many good arguments in favour and against, all of which can and should be discussed in a good debate (which, I personally think, we are luckily having). An interesting list of pros and cons is also provided in the slides by Mary Shaw.

So, what is the problem now? The petition itself is of course not the problem, but it essentially rests on two arguments:

  1. The policy discourages researchers, especially young scholars, from engaging in collaborations, and it
  2. sets an artificial limit on an individual's productivity.

I certainly do understand these arguments. And I would also like to stress how much respect I have for the colleagues initiating the petition (as well as for those who signed it). But, and this holds for the whole discussion (including arguments both for and against the change), all points raised so far are based on expectations rather than evidence, which is why I am still not convinced and remain undecided.

When I saw the petition, however, my first thought was not about the arguments provided but about why we do not show the same engagement for other, much more severe problems we currently face. Why do we not invest the same effort into something that would actually change the root causes of the problems we face in academia rather than suppressing the symptoms, such as:

  • Changing reviewing processes to mitigate possible threats from biased reviewers or from simple prevailing prejudice; for instance, by relying on a double-blind review format or, what I favour, complete openness.
  • Changing the evaluation criteria used by hiring and award committees to value, for instance, quality over quantity (instead of relying on nonsense metrics).
  • Changing the evaluation criteria for scientific contributions themselves to deepen our understanding of the notion of scientific quality.

What baffles me most is that we – and I’m certainly including myself here – are so often complaining about a game for which we ourselves can set the rules.

Now we can only wait for the final statement/FAQ by the PC co-chairs, which we will soon announce via the official social media accounts of the event.

In any case, while I don’t expect the discourse to move from reasoning by argument to reasoning by data, and whatever changes we eventually end up with, I do believe that the discussions we are having are good and necessary, and that sending a signal in whatever direction is a good start.