How to write a critique for a research paper?
Posted: January 23, 2019 | Filed under: Research Tips | Tags: critique

Note: this is an active post and I’ll be updating it as I receive feedback or find helpful illustrative examples.
One of my many awesome advisors and teachers, Daniel Abadi, made us write critiques of research papers in his graduate database systems course. It was an excellent exercise as it allowed us to:
- think deeply about the work,
- create a summary we can revisit whenever we need to refresh our memory of the work or even to write the related-work section,
- think about new research problems or different research perspectives on our current work, and
- practice writing (and believe me, you need the practice).
If you read a paper and you like it (or dislike it), write a critique! Adrian Colyer has a popular blog where he critiques research papers in systems and ML.
The general structure of a critique is simple. It is a short 1-2 pager with the following:
- A description of the problem and solution
- The pros
- The cons
You can weave the pros and cons into the general description as well.
This is also the general structure of paper reviews, but critiques are different because you write them selfishly: they are about you and your interests. Critiques are your own private research notes (to my research students, I’m sorry, they are not: your peers and I will read and critique them).
The general description
Make sure the critique describes the work well enough that you can easily recall the paper in the future. One way to achieve this is not to repeat the paper’s jargon but instead to describe the work in layman’s terms. A benefit of doing this is that works that seem overly complicated are suddenly distilled into the basic ideas behind them. It also keeps you from deceiving yourself about whether you actually understand the work.
Here is a take on my PODS paper.
Using the author’s jargon:
The paper describes classes of quantified boolean queries that are tractably learnable using membership questions constructed from distinguishing tuples.
A more layman-friendly description (I hope):
The paper studies how to learn “quantified boolean” queries of the form: I want students who, for all their courses, got an A, or students for whom there exists a course in arts and humanities. The “all” and the “exists” are the quantifications, and since the result set is a filtered subset of the input relation, these are boolean queries. The authors learn the query by asking the user whether certain tuples are in the result set or not (membership questions). Their goal is to ask as few questions as possible, so they select tuples, called distinguishing tuples, that effectively divide the space of candidate queries. They show that this is possible for a number of quantified boolean query classes.
That said, if you are actively engaged in this kind of research, then maybe the first description works well for you.
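To make the membership-question idea more concrete, here is a minimal toy sketch in Python. It is not the paper’s actual algorithm (the paper’s contribution is characterizing which query classes are tractably learnable); the candidate queries, the data layout, and the `ask_user` oracle are all made up for illustration. The loop just shows the general shape: find a tuple on which the remaining candidate queries disagree (a distinguishing tuple), ask the user whether it belongs to the result, and prune the candidates that got it wrong.

```python
# Toy illustration of learning a query via membership questions.
# Everything here (candidate queries, schema, oracle) is hypothetical.

# Candidate "quantified boolean" queries over one student's course rows.
CANDIDATES = {
    "for all courses, grade is A": lambda rows: all(r["grade"] == "A" for r in rows),
    "there exists an arts course": lambda rows: any(r["dept"] == "Arts" for r in rows),
}

def learn(students, ask_user):
    """students: dict mapping a student id to a list of course rows.
    ask_user(student_id) -> bool: is this student in the result set?"""
    candidates = dict(CANDIDATES)
    for student, rows in students.items():
        # Evaluate every remaining candidate query on this student's rows.
        answers = {name: query(rows) for name, query in candidates.items()}
        if len(set(answers.values())) <= 1:
            continue  # all candidates agree here: not a distinguishing tuple
        in_result = ask_user(student)  # the membership question
        # Keep only the candidates consistent with the user's answer.
        candidates = {name: query for name, query in candidates.items()
                      if answers[name] == in_result}
        if len(candidates) == 1:
            break
    return candidates
```

The interesting part of the paper is exactly what this sketch glosses over: for which classes of quantified queries can distinguishing tuples always be found so that the query is pinned down with a tractable number of questions.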
Details
Get into the details, even if it is a tiny detail that you thought was really clever, or a side effect or sub-problem that you really enjoyed thinking about.
When getting into the details, again, do not repeat the paper! When I ask busy students to write critiques, I often get a dump of certain parts of the paper, especially figures, tables, and algorithms. This is a bore for me (and for the future you reading the critique), and you probably missed the point if you did this. A better approach is to re-apply those algorithmic details to another simple example. Another strategy is to curate: what details did you really care about? A comprehensive summary rarely helps you get the most out of these critiques, and the paper itself usually does a good job of summarizing its own contents. Here is another wise strategy:
Inspirations
Write the descriptions of the problem and solution in a way that is meaningful to your research interests or that captures inspirations or epiphanies you had while reading the work. Try to answer this question: how is this work relevant to what you are currently doing, or how does it inspire you to work on a new research problem?
Here is my take on this amazing paper by Moritz et al.
The authors created a framework where principles of effective visualization are fed in as constraints, and then used answer set programming to search for effective visualizations for a particular data set. This got me thinking about how we could make the idea work for other design problems: for example, can we encode the principles of a good UI and then generate good UIs for any problem? What about a good data analysis or prediction model? What was special about the constraints describing effective visualizations that perhaps cannot be extended to other problems?
Now for the pros and cons
Unless your mind was blown, it is often hard, as an academic, to stay positive: accept this and work against your cynical self.
When trying to say what is good about a paper, first avoid these clichés (unless you are writing a review, in which case some of these are important to say):
- The paper is well-written. You can instead say that you liked how they explained a certain problem, decomposed a problem, illustrated their examples, surveyed related works, etc. Be specific about what exactly they did. Only make these comments in your critique if you really enjoyed the write-up to the point that you would like to write your papers in a similar style!
- It is an important/timely/hard problem. Well, we sure hope so! But this is not necessarily a strength, and its absence is not a weakness: a paper can still be cool while being “not timely, somewhat easy, or not that important”. Another way to discuss “it’s a hard problem” is to explain how the authors rethought the problem to make it easier to solve or scoped it down into a meaningful and solvable sub-problem.
- The experiments are solid. We sure hope that published, peer-reviewed papers have solid, reproducible experiments. What is more interesting is how the authors tested a certain hypothesis in a convincing way. Would you attempt a similar evaluation strategy in a future paper of yours?
If your mind was blown, then sing its praises! Just be specific about what exactly blew your mind.
The cons are generally easier to write. However, what I find more constructive when writing the cons is to reflect on the work as if you yourself had to do it, or in relation to what you are currently working on. Here are some points to get you started:
- Is the scope of the work limited, and do you know of a way to easily expand it? Another way of thinking about this is: are the assumptions they make about the world limiting or problematic?
- Are the experiments lacking? What exactly would you do differently? Try to think of why the researchers designed the experiments the way they did: limited access to datasets? Are their hypotheses inherently subjective statements that are hard to validate?
- Are their techniques too complex, too simple, or inefficient? Again, how exactly would you make them better? “Instead of doing xyz, I would try to do abc because …” Give the researchers a break, however, and recognize that they may have thought about the alternatives you are considering and eliminated them, so also think about the limitations of what you are suggesting instead.
Woo new post!