Writing the abstract for your paper can be one of the hardest things to do. Most readers who get past your title will scan your abstract to decide if they want to continue. The goal is clear, then. You want to hook the reader with a summary of your work that is compelling and accurate. And short. How to balance it all?
An amazingly useful technique that I learned at a summer school (now Neuromatch Academy) a few years back is the 100-word abstract. Five sentences, 20 words each. Each sentence addresses a straightforward question. I’ve included my group’s abstract from that summer school as a running example to help you see how this can come together.
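If you like to automate your sanity checks, here is a minimal Python sketch for tallying words per sentence in a draft. The sentence splitting is a naive heuristic (it just breaks on `.`, `!`, or `?` followed by whitespace), and the 20-words-per-sentence target is a guideline, not a hard rule — treat the output as a nudge, not a verdict.

```python
import re

def abstract_stats(abstract: str) -> list[tuple[str, int]]:
    """Split a draft abstract into sentences and count the words in each.

    Naive heuristic: a sentence ends at '.', '!', or '?' followed by
    whitespace. Word counts are whitespace-based, so hyphenated terms
    like 'neuron-level' count as one word.
    """
    sentences = [s.strip()
                 for s in re.split(r"(?<=[.!?])\s+", abstract.strip())
                 if s.strip()]
    return [(s, len(s.split())) for s in sentences]

if __name__ == "__main__":
    draft = (
        "Network neuroscience seeks to explain how neuron-level activity "
        "is reliably aggregated into emergent, collective action."
    )
    for sentence, n_words in abstract_stats(draft):
        # Flag sentences drifting well past the ~20-word target.
        flag = " (long)" if n_words > 25 else ""
        print(f"{n_words:3d} words{flag} | {sentence}")
```

Running it on the first example sentence below reports 15 words — comfortably inside the target.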
What is the state of the literature?
What field and area of research is this work in? What level of audience do you expect? This first sentence needs to meet your audience at their level of knowledge to be effective.
“Network neuroscience seeks to explain how neuron-level activity is reliably aggregated into emergent, collective action.”
What is the research gap?
This is the missing piece of the puzzle that your paper is providing. Start with the boundary of knowledge, then move to what’s missing. Good signalling words for this gap include “although” and “but.”
“Although a network’s connectivity structure influences its robustness, this has not been systematically studied to model decision-making processes in the brain.”
How does this study address the gap?
You figured this out as part of your experimental design: this is a one-sentence Methods section. If you’re stuck on this, try finishing the sentence “Here we…”
“Here we address this gap by simulating how networks with different connectivity structures make time-sensitive, value-based decisions.”
What do the study results show?
After our one-sentence Methods section, we have a one-sentence Results section. What is the one finding you want your readers to take away with them?
“Our results show that a network’s reward rate is positively correlated with its small-world measure and that small-world networks are more robust to noise than either regular or random networks.”
Why is this important?
This brings the abstract, and your work, all together. What is so important about the research gap you’ve filled? What future research might come out of your results? What’s changed?
“We conclude that small-world characteristics are important in modelling the decision-making process in the human brain.”
And here is the abstract in full. You might have to do some work to get your individual sentences to flow together, but sometimes that work can reveal something you’re missing.
Network neuroscience seeks to explain how neuron-level activity is reliably aggregated into emergent, collective action. Although a network’s connectivity structure influences its robustness, this has not been systematically studied to model decision-making processes in the brain. Here we address this gap by simulating how networks with different connectivity structures make time-sensitive, value-based decisions. Our results show that a network’s reward rate is positively correlated with its small-world measure and that small-world networks are more robust to noise than either regular or random networks. We conclude that small-world characteristics are important in modelling the decision-making process in the human brain.
When we were taught this exercise, our instructors pointed out that you should know these answers before you run your experiments. In fact, writing a 100-word abstract before conducting your study is a great way to check whether your thoughts are coherent. If you can’t imagine writing an exciting abstract before you’ve started, that can be a sign that you need to refine your ideas.
Here is a fully written abstract that my co-authors and I developed from an initial 100-word abstract. To avoid an overly complicated first sentence, we broke the idea up into two separate sentences. Since the paper concerned theoretical results rather than an experimental study, we also adapted sentences 3 and 4.
In discrete-event system control, the worst-case time complexity for computing a system’s observer is exponential in the number of that system’s states. This results in practical difficulties since some problems require calculating multiple observers for a changing system, e.g., synthesising an opacity-enforcing supervisor. Although calculating these observers in an iterative manner allows us to synthesise an opacity-enforcing supervisor and although methods have been proposed to reduce the computational demands, room exists for a practical and intuitive solution. Here we extend the subautomaton relationship to the notion of a subobserver and demonstrate its use in reducing the computations required for iterated observer calculations. We then demonstrate the subobserver relationship’s power by simplifying state-of-the-art synthesis approaches for opacity-enforcing supervisors under realistic assumptions.