A Handbook to Clinical Research

Part 2

Introducing the Academic Clinician

Navigating a research career while simultaneously working as a clinician is no simple task. There is a lot of heterogeneity in clinical training, but perhaps even more variety in research training. This variety stems from three factors:

  1. There are expanding avenues for getting involved in research and starting higher degrees.
  2. The spectrum from basic science to clinical research is a blurry continuum, with techniques and skills mixed together like alphabet soup.
  3. Research does not have skills-based checklists. It is highly vocational, and your work is assessed by the field’s acceptance of it.

The more you progress, the more your resume comes from accumulating experiences and not qualifications.

Career Pathways

As we alluded to, experience and the informal training it provides count substantially. Your medical degree (MBBS, MD, MChD, etc.) is enough of a platform for you to begin training ‘on the job’ in research. Additional research-specific degrees do ease the process, though, and may provide greater access to career progression.

We have produced a diagram of some common, very general career pathways.

If you are considering higher degree research, you need to think about the pros and cons of its timing within these pathways. It could be during your medical degree, junior doctor training or specialty training. As seniority increases, your project will better match your interests, and you will have developed the soft skills to work efficiently and manage research group relationships. The impact on your work life can be more severe, though. You will have more clinical responsibilities, and the time commitments will impact your ability to manage a full clinical schedule. You may also be more likely to have a family and less time to give to work.

Starting earlier, during the medical degree or junior training, typically reduces your ability to pick a specialisation of great interest or clinical understanding. In exchange, you typically gain greater work freedom and a stronger resume for specialty applications. Your resume will also compound over a greater period of time, meaning a greater snowball of career ‘metrics’ and future opportunities. However, it is understandable if adding years to your training causes financial worries. Medical training is long, and people who commit to extra research training early can face the increasing financial responsibilities that age brings without a substantial (non-scholarship) income to cover them.


Research Field

Early in your career, the field you research within is not of great importance (unless you are post-specialisation, that is). Unless you have a future specialty that you are certain of, consider gaining some diversity of experience. In this article, we outline how research training is fairly universal and the benefits of interdisciplinary experience. Many people struggle to get started, and by being less picky about the project topic, you can gain the experience needed to make you a more appealing candidate for future supervisors. The exception can be the specifics of project techniques when comparing basic science and clinical research. The time investment required for basic science can be significant due to the additional work of becoming laboratory proficient.

Being involved early doesn’t only build your research skills; it also gives you an understanding of how the game is played and the lifestyle required to juggle both medicine and research - or even to choose one over the other.

Once you have a specialty in mind, are an accredited registrar, or are considering a PhD then your research field can become more significant. These periods can be a key time to develop professional relationships and learn both common research techniques and quirks that come with a discipline. There is nothing stopping you from changing fields in the future. Many people do this at any career stage.

Some advice: use the early years of learning research to formulate ideas and interesting topics. That way, once you decide to do a PhD, you will know what topics you find interesting, what research protocols you want to use and what the potential project roadblocks may be (e.g. funding, co-workers, shared authorships, pending ethics approval, projects that are too high-risk or too safe, or more hours than one has).

Picking for Success

If you love the idea of research and the positive effects it can have on your clinical practice, consider a research field and future specialty that integrates tightly with research. For example, haematology and oncology practices are developing at a rate of knots. Oncology is becoming more personalised and research-centred by the year.

Australian haematology research is also award winning. If your passion is research rather than a particular field, picking award-winning areas can set you up for an interesting career with great potential for cutting-edge training and future research grant success.

Degrees and Academic Roles

The career spectrum typically runs from the first research project all the way to senior academic roles, including professorships. Every university will be slightly different; however, let’s use The University of Queensland as an example. The first thing to note is that there are both an Academic (research-incorporated) and a Clinical pathway to the title of professor. We are focussing on the Academic path. By outlining the requirements of the title ‘professor’, we can look back at which components of a long career contribute towards this particular endpoint.

  1. Research
  • Guiding the development of (supervising) junior researchers.
    • This typically starts when you have the ability of a late-stage PhD student.
  • Leadership of major funding initiatives.
    • This means grants like those awarded by the NHMRC or ARC.
  • Major contributions to knowledge and the beneficial application of knowledge.
    • Research output and the translation of research findings to clinical practice.
  • “Demonstrate outstanding outcomes and leadership.” See career metrics next up...


  2. Teaching: “Scholarly teaching across all levels”.
  • This can start out as the peer mentor or course tutor roles you’d be familiar with from university. You’d then progress to roles such as head tutor for a course, or a doctor running a PBL.
  • This tutoring experience provides the basis for applications to be a lecturer. You then work your way up to senior lecturer and course coordinator.


  3. Service and engagement:
  • “Leadership contribution to the governance and collegial life of the institution, to continuing education and professional development, and to be recognised for international leadership in the profession.”
    • This is basically asking, have you joined and helped run research working groups or journals? Have you advised the government on health issues?

Career Metrics

Early on, your focus should be on building research skills. Understand the project process and gain as much exposure as possible. As you begin to rise through your degrees and settle on a research field of interest, your focus can shift to creating and expanding your professional networks, by participating in discipline-relevant conferences and committees. These networks can help you understand where your field is heading, what the research gaps are, and who you can work with to fill them.

At these later stages, there are a number of metrics used to assess you. This starts out relatively simply with publication-centred outputs. The most commonly used metrics are publication count for measuring your output, and impact factor for work quality. Although they are the monarchs of metrics, both are highly flawed, and there is a shift towards considering other measures. Senior academics then have to contend with other commitments such as courses taught, total grant funds received, and the number of students supervised. Outputs such as publications and grants have a snowball effect. Once you learn how to do the task and get efficient with writing, the numbers will no longer seem so daunting. To know what is expected of you, consult those in your research group who are at the next step. Metrics are highly field specific, and you don’t want to measure your basic science publication count against a clinical researcher’s.
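To make two of these publication-centred metrics concrete, here is a minimal sketch (with made-up numbers) of how the h-index and the two-year journal impact factor are calculated. The figures are illustrative only, not a reflection of any real researcher or journal.

```python
def h_index(citations):
    """h-index: the largest h such that h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # this paper still clears the bar
        else:
            break
    return h


def two_year_impact_factor(citations_this_year, citable_items_prev_two_years):
    """Impact factor: citations this year to items published in the previous
    two years, divided by the number of those citable items."""
    return citations_this_year / citable_items_prev_two_years


# A hypothetical early-career researcher with six papers:
print(h_index([10, 8, 5, 4, 3, 0]))       # -> 4
# A hypothetical journal: 1500 citations to 300 recent citable items:
print(two_year_impact_factor(1500, 300))  # -> 5.0
```

Note how the h-index only moves once you have several moderately cited papers - one blockbuster publication does not shift it, which is part of why no single metric tells the whole story.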

ResearchLab: a source of support 

There is a lot to consider when it comes to a career. The first step is to gain an appreciation of the whole process. Gaining this holistic view requires familiarity and lots of it…

So, what do we mean by familiarity? We mean that we need to view research as a holistic process. We need to recognise that applying statistics is intrinsically linked to understanding project design. We need to remember that the path from project to publication is a winding stream. Conference presentations give dynamic feedback that refines our ideas and builds a bridge to publication.

For this we need a solid research foundation. Getting it is not so simple, though. Emailing supervisors: how many of you have never gotten a response? A project gone wrong: I bet you’ve heard of that happening.

Our online community is not just for the research diehards looking to flex. By connecting medical students with each other and with researchers, we aim to help you overcome the countless hurdles thrown your way. We will do this by cultivating the essential skills that let you produce quality research, no matter why you’re doing it. You don’t know what you don’t know. Explore to get started.


Getting Ahead: our application focused articles

  • How to do original research and why it’s essential for project publication.
  • Knowing whether your project design is fit for purpose. Is it setting you up for a smooth experience?
  • The shortcuts to data organisation and why being Marie Kondo is a prerequisite for successful research
  • Getting comfortable with the steps needed to apply and interpret statistics
  • The important intermediary role that conferences play on the path from project to publication
  • How to navigate journals and publishing


Getting Support: Some of our free research services

  • A fully stocked resources page that links you to time-saving and confusion-easing sites.
  • The ability to anonymously ask us the questions you never thought you could.
  • A ResearchBlog written by medical students on topics relevant to you.
  • A digital version of the classic Journal Club.


Part 3

Original Research

When embarking on a new project, you must consider your topic carefully. It is paramount to crystallise something that is yet to be investigated in the way you intend. The investigation’s findings then need to be specific enough to usefully inform a unique knowledge gap. There are two primary reasons for fulfilling these criteria:

  1. Ensuring that the proposal is of use to the medical field.
  2. Ensuring you have a sound understanding of the topic and will, as such, produce a piece of research that targets the question with an appropriate method.

Unfortunately, no carefully guided university assignment can teach us to judge originality. That’s why this article explores the nuances of ‘original’ research in Original Research Importance. Conducting Original Research will follow, with an investigation of the two points outlined: use in the medical field and an appropriate methodological approach.

While most readers will expect to have a supervisor design their project, reflecting on these processes will help frame expectations. Knowing broadly what is ‘good’ and ‘bad’ will help you avoid the classic research project ‘burn’ that can kill project and publication success.

  • We talk more about picking supervisors and supervisors picking you on our blog and anonymous question responses.
  • We discuss more about what to look for in project design here.

Original Research Importance

What does it mean for a piece to be original?

Seldom is there a topic that hasn’t been investigated to some degree. This does not mean there is no scope to conduct research in the field. Consider the example of someone finding that neutrophils express particular cell surface markers indicating degranulation hyperactivity during the first week of a COVID-19 infection. A different follow-up study measures the presence of the granules themselves during the first week of a COVID-19 infection.

Superficially, these studies may appear the same. The follow-up study may even appear unoriginal. The slight methodological difference, however, is part of the originality. Cell surface markers only indicate a response and, in some diseases, the immune system can have a dysregulated response where signals don’t produce action. Measuring the protein is a direct way of testing for a biological effect. There is naturally research overlap. However, it is the targeting of your study’s aim and interpretation to the method, cohort or scenario that really defines originality.

Why do we want original research?

We research to build knowledge in a stepwise fashion. Originality can be found in reinterpreting problems and rethinking questions to present new or existing information in unique forms. Good questions are specific questions. No question is able to solve an issue in its entirety and we often have study limitations (ethical, financial, logistical and more) that require revised approaches. When done well, these small back and forth steps are a focal point to generate more questions, more studies and systemic change.

Conducting Original Research

Due to its importance, the research community applies a sort of selective pressure for original research. To survive under this pressure you need to identify both what is known about a topic as well as what is of clinical interest. For example, we don’t know exactly how paracetamol works. Despite this, given its well-known safety profile, the topic is of limited clinical interest compared to other research questions. A knowledge gap does not necessarily imply research value. So how does the hapless junior researcher identify the intersection between the two?

Our table summarises some potential methods. Since knowing the literature underpins all of them, we explore literature searches in greater detail next.


Idea: Reproducibility Analysis
Explanation: Differ the context and/or methodology to provide further evidence. The effect of pre-transfusion blood storage on mortality, for example, would have differing implications in GI bleeding compared to sepsis, given the different underlying pathologies.

Idea: Research Series
Explanation: Someone finds (e.g.) a candidate cancer marker. Ask yourself a question and brainstorm an answer to it: what sequence of papers does that researcher need to produce next to achieve the overall goal of everyday clinical utilisation of that cancer marker? Was the result in a mouse? What papers should follow to prove that the explanation applies to humans too? Consider that the publication’s author may be working on the same project idea.

Idea: Method Critique
Explanation: Look at published figures (don’t look at the text of the results section). Write down your conclusions from the results on the side and see whether they match the authors’ conclusions. This is a great way to find mistakes or limitations in the work - meaning there are things to improve in new studies to get a better answer. Don’t fall for the trap of reading the introduction and discussion sections first. They are interpretations, not the real data. They can be persuasive and cause you to overlook shortcomings.


Literature search: The source of life

Literature searches can simultaneously identify current knowledge and clinical importance. Reading a recent review article in a highly regarded journal can quickly point you to what research has been done and what frame of reference is being taken. Review articles don’t summarise everything; rather, they approach the topic from a perspective - for example, pathogenesis and diagnostic failures. From such a review, you might learn that research on sensitive and specific markers is sorely needed. You can then delve deeper into reading original research to see how people have approached the problem and ask yourself: am I just beating a dead horse with my approach? PubMed is the universally accepted first stop in the literature search. You should not make it your last, though.

PubMed is one of the most well-known literature search engines. It retrieves pieces found within the MEDLINE database (and some smaller ones). Though it provides a very good starting point, MEDLINE has its own inclusion criteria for journals, and they are not solely quality-centred. For example, it explicitly states that the ‘importance’ of a journal’s research is considered. This limitation should be noted when attempting to gain a comprehensive understanding of a research topic and produce a novel piece of work. It is why cross-checking with other databases can be so important. Look at our resources to learn more about databases.
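If you prefer to search programmatically, PubMed also exposes the NCBI E-utilities API. The sketch below only builds an example ESearch URL using Python’s standard library; the search term is a made-up illustration, and actually fetching results over the network also warrants supplying your email or an API key per NCBI’s usage guidelines.

```python
from urllib.parse import urlencode

# NCBI's ESearch endpoint: returns the PubMed IDs matching a query.
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"


def pubmed_search_url(term, retmax=20):
    """Build an ESearch URL for a PubMed query (boolean operators allowed)."""
    params = {"db": "pubmed", "term": term, "retmax": retmax, "retmode": "json"}
    return f"{ESEARCH}?{urlencode(params)}"


# Hypothetical query following the earlier neutrophil example:
print(pubmed_search_url('neutrophil degranulation AND "COVID-19"'))
```

The returned IDs can then be fed to the companion EFetch utility to retrieve abstracts, which is handy when cross-checking a topic across many search terms.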

Poorly understood literature?

A poor understanding of the literature can lead to unoriginal research, which has two primary consequences.

  1. Missing clinical use (for the people who will use your research).
  2. Poor reception by journals (by the people who will publish your research).

Missing clinical use

Who’d want to invest countless hours for no return? Producing work of little clinical use is a broad topic, but it can occur because the problem has little to no impact on patient outcomes or biological understanding. Topics like this waste the researcher’s (and their networks’) time, effort, and resources. Grant money, for one, is scarce. It is incumbent upon the researcher to use limited resources to the best of their abilities.

Poor journal reception

Unoriginal pieces will likely not be published. If they are, say goodbye to citations, as the work is lost among the roughly one million pieces PubMed indexes per year.

Unoriginal work is not seen favourably by publishers. While some journals are starting to focus solely on scientific validity (e.g. PLOS ONE, Scientific Reports and F1000Research), most consider quality while remaining highly focused on advancing their field. This goal typically manifests in a desire to feature investigations that are exciting and reshape practice. The citations that these exciting pieces bring are what give a journal its prestige and name (via the controversial but ever-dominant impact factor). Pieces that are not original will be lost in the noise and will fail to be cited to a great extent.

As you may have noted from the phrase ‘controversial impact factor’, this system is not perfect. The constant desire for new and exciting, highly cited findings has contributed to a less holistic and less accurate academic literature. Firstly, ‘negative’ findings are often not published (but they should be!). The implications of this are discussed here; in simple terms, it leads to people wasting their time on projects that other researchers already know don’t work. Another issue is that journals, like The Lancet, are getting caught with fraudulent research within their ranks. This is something you instinctively do not want to hear associated with the gatekeepers of credible knowledge production. These issues are not killers of the existing research landscape; however, they do warn against relying on easy metrics instead of careful evaluation.

Crafting originality in practice: Case study of transfusion storage and mortality

As discussed, changing variables like the population sample, methodology or intervention, combined with the resulting specificity of the question, ensures research originality. How does that look when applied? Our case study focuses on the effect that pre-transfusion red cell storage has on clinical outcomes. We look at multiple studies (observational, RCT, and meta-analysis), all of which appear to answer the same research question about the potential impact of transfusion storage on mortality. The studies, however, actually reflect deeper refinements.

An observational study is the easiest initial approach in that data is readily collected at lower effort and cost. One such observational study gave a negative finding for storage impacting mortality. But research is all about reproducibility. A 2015 meta-analysis of observational studies found that in excess of 50 such studies had been conducted. These observational studies had different patient groups and differing definitions of prolonged storage based on jurisdictional practices. The finding of this meta-analysis was a trend indicating that storage increased mortality.

This is where RCTs came into play (think of that study hierarchy we’ve all probably heard of). When possible, we need to verify findings in the highly controlled setting of an RCT. Several RCTs were conducted in differing patient groups as, for one, each patient group had differing underlying pathologies requiring red cell transfusion. All studies had similar findings to this example in the New England Journal of Medicine: transfusion storage did not impact mortality. You could consider this case closed - but this study was designed to compare the standard of care that patients received against red cells stored for less time. An ethical requirement of RCTs is to provide typical care as the minimum treatment level. The RCT was therefore unable to answer whether storage above the average duration impacted outcomes. This type of mismatch between a study’s capabilities and a study’s needs is discussed in our article on making project design fit for purpose.

Another, but different, meta-analysis was conducted after this. This study specifically looked at the extremes of storage to determine whether there was any potential impact. Their findings were positive for storage impacting mortality.

This research area is still actively under investigation. It highlights the nuances of research and originality - particularly demonstrating how multiple studies that appear to ask the same question can serve different purposes and present their own original perspectives.

Being a Research OG

Hopefully we have highlighted that:

  • Originality is a fundamental component to the iterative research process.
  • Original research is nuanced and study design must acknowledge the subtle differences of result interpretation that differing methodologies provide.
  • There are several methods for thinking about original research, but knowing the literature helps with all of them. If you fail in your due diligence, you risk wasting limited resources and falling short of the expectations required to publish.

Despite the essential role it plays, much of the literature is not free to read beyond the abstract. Institutional affiliation can help with access, but it may not be a silver bullet. Our resources page covers databases and accessing publications. The topic was also explored in our community ResearchBlog.


Part 4

Study Design is Imperfect

Purpose-fit Projects

In our article on original research, we mentioned how differing studies (observational, RCT and meta-analysis) can answer different questions about a research topic. This article takes a closer look, concluding that not every study type can present an answer for every research question.

Depending on your stage of medical education, you may have heard of designing for purpose. We will try to make the information actually actionable with further resources and some practicalities: study-type hourly commitments, difficulty of involvement in different study types, and the associated costs of each.

To get cracking, we first break down the fundamentals of designing a study. We look at various examples to frame our thinking about how studies can be designed and measured as fit for purpose. From there, we explore errors in study design and some implications of this.

Before starting, we think it’s important to note that ending with a completely different outcome to the one you expected and designed the study for is not a failure of study design. It is common to be surprised by your findings. As long as you interpret your results in light of your design, ‘she’ll be right’.

Study Design Introduction

Study design should always reflect the type of research question being answered. Perhaps the biggest initial pitfall that many junior (and sometimes senior) researchers fall into is not thoroughly analysing the question they are attempting to answer before launching into the study. Faults may only become apparent once countless hours and money have been invested in a flawed investigation. There are a few likely outcomes for studies fitting this frustrating description. Outcomes depend on the comprehensiveness and severity of the design flaws and the effectiveness of peer review and post-publication critique - the study may end up 1) not being published, 2) openly and constructively critiqued by others, or 3) retracted from publication.

Keep in mind that we are focussing on avoidable flaws. Every study is imperfect but it can be published with the right analysis and interpretation. We address this further down. For more on how to write for publication success, read about publication narratives in this piece here.

Although there is a variety of clinical study types available, there is a tendency to prioritise the randomised controlled trial (RCT) in all situations. The idea that an RCT produces the highest quality of evidence for any given research question is not necessarily misplaced - RCTs play an irreplaceable role in evidence accumulation. But by placing false hope in an RCT to fit all scenarios, we overlook the nuance in our research question and in the interpretation of results. RCT over-appeal has even been discussed as having a negative impact on evidence quality.

RCTs are prohibitively expensive, take years to complete and are very difficult for juniors to get involved with. These large studies are less predictable in their duration and provide comparatively little (or no) opportunity for hands-on learning in project design. Focus on something on the small side to get yourself immersed and learning. It will ensure you can see the project through to presentation or publication yourself.

Best Study Type?

So how do we achieve this goal of understanding nuance in research questions and results interpretation? A good starting point is getting to know the different clinical research study types and how controls work in the non-ideal real world (pg25-29). Let's introduce this with an example.

You are examining the negative impacts that smoking may have on a certain disease’s progression. Randomising patients to smoking is ridiculous, and you would have an interesting time getting ethics approval. Instead, you need to find people with and without prior smoking exposure. This kind of investigation is a cohort study.

So what are the drawbacks of using a cohort study? There are many confounding risk factors for any disease that now can’t be controlled for. For example, do people smoke in a social setting that is also associated with alcohol? Further compromises might include whether to complete a prospective or retrospective study. A prospective cohort study (patients enrolled before smoking or non-smoking status is defined) would be most beneficial, as we can examine and control for additional confounding risks to a greater extent. Despite this, patient frequency or a lack of resources for an initial investigation could force one down a retrospective path (selecting patients after they have already been exposed to smoking).

It is highly common for students to start with retrospective analyses. As mentioned, retrospective studies are easier to complete. Extracting data is substantially quicker (minutes versus hours per patient) and the work is flexible. These studies also have less lead time (ethics approval is easier and there are fewer logistics) and little to no cost (supervisors don’t have to risk grant money).

Despite these wide-ranging benefits, we should consider working small prospective studies into our training. The struggle that comes with data collection can give us a deeper understanding of the limitations of different types of data. It allows for an unreplicable experience in controlling for confounding variables. This experience helps smooth the challenges of being critical in presentations and publications.

From this hypothetical, we can start to see the varied and tailored approaches that unique scenarios require. The clinical question remains at the heart of study design, and more complicated design considerations revolve around it. This is exactly what Vincent and Brochard did in a 2017 editorial discussing whether an RCT for extracorporeal membrane oxygenation (ECMO) is possible. ECMO is a highly specialised technique typically used in hypoxic patients refractory to other treatments. While the editorial covers numerous aspects, two key points were made about ECMO:

  1. The specialised nature of the procedure means that patient volume per centre is low and there is little standardisation between centres. Not much control for an RCT.
  2. Pathology underlying hypoxia is highly heterogeneous and there are often differing treatment decisions based on this fact (e.g. maintain oxygenation or minimise inflammation). This raises a question - how do we balance the effect these differences may have on predetermining patient outcomes?

Understanding the clinical question, as demonstrated in the case of Vincent and Brochard, allows the researcher to pick a study type that is appropriate for answering one aspect of a topic in a safe and targeted manner. Being able to determine the ways in which a study could be designed is a skill developed not only through study in the clinical field, but also through the study of research itself. The latter presents itself in many ways, from practical research exposure through to reading publications in journals known to publish quality work with diverse methodologies. Some highly regarded multidisciplinary journals include: NEJM, Nature and Nature Medicine, PLOS Medicine and The Lancet.

Study Design ≠ Study Type: other considerations matter

“An important thing to consider is whether the project design matches the statistical test and associated interpretation”

Now, let’s say we have designed a flawless study that will accurately answer a specific and targeted question (this will never happen, for countless reasons, but play along). At this stage, the conduct of the study itself becomes the next major challenge.

Conduct can most broadly be defined as the data collection, analysis, and interpretation processes. The analysis phase of research is widely taught in medical schools through statistics modules; however, it should not be viewed in a silo. Statistics should be applied in the context of what a study was designed to find (we cover this in detail in our statistics vs. analysis article). It is important to consider whether the project design matches the statistical test and its associated interpretation. For example:

  • Treatment non-inferiority
    • A p value indicates how unlikely the observed difference would be if there were truly no difference. This can be useful when testing whether treatments differ at all.
  • Treatment superiority
    • An effect size gives the magnitude of a difference. This can be more suitable when looking for superiority, as differences need to be meaningful to clinical practice.
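To make the contrast concrete, here is a small sketch (with made-up data) of one common effect size, Cohen’s d, which scales the difference between group means by their pooled standard deviation - conveying how large a difference is, rather than merely whether one exists.

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    var_a, var_b = stdev(group_a) ** 2, stdev(group_b) ** 2  # sample variances
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled_sd

# Hypothetical outcome scores for two treatment arms:
treatment = [10, 12, 11, 13, 14]
control = [8, 9, 10, 9, 9]
print(round(cohens_d(treatment, control), 2))  # -> 2.45
```

A tiny p value can accompany a clinically trivial difference in a large sample; reporting an effect size alongside it keeps the interpretation honest.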

Takeaways? Research and learning research are always an integrated process.

You may have noted that data collection was absent from the previous paragraph. This doesn’t mean that data collection is less important than analysis and interpretation when considering study design. Accurate and consistent collection is a bedrock of result reliability.

We cover common questions and issues on data collection in our organising data article.

Data collection’s absence from our overview of conduct mirrors the reduced attention it receives in practice. Processes of conduct are highly scrutinised by the scientific community, whether by colleagues during the study or by the broader professional network via peer review and post-publication dissemination. But while analysis and interpretation are visible at any stage by simply viewing the manuscript, critique of data collection is typically restricted to the colleagues who conduct the work with you. Peer reviewers will take your claims about following data collection procedures at face value. For years, there have been pushes for transparency around data collection. Despite the benefits this transparency might bring, it is not yet commonplace and is often met with non-compliance. This means that responsibility for data collection scrutiny remains with colleagues for now. When colleagues are busy with all the other logistical challenges of research and clinician-academic life, it can be a lot to expect double- and triple-checking of each other’s work. This might be why it’s not always done so well...

A highly publicised incident of poor data collection practices was The Surgisphere Scandal. A study looked at claims that Hydroxychloroquine could be an efficacious treatment for COVID-19 complications. The thing is, the records that the study data were drawn from appeared to be falsified. For example, the reported numbers of COVID-19 patients in Australia exceeded official counts. While the fraud appeared to be confined to one or two individuals, the co-authors' only explanation was that their role was just one part of the process: they had not seen the raw data. This is actually common. Despite this, when you publish you have to declare that you vouch for the accuracy of the publication. It is a responsibility of every author.

The publication, and by extension the research team, were given the academic death sentence of study retraction and international notoriety. It had already left an indelible imprint on clinical practice though. Randomised clinical trials were put on hold and lives were almost certainly lost.

Errors Affect All Studies


Scrutinising data collection is a common way in which flawed studies are brought to light and their findings discarded through retraction. The Surgisphere Scandal was widely known due to the study's immediate clinical relevance and publication in one of the most prestigious medical journals, The Lancet. It is not only high-profile studies that have this issue though. In 2018, Science published an article on the growing number of retracted studies. As editorial processes improve and independent teams focus on the issue of poorly conducted research, more and more studies are being retracted. A generally taboo process, retraction is no small thing to undergo. There are ongoing pushes for clearer statements explaining the reasons for study retraction, to promote transparency and differentiate fraud from honest mistake.

Being Banksy in Project Design

To summarise our key points for you:

  • There are many types of research studies and, due to limitations imposed by both ethics and the clinical scenario, we can’t take a ‘one size fits all’ approach.
    • Starting small and looking at diverse projects will provide us with the best opportunity to see a project to completion and have the confidence to publish it.
  • Once we have a study type locked down we need to consider other design aspects. How does the study type influence what statistical tests we use and what interpretation can we take from these tests? Have we ensured our data collection is reliable or is our study a ticking time bomb of unreliability?
    • Knowing where things go wrong will help us avoid the same mistakes.

The best way to be comfortable with project design? Do your due diligence at every step and commit to holistic research understanding. A first step is to consistently read and critique studies: we do this at an entry level in our Journal Club. Start with clinical conditions whose pathophysiology, diagnosis, and management you understand. This will help you see why a study question is relevant and why it is investigated in a certain way.

This essay is a great next read. If you do get stuck or want some more help, get involved in our community to ask any question you want.

Read Part 5

Data Organisation Essentials

Data organisation may spark thoughts of complex computer systems or school era discussions of rounding and significant figures. That’s not what we’re getting at here. This article outlines easy to understand data handling practices that we usually only pick up after time in the research game. Getting a grip on them from the start will greatly help in making research success pain free(ish).


First up, the issue. You need to keep good data records. While this may sound obvious, it is realistically often the last thing researchers consider. So, why do we want good records? For one, other researchers may want to include your published data in a meta-analysis. You can also be contacted for your data during peer review or post-publication review. Journals have been pushing for data to be made publicly available as supplementary information (some already mandate it). Inability to produce data can bring into question the integrity of your work, and in the worst cases it could lead to publication retraction. Get to know what is needed in scientific communication through our articles on conferences and publications.

General Advice

We often strive for certainty when it comes to right and wrong practice. The thing is, there are always multiple options. What is right depends on the circumstances, our interpretation, and the breadth of knowledge we have to apply at that point in time. No one and no research is perfect.

To account for this uncertainty, it is crucial that we record our thinking behind every step in data collection and analysis. This applies to all the subheadings we have in this article and our article on Statistics vs. Analysis. By recording our process at the point it happens, we can be confident in the limitations of our work and the context in which we have to interpret it.

For example, we record data at 1 day instead of 1 hour intervals because study “X” showed the variable doesn’t noticeably change in an acute fashion. Recording less frequently will save time and money. By doing this, though, we must know that we will need to wait several days to a week to pick up any trends. Any secondary complications that theoretically arise in a shorter time frame may not be accurately inferred. Moral of the story: don’t look for something that arises within the same time frame as your measurement interval.

Sourcing Data

Regardless of study type (prospective or retrospective) you can encounter lack of control in how you receive data. Broad considerations for each study type are listed below, followed by some suggestions under each sub-heading. If you need help understanding study types and their design, you should check out our article on project design.


If completing a retrospective analysis, the records will almost certainly be electronic. Any paper based data will have been transcribed already. Will the data be consistent in format though? This is unlikely. This is particularly the case for meta-analysis where searching the literature will produce an array of studies from different countries with different procedures.


In prospective studies you may have to deal with paper or electronic readouts depending on machines used, hospital systems, and more. Depending on the size and number of people collecting data you should be able to produce a consistent system and formatting. If collecting your own primary data, you should spend as much time thinking about how to collect data as you do about what you want to study.

Let’s visit another example to illustrate how this might look. Say you are planning to measure cardiac function and decide stroke volume is a parameter of interest. Here are some questions that may arise in your initial prospective study planning:

  • Is that data available and in what units?
  • Is there cardiac output and heart rate data? If so, could you calculate stroke volume from those instead (SV = CO ÷ HR)?
  • What frequency is data measured at? What frequency of data answers your question: looking for acute onset, acute progression or chronic manifestation?
  • How reliable is the machine being used, and is there much user variability? More measurement variability can necessitate increased sample numbers.
  • Can I export data to a USB or do I have to take a paper readout?
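For the stroke volume question above, the derivation is simple arithmetic. A minimal sketch in Python, assuming the monitor exports cardiac output in L/min and heart rate in beats/min (the values are hypothetical):

```python
def stroke_volume_ml(cardiac_output_l_min, heart_rate_bpm):
    """Derive stroke volume (mL/beat) when it isn't exported directly.

    SV = CO / HR, converting cardiac output from L/min to mL/min first."""
    return cardiac_output_l_min * 1000 / heart_rate_bpm

# e.g. a monitor exporting CO = 5.0 L/min at HR = 72 bpm
sv = stroke_volume_ml(5.0, 72)
```

Deciding on a derived parameter like this before collection starts also settles the units question in the first bullet.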

Electronic vs. Paper Based

What do you do though if results are not produced in a readily accessible electronic format (e.g. spreadsheet)? Transcribing from paper or PDF can be time consuming and is rife with potential for errors. One method is using “Data from Picture” in Microsoft Excel. This feature takes information from an electronic file (e.g. PDF) or a photo of paper based data. If the information you use with this feature is not already in an electronic format, you should scan a copy for a permanent electronic record. Anything can go wrong, so plan for it.

Non-Numerical Data?

Drawing out words from a sentence for thematic analysis? Voice-to-text software can be bought, but there are also free Android and Apple programs. If sentences have words or phrases of interest, you can separate them into individual Excel cells and apply formulas as aids (e.g. to count word frequency).
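As a rough illustration of the word-frequency idea, here is a short Python sketch (the responses are invented, and Excel's COUNTIF would do the same job):

```python
from collections import Counter
import re

def word_frequencies(responses):
    """Count word occurrences across free-text responses.

    A crude aid for spotting candidate themes; real thematic analysis
    still needs human coding."""
    words = []
    for text in responses:
        words += re.findall(r"[a-z']+", text.lower())
    return Counter(words)

# Hypothetical interview snippets
responses = [
    "The pain was worse at night",
    "Night pain stopped me sleeping",
]
freq = word_frequencies(responses)
```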

Non-Objective Collection?

Clinical outputs such as a ventilator reading or path lab result are what they are. The process of validating a machine's consistency happens during the system development and when control samples are run. Our only option for increased certainty despite measure variability is increased patient or sample number. We enter a realm of greater subjectivity when it comes to tasks such as assessing whether a study fits meta-analysis criteria or determining histological grading. We are not machines, after all.

It is standard to have subjective processes completed by two people who have been trained to the same criteria. A third person can be used as a tiebreaker when there is disagreement. You should always validate your processes by testing your assessment on a data set that represents the variety of what you will need to categorise. You could also place (blinded) a few recurring items for repeated re-categorisation (controls). If the item is categorical, you can set a minimum sensitivity and specificity for your categorisation. If it is numerical, you can form a Levey–Jennings plot. When a control falls outside the specified thresholds, you can repeat the categorisations performed since the previous control.

An example of a traditional Levey–Jennings plot. It is based on the normal distribution that most biological phenomena produce. We judge whether a value is likely to be real based on whether it falls within the spread a normal distribution exhibits around the mean (95% of values within ~2 SD, 99.7% of values within ~3 SD). You need to pick what level of certainty is suitable for your project.
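The control-rule logic behind a Levey–Jennings plot can be sketched in a few lines of Python. The 2 SD limit and the control values below are illustrative choices, not a recommendation:

```python
from statistics import mean, stdev

def control_status(history, new_value, sd_limit=2):
    """Check a control measurement against a Levey-Jennings style limit.

    `history` holds previous control measurements. A new control beyond
    mean +/- sd_limit * SD suggests the process has drifted and the
    categorisations since the last control should be repeated."""
    m, s = mean(history), stdev(history)
    if abs(new_value - m) > sd_limit * s:
        return "out of control"
    return "in control"

# Hypothetical control history (mean 10.0, SD ~0.14)
history = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0]
status_ok = control_status(history, 10.1)
status_bad = control_status(history, 11.5)
```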

Automation Consideration

In any of these situations you may need to manipulate data format. Thinking in terms of automation can save you labor and, so long as you check your system, increase accuracy. Here are some entry level examples for the less coding-inclined individuals.

1. Excel Formulas: Averaging time intervals and reorganising data can be done with straightforward formulas. An example is below:

2. Excel Visual Basic for Applications: When the process of automating data organisation is more complicated we can use ‘VBA’, otherwise known as macros.
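For those who prefer scripting over spreadsheets, the same kind of reorganisation can be sketched in Python. This example averages raw readings into daily values, the sort of task an AVERAGEIF formula or a small VBA macro would also handle; the timestamps and numbers are invented:

```python
from collections import defaultdict
from statistics import mean

def daily_means(readings):
    """Average raw (timestamp, value) readings into one value per day.

    Expects ISO timestamps, where the date sits before the 'T'."""
    by_day = defaultdict(list)
    for timestamp, value in readings:
        day = timestamp.split("T")[0]
        by_day[day].append(value)
    return {day: mean(values) for day, values in sorted(by_day.items())}

# Hypothetical twice-daily blood pressure readings
readings = [
    ("2021-03-01T08:00", 120), ("2021-03-01T20:00", 124),
    ("2021-03-02T08:00", 118), ("2021-03-02T20:00", 122),
]
means = daily_means(readings)
```

Whatever tool you use, check the automated output against a few hand-calculated values before trusting it.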

Storing Data

Programs: Limitations?

Microsoft Excel is the most commonly used, understood, and available program for storing data. It is more adept at managing numerical information but can still be useful for sorting text. While Excel is suitable for most small projects, don’t be as careless as the UK’s NHS during their recent mishandling of public health data. Using Excel to manage extraordinarily large data sets meant that COVID-19 cases went unreported, because Excel has a row limit that they exceeded. REDCap is a database system better suited to these larger population studies. You may have access via institutional affiliation.

Data Identification

Your study may involve storing lots of data, from many sources, and for long periods. Identification of data is, as a result, not as simple as it sounds. Most data for clinical purposes is stored in an anonymised format. In some cases, the personal details are then stored separately. This means you need data identifiers. Use two identifiers for every patient sample (e.g., unique ID number, date of birth, date collected). That way your data is still identifiable even when one identifier is incorrect, missed, or literally ripped off in the case of labels.

You may be offended by this, but dates have an international standard format (ISO 8601). Using 2021-06-30 or 20210630 for your date formatting will:

  • Allow others and computers to automatically detect your date. If you use 30/06/2021 you might get mixed up with the wrong format of 06/30/2021. Fight me USA.
  • Allow your files to be easily found in a list as they will automatically sort in a consistent order across days, months, and years.
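A quick Python demonstration of why ISO dates sort correctly as plain text (the file names are hypothetical):

```python
# ISO-formatted dates (YYYY-MM-DD) sort chronologically as plain text,
# so files named this way line up correctly in any file browser.
iso_named = ["2021-06-30_results.xlsx", "2021-01-02_results.xlsx",
             "2020-12-15_results.xlsx"]
dmy_named = ["30/06/2021_results.xlsx", "02/01/2021_results.xlsx",
             "15/12/2020_results.xlsx"]

iso_sorted = sorted(iso_named)  # chronological: the Dec 2020 file comes first
dmy_sorted = sorted(dmy_named)  # text sort by day of month: Dec 2020 lands in the middle
```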

Data Safety

Regardless of the format you store data in, you need to not lose it. Large institutions often have networked ‘shared’ folders which are linked to a local server (hard drives that store several copies of the data). These systems automatically cover backing-up requirements. For those going it alone or without access to a shared folder, you need to follow the 3-2-1 principle: keep three copies of your data, on two different types of storage, with one copy off-site. Data that is only on your computer is not enough, and backing up infrequently is nearly as bad as not doing it at all. Beyond this, think about password protection and limiting access to essential staff. You typically have to retain data for at least 5 years after publication in case it needs to be reviewed. Ethics approvals or institutional policies are likely to dictate your specifics of storage duration. Don’t get caught out and have your integrity questioned.

Using Data

Format in = Format out

Data analysis may occur some time after you start collecting data. Despite this, you should consider the processes you will follow when starting out. If you have your data stored in the same format that your analysis program uses, you will be able to import or copy straight in. This saves time and can prevent errors. Here is an example of a statistical program format. The rows are time points, sub-columns are patients/replicates and columns are treatments.

If things don’t work out as planned or you’re doing something ad hoc, remember to use the paste transpose function. If dealing with large data sets, consider writing an Excel formula or aforementioned VBA to automate the format transformation.
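A transpose is also a one-liner in Python if you ever need it outside a spreadsheet (illustrative data only):

```python
def transpose(rows):
    """Swap rows and columns, like Excel's Paste Special > Transpose."""
    return [list(col) for col in zip(*rows)]

# Time points down the rows, two patients across the columns
rows = [
    ["t0", 120, 118],
    ["t1", 115, 117],
]
cols = transpose(rows)  # patients now run down the rows
```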


Common Programs

There’s a lot of relevant software out there, but some of the most commonly used for data utilisation are:


Program | Pros | Cons
Excel | Often already accessible and its functionality is well understood. | Limited use outside of descriptive statistics.
GraphPad Prism | Easy to learn and has associated explanations. | Expensive if free access is not available through institutional affiliation. Can be laborious for large data sets and non-basic statistical queries.
R, Python, SPSS, Stata | R and Python are free to download. Flexible programs that handle complex data with more ease. | Non-free programs can be expensive. Higher skill barrier to entry. If not widely used in your field, you may lack support from peers.



Excel

  • Excel is good for data reorganisation and descriptive statistics (mean, standard deviation, median, etc). Careful though, as Excel has limited flexibility to handle non-basic statistical tests and non-standard data formats. Understand this before using Excel for statistical tests.


GraphPad Prism

  • Plug and play software. Prism has defined table formats for different types of data and analysis. Sample data sets and associated guides explain the application of each test and what the associated settings mean. Prism is a firm favourite of biomedical research.


Other: R or Python or SPSS or Stata 

  • These programs either require, or allow for, coding of some form. If you are not put off by that, you can be rewarded with greater flexibility and efficiency in your analysis. These programs are more widely used by statisticians or researchers handling unique models and large data sets.

Process Traceability

In the end, you may have to repeat work. We all make mistakes. Especially when starting out. You will want your process to be traceable if you need to go back and make changes. Traceability lets you:

  • Pinpoint your error. This can save you redoing the whole process.
  • Be accountable for the analysis decision you made, as the whole process is transparent.
    • Preparing for accountability provides you with an easy peer-review process and the opportunity for others to guide improvement.

Traceability: How?

First up, re-read the general advice we gave at the beginning of this article.

Next, you should always store a copy of your original data that is never altered. Every step of analysis that requires a different process and/or file should have a permanent copy. Multi-step processes can produce files that need a lot of information in their title. You may also have several subgroups of data within these files. As such, you should label everything with short intuitive abbreviations (e.g., CDR = Cardiovascular disease retrospective analysis) and have a dictionary so other people can easily understand your work. This website provides a dry definition of data dictionaries, but what might be more valuable for us are the many examples you can look through to really grasp what these ideas look like in action.
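A data dictionary can be as simple as a lookup table. A minimal Python sketch reusing the CDR abbreviation from above (the SBP and DOB entries are hypothetical additions):

```python
# A minimal data dictionary: every abbreviation used in file names and
# column headers, defined once so collaborators can decode your work.
data_dictionary = {
    "CDR": "Cardiovascular disease retrospective analysis",
    "SBP": "Systolic blood pressure (mmHg)",
    "DOB": "Date of birth (YYYY-MM-DD)",
}

def expand(label):
    """Look up an abbreviation, flagging anything undocumented."""
    return data_dictionary.get(label, f"UNDOCUMENTED: {label}")
```

The same table lives just as happily in a spreadsheet tab; what matters is that it exists in one agreed place.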

Analysis should be pre-defined before the study, with a protocol documented. Clinical studies will often publish these protocols. This process keeps you accountable for following the most scientifically sound process. Doing this should also allow you to test run the efficiency and output of your data collection and analysis processes. You should again document what you did, and importantly why you did it, if completing secondary analysis (exploring why your primary outcome was the way it was). It’s easy to forget our reasoning after the fact. You may publish months or years later. Learn more about the importance of context behind decisions in our Statistics vs Analysis article.

Where Next: Marie Kondo’s Data Guide

Has this sparked a thought? Reach out to us on our community platform. You can even ask specific questions anonymously.

If you have found yourself enthralled by data organisation and want to read more academic information, Australia’s National Health and Medical Research Council have published a guide on the management of data in research. If not, feel free to move on to more exciting articles of ours. Project design will inform you about where your data is coming from while Statistics vs. Analysis shows you how to apply the data.

You’re done with all this research stuff for now? Email photos of your dog to fergal.temple@medicguild.com.

Read Part 6

What Does Statistics vs Analysis Mean

Statistics is, or should be, about scientific investigation and how to do it better. In practice it is often treated as a blunt instrument with numbers plugged into a program and a p value taken away. The reality of statistics is a lot more nuanced and requires context and substantial interpretation. Due to this, we will call the more ideal approach ‘analysis’.

The not great approach: The p value was less than 0.05? Great, it’s important. Higher than 0.05? Move on to the next study.

We’ve all been taught some form of statistics and as such, this article will not be going into theory. If you want that, grab a textbook. Alternatively, check out our free resources for more reader friendly content. Hopefully you have read our article on getting your data organised. This article works in tandem with data organisation to help you avoid common pitfalls that people may hit when applying statistics in pursuit of analysis. If you are struggling with stats this is likely your true pain point. Refreshing on the confusing abbreviations (e.g. alpha, beta, and H0 or H1) doesn’t help when you’re blanking at the screen.

How to Learn and Why

Clinical and academic (research) medicine both rely heavily on vocational education. You learn from your seniors more than a textbook. For analysis, this can be guidance from seniors and looking at what is published in highly regarded journals. The issue here is that peer-reviewed does not necessarily mean quality statistics. This is discussed in the widely cited Nature article by Professor David Vaux.

Researchers are often getting by on reputation and/or trust. The resulting poor quality analysis is terrible for our literature and clinical practice quality. This article is a long but very worthwhile read on the topic. We need to do better for our research and clinical quality; however, doing better will also smooth the research process for you. Publishing well analysed research, in a targeted journal, is much easier than flailing around trying to publish something not so great.

The ability to understand whether statistics have been robustly applied is no easy task. We can start by learning from those who actively discuss the topic critically. A starting point is this youtube video in which Professor Vaux gives a short summary of common and big mistakes. In summary, what the experts teach is that we need to know the purpose of a test before applying it, with extensive consideration for context. Specific context is field and project dependent; however, the types of context are typically consistent across medical research.

Types of Statistical Context

Measurement Error

Step 1 of analysis is the ‘cleaning’ of your data for errors. In a laboratory you would calibrate instruments and run controls to ensure your measurements are consistent. This is not so possible when collecting clinical data. Information is often collected by whoever is on shift, via a not-so-standard procedure. This can then differ further from centre to centre. To account for this, we collect substantially more patients than would be needed in the highly controlled laboratory environment. Despite this, we still need to consider the accuracy of our data. So how do we pick up errors? We apply context.

Point to Note: differing measurements require differing levels of precision. How many decimals are you using? Is it consistent between groups? Copying between different versions of Excel will not carry across all the decimal points, only those displayed. The same applies when copying data into another program such as Prism. Prism, however, copies all of its decimal points out to Excel.

Biological Interpretation

  1. Is the result biologically plausible?
    • pH at 0.72? Maybe an error…
      • Hand collected record: is there a transcribing typo?
      • Electronically collected: what about a wrong unit? Decimal vs. percentage?
    • If relevant, look at a time course and see if the change is a trend or a random spike out of the realm of possibility.
  2. Do the patients have procedures or examinations that could influence measurements?
    • Patient in the intensive care unit has arterial blood measurements through an indwelling sensor? Is it possible that blood pressure will change transiently if the sensor is knocked or blood is sampled?
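These plausibility checks are easy to automate once you have agreed on acceptable ranges. A small Python sketch; the ranges below are illustrative placeholders, not clinical guidance:

```python
# Hypothetical plausibility ranges; real limits should come from
# clinical knowledge of each variable.
PLAUSIBLE = {"ph": (6.8, 7.8), "heart_rate": (20, 250)}

def flag_implausible(variable, values):
    """Return (index, value) pairs outside the biologically plausible
    range, for manual review (typo? wrong unit? real extreme?)."""
    low, high = PLAUSIBLE[variable]
    return [(i, v) for i, v in enumerate(values) if not low <= v <= high]

# The pH of 0.72 from the example above gets flagged
flags = flag_implausible("ph", [7.35, 7.41, 0.72, 7.38])
```

Flagged values are reviewed, not silently deleted; the decision and reasoning should be recorded, as per the general advice earlier.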

Statistical Interpretation

Another less nuanced, but sometimes applicable, step is outlier detection. This is intended for values that do not have the clearly discernible errors described above. Instead, these values are categorised as biological variations far from the typical group. The processes often rely on the fact that most data sets are normally distributed, with set percentages of values sitting set amounts of variability from the central value.

Visually picking out odd values is highly subjective and open to bias. A common and more objective method is the 1.5 IQR Rule. This rule calculates the range that 50% of values sit within and multiplies it by 1.5. This number is then used to calculate the most extreme values accepted. It is subtracted from the lower end of the range and added to the higher end of the range. Prism, a widely used statistical package, offers several more sophisticated pre-programmed tests that you can take a look at. However, the principles of outlier tests remain the same.
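The 1.5 IQR rule itself is only a few lines of code. A Python sketch using the standard library's quartile function (the data are invented, with one obvious outlier):

```python
from statistics import quantiles

def iqr_outliers(values):
    """Flag values outside Q1 - 1.5*IQR or Q3 + 1.5*IQR."""
    q1, _, q3 = quantiles(values, n=4)  # quartile cut points
    iqr = q3 - q1
    low, high = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [v for v in values if v < low or v > high]

# Invented measurements; 9.7 sits far above the rest of the group
data = [4.8, 5.0, 5.1, 5.1, 5.2, 5.3, 5.4, 9.7]
outliers = iqr_outliers(data)
```

Note that `statistics.quantiles` interpolates between sorted values, so the exact fences can differ slightly from a spreadsheet's QUARTILE function; the principle is the same.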

Outlier tests look for values that are unlikely due to chance when considering the group’s central value and variability. We should note that when considering outliers, the process can easily be misused. If you exclude values after the fact, the process can be considered p hacking. Some questions that you might do well to consider for best practice:

  • Do you have adequate numbers to determine a population central value and variability?
  • Have you removed prior outliers and now found new outliers?
  • Are these new outliers actually outliers or only on the extreme end due to the prior value removal?
  • Have you applied the tests consistently and before results interpretation?

This set of graphs illustrates outliers and the effect the process choice has on statistical significance measurements.

These data sets have over 30 measurements per group. Graphs A and B show that we need to visualise our data in a more comprehensive manner. A plain bar chart does show increased data variability, however, it does not show that the variability is primarily due to a small set of values extending above the typical group.

If we remove these obvious high values (subjective: exclusions) we no longer have statistical significance (Graph C). This subjective exclusion of values in ‘Treated’ failed to recognise that we need to consider variability on a group by group basis. ‘Placebo’ values are much more tightly grouped and when we use the IQR rule there are several values that have to be excluded (Graph D). This returns our data to statistical significance.

This hopefully shows how result interpretation can be skewed if data analysis does not follow a standardised and justified procedure.

Statistical Test Choice

You now have cleaned up data and are happy to apply your statistical test. But which test? You may have heard of a Student's t test, confidence intervals and an odds ratio - but do you thoroughly understand what the results of each one mean? You need to match results with statistical tests that elucidate the meaning your original project design set you up for. This is how you produce good quality research. We can’t answer everything in one test.

  1. A t test will let us compare two end results. If we want to consider the effect of time or multiple treatments we need to consider a Two-Way ANOVA.
  2. The 95% confidence interval of a data set tells us the range in which the true value is likely to lie, as well as our group's variability. If we want to compare groups and see the likelihood of a difference, we’d need to use confidence intervals of measurements such as the odds ratio or mean difference.
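To illustrate point 2, here is a rough Python sketch of a confidence interval for a mean difference. It uses the large-sample normal approximation (z = 1.96) rather than a proper t-based interval, and the data are invented, so treat it as a conceptual demonstration only:

```python
from statistics import mean, stdev

def mean_diff_ci(a, b):
    """Approximate 95% CI for the difference in group means.

    Uses the large-sample normal approximation (z = 1.96); for small
    groups like these, a t-based interval would be wider."""
    diff = mean(a) - mean(b)
    se = (stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)) ** 0.5
    return diff - 1.96 * se, diff + 1.96 * se

# Invented toy data for two groups
treated = [5.2, 5.4, 5.1, 5.3, 5.2, 5.3]
placebo = [5.1, 5.3, 4.9, 5.2, 5.0, 5.1]

low, high = mean_diff_ci(treated, placebo)
# If the interval excludes 0, a real difference is likely; its width
# tells you how precisely the size of that difference is known.
```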

Test Assumptions

Once you have identified the right test for your intended question, make sure your data fits any test assumptions. What might those assumptions be? The data following a normal distribution is perhaps the most common. Whether it be a t test, an ANOVA or linear regression, normality is required for test validity. If you have looked at your data for errors and outliers, you will have already visualised the data. This is how we pick up on distributions; however, there are also formal statistical tests for normality (e.g., the Shapiro–Wilk test).

This link discusses how we can apply data ‘transformations’ in cases where normality does not exist. Be warned that any manipulation (e.g. logarithmic) changes what you can interpret (e.g. t test statistical significance), as the result only happened in the context of the transformation.

Considering Time Course

Continuing on the theme of data manipulation: what do we do if our baseline values are different? Consider the following example. You want to look at the serum creatinine levels of someone with chronic kidney disease after treatment with standard care, or standard care plus a new treatment. For this we need to consider the creatinine levels pre-treatment to ensure that our (randomised) groups did not have different kidney injury severity at baseline. If they are different, what can you do?

A common approach is to express values as a fold change relative to baseline.
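The fold change calculation itself is trivial; the care is in the interpretation. A small Python sketch with hypothetical creatinine values:

```python
def fold_change(baseline, followup):
    """Express each patient's follow-up value relative to their baseline.

    1.0 = no change, 2.0 = doubled, 0.5 = halved. Absolute values (and
    so clinical reference ranges) are deliberately discarded."""
    return [f / b for b, f in zip(baseline, followup)]

# Hypothetical serum creatinine (umol/L) at baseline and 6 weeks
baseline = [150.0, 300.0, 220.0]
week6 = [120.0, 240.0, 220.0]
changes = fold_change(baseline, week6)
```

Notice that the patients starting at 150 and 300 show the identical fold change of 0.8 despite very different absolute values, which is exactly the limitation discussed below.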

Remember, now that you have transformed the results, you can no longer consider statistical tests results in the same light as before.

Fold change is used to examine biological effects (just seeing what is happening but not implications). This may be essential due to baseline differences but is particularly common when having samples analysed by laboratory procedures. This is due to its ability to account for procedure variability. This does not mean that it is solely useful in the laboratory though.

Fold change does not consider absolute values and clinical significance in relation to any potential reference ranges.


Clinical Significance

So, what did we mean by clinical significance? It’s probably easier to first define it by what it is not -  it is not statistical significance. As we discussed in our previous example illustrating outliers, statistical significance is indicative of where an average value sits and the variability of the values around this average. As such, the statistical significance tells us the likelihood that the groups are truly different. This is generally indicated by p values with reference to the normal distribution.

Some important side notes:

  1. We can typically estimate the presence of statistical significance by whether error bars (mean & SD or 95% confidence intervals) overlap, but doing this will miss some statistically significant relationships. This article is a must read on the topic.
  2. When using p values and the phrase statistical significance, always quote exactly what your p value was. Saying < 0.05 is not good enough. The exact value affords the reader knowledge of what your p value actually means: the probability of observing a difference this extreme if the groups were truly the same.

Clinical significance adds to the likelihood of a difference being real, by taking into account the context of whether the difference in absolute values will have a clinical impact on the patient. Let's use another example to see this in action. Say you are taking a serotonin and norepinephrine reuptake inhibitor to help manage the anxiety your supervisor is giving you.

In the left panel there is a clear statistically significant difference (the confidence interval of the mean does not overlap 0), as measured by the Beck Anxiety Inventory, at the 6 week mark for Group 1 and Group 2, both of which are taking the medication. The response in Group 1 is larger to a statistically significant extent (p = 0.001).

The right panel shows the mean and 95% confidence interval for the absolute values at baseline and 6 weeks for Groups 1 and 2. The markers for clinical categorisation thresholds and the ability to compare absolute change give us more clinically relevant information. The graph shows a clinically different presentation in Group 2 but not in Group 1, despite the larger change in Group 1. While you may argue that a large change regardless of clinical classification would be clinically significant, we still need to delve into what exactly the change means. These scales are typically validated for their cutoffs. A large anxiety change at higher anxiety levels may well translate to a smaller change in quality of life than a smaller anxiety change at lower anxiety levels.

Questions of clinical significance can become even more difficult. For example, your survival curve appears to show a fractional improvement when using a new life support treatment for respiratory failure. The mortality rate is usually 90%. The result is not statistically significant (threshold p < 0.05), coming in at p = 0.15. When mortality is this high, what degree of certainty is clinically significant to us? And how much response variability do we expect regardless of treatment? Remember, statistical significance is only a product of the groups' central values and variability.

Unfortunately, measures such as an anxiety scale are not as easy to interpret as potential medication side effects, like hypertension. This nuance is why outcome specificity is so important.

Outcome Specificity

When designing and later analysing a study, we need to ask ourselves: what is our outcome, and does it specifically measure what our study is looking to find out? If your study is looking to test the effectiveness of something (e.g., a new treatment or screening) compared to the current standard, then using the current clinically accepted marker could be the best bet. If you are trying to understand a disease to a greater extent or perhaps analyse a mechanism, you will need to refine your focus. This is because a clinical marker cannot encompass the whole disease, nor is it necessarily an accurate marker.

Those familiar with jugular venous pressure as a fluid balance measure may be aware that clinically recommended and used markers do not always correlate well with what they’re measuring.

Our first example of outcome specificity centres on Crohn's Disease (CD). We want to test whether our new treatment is effective in managing CD. We decide to measure C-reactive protein as our outcome due to its less invasive nature and comparative inexpensiveness. When it comes to drawing our conclusions, we cannot say whether our treatment is more effective for managing CD broadly, as management is multi-faceted. As this study showed, C-reactive protein does not necessarily correlate with measures of clinical activity (or symptoms). For a conclusion on broad treatment effectiveness, we would want to see an array of markers improve. For CD, these could include:

  • C-reactive protein (an acute inflammatory phase protein).
  • Endoscopic mucosal healing (indicator of longer term disease remission).
  • Clinical activity (symptom standardised survey).

Measuring everything at once is not always possible and definitely not necessary if your conclusions are specific (which they should be!). With this in mind, our revised hypothetical CD study decides to focus on the ability of a given treatment to increase mucosal healing at one-month intervals for a year. We do this as we think the reduced complications this goal would achieve (e.g., fewer bowel perforations) could have the most wide-ranging benefits. Invasive biopsies (with histology) are the best method to detect mucosal healing in the clinic. However, they are unpleasant and riskier for patients. Before using magnetic resonance imaging as an alternative measure that is more tolerable, we would look for a study such as this one. This particular study shows a strong correlation between the evaluation of mucosal healing by magnetic resonance imaging and by biopsy with histological evaluation (the gold standard). This standardisation step is just as important as picking the specific marker to measure.

Gold standard references, such as biopsy with histology, are fantastic to have on hand, but we must be aware that even these can be flawed. Villous atrophy and crypt hyperplasia are indicative of gluten exposure in Coeliac disease. This study showed that the histopathological assessment used in regular clinical practice to detect these changes is not sensitive enough. Narrowing down a reference point is no simple task and absolutes don’t exist. We often screen large samples of ‘healthy’ non-symptomatic individuals to define a reference range of normal. We need to be particularly aware when completing this step, as our reference range should be representative of our research group of interest. One clear example of why this is so important is the ethnic and sex differences in white cell and platelet counts.

As can now be seen, outcome specificity comes down to understanding exactly what we’re measuring and what we’re benchmarking its interpretation against. It is the context that allows us to use clinical and statistical significance for the right purpose.

So I’m a Professor Now?

Hopefully this rundown has given you a greater appreciation of the need for, and types of, context present when conducting analysis. This context manifests in several ways:

  • Measurements and their reliability.
  • Assumptions behind statistical tests.
  • Matching your desired study outcome with the correct statistical test.
  • Understanding the difference between a statistical difference and a clinically significant difference.
  • Knowing whether your study outcome measures the clinical significance you are looking for.

Understanding the need for context during application is essential. But if you don’t understand the basics of statistical theory, you will struggle to put this scaffold into practice. We have collated some free resources that cover the basics of statistics. Need some help understanding the content? Reach out to us in our free community, where you can chat with us and other med colleagues publicly or anonymously.

Once you feel comfortable with data analysis, consider why we’re doing it. Research is of no use if other people don’t know about it. We discuss the practicalities of scientific communication in our articles overviewing conferences and the basics of publication.


Read Part 7

Conferences: An Overview

Conferences. You may have heard of them, made some posters and given talks in your time. Truthfully though, you might not really appreciate their purpose and know how to get started. That’s normal. Conferences are often neglected by people casually engaging in research as they are less flashy and definitely less defined in purpose. This might be why conference jargon is so damn confusing.

This article will explain the purpose of conferences, clear up some jargon and detail the process of presenting research.

Presentations, a form of research communication, typically happen at the end of a long research process. Before you get to this point, you should first get your head around the fundamentals of good research - the necessity of original research, the importance of designing it for purpose, how to streamline the organisation of data you collect and the application of statistics: all of which are the practical considerations that classes may leave out.

Presentation Purpose

Publications are the more famous and beloved sibling of the research communication process. Being a permanent record of the finished project, publications are all about adding to our shared knowledge and gaining a lot of attention in the process (hello impact factor). They are the showboats of the family.

Don’t get us wrong, we love publications too. They just need some humbling. We discuss the basics of publishing here.

Presentations are the quiet achievers, more focussed on self-improvement. They are your work in progress encompassing many broad aims and significant, transferable benefits:

  • Getting constructive feedback on prior work and direction.
  • Networking with other researchers and increasing collaboration opportunities.
    • Collaboration provides diverse expertise and can produce both more well-rounded research and increased research output.
  • Getting a feel for the trends in a research area.
  • Learning how to do better research by viewing others’ work.
  • Gaining clarity and improving your science communication as you repeatedly crystallise your project for an audience.

These benefits are one reason why specialty colleges may assign significant weight to conferences when assessing candidates’ research experience. You can view one example of this in the 2021 Structured Curriculum Vitae (CV) of the College of Intensive Care Medicine of Australia and New Zealand. You may have heard of PhDs and research MDs (example here) and understand their weight in this context. But have you heard of an ASM? And do you know how conferences differ as they get larger?

If you weren’t aware of specialty application CVs and their role in your future, we’re sorry to do this to you.

Meeting Jargon

An introduction to the type of conferences available to you.

ASM = Annual Scientific Meeting

You can attend an ASM as a medical student or junior doctor without any research experience. As you progress as a researcher, your aim is to present your research works at the ASM for your relevant specialty field.

Each specialty typically has a nation-wide society that holds an annual scientific meeting. For example, Obstetrics and Gynaecology has RANZCOG (the Royal Australian and New Zealand College of Obstetricians and Gynaecologists). You can check out the 2021 ASM which RANZCOG held online due to COVID-19. Despite the more difficult networking that comes with online delivery, you will enjoy much greater affordability, a great perk for junior physicians and students. The registration fees for conferences can get quite expensive (upwards of $100). This does not include the costs associated with travel, accommodation, and everything else that goes into a few days away from home. Conferences often look favourably on student attendance - make sure to state that you are a student when booking for a subsidised fee.

ASM Types

The larger the ASM, the more people are eligible to enter and the more prestigious the conference.

ASMs can occur with entrants from regional, national, or international levels. The larger the ASM, the harder it is to be selected for a presentation and consequently more CV points are allocated towards it. Some colleges may allocate a point for ASM attendance (e.g., Australian Orthopaedic Association), but more points are allocated for poster or oral presentations.

Regional ASMs are a welcoming spot for all. Consider international ASMs once you are established and publishing in a field.


The Process

Before getting into the steps for presentation acceptance, you need to understand how to make a presentation. It is more than a visual manuscript. Some key points to consider:

  • There is a different level of detail required. The protocol does not need to be step by step. You overview the processes used, the idea being that an audience member can ask for further detail if they think it is relevant to the conclusions you have drawn.
  • For larger projects you will not be able to include your full data set.
    • If complete, you can crystallize your key findings. This does not mean you should ignore negative findings (e.g., treatment had no effect). It means you should focus on the most clinically relevant information.
    • If the project is still in-progress, you could present a protocol development stage of your project. Showing people why you did something a certain way can test your ability to understand your study’s limitations. It’s a great opportunity to challenge yourself and challenge others - you may even make someone question what they do when conducting their own research.

Abstract Submission

The first step to presenting your research is an abstract submission.

An abstract is a brief and comprehensive summary of the presentation's contents. Depending on the size of your project, either during or at the conclusion of research, you may be encouraged to submit an abstract to the ASM. If you are not, we’d strongly suggest you take the initiative. As discussed above, presentations are great learning experiences.

The abstract serves as a proposal for your presentation. The ‘Submitting Author’ is the first author who submits the abstract. The ‘Presenting Author’ is the person who will be presenting at the conference. Ideally, you will be fulfilling both roles - but make sure to discuss this with your supervisor. The abstract submission usually closes a few months before the ASM. Don’t try to submit late!

After submission, the abstract will be reviewed by the College and selected for either a poster or oral presentation! Some of the more prestigious meetings will have lower acceptance rates. They may also peer-review the abstracts and, if associated with a journal, may publish them. ISBT and Vox Sanguinis share one such association. The submission of an abstract to the conference and the potential for publication encourage attendees to view your work, and can help you gain recognition within a field.



Poster Presentation

As a newcomer your first presentation will likely be a poster. It requires less experience and is a good training ground for how to field open questions.

The purpose of a poster is twofold - to present a piece of work that colleagues can easily view and to stimulate an exchange of ideas between the presenter and audience. Your poster will likely be a single A0 poster, but the sizing may vary depending on conference guidelines. It is important for your poster to be eye-catching and readable from 2-3 metres. Take a look at past winners of the conference's best poster award when designing your poster; they can guide your design.

You will be allocated a time slot (‘Poster Session’; usually 30 minutes) where you and other presenters stand by your posters and field any questions. You may be required to print your poster beforehand and bring it to the conference.

If you’re nervous about presenting a poster, try not to be. They’re meant to be entry level. It is actually pretty easy to get a spot if you put the effort in.

Oral Presentations

Oral presentations typically happen as you progress in your career. They can also be awarded for findings deemed to be of particular interest. Oral presentations allow you to distribute your research findings to a wider audience (and get more CV points!).

Getting an oral presentation requires good research, networking, and choosing your conference strategically (e.g., focussing on a national instead of an international conference).

For an oral presentation, you will usually be allocated a 10 minute speaking timeslot. This is followed by a 10 minute Question & Answer time. These timings are typically followed very closely. The presentations occur in parallel (or in ‘streams’) and this means there may be 2-3 presentations happening concurrently. Attendees will get to choose what to listen to. In the new era of virtual conferences, you may be expected to pre-submit a screen recording of yourself giving the presentation. The conference will play the recording live on the day of the event. Following that, you may be expected to answer questions online in real time.

Breaking Down Research or Breaking Down Emotionally: Conferences Don’t Have to Mean Both

Now hopefully you feel more confident in understanding:

  • Why presenting research is an important step for developing your work and your future publication.
  • That the terminology used isn’t that intimidating once you know it.
  • How the process flows and what you should aim to present.

Now you may still be thinking, “I’m only a medical student, I’m not qualified to present at these conferences!”. This is a common hangup - but just have a look through any ASM program guide and you will see medical students giving poster and oral presentations. If the idea of presenting in front of professionals still terrifies you, our other articles can help you build research confidence.

Student conferences can be a great dry run that will build confidence. We have compiled a list of these opportunities on our resources page. Our free community is a great place to clarify any thoughts you have after reading this article.

Read Part 8

Publishing Research: The Introduction

Most people know the typical format of an original research article. We have the abstract, introduction, methods, and so on. This is something drilled in assignment after assignment. Despite this theoretical exposure, why are we not so confident in understanding how to submit an article for publication?

Understanding the publishing landscape is crucial to getting your research published. You want to pick the right journal the first time round (or at least within the first couple of tries). Submitting to multiple journals requires a whole host of highly time-consuming tasks: rewriting the paper's narrative, reformatting, and new cover letters. It almost goes without saying that this reduces your likelihood of persevering to publication. Giving up after you’ve done the research may sound crazy right now, but trust us, it happens.

Publications, a form of research communication, are typically saved for the end of a long process. You first need to understand the necessity of original research, the importance of designing it for purpose, how to streamline the organisation of data you collect and the application of statistics. Our linked articles give the practical considerations that classes can leave out.

We’re going to give you:

  • A crash course in how the publishing industry works.
  • An introduction to navigating journals (with primary focus on original research and literature reviews).
  • An overview of other publication types, their purposes, and key points to consider if you are thinking about tackling one.

What Journals Want

Publishing System

The key parties you will interact with when proposing a manuscript (unpublished paper) are:


Publishers

  • These are the large companies that publish items such as textbooks and journal articles. While there are many, including those owned by universities, very few control the majority of the world's journals. Some of the large publishers are Elsevier, Wiley, Sage, and Springer.


Journals

  • The organisation that publishes your article. Note, you literally sign the copyright of your article over to them. Thank you corporate overlords.
  • While some journals accept anything scientifically valid, most have specific fields they focus on and aim to advance. They will have readers interested in (and who will cite articles in) these areas.
    • You can find a journal’s ‘scope’ in its ‘for authors’ section.


Editors

  • A research field expert who runs the day-to-day operations of the journal. Editors review manuscripts submitted to the journal and decide whether the study fits the journal's scope. They will also look for any major study design or interpretation errors.
    • There can be sub-section editors who have the expertise to cover these processes for a sub-field of the journal.


Peer-Reviewers

  • Peer-reviewers are active (e.g., published in the last 5 years) researchers within a research field. Journals may require a set amount of publication experience (e.g., 3 first-author papers in a field) before they approach a researcher or accept a researcher's offer to peer-review.
    • Journals may ask for a list of potential peer-reviewers. They can also accept a list of people you would prefer they not use. Research feuds are real.
  • Peer-reviewers decide if your research is scientifically sound with reference to your stated aims, your conclusions about these aims, and the methodology used to arrive at your conclusions. They can also assess whether your research is thorough and novel enough for a journal's scope.
    • The limitations of the peer-review process are highlighted in our article on project design. We discuss what conduct they do and do not typically check.

In an age of niche interdisciplinary fields, increasing collaboration size, and accelerating publication frequency, journals are finding it very difficult to find suitable independent peer-reviewers.

Navigating Journals

The first step to publishing is finding a journal. This is typically guided by seniors in your field, however, if you lack support or they’re also not sure:

  • Search Pubmed for similar articles and where they are published.
  • Use a find journal service that several major publishers provide.

As you progress and have more responsibility, you will find that your relationship with your supervisor and the choice of the journal are two critical parts in getting a publication out relatively quickly. This is because:

  • Having their support helps significantly to guide you through the steps.
  • They will know which journals are likely to be interested in their type of research.

Finding a journal now sounds simple enough. If it is, though, why are there so many articles and opinion pieces on the difficulties of finding a journal to publish in? Your article may fit a journal's scope in your mind, but it can be an issue of how you ‘sell’ it to someone who isn’t familiar with the project, or perhaps even your specific research area. This selling comes through in the focus of the manuscript's aims and interpretation, but the communication, and even persuasion, starts with the cover letter you send to the editor. You can find a template for this here.

Generally, projects can be looked at from multiple angles. A study could look at the effect of transfusion on mortality during hemorrhagic shock. A key outcome is that those transfused more frequently have greater lung injury with no survival benefit. If you decided to submit this manuscript to a respiratory journal but emphasised the initial view you had of the study (reducing mortality in hemorrhagic shock), you wouldn’t get very far. You’d need to emphasise the risk of lung injury when treating near-fatal hemorrhagic shock with transfusion.

This manuscript could realistically go to a transfusion medicine, emergency medicine, or respiratory medicine journal. Picking a journal before you start writing can help you develop (but not determine!) the narrative you will present in your data and writing.

Other considerations once you’ve thought about this include:

  • Is your publication chronological as you completed the work? Publications should follow a logical order that creates a story from problem to conclusion. This may not be chronological.
  • Have you written a balanced analysis of your data with frank acknowledgement of limitations to methodology (not just sample size)?

While these examples have been written with an original research article in mind, most points are widely applicable to differing publication types.

Publication Types

Original research articles may be what you think of when publishing, but there are many different article types. Each article type has a purpose in the knowledge distribution system and requires different skill sets and time input. These differences are recognised when other professionals or specialty colleges are assessing your capabilities.

Despite sharing some central themes, different article types and their requirements can vary from journal to journal. We have summarised this by using a collated list from the 5 journals purported to be the most widely influential in clinical medicine (not necessarily prestigious). The journals are:

While you will likely start your research career with some form of original research or review article, a component of writing good research is first reading other good research. To this end, we have elaborated on the range of publications you will likely come across.


Methods

This article type is not one you would publish off the bat, but we’ve placed it first for logical progression. When developing a new method, you have the opportunity to share it through publication.

Research and Reporting Methods: Articles related to research methods, reporting standards, or developments in clinical practice.

Clinical Trial Procedure: Clinical studies typically register the trial beforehand (Australia and worldwide) and may publish their methods. This provides accountability as clinical trials that find no effect have historically not been published.

Original Research

This type of article is what most of us first think of when we think of published research. This article type is wide-ranging and encompasses studies from the laboratory to the hospital bed.

The importance of the term ‘original’ is explained in another one of our articles.

This type of publication reports on the original analyses of data. This could be on the prevalence, causes, mechanisms, diagnosis, course, treatments, and prevention of any disease. Completing an original research article can require a long commitment over months to years. There can be ethics approvals, prolonged data collection, method trial and error, and more. With these challenges, it can be beneficial to not go it alone. Embed yourself within a research group and their ongoing research to reap the benefits of mentorship. Once achieved, these studies are the building blocks of career progression in medicine.

Take note that original analyses are not necessarily the same as original data. This is why most journals consider a systematic review and meta-analysis to be original research.

Brief Research Reports: Concise reports of original data that are limited in scope and/or preliminary in nature. Usually describes 1-3 patients or a single family. Can happen in the initial stages of a research area or when cases are highly uncommon.

Case Reports: Reports presenting a single patient or a series of patients using a structured format (Background, Objective, Case Report, Discussion, References). These reports present unusual presentations of interest. They should be descriptive in nature and refrain from including inferential analyses.

Case reports can be very similar to assignments you may do during medical school. If you make a point to do the assignment well, you can turn the assignment into a case report afterwards. The effort-to-payoff ratio here is unbeatably low.

Systematic Reviews and Meta-Analyses: Reviews that systematically find, select, critique, and synthesize original research relevant to well-defined questions about diagnosis, prognosis, or therapy. These studies may:

  • Infer data published: A presumed data distribution with the published patient number and summary statistics (mean, SD, 95% CI).
  • Source original data: Contact publication authors for original data. This allows for more detailed and flexible analysis.
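The 'infer data published' approach can be sketched as a fixed-effect, inverse-variance pooled estimate built only from what a meta-analyst can read off each paper: patient number, mean difference, and SD. All study numbers below are hypothetical:

```python
# Fixed-effect inverse-variance pooling of mean differences, using only
# published summary statistics (n, mean difference, SD).
# Study values are invented for illustration.
studies = [
    # (n, mean difference, SD of the outcome)
    (40, -2.0, 4.0),
    (60, -1.5, 5.0),
    (25, -3.0, 6.0),
]

weights, weighted_effects = [], []
for n, mean_diff, sd in studies:
    var = sd ** 2 / n        # variance of this study's mean difference
    w = 1.0 / var            # inverse-variance weight: precise studies count more
    weights.append(w)
    weighted_effects.append(w * mean_diff)

pooled = sum(weighted_effects) / sum(weights)
se = (1.0 / sum(weights)) ** 0.5
ci = (pooled - 1.96 * se, pooled + 1.96 * se)

print(f"pooled mean difference {pooled:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")
```

Sourcing the original patient-level data instead removes the need for these distributional assumptions, which is why it allows the more detailed and flexible analysis mentioned above.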

Literature Review

Literature reviews are summaries and critiques of the existing original research. The term review implies it is comprehensive, but reviews do not necessarily cover everything on a topic (e.g., epilepsy). Like original research, reviews have a specific focus.

Literature reviews are often one of the first things a researcher would read when trying to understand a research field. They are often desirable to researchers building a track record as they are quicker to complete than original research and gain more citations.

Literature reviews may be done by students as a way to become familiar with a field. The benefits are largely a flexible project that requires few resources. Reviews can be challenging in that the student will have to put groundwork in to understand the field's historical trajectory and methodologies. These points can be the basis of the critique component. Students may find the more structured approach of a systematic review helpful and achievable for this reason.

Systematic Review: Reviews that systematically find, select, critique, and synthesize evidence relevant to well-defined questions about diagnosis, prognosis, or therapy.

Narrative Review: Review articles without detailed structured methods to identify, collect, appraise, and interpret information that are often summarized descriptively in a narrative form. Narrative reviews are especially suitable for underlying theory or describing cutting-edge and evolving developments. 

Rapid Review: Components of the systematic review process are simplified or omitted to produce information in a timely manner. Could be for a time critical topic such as end stage COVID-19 treatments.

Living Review: A systematic review that is routinely updated at defined intervals, incorporating relevant new evidence as it becomes available.


Textbooks

We all know textbooks. From cell biology to anatomy, they are a familiar friend. Textbooks, however, are not always as broad and universally needed as cell biology and anatomy. Textbooks can become niche for a specific method or type of treatment. When a researcher has specific expertise in an area that a textbook is covering, they may be invited to write and publish a chapter. This is often done as a team effort by a research group.

For individual medical students, there can be opportunities to contribute to more informal education resources such as DermNet.



Guidelines

Medical students are most likely to be familiar with guidelines. They are the culmination of evidence to date. This does not mean that they are infallible though - they are only as good as the studies they are based upon. These publications are written by leading experts.

Position Papers or Clinical Guidelines: Official statements/recommendations from professional organizations/health authorities on issues related to clinical practice, health care delivery, and public health.



Communications

Communications are the social network of research. You’d typically have to be experienced or very confident in a field to make a comment deemed ‘field specific’. Holistic views of medicine would be much more welcome to newcomers with an opinion to share.

Field Specific Expertise Required

An opportunity to participate in the post-publication review of research. These published critiques and debates of research facilitate understanding of the literature and improve future research practice.

Editorials: Commentary on current topics or on papers published elsewhere in the issue. These are typically solicited and reviewed by journal editors.

Ideas, Opinions, and Correspondence: Essays representing opinions or considering controversial issues.

Hypothesis: Describes a substantial jump in thinking that is testable but not so easily testable that readers will wonder why you have not already done it. New data are not part of a hypothesis, but one must include a section on how to test the idea.

In the Balance: Pairs of essays that each take contrary views on unsettled questions related to the practice of medicine.

Holistic Views of Medicine

This type of publication allows for shared discussion on social topics related to medical professionals.

On Being a Doctor/Patient: Short personal essays about the experiences of physicians/patients.

Medicine and Public Issues: Articles related to the economic, ethical, sociologic, or political environment in medicine.

Academia and the Profession: Descriptions and evaluations of innovations in medical education, training, professionalism, and career development.

History of Medicine: Essays, reports, or biographic sketches related to the history or evolution of medicine.

Creative Works: Poetry, comics, animation, video, and other creative works addressing medical topics.

TLDR? Look at the table

There is a lot here. We can distill this long list to variations on a few core purposes. These purposes flow on from each other and are summarised in the table below.


Publication Type | Purpose | Example
Methods | Explaining a new technique. | Improved sensor for near infrared spectroscopy (NIRS).
Original Research | Generating new knowledge. | Applying NIRS to detect whether vasodilator ‘A’ improves neurological function after stroke.
Literature Review | Summarising knowledge on a specific research area. | The efficiency of vasodilators in treating stroke compared to standard thrombolytics.
Textbooks | Summarising knowledge on a broader topic. | Acute stroke management.
Guidelines | Creating action points based on new evidence. | Assessment of whether or not to use vasodilators, in what scenarios, and at what dosage.
Communications | Communicating critiques and perspectives (the social media of medicine). | Spanning critiques of the NIRS method and vasodilator study design to ideas about other ways to detect improvement from treatment.


Take this as a thought starter on how literature works as a system and consider what you might be interested in working towards. Each publication type requires different skills and support structures. Does diving straight into this all sound a bit too much?