Literature Review Example: Annotated Samples Across 5 Disciplines (2026)
Finding a strong literature review example is one of the hardest parts of starting a dissertation. You can read a hundred guides about what a literature review should do, but nothing teaches you faster than seeing a real paragraph that achieves synthesis — and then understanding exactly why it works. This guide provides annotated examples drawn from five academic disciplines: psychology, education, business, nursing, and history. Each example is analysed line by line so you can see not just the product, but the technique behind it.
The literature review is consistently the chapter that earns the most critical feedback from dissertation examiners. Supervisor and examiner feedback at UK universities repeatedly flags the same three issues: over-summarising individual sources, failing to build a clear argument, and not connecting the literature to the research question. This guide addresses all three directly, with before-and-after examples that show the difference between a passing and a distinction-level literature review.
What Makes a Literature Review Example Good: Synthesis vs. Summary
The distinction between summary and synthesis is the most important concept in academic writing, and it is the one most frequently misunderstood. Summary describes what individual sources say. Synthesis weaves multiple sources together into a single evaluative argument about the state of the field.
Compare these two versions of the same content:
Summary (the weak version):
Smith (2020) found that anxiety in teenagers increased with screen time. Brown (2021) conducted a survey and found similar results. Johnson (2022) also noted that heavy social media use correlated with depression symptoms.
Synthesis (the strong version):
A growing body of quantitative research links heavy social media use to increased anxiety and depression in adolescents (Smith, 2020; Brown, 2021; Johnson, 2022), though important methodological differences complicate direct comparison: Smith employed a clinical measure (GAD-7) with a sample of 1,200 participants, while Brown’s self-report tool and smaller sample (n=342) limit the generalisability of findings that, on the surface, appear consistent. Johnson’s longitudinal design partially addresses this limitation by tracking the same cohort over 18 months, yet her study excludes 16–18-year-olds — the demographic most heavily represented in screen-time studies — narrowing the scope of her conclusions.
The synthesis example names the same three studies but does something fundamentally different: it evaluates their collective contribution to knowledge, exposes methodological limitations, and signals where the research gap lies. That is what examiners want.
Example 1: Psychology (Mental Health and Social Media)
Research Question Context
“How does passive versus active social media use affect anxiety in UK university students aged 18–22?”
Annotated Literature Review Excerpt
The relationship between social media use and mental health has attracted substantial empirical attention over the past decade, yet the field remains divided on whether the association is causal, bidirectional, or mediated by pre-existing psychological vulnerabilities (Orben & Przybylski, 2019; Coyne et al., 2020). Early large-scale studies — including Twenge et al.’s (2018) analysis of US Monitoring the Future data (n = 500,000) — established a population-level association between screen time and depressive symptoms. However, Orben and Przybylski’s (2019) multiverse re-analysis of three large datasets found that the effect size of social media on well-being was comparable to wearing glasses or eating potatoes — a finding that attracted substantial methodological debate (Odgers & Jensen, 2020). This apparent contradiction reveals not a lack of consensus but a measurement problem: studies operationalise “social media use” differently, conflating daily minutes online with qualitatively distinct behaviours. The active/passive use distinction, first formalised by Verduyn et al. (2015) and since replicated across 11 independent studies, offers the most promising resolution. Passive use (scrolling, observing) consistently predicts worse mental health outcomes than active use (messaging, creating, commenting), suggesting that the key variable is not time but engagement mode (Fardouly & Vartanian, 2015; Verduyn et al., 2017; Meier & Reinecke, 2021). Despite this, no published study to date has applied the active/passive distinction to UK university students specifically — the demographic most likely to use social media intensively during a formative life transition that already elevates anxiety risk.
Annotation
- Opening move: Signals the broad field and immediately notes the central debate (causal vs. bidirectional).
- Evidence stacking: Twenge et al. and Orben & Przybylski appear to contradict each other — but the author shows they reveal a measurement problem, not a genuine contradiction. This is sophisticated synthesis.
- Gap identification: The final sentence is the most important: it precisely states what has not been done (active/passive distinction + UK university students) and explains why the gap matters (heightened risk period).
- Citation density: 9 citations in one paragraph — appropriate for a master’s literature review where the goal is comprehensive coverage of the debate.
Example 2: Education (Inclusive Classroom Practices)
Research Question Context
“What barriers do secondary school teachers in England face when implementing inclusive education for students with autism spectrum disorder?”
Annotated Literature Review Excerpt
Inclusive education — defined as the placement and meaningful participation of students with disabilities in mainstream classrooms — is enshrined in UK policy through the Equality Act 2010 and the SEND Code of Practice 2015. However, a substantial gap between policy intent and classroom practice has been documented consistently across three decades of empirical research (Ainscow, 2020; Norwich & Koutsouris, 2017; Hodkinson, 2019). Teacher confidence emerges as the primary mediating variable: studies from England (Lindsay et al., 2013; Sharma et al., 2021), Australia (Forlin et al., 2014), and Canada (Loreman et al., 2007) collectively show that positive attitudes toward inclusion are insufficient without specialist knowledge and practical training in ASD-specific pedagogy. Critically, UK-based studies report that Initial Teacher Training programmes dedicate an average of 0.5 days to SEND across a one-year PGCE (Curran et al., 2021) — insufficient preparation given that approximately 34% of students with Education, Health and Care Plans have ASD as their primary need (DfE, 2023). While Florian and Black-Hawkins (2011) propose an “inclusive pedagogical approach” that moves away from individual deficit models, critics such as Kauffman and Hallahan (2005) argue that full inclusion is educationally inappropriate for students with complex needs. This unresolved ideological tension shapes how teachers interpret and respond to inclusion mandates, yet existing research has rarely examined how this tension manifests in day-to-day classroom decisions at the secondary level.
Annotation
- Policy grounding: Opening with legislation signals the author understands the institutional context — important in education research.
- Cross-national comparison: Citing studies from three countries shows broad awareness while still returning to the UK-specific gap.
- Statistical anchor: The 0.5-days training figure and 34% ASD statistic make the gap concrete and defensible.
- Ideological tension: Presenting the Florian vs Kauffman debate shows the author understands the field is contested, not just describing one side.
Example 3: Business (Remote Work and Productivity)
Research Question Context
“How does remote work affect individual productivity in UK professional services firms post-2020?”
Annotated Literature Review Excerpt
The COVID-19 pandemic created the largest unplanned remote work experiment in modern economic history, generating a volume of empirical data that prior remote work research — largely conducted on voluntary, self-selected samples — could not produce (Dingel & Neiman, 2020; Barrero et al., 2021). Pre-pandemic meta-analyses (Gajendran & Harrison, 2007; Ng, 2010) found modest positive effects of remote work on productivity and job satisfaction, with effect sizes contingent on task autonomy and the degree of physical separation from the office. Post-pandemic research, however, reveals a more complex picture. Stanford economist Nicholas Bloom’s randomised experiment at Ctrip, a 16,000-employee Chinese travel firm — one of the largest controlled experiments on remote work — reported a 13% productivity gain among home workers (Bloom et al., 2015); yet Bloom himself has since acknowledged that hybrid arrangements produce diminishing returns once employees work remotely more than two days per week (Bloom et al., 2022). In the UK specifically, McKinsey’s 2022 survey of 25,000 workers found that 58% wished to maintain hybrid arrangements, yet only 38% reported equivalent productivity at home for cognitively intensive tasks. The discrepancy between self-reported satisfaction with remote work and objective output measures is a structural weakness in the literature that remains unresolved, and it is particularly pronounced in professional services — legal, financial, and management consulting — where knowledge-intensive collaboration makes individual productivity difficult to isolate and measure.
Annotation
- Historical framing: Opening with the pandemic as a natural experiment is precise and grounded — it justifies why this topic warrants fresh research.
- Before/after comparison: Contrasting pre- and post-pandemic findings shows intellectual command of the chronological development of knowledge.
- Author updating their own position: Noting that Bloom himself revised his conclusions is a strong move — it shows the author reads beyond single studies.
- Sector-specific gap: Ending by narrowing the gap to professional services (the study’s industry context) connects the literature directly to the research question.
Example 4: Nursing (Patient Communication and Outcomes)
Research Question Context
“How does therapeutic communication training affect patient satisfaction in NHS acute care wards?”
Annotated Literature Review Excerpt
Patient satisfaction is increasingly used as a proxy measure for care quality in NHS performance frameworks, including the NHS Patient Survey Programme and the Friends and Family Test (NHS England, 2022). While satisfaction is a multifactorial outcome — shaped by waiting times, clinical outcomes, ward environment, and demographic variables — nurse communication has been identified as the single strongest predictor in three independent UK-based systematic reviews (Keenan et al., 2013; Maben et al., 2018; Brown et al., 2020). Effective nurse-patient communication encompasses active listening, empathic acknowledgement, clear information provision, and shared decision-making — behaviours collectively described as “therapeutic communication” (McCabe & Timmins, 2012). Evidence from randomised controlled trials in the US and Australia (Caris-Verhallen et al., 1999; Dwyer & Stanton, 2016) suggests that structured communication training increases both nurse competence scores and patient satisfaction ratings; however, sample sizes are consistently small (n = 30–80), training protocols vary widely, and follow-up periods rarely exceed six months. Crucially, no RCT has been conducted in an NHS acute care setting, where structural factors — high staff turnover, 12-hour shifts, workforce shortages — may substantially moderate the effect of training on communication behaviour in ways that US and Australian studies cannot capture.
Annotation
- Policy context: Anchoring the study in NHS frameworks immediately signals relevance to practice and policy.
- Effect hierarchy: Identifying communication as the “single strongest predictor” — supported by three systematic reviews — makes a strong evidential claim.
- Critical appraisal of RCTs: Acknowledging that existing trials are small and inconsistently designed shows methodological sophistication.
- Context-specific gap: The final sentence argues that NHS-specific structural conditions make it impossible to extrapolate from US/Australian evidence — a precise, defensible gap statement.
Example 5: History (Post-Colonial Identity)
Research Question Context
“How did Caribbean writers in post-independence Trinidad (1962–1980) construct a national identity distinct from British colonial frameworks?”
Annotated Literature Review Excerpt
Post-colonial theory has long grappled with the tension between nationalist cultural projects and the linguistic, literary, and institutional legacies of colonial rule (Fanon, 1961; Said, 1978; Bhabha, 1994). In the Caribbean context, this tension takes a specifically creolised form, as thinkers from C.L.R. James and Frantz Fanon to Stuart Hall have argued that Caribbean identity cannot be understood as a simple rejection of colonial culture but as a dynamic process of “diasporic hybridisation” — a continuous negotiation between multiple cultural inheritances (Hall, 1990, p. 235). The foundational texts of Trinidadian literary nationalism — V.S. Naipaul’s A House for Mr Biswas (1961), Samuel Selvon’s The Lonely Londoners (1956), and the critical interventions of the Beacon group — have been extensively analysed through the lens of colonial mimicry (Bhabha, 1994) and resistance (Cudjoe, 1988; Thieme, 1987). More recent scholarship has complicated this binary. Donnell and Welsh’s (1996) landmark anthology positioned Caribbean literature not as peripheral to the metropolitan canon but as constitutive of a transnational literary modernity, and Gikandi’s (2001) analysis of creolisation as an aesthetic — not merely a sociological — phenomenon opened interpretive space that earlier nationalist criticism foreclosed. What remains underexplored, however, is the specific institutional and editorial infrastructure — journals, publishing houses, radio broadcasts, literary prizes — through which post-independence Trinidadian writers constituted their readership and, by extension, their imagined national community in the two decades following independence.
Annotation
- Intellectual lineage: Tracing the field from Fanon and Said through to Gikandi shows mastery of the theoretical tradition — essential in humanities dissertations.
- Moving the debate forward: The phrase “more recent scholarship has complicated this binary” is a classic move that shows the field has evolved and the author is tracking it.
- Institutional vs. textual gap: Identifying the specific absence of research on publishing infrastructure (rather than literary texts) gives the dissertation a precise, defensible angle that does not repeat existing work.
How to Structure Your Literature Review: 3 Proven Models
1. Thematic Structure (Most Common)
Group the literature by theme or concept rather than by author or date. Each section covers one theme, and within that section you synthesise multiple sources around it. Best for most social science, education, and business dissertations.
Example structure for a mental health study:
- 2.1 Defining mental health and social media use
- 2.2 Quantitative studies of the association
- 2.3 Qualitative accounts of experience
- 2.4 Mechanisms and theoretical frameworks
- 2.5 Gaps and research problem
2. Chronological Structure
Trace how understanding of a topic has developed over time. Each section covers a distinct era or period of scholarship. Best for historical research, or fields where major paradigm shifts have occurred. Requires careful discipline — do not let it become a series of individual source descriptions organised by date.
3. Methodological Structure
Organise the literature by research design: experimental studies in one section, qualitative in another, systematic reviews in a third. Best for systematic literature reviews or methodology-heavy dissertations where comparing methodological approaches is itself part of the research contribution.
How to Critically Evaluate Sources: The RADAR Framework
| Letter | Criterion | Questions to Ask |
|---|---|---|
| R | Rationale | Why was this research conducted? Is the rationale clearly stated? |
| A | Authority | Who wrote it? What are their credentials? Is it peer-reviewed? |
| D | Date | When was it published? Is the evidence still current? |
| A | Accuracy | Are the methods sound? Are the conclusions supported by the data? |
| R | Relevance | Does this source directly address my research question and population? |
Apply the RADAR criteria to every source before including it. This prevents citation padding — the common mistake of including tangentially related sources to inflate your reference list.
How to Identify and State the Research Gap
Your research gap is the single most important statement in your entire dissertation. It is the justification for why your study needs to exist. A gap is not simply “no one has studied X” — that may mean X is trivial. A defensible gap has three components:
- What the field has done (2–3 sentences: the relevant research that establishes the landscape)
- What the field has not done (1–2 sentences: the specific missing study — defined by population, context, methodology, or conceptual angle)
- Why the gap matters (1 sentence: the practical or theoretical consequences of this absence)
Example: “While studies from the US and Australia have examined therapeutic communication training in hospital settings (Caris-Verhallen et al., 1999; Dwyer & Stanton, 2016), no study has investigated this intervention within the structural context of NHS acute care wards, where workforce shortages and 12-hour shift patterns may substantially moderate training effectiveness. The absence of UK-specific evidence limits the practical utility of existing findings for NHS nurse education planners.”
6 Most Common Literature Review Errors (with Fixes)
| Error | Fix |
|---|---|
| Summarising sources one by one | Group sources by theme; evaluate them together using compare/contrast/evaluate sentences |
| No clear argument or thread | Write a one-sentence “story” of the entire chapter before drafting: “The field shows X, but disagrees on Y, and has not yet done Z” |
| Using only old sources | Ensure at least 60–70% of sources are from the past 10 years; use Google Scholar’s date filter and set alerts |
| No link to your research question | Each section should begin or end by connecting the evidence back to your specific research question |
| Accepting everything uncritically | Use hedging language: “suggests”, “indicates”, “argues” — and explicitly note methodological limitations |
| Vague gap statement | Use the 3-component formula: what the field has done + specific absence + why it matters |
For deeper guidance on structure and methodology, see our complete dissertation writing guide and our article on research methodology types. For citation formatting in your reference list, see our APA citation format guide.
Tesify’s AI thesis writer can help you organise your sources, draft synthesis paragraphs, and ensure your literature review builds a coherent argument. Try it free at app.tesify.app.
Frequently Asked Questions
How long should a literature review be?
For an undergraduate dissertation (8,000–12,000 words), the literature review typically covers 2,500–3,500 words. For a master’s dissertation (15,000–20,000 words), aim for 4,000–6,000 words. Doctoral theses may have literature reviews of 15,000–25,000 words. These are not strict rules — a systematic literature review dissertation might have an 8,000-word literature review at master’s level because the review is the methodology.
What is the difference between a literature review and an annotated bibliography?
An annotated bibliography lists sources with a brief description and evaluation of each one — it is a reference tool that treats sources individually. A literature review synthesises multiple sources into a coherent argument about the state of knowledge in a field — it is an analytical piece of writing that creates knowledge through synthesis. An annotated bibliography may be a useful preparatory step before writing a literature review, but they are not the same thing. See our guide on annotated bibliography examples for the format of each citation style.
How do I find sources for my literature review?
Start with Google Scholar, your university library database (JSTOR, EBSCO, Scopus, Web of Science), and discipline-specific repositories (PsycINFO for psychology, ERIC for education, PubMed for health sciences). Use Boolean operators (AND, OR, NOT) and date filters. For snowballing, check the reference lists of the most relevant papers you find — the sources they cite are likely also relevant. Set up keyword alerts in Google Scholar to stay current throughout your research period.
Can I include books in my literature review, or only journal articles?
Both are appropriate. Peer-reviewed journal articles are generally preferred for empirical evidence and recent findings. Foundational books and book chapters are important for theoretical frameworks, historical context, and conceptual definitions. A healthy literature review typically draws on a mix of both — using books to establish theoretical groundings (e.g., Bhabha, 1994; Said, 1978 in humanities) and journal articles for current empirical evidence. Avoid over-reliance on textbooks, which are secondary sources that summarise primary research.
How recent do my sources need to be?
A common rule of thumb is that at least 60–70% of your sources should be from the past 10 years, with the majority from the past 5 years in rapidly evolving fields like technology or medicine. However, foundational theoretical works (Foucault, Bourdieu, Piaget, etc.) can be older and should be cited as they are still the primary reference points in their fields. Always check your department’s guidance — some supervisors prefer a strict 5-year window for empirical claims.
What is a conceptual framework, and do I need one?
A conceptual framework is a visual or written representation of the key concepts and relationships in your study, derived from your literature review. It shows how the variables or phenomena you are studying relate to each other and to existing theory. Not all dissertations require a standalone conceptual framework section, but most benefit from having one either at the end of the literature review or in the methodology chapter. It is particularly important in social science and health research where multiple theoretical influences bear on the study design.