How to Write a Dissertation Methodology Chapter: Step-by-Step Guide with Examples (2026)

Your dissertation methodology chapter is the engine room of your entire study. It explains not just what you did to collect and analyse data, but why those choices were the right ones for your research question. A well-written methodology chapter demonstrates academic rigour, earns examiner confidence, and makes your findings credible. Most undergraduate and master’s methodology chapters run between 1,000 and 4,000 words; PhD chapters can reach 8,000 or more.

Whether you are writing a qualitative interview study, a quantitative survey, or a mixed methods design, this guide walks you through every section — with realistic, annotated examples drawn from the kinds of dissertations that earn distinctions at Oxford, Cambridge, UCL, Harvard, and Stanford.

Quick answer: A dissertation methodology chapter covers your research philosophy (ontology and epistemology), research design, sampling strategy, data collection instruments, analysis method, validity and reliability measures, and ethical considerations — all justified in relation to your specific research question. It is usually Chapter 3 of a dissertation, placed after the literature review and before the results.

What Is a Dissertation Methodology Chapter?

The methodology chapter is your dissertation’s blueprint. It tells the reader — including your examiner — exactly how you designed your study and why those decisions were appropriate for answering your research question. It is distinct from the methods section of a scientific lab report: a dissertation methodology goes deeper, engaging with the philosophical assumptions that underpin your approach and situating your design within the existing scholarly conversation about research methods.

Think of it this way: the literature review establishes what is already known about your topic; the methodology chapter explains how you will add to that knowledge. Your examiner is not just checking that you collected data — they are assessing whether you understood the strengths, limitations, and assumptions of every choice you made along the way. For a full overview of how this chapter fits into your wider submission, see our thesis structure guide.

The Purpose of the Methodology Chapter

  • Justify your research design — show why your chosen approach is the best fit for your question
  • Enable replication — provide enough detail that another researcher could repeat your study
  • Demonstrate awareness of limitations — acknowledge what your design cannot do, not just what it can
  • Establish credibility — link every decision to established methodological literature

Where Does It Go?

In the standard five-chapter dissertation structure, the methodology chapter is Chapter 3, positioned after the literature review and before your results or findings chapter. In some disciplines (particularly the arts and humanities), chapters may be structured differently, but the expectation of methodological transparency remains.

Methodology Chapter Structure: The 8 Essential Sections

Most dissertation methodology chapters include the following eight components, though their order and emphasis will vary by discipline, institution, and research design. Always check your university’s dissertation handbook for specific requirements.

Section | What It Covers | Typical Length
1. Research Philosophy | Ontological and epistemological position (positivism, interpretivism, pragmatism, etc.) | 150–300 words
2. Research Approach | Deductive, inductive, or abductive reasoning | 100–200 words
3. Research Design | Overall strategy: experimental, case study, survey, ethnographic, etc. | 200–400 words
4. Research Method | Qualitative, quantitative, or mixed — and the specific techniques used | 300–600 words
5. Sampling Strategy | How participants or data sources were selected and why | 200–400 words
6. Data Collection | Instruments, procedures, timeline, and pilot testing | 300–500 words
7. Data Analysis | Framework used to interpret the data (thematic analysis, regression, etc.) | 200–400 words
8. Ethics, Validity & Limitations | Ethical approval, informed consent, trustworthiness, and study constraints | 200–400 words

Together these sections broadly follow the logic of the Saunders et al. “research onion”, a widely used framework (taught at universities including Cambridge and Warwick) that helps students think through methodology layer by layer. You do not need to name the model explicitly, but understanding its logic will help you write a coherent, internally consistent chapter.

Qualitative vs Quantitative vs Mixed Methods: How to Choose

The single most important decision in your methodology chapter is your choice of research method. Get this right, and the rest of the chapter follows logically. Get it wrong — or fail to justify it — and examiners will question the validity of everything that follows.

For a deep dive into qualitative designs specifically, see our guide to qualitative research methods. The decision framework below will help you make the right call for your study.

Dimension | Qualitative | Quantitative | Mixed Methods
Research question type | How? Why? What does it mean? | How many? How much? Is there a relationship? | Both — usually one informs the other
Data type | Words, images, observations | Numbers, measurements | Both
Sample size | Small (8–30 typical) | Large (100+ typical) | Varies by phase
Philosophy | Interpretivist / constructivist | Positivist | Pragmatist
Strengths | Depth, context, nuance | Generalisability, replicability | Triangulation, comprehensiveness
Common analysis methods | Thematic, narrative, grounded theory | Regression, ANOVA, descriptive stats | Integration of both

The key rule: your methodology must follow your research question, not the other way around. If your question asks “why do first-generation university students disengage from academic support services?”, qualitative semi-structured interviews are a natural fit. If it asks “is there a statistically significant relationship between library usage and degree attainment?”, a quantitative survey or secondary data analysis is the right call.

How to Write Your Methodology Step by Step

Follow these thirteen steps in order. Each step builds on the previous one, so the chapter reads as a coherent, justified narrative rather than a disconnected list of procedures.

  1. Re-read your research question. Pin it above your desk. Every decision in this chapter should connect back to it. If a methodological choice does not serve your research question, you do not need it.
  2. Identify your ontological position. Do you believe there is a single objective reality (realism) or that reality is constructed by individuals (relativism)? Your answer will shape everything else.
  3. State your epistemological stance. Given your ontology, how can knowledge be produced? Positivists believe in objective measurement; interpretivists believe in contextual understanding; pragmatists choose whatever works best for the question.
  4. Choose and justify your research approach. Are you testing a theory (deductive) or building one from your data (inductive)? Qualitative studies are typically inductive; quantitative studies are often deductive.
  5. Select your research design. Case study, survey, experiment, ethnography, action research, systematic review — choose the design that matches your question and time frame. Undergraduate students often work with surveys or semi-structured interviews; PhD students may use more complex ethnographic or longitudinal designs.
  6. Choose your data collection method(s). Decide between interviews, questionnaires, observation, document analysis, secondary datasets, or a combination. Specify whether interviews are structured, semi-structured, or unstructured; whether questionnaires use Likert scales, open-ended questions, or both.
  7. Design your sampling strategy. Describe who or what you are sampling, how you will recruit participants, and how many. Justify your sample size with reference to methodological literature — not just convenience. Purposive sampling is common in qualitative research; probability sampling is the gold standard in quantitative work.
  8. Pilot your instruments. Always mention whether you piloted your survey or interview guide, and what changes you made as a result. This demonstrates methodological rigour even at undergraduate level.
  9. Describe your data collection procedure. Write this in the past tense (after you have collected data) or the future tense (in a proposal). Include the timeline, the setting, and any procedural steps — for example, how you recorded and transcribed interviews.
  10. Describe your analysis framework. Thematic analysis? Regression? Content analysis? Discourse analysis? Name the approach, cite the key methodologist who developed it (e.g., Braun and Clarke for thematic analysis, Creswell for mixed methods), and explain how you applied it.
  11. Address validity, reliability, and trustworthiness. In quantitative research, discuss internal and external validity, reliability (Cronbach’s alpha, test-retest), and construct validity. In qualitative research, use Lincoln and Guba’s criteria: credibility, transferability, dependability, and confirmability.
  12. Discuss ethical considerations. Include how you obtained ethical approval (from your institution’s ethics committee), how you obtained informed consent, how you ensured confidentiality and anonymity, and how you stored data securely. Also address any specific ethical risks relevant to your sample — for example, if you interviewed vulnerable populations.
  13. Acknowledge limitations. No methodology is perfect. Briefly note the key weaknesses of your design — small sample size, self-selection bias, cross-sectional rather than longitudinal data — and explain why you proceeded despite these constraints.
Pro tip from experienced supervisors: Write your methodology in the past tense even if you write it before you collect data. “Data were collected via semi-structured interviews” reads as more confident and academically conventional than “data will be collected.” You can always revisit minor details after fieldwork.

Methodology Example: Qualitative Study

The following is a realistic, annotated example of a methodology chapter extract for a master’s dissertation at a UK university. The study investigates first-generation students’ experiences of academic transition. Read it in full — notice how every claim connects back to the research question and is supported by a citation.

3. Methodology

3.1 Research Philosophy

This study adopts an interpretivist philosophical position, premised on the ontological assumption that social reality is not fixed but is actively constructed through the lived experiences and interpretations of individuals (Bryman, 2016). Because the research question asks how first-generation students make sense of their transition to university — rather than measuring the frequency of specific behaviours — an interpretivist approach is the most appropriate epistemological stance. This aligns with what Creswell and Poth (2018) describe as a constructivist worldview, in which “individuals seek understanding of the world in which they live and work” (p. 24). A positivist approach was rejected because it would impose predetermined categories onto participants’ experiences, potentially obscuring the nuance this study seeks to understand.

3.2 Research Approach

An inductive research approach was adopted. Rather than beginning with a theory to test, this study built conceptual understanding from the data itself — consistent with the interpretivist tradition (Saunders et al., 2019). This approach is suited to an under-researched topic where existing frameworks (such as Tinto’s (1987) integration model) have been criticised for failing to account for the cultural capital deficits unique to first-generation learners (Jury et al., 2017).

3.3 Research Design

A qualitative case study design was employed, focusing on a single Russell Group university in England — referred to as “Northern University” to preserve institutional anonymity. Case study research, as defined by Yin (2018), is appropriate when the researcher seeks to understand a contemporary phenomenon within its real-world context, particularly when the boundaries between phenomenon and context are not clearly defined. The single-site design was chosen to allow for detailed, contextually rich data rather than surface-level comparison across multiple institutions, a trade-off acknowledged in Section 3.7 below.

3.4 Data Collection

Data were collected through twelve semi-structured, one-to-one interviews conducted between October and December 2025. Semi-structured interviews were selected because they allow for both consistency across participants (through a shared topic guide) and flexibility for the researcher to probe unexpected but relevant themes as they emerge (King and Horrocks, 2010). The interview topic guide (Appendix A) covered four thematic areas: pre-arrival expectations, academic challenges encountered in the first semester, experiences of support services, and perceived identity shifts. Each interview lasted between 45 and 75 minutes and was conducted via Microsoft Teams to accommodate participants’ schedules. Interviews were audio-recorded with participants’ consent and transcribed verbatim using Otter.ai, with manual correction to ensure accuracy.

3.5 Sampling Strategy

Purposive sampling was used to recruit twelve first-generation undergraduate students — defined as students whose parents or guardians did not attend higher education — enrolled in their first year at Northern University. Purposive sampling is appropriate in qualitative research when the researcher aims to select participants who can provide rich, relevant information about the phenomenon under study (Patton, 2015). Recruitment was carried out via the university’s Widening Participation Office, which distributed a participant information sheet to eligible students. A sample size of twelve was determined on the basis of the principle of information power (Malterud et al., 2016): given the specific focus of the study, the homogeneity of the sample, and the use of a detailed topic guide, twelve in-depth interviews were considered sufficient to generate conceptually rich data. Saturation — the point at which no new themes were emerging — was reached after the ninth interview, confirming the adequacy of the sample.

3.6 Data Analysis

Thematic analysis, following the six-phase framework of Braun and Clarke (2006), was used to analyse the interview data. This approach was chosen for its flexibility and accessibility: unlike grounded theory or interpretative phenomenological analysis, thematic analysis does not presuppose a particular theoretical framework and can be used within an interpretivist paradigm to identify patterns of meaning across the dataset (Braun and Clarke, 2019). Analysis proceeded as follows: (1) familiarisation with the data through repeated reading and note-taking; (2) systematic generation of initial codes across the entire dataset; (3) searching for themes by clustering related codes; (4) reviewing and refining themes against the coded extracts and the full dataset; (5) defining and naming final themes; and (6) producing the written analysis. NVivo 14 was used to manage the coding process and maintain an audit trail.

3.7 Ethical Considerations and Limitations

Ethical approval was granted by the University Faculty Research Ethics Committee (reference: FREC-2025-1142) prior to data collection. Informed consent was obtained from all participants in writing before each interview. Participants were reminded of their right to withdraw at any stage without consequence, and no incentive was offered for participation to avoid undue influence. To protect anonymity, all participants were assigned pseudonyms and any identifying details — including module names, specific staff members, and home regions — were removed from transcripts prior to analysis. Data were stored on an encrypted university server compliant with the UK Data Protection Act 2018.

The principal limitation of this study is its lack of generalisability: findings reflect the experiences of twelve students at a single institution and cannot be extrapolated to all first-generation university students in the United Kingdom. This is an inherent trade-off of the case study design chosen to achieve depth of understanding. A further limitation is the potential for social desirability bias, as participants may have moderated their responses in an interview setting. Reflexive member-checking — sharing draft themes with three participants for feedback — was employed as one strategy to mitigate this risk.

Methodology Example: Quantitative Study

The following example is drawn from a business management dissertation examining the relationship between employee autonomy and job satisfaction across remote and in-office workers. It demonstrates how to write a rigorous quantitative methodology chapter for a master’s-level submission.

3. Research Methodology

3.1 Research Philosophy

A positivist research philosophy underpins this study. Positivism holds that social phenomena can be investigated using objective, value-free methods analogous to those used in natural science (Bryman, 2016). This epistemological position is appropriate because the study seeks to quantify a relationship between two measurable constructs — employee autonomy and job satisfaction — and to assess whether that relationship is moderated by work location. The assumption is that both constructs have a reality that can be captured through validated self-report scales, independent of the researcher’s perspective.

3.2 Research Approach and Design

A deductive approach was taken, testing the hypothesis derived from Self-Determination Theory (Deci and Ryan, 1985) that higher perceived autonomy predicts higher job satisfaction. A cross-sectional survey design was used, selected for its efficiency in collecting standardised data from a large sample within the dissertation timeframe (Bryman, 2016). Whilst a longitudinal design would have permitted causal inference, the practical constraints of a twelve-month master’s programme made it unfeasible. The cross-sectional design is adequate for detecting associations and patterns consistent with the theoretical framework, though causal claims are appropriately qualified in the discussion chapter.

3.3 Data Collection Instrument

Data were collected using a self-completion online questionnaire, developed in Qualtrics and distributed between January and February 2026. The questionnaire comprised three sections. Employee autonomy was measured using the Work Design Questionnaire — Autonomy subscale (WDQ; Morgeson and Humphrey, 2006), a 9-item scale with well-established validity (Cronbach’s alpha = 0.91 in the original validation study). Job satisfaction was measured using the Minnesota Satisfaction Questionnaire — Short Form (MSQ-SF; Weiss et al., 1967), a 20-item scale rated on a 5-point Likert scale from “Very Dissatisfied” to “Very Satisfied” (internal consistency reliability typically 0.80–0.90). Work location was captured as a three-category variable: fully remote, hybrid, and fully office-based. A pilot test was conducted with eight master’s students in the Business School who reviewed the questionnaire for clarity and logical flow; two items in the demographic section were reworded as a result.
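An aside on the reliability figures quoted in this extract: Cronbach’s alpha is not a mysterious statistic but a simple function of item variances, alpha = k/(k−1) × (1 − Σ item variances / variance of the total score). If you want to check the alpha of your own pilot data, a minimal sketch in Python (the response matrix below is invented illustration data, not scores from the WDQ or MSQ):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of each respondent's total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical pilot data: 6 respondents answering 4 Likert items scored 1-5
scores = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 5, 4],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
], dtype=float)
print(f"alpha = {cronbach_alpha(scores):.2f}")
```

Values of 0.70 and above are conventionally treated as acceptable internal consistency; the 0.91 quoted for the WDQ subscale is comfortably strong.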

3.4 Sampling and Recruitment

A convenience sample of 247 working adults was recruited via LinkedIn and through the researcher’s professional network. Participants were eligible if they were employed in the UK for at least 20 hours per week and had been in their current role for a minimum of three months — a threshold established to ensure sufficient familiarity with their working conditions. Though probability sampling would have been preferable for generalisability, access constraints common to student research made it impractical (Saunders et al., 2019). A post-hoc power analysis using G*Power (Faul et al., 2007) confirmed that a sample of 247 provided more than 95% power to detect a medium effect size (f² = 0.15) in a multiple regression with three predictors at α = .05. Participants were 56% female, 41% male, and 3% non-binary, with a mean age of 31.4 years (SD = 7.2). Fully remote workers comprised 34% of the sample, hybrid workers 48%, and in-office workers 18%.
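If you do not have G*Power to hand, the same calculation can be reproduced with the standard noncentral-F formulation it uses for the overall F test of a multiple regression: the noncentrality parameter is λ = f² × N. A sketch using SciPy (this covers the overall-model test; the moderated-regression test of an R² increase uses different degrees of freedom):

```python
from scipy.stats import f as f_dist, ncf

def regression_power(f2: float, n: int, n_predictors: int,
                     alpha: float = 0.05) -> float:
    """Power of the overall F test in multiple linear regression.

    Fixed-effects formulation: noncentrality lambda = f^2 * N,
    df1 = number of predictors, df2 = N - predictors - 1.
    """
    df1 = n_predictors
    df2 = n - n_predictors - 1
    lam = f2 * n
    f_crit = f_dist.ppf(1 - alpha, df1, df2)      # critical F at the chosen alpha
    return 1 - ncf.cdf(f_crit, df1, df2, lam)     # P(reject H0 | effect f^2)

# Medium effect (f^2 = 0.15), 247 participants, 3 predictors
print(f"power = {regression_power(0.15, 247, 3):.3f}")
```

Running this shows the design clears the conventional 0.95 threshold with room to spare, which is exactly the kind of concrete justification examiners want to see behind a sample-size claim.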

3.5 Data Analysis

Data were analysed using IBM SPSS Statistics (Version 29). Descriptive statistics and Pearson correlation coefficients were first computed to examine relationships between key variables. Multiple linear regression was then used to assess whether autonomy predicted job satisfaction after controlling for age, gender, and tenure. A moderated multiple regression analysis, following the process outlined by Hayes (2022), was conducted to test whether work location moderated the autonomy–satisfaction relationship. Prior to regression analysis, assumptions were tested: normality was assessed using the Shapiro–Wilk test; homoscedasticity was examined using Levene’s test and residual plots; multicollinearity was assessed using Variance Inflation Factors (VIF), all of which fell below the conventional threshold of 5.0.
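The VIF check mentioned in this extract is easy to reproduce outside SPSS: each predictor’s VIF is 1/(1 − R²) from regressing that predictor on the remaining predictors, and values near 1 indicate no collinearity. A self-contained sketch with synthetic data (the variable names are illustrative, not the study’s actual dataset):

```python
import numpy as np

def vifs(X: np.ndarray) -> np.ndarray:
    """Variance Inflation Factor for each column of predictor matrix X.

    VIF_j = 1 / (1 - R^2_j), where R^2_j comes from regressing column j
    on the remaining columns plus an intercept.
    """
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        y = X[:, j]
        A = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        ss_res = ((y - A @ beta) ** 2).sum()
        ss_tot = ((y - y.mean()) ** 2).sum()
        out[j] = ss_tot / ss_res          # algebraically equal to 1 / (1 - R^2)
    return out

rng = np.random.default_rng(42)
autonomy = rng.normal(size=200)
tenure = rng.normal(size=200)
age = tenure * 0.9 + rng.normal(scale=0.2, size=200)  # deliberately collinear
X = np.column_stack([autonomy, tenure, age])
print(np.round(vifs(X), 2))
```

In this synthetic example the tenure and age columns were constructed to be nearly collinear, so their VIFs blow up well past the chapter’s threshold of 5.0 while the independent autonomy column stays near 1 — the pattern that would tell you to drop or combine predictors.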

3.6 Ethics and Limitations

Ethical approval was granted by the University Business School Ethics Panel (reference: BSE-2025-0891). Participation was entirely voluntary; informed consent was obtained via a digital consent form preceding the questionnaire, and participants were informed they could withdraw at any time by closing the survey. No personally identifiable data were collected, and IP addresses were not recorded. Data were stored in a password-protected Qualtrics account accessible only to the researcher and supervisor.

The primary limitation is the use of convenience sampling, which restricts the generalisability of findings to the wider UK working population. Self-report measures introduce the risk of common method variance — the inflation of observed correlations because both the predictor and outcome are measured using the same instrument at the same time point. This limitation is mitigated, though not eliminated, by the use of validated scales with strong psychometric properties. A further limitation is the cross-sectional design, which prevents causal conclusions; the association found between autonomy and satisfaction is correlational, and reverse causation cannot be ruled out.

Methodology Example: Mixed Methods

The following extract is from an education research dissertation examining the effectiveness of a peer mentoring programme on STEM retention rates among underrepresented students at a UK university.

3. Research Methodology

3.1 Research Philosophy and Design

A pragmatist philosophical approach was adopted, acknowledging that neither purely interpretivist nor purely positivist lenses are sufficient to answer the research question: “To what extent, and in what ways, does participation in the peer mentoring programme affect STEM retention rates and student sense of belonging?” Pragmatism holds that methodological choices should be guided by what best addresses the research problem, permitting the integration of qualitative and quantitative methods within a single study (Creswell and Creswell, 2018). A sequential explanatory mixed methods design was employed: quantitative retention data were collected and analysed first, and qualitative interviews were subsequently conducted to explain and contextualise the statistical findings.

3.2 Quantitative Phase

Retention data for two cohorts (2023–24 and 2024–25) were obtained from the university’s Student Records Office. The dataset included first-to-second-year retention rates for 412 underrepresented STEM students, of whom 198 had participated in the peer mentoring programme and 214 had not. A binary logistic regression analysis was conducted in R (version 4.3.2) to assess the odds of retention associated with programme participation, controlling for prior academic attainment (A-level points), gender, and disability status.
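To make sense of “the odds of retention” in the extract above: logistic regression models log-odds, and exponentiating a coefficient yields an odds ratio. The unadjusted version can be computed directly from retention counts. A sketch (the counts below are hypothetical; the regression in the example additionally adjusts for attainment, gender, and disability status, which a raw 2×2 comparison cannot):

```python
import math

def odds_ratio(retained_a: int, total_a: int,
               retained_b: int, total_b: int) -> float:
    """Unadjusted odds ratio of retention: group A relative to group B."""
    odds_a = retained_a / (total_a - retained_a)
    odds_b = retained_b / (total_b - retained_b)
    return odds_a / odds_b

# Hypothetical counts: 170 of 198 mentees retained vs 160 of 214 non-participants
or_unadjusted = odds_ratio(170, 198, 160, 214)
print(f"odds ratio = {or_unadjusted:.2f}")           # > 1 favours the mentoring group
print(f"log-odds   = {math.log(or_unadjusted):.2f}") # the scale the regression works on
```

An odds ratio above 1 means mentees had higher odds of continuing; the logistic regression reported in the example estimates the same quantity after holding the control variables constant.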

3.3 Qualitative Phase

Following analysis of the quantitative data, fifteen semi-structured interviews were conducted with student participants (n = 9 mentees, n = 6 mentors) purposively selected to represent variation in retention outcomes and demographic background. Interviews explored participants’ experiences of the programme, their perceptions of its impact on their sense of belonging and academic confidence, and the mechanisms through which they believed it had (or had not) affected their decision to continue. Interviews were analysed using reflexive thematic analysis (Braun and Clarke, 2021), with themes mapped to the quantitative findings to produce an integrated interpretation. Integration occurred at the interpretation stage — what Fetters et al. (2013) call “weaving” — wherein quantitative findings framed which themes demanded the most analytical attention.

3.4 Ethical Considerations

Ethical approval was granted by the Faculty of Education Ethics Committee (FEEC-2025-214). Student Records data were provided in fully anonymised form under a formal data sharing agreement with the university. Interview participants provided written informed consent, and care was taken to ensure that students who had not been retained were not contacted in ways that could reactivate distress associated with their withdrawal.

How to Justify Your Methodological Choices

The weakest methodology chapters simply describe what the student did. The strongest ones justify every decision by connecting it to the research question, situating it within methodological literature, and acknowledging what was sacrificed. Here is a practical framework for building justification into every paragraph.

The “Because… Which Means… Even Though…” Formula

For each methodological choice, practise completing this three-part sentence:

  • “I chose X [method/design/sample]…”
  • “…because it is appropriate for studying Y [your research question/phenomenon]…”
  • “…which means it will allow me to Z [specific thing you can do with this method]…”
  • “…even though it cannot W [a genuine limitation you are acknowledging].”

For example: “Semi-structured interviews were used because the study seeks to understand participants’ subjective experiences of a novel workplace policy, which means the interview guide can maintain thematic focus while permitting follow-up on unexpected responses, even though this approach yields data that are not generalisable to the wider population of employees in similar organisations.”

Cite the Methodologists, Not Just the Researchers

A common mistake is to cite only substantive researchers (e.g., Tinto on student integration, Maslow on motivation) when making methodological claims. Your methodology chapter should cite methodologists — authors of research methods textbooks and articles such as:

  • Bryman, A. — Social Research Methods (5th ed., 2016)
  • Saunders, M., Lewis, P., & Thornhill, A. — Research Methods for Business Students (8th ed., 2019)
  • Creswell, J. W. & Poth, C. N. — Qualitative Inquiry and Research Design (4th ed., 2018)
  • Braun, V. & Clarke, V. — Thematic Analysis: A Practical Guide (2022)
  • Yin, R. K. — Case Study Research and Applications (6th ed., 2018)

For a structured overview of how to document and cite your sources correctly, see our guide to research methodology citations.

Common Methodology Mistakes Examiners Flag

These are the errors that appear again and again in examiner reports across UK and US universities. Recognise them early and you will save yourself weeks of revision.

1. Describing Methods Without Justifying Them

Saying “interviews were used to collect data” is a description, not a justification. Every method needs a “because” — grounded in both the nature of your research question and the methodological literature. Examiners at institutions including UCL and Edinburgh have noted this as the single most common weakness in methodology chapters.

2. Confusing Research Design with Research Method

Research design (case study, survey, experiment) and research method (interviews, questionnaires, observation) are not the same thing. A survey is a design; a questionnaire is the method of data collection within that design. Getting this distinction right signals methodological literacy.

3. Ignoring Philosophical Underpinning

Skipping the research philosophy section — or writing one sentence about it and moving on — is a missed opportunity, especially at master’s and PhD level. Examiners want to see that you understand why you approached the study the way you did, not just what you did.

4. Over-claiming Generalisability

A qualitative study of fifteen participants cannot generate findings that apply to “all employees in the sector” or “university students globally.” Overstating what your study can conclude is one of the quickest ways to lose marks in the methodology and discussion chapters.

5. Insufficient Ethical Detail

Listing your ethics committee approval number but saying nothing about how you handled consent, data storage, or participant risk is inadequate. Describe each ethical safeguard in a sentence or two. If your study involved sensitive topics, explain how you managed potential participant distress.

6. Writing the Methodology After the Findings

Some students write the methodology to match the analysis they have already done, rather than planning the methodology before collecting data. Examiners can often detect this — the chapter reads as post-hoc rationalisation rather than a planned research strategy. Draft your methodology chapter before you enter the field, even if you revise it afterwards.

7. Not Referencing Methodological Literature

Your methodology chapter should have almost as many citations as your literature review — but they should be citations of methods texts, not just substantive theory. If your reference list has no Bryman, Saunders, Creswell, or equivalent, your chapter will feel thin regardless of how well-written it is.

8. Forgetting to Address Reliability and Validity (or Trustworthiness)

These are not optional extras — they are the criteria by which your study will be judged. Quantitative studies must address internal and external validity, reliability, and construct validity. Qualitative studies should address credibility, transferability, dependability, and confirmability using Lincoln and Guba’s (1985) widely accepted framework.

Want to write your methodology chapter faster? Thesify is built specifically for dissertation students — helping you structure your methodology, generate your topic guide, and organise your citations in one workspace. Try it free.

Frequently Asked Questions

How long should a dissertation methodology chapter be?

For an undergraduate dissertation, the methodology chapter is typically 1,000–2,000 words. For a master’s dissertation, 1,500–4,000 words is standard. PhD methodology chapters can extend to 6,000–10,000 words, particularly in social sciences where philosophical underpinning receives extended treatment. Always follow your institution’s specific guidelines — many universities publish a word count breakdown by chapter in their dissertation handbooks.

What tense should I use in the methodology chapter?

The methodology chapter is conventionally written in the past tense when describing what you did: “Interviews were conducted…” and “Data were analysed using…”. References to published methods literature remain in the present tense: “Braun and Clarke (2006) describe thematic analysis as…”. If you are writing your methodology as part of a proposal (before data collection), use the future tense: “Data will be collected…”. You can revise to past tense after fieldwork.

Do I need to include a research philosophy section if I am an undergraduate?

It depends on your discipline and institution. Business, social science, and education dissertations at undergraduate level almost always require at least a brief discussion of research philosophy (one to two paragraphs). Science and engineering dissertations may not use this framing at all, focusing instead on experimental design and measurement validity. Check your dissertation handbook and look at past successful submissions in your department, or ask your supervisor directly.

How do I justify my sample size in qualitative research?

The concept of data saturation (Guest et al., 2006) is the most widely accepted justification for sample size in qualitative research — you continue collecting data until no new themes emerge. For a master’s dissertation, 10–20 semi-structured interviews or 6–12 focus groups is typical. You can also cite the concept of “information power” (Malterud et al., 2016), which argues that sample size adequacy depends on the aim, specificity, and quality of data rather than a fixed number. Do not simply say your sample was “appropriate” — cite a source and explain why the number fits your study.

What is the difference between methodology and methods?

Methods are the specific techniques used to collect or analyse data — for example, a questionnaire, an interview, or a statistical test. Methodology is the broader study of why particular methods are appropriate: it includes your philosophical position, your research design rationale, and your justification for every major decision. A short lab report might have a “methods section.” A dissertation has a “methodology chapter” that situates those methods within a research logic and a scholarly tradition.

Should the methodology chapter include a literature review of methods?

Yes — but it is a methods literature review, not a substantive one. You should cite and engage with key methodological texts to justify your choices: for example, citing Yin (2018) when explaining why you chose a case study design, or citing Braun and Clarke (2006) when explaining thematic analysis. This shows your examiner that your choices are informed by scholarly debate, not just personal preference. However, the methodology chapter is not the place for extended literature synthesis on your substantive topic — that belongs in Chapter 2.

How do I handle ethics in the methodology chapter?

Dedicate a subsection to ethics, typically at the end of the chapter. Cover four elements: (1) institutional ethics approval — include your committee name and reference number; (2) informed consent — how you obtained it and what participants were told; (3) confidentiality and anonymity — how you protected participants’ identities and stored data; (4) any specific ethical risks relevant to your study and how you mitigated them. For a dissertation involving human participants, the British Psychological Society’s Code of Ethics and Conduct (2021) or the APA’s Ethics Code are commonly cited frameworks.

Can I use secondary data in my dissertation methodology chapter?

Absolutely. Secondary data analysis — using datasets that have already been collected by government bodies, research organisations, or previous studies — is a rigorous and increasingly common methodology, especially in economics, public health, and social policy. Your methodology chapter should explain: what dataset you used, who collected it and when, why it is appropriate for your research question, and what its known limitations are (e.g., missing variables, sampling frame constraints). Secondary data can also be combined with primary data in a mixed methods design.


Writing a strong dissertation methodology chapter is not about following a rigid template — it is about demonstrating that you made thoughtful, evidence-based decisions at every stage of your research design. Use the structures and examples in this guide as your foundation, then adapt them to your specific question, discipline, and institution’s expectations. For further reading on how to present your research context before the methodology, see our guide to the literature review methodology process. You can also explore how Spanish-speaking students approach the equivalent section in our sister resource on metodología TFG, and French-speaking students will find relevant guidance in our coverage of méthodologie mémoire.

Ready to write your methodology chapter? Start your dissertation with Thesify — purpose-built for students who want structured, citation-ready help from day one.
