Monday, 24 March 2025

A study note on academically honest usage of AI in doing a dissertation project

(Information from perplexity.ai)


My question 1: "For university students doing dissertation projects of the academic oriented type, what AI usages by the students for supporting the following tasks are considered academically dishonest (thus unacceptable) behaviors: (1) formulating research objectives, (2) literature review, (3) formulating research methodology, (4) data analysis, (5) making report conclusions and recommendations, and (6) reflecting on dissertation project experience? Could you offer your evaluation with reference to the fields of (1) business management, (2) housing studies and (3) accounting and finance?"

The reply: "The use of AI in academic projects can be both beneficial and problematic, depending on how it is utilized. Here's an evaluation of AI usage in various tasks for dissertation projects across different fields, highlighting what might be considered academically dishonest:

Tasks and AI Usage

  1. Formulating Research Objectives:

    • Academically Dishonest: Using AI to generate research objectives without understanding or contributing to their development. This undermines the learning process and the student's ability to critically formulate objectives.

    • Acceptable: Using AI for suggestions or ideas, but ensuring that the student understands and integrates these into their own work.

  2. Literature Review:

    • Academically Dishonest: Relying solely on AI-generated summaries or analyses without reading and critically evaluating the literature.

    • Acceptable: Using AI to help organize or summarize literature, but ensuring that the student reads and interprets the sources themselves.

  3. Formulating Research Methodology:

    • Academically Dishonest: Letting AI design the methodology without the student's understanding or input. This could lead to inappropriate or unfeasible methods.

    • Acceptable: Using AI to explore potential methodologies, but ensuring the student understands and justifies their choice.

  4. Data Analysis:

    • Academically Dishonest: Using AI to perform complex analyses without understanding the underlying statistical methods or interpreting the results.

    • Acceptable: Using AI tools for data processing or visualization, but ensuring that the student interprets and explains the findings.

  5. Making Report Conclusions and Recommendations:

    • Academically Dishonest: Relying solely on AI to generate conclusions and recommendations without critically evaluating the data and research findings.

    • Acceptable: Using AI for suggestions, but ensuring that the student critically evaluates and supports their conclusions with evidence.

  6. Reflecting on Dissertation Project Experience:

    • Academically Dishonest: Using AI to generate reflective statements without genuine personal reflection or insight.

    • Acceptable: Using AI for grammar or coherence checks, but ensuring that the reflections are based on the student's actual experiences and thoughts.

Field-Specific Considerations

Business Management

  • Ethical Impacts: The use of AI in business management research can raise ethical concerns, such as data privacy and algorithmic bias, which must be addressed in the dissertation.

  • Instrumental Impacts: AI can improve research efficiency, but over-reliance on it may erode critical thinking skills.

Housing Studies

  • While specific studies on AI in housing studies are less documented, the general principles of academic integrity apply. AI should be used to support research, not replace critical thinking or original contributions.

Accounting and Finance

  • Bias and Security: AI can introduce biases in financial data analysis and pose security risks, especially with sensitive financial information.

  • Integrity: AI-generated work must be transparent and acknowledged to maintain academic integrity in financial research.

In all fields, the key is to ensure that AI tools are used to enhance learning and research, not to circumvent the academic process. Transparency about AI usage and ensuring that students understand and contribute to their work is crucial to maintaining academic integrity."


My question 2: "How should the usage of AI on these dissertation project activities be recorded in a reflective journal log and reported in a dissertation report to promote academically honest and transparent research behavior?"

The reply: "To promote academically honest and transparent research behavior, recording AI usage in a reflective journal log and reporting it in a dissertation report should follow these guidelines:

Recording AI Usage in a Reflective Journal Log

  1. Detail AI Tools Used: Specify the AI tools, models, and versions used for each task, such as ChatGPT for brainstorming or Grammarly for text revision.

  2. Describe Interaction with AI: Document the prompts used and the results obtained from AI interactions. This can include screenshots or logs of conversations with AI tools.

  3. Reflect on AI Impact: Reflect on how AI usage influenced your research process, including any challenges or insights gained.

  4. Regular Entries: Maintain regular entries in the journal to track the evolution of your research and AI usage over time.
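The four steps above can be captured in a structured, append-only log rather than ad hoc notes. The sketch below is purely illustrative: the file name `ai_journal.jsonl`, the field names, and the helper `log_ai_usage` are my own assumptions, not part of any institutional requirement.

```python
# Illustrative sketch (assumed file name and fields): append one
# AI-usage record per interaction to a JSON Lines journal log.
import json
from datetime import date

def log_ai_usage(path, tool, version, task, prompt, outcome, reflection):
    """Append a structured AI-usage record to a reflective journal log."""
    entry = {
        "date": date.today().isoformat(),
        "tool": tool,              # e.g. "ChatGPT"
        "version": version,        # e.g. "4.0"
        "task": task,              # dissertation activity supported
        "prompt": prompt,          # what was asked of the AI
        "outcome": outcome,        # how the output was used (or discarded)
        "reflection": reflection,  # impact on the research process
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_ai_usage(
    "ai_journal.jsonl",
    tool="ChatGPT", version="4.0",
    task="Formulating research objectives",
    prompt="Suggest research questions on housing affordability",
    outcome="Kept one suggestion; rewrote it in my own wording",
    reflection="Helped narrow scope; I verified feasibility myself",
)
```

Because each line is a self-contained JSON record, the log can later be filtered by task or tool when compiling the dissertation's appendix of AI interactions.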

Reporting AI Usage in a Dissertation Report

  1. Transparency in Methodology: Include a detailed description of AI usage in the methodology section, explaining how AI tools were used and for what purposes.

  2. Acknowledgments Section: Use an acknowledgments section to disclose AI tool usage, providing details such as the model, version, and date of use.

  3. Documentation Table: Consider using a documentation table to summarize AI usage across the different stages of the research process.

  4. Prompt Logs and Reflections: Include prompt logs and reflections in an appendix to provide additional transparency and insight into the research process.

  5. Citation and Referencing: Properly cite and reference AI-generated content using an established referencing style, such as APA or Harvard.
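The documentation table in point 3 can be generated directly from the records in the journal log. A minimal sketch follows; the stage names, tools, and column headings are invented examples, not a prescribed template.

```python
# Illustrative sketch: render a simple AI-usage documentation table
# in Markdown. Rows and headings are assumed examples only.
rows = [
    ("Research objectives", "ChatGPT 4.0", "Brainstormed candidate questions"),
    ("Literature review",   "Perplexity",  "Located sources; all read in full"),
    ("Writing",             "Grammarly",   "Grammar and coherence checks"),
]

header = ("Dissertation stage", "AI tool", "How it was used")
widths = [max(len(r[i]) for r in rows + [header]) for i in range(3)]

def fmt(row):
    """Pad each cell to its column width and join with pipes."""
    return "| " + " | ".join(c.ljust(w) for c, w in zip(row, widths)) + " |"

table = [fmt(header), fmt(tuple("-" * w for w in widths))] + [fmt(r) for r in rows]
print("\n".join(table))
```

One row per dissertation stage keeps the table short enough to sit in the methodology chapter, with the full prompt logs relegated to an appendix.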

Example of Reporting AI Usage

Example in Methodology Section:

"When formulating research objectives, I used ChatGPT (version 4.0) to explore potential research questions. The interaction with ChatGPT helped refine my focus on [specific area of study]."

Example in Acknowledgments Section:

"This dissertation utilized AI tools, including ChatGPT (version 4.0) for brainstorming and Grammarly for text revision. The author remains responsible for all content."

By following these guidelines, you can ensure that your use of AI is transparent, documented, and academically honest."



My question 3: "What are the research gaps and research issues on this topic of "academically honest AI usage for doing dissertation projects"?"

The reply: "The integration of AI into dissertation projects raises unresolved ethical and practical challenges, with several research gaps and issues emerging across academic disciplines. Below is an analysis based on current literature and institutional guidelines:

Key Research Gaps

  1. Field-Specific Ethical Frameworks

    • Limited guidance exists for domain-specific AI usage in fields like housing studies or accounting, where AI-generated conclusions could misrepresent financial data or housing policies.

    • No standardized protocols for evaluating AI's role in methodological design (e.g., AI-assisted econometric modeling in finance).

  2. Detection of AI Misuse

    • Current plagiarism-detection tools struggle to identify AI-generated text masquerading as original work, particularly in subjective tasks like reflective journaling.

    • Lack of empirical studies on AI-driven data fabrication in quantitative fields (e.g., manipulated datasets in accounting research).

  3. Long-Term Academic Skill Erosion

    • Insufficient evidence on how AI reliance impacts critical thinking in research design or domain expertise development (e.g., interpreting housing policy nuances).

    • No longitudinal studies on whether AI-assisted dissertations affect post-graduate research capabilities.

  4. Institutional Policy Fragmentation

    • Disparities in university guidelines: some institutions ban AI outright, while others permit limited use without clear transparency requirements.

    • No consensus on AI authorship attribution (e.g., citing AI-generated literature summaries in business management).

Critical Research Issues

Ethical Accountability

  • Transparency vs. Privacy: Balancing detailed AI-usage logs with student data privacy protections.

  • Bias Amplification: AI tools may reinforce existing biases in datasets (e.g., financial forecasting models trained on non-representative data).

Technical Limitations

  • Many AI tools lack explainability for complex outputs (e.g., statistical conclusions in accounting research).

  • Inadequate tools for verifying AI-assisted qualitative analysis (e.g., thematic coding in housing studies).

Cultural and Disciplinary Nuances

  • Varying ethical norms: AI use deemed acceptable in business management (e.g., market trend analysis) may conflict with housing studies’ emphasis on human-centered research.

  • No cross-disciplinary frameworks for evaluating AI’s role in knowledge creation (e.g., generative AI proposing novel hypotheses).

Areas Requiring Further Study

  1. Standardized Documentation Practices

    • Developing universal templates for reporting AI use in methodologies (e.g., detailing prompt engineering for literature reviews).

  2. AI Literacy Integration

    • Pedagogical strategies to teach ethical AI interrogation (e.g., validating AI-generated references in finance dissertations).

  3. Governance Mechanisms

    • Proposals for algorithmic audit trails to track AI contributions in research workflows.

    • Ethical frameworks for AI-human collaboration in sensitive tasks (e.g., housing equity analysis).

Emerging Challenges

  • Generative AI Hallucinations: Risks of fabricated sources in literature reviews, especially in niche fields like historical housing policies.

  • Dependency Loops: Students using AI to refine AI-generated content without independent verification.

These gaps highlight the need for interdisciplinary collaboration, robust policy development, and AI tools designed explicitly for academic rigor. Future research should prioritize field-specific case studies and institutional partnerships to address these challenges."



