GoldBerry for Universities

Make acceptable AI use easier to do well.

Students are increasingly being asked to evidence responsible AI use, and institutions need practical workflows that support this consistently. GoldBerry offers a university-ready approach: student-first, verified .ac.uk users, declared research support, and a clear boundary between review and substitution.

Student-led GoldBerry research interface

Pilot logic: start with verified .ac.uk students, prove transparent research support works, then expand to lecturer and support-team workflows.

The higher-education problem

Universities are moving toward clearer AI-use declaration expectations. Students are increasingly asked to explain what tool they used, what prompts they entered, what output they received, and how they adapted it.

The practical problem is not only whether students use AI. It is whether they can use it transparently, critically, and in a way staff can understand.

Student experience

Students need fewer, better, more defensible AI interactions rather than long trails of vague prompting they can barely explain afterwards.

Lecturer experience

Staff need a clearer distinction between acceptable research support and unacceptable substitution.

Institutional challenge

Universities need a more teachable and reviewable model than either blanket prohibition or vague permissiveness.

Why GoldBerry fits this gap

GoldBerry is not strongest as an essay generator. It is strongest as a research instrument: a way to interrogate a source, challenge a draft, widen a literature review, and make AI use easier to declare honestly.

GoldBerry reduces friction while increasing seriousness.

What GoldBerry is for
  • source review
  • literature review broadening
  • draft self-audit
  • perspective and evidence-gap detection
  • declaration-ready research support

What GoldBerry is not for
  • undeclared or hidden text generation
  • submitting AI text as student work
  • bypassing reading or judgement

Student-first pilot model

The first version should not try to solve everything for everyone. Start where the use case is clearest: undergraduates and taught students who must disclose AI use and need a more disciplined research workflow.

Recommended first pilot
  1. Access limited to verified .ac.uk student email accounts.
  2. Use cases restricted to source review, literature broadening, and draft self-audit.
  3. Built-in declaration summary generated after each interaction.
  4. Lecturer / librarian / academic-skills access added later as a second phase.
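The first and third pilot mechanics above can be sketched in code. This is a minimal illustration, not GoldBerry's implementation: every name here (`is_eligible_student_email`, `Interaction`, `declaration_summary`) is hypothetical, the domain check is deliberately simplified, and the declaration fields simply mirror what students are asked to disclose (tool, prompt, output, adaptation).

```python
# Hypothetical sketch of two pilot mechanics:
# 1) gating access to verified .ac.uk addresses, and
# 2) generating a declaration-ready summary after each interaction.
from dataclasses import dataclass
from datetime import datetime, timezone


def is_eligible_student_email(email: str) -> bool:
    """Accept only addresses under a .ac.uk domain (simplified check;
    a real pilot would also verify the address, e.g. by confirmation email)."""
    domain = email.rsplit("@", 1)[-1].lower()
    return domain.endswith(".ac.uk")


@dataclass
class Interaction:
    tool: str             # which AI tool was used
    prompt: str           # what the student entered
    output_summary: str   # what the tool returned, summarised
    adaptation: str       # how the student adapted or used the output


def declaration_summary(interactions: list[Interaction]) -> str:
    """Render a plain-text declaration covering tool, prompts, output,
    and adaptation, ready to attach to a submission."""
    lines = [f"AI-use declaration ({datetime.now(timezone.utc).date().isoformat()})"]
    for i, it in enumerate(interactions, 1):
        lines += [
            f"{i}. Tool: {it.tool}",
            f"   Prompt: {it.prompt}",
            f"   Output: {it.output_summary}",
            f"   Adaptation: {it.adaptation}",
        ]
    return "\n".join(lines)
```

The design point is that the declaration is a by-product of normal use rather than extra paperwork: each research interaction already carries the four disclosure fields, so the summary is just a rendering step.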

What to test
  • Can students use GoldBerry without drifting into substitution?
  • Does GoldBerry reduce disclosure confusion?
  • Does it produce better source interrogation than generic prompting?

What to measure
  • student clarity in declarations
  • quality of critical engagement with sources
  • staff confidence in interpreting AI-supported work

Institutional value

GoldBerry gives universities a practical middle path between blanket restriction and unclear, ad-hoc use.

For academic integrity

A more reviewable form of AI use that is easier to teach and easier to defend.

For teaching and learning

Students can use AI for critique and research support rather than only for generation.

For support teams

Libraries, study-skills teams, and lecturers get a clearer workflow they can actually demonstrate.

GoldBerry is useful because it makes acceptable AI use easier to do well.

That means less bureaucratic sprawl for students, more legibility for lecturers, and a more operationally workable AI policy environment for institutions.

Immediate next steps

  1. Use the local student portal to demo the workflow.
  2. Refine the declaration output and academic examples.
  3. Test the student-first pilot framing with real users or institutional partners.
  4. Add lecturer and support-team workflows only after the student experience is solid.