AI Policy | Ethical, Transparent, Human-Led

1. Purpose of This Policy

This AI Policy sets out how AI tools are used in my work. It ensures transparency, safeguards people, and protects the integrity of the services I deliver. My approach to AI is guided by human-centred values, responsibility, and inclusion.

AI supports my work, but it never replaces critical thinking, safeguarding, professional judgement, or lived experience.


2. Human Oversight First — Always

  • All AI outputs are reviewed, edited, or rewritten by me.
  • AI never operates independently or makes decisions without human oversight.
  • I remain fully accountable for any work produced using AI assistance.

People make decisions.
AI supports decisions.


3. AI Is Used Only as a Supportive Tool

AI is used to:

  • brainstorm ideas
  • improve clarity in written communication
  • generate drafts for refinement
  • support the development of learning models
  • accelerate research synthesis (not replace it)
  • create early design concepts
  • support accessibility and comprehension

AI is not used to:

  • make final judgements
  • generate personal, financial, medical, or legal advice
  • process sensitive personal data
  • replace human evaluation or expertise
  • profile individuals or communities

4. No Personal Data Into AI Tools

To protect the privacy and dignity of individuals and organisations:

  • I do not input names, identifiable data, confidential information, or sensitive material into AI systems.
  • Any materials used with AI are anonymised or synthetic (a simple illustration of this kind of redaction follows this list).
  • I comply with UK GDPR, ICO guidance, and ethical data principles.
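As a purely illustrative sketch (assuming a Python workflow, which this policy does not prescribe), the snippet below shows the kind of lightweight redaction pass I mean: identifiers are stripped or replaced with placeholders before any text reaches an AI tool. The names and patterns are hypothetical examples, and pattern-matching alone is never treated as sufficient anonymisation for sensitive material.

    import re

    # Illustrative only: strip common identifiers (emails, UK-style phone
    # numbers) and replace known names with placeholders before any text
    # is passed to an AI tool. The names and patterns below are examples,
    # not a complete or compliant anonymisation method.
    KNOWN_NAMES = ["Jane Doe", "Acme Community Centre"]  # hypothetical

    def anonymise(text: str) -> str:
        text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
        text = re.sub(r"(?:\+44|\b0)(?:\s?\d){9,10}\b", "[PHONE]", text)
        for name in KNOWN_NAMES:
            text = text.replace(name, "[REDACTED]")
        return text

    sample = "Contact Jane Doe at jane@example.org or 020 7946 0958."
    print(anonymise(sample))  # Contact [REDACTED] at [EMAIL] or [PHONE].

In practice this step sits alongside, not in place of, the commitments above: synthetic or already-anonymised material is preferred, and anything sensitive is simply never entered into an AI system.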

5. Transparency in AI Use

Where AI contributes to writing, ideation, research summaries, visual prototypes, or knowledge generation:

  • I disclose that AI was used as part of the process.
  • Final content is always reviewed, shaped, and approved by me.

Transparency builds trust — especially in digital inclusion and community work.


6. AI Does Not Replace Lived Experience or Community Knowledge

My work is grounded in:

  • lived reality
  • culture
  • context
  • trust
  • human behaviour
  • empathy

AI supports this process by helping organise or speed up thinking, but the core insight always comes from humans.

Community partners such as Community Tech Aid play a central role in tailoring learning variants and informing design logic. AI never substitutes for their expertise or perspectives.


7. Ethical and Inclusive AI Practice

I commit to:

  • avoiding AI systems that reinforce bias
  • using tools that respect accessibility and inclusion
  • being sensitive to digital inequality and data poverty
  • ensuring AI does not disadvantage people with low digital confidence
  • promoting responsible AI literacy as part of my mission

AI must serve people — not the other way around.


8. AI for Good, Never for Exploitation

AI is used to:

  • improve communication
  • increase access
  • support learning
  • reduce digital fear
  • accelerate creativity
  • empower communities

AI is never used to manipulate, mislead, or exploit users.


9. No AI Automation in Client Decision-Making

Clients retain full control of decisions about:

  • research
  • strategy
  • funding priorities
  • digital inclusion pathways
  • service design

AI remains a supportive tool, not a decision-maker.


10. Continuous Review

AI is evolving.
This policy will evolve with it.

I will:

  • review AI tools annually
  • monitor ethical, legal, and safety developments
  • update the policy in line with UK and international best practice

The goal is long-term trust, safety, and transparency.