Using AI in your Civic Society



Artificial Intelligence is everywhere in the news and in business these days – we even used it to generate the image for this event. How can it help your civic society? Do you need to be an expert? (No!) Civic societies are often under-resourced, and at this event we showed, with a little knowledge and some worked examples, that AI can bolster your resources in several useful ways, without necessarily spending a penny.

Agenda

We looked at some novel pre-built applications and services that use AI behind the scenes to help you:

  • Produce notes and minutes of meetings;
  • Find sources of images to avoid potential copyright infringement;
  • Craft objections to planning applications.

We gave some background to put the front runners in the general-purpose ‘chatbot’ race in context. These are classed as ‘Generative AI’ and are based on Large Language Models (LLMs). We followed up with examples showing where AI chatbots can help and where they are heading.

The second part of the meeting was entirely interactive, with John taking a number of audience questions and then showing how one might ask various AI chatbots for the answers.

Meeting report 

This report was, of course, generated by AI (Claude), based on a transcript that was also generated by AI (Turboscribe).

Part One: An Overview of AI — Richard Farthing

Richard introduced himself as a chartered engineer with a background in major infrastructure projects, and as the trustee responsible for running the London Forum website. Drawing on a prepared set of slides, he offered an accessible survey of the AI landscape. A copy of the slides is here: AI Presentation (PDF)

He opened by explaining what AI chatbots actually are. The acronym GPT — Generative Pre-trained Transformer — describes systems trained on vast quantities of internet text that generate responses by calculating the statistically most probable next word. Professor Hannah Fry’s description of these systems as a “highly sophisticated intergalactic autocomplete” was offered as a useful shorthand. The first chatbot, ELIZA, was demonstrated live in its original 1966 form, underlining how the field has evolved from primitive pattern-matching to today’s powerful large language models (LLMs). Richard noted that the pace of change has been extraordinary: tools that were “relatively crude” just three years ago have advanced beyond recognition.
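For readers curious about the “most probable next word” idea, it can be illustrated with a toy sketch: a simple bigram counter over an invented sentence. (Real LLMs use neural networks trained on vast corpora, not word counts; this is purely illustrative.)

```python
from collections import Counter, defaultdict

# Toy illustration of "most probable next word" generation.
# An invented example sentence, split into words:
corpus = "the society met the council and the society wrote to the society".split()

# Count how often each word follows each other word (bigrams).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def most_likely_next(word):
    """Return the statistically most common word seen after `word`."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "society" (follows "the" 3 times vs 1 for "council")
```

A chatbot does something conceptually similar, but with probabilities computed by a neural network over an entire conversation’s context rather than a single preceding word.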

Richard surveyed the major AI companies and products — ChatGPT (OpenAI), Claude (Anthropic), and Gemini (Google DeepMind, whose origins are nearby in London) — and tested each against the Hammersmith Society’s website to compare their summarising abilities. Gemini produced the most accurate and comprehensive result on the day, even locating the society’s bank balance from public documents, though Richard cautioned that the models are constantly leapfrogging one another in capability.

He flagged the significant energy consumption of large AI systems — ChatGPT is estimated to use around 1,000 gigawatt hours per year — but noted that more efficient, task-specific models are being developed. A planning-specific AI being trialled in Cambridge and Greenwich, developed with Liverpool University, ran on a standard PC and was able to process 300 consultation responses in 16 minutes (compared to 18 hours of officer time), with the additional advantage of having no institutional bias.

A substantial part of Richard’s presentation was devoted to practical applications directly relevant to civic society work. He highlighted the use of AI for summarising large planning documents — the Earl’s Court development, for instance, generated some 800 documents — and demonstrated this live using Claude to produce a two-page summary of the Planning in London publication, complete with a table of contents, for the Forum’s website. He also introduced two specialist tools: Objector.ai, which analyses planning legislation and can help assess whether there are grounds for objection, and Anti-Render, which converts developers’ CGI images into more realistic depictions accounting for British weather and conditions.

On the subject of AI-generated imagery, Richard demonstrated how ChatGPT can generate images from a prompt, showing a summer party photograph transformed into a Christmas party image, and how it produced a jigsaw-puzzle graphic representing statutory planning consultees without needing to be told who those consultees were. He also addressed the risk of unwittingly using copyrighted images, recommending the reverse image search tool TinEye and showing how images can be extracted from PDFs for checking before publication.

Richard discussed the risks associated with AI more broadly. He touched on the concept of “PDoom” — the probability of existential harm from AI — noting real-world incidents including a young man whose AI companion encouraged criminal behaviour. He also raised concerns about autonomous weapons systems and the potential for AI “agents” that act on a user’s behalf without direct instruction. On a more prosaic level, he warned that automated copyright-enforcement bots are already scanning civic society websites, making vigilance around image use essential.
He closed by highlighting the concept of “vibe coding” — using AI to write software in plain English rather than a programming language — and suggested that AI-powered agents capable of performing tasks autonomously represent the likely next phase of development.

Part Two: AI in Practice — John Myers

John Myers, also a London Forum trustee and trustee of the Camden Society, led a more discursive second session, taking questions and demonstrating AI tools live. He prefaced his remarks with three practical principles: always ask the AI how to do something before attempting it yourself; ask the AI to make a plan before tackling a complex task; and give the AI as much context as possible about your organisation, your local area, and the personalities involved.

John demonstrated the use of Gemini to locate planning policy on public houses, quickly finding the relevant London Plan policy (HC7) and the corresponding National Planning Policy Framework wording revised in December 2024. He showed how any consultation document — including a live example on planning fees — can be fed to an LLM to generate a first draft response, which can then be refined and checked. He emphasised strongly that human oversight remains essential: civic societies must understand and stand behind what they submit, and should not simply allow machines to talk to machines unchecked.

The session ranged across a number of practical topics raised by members. On meeting minutes, John endorsed the use of AI transcription tools such as Granola and Otter, recommending the practice of recording a meeting, generating a transcript, producing minutes from it, and then deleting both the recording and transcript to minimise security and data protection risks. He urged societies not to publish sensitive information in publicly accessible minutes.

On the question of synthesising multiple consultation responses, John demonstrated how Claude can take a society’s own draft alongside responses from other bodies and, with appropriate context about the society’s priorities, suggest improvements and additions — a significant time-saver for groups with limited capacity.

A highlight of the session was a contribution from a member, Sophie, who described how she had used Claude to build a complete website for a residents’ association from scratch, with no prior coding experience. Using Claude Code and then uploading to GitHub (both free or low-cost services), she had produced a professional-looking site, capable of being updated in minutes. John commended this as a model for smaller societies struggling to recruit technical volunteers.

On security and data protection, John advised uploading only public documents or documents free of personal data, and recommended against giving AI tools access to personal email accounts or sensitive internal communications. He noted that he trusts Claude’s maker, Anthropic, as he would any responsible institution, but acknowledged that caution is warranted with less well-known providers.

The meeting closed with a brief discussion of social media, with John suggesting that AI can now be set to scan for news of relevance to a society, draft tweets, and in some cases post them directly — greatly reducing the manual burden of maintaining an active online presence. He also noted that Grok (xAI), despite reservations about its ownership, can be useful for accessing real-time information, while OpenStreetMap data offers a route to creating copyright-free maps.

The chair thanked both speakers and encouraged members to take what they had learned back to their own societies.


Date

Thursday April 30th 2026

Time

18:30 - 20:30

Location

The Gallery
77 Cowcross Street, EC1M 6EL
Opening Hour
18:00
Website
https://alanbaxter.co.uk/the-alan-baxter-gallery
Phone
020 7250 1555

The Gallery is entered at the far end of the courtyard at 70-77 Cowcross Street. What3Words: ///driven.result.whips

Organiser

The London Forum
Email
events@londonforum.org.uk
Website
Events page

Speakers

  • Richard Farthing
    Trustee

    Trustee of London Forum, former chair of the Hammersmith Society and chartered engineer, latterly working on large infrastructure projects in the UK (notably Terminal 5) and metro projects in the Far East, having lived in the USA for a while in the 1990s. He now runs the London Forum website and comms, amongst other interests.

  • John Myers
    Trustee

    Trustee of London Forum and former Secretary. After working in law and finance, he now runs a campaign for more affordable housing with the support of local communities and is non-executive Chair of the Centre for British Progress, an educational charity.
