AI in Local Government: AI Driven Legal Challenges

There has been much discussion about how generative Artificial Intelligence (AI) could change the way local authorities deliver services and analyse information. While AI can offer tangible efficiencies, the use of generative AI (which uses existing learning to create new content) brings heightened responsibilities in transparency, accountability and information governance, areas that are particularly significant for councils because of their statutory duties and democratic role.

Did you know that:

  • sharing confidential information with an AI tool may breach confidentiality undertakings
  • there is no guarantee that third-party IP rights are not incorporated into the output of an AI tool, so a third party may claim the output infringes their rights.

Local authorities could utilise AI in a number of ways, including summarising large documents and analysing data trends. Public‑sector guidance emphasises safe, secure, responsible and transparent use of AI.

A core emerging expectation for councils is transparency about when and how AI tools are used. Best practice is shifting towards clear public visibility, which may include creating a dedicated ‘How we use AI’ section on the council’s website, labelling AI‑generated or AI‑assisted outputs, and publishing clear explanations of safeguards and oversight processes.

This approach helps maintain public trust, reinforces fairness and aligns with public‑law principles that govern local authority decision‑making.

One of the most important implications for councils is the need to recognise that anything entered into an AI system, or produced by it, may be subject to disclosure.

Under data protection legislation, individuals may request their personal data through a Subject Access Request (SAR). If a council has input personal data into AI tools, or generated outputs relating to an identifiable individual, that material may need to be disclosed.

For wider information‑rights obligations, local authorities are also subject to the Freedom of Information Act 2000 (FOIA) and the Environmental Information Regulations 2004 (EIR).

If AI has been used to create or inform information the authority holds, the prompts, outputs or internal notes can fall within the scope of an FOIA or EIR request. Councils must therefore maintain appropriate audit trails, avoid entering sensitive information into uncontrolled systems and ensure AI use does not undermine legal compliance.

Councils must take particular care where decisions are influenced by AI: all decision‑making must remain demonstrably rational, fair and fully reviewed by officers. AI must remain an assistive tool; officers remain responsible for validating accuracy, avoiding bias, and ensuring alignment with statutory duties and published policies.

Additional emerging risks arise from the increasing use of AI by litigants in person and from the appearance of so‑called AI‑driven ‘law firms’ or legal services providers. Litigants in person may rely heavily on generative AI tools to draft pleadings, correspondence or legal arguments without a full understanding of procedural rules, evidential standards or the limits of the technology. This creates a risk that councils are presented with documents that appear legally coherent but are inaccurate, misleading or based on non‑existent authorities.

In parallel, there is growing concern across the legal sector about unregulated providers presenting themselves as law firms while relying primarily on AI‑generated outputs, potentially without appropriate professional oversight, regulation or insurance.

For local authorities, this raises issues around identifying the status and credibility of those they are dealing with, managing litigation risk, and ensuring that officers and legal teams do not place reliance on AI‑generated material that has not been independently verified. Councils should therefore approach AI‑generated submissions with appropriate caution and ensure that all material is subject to the same scrutiny and validation as any other unverified legal content.

So how can local authorities realise these benefits while managing the risks? Ideally, through establishing and maintaining a clear governance framework. This includes:

  • creating an internal AI use policy
  • training staff on safe and ethical usage
  • conducting Data Protection Impact Assessments (DPIAs) for higher‑risk use cases
  • ensuring procurement processes address AI‑specific risks
  • publishing a clear public explanation of AI use.

AI provides valuable opportunities for councils to enhance efficiency and service quality, but only if used openly, responsibly and with clear lines of accountability. By making AI use visible to residents, maintaining robust information‑governance controls and embedding strong human oversight, local authorities can innovate without compromising trust, fairness or legal compliance.

Written by:

Patricia Grinyer

Patricia heads the Weightmans banking and finance team and advises on all aspects of financial services, specifically public sector finance.
