Artificial Intelligence (AI) has become a powerful tool in the modern workplace. Platforms like ChatGPT can assist with research, improve productivity, and automate repetitive tasks. They can generate useful first drafts of policies, contracts, or communications. Some can even set themselves tasks in real time and seek to learn from their mistakes.
When used sensibly, AI offers businesses a strong operational foundation. But the key word here is “sensibly.” Without human oversight and expert know-how, AI can introduce bias, spread misinformation, and create costly legal risks.
We’ve previously explored how AI can benefit your business and how to protect yourself from inadvertent risks. This article focuses on a far more troubling trend: the deliberate misuse of the Data Protection Act and the use of AI to target employers and other businesses.
The Rise of DIY Legal Letters and Hostile Communications
One of the most common issues we’re now seeing is the use of AI to draft legal-style letters, which are often aggressive, accusatory, and riddled with inaccuracies. Rather than seeking qualified legal advice (which may involve costs and balanced feedback on the strength of their case), some individuals turn to AI platforms to create threatening emails or letters that appear legally sound but are fundamentally flawed.
These letters can escalate situations unnecessarily, damage business relationships, and spark disputes that could have been resolved amicably. They often lack context, contain legally incorrect assumptions, and are drafted without reference to actual UK law or legal processes. The end result? Increased litigation costs, internal stress, and unresolved conflicts for no one’s benefit.
AI-Generated Contracts: A Legal Minefield
Another concern is individuals or businesses using AI to generate contracts in an effort to avoid legal fees. These contracts are sometimes presented to consumers or third parties who may not realise that the terms are vague, contradictory, or unenforceable. When things go wrong, and they often do, the fallout can be significant. This is especially true when the terms relate to cancellation rights, damages, or liability.
AI doesn’t audit the accuracy or enforceability of clauses. It doesn’t understand nuance or negotiate outcomes. A legally binding contract is not simply a matter of stringing legal words together; it requires careful thought, clarity, and knowledge of the relevant law. Relying on AI to produce these documents without legal oversight is not just risky, it’s dangerous. We are now seeing the fallout in litigation, where parties rely on these documents only to discover their flaws further down the line.
The Weaponisation of Data Subject Access Requests (DSARs)
Perhaps the most burdensome misuse we’re witnessing is the growing trend of weaponising Data Subject Access Requests (DSARs). Originally designed to protect individuals’ personal data, DSARs are increasingly being used as a strategic tool by disgruntled employees, ex-contractors, or litigants to overload businesses with complex, costly, and time-consuming administrative tasks.
Some individuals submit broad, unfocused DSARs, often drafted by AI and full of legal jargon, much of it misapplied. Businesses are forced to dedicate significant IT and HR resources, seek legal advice, and trawl through vast data sets to respond in time.
While there are legitimate DSARs, and businesses should always treat them seriously, we are seeing more cases where they are clearly designed to exert pressure, distract management, or pave the way for speculative litigation.
We recently had a third party submit a DSAR to a solicitor requesting privileged communications between that solicitor and their client. Such material is clearly exempt and not disclosable, but the AI-generated letter misquoted the relevant exemption, citing its criteria out of context in an attempt to confuse and distract. The result was unnecessary cost, with letters passing between the parties simply to clarify the actual position.
Why AI Fails with DSARs
General DSARs created by AI tend to be extremely wide, asking for everything and the kitchen sink, which often makes them invalid. Because such requests are not compliant, employers may be entitled to refuse them or to enter into a justified back-and-forth seeking clarification. No effort is made to narrow or tailor the request, which is a flaw under the UK GDPR.
A typical example: ‘I require all emails, internal notes and documents, personnel records, CCTV footage and data relating to me.’ This gives no timeframe, context, or custodian details, and for any business it would be an impossible task.
DSARs are often submitted before a Tribunal disclosure date has been set, as a fishing expedition to establish a claim rather than through the appropriate legal process, and purely to place an administrative burden on the employer. The letters are often threatening, with a tone bordering on harassment: they demand a response, failing which reports will be issued, and they frequently fail to take account of sensitive or third-party data and confidentiality.
The Core Issue: AI Does Not Make You a Lawyer
There is a dangerous perception emerging that AI makes everyone legally equipped. It does not.
AI does not understand the legal merits of a case, the nuance of language, or the regulatory responsibilities that govern professional conduct. The SRA’s Code of Conduct binds solicitors to act ethically, proportionately, and with a view to resolution. AI-generated letters or complaints, by contrast, often inflame tensions, increase the risk of discrimination, and may breach ethical standards if misused.
We’ve seen examples of AI content drawing from foreign laws, inaccurate internet sources, or making broad claims without evidence. These letters don’t follow pre-action protocols and frequently ignore the duties of fairness and proportionality, leading to confusion, acrimony, and avoidable legal fallout.
We are even seeing AI invent fake cases and citations, which have caused real problems for the courts and for professionals who blindly relied on them.
So What Can Employers and Businesses Do?
To protect themselves from this growing threat, businesses should take a proactive approach:
1. Establish Clear Data Protection Policies
Ensure your organisation has a transparent and robust data protection policy. This should set out how data is processed, how long it is retained, and who has access. Destroy data in line with retention policies to reduce exposure and minimise irrelevant disclosures.
2. Be DSAR-Ready
Have a standardised DSAR response process in place. Appoint a data protection officer in-house or seek timely legal advice to ensure privileged or irrelevant data is not disclosed. Remember, you cannot reject a DSAR without a reason, but you can respond lawfully, proportionally, and with proper guidance.
3. Create an AI Use Policy for Staff
Introduce a staff AI policy that governs how AI can be used in the workplace. Ensure staff do not input sensitive or confidential information into public AI platforms. Provide training and conduct regular audits to ensure compliance.
4. Rely on Qualified HR and Legal Support
Avoid using AI to generate contracts, letters, or HR templates without oversight. Always have a qualified HR advisor or solicitor review critical documents, particularly those affecting staff, liability, or customer rights.
5. Comply with Deadlines or Communicate Early
Whether responding to a DSAR or any other legal request, be aware of statutory deadlines. If more time is needed, seek an extension; don’t ignore it. Silence can be interpreted as avoidance and may strengthen an opponent’s hand.
Conclusion
AI is a powerful tool, but like any tool, it can be used for good or evil. While many use AI to improve efficiency and communication, we’re increasingly seeing it misused to intimidate, provoke, or pressure businesses into settlements or compliance without a legal basis.
Let’s be clear: not every person is misusing AI, and not every DSAR or legal letter is spurious. But misuse is real and rising, and the burden often falls disproportionately on the businesses trying to follow the rules. Employers must be vigilant, proactive, and willing to blend technology with proper legal and human oversight. AI is not a lawyer, and treating it like one can have serious consequences.
A City Law Firm Limited is a leading entrepreneurial law firm in the City of London, with a dynamic and diverse team of lawyers. It has been awarded many accolades for its innovative work with emerging technologies, founder initiatives, and scale-up support. The firm specialises in tech law, scaling and investment-ready business law, IP, and commercial litigation, offering bespoke, specialist, friendly advice and support at competitive prices.