
As AI tools rapidly reshape marketing workflows, we’ve implemented clear internal standards to ensure secure, transparent, and responsible use. Why? Because we take our responsibility seriously: to our clients, to their data, and to the trust they place in us.
Through team-wide training, tool vetting, and enterprise-grade agreements, we’ve built a privacy-first framework that balances innovation with protection. From sandboxing new tools to enforcing strict data usage rules, our approach ensures AI becomes an asset, not a liability. Every AI-assisted output is reviewed by humans, and we’re transparent with clients about how we work. Because in marketing, trust is everything, and we’re safeguarding it at every step.
This post shares our approach to AI security: how we’re moving faster without cutting corners, why thoughtful adoption matters, and what it looks like to protect client trust in a tech-driven world.
In the marketing world, it’s hard not to be swept up in the AI wave. From idea generation to proposal writing, AI tools are helping teams move faster, do more, and unlock value that wasn’t feasible a year ago.
But in that rush for speed, it’s dangerously easy to forget what’s at stake.
As an agency that regularly deals with strategies, budgets, performance insights, even credentials, we can’t afford to treat data security as someone else’s problem. That’s why we’ve taken a considered approach to how AI is used inside the agency. Speed and efficiency matter, yes, but this should never be at the cost of trust.
We’ve set internal standards that ensure our use of AI stays aligned with our values: responsibility, transparency, and protection of client interests. We’ve been quietly working on this at the same time as trying and testing tools as they emerge, involving the whole team to make sure that we share learnings and discuss challenges.
We started this with a series of team one-to-ones. We realised that the devil is in the detail, and everyone was in danger of doing their own thing when it came to learning about generative AI and what it meant for them, personally and at work.
Through our one-to-ones we built up a clear picture of usage – which chatbot services were being tried, for instance. We also used them to discuss everyone’s individual needs in this area and where they could see gaps.
It also provided a great opportunity to discuss data security. It was a really insightful exercise, and although it took some time, it was worth every minute. The result was a clear agency map of usage, levels of understanding and data knowledge. It’s something we now schedule in to repeat on a regular basis.
This process helped us enormously to pin down which platforms to test more seriously in-house before adopting them, taking a security-first approach.
We keep an eye on changes and new releases across platforms, share these with the rest of the team, and encourage everyone to cross-share what they’re seeing too. It’s like one giant melting pot, fed through our internal Teams chats and a weekly ‘what’s new this week’ bulletin that everyone contributes to.
We’ve developed our own set of security and usage guidelines – our bible! It’s updated regularly and shared across the team; the latest addition covers the introduction of agents in ChatGPT and what this means, particularly for security and data. It keeps us on our toes, but it’s essential.
All of this leads to the standards below, which we’ve now agreed and adopted as a team:

We’re firm believers in the power of AI to speed up workflows. But it’s never a substitute for judgement. Tools assist. People decide.
That’s why every AI-supported workflow inside the agency includes one of the team in the loop. Whether we’re using it to explore campaign concepts, draft early proposal content, or help structure ideas – nothing gets delivered to a client without a review, a tweak, or a sense check from someone who understands the brief, the brand, and the context.
In some respects, you could say it’s a balancing act – keeping creativity high and risks low. We 100% accept the benefits of using AI but there’s no way we’re going to dilute the creative core of our business in doing so.
We use AI responsibly, and we talk about it openly too. When we work with clients, we’re clear about how and where AI plays a part in our work.
If clients have their own AI policies or preferred guardrails, we’re happy to align. We see AI as a collaboration, not a secret weapon. This openness builds trust. And in an industry where confidence is everything, that’s worth protecting.
These are the principles we’ve put in place to guide our use of AI across the agency. I’m sharing them in case there’s anything here, over and above what you’re already doing, that might be a useful takeaway.
AI has given us incredible new ways to work: faster, smarter, more creative. But with that comes new responsibility.
We’re beyond just experimenting with AI; we’re implementing it in a way that protects the trust we’ve worked hard to earn. That means clear rules, practical safeguards, and real human judgement behind every deliverable.
I hope some of this resonates with you and offers a useful take on our approach. In pulling my thoughts together and sharing where we are with AI and security, I’ve realised I’m not just talking about our agency – this applies to everyone working in the marketing space right now.
