InBold AI & Data Ethics Policy

InBold A/S  ·  Version 1.1  ·  In effect from 2025  ·  Review: annual

Introduction

InBold uses artificial intelligence in our day-to-day work — for research, copy and image production, media planning, transcription, automation, and the betterhype platform. This policy sets out the principles we apply when we do.

It is written for our clients, our colleagues, our suppliers, and the public. It sits alongside our Privacy Policy, Whistleblower Policy, Terms and Conditions, and our Code of Conduct, and does not repeat what those documents already cover.

Scope

This policy covers every AI-based system we develop, procure, or use on behalf of InBold A/S and all its offices in Scandinavia and Asia. It applies whether the system is generative (for example, large language models, image and video synthesis) or non-generative (for example, classifiers, recommendation engines, predictive analytics, and automation).

Our five principles

1. People stay in charge

AI is a tool, not a decision-maker. A named InBold colleague is responsible for every piece of work where AI has played a meaningful role, and for the outcome it produces.

2. Honesty about what we make

We tell our clients when AI shapes their deliverables in a material way, and we tell audiences when the law or reasonable expectation requires it. We do not pass off AI-generated work as something it is not.

3. Confidentiality first

Client material, personal data and unpublished work do not go into AI tools that could learn from them or expose them. We use enterprise-grade tools with contractual data protection for any work that touches client information.

4. Fair to the people in the work

We do not use AI to generate or manipulate likenesses of real people without their consent, to target audiences in ways they would not expect, or to make decisions about individuals’ employment, finances, or status without human judgement.

5. Proportionate to the risk

Not every use of AI carries the same risk. Low-risk uses — drafting, brainstorming, summarising public material — sit inside our normal ways of working. Higher-risk uses — anything touching client data, public claims, or vulnerable audiences — go through a defined approval path before they go live.

What we will not do

  • Generate or modify likenesses of real, identifiable people without their explicit, documented consent.
  • Pass confidential client material into AI tools that are not covered by an enterprise data agreement.
  • Use AI to make decisions about hiring, performance management, promotion, or termination of InBold colleagues.
  • Publish AI-generated content as if it were human-authored where the audience would reasonably expect a human author and the distinction is material to them.
  • Use AI to design or deliver communications that target children, health-distressed audiences, or other vulnerable groups in ways that exploit them.
  • Use AI for biometric or emotional analysis of consumers without their informed, specific consent.

Who owns this at InBold

Responsibility for this policy and for our day-to-day AI practice sits with the InPilot AI and Automation Group, an internal working group that reports to the Managing Partners. The group includes representation from leadership, digital and media, creative, people, and our Saigon production hub. It maintains the operational rulebook that supports this policy, reviews new AI tools before group-wide adoption, and convenes out-of-cycle whenever a material change in technology, regulation or risk requires it.

This public policy is reviewed at least once a year. The next scheduled review is December 2026.

AI and our footprint

Generative AI is energy-intensive at the model and data-centre level. We acknowledge this footprint, and we factor it into our procurement and our day-to-day choices: preferring enterprise providers with disclosed environmental commitments, avoiding redundant generation, and reporting on AI-related impact as our ESG measurement matures.

Speak up

If you are a colleague, client, supplier, or member of the public and you have a concern about how InBold is using AI, please raise it through our existing whistleblower channel at inbold.trusty.report. Reports can be made anonymously and are handled under our Whistleblower Policy.