SocietalAI

When AI appears human, trust becomes a Board issue.

Digital Interaction Trust helps organisations decide where AI must be distinguishable — and how to govern it — across voice, video, and digital channels.

  • Clarify where disclosure is required — and where it isn't
  • Measure misidentification and trust impact in real interactions
  • Set human-in-the-loop escalation and accountability standards

Confidential. Board-ready. Human-in-the-loop.

The Unseen Connection — a visual narrative about uncertainty in digital interactions. Are you speaking to a human, or an AI clone?

The trust frontier is already here

AI agents increasingly operate convincingly in everyday digital interactions. People often assume "human by default". When identity is unclear, organisations face trust, accountability, and regulatory risk. The question is no longer whether AI can pass as human — it's whether your organisation knows when it should, and when it mustn't.

What SocietalAI provides

Human–AI Distinguishability Framework (HADF)

A practical model covering disclosure, perception, agency, and context sensitivity — so leaders can decide where distinguishability matters.

Digital Interaction Trust Test (DITT)

Controlled assessments using real interactions (Zoom, Teams, voice) to measure human vs AI misidentification, trust impact, and contextual risk.
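As an illustration of the kind of metric such an assessment produces, a misidentification rate can be computed from labelled trial outcomes. This is a minimal sketch, not the DITT methodology itself; the trial data and function names are hypothetical.

```python
# Hypothetical trial records: (actual_speaker, participant_guess).
# In a DITT-style assessment these would come from controlled
# Zoom, Teams, or voice sessions; the values below are illustrative only.
trials = [
    ("ai", "human"), ("ai", "ai"), ("human", "human"),
    ("ai", "human"), ("human", "ai"), ("ai", "ai"),
]

def misidentification_rate(trials, actual="ai"):
    """Share of interactions with the given actual speaker that
    participants labelled incorrectly."""
    relevant = [(a, g) for a, g in trials if a == actual]
    if not relevant:
        return 0.0
    missed = sum(1 for a, g in relevant if g != a)
    return missed / len(relevant)

# In this sample, 2 of the 4 AI interactions were taken for human.
print(f"AI passed as human in {misidentification_rate(trials):.0%} of AI interactions")
```

The same function applied with `actual="human"` gives the reverse error: how often real people were suspected of being AI, which carries its own trust cost.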

Trust & Disclosure Advisory

Guidance on when disclosure is required, how to design human-in-the-loop escalation, and how to set Board-level accountability and policy readiness.

How it works

1. Briefing: a 30–45 minute session to understand your context, risks, and objectives
2. Assessment: a scoped diagnostic and testing exercise across your key digital interaction channels
3. Board-ready recommendations: policy frameworks, escalation protocols, and accountability standards

Who it's for

Boards & Non-Executive Directors

CX, Digital, HR, and AI leaders

Regulated and people-centric sectors

Request a Digital Interaction Trust Briefing

  • Board-level clarity in 30–45 minutes
  • Sector-relevant risks and choices
  • Recommended next steps

Assess Your Human–AI Trust Exposure

  • Misidentification map across key interactions
  • Trust and risk analysis
  • Board-ready governance recommendations

Frequently asked questions

Is this anti-AI?

Not at all. We support responsible AI deployment. Our work helps organisations use AI effectively while maintaining trust and meeting regulatory requirements.

Do we need to disclose AI in every interaction?

No. Disclosure requirements depend on context, risk, and regulatory environment. We help you determine where disclosure is necessary and where it isn't.

How long does an assessment take?

Typically 2–4 weeks from initial briefing to final recommendations, depending on the scope and complexity of your digital interaction channels.

Will this work with our existing AI vendors?

Yes. Our framework and assessments are vendor-agnostic and designed to integrate with your existing AI systems and providers.

Is this confidential?

Absolutely. All assessments, findings, and recommendations are treated with strict confidentiality and are designed for internal Board and leadership use.