Blog post -
A Deeper Dive into AI Ethics in PR
AI ethics is a hot topic in PR, but the conversation focuses largely on obvious issues: defining clear AI policies, organizational AI leadership, copyright clarity, AI usage disclosure, and combating misinformation. A deeper dive, however, brings more disturbing ethical issues to the surface, such as client data leakage, the use of user data and documents for model training, Shadow AI, and other technology-based security risks.
How PR professionals handle sensitive information is an ethical issue because the damage caused by data leakage from AI models can be severe. A key example is Samsung, which discovered that proprietary source code and confidential meeting transcripts had been exposed after employees uploaded them into personal ChatGPT accounts. Surprisingly, the byzantine privacy and training rules of AI technology companies make it difficult to identify these risks and incorporate safeguards into AI policies and practices.
Corporate executives’ early enthusiasm for GenAI is now tempered by hard security fears: a recent KPMG study reports an 81% jump in corporate leaders worried about data security, the use of their data for training, and control of intellectual property in released data. A recent Gartner study reports that 88% of CISOs worry about secure AI deployment, with 79% citing data leakage as the primary barrier to AI adoption, underscoring that GenAI’s promise is colliding with unresolved corporate security and governance issues.
And the security risks of AI technology providers don’t receive enough attention. Major products like ChatGPT, Claude, Meta AI, Gemini, Copilot, Grok, DeepSeek, and Mistral come with a blizzard of security and privacy rules, practices, and options that vary widely. For example, ChatGPT’s base plan uses data for training by default, with human review; Meta offers no training opt-out; and Anthropic just made training on user data its default. And LLM data retention defaults range from 30 days (OpenAI) to 18 months (Google) to as long as five years for Anthropic’s Claude.
Shadow AI is the unsanctioned use of personal AI accounts, chatbots, or plug-ins, and it risks silent leaks of confidential data. Shadow AI can create rogue data flows that bypass NDAs, breach confidentiality, and undermine the very trust PR exists to protect. This isn’t hypothetical: it’s the everyday convenience of a junior staffer trying to “just polish the prose,” which can create an evidentiary and reputational time bomb the organization can’t see, can’t audit, and can’t unwind.
As the PR industry continues to develop ethical AI standards, firms need to go beyond guidelines, use policies, and training, and focus first on the AI technology itself. The remedy is leadership and safe defaults: single sign-on (SSO) access, zero-retention enterprise tools, audit logs, bans on feeding sensitive inputs into public models, and partnerships with fully vetted technology providers that make data security a top priority.
###
AI Disclosure: Précis Public Relations by Précis AI was used to conduct research and produce an initial draft from the author’s detailed theme and outline. The draft was substantially edited and fact-checked by Précis using a second, independent AI to confirm facts, identify hallucinations, and verify sources.
David Fuscus
David Fuscus is the CEO and founder of Précis AI, whose PR-specific AI platform, Précis Public Relations, is built on the company’s AI DataVault™ security system. He also founded the Washington, DC PR agency Xenophon Strategies, and in 2023 he was inducted into the Public Relations Society of America’s Hall of Fame.