
Lying with AI


Freelance writer Theo Green describes how to avoid spreading misinformation when using AI

 

When artificial intelligence systems began generating text, imagery and audio that could rival human creativity, many people celebrated this new-found potential. Yet beneath the promise of efficiency and fresh insights lies a darker reality: most AI tools can, and will, fabricate facts, distort events and reinforce untruths with alarming ease, without any awareness that they are doing so.

 

In a media environment already burdened by disinformation and partisan spin, the advent of AI-generated misinformation represents an urgent challenge. How, then, can journalists, educators and everyday users harness AI responsibly, ensuring that they do not inadvertently perpetuate falsehoods or spread disinformation?

 

 

The dangers of synthetic content

In recent years, large language models such as ChatGPT have advanced at a breakneck pace. They can draft essays, answer technical queries and even compose poetry that appears genuinely inspired and that, to an unwary reader, can seem entirely legitimate.

 

Yet these systems are wholly probabilistic: they select words and constructs according to statistical patterns in their training data, rather than by consulting any notion of objective truth.

 

When prompted with questions they find hard to answer, they can “hallucinate”, presenting the questioner with invented dates, non-existent studies or misattributed quotations, all delivered with an air of authority that may mislead the unwary. This is because these systems lack broader context: they generate answers from statistical patterns in raw data rather than by weighing all the relevant factors.

 

Such inaccuracies are not always trivial. In finance or politics, fabricated statistics or out-of-context statements might skew public opinion or obscure vital debate. When using AI, even well-intentioned individuals risk amplifying errors, resharing them across social media or embedding them uncritically within formal publications. Any AI-generated content that is intended for sharing should therefore be read and edited by a human, making it a more reliable and useful document.

 

 

Understanding why AI “lies”

To mitigate these dangers, one must first understand why AI systems propagate falsehoods. Unlike a human researcher, an AI neither weighs evidence nor analyses primary sources. Instead, it draws upon patterns gleaned from its training data: typically vast bodies of text from websites, books and periodicals, some of which may themselves contain inaccuracies or bias.

 

If asked to provide a citation that it cannot find in its training data, an AI that has not been programmed with appropriate guardrails might invent a plausible reference rather than acknowledge its ignorance. If asked to summarise complex research, it may oversimplify or omit critical caveats.

 

In short, the technology excels at fluency, not veracity. An AI model takes data at face value; it cannot reach a deeper understanding of what the data really means. It may not take into account, for example, how and why the data was gathered, much less how it is presented.

 

In addition, errors inherent in the underlying data can propagate through successive generations of output. An AI will often revise the tone, word choice and formatting of text it created earlier, but it very rarely revisits the underlying data or quotations it has sourced. Misinformation in the material it draws on is simply echoed, and sensational or prejudiced passages may be disproportionately represented.

 

In the absence of meticulous auditing, a model’s outputs will reflect the flaws present in its training. This means that AI alone is often an unreliable creator of content: its outputs must generally be overseen by a human if they are to be trusted.

 

 

How to avoid sharing AI-driven falsehoods

Because our world is increasingly filled with AI-generated content, it is important to have strategies that help you, or your organisation, identify misinformation and avoid promoting it.

 

Regard AI as an aide, not an arbiter. Approach the outputs of public AI tools with scepticism. They can be used to draft ideas, outline structures or suggest phrasing, but treating them as a definitive source is inadvisable. When presented with an AI-asserted fact, pause and ask yourself two important questions:

  • Do I want this to be true? (If you say “yes” to this question, then you should examine the fact even more closely.)  
  • Is this likely to be true? (Use your common sense: if it is unlikely, then maybe it is false.)        

Independently corroborate every assertion. If the AI provides a statistic, date or quotation, verify it against reputable references. Consult any sources the AI cites and compare them with primary studies, official documents or respected news organisations. A brief search online or in trusted databases may confirm or refute the claim the AI is making. If verification proves impossible, flag the information as unconfirmed, omit it entirely, or place a warning on the content so that readers know it is AI-generated.

 

Insist on transparency of provenance. Encourage AI tools to offer confidence metrics for any statistics and sources for any facts, so that users can judge for themselves the validity of the assertions being made. Ask the AI to cite its sources and then include them in your article, having first checked that they are trustworthy, and indeed that they exist.

 

Craft precise, well-scoped prompts. The nature of the prompt you use directly influences the quality of an AI’s response. Broad or vague enquiries often yield broad, unreliable answers. Instead, pose tightly framed questions: for example, “Evaluate three peer-reviewed studies on…” rather than “Tell me about…”. This gives the AI a framework to work within.
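As a simple illustration of this kind of scoping (the helper function and its wording below are hypothetical, not tied to any particular AI tool), the constraints can even be baked into a reusable prompt template so that every query asks for checkable sources and an explicit admission of uncertainty:

```python
def build_scoped_prompt(topic: str, num_sources: int = 3) -> str:
    """Assemble a tightly framed prompt that asks for verifiable sources.

    Illustrative template only: adapt the wording to your subject area
    and to the AI tool you are using.
    """
    return (
        f"Evaluate {num_sources} peer-reviewed studies on {topic}. "
        "For each study, give the authors, year and journal so the "
        "reference can be checked independently. "
        "If you cannot identify a real study, say so explicitly "
        "rather than inventing one."
    )

# A narrow, checkable question instead of "Tell me about remote work"
print(build_scoped_prompt("the effect of remote working on productivity"))
```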

 

Annotate areas of uncertainty. When drafting from AI-generated suggestions, mark any doubtful segments. Footnotes or parenthetical annotations indicating that a statistic or claim awaits confirmation serve as reminders to both author and reader. This can be critical in preventing disinformation, and it also allows the work to be updated once those facts or figures are confirmed or more widely accepted.
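As a minimal sketch (the marker format is just one possible convention, and the figure used is invented), doubtful segments can be tagged in the draft itself so they are easy to find, resolve and update later:

```python
def flag_unconfirmed(claim: str, note: str = "awaiting confirmation") -> str:
    """Wrap a doubtful statistic or claim in an explicit editorial marker."""
    return f"{claim} [UNCONFIRMED: {note}]"

# The percentage below is invented purely to illustrate the annotation.
draft = "The survey found that " + flag_unconfirmed(
    "72% of respondents trusted AI summaries",
    "original survey not yet located",
) + "."
print(draft)
```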

 

Ensure robust human oversight. Use AI to augment, rather than replace, human judgment. Assign subject-matter experts or experienced editors to review AI-generated content, particularly in sensitive domains such as medicine, finance or law. Human reviewers remain indispensable in discerning nuance, context and ethical implications. They can detect whether the AI has misunderstood anything and check its sources to see whether they have been used appropriately or taken out of context.

 

Promote digital literacy. Organisations should equip their stakeholders (and especially their employees) with an understanding of AI’s limitations. Encourage a critical mindset, recognising that plausibility does not equate to truth, and help people to avoid blindly accepting the results that AI models may give them.

 

Design for accuracy. Technological design choices exert a profound influence on the propensity for misinformation. AI developers can reduce it by grounding models in verified databases or external knowledge graphs. Systems can be designed to convey degrees of confidence rather than offering unqualified assertions. The data sets used for training should be reviewed so that misleading, extremist or demonstrably false content is excluded. Adversarial testing can be used to challenge systems with demanding prompts designed to expose their tendency to hallucinate.
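The same principle can be applied editorially. The sketch below is a hypothetical example (the reference table, figures and wording are invented for illustration) of gating an AI-suggested statistic against a vetted in-house source before publication, rather than passing it through unqualified:

```python
# Hypothetical table of figures already verified by a human editor against
# primary sources. All values here are invented examples.
VERIFIED_FIGURES = {
    "uk_inflation_2024": "2.5%",
    "global_internet_users_2024": "5.5 billion",
}

def review_claim(key: str, ai_value: str) -> str:
    """Annotate an AI-suggested figure according to whether it matches a
    verified source; unmatched claims are flagged rather than silently kept."""
    verified = VERIFIED_FIGURES.get(key)
    if verified is None:
        return f"{ai_value} [unverified: no trusted source found]"
    if verified != ai_value:
        return f"{verified} [corrected from AI-suggested {ai_value}]"
    return f"{ai_value} [verified]"

print(review_claim("uk_inflation_2024", "3.1%"))   # corrected against the table
print(review_claim("eu_gdp_growth_2024", "1.9%"))  # flagged as unverified
```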

 

Share information responsibly. Ensure the quality of AI-generated content by developing governance processes, such as checklists for fact-checking. Only share AI-generated content for an authorised reason and under the supervision of authorised employees, who should check that copyrighted material or personal data has not been inadvertently included. Always be transparent about the presence of AI-generated content in published information, warn readers that it may be inaccurate, and give them easy routes for reporting concerns about accuracy.

 

 

The enduring human-machine alliance

As AI continues to mature, the distinction between human-written and machine-assisted text will become increasingly blurred. Yet the responsibility for truth rests unequivocally with the human using the AI model. While machines may accelerate research and relieve us of repetitive tasks, only human intellect can evaluate context, moral nuance and the ramifications of misinformation.

 

In an era when falsehoods can traverse the globe at digital speed, complacency is dangerous. Always remember that AI is a tool, not an all-knowing oracle.

 


 

Theo Green is a freelance writer. This article was planned in part using the ChatGPT and DeepSeek AIs, but was written by a human.

 

Main image courtesy of iStockPhoto.com and Dragon Claws
