With the use of Artificial Intelligence (AI) growing rapidly in media companies, you may be wondering about the best way to incorporate it into your own newsroom.
Because the technology is still so new, companies across the world are still working out what policies and guidelines to establish for their newsrooms, which means you probably are too.
These are just some of the questions your media company should be asking as you look to incorporate AI:
- Why should we incorporate AI into our workflows?
- What types of AI should we use? (Generative, transcription, etc.)
- What are the most trusted AI tools out there right now?
- How should we disclose the use of AI tools in our content?
- Should we use AI in the creation of images, videos and other visuals?
Through conversations with media companies and AI companies, and by studying research from organizations like Nieman Lab, we've addressed some of those questions below to help your media company create its own guidelines for AI usage.
3 things to consider when approaching AI
Human oversight: Companies should be willing to embrace AI, but should be mindful of the risks. Given how early our industry is in adopting AI tools (especially generative AI), it's critical that your workflow still includes human oversight. This helps you manage the risks of using these tools and take advantage of their capabilities while maintaining ethical guidelines, copyright compliance and your commitment to transparency.
You are probably already using AI: Generative AI tools should not replace the role of journalists and editors, but they can be used strategically to enhance productivity and the quality of journalism. In fact, many of us are already using AI and may not know it. Do you use a transcription service for interviews or meetings? What about closed captioning services for your videos? Then you're using AI.
Be flexible: Because these technologies are growing so rapidly, write your guidelines to be flexible and amendable. Know that your guidelines will most likely need to be reviewed and amended on an ongoing basis as the technology evolves, but they should always remain grounded in your existing codes of conduct and journalism principles.
Intention of Use
Fact checking: AI should be used as a tool to aid in the process of sourcing information and writing, not to replace traditional sourcing or to create content that is published without first being checked by a human. Any information obtained through generative AI should be fact-checked against credible sources, in keeping with journalism best practices, to ensure accuracy.
We recommend that any use of generative AI first be approved by senior management in your newsroom, to avoid ethical breaches or misunderstandings in your workflow. That is one more reason creating guidelines should be part of introducing AI.
A real-world example: Our team used ChatGPT to help generate the stories below. We asked ChatGPT to suggest locations to include in the listings, then fact-checked those AI-generated responses before putting together each list.
- 4 hiking, biking and walking trails in Brookhaven worth exploring
- From craft beers to award-winning cocktails, find an adult beverage at these 5 places in Allen
- These are the best local craft breweries in The Triangle
Human approval: All AI-generated content should be reviewed by a human prior to publication or sharing of any kind. Human oversight helps avoid plagiarism, copyright infringement, the spread of misinformation, bias and other concerns that can arise with the use of AI.
How to label it: Content generated and/or written by AI should be labeled clearly. The easiest way to do so is a disclaimer at the bottom of an article.
Who is responsible: Any content generated by AI remains the responsibility of the writer and the publication. Journalists are ultimately accountable for the accuracy, fairness, originality and quality of every word in their stories.
When you don’t have to explain yourself: However, when AI is used only as a tool in your workflow (for example, using Otter.ai to transcribe interviews or Grammarly to check grammar and spelling), its use does not need to be explicitly labeled.
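To make the labeling step above concrete, here is a minimal Python sketch of how a CMS might append a disclosure automatically at the bottom of an article. The function name, CSS class and disclaimer wording are hypothetical illustrations, not a standard; your newsroom should settle on its own language.

```python
def add_ai_disclosure(article_html: str, tool_name: str) -> str:
    """Append a clearly labeled AI-use disclaimer to the end of an article.

    The wording below is only an example; each newsroom should write
    disclosure language that fits its own guidelines.
    """
    disclaimer = (
        f'<p class="ai-disclosure"><em>Portions of this story were created '
        f'with the assistance of {tool_name} and were reviewed by our '
        f'editors before publication.</em></p>'
    )
    return article_html + "\n" + disclaimer
```

Because the disclaimer is appended in one place, updating your disclosure language later means changing a single template rather than editing every story.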
Images and models
Stock images: AI may be used to help generate images, videos or other visuals, but only in place of stock images. We do not recommend using it to generate visuals of sources or events. For example, if you need a stock photo of someone working in a coffee shop, that's probably OK to use AI for. But if you're looking for a photo to accompany a story covering an event or meeting, an AI-generated image could misrepresent actual people or events.
Graphics: AI can also be used to help create other visuals, including graphs, charts and social media posts, that accompany a story.
Fact checking images: NPR recommends research scientist Mike Caulfield's SIFT method of fact-checking images, which keeps in line with well-established media literacy guidelines:
- S – Stop
- I – Investigate the source
- F – Find better coverage
- T – Trace the original context
Google is also making it easier to see when a photo first appeared online, which can help identify AI-generated images or those shared with misleading or false context.
Copyright: Currently, works or images created solely by AI are not protected by copyright. On March 16, 2023, the Copyright Office launched an initiative to examine the copyright law and policy issues raised by AI, including the scope of copyright in works generated using AI tools and the use of copyrighted materials in AI training. The Office established the webpage copyright.gov/ai/ to track updates on the initiative.
Other content uses
Headline writing: Headlines can be tricky. If you’re stumped on the perfect headline, consider running the story through AI to generate a headline for you, or one that you can use to help you create your own. This is also a great way to ensure your story stays on topic if AI is able to generate an accurate and concise headline.
With all of the channels where you may be distributing your content (online, in print, on social media and in newsletters, to name a few), sometimes you just need a new way to write the same headline. AI can help generate some other ideas.
Summarizing for distribution: Similarly, your team is likely recapping each article to promote it in a social media post or share it in a newsletter. Once again, putting your story through AI and asking it to generate a summary can be another great time saver. Just be sure to maintain human oversight with each of these uses as well.
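As a sketch of what that summarization step might look like in practice, the Python helper below builds an OpenAI-style chat message payload asking a model to summarize a story for a given channel. The function name, prompt wording and `channel` parameter are our own illustration; actually sending the payload requires an API client and key, and any returned summary should still be reviewed by a human before it goes out.

```python
def build_summary_prompt(article_text: str, channel: str,
                         max_words: int = 50) -> list[dict]:
    """Build a chat-style message payload asking a model to summarize a
    story for a distribution channel (e.g. 'newsletter' or 'social media').

    The system message instructs the model to stick to facts in the story,
    but the output must still be fact-checked by a human editor.
    """
    return [
        {"role": "system",
         "content": ("You summarize news articles. Use only facts that "
                     "appear in the article; do not invent details.")},
        {"role": "user",
         "content": (f"Summarize the following story in at most "
                     f"{max_words} words for a {channel} post:\n\n"
                     f"{article_text}")},
    ]
```

Keeping the prompt in one helper makes it easy to tune the instructions (tone, length, channel) in a single place as your guidelines evolve.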
Generating SEO: Keywords and descriptions are key to making your story SEO-friendly. As with summaries, AI can help identify keywords, metadata descriptions, alt text, closed captioning and more, all of which can help boost your story's SEO.
Privacy and confidentiality
When using any AI, be careful about providing sensitive information to external platforms. Be mindful when including confidential information, even unpublished content, as input for generative AI tools.
Trusted AI right now
Any AI you use should be technically robust and secure to minimize the risk of error and misuse. Trusted sources include, but are not limited to: