The Impact of Artificial Intelligence on Media - An Interview with an AI Researcher
Today, media outlets use artificial intelligence (AI) mainly as an assistive technology, while the content of an article is still created by a journalist. However, as modern technologies have developed, the capabilities of artificial intelligence have improved, and there are assumptions that journalists may be replaced by AI.

Lennart Hofeditz, a German researcher of digital communication and transformation who studies how artificial intelligence is being established in different professions and the challenges related to it, spoke to us about the future of artificial intelligence in journalism.

The main messages of the interview:

  • There are specific topics that AI can write about better than a human, but for analytical, complex articles, a journalist is irreplaceable.
  • Bias or misinformation in AI-generated content comes from the sources it is given: AI creates text or images based on data provided by a human, so responsibility for credibility rests with the journalist.
  • To avoid spreading misinformation, it is important to increase media literacy in three areas: media, artificial intelligence, and data.

In 2021, Lennart researched the use and credibility of artificial intelligence in journalism. People, especially young people, increasingly prefer to get their information from social networks, where artificial intelligence is often used to spread the news, so it is important to understand what AI is in general and how it works in the media.

1) According to your research findings, how much do readers trust AI?

In general, people trust digital technologies and artificial intelligence less than journalists. However, there are differences between groups: for example, active users of social networks trust an article published by AI more than less active users do.

The trustworthiness of an AI-generated media product is determined by several factors:
  • The media type
  • The content of the article
  • Media reputation

Hence, if you are a credible news organization with valid sources, people will trust the AI-based articles your agency provides.

There are specific topics that AI is good at writing about, for example, covering sports events or financial news, or creating photos and videos. From an ethical perspective, however, a journalist, unlike AI, can verify the obtained information and its sources; the author can therefore use artificial intelligence to diversify the media product.

2) There is an opinion that AI may be less biased. What has your research shown?

When journalists use AI for reporting, they provide the AI with a specific database [on which the content of the final AI-generated product depends]. So even if an AI-generated article is biased, it remains the journalist's professional responsibility to provide credible sources to the AI, and thus to the audience.


3) How can artificial intelligence be used to spread "deep fakes" and misinformation?

Today, AI can already create content similar to human-written text, 3D images, and deepfakes of politicians and other famous people; the latter is a clear threat to their reputation and security.

However, we are now working on a project that should curb the disinformation spread by artificial intelligence. It is a simulation of how fake news spreads in social networks, and it gives us the opportunity to show journalists, representatives of various organizations, or politicians which sources are reliable.
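The interview does not describe how the simulation is built, but the idea can be illustrated with a minimal sketch: model a follower network as a graph and let each informed user pass a story to each follower with some probability. The network, probabilities, and function names below are invented for illustration and are not the researchers' actual model.

```python
import random

def simulate_spread(graph, seed_nodes, share_prob, rng):
    """Simulate rumor spread: each informed user shares the story with
    each of their followers independently with probability share_prob."""
    informed = set(seed_nodes)
    frontier = list(seed_nodes)
    while frontier:
        next_frontier = []
        for user in frontier:
            for follower in graph.get(user, []):
                if follower not in informed and rng.random() < share_prob:
                    informed.add(follower)
                    next_frontier.append(follower)
        frontier = next_frontier
    return informed

# Toy follower network: user -> followers who see their posts
network = {
    "source": ["a", "b"],
    "a": ["c", "d"],
    "b": ["d", "e"],
    "d": ["f"],
}

rng = random.Random(42)
reached = simulate_spread(network, ["source"], share_prob=0.8, rng=rng)
print(f"{len(reached)} users reached")
```

Running the same toy network with different sharing probabilities shows how quickly a story can saturate a network, which is the kind of behavior such a simulation can demonstrate to journalists and politicians.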

Additionally, another way to combat misinformation is "labeling". Today, artificial intelligence, so-called "meaningful bots," can find fake news spreading on social networks and point out that the information is not true. However, our research has shown that people often do not trust these labels; they believe in conspiracy theories and think that the organization that checked the facts is biased.
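The labeling idea can be sketched in a few lines: match incoming posts against a list of already-debunked claims and attach a warning label when a claim reappears. Real labeling bots use far more sophisticated language models; the naive substring matching, function names, and sample claims here are purely illustrative assumptions.

```python
def label_posts(posts, debunked_claims):
    """Attach a fact-check label to posts that repeat a known
    debunked claim (naive substring match, for illustration only)."""
    labeled = []
    for post in posts:
        flags = [c for c in debunked_claims if c.lower() in post.lower()]
        labeled.append({
            "text": post,
            "label": "disputed by fact-checkers" if flags else None,
            "matched_claims": flags,
        })
    return labeled

posts = [
    "Scientists say the moon landing was staged!",
    "The city council meets on Tuesday.",
]
debunked = ["moon landing was staged"]
for p in label_posts(posts, debunked):
    print(p["text"], "->", p["label"])
```

As the interview notes, the hard part is not attaching the label but getting audiences to trust it, which no amount of code alone can solve.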

I think that journalists should explain to the audience how fact-checking organizations work and how the artificial intelligence used to verify information operates.

4) How is social media using AI to target the audience?

The algorithms social networks mainly use work on keywords: they match the words used in a user's description, interests, or posts and serve them content, including disinformation, accordingly.
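The keyword matching described above can be sketched as a simple scoring function: collect the words from a user's profile fields and count how many of a message's keywords appear there. Real targeting systems are far more complex; the field names, profile data, and scoring scheme below are invented for illustration.

```python
def match_score(profile, message_keywords):
    """Score how well a message's keywords match a user's profile
    text (description, interests, posts), case-insensitively."""
    words = set()
    for field in ("description", "interests", "posts"):
        for text in profile.get(field, []):
            words.update(text.lower().split())
    hits = [kw for kw in message_keywords if kw.lower() in words]
    return len(hits) / len(message_keywords), hits

profile = {
    "description": ["freelance football journalist"],
    "interests": ["football", "politics"],
    "posts": ["great match last night"],
}
score, hits = match_score(profile, ["football", "election"])
```

A message whose keywords score highly against a profile would be shown to that user, which is exactly how targeted disinformation finds receptive audiences.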

5) How can people be protected from disinformation when, technically, AI can spread it faster?

To protect against misinformation, it is important to raise literacy in three areas: media, artificial intelligence, and data. It is also important to explain to people how the algorithms work in general. Take the example of Germany: we put a questionnaire on governmental websites that also includes fake news, and users have to guess which stories are real. In case of an incorrect answer, the system explains which story was untrue and why.
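The quiz mechanism described above, guess which story is real, then get an explanation when wrong, is straightforward to sketch. The headlines, data structure, and function names below are hypothetical examples, not the actual German questionnaire.

```python
def check_answer(item, user_says_real):
    """Return feedback for one quiz item: whether the user was right,
    plus the explanation when they were wrong."""
    correct = (user_says_real == item["is_real"])
    return {"correct": correct,
            "explanation": None if correct else item["explanation"]}

quiz = [
    {"headline": "Chancellor announces new rail funding",
     "is_real": True,
     "explanation": "Reported by several major outlets."},
    {"headline": "Drinking hot water cures the flu",
     "is_real": False,
     "explanation": "No medical evidence supports this claim."},
]

# A user wrongly believes the fake health headline is real
result = check_answer(quiz[1], user_says_real=True)
```

The pedagogical point is the explanation step: users learn most when the system tells them why a story was fake, not merely that they guessed wrong.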

6) How to explain the role and benefits of AI to people?

Explainable artificial intelligence (XAI) is a research field that examines what we need to explain to people to make AI more familiar and trustworthy. For example, a "decision tree" algorithm can be explained with a visual of a tree with leaves, where the leaves illustrate the different decisions.
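A decision tree lends itself to exactly the kind of visual explanation described: each inner node asks a question and each leaf carries a final decision. As a minimal sketch (the tree contents and rendering style are invented for illustration), such a tree can be rendered as indented text:

```python
def render_tree(node, indent=0):
    """Render a decision tree as indented text; leaf nodes carry
    the final decision, inner nodes the question asked."""
    pad = "  " * indent
    if "decision" in node:
        return [pad + f"-> {node['decision']}"]
    lines = [pad + node["question"]]
    for answer, child in node["branches"].items():
        lines.append(pad + f"[{answer}]")
        lines.extend(render_tree(child, indent + 1))
    return lines

tree = {
    "question": "Does the article cite a named source?",
    "branches": {
        "yes": {"decision": "treat as credible"},
        "no": {
            "question": "Is the claim confirmed elsewhere?",
            "branches": {
                "yes": {"decision": "treat as credible"},
                "no": {"decision": "flag for fact-checking"},
            },
        },
    },
}
print("\n".join(render_tree(tree)))
```

Because every path from root to leaf is a readable chain of questions, a lay audience can follow exactly why the system reached a given decision, which is the point XAI makes.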

7) What artificial intelligence systems (tools) are used in the media?

The GPT-3 model uses AI to "teach" the program to write based on experience (already existing data), which adds more human features to the text. For example, if we want GPT-3 to write about a sports event, we need to "give" it both general information about the topic and the specific details we want the AI to include in the generated text. At this stage, the system is sophisticated and writes on extensive, complex topics; it is used by influential publications, including The Guardian.
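The workflow described, supplying both general background and the specific facts the model must mention, amounts to prompt construction. The sketch below only assembles such a prompt as a string; it does not call any real API, and the function name, field names, and sample facts are assumptions for illustration.

```python
def build_article_prompt(topic, general_facts, must_include):
    """Assemble a prompt for a text-generation model from general
    background plus the specific facts the editor wants included."""
    lines = [f"Write a short news report about: {topic}", "", "Background:"]
    lines += [f"- {fact}" for fact in general_facts]
    lines += ["", "Make sure the report mentions:"]
    lines += [f"- {item}" for item in must_include]
    return "\n".join(lines)

prompt = build_article_prompt(
    "yesterday's football final",
    general_facts=["Final score 2-1", "Played at the city stadium"],
    must_include=["the winning goal in the 89th minute"],
)
```

The resulting text would then be sent to a text-generation service; as the interview stresses later, whatever comes back still has to be verified by an editor before publication.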

Another system used by publications is the same company's (OpenAI) product, DALL-E 2. Cosmopolitan magazine was the first to use DALL-E 2 and GPT-3 to print an AI-designed cover. Using the program is quite simple: you indicate keywords, and based on those words the artificial intelligence creates an image. You pay a small fee for each image. This system is a revolution for graphic designers.

Editorial note:

Illustrations created by DALL-E 2 are also used in "Mediachecker" articles. For the presented photo, we specified as keywords:

detailed and realistic portrait of a person with freckles, round eyes and makeup, red lipstick, white shirt, skin texture, dry lip, 85mm lens, natural lighting, contrasting colors, and realistic photo with highlighted details.

There is also the LaMDA artificial intelligence, which is conversation-based. In the future, if a journalist provides it with topics, the AI will prepare questions on those issues and will be able to conduct an interview. Google uses this AI: for example, if you want to book a table for a meeting, the artificial intelligence calls the venue itself, talks to a representative, and makes the reservation for you.

8) As you mentioned, the DALL-E 2 system can create creative images, which was previously the job of graphic designers. Is it possible that some of the journalist's functions will be taken over by AI?

I see this positively, and I think we should learn to use these opportunities to make our jobs easier. AI can create a creative photo, but you still need a graphic designer to find the right keywords, because artificial intelligence may not give you the desired image on the first try.

The same can be said about GPT-3: if a publication uses this AI to write an article, that media outlet is responsible for the content, because GPT-3 does not publish text automatically; an editor or journalist must verify the material prepared by artificial intelligence.

9) What are the main challenges in using AI in journalism?

The first and most important challenge is the lack of technical knowledge among journalists. They often do not know how to use it, for example, how to reference databases and sources for the AI. Unfortunately, when an inexperienced journalist uses artificial intelligence, instead of diversifying the material it may even do damage. I think the solution is for media organizations to continually fund and support improving their employees' knowledge in this regard.

10) How do you see the future of AI in journalism?

It is difficult to make accurate predictions because it is a very fast-developing field. However, I would say that in 5-10 years, journalists will have more of an editor's function and will control the product created by AI.

80% of the content (photo, text, video) can be created by artificial intelligence, but the journalist (editor) has to decide what to publish and provide the AI with reliable sources.

Author: Tamar Toidze