As AI enters newsrooms, journalists have an urgent responsibility
Artificial Intelligence News
Jun 17, 2017
ALL NEW TECHNOLOGIES have their champions and their naysayers, Luddite matching tit for tat with techno-utopian. The arrival of artificial intelligence in newsrooms is no different. Some see its various iterations as tools for reducing grunt work; others see a field full of ethical land mines. Most see a little bit of both.
Artificial intelligence is a boon for processing large amounts of material and for adapting to digitization. Natural language processing can help analyze tweets en masse. The New York Times used machine learning to take its archive of recipes, add a structure to each one (by classifying what is an ingredient, what is a step, and so on), and create the NYT Cooking app. The Times also hopes to use automation to create a digital archive for its photo collection, which is currently sitting in the basement of the Times building.
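At its core, the recipe project is a text-classification task: deciding, line by line, whether a string is an ingredient or an instruction. The sketch below is a minimal illustration of that idea, not the Times's actual pipeline, which has not been published in this form. The training lines are invented, and scikit-learn is used here as a stand-in for whatever tooling the Times employed.

```python
# A minimal, hypothetical sketch of recipe-line classification:
# label each line of a recipe as an "ingredient" or a "step".
# The tiny training set below is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_lines = [
    "2 cups all-purpose flour",
    "1 teaspoon baking soda",
    "3 tablespoons unsalted butter, softened",
    "1/2 cup granulated sugar",
    "Preheat the oven to 350 degrees.",
    "Whisk the dry ingredients together in a large bowl.",
    "Fold in the chocolate chips and chill the dough.",
    "Bake until golden, about 12 minutes.",
]
train_labels = ["ingredient"] * 4 + ["step"] * 4

# Character n-grams pick up surface cues such as leading quantities
# ("2 cups") and imperative verbs ("Preheat", "Whisk"), which is why
# even a toy model like this can separate the two classes.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(train_lines, train_labels)

for line in ["1 pinch of salt", "Stir gently until combined."]:
    print(line, "->", model.predict([line])[0])
```

Run over an entire archive, predictions like these are what turn a flat pile of text into structured records an app can search and filter.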
Artificial intelligence still hasn’t entered most newsrooms, and, as Jonathan Stray has written for CJR, it won’t replace journalism so much as augment it. As journalism dips its toes into automated story writing, data scraping, and the like—and as the tools themselves become easier to use and more widely available—now is the time to consider what AI has to offer journalism, and weigh its potential drawbacks.
This week, a group including technologists, journalists, and legal experts gathered at Columbia Journalism School for a conference on the impact of artificial intelligence on journalism. Hosted by the Tow Center for Digital Journalism and the Brown Institute for Media Innovation, the event touched on AI’s entrance into reporting and writing, as well as into the relationship between news outlets and their readers.
Take, for example, the advent of virtual voice-activated assistants such as Amazon’s Alexa, Google Home, and the new Apple HomePod—or even the now commonplace Siri on iPhone. These services, said Harvard Berkman Fellow Judith Donath, are anthropomorphized algorithms: They are computer code meant to evoke a human quality. This becomes relevant to the news when these services start delivering us headlines every morning. Because they sound human, they could change our behavior. “You’re not worried your newspaper will think you’re shallow,” Donath said.
The biggest stumbling block for the entrance of AI into newsrooms is transparency. Transparency, a basic journalistic value, is often at odds with artificial intelligence, which usually works behind the scenes. This raises ethical issues when journalists begin using AI to assist in reporting. How transparent can or should they be about the code behind the story? Does explaining technical concepts increase trust, or decrease it? For instance, if a robot writes an article (they already write many basic sports stories), how should a news outlet disclose that to the reader?
Other questions are more complicated, especially when journalistic ethics confront the proprietary concerns of technology companies. Angela Bassa, director of data science at iRobot, said the choice to be transparent often means sacrificing other benefits such as profit and scalability. When a data operation gets big, it’s harder to make it public simply because of the effort required to anonymize, format, and host the data. On the reporting side, lawyer Amanda Levendowski pointed out, working with AI means collecting vast swathes of data. Journalists have a special responsibility to acquire it legally and ethically—even public social media posts.
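The disclosure question is concrete because most automated sports recaps are not mysterious: they are typically templates filled in with structured game data. The sketch below shows that technique with an explicit disclosure line appended; the template, the game data, and the wording of the disclosure are all invented here, not drawn from any outlet's actual system.

```python
# A minimal, hypothetical sketch of template-based story generation,
# the technique behind many automated sports recaps, with a
# disclosure line appended. All data below is invented.
TEMPLATE = (
    "{winner} defeated {loser} {w_score}-{l_score} on {day}. "
    "{star} led {winner} with {points} points."
)
DISCLOSURE = "This story was generated automatically from game data."

def write_recap(game: dict) -> str:
    # Fill the template from structured fields, then disclose.
    return TEMPLATE.format(**game) + "\n\n" + DISCLOSURE

print(write_recap({
    "winner": "Riverton", "loser": "Lakeside",
    "w_score": 78, "l_score": 65, "day": "Friday",
    "star": "J. Alvarez", "points": 24,
}))
```

Appending the disclosure in code rather than leaving it to editors makes it hard to forget, though where and how prominently it appears remains an editorial judgment.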
Personalization of news is also a major concern, because it poses problems for the establishment of a shared reality. Facebook’s all-knowing “algorithm” was blamed after the 2016 election for feeding people the news they wanted to see, rather than a balanced, bipartisan diet. Personalization also challenges the concept of news as a public record. If we each see the same basic story, but tailored to our age, gender, or cultural touchstones, there is no single story to archive, and therefore no single history of a given event.
The idea of artificial intelligence entering our lives becomes more approachable when it is broken down into realistic scenarios. But that doesn’t make it any easier to determine what we, as journalists, want our relationship with AI to be. As Jon Keegan, a senior research fellow at Tow, said after the event: “Journalists who want to work with AI have a serious responsibility to understand the caveats with using these tools, how they work, and to speak with experts who use this tool in their daily work.”
Nausicaa Renner is editor of the Tow Center for Digital Journalism’s vertical at Columbia Journalism Review. She tweets at @nausjcaa.
Full Article: https://www.cjr.org/tow_center/artificial-intelligence-newsrooms.php