The EU-funded InVID and WeVerify projects were featured prominently at this year’s South by Southwest Conference and Festival (SXSW 2019), and the Media Verification Team (MeVer) of CERTH-ITI was there to present them. Postdoctoral Associate Dr Markos Zampoglou was invited on behalf of MeVer, alongside Denis Teyssou from AFP, our partner in both InVID and WeVerify.
The EU House at SXSW (EU@SXSW) was hosted at the “Palm Door on 6th” venue. The InVID/WeVerify presentation and demos ran on a number of touch screens, where a simplified version of the InVID Plugin was applied to a set of hand-picked real-world examples. We were there to guide visitors through the use of the plugin, explain its results, and discuss the role and impact of the WeVerify project in a world where disinformation is an increasingly worrying problem.
Our presentations drew considerable attention from visitors across many different fields, most prominently the tech and media industries. The forensics and keyframe analysis features provided by MeVer were a major point of discussion, since similar functionality is not currently offered by other publicly available verification tools. Overall, more than one hundred people tried the demos and attended the presentations, hopefully broadening the user base of the applications.
Beyond our own presentations, SXSW 2019 included a host of events, exhibitions and talks relevant to our work. To the extent that our schedule allowed, we tried to connect with the community and follow the most hotly discussed issues. Some notable events included:
- AI and the future of journalism
Meredith Broussard (New York University), Rubina Fillion (The Intercept), Elite Truong (The Washington Post), Emily Withrow (Quartz)
The panel noted how recent progress in AI tools means that a large team of experts is no longer needed to integrate AI technologies into a news company: existing tools and small local partnerships are more than sufficient nowadays. The tasks that AI tools can fulfill include automatic comment moderation (not only removing inappropriate content, but also highlighting good contributions), personalized story recommendation, semi-automated writing and updating of local stories where reporters are unavailable (The Washington Post’s Heliograf received a mention here), and even potentially building stories that users can explore using natural language. Despite recognizing the current limitations of AI technologies, a take-home quote from the panel was that “the next Pulitzer story might be AI-assisted”: not because it will be written by a bot, but because AI will have contributed significantly to analyzing data and spotting newsworthy events and their highlights. On a less optimistic note, the panel concluded by discussing certain social issues that may arise from the use of AI (“Black Mirror scenarios”), most notably inequality (e.g. the lack of support for vernaculars in NLP), deepfakes, privacy, the impact on jobs, and the new field of AI ethics, which was the subject of another panel.
- Conspiracy Theorists on Social Media
Ben Collins (NBC News), Charlie Warzel (The New York Times), Kelly Weill (The Daily Beast), Brandy Zadrozny (NBC News)
This event, more aligned with the “soft” (i.e. social) aspects of the relevant MeVer projects, approached the issue of right-wing conspiracy theorists using social media platforms to organize campaigns and disseminate disinformation. The panel, which consisted of news professionals specializing in disinformation and the “dark” corners of the Web, noted the transition of conspiracy theorists away from open platforms, such as public Facebook pages, towards more controlled outlets such as Discord, and highlighted that such disinformation campaigns often have very serious real-world impact, as in the 2014 Isla Vista killings, the 2015 Charleston church shooting, the 2018 Toronto van attack, and the “Pizzagate” incident.
- Can We Fight Fake News Without Killing the Truth?
Wajahat Ali, Rim-Sarah Alouane (University of Toulouse Capitole), Anjana Susarla (Eli Broad College of Business, Michigan State University), Shaarik Zafar (Facebook)
The panel discussed the scope of the “fake news” issue, the role of online platforms and AI, and the role of media literacy. The common observation that disinformation is on the rise was questioned: while the phenomenon is hard to quantify, recent studies suggest that the actual volume of hoaxes designed to misinform is somewhat lower than it was two years ago. On behalf of Facebook, Shaarik Zafar noted that the company is closing about a million fake accounts a day, which has a significant impact on misinformation, and that Facebook works with third-party fact-checkers in 40 countries to verify content. However, it was agreed that the fight against disinformation needs everybody’s help: journalists, academia, civil society, and non-expert users.
- AI-Powered Media Manipulation and Its Consequences
Joan Donovan (Data & Society Research Institute), Jessica Fjeld (Berkman Klein Center for Internet & Society, Cyberlaw Clinic), Matt Groh (MIT Media Lab), Mason Kortz (Berkman Klein Center for Internet & Society, Cyberlaw Clinic)
The panel dealt with the novel tools that have arisen in recent years (many of which fall within MeVer’s areas of research), including automatic image and audio generation and manipulation, as well as user profiling and targeted advertising, and the ways such tools can be used to cause real-world harm through disinformation or polarization. Matt Groh from the MIT Media Lab described recent breakthroughs in Generative Adversarial Networks. Jessica Fjeld discussed the legal framework for dealing with the consequences of fake or harmful content and argued that, while the existing framework appears sufficient for most cases, it is the practical issues, such as locating the perpetrator (often across borders) or containing the material once it has been disseminated on the Web, that make the problem difficult to tackle. Joan Donovan discussed real-world cases of violent polarization caused by weaponised misinformation, while Mason Kortz focused on the shift in views on the desirability of regulating platforms versus the existing deregulatory views on free speech and the Internet.
- Algorithms Go to Law School: The Ethics of AI
Lynne Parker (White House), Tess Posner (AI4All), Francesca Rossi (IBM), Lucilla Sioli (European Commission)
Recent advances in AI technologies have opened new horizons in all areas of society, but have also created ethical challenges that did not exist in the past. The panel attempted to highlight these challenges and showcase the approaches that the EU and the US are taking towards a solution, exposing the differences but also the common ground. Lucilla Sioli from the European Commission described the EU’s drafting of an “assessment checklist” ensuring the implementation of basic principles (transparency, non-discrimination, respect for dignity, etc.), with which developers of any system containing AI will be required to comply. In contrast, Lynne Parker of the White House Office of Science and Technology Policy informed the panel that the President had signed a six-page Executive Order on AI ethics. The order “includes dozens of references to liberty and American values”, with which companies will themselves ensure compliance.
Overall, while our stay at SXSW was short, it was packed with interesting events and meetings with significant actors in the field (mostly US-based ones). One take-home feeling from the event was that AI is on everyone’s mind, featuring prominently in the titles and discussions of many panels. In particular, its impact on the news industry and on the war between disinformation and verification was widely discussed, both within the events themselves and in informal conversations outside. It is an exciting time for those in the tech and media industries, and we are glad to be part of this wave of new ideas and new solutions to these emerging challenges.
The content of this post is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0).