Bruised Grid

HOW TO HANDLE BAD CONTENT

Two articles illustrate the state of thought on moderating user-generated content

Ben Thompson of Stratechery rounds up recent news on content moderation on Twitter/Facebook/YouTube and makes a recommendation:

“Taking political sides always sounds good to those who presume the platforms will adopt positions consistent with their own views; it turns out, though, that while most of us may agree that child exploitation is wrong, a great many other questions are unsettled.

“That is why I think the line is clearer than it might otherwise appear: these platform companies should actively seek out and remove content that is widely considered objectionable, and they should take a strict hands-off policy to everything that isn’t (while — and I’m looking at you, Twitter — making it much easier to avoid unwanted abuse from people you don’t want to hear from). Moreover, this approach should be accompanied by far more transparency than currently exists: YouTube, Facebook, and Twitter should make explicitly clear what sort of content they are actively policing, and what they are not; I know this is complicated, and policies will change, but that is fine — those changes can be transparent too.”

Full blog post here.

The Social Capital newsletter responds:

“… If we want to really make progress towards solving these issues we need to recognize there’s not one single type of bad behavior that the internet has empowered, but rather a few dimensions of them.”

The piece goes on to describe four types of bad content. Link.

Michael comments: The discussion of content moderation—and digital curation more broadly—conspicuously ignores the possibility of algorithmic methods for analyzing and disseminating ethically or evidentially valid information. Thompson and Social Capital default to traditional, cumbersome forms of outright censorship rather than considering methods that “push” better content.

We’ll be sharing more thoughts on this research area in future letters.

  • Hunter Walk, former director of consumer product management at YouTube, describes the process of user-generated content moderation at large sites. Link.
  • On new attempts to use AI to spot fake news: “Testing a demo version of the AdVerif.ai, the AI recognized the Onion as satire (which has fooled many people in the past). Breitbart stories were classified as ‘unreliable, right, political, bias,’ while Cosmopolitan was considered ‘left.’ It could tell when a Twitter account was using a logo but the links weren’t associated with the brand it was portraying. AdVerif.ai not only found that a story on Natural News with the headline ‘Evidence points to Bitcoin being an NSA-engineered psyop to roll out one-world digital currency’ was from a blacklisted site, but identified it as a fake news story popping up on other blacklisted sites without any references in legitimate news organizations.” Link.
  • Using a six-part fake news typology, researchers present initial analyses and visualizations pertaining to the 2017 French presidential election: they found platform disparities (more elaborate fake news like conspiracy theories and fact-manipulation dominated on Twitter, while all forms of misinformation, and clickbait in particular, circulated widely on Facebook), and party disparities (Asselineau and Le Pen supporters distributed the lion’s share of suspiciously sourced content). Link (in French). ht Margarita
  • Related to the above, a possible attempt at promoting accuracy from the middle: “Researchers from Aalto University and University of Rome Tor Vergata have designed an algorithm that is able to balance the information exposure so that social media users can be exposed to information from both sides of the discussion.” Link. ht Margarita. (A toy sketch of the idea follows this list.)
  • Tangentially related: Gobo from MIT Media Lab. “Sign up for Gobo, link it to your other social media profiles, and you can take control of your feed. Want to read news you aren’t otherwise seeing? Use our ‘Echo Chamber’ filter to see what we call ‘wider’ news.” Link.
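
For the curious, here is a toy sketch of what “balancing information exposure” might look like in code. This is a hypothetical illustration, not the Aalto/Tor Vergata algorithm: given a small follower graph and a record of which side each user has already seen, it greedily picks seed users for each side so that as many users as possible end up exposed to both sides. The graph and user names are invented.

from collections import deque

def reachable(graph, seed):
    """Users reached if `seed` reshares (simple BFS over the follower graph)."""
    seen, queue = {seed}, deque([seed])
    while queue:
        u = queue.popleft()
        for v in graph.get(u, []):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def balanced_seeds(graph, exposed_a, exposed_b, k):
    """Greedily choose k seeds per side to maximize users exposed to both sides."""
    exposed_a, exposed_b = set(exposed_a), set(exposed_b)
    seeds_a, seeds_b = [], []
    for _ in range(k):
        # Seed for side A: largest gain in users who have seen B but not yet A.
        best_a = max(graph, key=lambda u: len(reachable(graph, u) & (exposed_b - exposed_a)))
        exposed_a |= reachable(graph, best_a)
        seeds_a.append(best_a)
        # Seed for side B: the symmetric choice, using the updated exposures.
        best_b = max(graph, key=lambda u: len(reachable(graph, u) & (exposed_a - exposed_b)))
        exposed_b |= reachable(graph, best_b)
        seeds_b.append(best_b)
    return seeds_a, seeds_b, exposed_a & exposed_b

# Invented example: a six-user follower graph, one user already exposed to each side.
graph = {"u1": ["u2", "u3"], "u2": ["u4"], "u3": ["u5"], "u4": [], "u5": ["u6"], "u6": []}
seeds_a, seeds_b, both = balanced_seeds(graph, exposed_a={"u1"}, exposed_b={"u6"}, k=1)
print(seeds_a, seeds_b, sorted(both))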

NEGATIVE EMISSIONS

“Has the world come to rely on an imaginary technology to save it?” Climate models depend on a carbon-negative technology that barely exists

“But here’s where things get weird. The UN report envisions 116 scenarios in which global temperatures are prevented from rising more than 2°C. In 101 of them, that goal is accomplished by sucking massive amounts of carbon dioxide from the atmosphere—a concept called ‘negative emissions’—chiefly via BECCS [bioenergy with carbon capture and storage]. And in these scenarios to prevent planetary disaster, this would need to happen by midcentury, or even as soon as 2020. Like a pharmaceutical warning label, one footnote warned that such ‘methods may carry side effects and long-term consequences on a global scale.’

“Indeed, following the scenarios’ assumptions, just growing the crops needed to fuel those BECCS plants would require a landmass one to two times the size of India, climate researchers Kevin Anderson and Glen Peters wrote. The energy BECCS was supposed to supply is on par with all of the coal-fired power plants in the world. In other words, the models were calling for an energy revolution—one that was somehow supposed to occur well within millennials’ lifetimes.”

ABBY RABINOWITZ and AMANDA SIMPSON provide a thorough overview of the key players and technologies in this emerging field. Full piece here. (Some classify these approaches as carbon geoengineering; others reserve the term “geoengineering” for solar methods alone.)

  • “Now scientists are documenting how sequestering carbon in soil can produce a double dividend: it reduces climate change by extracting carbon from the atmosphere, and it restores the health of degraded soil and increases agricultural yields.” Link.
  • See also the New Yorker piece we sent last month. Link.

WHERE MATTERS

A review of place-based development policies

“The general thrust of the evidence does not point to enterprise zones creating jobs or reducing poverty. Notwithstanding the difficulties mentioned above, careful analysis has found mixed results of place-based policies (as discussed in my research note). Studies generally do not find increasing employment in response to these policies. There is even less evidence of reduced poverty in these zones and there is also some evidence of rising housing prices which could hurt those who rent and live in those areas, as opposed to property owners who may not reside in the Enterprise Zone. The types of place-based policies that may be more effective support infrastructure investment or higher-education institutions. The Tennessee Valley Authority emphasized the construction of hydroelectric dams to bring power generation to the region it covered. This was a classic case where a place-based policy fostered economic growth, in this case through industrialization. More recently, there is evidence that university research facilities attract high-tech, innovative firms which, in turn, can form industry clusters that benefit from agglomeration economies.”

Full post here.

“MANY ANALYSTS, ONE DATASET”

Quantifying the consequences of using different analytical frameworks

“Overall 29 different analyses used 21 unique combinations of covariates. We found that neither analysts’ prior beliefs about the effect, nor their level of expertise, nor peer-reviewed quality of analysis readily explained variation in analysis outcomes. This suggests that significant variation in analysis of complex data may be difficult to avoid, even by experts with honest intentions. Crowdsourcing data analysis, a strategy by which numerous research teams are recruited to simultaneously investigate the same research question, makes transparent how defensible, yet subjective analytic choices influence research results.”

Full study available here.

  • A narrative post about making the study. Link.
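
To make the kind of variation described above concrete, here is a minimal, hypothetical sketch in Python (simulated data, not the study’s dataset or models): three equally defensible regression specifications, fit to the same data, recover noticeably different estimates of the same “treatment” effect.

import numpy as np

rng = np.random.default_rng(0)
n = 500
confounder = rng.normal(size=n)                    # a variable analysts may or may not adjust for
treatment = 0.5 * confounder + rng.normal(size=n)  # "treatment" is correlated with the confounder
extra_control = rng.normal(size=n)                 # an irrelevant covariate some analysts include
outcome = 1.0 * treatment + 2.0 * confounder + rng.normal(size=n)

def treatment_estimate(controls):
    """OLS estimate of the treatment coefficient, given a list of control variables."""
    X = np.column_stack([np.ones(n), treatment] + controls)
    coefs, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    return round(coefs[1], 2)

# Three "teams", three defensible specifications, three answers (true effect is 1.0).
print("no controls:      ", treatment_estimate([]))
print("confounder only:  ", treatment_estimate([confounder]))
print("both covariates:  ", treatment_estimate([extra_control, confounder]))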

+++

  • From Wired, a summary of discussions about AI and Ethics from last week’s Neural Information Processing Systems (NIPS) conference. Link.
  • Related to the above, video of Kate Crawford’s NIPS keynote, “The Trouble with Bias,” and a few key points in a Twitter thread.
  • “An overarching constraint for free human mobility is that international political borders are only selectively permeable. Drawing on Ernst Bloch’s work on the possible, the author examines open-borders and no-border arguments and explores the conditions of their possibilities.” Link.
  • Considerations on a robot tax. Link.
  • In Nature, prominent statisticians weigh in on how to fix statistics. Link. (JFI favorite Andrew Gelman is featured; his excellent blog is here.)
  • Addressing bias through datasets: “We have to teach our algorithms which are good associations and which are bad the same way we teach our kids.” Link. ht Margarita
  • A new study from the People’s Policy Project: “During the Obama presidency, the homeownership rate crashed by 4.4 percentage points, erasing all the increase of the mortgage bubble, eventually falling to the lowest level since 1965, before slightly rebounding. It was the greatest destruction of middle class wealth since the Great Depression, and its impact disproportionately wounded black wealth.” Link. ht reader Patrick S.
  • “In an effort to improve our understanding of how political information changes as it propagates from the media to one person to another, I conduct a novel online experiment in which I track information diffusion through individuals in communication chains. I then use content analysis to examine how the information is actually changing, finding that the amount of political information communicated decreases as the number of people in the chain increases. Furthermore, the information is increasingly distorted as the length of the chain increases.” Link.
  • “During the summer between high school and college, college-related tasks that students must navigate can hinder successful matriculation. We employ conversational artificial intelligence (AI) to efficiently support thousands of would-be college freshmen by providing personalized, text message-based outreach and guidance for each task where they needed support.” Link. ht Will
  • A special issue of American Behavioral Scientist on “The role of state education policy in ensuring access, achievement, and attainment in higher education,” with several articles. Link. ht Will
  • “This paper provides evidence of the long-run effects of a permanent increase in agricultural productivity on conflict. We construct a newly digitized and geo-referenced dataset of battles in Europe, the Near East and North Africa covering the period between 1400 and 1900 CE. For variation in permanent improvements in agricultural productivity, we exploit the introduction of potatoes from the Americas to the Old World after the Columbian Exchange. We find that the introduction of potatoes permanently reduced conflict for roughly two centuries. The results are driven by a reduction in civil conflicts.” Link.

Each week we highlight research from a graduate student, postdoc, or early-career professor. Send us recommendations: editorial@jainfamilyinstitute.org

Subscribe to Phenomenal World Sources, a weekly digest of recommended readings across the social sciences. See the full Sources archive.