Do I See Right?

DISTINCT FUSION

Tracking the convergence of terms across disciplines

In a new paper, CHRISTIAN VINCENOT traces how two synonymous terms for the same concept in agent-based complex systems (ACS) research developed independently in separate disciplines, and how they were eventually brought together.

“I analyzed research citations between the two communities devoted to ACS research, namely agent-based (ABM) and individual-based modelling (IBM). Both terms refer to the same approach, yet the former is preferred in engineering and social sciences, while the latter prevails in natural sciences. This situation provided a unique case study for grasping how a new concept evolves distinctly across scientific domains and how to foster convergence into a universal scientific approach. The present analysis based on novel hetero-citation metrics revealed the historical development of ABM and IBM, confirmed their past disjointedness, and detected their progressive merger. The separation between these synonymous disciplines had silently opposed the free flow of knowledge among ACS practitioners and thereby hindered the transfer of methodological advances and the emergence of general systems theories. A surprisingly small number of key publications sparked the ongoing fusion between ABM and IBM research.”

Link to a summary and context. Link to the abstract. ht Margarita
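
The excerpt doesn't spell out the hetero-citation metrics themselves. As a rough illustration of the underlying idea only, here is a minimal sketch, assuming a toy citation graph and hypothetical ABM/IBM community labels, of one way to measure how often a community cites across the disciplinary boundary; the function and metric are ours, not Vincenot's.

```python
# Toy cross-community ("hetero-") citation fraction, sketched with networkx.
# The graph, labels, and metric are illustrative, not the paper's definitions.
import networkx as nx

# Directed citation graph: an edge u -> v means paper u cites paper v.
G = nx.DiGraph()
G.add_edges_from([
    ("abm_1", "abm_2"), ("abm_2", "abm_3"),  # within-community citations
    ("ibm_1", "ibm_2"),
    ("abm_3", "ibm_1"),                      # cross-community citation
])

# Hypothetical community assignment for each paper.
community = {n: ("ABM" if n.startswith("abm") else "IBM") for n in G.nodes}

def hetero_citation_fraction(graph, labels, source):
    """Fraction of citations by `source`-community papers that point to
    papers in the other community."""
    out_edges = [(u, v) for u, v in graph.edges if labels[u] == source]
    if not out_edges:
        return 0.0
    cross = sum(1 for u, v in out_edges if labels[v] != source)
    return cross / len(out_edges)

print(hetero_citation_fraction(G, community, "ABM"))  # 1/3 of ABM citations cross over
```

Tracked over publication years, a rising fraction of this kind would register the "progressive merger" the abstract describes.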

  • Elsewhere in metaresearch, a new paper from James Evans’s Knowledge Lab examines influence by means other than citations: “Using a computational method known as topic modeling—invented by co-author David Blei of Columbia University—the model tracks ‘discursive influence,’ or recurring words and phrases through historical texts that measure how scholars actually talk about a field, instead of just their attributions. To determine a given paper’s influence, the researchers could statistically remove it from history and see how scientific discourse would have unfolded without its contribution.” Link to a summary. Link to the full paper.
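
For readers unfamiliar with the technique, an LDA-family topic model can be fit in a few lines of scikit-learn. The sketch below is schematic only: the corpus is invented, and the leave-one-document-out refit is a crude stand-in for the paper's much more careful counterfactual measure of discursive influence.

```python
# Minimal LDA sketch with scikit-learn. The corpus is invented, and the
# leave-one-out refit only gestures at the paper's counterfactual measure.
# Topics are not guaranteed to align across the two fits; a real analysis
# would match topics before comparing them.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

corpus = [
    "agent based models simulate social and economic systems",
    "individual based models describe ecological populations",
    "citation networks reveal the structure of scientific fields",
    "topic models track discourse across historical texts",
]

vec = CountVectorizer()
X = vec.fit_transform(corpus)  # shared vocabulary keeps the two fits comparable

def topic_word_dists(rows, n_topics=2, seed=0):
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=seed)
    lda.fit(X[rows])
    # Normalize to per-topic distributions over the vocabulary.
    return lda.components_ / lda.components_.sum(axis=1, keepdims=True)

full = topic_word_dists([0, 1, 2, 3])
counterfactual = topic_word_dists([0, 1, 3])  # "remove" document 2 from history

# Crude proxy for discursive shift: total variation per topic.
print(0.5 * np.abs(full - counterfactual).sum(axis=1))
```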

CROWD BURST

Novel research for the study of violence

Leveraging the ubiquity of video recording in public spaces and recent advances in computer vision, researchers use real-time statistics of optical-flow vectors to map the transition from aggression to violence in crowds.

“Crowd scenes are especially challenging as they present constant, often monotonous, spatially unconstrained human motion. This may not only reduce the effectiveness of a human observing the videos over long periods of time, but it can also flood a Computer Vision system with large quantities of motion information, making methods relying on interest points too time consuming.

In order to design a system capable of operating in real-time, we forgo high-level shape and motion analysis and intensive processing, instead following the example of methods for dynamic texture recognition in collecting statistics of densely sampled, low-level features.… Our method considers statistics of how flow-vector magnitudes change over time. These statistics, collected for short frame sequences, are represented using the VIolent Flows (ViF) descriptor. ViF descriptors are then classified as either violent or non-violent using linear SVM.”

Link to the paper.
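
A compressed sketch of the pipeline the excerpt describes, using OpenCV's Farnebäck optical flow; the binarization threshold and the histogram pooling are placeholders rather than the paper's actual parameters, and the authors' released code (linked below) is the authoritative reference.

```python
# ViF-style descriptor sketch: binarize frame-to-frame changes in optical-flow
# magnitude, pool them over a short clip, and classify the resulting vector
# with a linear SVM (e.g. sklearn.svm.LinearSVC). The threshold and histogram
# size are placeholders, not the paper's values.
import cv2
import numpy as np

def vif_descriptor(frames, bins=16):
    """frames: a short clip as a list of grayscale images."""
    prev_mag, binary_maps = None, []
    for f0, f1 in zip(frames, frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(f0, f1, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag = np.linalg.norm(flow, axis=2)  # per-pixel flow-vector magnitude
        if prev_mag is not None:
            change = np.abs(mag - prev_mag)
            # Mark pixels whose magnitude changed "significantly"
            # (placeholder threshold: the frame's own mean change).
            binary_maps.append(change > change.mean())
        prev_mag = mag
    mean_map = np.mean(binary_maps, axis=0)  # per-pixel change frequency
    # The paper pools over a spatial grid; a plain histogram stands in here.
    hist, _ = np.histogram(mean_map, bins=bins, range=(0.0, 1.0), density=True)
    return hist

# Training then reduces to fitting a linear SVM on labeled clips:
#   clf = LinearSVC().fit([vif_descriptor(c) for c in train_clips], labels)
```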

  • The researchers published their video database and code here. More recent research has developed further descriptors to improve upon the Violent Flows strategy. (Link, link, link.)
  • On the “microregulation” of violence. Sociologists mapping CCTV footage of aggressive incidents find counterintuitive statistical evidence about violence and group size: “Group size was significant.… Group members tended to de-escalate rather than escalate violence, and they appeared to do so more frequently as group size increased.… The consequence of third parties using more conciliatory than escalatory behaviors was a reduction in the likelihood of severe violence.” Link.
  • A book by Randall Collins on the micro-sociology of violence: “Not violent individuals, but violent situations—this is what a micro-sociological theory is about. We seek the contours of situations, which shape the emotions and acts of the individuals who step inside them.” Link. (And link to a PDF of the introductory chapter.)

FITTED JUSTICE

Surveying quantitative definitions of fairness

Researchers SHIRA MITCHELL and JACKIE SHADLEN have published a thorough and accessible systematization of fairness definitions from the statistics and machine learning literatures, assembling and comparing some of the “many conflicting and non-obvious mathematical translations of political intuitions about fairness.”

Alongside the tutorial, the authors raise forward-looking questions for practitioners:

“For more than fifty years, understandings of fairness have been shaped by the language of protected classes, disparate treatment and disparate impact.… As new understandings of fairness emerge from the quantitative conversation, we ought to consider:

  • Are we framing a conversation we want to live with for fifty years?
  • Whose concerns become easier or harder to articulate in a new framework?”

Link to the document. (And link to a slightly older, more condensed version.)
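
As a taste of what these mathematical translations look like in practice, two definitions that recur throughout this literature, statistical (demographic) parity and error-rate balance, reduce to a few lines of arithmetic. The data below are fabricated for illustration.

```python
# Two common quantitative fairness definitions on toy data. The group labels,
# outcomes, and classifier decisions are all fabricated.
import numpy as np

group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # protected attribute
y_true = np.array([1, 0, 1, 0, 1, 1, 0, 0])  # actual outcome
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])  # classifier decision

def statistical_parity_gap(pred, g):
    """Difference in positive-decision rates between the two groups."""
    return abs(pred[g == 0].mean() - pred[g == 1].mean())

def fpr_gap(true, pred, g):
    """Difference in false-positive rates (one facet of error-rate equality)."""
    def fpr(t, p):
        negatives = t == 0
        return (p[negatives] == 1).mean()
    return abs(fpr(true[g == 0], pred[g == 0]) - fpr(true[g == 1], pred[g == 1]))

print(statistical_parity_gap(y_pred, group))  # 0.5: unequal approval rates
print(fpr_gap(y_true, y_pred, group))         # 0.5: "innocents" misflagged unevenly
```

Part of what the survey systematizes is that such criteria conflict and cannot in general all be satisfied at once, which is why the choice among them is political as well as technical.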

  • Arvind Narayanan’s presentation at FAT* also sought to identify a variety of definitions of fairness, aiming to “help policymakers and others better understand what is truly at stake in debates about fairness criteria (such as individual fairness versus group fairness, or statistical parity versus error-rate equality)… [and] to highlight technical observations and discoveries that deserve broader consideration. Many of these can be seen as ‘trolley problems’ for algorithmic fairness, and beg to be connected to philosophical theories of justice.” Link to the video. Link to the text outline.
  • A new paper examines the effects of fairness definitions over time: “Conventional wisdom suggests that fairness criteria promote the long-term well-being of those groups they aim to protect.… We study how static fairness criteria interact with temporal indicators of well-being, such as long-term improvement, stagnation, and decline in a variable of interest. We demonstrate that even in a one-step feedback model, common fairness criteria in general do not promote improvement over time, and may in fact cause harm in cases where an unconstrained objective would not.” Link.
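
The paper's one-step feedback model is not reproduced in the abstract; the toy simulation below is merely in its spirit, a lending setting where approval decisions move a group's scores, with every number invented for illustration.

```python
# Toy one-step feedback loop: a lender approves applicants above a score
# threshold, approved borrowers repay or default, and scores move accordingly.
# All quantities here are invented; the paper's model is more general.
import numpy as np

rng = np.random.default_rng(0)

def mean_score_after_one_step(mean_score, threshold, n=100_000):
    scores = rng.normal(mean_score, 10.0, n)
    approved = scores >= threshold
    # Higher scores repay more often (a linear stand-in for a repayment curve).
    repays = rng.random(n) < np.clip(scores / 100.0, 0.0, 1.0)
    # Repayment nudges a borrower's score up; default drags it down harder.
    delta = np.where(repays, +10.0, -15.0) * approved
    return (scores + delta).mean()

# Relaxing the threshold for a group (say, to satisfy a parity constraint)
# can lower its mean score next period if the marginal approvals mostly default.
print(mean_score_after_one_step(55.0, threshold=60.0))  # selective lending
print(mean_score_after_one_step(55.0, threshold=40.0))  # looser lending
```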

+ + +

  • A major new study on race and income inequality in the United States authored by Raj Chetty, Nathaniel Hendren, Maggie Jones, and Sonya Porter. Link to the paper, link to a non-technical summary, and link to coverage (with detailed data visualizations) in the New York Times.
  • In brief: colleges should bear the burden of student loan default. Link. ht Bobby
  • Preview the introductory chapter to Glen Weyl and Eric Posner’s new book Radical Markets. Link.
  • At the current rate, “substantially transforming the energy system would take not the next three decades, but nearly the next four centuries.” Link.
  • On the difficulty of reproducibility in machine learning, and some encouraging solutions. “I’ve had several friends contact me about their struggles reproducing published models as baselines for their own papers. If they can’t get the same accuracy that the original authors did, how can they tell if their new approach is an improvement?” Link.
  • The effects of unconditional cash transfers on child emotional health. Link.
  • Washington State has passed an automatic voter registration law. Link. For context, a 2015 paper on voting patterns found that not being registered was by far the most cited reason people stayed away from the polls. Link to that paper. (And link to Brennan Center’s AVR legislation tracker.)
  • “What caused the U.S. income-based repayment program to grow much larger than many experts expected? That is where the Australia case is instructive.… Borrowers pay a low interest rate equal to inflation and are exempt from payments until they earn AU$55,874 (US$36,850). Once a borrower’s income exceeds the threshold, he owes a flat percentage of all his income (i.e., first dollar) toward the loan that year. This rate increases as a borrower’s income rises, creating a progressive repayment structure.” (A sketch of this first-dollar arithmetic follows at the end of this list.) Link. ht Ankit
  • Open Markets’ Matt Stoller and Barry Lynn with advice to the FTC on how to deal with Facebook: restructure it. Link.
  • On the European political roots of post-war international development discourse. Link.
  • A VoxEU post on the state of redistribution through taxes and transfers in OECD countries: “The decline in redistribution over the last two decades documented here took place in a period of flattening market income inequalities – indeed, in many OECD countries redistribution has declined by a sufficient margin to push up inequality after taxes and benefits despite a mild decline or stagnation in market income inequality.” Link.
  • Do autocratic regimes produce higher literacy rates? Link.
  • In a recent paper, Lant Pritchett argues against international development orthodoxy, presenting the case that wealthy state borders impose an extraordinarily high cost on the global poor that outweighs existing development initiatives. “What I mean by ‘the best you can do’ is addressing ‘global poverty’ in the ways that individuals who want to give philanthropy directly to ‘the poor’ seek. As we will see, there are some programs that do modestly better than cash, but not many and not by that much. What I mean by the ‘least you can do’ is just let individuals engage in the perfectly ordinary economic transactions of taking a job and getting paid a wage…. The ‘best you can do’ in situ is much less effective than the ‘least you can do.’” Link. ht Sidhya
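
Returning to the Australian repayment item above: since a first-dollar structure is easy to misread as marginal-bracket arithmetic, here is a toy calculator. The threshold comes from the quote; the bracket rates are placeholders, not the actual Australian schedule.

```python
# Toy calculator for the repayment rule quoted above: below the threshold you
# pay nothing; above it, a single rate applies to ALL income (first dollar),
# not just the excess, and the rate steps up with income. The threshold is
# from the quote; the bracket rates are invented placeholders.
THRESHOLD_AUD = 55_874

# (income ceiling, rate applied to total income): illustrative brackets only.
BRACKETS = [(65_000, 0.04), (80_000, 0.05), (float("inf"), 0.06)]

def annual_repayment(income: float) -> float:
    if income <= THRESHOLD_AUD:
        return 0.0
    rate = next(r for ceiling, r in BRACKETS if income <= ceiling)
    return rate * income  # flat percentage of all income, not marginal

print(annual_repayment(55_000))  # 0.0: below the threshold
print(annual_repayment(60_000))  # 2400.0: 4% of all income
print(annual_repayment(90_000))  # 5400.0: a higher rate as income rises
```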

Each week we highlight research from a graduate student, postdoc, or early-career professor. Send us recommendations: editorial@jainfamilyinstitute.org

Subscribe to Phenomenal World Sources, a weekly digest of recommended readings across the social sciences. See the full Sources archive.