↳ Education

October 2nd, 2018

The "Next Big Thing" is a Room

If you don’t look up, Dynamicland seems like a normal room on the second floor of an ordinary building in downtown Oakland. There are tables and chairs, couches and carpets, scattered office supplies, and pictures taped up on the walls. It’s a homey space that feels more like a lower school classroom than a coworking environment. But Dynamicland is not a normal room. Dynamicland was designed to be anything but normal.

Led by the renowned interface designer Bret Victor, Dynamicland is an offshoot of HARC (the Human Advancement Research Community), most recently part of Y Combinator Research. Dynamicland seems like the unlikeliest vision for the future of computers anyone could have expected.

Let’s take a look. Grab one of the scattered pieces of paper in the space. Any will do as long as it has those big colorful dots in the corners. Don’t pay too much attention to those dots. You may recognize the writing on the paper as computer code. It’s a strange juxtaposition: virtual computer code on physical paper. But there it is, in your hands. Go ahead and put the paper down on one of the tables. Any surface will do.

⤷ Full Article

November 18th, 2017

Duchamp Wanted

PREDICTIVE JUSTICE | FACTORY TOWN, COLLEGE TOWN

PREDICTIVE JUSTICE

How to build justice into algorithmic actuarial tools

Key notions of fairness contradict each other—something of an Arrow’s Theorem for criminal justice applications of machine learning.

"Recent discussion in the public sphere about algorithmic classification has involved tension between competing notions of what it means for a probabilistic classification to be fair to different groups. We formalize three fairness conditions that lie at the heart of these debates, and we prove that except in highly constrained special cases, there is no method that can satisfy these three conditions simultaneously. Moreover, even satisfying all three conditions approximately requires that the data lie in an approximate version of one of the constrained special cases identified by our theorem. These results suggest some of the ways in which key notions of fairness are incompatible with each other, and hence provide a framework for thinking about the trade-offs between them."

Full paper from JON KLEINBERG, SENDHIL MULLAINATHAN and MANISH RAGHAVAN here. h/t research fellow Sara, who recently presented on bias in humans, courts, and machine learning algorithms, and who was the source for all the papers in this section.

In a Twitter thread, ARVIND NARAYANAN describes the issue in more casual terms.

"Today in Fairness in Machine Learning class: a comparison of 21 (!) definitions of bias and fairness [...] In CS we're used to the idea that to make progress on a research problem as a community, we should first all agree on a definition. So 21 definitions feels like a sign of failure. Perhaps most of them are trivial variants? Surely there/s one that's 'better' than the rest? The answer is no! Each defn (stat. parity, FPR balance, contextual fairness in RL...) captures something about our fairness intuitions."

Link to Narayanan’s thread.

Jay comments: Kleinberg et al. describe their result as forcing a choice between conceptions of fairness. It's not obvious, though, that this is the correct description. The criteria discussed (calibration and balance) aren't really conceptions of fairness; rather, they're (putative) tests of fairness. Particular questions about these tests aside, we might have a broader worry: if fairness is not an extensional property that depends upon, and only upon, the eventual judgments rendered by a predictive process, exclusive of the procedures that led to those judgments, then no extensional test will capture fairness, even if the notion is entirely unambiguous and determinate. It's worth considering Nozick's objection to "pattern theories" of justice for comparison, as well as (procedural) due process requirements in US law.

⤷ Full Article