↳ Climate

February 17th, 2018

Two Flags

BASIC OPPORTUNITY

Considerations on funding UBI in Britain

The RSA (Royal Society for the encouragement of Arts, Manufactures and Commerce) has published a discussion paper on universal basic income (UBI). ANTHONY PAINTER outlines key points, including some thoughts on funding:

“To fund the ‘Universal Basic Opportunity Fund’ (UBOF), the Government would finance an endowment to cover the fund for 14 years from a public debt issue (at current low interest rates). This endowment would be invested to both fund asset growth and public benefit. The fund could be invested in housing, transport, energy and digital infrastructure and invested for high growth in global assets such as equity and real estate. This seems radical but actually, similar mechanisms have been established in Norway, Singapore and Alaska. In the latter case, Basic Income style dividends are paid to all Alaskans. Essentially, the UBOF is a low-interest mortgage to invest in infrastructure and human growth that brings forward the benefits of a sovereign wealth fund to the present rather than waiting for it to accumulate over time.”

The full paper is available here. And here is the longer section on “The technicalities of a Universal Basic Opportunity Fund,” including building and administering the fund. ht Lauren

  • A new working paper on the Alaska Permanent Fund: "Overall, our results suggest that a universal and permanent cash transfer does not significantly decrease aggregate employment." Link.
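The borrowing-and-investing mechanism Painter describes can be sketched numerically. Every figure below (the size of the debt issue, the borrowing and return rates, the dividend outlay) is a hypothetical assumption for illustration, not a number from the RSA paper; only the 14-year horizon comes from the quote above.

```python
# Illustrative sketch of the UBOF mechanism: issue public debt at a low
# rate, invest the endowment at a (hoped-for) higher return, and pay an
# annual dividend over a fixed horizon. All parameter values here are
# hypothetical assumptions, not figures from the RSA paper.

def endowment_balance(principal, borrow_rate, return_rate, payout, years):
    """Return (fund value, accrued debt) after `years` of operation."""
    fund = principal
    debt = principal
    for _ in range(years):
        fund = fund * (1 + return_rate) - payout  # grow the fund, then pay the dividend
        debt *= 1 + borrow_rate                   # interest compounds on the debt
    return fund, debt

fund, debt = endowment_balance(
    principal=200e9,   # hypothetical £200bn debt issue
    borrow_rate=0.02,  # assumed government borrowing rate
    return_rate=0.05,  # assumed long-run portfolio return
    payout=6e9,        # assumed annual dividend outlay
    years=14,          # horizon mentioned in the quote above
)
print(f"fund: £{fund/1e9:.0f}bn, debt: £{debt/1e9:.0f}bn")
```

Whether the fund ends up worth more than the accrued debt depends entirely on the spread between the borrowing rate and the realized return, which is the core bet of the proposal.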

December 16th, 2017

Bruised Grid

HOW TO HANDLE BAD CONTENT

Two articles illustrate the state of thought on moderating user-generated content

Ben Thompson of Stratechery rounds up recent news on content moderation at Twitter, Facebook, and YouTube, and makes a recommendation:

“Taking political sides always sounds good to those who presume the platforms will adopt positions consistent with their own views; it turns out, though, that while most of us may agree that child exploitation is wrong, a great many other questions are unsettled.

“That is why I think the line is clearer than it might otherwise appear: these platform companies should actively seek out and remove content that is widely considered objectionable, and they should take a strict hands-off policy to everything that isn’t (while — and I’m looking at you, Twitter — making it much easier to avoid unwanted abuse from people you don’t want to hear from). Moreover, this approach should be accompanied by far more transparency than currently exists: YouTube, Facebook, and Twitter should make explicitly clear what sort of content they are actively policing, and what they are not; I know this is complicated, and policies will change, but that is fine — those changes can be transparent too.”

Full blog post here.

The Social Capital newsletter responds:

“… If we want to really make progress towards solving these issues we need to recognize there’s not one single type of bad behavior that the internet has empowered, but rather a few dimensions of them.”

The piece goes on to describe four types of bad content. Link.

Michael comments: The discussion of content moderation, and of digital curation more broadly, conspicuously ignores the possibility of algorithmic methods for analyzing and disseminating ethically or evidentially valid information. Thompson and Social Capital default to traditional and cumbersome forms of outright censorship, rather than to methods that “push” better content.

We'll be sharing more thoughts on this research area in future letters.
