Part of my series of notes from ICLR 2019 in New Orleans. As you can maybe already guess, this talk was a little different!
- You are part of this world.
- These tools won’t be used the way you think they will be.
- Alternative paths are possible.
- Facebook news feed – optimised to keep people engaged
- uses ML algorithms, but the business model incentivizes certain kinds of information to spread more than others
- recommender systems – people who like one conspiracy theory often like others
- relatively innocent conspiracy theories can escalate to more dangerous ones
- “You’re never hardcore enough for YouTube”
- more engagement == success, but how does that get achieved?
- interpretability & bias are talked about relatively often
- potency at scale is restructuring power
- some people, when scared, tend to vote for authoritarians… what if your political campaign’s algorithm could find those people?
- you might not know your model is doing problematic things
- can we get early intervention without mass surveillance?
What do we do?
- if you build it, people will use it, and not necessarily for what you want
- “if I don’t build it, someone else will” – maybe so, but at least it won’t be you building it
- there are alternative ways of doing things, they’re often just harder
- privacy-preserving ML, etc.
- people in this room are in demand by employers and have a lot of potential power
- by choosing what we work on, we can drive the direction companies take
- so what do we do?
- organize – have a voice
- build alternatives
- “I have a feeling I won’t be invited back… not with these sponsors”
Zeynep and the sponsors of ICLR.
- refusal is powerful, but we need alternatives
- product people should team up with people on the security side
- they’re used to thinking of worst-case scenarios
- if you collect the data, people will come for it
- there’s no historical precedent for anything else
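The "privacy-preserving ML" aside deserves a concrete flavour. One of the simplest such techniques is the Laplace mechanism from differential privacy: clip each individual's value, compute the statistic, then add noise calibrated to how much any one person can move the result. A minimal sketch (my own illustration, not from the talk):

```python
import math
import random

def dp_mean(values, epsilon, lo, hi):
    """Differentially private mean via the Laplace mechanism.

    Clips each value to [lo, hi] so one person's contribution is bounded,
    then adds Laplace noise scaled to the sensitivity of the mean,
    (hi - lo) / n, divided by the privacy budget epsilon.
    """
    n = len(values)
    clipped = [min(max(v, lo), hi) for v in values]
    true_mean = sum(clipped) / n
    sensitivity = (hi - lo) / n
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) by inverse-CDF from a uniform draw
    # in (-0.5, 0.5) (almost surely).
    u = random.random() - 0.5
    noise = -scale * math.copysign(math.log(1 - 2 * abs(u)), u)
    return true_mean + noise
```

Smaller epsilon means more noise and stronger privacy; the design point is that the released number no longer depends too much on any single record — which is one concrete answer to "if you collect the data, people will come for it": collect less, and release only noisy aggregates.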