It’s no secret that search engines, news feeds, product recommendations and similar services use our personal data to cut through the enormous amount of information available and surface the bits that are relevant to us. But how do we know whether the systems and algorithms in charge are fair? How do we know whether the information we end up getting is really the best for us?
To establish fairer systems and build trust and transparency in internet use, a team of researchers from the University of Oxford, the University of Nottingham, and the University of Edinburgh wants to learn more about the perspectives and concerns of internet users. Their goal is to create a sort of ‘fairness toolkit’ consisting of ethical guidelines and policy recommendations.
“Selections made by algorithms are commonly presented to consumers as if they are inherently free from human bias and fair, but there is no such thing as a neutral algorithm,” says Dr Ansgar Koene from the University of Nottingham.
The two-year project, called UnBias, is relevant for society as a whole, as it aims to ensure that internet use is underpinned by transparency and trust. Its results will be widely disseminated through peer-reviewed journals, community groups, schools, and youth clubs.
University of Nottingham (http://www.nottingham.ac.uk/news/pressreleases/2016/october/online-emancipation-protecting-users-from-algorithmic-bias.aspx)