Us vs. them? Decision making in an algorithmic age

We started working on The Decision Maker’s Playbook in 2014, the year Google acquired DeepMind, a leading artificial intelligence company. Two years later, an algorithm dethroned the incumbent Go champion, a testament to how mature the technology had become.

Algorithms and decision support systems increasingly influence our choices. They do this in a number of ways, but primarily they use our past behavior and the revealed preferences of people similar to us to filter our options. Take review platforms such as Yelp or Foursquare. It’s nothing new that we follow the advice of our friends and visit the restaurant most of them recommend. But the internet has made it orders of magnitude easier to share and aggregate data such as reviews and recommendations. So even though we theoretically have many options, we mostly end up booking the restaurant at the top of the list (and, as a recent study found, the one with the largest number of reviews).

Soon the world will be dominated by algorithms that predict fairly accurately what we want, what we prefer, and what we will do next. The more data these algorithms gather, the better their predictions will become. It’s not science fiction to suggest that in the near future, algorithms will know which of our mental (and emotional) buttons to press in order to make us believe, want, or do things. In itself, this is not necessarily bad – after all, we use these algorithms voluntarily because they offer us some kind of benefit. They take the load of sifting through options off our shoulders and present us with the most suitable ones – or with the pieces of information we are most likely to latch on to.
And they don’t have to do that perfectly: it’s enough for them to be better than what we could come up with ourselves. Google Maps’ routing feature occasionally leads us to a blocked street and forces us to take a detour. But in the overwhelming majority of cases, Google Maps leads us along the most time-efficient path, saving us hours, days, or even weeks over a lifetime.

It’s not hard to see that incentives may not be fully aligned: from the perspective of their owners, algorithms are tools to accomplish a goal – such as selling services or keeping us on a website so we can be exposed to ads. In pursuing these goals, such as profit maximization, algorithms serve us suggestions (such as nuggets of ‘news’ designed to stir outrage and polarization) that may not be in line with the goals we set for ourselves (being open-minded and weighing evidence from both sides to form a balanced opinion). Our dependence on algorithms can leave us open to manipulation.

The more we use digital systems, the more data is collected. The more data is collected, the better algorithmic models can be trained, and the more powerful the algorithms become. The more powerful these algorithms become, the more we delegate our decision-making authority to them. And this is where the problem lies: in doing so, we give up part of our autonomy and become dependent on algorithms. We become vulnerable to big tech, which – thanks to the scale effects of data – is concentrating information and making it harder for us to opt out or switch. Just as we let Google Maps pick our routes because it beats any taxi driver at finding the fastest one, we increasingly surrender far more important decisions to algorithms: what we read, whom we date, or whom we vote for.

I’m highlighting the potentially dangerous aspects of technology here, and the problems that come with it: technological dependence, safety, biases, and the opacity of ‘black boxes.’ I’m not discussing the tremendous welfare gains that algorithms – from search engines to systems in logistics and healthcare – have generated. These achievements are clearly laudable, but they don’t take away from the risks.

There is a need for self-regulation and government intervention to shield us against fraud, manipulation, and addiction. But there is also a need for algorithmic literacy – to enable children, teenagers, and adults to understand, reflect on, and interact wisely with algorithms.
Critical thinking – and decision-making – has serious competition: machine algorithms. The smarter we are at reflecting on the models driving machines and critically comparing them to the models in our own heads, the better we can preserve our autonomy.


This text is a further refined version of an excerpt from The Decision Maker’s Playbook (to be published in the summer of 2019 by Financial Times Press).