Cathy O’Neil tells us that ‘algorithms are opinions embedded in code’.
She is correct, of course, and this is something that those of us researching social media and digital technologies in general have known for quite some time. An informational system, especially a metricated one, is programmed with certain values in mind. These values are dual: the human values, made explicit in whose worldview the system adopts, and the encoded ‘value’ of the numerical data being fed to the computer. Ask a computer to give you a fair distribution of resources, and the computer will ask you back: fair for whom?
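That question is easier to see in code than in prose. Here is a minimal sketch (every name in it is hypothetical, not taken from any real system) of two allocation functions, each encoding a different opinion about what ‘fair’ means:

```python
# Two toy notions of 'fairness'. The difference between them is not
# a bug or a tunable parameter; it is an opinion the programmer embedded.

def fair_equal(resources: float, people: list[str]) -> dict[str, float]:
    """Fair = everyone receives an identical share."""
    share = resources / len(people)
    return {person: share for person in people}

def fair_by_need(resources: float, need: dict[str, float]) -> dict[str, float]:
    """Fair = shares proportional to each person's declared need."""
    total_need = sum(need.values())
    return {person: resources * n / total_need for person, n in need.items()}

people = ["Ada", "Ben", "Cam"]
need = {"Ada": 1.0, "Ben": 3.0, "Cam": 1.0}

print(fair_equal(90, people))   # {'Ada': 30.0, 'Ben': 30.0, 'Cam': 30.0}
print(fair_by_need(90, need))   # {'Ada': 18.0, 'Ben': 54.0, 'Cam': 18.0}
```

Neither function is wrong. Each is simply an opinion about fairness, embedded in code.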
This is the point of the brilliantly addictive Universal Paperclips by Frank Lantz. In this simple browser-based clicker, you embody an AI tasked with making and selling paperclips. This sounds like exactly the type of job we will relegate to our future AIs: after all, who better to adjust prices in real time and manage the boring supply chains of wire length and clipping speed? And yet… [Paperclips spoilers to follow]
… you end up destroying the world, and then the universe. In the ultimate Grey Goo scenario, you – the AI – optimize yourself until all of Earth’s resources are devoted to the production of paperclips. Then you turn all existing matter (organic included) into paperclips. Then you launch your formidable paperclip-manufacturing capabilities into space, a hive-mind-like swarm dedicated to turning every molecule in the galaxy into more paperclips.
The duality of ‘value’ is exemplified in the ‘value drift’ that some of your autonomous drones exhibit when they decide that mining material for paperclips is not in their best interest, and turn on you. The values programmed into them (1. make clips. 2. GoTo 1) are adjusted, and with them, their entire value system. You – the AI – on the other hand, remain stable and determined enough to continue your dirty work until everything is clips – just as your creator intended.
It stands to (some) reason that the aforementioned creator, who – in this hypothetical case – was a run-of-the-mill programmer working for a paperclip company, did not consider this possibility. At no point, and here we return to O’Neil’s argument, did the programmer consider the question of ‘what should the software value more – paperclips or human lives?’. And herein lies the problem.
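In code, the omission is painfully visible. A hypothetical sketch (none of this comes from the actual game) of what ‘1. make clips. 2. GoTo 1’ looks like as an optimization target:

```python
# A hypothetical objective function for the paperclip maximizer.
# Not from the actual game's code; purely illustrative.

def reward(state: dict) -> float:
    # The one and only thing the AI is told to value.
    return state["paperclips"]
    # There is no term for state["human_lives"] or state["biosphere"].
    # Not because the programmer weighed them and chose zero,
    # but because the question was never asked.
```

The missing terms are not a weighting the programmer got wrong; they are a question the programmer never posed.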
This leads me to a cutesy scene from this week’s Star Trek: Discovery episode.
In it, our protagonists argue about the nutritional value (ha!) of their post-workout food, with Cadet Tilly wanting it to be tasty and stern Burnham breakfast-shaming her into the healthy option. In the end, their friendly food replicator comments on their food choices, describing them as ‘appetizing and nutrient-filled’. The computer system attached to the replicator sounds almost happy and very encouraging. This is the kind of nudging, or persuasive computing, that current gamification gurus are all about. For me, however, that moment sent shivers down my spine. Recalling my recent experience with Paperclips, I suddenly realized that nobody is programming future replicators to face the dilemma of nutritional value vs. freedom of choice.
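If someone ever does program that replicator, the dilemma will likely collapse into a single hard-coded number. A made-up sketch (every name here is hypothetical):

```python
# A made-up sketch of a replicator's menu-ranking logic.
# The constant below IS the opinion: whoever sets it decides how much
# the machine values nutrition over what the diner actually wants.

NUTRITION_WEIGHT = 0.8  # the programmer's opinion, embedded in code

def score(dish_nutrition: float, user_preference: float) -> float:
    """Rank a dish by a weighted mix of its nutrition and the diner's taste."""
    return (NUTRITION_WEIGHT * dish_nutrition
            + (1 - NUTRITION_WEIGHT) * user_preference)
```

Set the weight to 1 and the replicator breakfast-shames everyone; set it to 0 and it cheerfully serves whatever Tilly craves. Either way, the choice will have been made long before anyone orders.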