Computational Arts-Based Research and Theory: Journal Week 12

Social matching algorithms

The Strangerationist experiment pushes an interesting question forward: how do social suggestion algorithms work? What values and principles inform them? What consequences do the properties of these algorithms have?

Firstly, by pointing out that these services use algorithms to make social suggestions, the experiment makes explicit something easy to forget: there is a mechanism driving these suggestions. It might seem a bit redundant to point this out, but a characteristic of many modern social networks is that their presence is so ubiquitous and their use so accepted that their mechanisms are hardly questioned. Naming how they work lifts the veil on the machine that drives them.

This has a series of implications.

First, pointing out that friend suggestions rely on algorithms also points out that there is an agenda driving these suggestions. I don't mean to sound conspiratorial: the agenda of most social media companies is probably to “deliver the best product they can by fostering meaningful relationships”, by which they mean making sure their users stay on their sites for as long as possible. However, once the veil of the mechanism is lifted, users can begin to realize that these algorithms might not be as simple as they initially seemed.

Why?

Well, it can be useful to compare it with other automated systems. Consider something like stock management in large supermarkets or department stores.

How does a store know which products to keep in stock and which products to stop stocking? (As a disclaimer, I don't know for a fact that this is how things work, but I'd be surprised if it wasn't.)

The store probably has some sort of system that analyzes the store's sales: which products are being sold, how frequently, at what time of the year, and so on. This makes sense; it is how a store learns that people in one area don't buy as much tofu, for example, as people in another. If it doesn't sell well, it might not be a good idea to stock as much of it: tofu is a perishable product, so stocking it without a good indication it will sell probably means it will go unsold.
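That stock-keeping reasoning can be sketched in a few lines of code. To be clear, this is my own toy illustration of the logic described above, not how any real store works; the threshold, the weekly granularity, and the `perishable` flag are all assumptions.

```python
def restock_decision(weekly_sales, perishable, min_weekly_sales=10):
    """Suggest a restock quantity from recent weekly sales figures.

    weekly_sales: list of units sold per week (most recent last).
    perishable: if True, be conservative when demand looks weak,
    because unsold stock becomes waste rather than inventory.
    """
    if not weekly_sales:
        # No sales history yet: don't commit stock blindly.
        return 0
    average = sum(weekly_sales) / len(weekly_sales)
    if perishable and average < min_weekly_sales:
        # Weak demand for a perishable product: stocking it risks waste.
        return 0
    # Otherwise restock roughly one average week's worth of demand.
    return round(average)
```

With this sketch, tofu averaging 2 units a week in one neighborhood gets dropped, while the same product averaging 20 units elsewhere keeps getting restocked; the "analysis" is nothing more exotic than an average and a threshold.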

This algorithm sounds so banal it is hardly alarming, but once dissected, it isn't banal at all. When we try to mentally map the two scenarios onto each other (social media and supermarkets), asking what is analogous between them, we might place people either in the position of products or in the position of consumers. Both readings are right, and both have disturbing implications.

The first is the fact that people's identities can be measured, qualified, quantified, stocked, and presented as discardable products. Beyond the obvious questions (who determines what measurements are made? who determines what a “quality” person is?), these algorithms carry implicit biases in how people are evaluated.

And as a consumer, your data is being captured, pooled, and analyzed to serve you a product you didn't necessarily know you were consuming.

Which ultimately leads me to what I think the Strangerationist project was aiming to do: to create a highly visible system, where opting in is explicit, where the suggestions carry as few hidden biases as possible, and where people with an interest in potatoes get pointed out to each other. I think I like that person; I like potatoes too.
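The kind of transparent matching I'm imagining here can be sketched very simply. This is not the Strangerationist project's actual mechanism, just my own toy version of the idea: suggestions based only on openly declared interests, with the matching rule plain enough that anyone can read it. All names and interests below are made up.

```python
def suggest_matches(person, others):
    """Rank other people by how many declared interests they share.

    person: dict with an "interests" set.
    others: mapping of name -> set of declared interests.
    Returns (name, shared_interests) pairs, most overlap first.
    """
    scored = []
    for name, interests in others.items():
        shared = person["interests"] & interests
        if shared:
            scored.append((name, sorted(shared)))
    # The whole "algorithm" is visible: count shared interests, sort.
    scored.sort(key=lambda pair: len(pair[1]), reverse=True)
    return scored

me = {"interests": {"potatoes", "drawing"}}
others = {
    "A": {"potatoes", "cycling"},
    "B": {"chess"},
    "C": {"potatoes", "drawing", "chess"},
}
# C shares two interests with me, A shares one, B shares none.
```

The point of the sketch is that there is nothing hidden to lift a veil from: the opt-in is the act of declaring interests, and the suggestion logic fits in a dozen readable lines.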