
Witnessing

Accounting for everything that goes on in daily life feels overwhelming and basically impossible. In all of our daily activities there are many byproducts and implied processes that might not be immediately evident.

The practice of “witnessing” can change how we conceive of the processes we engage in each day. This idea of “witnessing” is a matter of perspective. When I wake up and take a shower, the water I'm using is a secondary character in the narrative I follow of what my day is going to be like. By working with different witnessing practices we can effectively change who the main character is.

What if the story of my day is told from the perspective of water? What information would this change of perspective bring me? Granted, I'm changing perspective for my own benefit, but my point here is that taking on the perspective of other parts of the processes we partake in daily can reveal a lot about our relationships with those processes.

How can this change of perspective actually manifest? This is where the more tangible side of “witnessing” becomes relevant. Obviously I can't turn myself into water, but I can monitor my apartment's water consumption, and I can visualize it with different technological tools. I could have a phone app that shows, in real time, how much water is being consumed.

This is an idea that is already being implemented, in some form, in current phone systems. On iOS and macOS devices, you get daily statistics for your software usage. A built-in OS utility lets you know how much time you've spent on different apps, and what their categories are. This utility reports how much your usage is changing over time, and you can choose to use this information as you will. Maybe realizing you've been using Instagram for 4 hours each day is a good way of re-evaluating your behavior.

I think this becomes a matter of how information can be directed towards people's attention. We have a limited bandwidth of attention and information we can consume, which I think is one of the most consistent themes we encounter when thinking about the early 21st century – how we're consistently invaded by overwhelming amounts of information. However, much of this information is irrelevant. I think it's important to be conscious of what information we choose to consume, as it's an important part of our cognitive diet.

Observing our daily routines with “witnessing” practices makes me think of how the concept of emergence might come up in patterns of our day-to-day behavior. So far in my experience, emergence is a term often used to describe the behavior of natural systems, and in my own practice it is mostly concerned with aesthetics and behaviors in the context of art. However, what if observing our day-to-day life could lead us to understand the basic characteristics that underpin the emergent behaviors we take part in? What if this new perspective could shed light on small habits we can change that can have a large social and personal impact?

Virtual Reality

Let me preface whatever I have to say with the following: I have a hard time relating to virtual reality. I just don't buy it. I don't find it interesting. I don't think it's the direction my instincts want technology to evolve towards. Be it for artistic or entertainment or utilitarian purposes, I'm not in for the ride.

I think the line between technology and nature is interesting though, and I feel the introduction of VR forces me to ponder this topic. I feel a lot of concepts can be played with here. While it could be argued that creating make-believe worlds controlled by 3D graphics engines is contributing to a dystopia where it'll be impossible to distinguish fiction from reality, the same could be argued about creating make-believe worlds controlled by social media websites running on phones.

However, the difference is clear. One takes much more effort, and is more evident about its presence. I wonder what this means about the technologies we take into our lives: how invasive do they need to be for us to worry about the effects they have on our lives? And how does this “invasivity” manifest?

Worlds?

Many of the points that seem to be used to make VR more appealing are true of any art form that invites any degree of immersion from its audience. Immersion is a word that I feel has been over-used, or somewhat boxed in, by the XR community. I feel that this community (along with the video game and enhanced cinematic experience industries) has taken this word and anchored it with implications of technological seamlessness.

For me things get slippery here. Yes, I do think that something that is immersive has a degree of seamlessness between something that is real and something that isn't (the “real” world and the “constructed” worlds, I guess). However, I feel that the VR/XR interpretation of immersion/world-building is far too literal for my taste.

Any writer can create worlds. Any artist can create a sense of immersion with their work. A world is simply an implied set of rules that guide the behavior and parameters of the elements that belong to it. A world isn't tied to a tool or a medium. I dislike this usage of the term “world-building”; it feels self-important.

However, I want to conclude what could possibly be construed as negativity by saying: I encourage people to keep researching VR and keep working on it. Humanity must be open to all avenues of scientific and artistic research, and who knows, maybe something deep and interesting can come out of it. I just doubt it'll be for me.

Touching

Touch is an interesting, conflicting sense to work with. It's something that creates a certain line between an artist and their audience, sometimes in a literal sense, sometimes in a metaphorical sense, and sometimes somewhere in between.

I think that the line created by the social contract between artist and audience is important to consider – it's part of what frames the interaction from a social point of view. Touch plays a role in the constraints that should be considered when developing artistic pieces.

The limits of performance are fuzzy; it's an idea some people dedicate their lives to exploring. However, even if artists don't deliberately address them, these limits constrain art pieces. I believe touch is a fundamental barrier between people in general (and thus, between artists and audiences), and as such, it has traditionally defined a limit beyond which art pieces don't usually pass. These implied limits of personal space and spatial independence have limited (not necessarily in a bad way) the scope of what can be achieved by art pieces.

Physicality and touch are not often conceived of as material for an art form. It's interesting that touch receives this under-privileged position among the senses. Hearing has music, and sight has visual art. However, the previously mentioned constraints on touch have probably never allowed this sense to be used in an artistic context to the same extent as the others.

Another thing I find interesting about touch is how often it's used in metaphors: “I found your piece to be so touching” can be a highly flattering and pleasant thing to hear. But what does that mean?

The way I see it, saying that art is touching comes directly out of the fact that touch represents the fringes of art. It alludes to the fact that personal space can be of such intimate and deep importance to individuals that being “touched” by an art piece represents complete immersion into the work, with a certain degree of intimacy permitted by the allowance of touch (albeit in a metaphorical sense).

I think that one of the interesting things about modern art pieces that allude to touch or incorporate it in some way (mostly through technologically enhanced means) is that such work stands (or maybe could stand) between the romanticized conceptualization of touch I described previously and the acceptance that touch is simply a sense, and as such, through contextual cues and sensorial orchestration, can be used to create a coherent artistic experience.

It's interesting that bringing a new sense into the world of art brings a series of questions with it. Personally, I have no clue how touch could be involved in art pieces other than as an augmentation of other senses. Mostly, I think of how sound can be paired with skin sensation, as I often feel goosebumps with music I find powerful, and I think it could be interesting to explore the connections between sound and goosebumps this way.

I think that touch is a sense that is often very private to individuals, and as such, needs to be framed in a way that respects these conceptions of personal space, but also invites audiences to experience art forms in a new way that might lead the aesthetic and conceptual expression of a work to reach where it intends to reach. But I realize this all sounds kind of vague.

Additionally, something I kept thinking about throughout these discussions is how I don't necessarily need to be touched for my sense of touch to be stimulated – often close-up images can have a similar effect. I also think it's interesting how these ideas intersect with spatiality, and thus with sight (forming a multi-modal experience?). Touch is one of the ways we can understand the world around us, so it's interesting what effects its involvement, either physical or metaphorical, can have.

Semantics of Sensing

I personally feel undecided on the usage of sensors in art. However, the idea of “sensors” carries a great semantic load, with many implications going in different directions, so I think an exploration of what they are and what their usage entails is important.

A “sensor” could simply be a form of device that captures activity from the “physical” world and encodes it in an electrical form. It's worth noting that this description encompasses many different kinds of devices.

A microphone, for example, encodes compressional sound waves by letting them move a diaphragm (attached to a coil) within a magnetic field, replicating the same pattern of change from one medium to another. Compressions of air in one direction represent changes in electric potential in that same agreed-upon direction. Standardization is also an important component in forming digital or electric systems (which sensors form part of), so ultimately these sensors need to conform to a protocol and standard to remain functional.

However, not all sensors have the nuance of a high-grade microphone, and I think this is kind of implied. Motion sensors are simpler than cameras, even though both interface one medium with another, which is fundamentally what a sensor is tasked to do.

So what is a sensor, then? If there's a difference between an alarm sensor and a camera, then where does the distinction lie, and why? Well, I believe that the distinction lies in various things, but partly in the scope and usability of an interfacing tool. It feels like what is conceptually seen as a sensor tends to either be very small (in a modular sense – a small component instead of a larger whole), very focalized (single-purpose humidity detectors, for instance), or tends not to be as reliable as a larger tool such as a camera or a microphone.

So if we step out of the “functional” (yet highly opinionated, since they strongly influence the data they convert) domain of sensors that includes microphones and cameras, we find ourselves among more modular, single-purpose, or unreliable devices.

This ends up leaving me in the very highly opinionated corner of my blog. I feel like a lot of what people find engaging about sensors is the narrative they provide, and not the data they're alleged to communicate. If you strip the narrative of data capture away, the aesthetic value ends up (not always, obviously, but in my experience, more often than not) not being able to stand on its own feet as an artistic statement. The narrative component becomes the aesthetic focus.

Is this inherently negative? No – it depends on the case. Narrative-driven art can be powerful, especially when related to environmental concerns, or to issues that fall below the sensitivity thresholds of human perception. There really is a place where it's understandable that the narrative is the focus of a project.

It all depends on the case. I will explore one that I dislike, which mostly pertains to where I perceive some explorations in the field of music to be directed.

For example, I think that people using brain-wave sensors to control musical instruments aren't doing it because it sounds good or even interesting; they're doing it because it brings a narrative angle to their performance. In my opinion and experience, there's no reasonable or apparent way in which brain waves create interesting sonic patterns or patterns useful for modulation, yet people are drawn to them because it fuels the narrative of the performance. In this case, the narrative is that of the power of the subconscious: mind control, as it were. This has very cool implications, if you think about it. But when you hear the music it makes, you might as well be using a random signal generator; the façade of depth, mind control, or innovation falls apart. I dislike being negative, but I simply do not buy it. However, this narrative has its adherents, and those who find it engaging, so good for them.

I understand that the narrative that surrounds a piece is important; it's a great part of the experience an audience member engages in. But I just can't bring myself to agree that the narrative of a project is more important than how a piece stands on its own, in the subtleties of what constitutes it beyond a gimmicky trick. However, it depends on the case.

Social matching algorithms

The Strangerationist experiment pushes an interesting question forward: how do social suggestion algorithms work? What values and principles inform them? What consequences do the properties of these algorithms have?

Firstly, by pointing out that these services utilize algorithms when making social suggestions, it makes evident the undeniable truth that these suggestions have an algorithm driving them. It might seem a bit redundant to point this out, but a characteristic of many modern social networks is that their presence is so ubiquitous, and their use so accepted, that their mechanisms are hardly questioned. Pointing out how they work lifts the veil of the machine that drives them.

This has a series of implications.

First, pointing out that friend suggestions utilize algorithms also points out that there is an agenda driving these suggestions. I don't mean to sound conspiratorial, as clearly the agenda of most social media is probably to “deliver the best product they can by fostering meaningful relationships”, by which they mean making sure their users stay on their sites for as long as possible. However, once the veil of the mechanism is lifted, users can begin to realize that these algorithms might not be as simple as they initially seemed.

Why?

Well, it can be useful to compare it with other automated systems. Consider something like stock management in large supermarkets or department stores.

How does a store know which products to keep in stock and which products to stop stocking? (As a disclaimer, I don't know for a fact that this is how things work, but I'd be surprised if it wasn't.)

The store probably has some sort of system that analyzes the store's sales and sees which products are being sold, how frequently, at what time of the year, etc. It makes sense: this way, stores know that maybe people don't buy as much tofu, for example, in one area as in another. If it doesn't sell well, it might not be a good idea to stock as much tofu; given that it's a perishable product, stocking it without a good indication that it might sell probably means it'll go unsold.
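To make that guess concrete, here's a minimal sketch of what such a rule might look like. Everything in it (the names, the fields, the thresholds) is my own hypothetical invention, not a description of any real store's system:

// Hypothetical restocking heuristic: reorder roughly what sold recently,
// and stop stocking perishables that barely sell.
interface SalesRecord {
    product: string;
    unitsSoldLastWeek: number;
    perishable: boolean;
}

function restockQuantity(record: SalesRecord): number {
    // A perishable that barely sells isn't worth restocking at all.
    if (record.perishable && record.unitsSoldLastWeek < 5) {
        return 0;
    }
    // Otherwise, order last week's sales plus a small safety margin.
    return Math.ceil(record.unitsSoldLastWeek * 1.1);
}

console.log(restockQuantity({ product: "tofu", unitsSoldLastWeek: 3, perishable: true })); // 0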

This algorithm is so banal that it hardly seems alarming, but when dissected, it really isn't so harmless. When we try to mentally map the two scenarios (social media and supermarkets), trying to understand what is analogous between them, we might think people assume either the position of products or the position of consumers. Both are right, and both have disturbing implications.

The first is the fact that people's identities can be measured, qualified, quantified, stocked, and presented as discardable products. Besides the obvious issues (who determines what measurements are being done? who determines what a “quality” person is?), there are many biases in how people are evaluated that are implied by these algorithms.

And as a consumer, your data is being captured, pooled, and analyzed to serve you a product you didn't necessarily know you were consuming.

Which ultimately leads me to what I think the Strangerationist project was aiming to do: to create a highly visible system, where opting in is very evident, where suggestions have hardly any biases in them, and where people with an interest in potatoes are pointed out. I think I like that person; I like potatoes too.

Computational Arts-Based Research and Theory: Journal Week 11

Context

In another class, I was reading a text that referred to the way in which industrialization would influence how people perceived sound, and thus change how people composed and related to music. This seems somewhat simplistic at first – is there any weight to a statement affirming that people would be drawn to noise because noise surrounds them?

If you look at the way music has evolved in the last century, I think it's rather hard to assert that there has been an entirely homogeneous progression towards the embrace of noise, but I do feel like noise plays a big part in how most music is made nowadays, even if it's not always exactly deliberate or conscious. I'd also mention that not embracing noise turns certain types of music into breaks from the expected.

I must admit, I believe that “noise” is a deeply complex subject where semantics can get fuzzy (pun intended), and where the implications of how noise can manifest in music (and what music even is) can be time-consuming to describe with precision. My point in mentioning the progression of noise in 20th century music wasn't necessarily to have a fully coherent and deep discussion on noise, but rather to point out other things.

The context in which things are done is a huge part of what those things mean. I believe that the usage of noise in music is a beautiful example of how contextualization shapes definitions and arguments. A random waveform with frequency components between 4 kHz and 7 kHz might be an undesirable hiss in one context, but it might be the right sound for another composition. Ideas and concepts can't be understood simply by what they are on their own, as if they were readings from technical devices; they must be understood by what surrounds them in their containing structure. Furthermore, a single musical composition can't be understood on its own – it too needs to be contextualized. Soon, we realize that ideas are simply fractals of contextualized meaning.

I believe that this relates to notions of how we relate to concepts as well. I think that wanting to have an absolute and objective understanding of how things can be conceived is something that is simply untenable, and somewhat pointless.

This is why webs, networks (or whatever synonym you can think of for a collection of mutually interconnected and co-dependent elements that form a whole) are so important. If things can only be understood and explored by the context that they're in, then it's important to leverage the systems that enable the contextualization and relation of said elements.

An interesting, and somewhat lyrical, observation would be to point out that ideas connect and relate just like bursts of electrons through different circuits of neurons in the brain. Forgive my poor neurological description, but it's simply interesting to point out that just as definitions are fractals of context, the way in which humans perceive and structure context as a concept might be derived from the way in which our brains operate in the first place.

The Social Practice of Science

I feel like science lives in a strange neighborhood in the city of human intellectual output. It would be so satisfying for its definition to be clean-cut and simple, but I don't think it necessarily can be. At the end of the day, if you look at the evolution of science, its origins are in philosophy. A lot of the practices we consider scientific nowadays originally came from philosophy, and to an extent, the drive that powers science came from ancient philosophers.

My understanding is that science refers to a field that is supposed to provide objective analyses of reality. Also, the practice of science is rooted in the basic understanding that no single individual can have access to objective reality, as every single person has perceptual biases and shifts that prevent this from being the case. However, through the cooperation, evaluation, and questioning that the scientific method is supposed to make central to its practice, the biases and errors that individuals might introduce can, to whatever extent possible, be ironed out.

However, this means that socialization is a key element of science. Since no single person can have access to “objective reality” as their perceptions are affected by their experiences and limitations, the practice of science requires multiple people. It is a social activity. And, as any other social activity, it is subject to the virtues and vices of human interaction. This is a big vulnerability in science – it can be skewed by humans to benefit their purposes. However, this is also a key feature of it.

Can we say that “physics is reluctant to change”? I don't know – it almost feels like it depends on what you mean by physics. Is “physics” a synecdoche that refers to its practitioners, or to its commonly accepted theories? Or is physics an abstract value that refers to the desire to find the truth about our physical world?

Personally, I think it matters to be clear on which one is meant. Physics in itself has no agenda. However, the academia of physics might, since it's a collective of people aiming to steer a field in a given way. Is it even possible to separate the body of knowledge from the container “academia” that houses it? Can one exist without the other?

It feels like this is where the talk of “ecologies of practice” begins to make some sense to me. If an ecology is a space (be it literally or figuratively) where there's an interaction between different parts of a complex whole, then it seems to make sense to think about how these different parts may interact.

I think that it's key to understand that science is a social practice simply due to the fact that it requires socialization to be articulated and developed. This means that on some level, there has to be a post-individual structure that evaluates the credibility of claims made in the field. This might be frustrating, but at the same time, it prevents the system from falling into incoherence. However, incoherence relative to what?

Perhaps there are other ways. Perhaps the centralization of power and its delegation to academia leaves a huge vulnerability. This gives a select few the power to decide the value systems present in the practice of physics. What if twisted or incompetent individuals get to steer the value system of a field? What effect will that have on humanity, and for how long?

A complicating factor is that just as we as individuals don't have access to objective reality, groups don't inherently balance each other out into a reasonable center point either. If they did, then humanity would be a lot more peaceful than it has been so far.

Steering fields of thought (and what their values are) is something that takes active thought, re-evaluation and clarity. It's not something that happens intuitively. This is also the case with governments, groups... any form of relationship, really.

Recursion

As a high school student, I remember being told that you can't, or at least shouldn't, define terms in a self-referential manner.

Having this pointed out at a young age made me realize how complex meaning can be sometimes. If not that, then at least how complex it can be to articulate concepts that at one point might've seemed to be self-evident.

You can't define time as 'the time it takes for things to happen', as the definition is, in some way or another, making reference to itself.

I know, perhaps this is not the best example to illustrate my point, but hopefully this generic example brings up better and more relevant examples of how this has happened in your own life.

Sometimes we're at a loss for words for defining things that we see every day, and feel an odd sort of gravity towards simply repeating the same word.

We probably tend to do this as a crutch. I think that to some extent we're delaying our responses in order to come up with an actually satisfying one. But we never do.

The reason that self-referential definitions don't tend to work in most cases is that they open an endless wormhole out of which there is no escape. If words are shorthand for their definitions (which, honestly, I don't know if they are), then if a word is included in its own definition, we will keep descending into the definition over and over again. Following my not-so-good example:

'time is the time it takes for things to happen'

'time is the [the time it takes for things to happen] it takes for things to happen'

'time is the [the [the time it takes for things to happen] it takes for things to happen] it takes for things to happen'

'time is the [the [the [the ..... '

and so on.

This little example brings to light the fun game of recursion. In the definition of a term, this simply does not yield a usable result. Yes, we're going down an infinite wormhole, but we're also going nowhere; it's not a satisfying result, and so it's not an appropriate definition.

What about computers?

Recursive definitions get a bit more interesting when applied to computers. This is, in part, because computer languages have a different way of operating than natural languages.

Perhaps I'm just saying that due to my currently limited perspective, but certain syntactic elements of computer languages make recursion a manageable phenomenon instead of an illogical one. This is because of conditional clauses, which allow us to contain the scope of self-reference.

Put in English, and taking my original not-so-good example of a self-referential definition, this would look something like:

time is the time it takes for things to happen as long as you haven't defined time as a subset of time more than once.

The condition that starts after 'as long as' makes sure that we don't define time as a subset of time more than once, which is the one time (no pun intended?) we do it in the first sentence.

This might not make any sense in linguistic terms, but in computational terms, this is manifested with the usage of 'if' statements within a self-referential, recursive function.

What this means is that without an exit condition, a recursive function will simply not work. It will keep calling itself until it exhausts the machine's resources (in practice, the call stack) and inevitably crashes. Insert reference to entropy here. For example, if we ask a computer to evaluate the following function:

function x(a: number): number { return x(a + 1) }

Then, when we call it, it calls itself, which then calls itself, which then calls itself, and so on forever. We can use an if statement (or any other type of conditional clause) to keep this from getting too silly.
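As a minimal sketch of what that looks like (the cutoff of 10 is an arbitrary choice of mine, nothing canonical), the same function with an exit condition could be written as:

function x(a: number): number {
    // exit condition: stop recursing once the cutoff is reached
    if (a >= 10) {
        return a;
    }
    // otherwise, keep calling ourselves with a larger argument
    return x(a + 1);
}

console.log(x(0)); // prints 10: the recursion bottoms out instead of crashing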

Why does this matter? Why is this interesting?

To me, this is interesting because recursion exists in real life – in “nature”. Maybe it doesn't exist (or can't function) in our languages, but it exists in visual patterns that we see every day: leaves, branches, cells, snowflakes. Fractals.

Language can't encapsulate self-reference in the same way that “nature” does because language is inherently limited by our human capacity to deal with reality. Perhaps language does handle self-reference in a perfectly logical way, but perhaps our brains can't conceive it. Perhaps we couldn't have come up with the idea of trees if we hadn't seen them first.

This is perhaps one of the reasons why fractals seem to be so mysterious and so fascinating. There is seemingly no hierarchy or order to guide our perspective. There is no bottom, there is no top, it just 'is'. And given that we have many limitations that contain us and most of the concepts and objects we interact with, interacting with a concept that is unbound and uncontainable is somewhat hard to fathom.

“Intelligence”

Circular reasoning can be a bit exhausting. It gets hard to tell what's true. It can even be hard to tell what the point of a discussion was.

I think that when discussing the concept of intelligence, people tend to find themselves in circular arguments – not necessarily out of malice or out of manipulative intentions, but simply as a result of the slippery slope we find ourselves in when discussing a term that frankly seems outdated.

We live in a world that uses terminology that was, in great part, developed by people who have long been dead. However, the world keeps changing in increasingly fast leaps. We're left trying to make sense of the world through the lenses of people from a different time, while at the same time looking forward to the future. We end up having a hard time reconciling all these temporal differences, and end up making a mess of things.

Having this in mind... what is intelligence?

Is it the capacity to draw a reasoned conclusion from known data? Is it the capacity to correctly/successfully extrapolate? Is it the ability to adapt to a new context? Is it the ability to communicate? Is it the ability to think? To perceive?

Is it a uniquely human quality? Can animals be intelligent? Can machines be intelligent?

Each question opens a maze of arguments, assumptions, doubts, and new questions.

One can get lost in the spiral of assumptions, re-definitions, re-contextualizations... the list of conditions and considerations is seemingly endless. And inside this maze, it becomes hard to tell if a point is being bent for the convenience of an argument. Eventually, it becomes hard to tell if there is a point at all. The discussion becomes very convoluted.

I think that trying to define intelligence ends up being like an attempt to solve the unsolvable. Defining intelligence feels like trying to justify the existence of a term that was created when the body of knowledge handled in the West was more limited than it is today, and when the human experience was vastly different from what it is today.

I find this paradoxically ironic, because the term would in and of itself suggest a capacity for adaptability that our use of it indicates we don't have.

However, I believe it is crucial to understand that there is an inherently human way of thought and of understanding ourselves and our environment. It exists, and it's frankly beautiful.

To think that humans can make human tools is something deserving of awe and inspiration. Our tools can heal, feed, and protect people.

Our thoughts can describe reality to the best of our abilities, and our descriptions of reality can build physical manifestations that prove that our assumptions derived from observation were correct to at least some extent.

Our communications can heal the lonely, the alienated, the confused, the hurt. It can help the scared feel brave, and it can make our species stick together.

Or not.

Our human behavior is easily corruptible, because whatever makes us human means that we're also inherently flawed.

Are our flaws part of what we define as “intelligence”? Is our history of genocide and murder... “intelligence”? Is our tendency for violence... “intelligence”?

No, it is not.

But it is part of what we are, and it is part of what human beings are.

My point is that “intelligence” is a limited and outdated term. It doesn't really encapsulate the human (or animal) ability to think or behave in a particular way. It feels like the term is more indicative of a fetish of Enlightenment.

I realize we can't just “break up” with the word “intelligence”, but it feels limiting and limited. I don't have a solution – however, I do think we can be a bit more specific when we discuss human thought and behavior, and I believe this will lead to not even needing the term.

Being good at math does not make you intelligent – it makes you good at math.

Being good at the saxophone does not make you intelligent – it makes you good at the saxophone.

Being good at “learning” does not make you intelligent – it makes you good at learning.

These are all skills that can be practiced, honed, and developed under the right circumstances. Some people display better capacity at some skills, some people are unfairly skilled, and others are unfairly unskilled, but most, by definition, are average.

I realize that saying “being good at 'learning' doesn't make you intelligent” might sound a bit odd, but my point here is that our conception of intelligence is quite possibly rooted in a broken premise.

In his own image

When I was a teenager, a question I found interesting was “did God create humans, or did humans create God?”

As far as I'm concerned, humans created God – given there's no proof any God actually exists, that's the only reasonable conclusion one can get to without resorting to non-logical arguments like “faith”.

(note – just because something is non-logical, it doesn't mean it doesn't have its benefits. I'm sure that some believers find some traceable benefit to their belief. However, these benefits don't justify the existence of a God)

Wait what? What does this have to do with Artificial Intelligence?

It seems to me that this is a moment when our kind is turning into Gods. Or at least, we're playing one of the roles that we assigned to the Catholic God (which is the one I'm the most familiar with).

We're bringing new manifestations of life into being – a new species, if you will – and we're making them in our image. While maybe “life” feels like too big an assertion, we're certainly creating tools that exist in an entirely new category.

I think that since we have such a pathological issue with who “made” us and why they made us this way, we're having issues with bringing new beings into existence.

It seems absolutely absurd to me that we'd like to humanize computers. Be they hard-coded computers or computers that utilize machine learning, I don't need to anthropomorphize them. I understand that these are tools beyond anything ever seen before, but I find the humanization quite off-putting.

On one hand, I find it off-putting because I'm annoyed by the features that certain people choose to highlight in humanity.

I don't want a sexy robot hologram friend to fetch me my email. This simply doesn't figure in my interests, and I want my interactions with machines to be reduced to the smallest degree possible.

Silliness aside, I think that the implications of creating a human-like being are just too complicated to be worth the effort. We can't create a human, so why bother?

Why not let this new thing be its own thing? It's not a human, it doesn't need a human face.

The automation of specialist jobs

I find the parallels between the mechanically-focused industrial revolution of the 18th century and the data-focused industrial revolution of our present times to be very interesting, thought provoking, and somewhat terrifying.

Living through a technological revolution has made me develop empathy for the Luddites. Often we look at the past as a series of events that happened to some people we're not very emotionally invested in. We know how their story ends, and we know their endpoints (somehow) led to where we are now, but we often forget the obvious fact that they didn't know what was going to happen as it was happening to them.

A notable parallel I feel between these two industrial revolutions is the fear that new innovations bring to the working classes. I'm personally terrified by the idea of current job obsolescence. But at the same time, it makes me hopeful. It's a strange combination.

How will I provide for myself and look after people I care about in a future where there are no jobs my present mind can imagine? And furthermore, who will take the helm of the new world created by this revolution? Will it be ill-intentioned people? Will it be well-meaning but incompetent people? Or will it be the “system” itself that will take lead of the new world?

Despite the huge anxiety and fear that our possible futures bring, they also bring interesting questions. Not all changes are negative, and some changes have the opposite effect of what would've been expected.

I think about this specifically in the context of music. If algorithms can write believable pieces of text, then they can write believable pieces of music.

A very naive part of me is hopeful that once we realize algorithms can write “better” songs than anyone else, and that algorithms can mix “better” than anyone else, we'll come to the conclusion that it's ultimately meaningless to try to be “good” at music.

When technical achievements are easily manufactured by computers, individuality will have a different meaning.

Instead of feeling defeated by computers that are better at our jobs, we might find ourselves liberated from the meaninglessly mechanical routine of creation for a commercial marketplace, be it of art or other products.

We might remember that it is our weird and personal biases and views and inherent incoherencies and personal fears and experiences that really make interesting works of art.

We might lift the veil that technical proficiency has been providing for too long. It will no longer be special to be talented, but it'll be special to be honest.

Hit-composing algorithms might “break” the music business, but then again, maybe that's the best thing that could happen to creativity.

But alas, it is not that simple. And I did provide a disclaimer – I am being naive. And the old question comes back: how will I make money if a robot is replacing me?


The algorithm

It's interesting how sensitive language is.

At the beginning of Gillespie's essay, he refers to Raymond Williams' book, “Keywords”. Williams explains how groups of people often find themselves talking about different things while using the same term.

Gillespie uses this reference as a way of pointing out that what is implied by the usage of the term 'algorithm' is dependent on the context and those involved in its usage.

I think this is interesting, because it points out how obfuscated conversations can become when discussing abstract concepts. Communication breaks down when there is no common ground of agreement, despite having the impression of one. It is confusing, and can even turn people hostile.

It's interesting how the meaning one derives from a word functions as a form of mirror – what someone understands by a word reveals a lot about them. But what is even more interesting to me, is how easy it is to shift from one usage to another without being consciously aware of it.

I wonder if this shift in understanding is simply an empathetic mechanism in order to acknowledge the fact that language is somewhat fluid.

Personally, when I think of the word 'algorithm', I'm thinking very technically. I'm thinking of lines of code, of instructions... things like that. I try to understand how a given algorithm works and go from there.

But when I talk to my mother, the “algorithm” might turn into something more abstract. My descriptions are more vague, and probably verge on sounding like “it's the thing computers do”.

To what extent do we shift definitions of words to our convenience?

It seems harmless to simplify a concept when talking to my mother – not because she's not capable of understanding a technical term (she is), but because it seems like a lot of work for a brief point in a conversation.

However, what she understands by that word is not what the word meant at one point. Wouldn't it be unreasonable for me to expect her to understand the original meaning when we implicitly agreed its definition also included a simplified version of the concept?

Eventually, it will become a feedback loop.

A multi-leveled feedback loop with no point of agreement.

And we're somehow supposed to understand each other.


Writing from the perspective of an algorithm

MERGE SORT

I am given a list of numbers and my purpose is to organize it. It is expected that I will arrange numbers from smaller to larger, their size determined by their position in the number line. The number line is a convention agreed upon by the human race at the time of my existence.

I am fed the following list:

583712946

I take the list of numbers and divide them into their smallest possible grouping unit. In this case, the smallest possible grouping unit is a single digit. After dividing the list of numbers, I am left with the following individual units:

5 8 3 7 1 2 9 4 6

My main mechanism for sorting is called the merge. When I merge, I re-group numbers that have been separated. In this case I will turn single digits into small lists. However, I will not only merge, but I will also perform an evaluation to determine which number is lower in the number line, and I will place it first.

My new grouping looks as follows:

58 37 12 49 6

My main mechanism for sorting is called the merge. When I merge, I re-group numbers that have been separated. In this case I will turn small lists into larger lists. However, I will not only merge, but I will also perform an evaluation to determine which numbers are lower in the number line, and I will place them first.

My new grouping looks as follows:

3578 1249 6

My main mechanism for sorting is called the merge. When I merge, I re-group numbers that have been separated. In this case I will turn small lists into larger lists. However, I will not only merge, but I will also perform an evaluation to determine which numbers are lower in the number line, and I will place them first.

My new grouping looks as follows:

3578 12469

My main mechanism for sorting is called the merge. When I merge, I re-group numbers that have been separated. In this case I will turn small lists into larger lists. However, I will not only merge, but I will also perform an evaluation to determine which numbers are lower in the number line, and I will place them first.

My new grouping looks as follows:

123456789

I am done.
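As a coda, here is roughly the same process written as code instead of monologue – a minimal bottom-up merge sort sketch (the function names are mine, and the order in which leftover groups get merged differs slightly from the retelling above):

// Merge two already-sorted lists into one sorted list,
// always placing the lower number first.
function merge(left: number[], right: number[]): number[] {
    const result: number[] = [];
    let i = 0;
    let j = 0;
    while (i < left.length && j < right.length) {
        if (left[i] <= right[j]) {
            result.push(left[i]);
            i++;
        } else {
            result.push(right[j]);
            j++;
        }
    }
    // One side is exhausted; append whatever remains of the other.
    return result.concat(left.slice(i), right.slice(j));
}

// Bottom-up merge sort: divide the list into single-element groups,
// then repeatedly merge neighbouring groups until one list remains.
function mergeSort(numbers: number[]): number[] {
    let groups: number[][] = numbers.map(n => [n]);
    while (groups.length > 1) {
        const next: number[][] = [];
        for (let i = 0; i < groups.length; i += 2) {
            next.push(i + 1 < groups.length ? merge(groups[i], groups[i + 1]) : groups[i]);
        }
        groups = next;
    }
    return groups[0] ?? [];
}

console.log(mergeSort([5, 8, 3, 7, 1, 2, 9, 4, 6]).join("")); // "123456789"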