Computational Arts-Based Research and Theory: Journal 3

“Intelligence”

Circular reasoning can be a bit exhausting. It gets hard to tell what's true. It can even be hard to tell what the point of a discussion was.

I think that when discussing the concept of intelligence, people tend to find themselves in circular arguments – not necessarily out of malice or manipulative intent, but simply as a result of the slippery ground we find ourselves on when discussing a term that frankly seems outdated.

We live in a world that uses terminology that was, in great part, developed by people who have been long dead. However, the world keeps changing in increasingly fast leaps. We're left trying to make sense of the world through the lenses of people from a different time, while at the same time looking forward to the future. We end up having a hard time reconciling all these temporal differences and end up making a mess out of things.

With this in mind... what is intelligence?

Is it the capacity to draw a reasoned conclusion from known data? Is it the capacity to correctly/successfully extrapolate? Is it the ability to adapt to a new context? Is it the ability to communicate? Is it the ability to think? To perceive?

Is it a uniquely human quality? Can animals be intelligent? Can machines be intelligent?

Each question opens a maze of arguments, assumptions, doubts, and new questions.

One can get lost in the spiral of assumptions, re-definitions, re-contextualizations... the list of conditions and considerations is seemingly endless. And inside this maze, it becomes hard to tell if a point is being bent for the convenience of an argument. Eventually, it becomes hard to tell if there is a point at all. The discussion becomes very convoluted.

I think that trying to define intelligence ends up being an attempt to solve the unsolvable. Defining intelligence feels like trying to justify the existence of a term that was created when the body of knowledge handled in the West was more limited than it is today, and when the human experience was vastly different from what it is now.

I find this paradoxically ironic, because the term would in and of itself suggest a capacity for adaptability that our use of it indicates we don't have.

However, I believe it is crucial to understand that there is an inherently human way of thought and of understanding ourselves and our environment. It exists, and it's frankly beautiful.

To think that humans can make human tools is something deserving of awe and inspiration. Our tools can heal, feed, and protect people.

Our thoughts can describe reality to the best of our abilities, and our descriptions of reality can build physical manifestations that prove that our assumptions derived from observation were correct to at least some extent.

Our communication can heal the lonely, the alienated, the confused, the hurt. It can help the scared feel brave, and it can make our species stick together.

Or not.

Our human behavior is easily corruptible, because whatever makes us human also makes us inherently flawed.

Are our flaws part of what we define as “intelligence”? Is our history of genocide and murder... “intelligence”? Is our tendency for violence... “intelligence”?

No, it is not.

But it is part of what we are, and it is part of what human beings are.

My point is that “intelligence” is a limited and outdated term. It doesn't really encapsulate the human (or animal) ability to think or behave in a particular way. It feels like the term is more indicative of a fetish of the Enlightenment.

I realize we can't just “break up” with the word “intelligence”, but it feels limiting and limited. I don't have a solution – however, I do think we can be a bit more specific when we discuss human thought and behavior, and I believe this will lead to our not even needing the term.

Being good at math does not make you intelligent – it makes you good at math.

Being good at the saxophone does not make you intelligent – it makes you good at the saxophone.

Being good at “learning” does not make you intelligent – it makes you good at learning.

These are all skills that can be practiced, honed, and developed under the right circumstances. Some people display greater aptitude for certain skills, some people are unfairly skilled, and others are unfairly unskilled, but most, by definition, are average.

I realize that saying “being good at 'learning' doesn't make you intelligent” might sound a bit odd, but my point here is that our conception of intelligence is quite possibly rooted in a broken premise.

In his own image

When I was a teenager, a question I found interesting was “did God create humans, or did humans create God?”

As far as I'm concerned, humans created God – given that there's no proof any God actually exists, that's the only reasonable conclusion one can reach without resorting to non-logical arguments like “faith”.

(Note – just because something is non-logical doesn't mean it has no benefits. I'm sure that some believers find some traceable benefit to their belief. However, these benefits don't justify the existence of a God.)

Wait, what? What does this have to do with Artificial Intelligence?

It seems to me that this is a moment when our kind is turning into Gods. Or at least, we're playing one of the roles that we assigned to the Catholic God (which is the one I'm the most familiar with).

We're bringing new manifestations of life into existence – a new species, if you will – and we're making them in our image. While maybe “life” feels like too big an assertion, we're certainly creating tools that exist in an entirely new category.

I think that since we have such a pathological issue with who “made” us and why they made us this way, we're having trouble bringing new beings into existence.

It seems absolutely absurd to me that we'd want to humanize computers. Be it hard-coded software or systems that use machine learning, I feel no need to anthropomorphize them. I understand that these are tools beyond anything ever seen before, but I find the humanization quite off-putting.

On one hand, I find it off-putting because I find that I'm annoyed by the features that certain people chose to highlight in humanity.

I don't want a sexy robot hologram friend to fetch me my email. This simply doesn't figure in my interests, and I want my interactions with machines to be reduced to the smallest degree possible.

Silliness aside, I think the implications of creating a human-like being are just too complicated to be worth the effort. We can't create a human, so why bother?

Why not let this new thing be its own thing? It's not a human, it doesn't need a human face.