Andrew Kemendo

Segmenting the Intelligent from the Non-Intelligent


It seems we should first do binary classification:

Is this system intelligent? YES / NO

Then we can do gradient-based segmentation within intelligence:

Where does this intelligent system lie on an evaluation scale? [Gf...Gc...Gf]

[Radar graph]

This leads to the questions:

  1. What are the innumerable variables that could be measured for any intelligent system?
  2. Is there a coefficient that scores each variable as additive toward a global score, or are variable coefficients always environmentally bounded? (The sketch below illustrates the difference.)
  3. How do you compare two systems if they have different contextual environmental boundaries?
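
To make question 2 concrete, here is a toy sketch in Python. All scores, axis names and weights are invented for illustration; the point is only that a single global coefficient set produces one ranking, while environmentally bounded coefficients can flip the ranking per environment, which is exactly what makes question 3 hard.

```python
# Invented capability scores on three made-up axes (the radar-graph idea).
scores = {
    "system_A": {"perception": 0.9, "planning": 0.4, "memory": 0.7},
    "system_B": {"perception": 0.5, "planning": 0.8, "memory": 0.6},
}

# Scheme 1: one global coefficient per variable, additive toward a global score.
global_w = {"perception": 1.0, "planning": 1.0, "memory": 1.0}

# Scheme 2: environmentally bounded coefficients, one set per environment.
env_w = {
    "open_desert": {"perception": 2.0, "planning": 0.5, "memory": 0.5},
    "maze":        {"perception": 0.5, "planning": 2.0, "memory": 1.0},
}

def aggregate(system, weights):
    return sum(weights[var] * val for var, val in scores[system].items())

print({s: round(aggregate(s, global_w), 2) for s in scores})   # one global ranking
for env, w in env_w.items():                                   # rankings flip per env
    print(env, {s: round(aggregate(s, w), 2) for s in scores})
```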

Maybe let's put this in plain terms:

Assuming you have a system that is deemed “intelligent,” what do you measure in order to determine that the measures are correlated with the system being able to act towards an outcome (either local or global)? And further, how would you compare these systems if they reside in different environments? This is the topic of Jose Hernandez-Orallo's book The Measure of All Minds.

The last portion would require a global action vector, or a generalized action vector, for intelligent systems. Discovering whether there is in fact a generalized action vector (AKA answering the question “What is life about?”) would be a significant achievement.


Is intentionality of action a determinant of intelligence? If a system didn't want to do anything, how would a system that did rate in comparison?

Is the idea of a ratings system even coherent? Why would we want to rate or compare systems? We only do that to allocate resources efficiently toward some end – back to intention-based evaluation.

Everything in intelligence evaluation seems to need an action vector:

Intention > Action > Result

Why should we make the distinction between intelligent and non-intelligent systems? What purpose does it serve to segment these two things?

Last-Mile Delivery and Autonomous Vehicles


I've been working in and around the furniture industry since 2015, and one of the things you learn is that firm control of last-mile delivery logistics is what will make or break a furniture company.

It's not price or fashion or anything like that, though those are important. It's really delivery logistics. The reason American Furniture Warehouse is so successful is that it's a logistics company that also sells furniture.

The idealized transport system is on-demand and travels on surface roads. We've largely solved long-range mass travel with public transportation. Where this falls apart is between the public transit stop and the home.

I would argue that whoever can solve the last-mile problem for human logistics will win the market for autonomous vehicles.

I can imagine a “swarm” of autonomous vehicles that only operate within an X-mile radius of a metro stop. The “swarm” of cars all communicate with each other and maintain a running map of the area. When in use, they take riders to destinations within the boundary; when done, they return to the station and get in queue. They pick up riders along the way in both directions, up to X riders per vehicle. Uber and Lyft already have enough information to know how to schedule multiple stops (a toy ordering heuristic is sketched below).
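
For a flavor of what small-scale multi-stop scheduling looks like, here is a greedy nearest-neighbor ordering. It is only a toy heuristic with invented coordinates; production dispatch systems solve a much harder vehicle-routing problem.

```python
from math import dist

def order_stops(start, stops):
    """Greedy nearest-neighbor ordering of pickups and dropoffs.
    A toy heuristic only; real dispatch solves a full vehicle-routing problem."""
    route, here, remaining = [], start, list(stops)
    while remaining:
        nxt = min(remaining, key=lambda s: dist(here, s))
        remaining.remove(nxt)
        route.append(nxt)
        here = nxt
    return route

# Invented coordinates, in miles from the metro stop.
print(order_stops((0, 0), [(2, 1), (0.5, 0.5), (3, 3), (1, 2)]))
```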

An immediate question comes up: how many vehicles are required within the boundary to ensure there is burst capacity and wait times stay minimal?
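
One way to put rough numbers on that question is to treat the swarm as an M/M/c queue and size the fleet with the Erlang C formula. This is a back-of-the-envelope sketch: the arrival rate, trip rate and wait target below are all invented, and handling bursts would mean adding headroom on top of this steady-state answer.

```python
from math import factorial, ceil

def erlang_c(load, c):
    """P(an arriving rider must wait) in an M/M/c queue, offered load = lambda/mu."""
    x = (load ** c / factorial(c)) * (c / (c - load))
    return x / (sum(load ** k / factorial(k) for k in range(c)) + x)

def fleet_size(riders_per_hr, trips_per_vehicle_hr, max_wait_min):
    """Smallest number of vehicles keeping the mean wait under max_wait_min."""
    load = riders_per_hr / trips_per_vehicle_hr
    c = ceil(load) + 1                      # capacity must exceed load for stability
    while True:
        wait_hr = erlang_c(load, c) / (c * trips_per_vehicle_hr - riders_per_hr)
        if wait_hr * 60 <= max_wait_min:
            return c
        c += 1

# Invented numbers: 120 riders/hr at peak, ~15-minute trips (4 trips/vehicle/hr),
# mean wait target of 3 minutes.
print(fleet_size(120, 4, 3))                # steady-state answer; add headroom for bursts
```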

My guess is that this is what Uber and Lyft eventually want to do: own the public transport market. Everyone wants a piece of the government market.

That's a bad idea.

Governments should be investing in creating their own public autonomous last mile human delivery systems.


Other thoughts on transportation:

If you look at human transportation like all other logistics, what you find is that every household is running a logistics operation.

Some outsource all or portions of the operation to government entities (public transport, school buses). Some have formed cooperatives (carpooling, ride sharing). The majority of households, though, run all operations in-house and outsource the maintenance.

Artificial Intelligence and Privacy are Incompatible

Robot Butler

The Robot Butler has been a trope of science fiction since we could first dream of computing. Today, Siri and Alexa serve as disembodied ancestors to the robotic personal assistants we’ve always dreamed of. And boy, do people like them: 43 million Americans own some form of dedicated virtual assistant. That’s almost 20% of US households.

Besides hands-free commands such as setting a timer or checking the weather, some of the fastest-growing command categories are recommendations. Wine pairings, recipe suggestions and movie suggestions are all easy tasks that can inform your decisions in a friendly and conversational way.

And wouldn’t you know it, the best way to increase the accuracy of personalized recommendations is to give these assistants more information about your wants, likes, needs and behaviors. Doing so helps Amazon or Apple build a better profile that the assistant can “learn” from and use to serve you better in future interactions, just like human personal assistants do. Every day, 43 million Americans provide petabytes of this private data to these companies through their personal assistants.

So maybe it’s worth thinking for a moment about what a “personal assistant” is anyway.

Putting the Person in…personal assistant

A personal assistant at their apex is a worker who has a deep understanding of every relevant attribute of their client, in order to help the client improve their effectiveness and efficiency. Let’s use two fictional examples of personal assistants:

[Scene from The Devil Wears Prada, courtesy 20th Century Fox]

In the movie The Devil Wears Prada, Andrea (Anne Hathaway) works as a personal assistant for Miranda (Meryl Streep) and spends every waking hour trying to meet her overly demanding needs. The amount of personal, private data that Andrea requires to predict and satisfy the needs and wants of Miranda is nearly equivalent to that of a spouse. Not only does she need to know buying preferences, but also medical needs, allergies, sexual preferences, work and sleep patterns, food likes and dislikes, family composition and personalities, location data, preferred activities, the list goes on and on. Without this information she can’t do her job successfully and the value of the relationship is diminished.

[Scene from The Fresh Prince of Bel-Air, courtesy Warner Brothers]

In The Fresh Prince of Bel-Air, Geoffrey the butler interacts with each family member in a unique, loving and personalized way because he has developed a personal relationship with each of them over years of service. Geoffrey tailors his interactions with each person based on what he knows of them and the situation they find themselves in. He understands and internalizes their individual personalities, histories, even their secrets, and how they will respond to different styles of suggestion to make their lives easier or more productive.

In both cases, the assistant and butler have a deeply intimate relationship with their clients, one that goes well beyond anything that most people would have with anyone other than a family member. They are part of our private life.

If we expect that Amazon’s Alexa or Google’s Assistant will fulfill our personalized needs and desires in the same way a human assistant would, then aren’t we required to build a similarly personal relationship with Amazon or Google? That means sharing our preferences and behaviors with them is a required step to create the feedback loops and deep understanding necessary to provide personalized value. Can you trust Amazon or Google with the same information you would give Geoffrey the butler? Should you bring Google into your private life?

You can Trust Us

The number one disqualifier when looking for a personal assistant, butler, maid, plumber, babysitter — really any job, come to think of it — is a lack of trust. If you think a housekeeper will steal your jewels or your babysitter will ignore your child, you’re not going to hire them.

It’s not unreasonable to assume that a big part of why Amazon is leading in the smart-home market is that Amazon is the second most trusted brand in the US. People trust Amazon already, so it’s easy to build a stronger relationship with them. However, this assumes that users are aware of the data they are giving away and how it is being used, and have made a calculated decision on how much personal information to share based on their level of trust in Amazon. Probably a bad assumption.

It’s unknown to what extent users understand how much data they are giving away every day, so it’s hard to know whether they are wittingly trusting companies with their data or are unwitting participants in the data game. How Amazon uses your data is all there if you want to read it; however, most people don’t. Even if you did read it, it’s not exactly clear what is happening with your data, so most people either go with gut feel or simply don’t think about it at all.

The European Union’s General Data Protection Regulation takes a hard stance on this, with the goal of having companies explicitly show users exactly how their data is being used, but I’m not convinced it will have the effect they intend. They want to force the question of trust into the spotlight, and that might work, but just like couples therapy, you can only force the conversation so much before one side loses trust.

So trust is really what it comes down to. Can you trust a major company with your most intimate secrets? Chances are you probably already are.

Long-term relationship with AI

Whether you know it or not, you are creating this personal private relationship with companies just by virtue of living in the modern world. Just search for “[Company] tracking/spying” and you’ll find hundreds of articles expressing concerns about privacy and how much data a company is collecting from users.

This behavior is not restricted to the “smart” tech corporations, by the way. Even if you don’t have a smartphone, your phone carrier knows (generally) where you are at all times, because knowing your location is part of how cell service works. The Safeway or Kroger discount card you use, the mileage rewards program you enroll in, and every other “loyalty” program you’ve had for two decades are all trying to do one thing:

Predict your preferences and behaviors so that they can put the coupon/sale/product/content that matches your preferences in front of you at the right time and right place.

[The early days of big data]

The difference between now and 20 years ago is that users provide orders of magnitude more specific and persistent data in an easily digestible way. With the advances of the last decade in Machine Learning and blended prediction systems, companies can process this data and get more and more accurate behavioral predictions. For example, YouTube made its recommendation system dramatically better using Deep Neural Networks. Amazon open-sourced its DSSTNE engine (pronounced “Destiny”), which uses Deep Neural Networks to build better recommendations from user behaviors. Both of these fall in the category of “AI,” if you aren’t familiar.

Better tools and predictions create better services for users: more tailored product offerings, more accurate recommendations and more efficient markets that keep you coming back. As more companies use increasingly precise behavioral models to predict user actions with “AI,” the deeper these relationships will get. Organizations will seek to build ever more personal relationships with their users because they want to serve your preferences.

By default, the recommendation systems that people respond positively to require your personal input to be successful. Just like personal assistants do.
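
As a toy illustration of that mechanic (not how Amazon or YouTube actually implement it), here is a minimal preference model: every piece of feedback you give nudges a profile vector, and recommendations are just the items most similar to that profile. The items and numbers are invented.

```python
import numpy as np

# Invented item embeddings; real systems learn these with deep networks.
items = {
    "cabernet":   np.array([0.9, 0.1, 0.0]),
    "pinot noir": np.array([0.7, 0.3, 0.1]),
    "lager":      np.array([0.1, 0.9, 0.2]),
}

def give_feedback(profile, item, liked, lr=0.5):
    """Each rating nudges your profile toward (or away from) that item."""
    return profile + (lr if liked else -lr) * items[item]

def recommend(profile):
    """Suggest the item most similar to the learned profile."""
    return max(items, key=lambda name: items[name] @ profile)

profile = np.zeros(3)                       # the system knows nothing about you yet
profile = give_feedback(profile, "cabernet", liked=True)
print(recommend(profile))                   # more personal data in, better guesses out
```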

Wait, I don’t want to have a relationship with AI!

Actually, you probably do.

Remember those 43 million Americans who literally have an always-listening, voice-based assistant in their home? They don’t keep those because they are forced to; they do so because the assistants really do provide value that people want.

Anytime you check in for your flight on your phone, search for something on Google, like a post on Facebook, create a board on Pinterest, rate a product on Amazon or leave a review on Yelp, you are dancing with the system, leading with your left while the system follows with its right. You are giving them your private preferences in exchange for an increasingly tailored service. Your behavioral data is used to tailor the product to your wants, to personalize your experience, which theoretically keeps you happy and using it — or at least addicted to it, the sinister downside of giving people what they want. This is, however, what people have been asking for: better personalization. Congratulations, you’re in it.

I will make a prediction of my own: over the next several decades, more deeply intertwined AI-based services will increase users’ decision-making capabilities to such an extent that it will be considered irresponsible not to use them.

Ok, fine then we’ll do it offline!

I can hear you now: “Well fine, I value my privacy but I also agree that these services are beneficial. So we will just make systems that never touch Amazon, Google, Facebook or whatever mega-corporation’s services. We’ll own our own data, and I’ll just keep all my private data locally in my own home and have my open-source smart speaker totally off the grid. Or maybe just send out relevant data where necessary. I’ll build my own DDPG-based Deep Learning system and teach it everything it needs to know!”

But will you? Will 100 million Americans put in the effort to tweak and modify their own systems? Will 3 billion users worldwide? Is it feasible to think you can build a “smart” enough network with strictly compartmentalized data from just one person?

The recent lament of how we “lost our way” with the internet is the latest proof that humans are great at consolidating power, so why would this time be any different? The cyber-utopian fantasy of egalitarian connectivity is, in my estimation, low probability, and I think it’s irresponsible to move forward with it as a thesis.

Practically speaking, robust information networks don’t work that way; they need to exchange information up and down to become better, faster and more accurate. I’m not just talking IT here, I’m talking about plant root systems, dolphin pods and migratory bird flocks. All nodes share information up the branches to make the system stronger, more responsive and more efficient. Whoever owns the most connected nodes has the power. We should recognize that as a natural law and build our social systems to compensate for it.

So how should we as individuals choose to enter personal relationships with organizations that provide services to us in exchange for our private data? Who should we trust?

The first step is to recognize what these relationships look like: what data is being shared, how it is being used, and how we can mutually benefit from the relationship as both users and creators without blowing the whole thing up.

Second, we should acknowledge that you cannot simultaneously have systems which adapt to users’ behaviors while also keeping those behaviors out of reach.

Third, we need to study the costs versus the benefits of these systems. My gut tells me the tradeoff is net positive, but we need more evidence to show that. This is especially important for engineers and designers, as we have a duty to provide value and not just extract it.


Reposted from my original Medium post here: https://medium.com/@andrewkemendo/artificial-intelligence-and-privacy-are-incompatible-5375035f15c0

Relative Complexity and Importance of Systems within Evolved Organizations


In any organization of systems that has adapted and survived as a result of selective pressure, can we assume that some systems are more complex or took more time to develop than others during co-evolution?

Is the modern human eyeball more or less complex, as a system, than the modern Central Nervous System? Which took longer to evolve, the modern eyeball or the modern Central Nervous System? Did their co-evolution make them inseparable to the point where the question is intractable?

If the goal is to replicate the function of an evolved system through planned engineering, and the sub-systems are going to be built in a decentralized manner, then under the above assumption we can derive that some systems will be easier to build than others.

Should we then assume that the more complex systems will be more critical to the functioning of the whole system? Does it follow that the systems with a higher cost to develop are more important?

Intelligent Systems


A system, given defined physical boundaries, could be considered Intelligent if it does two things:

Sense: Accurately measures its environment
Manipulate: Physically changes the orientation of the system and objects in its environment

You can test for intelligence by asking: is the system sensing, and is it manipulating its environment? Of note, only Manipulate is directly observable to an outside observer. We only infer Sense via system manipulation latency, e.g. reflex tests.

The more precisely a system does these two things, the more “intelligent” it can be considered.

Within these measures, complexity seems to scale at a greater-than-linear rate, e.g. Sense includes undirected exploration, Manipulate includes abstraction with tool use, and so on. Some measures are more easily observable than others.

A third system seems to be required for increasingly precise intelligence:

Model: Maintain an accurate longitudinal representation of sense measurements

It is unclear how to measure the existence of this sub-system, as it's not directly observable. This system could also be called “memory.” Much has been written about attempts to tie physical structures within intelligent systems to the abstracted concept of memory.

This is still insufficient to fully describe an intelligent system, as the criteria by which the system optimizes manipulation are not built into the model. It's necessary to define a vector for manipulation criteria in the context of the model. Said another way, the system must determine the appropriate manipulation action given the ability to Sense, Model and Manipulate. Hence the need to:

Plan: Generation of a future model state

It is unclear how to measure the existence of this sub-system, as it's not directly observable. It's also unclear through which process intelligent systems generate future model states, and what the coupling is between manipulation criteria and Planning, e.g. what influences the proportion of planning that requires the system to manipulate the environment versus planning that is independent of such manipulation.

Stated in a solipsistic way: Planning intends to compare what the future world looks like without your input, versus what the future world looks like with your input.

The criteria for biasing planning to inform manipulation criteria still remains unclear.

I contend that intelligent systems manipulate their environment with the purpose of reducing uncertainty in future model states. However, this is unsubstantiated.
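
Here is a minimal sketch of how these four faculties might compose, with Plan implemented as my reading of the uncertainty-reduction conjecture above: the agent prefers the action whose outcome its Model is most uncertain about. Every name, and the environment itself, is invented for illustration.

```python
import random
from collections import Counter, defaultdict
from math import log2

def entropy(counts):
    """Shannon entropy of an empirical outcome distribution; infinite if untried."""
    total = sum(counts.values())
    if total == 0:
        return float("inf")
    return -sum(c / total * log2(c / total) for c in counts.values())

class Agent:
    def __init__(self, actions):
        self.actions = actions
        self.model = defaultdict(Counter)   # Model: longitudinal record, action -> outcomes

    def plan(self):
        # Plan: prefer the action whose future the model is most uncertain about.
        return max(self.actions, key=lambda a: entropy(self.model[a]))

    def step(self, env):
        action = self.plan()
        outcome = env(action)               # Manipulate: act on the environment
        self.model[action][outcome] += 1    # Sense: record the measured result
        return action, outcome

# Invented environment: one action is deterministic, the other is noisy,
# so the uncertainty-reducing agent keeps probing the noisy one.
def env(action):
    return "thud" if action == "push" else random.choice(["ding", "buzz"])

agent = Agent(["push", "ring"])
for _ in range(20):
    agent.step(env)
print({a: dict(c) for a, c in agent.model.items()})
```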

Finally, I contend that the meta-comparison of the precision to which a system can manipulate the model of the environment, via a precise catalog of its sensors and manipulators, in conjunction with the ability to explicate the biasing criteria for planning, is what we would consider consciousness.