Andrew Kemendo

Recent influx of substack posts to Hacker News

I got the sense that there was a sudden influx of x.substack.com submissions to HN.

So, I did a really rough analysis of HN posts from the last week, and then back 20, 30, and 60 days, to see what the daily post frequency was for the substack domain.
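A rough per-day count like this can be pulled from the public Algolia HN Search API. A minimal sketch of the approach (illustrative only – pagination and date windowing are kept naive):

```python
import requests
from collections import Counter
from datetime import datetime, timezone

# Tally HN story submissions per day whose URL points at substack.com,
# using the public Algolia HN Search API.
counts = Counter()
page = 0
while True:
    resp = requests.get(
        "https://hn.algolia.com/api/v1/search_by_date",
        params={"query": "substack.com", "tags": "story",
                "hitsPerPage": 1000, "page": page},
        timeout=30,
    ).json()
    for hit in resp["hits"]:
        # Filter client-side so title-only mentions don't inflate the count.
        if "substack.com" in (hit.get("url") or ""):
            day = datetime.fromtimestamp(hit["created_at_i"], tz=timezone.utc)
            counts[day.strftime("%d-%b")] += 1
    page += 1
    if page >= resp["nbPages"]:
        break

for day, n in counts.items():
    print(day, n)
```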

Turns out – yup. Huge influx this week:

Date     Submissions
------   -----------
22-Jul   23
21-Jul   38
20-Jul   18
19-Jul   17
18-Jul   13
17-Jul   11
8-Jul    21
2-Jul    24
22-Jun   10
22-May    7

[Chart: substack submissions per day]

Kids' thoughts on Evaluating Human Level Systems


I sat my 9, 7, and 5 year olds down and asked them how they would determine if a “Robot” was as smart as them. These are their responses:

9 Year old:

Make Friends and Have Emotions
Have Enemies
Be Mean to People who are Mean to them
Have Likes and Dislikes
Have things they are good at and things they are bad at
Tell Lies

7 Year old:

Wear Clothes
Talk like a human
Do math puzzles
Have legs
Act their Age
Be able to do things the wrong way, or make mistakes

5 Year old:

Eat Food
Go Pee
Smell Things
Make people’s wishes come true
Do the same work as me and when the teacher checks the work they did better
Make Animals come out of a computer
Tell you they are smarter

Minimum Viable Intelligence


Yann LeCun hates the term “Artificial General Intelligence” because, after all, what human capabilities are actually considered general? He prefers “Human Level Intelligence” and I tend to agree that this is a better threshold. I continue to be intrigued by what such a threshold for HLAI will look like.

Think of the range of “intelligence” of humans you interact with. Is the least capable human you know a good enough bar for HLAI? This isn't simply a thought exercise.

It's easy to explain why any particular human might consistently underperform any given task. Lack of education, opportunity, etc... yet they remain human, arguably with the ability to perform somewhere near the mean. Does this discount humans with disabilities as being possible minimum thresholds? That seems like a dangerous idea.

Unfortunately this variability in performance, not capacity, keeps the threshold for proof of “intelligence” very fuzzy. I reference here something I state often:

To the outside observer there is no functional difference between the ability and desire to act.

Given the above, we're still left holding the bag on the question: “What is the minimum viable intelligence that proves a system is Human Level?”

Nothing will happen with Climate Change until we have a Climate Pearl Harbor


In response to the article today from Joëlle Gergis: https://www.themonthly.com.au/issue/2019/august/1566136800/jo-lle-gergis/terrible-truth-climate-change

After reading the 2018 IPCC report for policymakers [1] the major thing I took from it was this:

The stated impacts of further warming aren't grave enough, or causally linked enough to climate change, to justify nations taking the steps needed to mitigate them.

For example, the more immediate and impactful conclusions from the report are these:

In urban areas climate change is projected to increase risks for people, assets, economies and ecosystems, including risks from heat stress, storms and extreme precipitation, inland and coastal flooding, landslides, air pollution, drought, water scarcity, sea level rise and storm surges (very high confidence).

Climate change can indirectly increase risks of violent conflicts by amplifying well-documented drivers of these conflicts such as poverty and economic shocks (medium confidence).

Most readers shrug and move on, because these events are not tangible and it remains unclear whether they couldn't simply be mitigated or absorbed when they occur.

Meanwhile, the prescription is drastic. Namely: “to restrict global warming to 1.5°C, global ambition needs to increase fivefold.” Meeting that bar implies that the current global economy would effectively grind to a halt.

Said another way, it would take a MASSIVE and unprecedented shift in the way of living of the average person, and of an increasing portion of the global population, to slow warming significantly.

So what people hear is: We must completely change our lifestyles to prevent some chaos somewhere unknown down the line. I think all of social science tells us that humans are bad at this kind of long term planning.

The challenge here is that there is a huge gap between what climate scientists mean by statements such as “the very foundation of human civilisation is at stake” and what people can “touch and feel,” making the sense of urgency seem overblown.

Even things like “Cyclone Tracy is a warning” fall flat because “there have always been storms” and the causal relationship isn't direct – it's statistical and that's not something that politicians and the public generally can grok.

Contrast that with something like WWII, which the US stayed largely removed from until Pearl Harbor. That was a direct, causally linked, explicit and objective event that pushed an entire nation to change its behavior – but it was also time limited.

I'm not sure what needs to happen, but my guess is that until there is a “Pearl Harbor” for climate change – which I don't think is really possible – very little of substance will be done.

[1] https://www.ipcc.ch/site/assets/uploads/2018/02/AR5_SYR_FINA...

Environment Model Alignment

If we assume that humans sense-model-plan-act, then it would be safe to assume that the same sensory input – e.g. places where people experience the same actions or events – should result in similar models. In fact, that often doesn't seem to be the case.

There are 7 billion different models of how the world works. How much similarity is there between them? Perhaps what fills in the gaps is what matters: I think the concept of noble lies filling in the truly unknown, or unknowable, questions so that there is commonality is important.

Maybe there is such a thing as a whole-person intelligence test: a maze, mechanical puzzles, eliciting information, active listening, making a song, recreating an image, describing a fundamental concept. How would you make it generalizable or scalable? The longer it goes, the more granular, and maybe the more intelligent?

The Global Crisis of Meaning


It seems as though the world is going through a slowly evolving existential crisis.

Starting around the end of World War II, the European West was effectively razed and set back to infancy. The Nazi terrors and the subsequent war had so effectively blanched any liberal idealism from the collective European West that it was left at the philosophical starting blocks.

Let's consider this the “birth” of the modern philosophical world.

The head start that the American West had coming out of WWII, as the named victor, cemented American philosophy as the philosophical “north star” for the western world, primarily because of the vacuum left in the wake of the war. American-style vocal liberal Democracy and the extreme fetishization of Markets – with a protestant bent and casual racism thrown in as a mixer – became the notary stamp of progress.

The philosophical counterbalance of the post-WWII period, communism, was in retrospect a competitor primarily in name. While communism (in reality, Soviet allyism) was spreading in name, liberal trade and democratic institutions were the low noise floor of how people actually behaved in nearly all of these communist countries.

The 60s and 70s served as something of a pre-teen period of philosophical chaos, with an active and loud general questioning of authority and challenging of fundamental assumptions about the structure of the world. Technological progress – driven by the classic liberal philosophical concept of democratizing knowledge – helped catalyze these challenges, but in the end drove the majority of people to consumerism, reinforcing those fundamental roots of extreme markets and liberal democratic ideals.

The largely unrestrained economic growth in the west from the 70s until the late 2000s unwittingly promoted the philosophy of consumerism, riding on top of extreme markets and choose-your-own-adventure liberal democracy. The dissolution of the Soviet Union only served to notarize the one other philosophical competitor with the western philosophical stamp.

The consumerism train started going off the tracks in the early 2000s, however. Pushback on consumption as a habit, and steadily growing evidence of consumer-driven anthropogenic climate change, rose from a whisper to a chorus. The Great Recession of 2007 officially ended the consumerism party, and by 2016 the US elections and Brexit typified the death of the “spend more, live better” philosophy.

The Philosophical north stars of the past have vanished

“Country First”: Nationalism is Nazism
“God First”: God is long dead
“Spend and Prosper”: Consumption will lead to collapse

Collectively, we are ridding ourselves of the leaps of faith that were the north stars of our communities – not that everyone always believed them, but at one point we all at least played nicely in systems that assumed those truths were the underlying bedrock. The nobility of monarchs is a wink and a nod, deity-appointed rulers are tongue in cheek at best, etc...

We've continued to whittle away at the mysteries of the world, and the collective hallucinations we shared to keep communion and community are becoming widely recognized for what they are – as baseless in fundamental truth as anything else. We're slowly working our way down the abstraction levels, asking the basal ganglia where we should find our philosophy.

We still, as humans, need some north star, and it seems like “happiness” is trying to take the lead. The idea of the “balanced” life seems to reflect this: a diversity of activities and goals that keeps the brain engaged but not obsessed, alternating between challenge and comfort so as not to push the system into over-extension. Anxiety is the enemy, and we prove it by pointing out that it reduces longevity. Optimize the work-rest cycle to extend the exploration phase of life and maximize dopamine pumps.

We fetishize parenting because we have no other fundamental north star to guide our lives. The biological response to family makes it impossible to expose as hollow, because it feels important – you can't disprove that you feel a certain way about your children – it's just there and you can taste it. In that sense, it's no different than hedonism.

There is no further purpose than the feeling of peace and harmony that is unique to communion with others – most powerfully with one's offspring. It's as empty as any other north star – but it feels fulfilling. This philosophy sits pretty firmly at the monkey-brain level of abstraction – the lowest common denominator as a singular vector. I'd worry, though, that it simply leads to hedonism – but maybe that's ok.

Which brings us to Camus. We have a choice, at the end of the day, to find meaning. You can ignore the absurdity of the lack of objective meaning by removing yourself from the equation: suicide. However, we must take it head on – though I'm still working out why this is necessary; something about turtles, I'm sure. In which case, you must either take a leap of faith, or choose what to make your meaning about.

My singular vector is collective understanding – helping us wiggle toward a conscious universe. But what's behind that? Unknowable, but that's the choice I make.

Synth Setup Part 1

I decided to start actually making music after 20 years of just editing and DJ'ing. I realized that in order to make the music I wanted, I needed to get a keyboard synthesizer, but I didn't know where to start.

I had a chance conversation with a semi-pro musician and he said “Just go to Guitar Center and play around with the synths.” So that's what I did.

A day later I had a $230 used microKorg.

After playing with it, I realized that in order to use it effectively to make music, I needed a sampler and a recorder. Instead of buying new hardware, I figured there were probably good software samplers, so the next step was to feed my Korg into my computer. However, modern laptops don't have sound cards you can plug a 3.5mm cable into like my old desktop had. I assumed a MIDI-to-USB controller would do the job, but that wasn't the case; it turns out you need a USB audio interface.

So I bought the entry level Focusrite Scarlett Solo.

I also re-downloaded Audacity for the millionth time, which is a great open source audio editor. The Scarlett Solo comes with some free sample packs and things, so I think those will be interesting to play around with.

This is what the whole setup looks like:

[Photo: the full setup]

Artificial General Intelligence Hypothesis


Disclaimer: This hypothesis is not going to have the rigor that is appropriate for this topic, nor will I cite or support it in this format. I just need to get it down somewhere.

A system will have to follow the below process, either faster or more accurately than humans, across most domains of human action, in order to qualify as AGI:

Sense > Model > Plan > Act

We will achieve the goal of Artificial General Intelligence only when we can build an independent, coordinated system of systems, with measurable boundaries, that can follow this model.

Sensing: Human Level systems will only appear when sensing is as granular as human sensing. That means an AGI must achieve parity along all forms of human sensing: visual, auditory, tactile, and chemical sampling (taste, smell). Superhuman AGI would exceed human sensing capabilities into non-human sense ranges such as X-Ray, nano-scale tactile etc...

Modeling: Human Level systems will only appear when modeling is as accurate as human modeling. That means AGI must achieve parity with humans along physical modeling (navigation and spatial awareness), longitudinal modeling (change of physical systems over time, causal mapping) and conceptual modeling (social mapping, emotional mapping, ontological mapping). Superhuman AGI would exceed the specificity and granularity of a human model in each domain that it has sensors.

Planning: Human Level systems will only appear when planning can repeatedly produce options as sustainable as those of human planning. That means an AGI can create predictions of the state of its physical, longitudinal and conceptual models, both with and without the AGI's input, that are as accurate as human predictions. Superhuman AGI would be able to predict a future world more accurately or more granularly than a human could.

Acting: Human Level systems will only appear when acting on the environment can be as granular as human actions. That means an AGI must be able to show that its effectors can change the environment in which it acts in a way that is consistent with its planning capabilities, to the level of granularity of human actions. In simple terms, this means that when the AGI acts, the outcomes of its actions align with the intended outcome of its planning, based on its current model of the world. A Superhuman AGI would be able to affect the environment in a more granular way than a human could, given the same tools (or tools improved by its own design).

AGI is not possible unless the AGI has direct control of its environmental sensors and effectors, free of outside influence. That means there is no such thing as an AGI with a human in the loop. Superhuman AGI would be made worse by the existence of a human in the loop, as a human would introduce less granular modeling, planning and effecting capability than the AGI's own.
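To make the shape of this claim concrete, here is a minimal sketch of such a closed sense-model-plan-act loop. Every interface below is a hypothetical placeholder, not a real system:

```python
from abc import ABC, abstractmethod

class Sensor(ABC):
    @abstractmethod
    def sense(self) -> dict:
        """Return raw observations of the environment (visual, auditory, ...)."""

class WorldModel(ABC):
    @abstractmethod
    def update(self, observations: dict) -> None:
        """Fold observations into physical, longitudinal, and conceptual models."""

    @abstractmethod
    def predict(self, action: object) -> dict:
        """Predict the future world state with and without a candidate action."""

class Planner(ABC):
    @abstractmethod
    def plan(self, model: WorldModel) -> object:
        """Choose the action whose predicted outcome best matches the goal."""

class Effector(ABC):
    @abstractmethod
    def act(self, action: object) -> None:
        """Change the environment; outcomes should match the plan's prediction."""

def run(sensor: Sensor, model: WorldModel, planner: Planner, effector: Effector) -> None:
    # Closed loop: no human between sensing and acting.
    while True:
        model.update(sensor.sense())
        action = planner.plan(model)
        effector.act(action)
```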

Identifying Cranks


On their own, many of these are normal things that all researchers and scientists face at some point. However, the more of them that apply, the more likely it is that somebody is a crank (treated literally, it's just a tally – see the sketch after the list).


They have been working on the same problem for decades with little progress

Experts in their field ignore their work

They publish in alt-journals

They start their own journal

Their production quality is consistently low

They quote themselves

They don't have many peers who are respected in their field

Anyone successful in their field is “doing it wrong”

The successful people in their field are “just doing what they did years ago”

Nobody is quite sure how they make a living

They are the primary person promoting their own work
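A toy sketch of that tally, where the flag wordings and the 0-to-1 score are arbitrary illustrations:

```python
# Toy version of the "more flags, more likely a crank" heuristic.
CRANK_FLAGS = frozenset({
    "same problem for decades, little progress",
    "ignored by experts in the field",
    "publishes in alt-journals",
    "started their own journal",
    "consistently low production quality",
    "quotes themselves",
    "few respected peers in their field",
    "the successful are 'doing it wrong'",
    "the successful are 'just doing what they did years ago'",
    "unclear how they make a living",
    "primary promoter of their own work",
})

def crank_score(observed: set) -> float:
    """Fraction of flags observed: near 0 is normal, near 1 is a likely crank."""
    return len(observed & CRANK_FLAGS) / len(CRANK_FLAGS)
```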

Lifestyle Company


A few years ago Noam Wasserman came up with the Rich vs King Concept:

https://hbr.org/2008/02/the-founders-dilemma

The simple version is: if you start a company, you need to know whether you care more about monetary rewards or about the power to control the company. In almost every case you can't have both.

Around the same time, the “lifestyle business” was popularized, in contrast to the cult of the massive-growth startup. The 4-Hour Workweek is the spirit guide for those inclined toward the lifestyle business.

Within the VC/growth-startup world, calling a company a “lifestyle business” is usually done dismissively, implying that whatever the company and its founder are doing is trivial or not worth taking seriously.

Having been around a lot of startups and founders in many different places over the past 7 years, I've found that these two ways of thinking about business – Rich vs King and Lifestyle vs Growth – highlight the major difference in how people who go into business see the world. So I came up with a new set of categories:

True Believers: These people start a company to promote an ideology. They are ideally Kings of Growth companies.

Hedonists: These people start a company to take control of their wealth. They are ideally Rich and work as little as possible.

True Believers either get huge and start massive movements, or they implode, often in spectacular fashion. They are flashy and draw a crowd. They care more about “changing the world” than getting rich. These are the home run champs like Mark McGwire or Barry Bonds.

Hedonists can typically build something with consistently good numbers, and their failure mode is pretty low impact. They are steady and usually considered more predictable. They care more about growing their pie than about long-term social impact. These are consistent base hitters like Cal Ripken Jr. or Craig Biggio.

Obviously these are broad generalizations, but it seems like the majority of business people fall into the Hedonist category, while the majority of “startup” people fall into the True Believer category.

I think, though, that business might be the wrong place for true believers. I'm not sure where they do fit – and I'm trying to work that out because I am one – but it might not be as a company founder. At the end of the day, making money is only in service to the ideology, not an end in itself, so there is a conflict there.