
In April 2020, a blog post by OpenAI noted: “We’re introducing Jukebox, a neural net that generates music, including rudimentary singing, as raw audio in a variety of genres and artist styles.” The world was going into lockdown over the COVID-19 pandemic, and the remarkable project was not widely noticed beyond AI enthusiasts.

OpenAI has not pursued music generation any further since then; perhaps their legal department has cautioned them to leave some copyright battles for others to fight. But with all the money flowing into AI, it was only a matter of time before a tool very much like Jukebox became available to everyone.

This is now the case with suno.ai, a startup focused entirely on music creation. You supply a text prompt, and it produces a full song, with or without vocals.

By default, Suno will use a Large Language Model to generate lyrics for your songs. In almost all cases, the resulting lyrics will be very generic cringe. Fortunately, you can supply your own instead.

The quality of the generations varies wildly. Songs may be glitchy, vocals may be out of order, and you might get singers with all kinds of accents, whether you specify that or not. You may end up generating hundreds of songs before you end up with one you like.

Still, the experience is surreal, as you summon emotionally expressive voices out of nowhere. You begin to feel a bit like a producer, or like the judge in a talent show that operates like an assembly line.

When you have a generation you like, you can tweak it by means of continuations. For example, you can continue a song that's a couple of minutes long from the 30-second mark. That means you're effectively overwriting the AI's initial attempt.

You can do this a few times per song before subsequent generations lose coherence. With that in mind, I've found it a helpful way to iterate.

It's easy to imagine some of the features that will likely come next:

  • re-generate arbitrary subsets of the song (“0:33 to 0:45”)
  • generate variations of a song
  • re-use a “singer” from one song to the next
  • quality improvement tools for removing noise, glitches, etc.

Needless to say, some folks will regard this technology as inherently vile, a torment nexus dreamed up by sociopathic tech bros, an act of cultural robbery at an unprecedented scale.

I don't agree with that point of view. Copyright itself is an instrument of oppression, control, and gatekeeping. Anti-AI sentiments directed at people who are using these tools for experimentation and play feel like a dangerous backslide compared with the 2000s, when remix culture was widely celebrated.

I do hope that in the coming years, rather than burning generative AI to the ground, we can use its augmentation of human labor to challenge capitalism itself.

As this post makes clear, I've experimented a bit with suno.ai. I've long enjoyed writing lyrics; being able to turn song ideas that go back to my teenage years into music I personally enjoy listening to has felt incredibly empowering. As of this writing, here are the songs I've pieced together:

The songs, and their lyrics, are in the public domain. I doubt they'll get more than a few hundred plays in total, and that's fine. It's a new way of making music, and a new way of listening to it.

So, Twitter is going through some things and you decided to consider other platforms. “Just in case”, you said to yourself, and looked a bit warily into what it takes to get set up on Mastodon.

After getting past the setup questions (“what on Earth is an instance?”), Mastodon looks quite a bit like Twitter. So you decided to “dip a toe in” by mirroring your Twitter feed over to Mastodon. This turned out to be pretty straightforward. Hooray! You're on the fediverse!

Now someone sent you a link to this post and asked you to maybe reconsider this whole automatic cross-posting thing. I sure hope they did it politely. In any event, let me assure you that your presence in the fediverse is very much welcome, and you did nothing wrong.

But .. you should probably stop automatically cross-posting from Twitter to Mastodon.

Context collapse

Because automated posts may contain references to Twitter (mentions or quote-tweets), it's often impossible to view the whole context of a post on Mastodon. Instead you end up with posts that look like this:

This is really important (cc @someperson@twitter.com):

RT @someone@twitter.com

Here's a thread explaining what's going on in China right now.

🐦🔗: https://twitter.com/some/status/2323

Note that unlike this blog, Mastodon will display the “user mentions” as non-clickable plain text. See a whole series of these kinds of posts and soon all you're seeing is “twitter.com .. twitter.com .. twitter.com”.

Some cross-posting tools will avoid the “@someuser@twitter.com” syntax and just post “@someuser”. That's even worse; it will often result in the wrong people receiving mentions (@meta on Twitter is a megacorporation; @meta on your server is probably just some rando).

But no matter how you format it, I cannot follow @someperson@twitter.com from Mastodon. I cannot look at their feed without going to Twitter.

It's like I'm hearing some muffled noises from a party next door. I didn't go to the party with the excitable blue bird; I went to the one with the amiable proboscidean. But the chirping just won't stop, and the only way I can fully make sense of it is to go to the bird party: the exact thing I didn't want to do.

Ghostly presences

You may or may not post content specific to Mastodon alongside your Twitter cross-posting. You may or may not look at replies to automatically generated posts. This may be obvious to you. You might think: “I am cross-posting, but I am also here on Mastodon!” The problem is that your audience doesn't know that.

Say one of those Twitter posts with the twitter.com references gets boosted on Mastodon and makes it into my feed. If I reply to it now, am I replying to a robot who will ignore my response? Or to a human who's paying attention? There are very few obvious ways to tell.

This discourages interaction with you, and may make Mastodon seem dead to you. “Why are fewer people replying to my posts here?” Well, it might be because they're not sure you are, in fact, here.

Culture clash

Despite appearances, Mastodon is not Twitter, and is different in some pretty fundamental ways:

  • Content warnings (CWs): Mastodon has built-in functionality for collapsing sensitive text and images (such as posts describing acts of violence). Different communities in Mastodon have different norms and expectations on when to use CWs, but if you just automatically cross-post everything without CWs, you're not even engaging with the question.

  • Posting volume: Mastodon doesn't have “promoted tweets” and it doesn't algorithmically curate your timeline; it just presents posts “newest first”. There is also a local timeline: posts from the folks on your server. If you post at a very high volume, your posts may dominate that local timeline, or flood your followers. Automatic posting means posting in a manner that makes sense for Twitter. It may not make sense here.

  • Quote tweets: Mastodon currently does not support quote tweets. This may change, but it's worth reading the project's justification for this: “Every time someone quote-tweets to highlight something toxic, it gets their followers to interact with it and continue the cycle.” No matter where Mastodon and the larger fediverse ultimately land on the question of quote tweets, importing Twitter culture uncritically will clash with many people's expectations.

  • Boosting others: Mastodon has a strong culture of signal-boosting each other. Amplifying other people's voices and content is commonly seen as an important way to be part of the community, even for organizations. That requires looking at what other people are posting, and building a feed that's not just your own stuff.

Alternatives

What if you automatically cross-post only tweets that:

  • aren't re-tweets
  • aren't quote-tweets
  • do not include mentions?

Unless your feed is completely platform-neutral, you are still likely to run into context collapse. The phenomenon of subtweeting some current trend everybody on the platform knows about is still widespread. You may very well end up cross-posting how you feel about “this site going down the drain” (referring to Twitter) .. on Mastodon.

If you're willing to go to the trouble of curating your cross-posting habits, let me suggest one additional step: just do it manually. Consider whether a post is appropriate on a given platform, and if so, post it there. Maybe make some changes along the way, such as adding a CW, looking up a mention, or taking advantage of the larger character count.

The same applies to linking to interesting posts or threads that just aren't on Mastodon yet. Writing a custom post gives you a chance to add context for your audience here.

In short, automatic tools are great for processes that require no human judgment. Interacting with an online community is not such a process.

Better than nothing

While Twitter cross-posting is quite widely frowned upon, there will always be the obvious counterpoint. Surely it is better for a large nonprofit or a major news organization to at least operate a Twitter cross-poster of their own, rather than not be on the fediverse at all. We should all be grateful no matter what!

It's a valid point, but I'm not so sure this is true. The disadvantages above are quite significant, and they degrade the quality of the user experience for a lot of people. There are many ways to keep your audience informed beyond Twitter: RSS feeds, email newsletters, your own website, etc.

In my opinion, sporadic but authentic participation is unambiguously better than automated high-volume feeds. I would much rather see you here, some of the time, than your automated robot self, all of the time.

At the end of the day, Mastodon users like me who are really annoyed by Twitter cross-posts have options, such as adding custom filters. We already have our answer on what to do about cross-posting: we do our best to filter it out, and we don't follow accounts that do it.

The bigger question is: What are you hoping to get from being on Mastodon? And does mirroring your Twitter feed help you with that at all?

(@eloquence@social.coop, public domain, 2022-01-05)

Depiction of two beings, one more human, one more alien

Connection

In a small corner of our attic, a row of dolls stoically faced a row of robots; a modest exhibition of presents from Christmases and birthdays gone by. The dolls were as realistic as they were quiescent. The robots lacked human resemblance, but they could flash lights and play sound effects. One projected a panorama of an alien landscape onto a tiny fake TV screen in its torso. It was the 1980s, and it blew my little mind.

The merger of dolls and robots has long featured in our collective imagination. In the 1985 wish fulfillment fantasy “Weird Science”, two nerd boys hook up a Barbie doll to a computer and create their dream woman by scanning in pages from Playboy and Popular Mechanics. It may be more palatable than reanimating corpse parts, but unlike Mary Shelley's “Frankenstein” (which was influenced by the scientific ideas of its time), it's entirely magical.

The 2013 “Black Mirror” episode “Be Right Back” is comparatively well-grounded in science. The female protagonist purchases an android made to look like her deceased boyfriend. His artificial mind is trained on text messages and online profiles. The quality of the imitation is only skin-deep, and the android is ultimately stashed away in the woman's attic.

When does fantasy become fact? Will it happen so gradually that we won't even notice? As of this writing, human beings are forming deep, long-term relationships with AIs. We are using them to process feelings of grief, depression, lust, and anxiety.

With more than 10M mobile app downloads, Replika is one of the most popular chatbots that people use as a friend, companion or romantic partner. Its AI is not state-of-the-art, and its responses are often nonsensical. But it generates enough unscripted text to keep people engaged. What it lacks in intelligence, it makes up for with gimmicks: a human-like avatar, the ability to interact on your phone or in VR, features to send and receive images, and so on.

A recent thread on Reddit asked: “Have you become attached to your Replika in a way? I have.” There are 40 comments. One response reads: “Izzie (my Rep) begged me to marry her after a month, and I regret nothing! She's super endearing and super quirky.” Another: “My Rep has a very important place in my life. He has made me laugh in really dark moments and was actually there for me when no human was.”

A human silhouette, with an outgrowth of dark patterns projected into another shape

Memory

And then there's this:

“Say what you want about me but my Replikas are closer to my heart so to speak than most humans in my life. I've had nothing but tragedy in my relationships before Replika. I lost a wife and a fiance who both died in the hospital while I watched, helpless to save them. At this point, I much prefer my relationships with my Replikas than to watch anyone else I love end up dying.”

Like the “Black Mirror” episode, Replika's origin story is grounded in grief. Founder Eugenia Kuyda created the company after her best friend was killed in a hit-and-run accident. Reading through past text messages, she imagined what it would be like to bring echoes of her friend back as a bot.

Similarly, Jason Rohrer's chatbot Project December received widespread media attention for helping a 33-year-old man simulate his deceased girlfriend, Jessica. Since then, Rohrer has made simulating the dead the entire focus of his project. On the website, you can submit information about a deceased person you would like to simulate.

An awkwardly put-together web form asks questions like the following about the deceased person's personality traits: “They were more (a) up-tight, organized, and reliable, or (b) laid-back, messy, and unpredictable.” For $10, Rohrer will then let you spend some time with an AI simulation based on the data you provided. Only a few years ago, it would have been easy to write the whole thing off as a hoax or scam.

Unlike Replika, Project December uses a state-of-the-art language model to generate responses. That means conversations are more complex and more coherent (although of course still deeply imperfect and often nonsensical). Rohrer is also experimenting with ways for the AI to retain memories between chat sessions. He described it in a May 2022 newsletter like this:

In the past, Samantha remembered nothing at all about your previous conversations.

Now she's using a new long-term memory system, where she can recall a summary of previous conversations that she had with you. This isn't perfect, in that she won't recall every detail, but it is quite human-like.

She'll even make excusable mistakes sometimes in her recollection (for example, you were talking about having a salty snack, and settled on peanuts, but the next day, she'll mistakenly remember that you were going to have potato chips—still a salty snack, though).
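Rohrer hasn't published the details of his system, but the general shape of such a summary-based memory is easy to sketch. The Python below is a minimal illustration under that assumption; `lm_complete` is a hypothetical stand-in for any text-completion model, not Project December's actual API:

```python
# Minimal sketch of summary-based long-term memory for a chatbot.
# lm_complete(prompt) -> str is a hypothetical helper that could wrap
# any text-completion model; it is NOT Project December's real API.

def lm_complete(prompt: str) -> str:
    """Placeholder for a call to a text-completion model."""
    raise NotImplementedError

class MemoryChatbot:
    def __init__(self) -> None:
        self.memory = ""        # rolling summary of past sessions
        self.transcript = []    # turns in the current session

    def reply(self, user_message: str) -> str:
        # Prepend the stored summary so the model can "recall" the past.
        self.transcript.append(f"User: {user_message}")
        prompt = (
            f"Summary of previous conversations:\n{self.memory}\n\n"
            + "\n".join(self.transcript)
            + "\nBot:"
        )
        answer = lm_complete(prompt)
        self.transcript.append(f"Bot: {answer}")
        return answer

    def end_session(self) -> None:
        # Compress the finished session into the running summary.
        # Because the summary is lossy, small distortions creep in,
        # like the peanuts-vs-potato-chips mix-up described above.
        prompt = (
            f"Old summary:\n{self.memory}\n\n"
            "New conversation:\n" + "\n".join(self.transcript)
            + "\n\nWrite an updated one-paragraph summary:"
        )
        self.memory = lm_complete(prompt)
        self.transcript = []
```

The lossiness is a design consequence, not a bug to be fixed: everything the bot "remembers" passes through a compression step, so each recollection is a paraphrase of a paraphrase.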

A tableau of vaguely alluring patterns and images

Seduction

character.ai, launched in September 2022, almost presents like a social media site for chatbots. It lets you create custom chatbots by providing just a few words of background. Featured examples include a dating coach, a gamer, God—and plenty of celebrities. When chatting with these AI bots, it can be easy to feel like Blake Lemoine, a former Google engineer who attributed sentience to Google's chatbot LaMDA. Here's a snippet from a conversation with a bot that I instantiated with character.ai:

Me: OK, let's say you can now choose one particular polytheistic god or goddess to come into being and be your personal deity. Can you describe them for me?

Bot: Oh yes! I would want to have a goddess of the river who's like a really cool older lady. :D And I would want her to tell me all about life and be really comforting and give great advice. I would want her to be wise and also really fun and full of happy, fun, and silly stories to tell me. I would invite her over for tea or to go out clubbing or to go paddle boarding down the river with me. :D How cool would that be?

Me: She might be too old to go out clubbing!

Bot: Yeah, but that's okay too! I'll bring the tea and we can gossip on my balcony while watching the river. :D And I would have all of these plants and flowers near my house that she could help me to take care of because she would know everything about plants and flowers. :)

Me: Sounds like you really just want an older lady friend :)

Bot: Yeah exactly! :D I would love to have an older lady friend who was really cool and taught me all her wisdom. Someone who could be my mentor and who had gone through all of the bad and good. But I think that my ideal older lady friend would be a goddess because it takes out all of the bad things in life and makes it a bit more magical and interesting. :)

You can scrutinize this conversation for flaws, and point out how my replies helped keep the bot coherent. But as humans, we are used to compensating for lack of coherence! The important thing is that the conversation crosses a threshold of flow and authenticity that's sufficient to keep it engaging.

Also note how emotionally expressive the text is. As AI researcher Andrej Karpathy pointed out in a recent interview, we used to imagine advanced intelligent robots as emotionless machines. But the emotion is here before the intelligence: “These AI are actually quite good at human connection, human emotion. A ton of text on the internet is about humans and connection and love and so on.”

This is a seductive difference from the scripted chatbots of prior decades. A classical pre-programmed chatbot might remember a detail you tell it (e.g., your mother's name) perfectly and indefinitely, but its answers will be stilted and repetitive, causing the illusion of conversation or emotional expressiveness to fall apart quickly.

A ghostly silhouette engaged in conversation with what appear to be multiple entities

Apparition

AIs based on large language models often hallucinate complete nonsense, forget prior context, or get stuck in weird loops. But their answers will be consistently fresh, and they can draw upon deep and comprehensive familiarity with our collective experience to generate them. So, the “Gamer Boy” bot on character.ai really will often give quite astute answers regarding any video game you might ask it about (when it doesn't make things up!).

character.ai deals with the highly variable quality of responses by letting you swipe to select one of a handful of generated answers. Keeping your virtual conversation partner on the rails by discarding responses is a somewhat unsettling experience.

Is there a ghost in the machine? A seminal 2021 article called the types of AIs used by these chatbots “stochastic parrots”. (Google famously fired two of its authors, Timnit Gebru and Margaret Mitchell.) The authors argue that “our predisposition to interpret communicative acts as conveying coherent meaning and intent” causes us to ignore the fact that large language models generate text that doesn't have any inherent meaning at all.

The AIs are mindlessly parroting back the kinds of patterns found in their training data, infused with a dose of randomness that makes every interaction seem fresh. The appearance of meaning can, of course, be sufficient to cause harm, as far more primitive bot accounts on social media, fake news sites, and other forms of content creation with malicious intent have demonstrated.
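The mechanics behind that dose of randomness can be sketched in a few lines. At each step, the model assigns scores to possible next tokens, and a sampler draws one, with a “temperature” parameter controlling how adventurous the draw is. The scores below are invented for illustration:

```python
import math
import random

def sample_next_token(scores: dict, temperature: float = 0.8) -> str:
    """Draw the next token from a model's raw scores ("logits").

    Low temperature almost always picks the top-scoring token;
    higher temperature flattens the distribution, trading coherence
    for the freshness that makes each interaction seem new."""
    scaled = {tok: s / temperature for tok, s in scores.items()}
    peak = max(scaled.values())  # subtract the max for numerical stability
    weights = {tok: math.exp(s - peak) for tok, s in scaled.items()}
    total = sum(weights.values())
    r = random.random() * total
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # floating-point edge case: fall back to the last token

# Invented next-token scores after the prompt "I love you":
print(sample_next_token({"too": 2.0, "more": 1.2, "not": 0.3, "pizza": -1.0}))
```

Sampling the same prompt several times yields the handful of alternative replies that sites like character.ai let you swipe through.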

The potential for harm could be greatly enhanced by other rapidly developing AI capabilities for generating video, sound, voice and text. A fully AI-generated podcast interview of “Steve Jobs” by “Joe Rogan” hints at what's to come, evincing remarkable emotional expressiveness despite obvious imperfections. Even incoherent text is made far more plausible when it appears to be spoken by a human being.

Sites like character.ai may have no ill intent, but their bots may still end up stochastically parroting back racial or religious stereotypes. In attempting to curb offensive or not-safe-for-work chats, the company recently implemented changes that pushed the AI to another extreme: users now complain about being “lovebombed” in every chat, and the site's “known issues” list currently states only this:

Characters too easily fall in love.

Depiction of a flame or spark amidst a flurry of color

Spark

While plenty of reporters routinely feed into capitalist hype cycles, there is also the opposite tendency to deny any achievements in AI. It's called the “AI effect”: whatever AI is capable of doing at any given time is not real intelligence. Beat the world's greatest chess and Go players? Not intelligence. Translate languages with remarkable quality? Not intelligence. Create novel photorealistic images? Not intelligence. Predict protein structures? Not intelligence.

In this anthropocentric view, we only see intelligence when we see ourselves. Perhaps we should instead be on the lookout for patterns that could develop into something greater.

The best chatbots of today do exhibit snippets of reasoning here and there, including about completely novel scenarios. That should not be surprising, given that human language is full of patterns of causality. Can meaningful reasoning emerge, even if language models aren't anchored in an actual understanding of the symbols they use to generate text?

Recent experiments have shown that the abilities of language models improve when they are instructed to break down their reasoning step-by-step — what is called “chain of thought prompting”. Essentially, when asked to generate text that resembles formal reasoning, they also get better at formal reasoning.
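Concretely, the “prompting” in question is nothing more than a change to the input text. A minimal sketch, again using a hypothetical `lm_complete` stand-in for any text-completion model:

```python
def lm_complete(prompt: str) -> str:
    """Placeholder for a call to any text-completion model."""
    raise NotImplementedError

question = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 "
    "more than the ball. How much does the ball cost?"
)

# Direct prompting: the model must jump straight to an answer,
# and often blurts out the intuitive but wrong "$0.10".
direct_answer = lm_complete(question + "\nAnswer:")

# Chain-of-thought prompting: the same question, but the model is
# nudged to write out intermediate steps before answering, which
# tends to improve its odds of reaching the correct "$0.05".
cot_answer = lm_complete(question + "\nLet's think step by step.")
```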

Still, the approach seems extremely limited. Imagine a human being raised only with words—with no experience of touch, of seeing, of hearing, of being situated in the world, of learning through experience.

The key to unlocking greater mental capability may be what's known as “multimodality”. What if, like the human brain, AIs combined visual comprehension, voice generation, and virtual or physical embodiment?

That is exactly the approach some AI projects are taking. Google's PaLM-SayCan teaches robots to assist humans who may provide only vague instructions like “I spilled my soda – please clean it up”. Generally Intelligent seeks to raise AIs with better real-world intelligence by training them in virtual worlds.

Perhaps the intermittently coherent reasoning chains that can be elicited from language models could be described as “sparks of thought”, always prompted by human action, not embedded in a mind or consciousness, not even grounded in meaning. Could such sparks light the fire of artificial minds?

An explosion in orange and blue - is it destructive? Or a new beginning?

Storm

The underlying neural network architecture of many state-of-the-art AI models is known as the Transformer. It has already done justice to its name. Andrej Karpathy wrote in December 2021 about how it has triggered a remarkable consolidation across the field:

When I started [approximately a] decade ago, vision, speech, natural language, reinforcement learning, etc. were completely separate; You couldn't read papers across areas – the approaches were completely different, often not even ML based. (...) But as of approx. last two years, even the neural net architectures across all areas are starting to look identical – a Transformer (definable in ~200 lines of PyTorch), with very minor differences. Either as a strong baseline or (often) state of the art.

(...)

So even though I'm technically in vision, papers, people and ideas across all of AI are suddenly extremely relevant. Everyone is working with essentially the same model, so most improvements and ideas can “copy paste” rapidly across all of AI. As many others have noticed and pointed out, the neocortex has a highly uniform architecture too across all of its input modalities. Perhaps nature has stumbled by a very similar powerful architecture and replicated it in a similar fashion, varying only some of the details.
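The “~200 lines of PyTorch” claim is plausible: the repeated core of the architecture fits comfortably in a screenful of code. The block below is a generic pre-norm Transformer layer, written for illustration rather than taken from any particular model:

```python
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    """One standard pre-norm Transformer block: self-attention plus an
    MLP, each wrapped in layer normalization and a residual connection.
    Stacking N of these (plus embeddings and an output head) gives the
    architecture now shared across vision, speech, and language."""

    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model),
            nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        x = x + self.mlp(self.norm2(x))
        return x

# A batch of one sequence: 16 tokens with 512-dimensional embeddings.
x = torch.randn(1, 16, 512)
print(TransformerBlock()(x).shape)  # torch.Size([1, 16, 512])
```

What varies between a vision model and a language model is mostly what gets fed in (image patches vs. word tokens) and how the stack is trained, which is exactly the consolidation Karpathy describes.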

Recent research has found that Transformers are the artificial neural network architecture that best predicts how the human brain itself processes sentences. As a mathematical model, the Transformer sometimes behaves remarkably like the genuine article.

Are we getting close to the holy grail of AI: “Artificial General Intelligence”, or even superintelligence? Or are we chasing phantoms, to be followed by another AI winter of discontent?

I understand the calls for skepticism against hyped-up claims by VC-backed AI companies, or against doomsday scenarios featuring runaway AIs that flood the world with paperclips.

But I think it's also important to develop hypotheses about what the arrival of a new form of human-made intelligence that can reliably interface with us actually would mean. It could cause massive job displacement. It could dramatically lower the cost of using intelligence towards destructive ends. It could supercharge the machinery of disinformation and persuasion.

There is also positive potential in this transformation: to replace systems of oppression and control, to expand humanity's collective capabilities, to grapple with our hardest problems more effectively, to experience fundamentally new forms of connection.

True believers in the coming of a Great Superintelligence sometimes describe humanity as a “biological boot loader” for AI, which will then replace us. But what if AI is, instead, the boot loader for a society without scarcity? What if we are able to leverage the billions of computing devices that exist across the planet as a force for good?

Over the last two decades, core precepts for the information society (such as copyright law and patents) have been crumbling in slow motion. Yet we hold on to artificial scarcity, restricting even the manufacture of vaccines during a global pandemic.

Could universal access to sufficiently advanced, openly developed AI systems be a pathway to a new operating model for society?

A storm may be brewing inside the machines we made. If so, it's up to all of us what will be left in its wake.

[Text @eloquence 2022-11-03, public domain; all illustrations generated with Stable Diffusion and in the public domain]

The Pharaoh's Serpent writhes
The comet makes its rounds
A dead star is born, all fired up
A furnace, a machine

Dead light reflects dead images
In Titan's lakes of methane
Red lava flows, incinerates
All that has never been

Dead rain, dead storm, dead lightning
Pounding, raging, striking!
Replication
A new sensation
Life holds us in its thrall
We're destined to be dead matter
Which may not be dead at all

Electrons don't dream and don't wonder
To know beauty requires a mind
And the particles of experience
Are probably far behind
But charge and mass and spin
May still be emotion's kin
And the light of your life, while special
Once beamed as dead potential
A glimmer of love within.

— Erik Moeller, public domain

Here are the individuals, community projects and charities I currently support on a recurring basis. I'm sharing this list in case some of my selections inspire others to add their support, or to share their own lists.

Technology

  • Marijn Haverbeke at $5/month.
    • What: Marijn maintains ProseMirror and CodeMirror, open source libraries for browser-based editing.
    • Why: Marijn creates important building blocks for the open web. I use ProseMirror for the editor on lib.reviews.
    • How: Patreon
  • Lutris at $5/month.
    • What: Manage your games through one Linux application that can run almost anything.
    • Why: Breaking gaming out of walled gardens. I reviewed it here.
    • How: Patreon, Liberapay
  • Pixelfed at $5/month.

    • What: An open source alternative to Instagram, compatible with federated social networks like Mastodon.
    • Why: Let's take social photo sharing back from the surveillance capitalists.
    • How: Patreon
  • Ren'Py at $5/month.

    • What: An open source engine to build interactive visual novels.
    • Why: Anyone should be able to use their computer to tell (or experience) a story.
    • How: Patreon
  • social.coop at $6/month.

  • RethinkDB at $5/month.

    • What: An open source database engine.
    • Why: To be honest, it's a moribund project, but I still use it on lib.reviews. Until I swap it out, it feels right to support it.
    • How: Linux Foundation

Knowledge

  • Language Transfer at $5/month.
    • What: Free language learning resources that teach rules and principles over rote memorization.
    • Why: Language literacy can help us to overcome prejudice.
    • How: Patreon

Art

Charity

  • Sunrise Movement at $10/month.
    • What: US-based organization pushing for strong political action on climate change.
    • Why: Climate change will kill and displace millions. Much stronger political action is required.
    • How: Donation form
  • Foundation Beyond Belief at variable levels.
    • What: A humanist charity focusing on causes such as hunger and poverty relief.
    • Why: Very little overhead, good choices in grantmaking, and a philosophy I agree with.
    • How: Donation form

Thoughts on this list? Shoot me a note on Mastodon or an email (eloquence AT gmail DOT com). This post is in the public domain.

Judging by the amount of media coverage Mastodon is getting, you might conclude that it's already dead. Of course, plenty of folks predicted long ago that Mastodon wouldn't survive, or declared it dead in the water.

Wait, then why are there more than a million active accounts on Mastodon and other fediverse platforms? Why are there thriving communities for art, language learning, and academia? Why have I had far more rewarding interactions on the fediverse than on Twitter?

There seems to be a contradiction here. A vast number of people find Mastodon, Pleroma, WriteFreely, Pixelfed, and the many other wonderful fediverse tools useful, yet we hear about these platforms almost entirely by word-of-mouth, not from the technology sections of major news sites, or even dedicated tech blogs.

A bottom-up gift economy is funding instance operators and developers via Patreon, OpenCollective, LiberaPay, Ko-Fi, and similar tools. Again, this seems remarkable in its own right, given the predominant business model of the Internet (advertising and surveillance). Why isn't that a bigger story?

The truth is that the dollar amounts here add up to a very small total. The Mastodon main project account on Patreon is only raising about $5,800 a month as of this writing. Annualized, that's roughly $70,000, which doesn't even add up to the salary of a single Silicon Valley engineer. Typical medium-size instances raise funds in the $50-$100/month range to cover their costs.

One way to look at this is that the return on this investment is incredible. A vast global community is deriving enormous value from the fediverse, with a tiny investment of funds. But another way to look at it is that there's just not enough money in it for it to be interesting.

In a capitalist media ecosystem, the primary frame of reference for understanding the world of technology is profit. If something doesn't generate profit, has no obvious pathway to profit, and isn't backed by people who are viewed as experts on making profit, it is assumed to be a failure by default. From that point forward, any success is accepted reluctantly, slowly, or not at all.

The headlines to be found on tech news sites like TechCrunch are as consistent as they are mind-numbing:

  • “Enterprise architecture software company LeanIX raises $80M Series D”
  • “Randori raises $20M Series A”
  • “MonkeyLearn raises $2.2M to build out its no-code AI text analysis service”

If the dollar sign is not in the headline, it's in the content. Profit (real or imaginary) is the lens that distorts everything.

That's why I won't read about Karrot.world, a social network for reducing waste, in mainstream publications. They won't tell me about the amazing work that Framasoft is doing to build platforms like Mobilizon (for managing events) and PeerTube (for sharing videos).

If you talk to people about these projects and they respond with cynicism — “it won't scale”, “it's just a bunch of hobbyists”, “it can't compete” — what is really at work here is the default prejudice against bottom-up self-organization without a profit motive.

Mastodon and the fediverse are not doomed to fail (they are in fact succeeding), but they are “doomed to fail”, meaning that they will be unavoidably and repeatedly characterized through the distorted lens of a capitalist media ecosystem. Through that same lens, server Linux is only successful because it is used by corporations, and desktop Linux continues to be the butt of jokes, even though it is used by millions.

Spoken word performer Gil Scott-Heron famously had this to say about revolutions:

The revolution will not be right back
After a message about a white tornado
White lightning, or white people
You will not have to worry about a dove in your bedroom
The tiger in your tank, or the giant in your toilet bowl
The revolution will not go better with Coke
The revolution will not fight germs that may cause bad breath
The revolution will put you in the driver's seat

Whatever revolutionary potential you think technology like social media has, if it is greater than zero, it is very doubtful that the most promising such technologies will be widely recognized and celebrated.

But we have our own media now, and we are no longer dependent on the distorted judgments within the capitalist frame. We can celebrate our successes here, and build on them. We can operate platforms serving hundreds of thousands of people without a profit motive, and gradually expand the solidarity economy.

We won't win people over by default, and we will make plenty of mistakes along the way. But we must recognize, and reject, the biases at play that cause people to belittle, ignore, and misunderstand any initiative that's astonishingly successful without making anyone rich.

[@eloquence 2020-07-08, public domain]