As we make iterative technological jumps, what do we lose in the process? I'm thinking about this with Snap.as. It's cool what Google does with machine learning, but what do we lose when the machine puts our albums together instead of us? Or when a good friend does? What connections do we miss out on? What is more important: connecting to an illusory intelligence, or another human, animal, the natural world?
One thing we don't see very often in the software world is a project being finished. Why does some piece of software need a visual refresh every year? Why are there more freaking menus here when I could navigate the site just fine before? Why can't we be content with something we built that's just really, really good as it is?
The answers, in many cases, are: it doesn't; no good reason; we can.
One of the things I'm happy to have accomplished, that I noticed the other day writing my last post, was that this thing I built does exactly what I need it to and no more. Are there some places that could be smoothed out? Sure. But overall, it's pretty damn good, and has remained that way for years.
The exciting part for me, the guy building the thing, is that the software is still improving, but it's not disturbing the user. I, the writer, am happy that the publish button is in the same place it was yesterday — but oh by the way, now this cool thing will happen where this post will go out to followers in the fediverse as soon as I publish it.
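Under the hood, that federation step amounts to the server wrapping the new post in an ActivityPub "Create" activity and delivering it to followers' inboxes. As a rough sketch of what that payload might look like (the actor and post URLs here are hypothetical examples, not Write.as's actual implementation):

```typescript
// Sketch of an ActivityPub "Create" activity for a newly published post.
// All IDs and URLs below are illustrative placeholders.
interface Activity {
  "@context": string;
  id: string;
  type: string;
  actor: string;
  to: string[];
  object: {
    id: string;
    type: string;
    content: string;
    attributedTo: string;
  };
}

// Build the activity a server would POST to each follower's inbox.
function createActivity(actor: string, postId: string, content: string): Activity {
  return {
    "@context": "https://www.w3.org/ns/activitystreams",
    id: `${postId}/activity`,
    type: "Create",
    actor,
    to: ["https://www.w3.org/ns/activitystreams#Public"],
    object: {
      id: postId,
      type: "Article",
      content,
      attributedTo: actor,
    },
  };
}

const activity = createActivity(
  "https://write.as/example-author",                  // hypothetical actor URL
  "https://write.as/example-author/hello-fediverse",  // hypothetical post URL
  "<p>Hello, fediverse!</p>"
);
console.log(activity.type); // "Create"
```

The point for the writer is that none of this machinery is visible: the publish button behaves exactly as before, and federation happens behind it.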
Build something to be good and as close to perfect as possible — not for endless improvement.
After you've been a tourist abroad once, the second time there feels different in some ways. You're no longer bouncing from place to place, riding planes and trains to find a point of interest, snap your selfie, and carry yourself and your fanny pack to the next place on the brochure. You walk slower, notice the sidewalk and the people, the birds and the smells; and really, you could be anywhere in the world.
Humans are humans, and society is full of good and bad actors. Technology, at the most fundamental level, is a neutral tool that can be used by either to meet any ends. There is nothing inherent in technology or the internet that says it must be used for noble causes, just like there is nothing inherently evil about technology — it is what its users decide it is, through usage over time.
Still, I and many others believe the internet should be used for good, and more importantly, that it's not exceptionally difficult to do. In my mind, it requires a few things: first, an alignment of incentives between the makers of technology and the users of technology, starting with the business model. Second, a higher regard for professional ethics in the entire industry, at all levels.
I signed up for Facebook in 2006, while I was still in high school. I “deleted” my account for the first time in 2008. Since then I've seen it evolve from chronological feed to platform for FarmVille, et al. to sprawling ad-spewing machine hoping to infect every device you live on.
Today I care enough about privacy to take a principled stance on it, and after dropping maintenance for the Write.as page, I got rid of the last vestige of Facebook on my phone — the Pages app. Otherwise my profile sits there, happily populated with “Likes” I don't actually like and a Timeline featuring a life of adventure, like graduating college 5 decades before I was born, and living in Antarctica for a short period of time. I don't know if obfuscation like this completely works, but I like to think it helps.
Still, I occasionally hear about events and certain pages that are only available on Facebook. But with their cookies blocked on all my computers, I get this wonderful experience:
Like any other service that starves without trackable human attention, Facebook is happy to degrade their product to this point if it means annoying non-users enough to make them sign up. But the web is beautiful because users have control.
So I took back some control. I made a small browser extension that hides all of the annoying sign-up and log-in prompts, so you can safely click that Facebook link without being assaulted upon your arrival. What you get is something like this:
Even if you haven't deleted Facebook, my hope is that this will make it a bit easier to log out, uninstall, and step back from the platform for a bit.
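For the curious, the core of an extension like this can be tiny: a content script that injects a stylesheet hiding the overlay elements. A minimal sketch of that idea — the selectors below are made-up examples, since Facebook's actual markup changes constantly and the real extension has to track it:

```typescript
// Hypothetical selectors for sign-up/log-in nags; real ones change often,
// so an extension like this needs ongoing upkeep.
const NAG_SELECTORS = [
  "#signup_banner",        // example: bottom sign-up banner
  "#header .login_form",   // example: header log-in form
  ".modal_backdrop",       // example: "log in to continue" overlay
];

// Collapse the selector list into one CSS rule that hides every match.
function buildHideCss(selectors: string[]): string {
  return `${selectors.join(", ")} { display: none !important; }`;
}

// In a real content script, this CSS would be injected into the page:
//   const style = document.createElement("style");
//   style.textContent = buildHideCss(NAG_SELECTORS);
//   document.head.appendChild(style);
console.log(buildHideCss(NAG_SELECTORS));
```

That's the beauty of the web mentioned above: because pages render on your machine, a few lines of CSS are enough to take back control of the experience.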
This got me thinking about the similarities between architecture and building digital products that people inhabit. Digitally we deal with different laws of physics and movement, but building digital spaces is just as much about realizing how someone will move through it. Our software's boundaries and pathways determine what can or can't be done, and how that makes people feel within them, in the same exact way that physical walls do.
We have to think of the exact activities that will take place within our spaces. Are we a public square that will limit private influence, and breed open discourse and free use by all? Are we an office — and what work will be done there, and how freely will people move through our “building” — and is an office even worth building? Are we a studio, where we need to let enough light in and be able to see the trees outside, and the space itself needs to inspire us?
Your identity online is a question of who you want to be.
Do you want to provide a recorded history of your thoughts and life to the world? Or only those finalized, edited, and made perfect to associate with your static identity? The internet provides both unparalleled opportunities to socialize and unprecedented degradation of the ability to be human in a fluid, ever-moving fashion (that is, where history is relegated to the participants' imaginations and not primary sources in digital history books).
Our selves are both formed and built by our interactions with the world — it's why social media can depress us. When we lose ourselves, I'd go so far as to say it's from fabricating and curating a personal image. And these platforms can encourage that.
The physical world affords us plenty of opportunities to build a fake persona, but also natural opportunities to form a self as we'd like. There are quiet places we can go in the real world to not be judged for our thoughts and actions — at the least, within the walls of our own homes. But the design of much social software makes this more difficult to do; raises the stakes on every interaction; makes it harder to act naturally.
If you used Write.as between February 2015 and June 2016, you'd remember a totally free service where you could publish individual “blog” posts without signing up. I had launched by building a simple website that showed some nice-looking text, along with some apps to publish that text from whatever device you wanted. It was all anonymous — you couldn't sign up even if you wanted to — and it struck a chord with all kinds of people who just wanted a simple digital tool to write with.
As I launched on our final native platform (iOS) five months in and wrapped up the web application, I started thinking about how to make it sustainable.
I love that Mastodon has risen as a viable alternative to Twitter. It's done almost everything you need to take on a major centralized service whose main sell is some technical aspect:
Providing a friendly / familiar UI
Making the underlying tech as invisible as possible
Providing comparable functionality, with incremental improvements
Having nice-looking, solid mobile apps
Growing quickly as a network to make socializing interesting and useful
Since I saw how fast it was taking off earlier this year and I had a good domain name in mind, I decided to start Writing Exchange as a Write.as-supported instance. The other week it actually saw a sudden influx of new users (around 200), which has been awesome to watch.
While I know that decentralized protocols like the ones behind Mastodon are the future, and I'm overjoyed that Mastodon's high adoption means so many social interactions will happen without corporate surveillance, I'm still concerned about copying over the addictive features of social media.
My main gripe at the moment is with favorites, and how favoriting a post/toot sends a signal back to the author, instead of it being a private action. I have a problem with favoriting and not boosting because boosting carries weight — by placing another's words on your own profile — and a more specific meaning. This weight means it can't really be used just to make the author feel good, and with time keep them fiending for that little hit of Mastodon-sourced dopamine.
Favoriting, on the other hand, is inexpensive and vague when it's a public signal — social, psychological candy. Maybe you favorite something because you genuinely love what someone wrote; maybe you just want to save it for later; maybe you're silently agreeing; maybe you want to let them know you saw their message; maybe you want to politely put an end to the conversation; maybe you want to offer moral support; maybe you don't want to converse. But no matter your nuanced response, the author sees only a star. Maybe they stop what they're doing in life to pull their phone out of their pocket, click a button, and see that you “favorited” what they posted. Then there's nothing for them to do but feel “good” because someone liked (or somethinged) their post. There's no bridge into a larger conversation, no social introduction, no cue to interact. Just a piece of candy.
It's a minor gripe in the grand scheme of things, but an important one to me. If we're going to build the web world we want, we have to constantly evaluate the pieces we bring with us from the old to the new. With each iteration of an idea on the web, we need to question why certain aspects exist in the first place, and decide whether every old thing, carried over unimproved, should still be with us. It's the only way we can be sure we're moving — if not in the right direction, at least in some direction that will teach us something.