Devin Prater's blog

This log will detail my search throughout Linux for accessible games (besides the Audio Game Manager), my reaching out to developers, and their responses. Hopefully, this will motivate me to keep going in the face of, undoubtedly, much failure.


Because I’m weird, I can’t just start with any old app category, oh no. ToDo managers? Pomodoro timers? Text editors? No, I choose to bang my head against games. And because I want new blind Linux users to have some games to play outside the Windows audio games. It’s like a sighted person coming to Linux and finding out that all there is to play is Windows games. And there are a good many games made for Linux, so why not? Hopefully I can get at least one game made accessible, or find that one already is accessible. If I can do at least that, then that’s one more success story of the open source community actually giving a crap.

Testing the games

I test each game using the Orca screen reader, version 3.38.2. I run the Mate desktop (version 1.24.1) on Arch Linux. My PC has an Intel Core i7-6500U CPU at 2.50GHz and 8 GB of RAM and a Skylake GT2 [HD Graphics 520] graphics card. At least, I think that’s the graphics card. 😊

Game list

I am getting the list of games from the Arch Linux Wiki. It’s separated into game genre headings, so that’s great. At a fellow Mastodon user’s suggestion, I’m going to go with casual games first. Arch Wiki List of Games

So, from here, I’ll have the game category, then the games, their accessibility, and contact with the developer.

Casual Games

Aisleriot (version 3.22.12)

Upon starting the game, I hear “Klondike, drawing area.” The “Drawing area” is what the Screen Savers use as a “frame” to show the picture. But in this case, I assume a game has started, so this should be filled with cards. Whenever I press Tab, I hear “new game, button, start a new game”, and when pressing it, the drawing area stays the same, so that’s why I assume a game has already started.

When pressing Tab after the “new game” button, I’m placed back onto the drawing area. If I use the Right Arrow while on the “new game” button, I find the other buttons on what I assume is a toolbar: “Select game,” “Deal” and “hint”. If I press Enter on “select game,” I am able to choose another game type to play. Even so, the Drawing Area is still there. If I press the “Hint” button, I am given an accessible dialog with a hint on what to do next, like “Move the ten of hearts onto the ten of clubs.” I can dismiss the hint with the Enter key. If I press the “Deal” button, back in Klondike mode, nothing is reported, but two new buttons, “undo move” and “restart” appear.

When I press F10 to open the menu bar, that part is accessible. Pressing “New game,” “restart,” entering the “recent games” menu, and closing all work, in that I can perform those functions. The statistics screen was much more accessible than I expected, with textual labels for each field, along with the numbers associated with them. There is also a button to reset the statistics, and one to close the window. None of the items in the “view” menu affect accessibility, although removing the “tool bar” hides the buttons “above” the drawing area. Nothing in the Controls menu affects accessibility, and neither does anything in the Klondike menu. The help menu lists keyboard shortcuts, but none regarding accessibility.

In short, everything is accessible except the cards and ways of moving and controlling them. I don’t know much about Solitaire, but I do know there are supposed to be cards, and from the hints, they can be moved. Gitlab Issue


You can always subscribe to my posts through Email or Mastodon. Have a great day, and thanks for reading!

During the month of… November? December? Something like that… I found myself being called by Linux again. I just can’t stay away. I go to Windows for a while, and then something happens. VS Code became sluggish and unreliable, and I just… just couldn’t deal with crap anymore. Sure everything else worked well enough. I could play my audio games and Slay the Spire (using Say the Spire), but gosh darn it I missed freedom.

So, I thought about it. People on the Linux-a11y IRC use Arch, because they’re all pretty much advanced users. Other blind people use Ubuntu Mate, or Debian. I tried Fedora, and found that I couldn’t even run Orca, the Linux graphical screen reader, within the Fedora installer. I tried Debian in a virtual machine, but the resulting system, after installation, didn’t speak. I tried Slint Linux, a Slackware-based distribution, but there were sound issues, and they weren’t something I could deal with.

The Need for Speed

So, I thought about different Linux distros: their priorities, their values, and whether they keep packages up-to-date. I like distros that keep packages up-to-date. Not doing so, to me, feels like a slap in the face of developers, the distro maintainer saying: “We don’t trust that you can write good enough software, so we’re going to leave your software at this version for six months or more. And then, when we release a new version of our distro, we’ll go into your code and ‘backport’ things into your old version.”

Another issue is that older software isn’t necessarily better. It definitely isn’t necessarily more accessible, and that is my main concern, and is, I suspect, why most “power user” blind Linux folks go with Arch. They already have GTK4 in their repository. Can Ubuntu, or even worse, Debian, say the same?

Now, I know that there is Flatpak, Snap, and probably a lot of lesser-known packaging formats. But I see them as add-on package managers, supplementing the system package manager. Also, they wouldn’t be necessary if Ubuntu and Debian would package up-to-date software. Snap and Flatpak are solving a problem that Ubuntu themselves created. Isn’t that nice?

Choosing Arch

So, I looked around. Ubuntu, Debian, all the main distros were fixed releases, all stale, and I like to explore. I use my computer for more than just simple stuff; I can’t have old, outdated packages. And it’s so sad that youtube-dl, and even Mycroft, have to explain to users how to install from Pip, or from a Git repo, just to keep the package up-to-date. But enough about that. A person on the IRC channel suggested Anarchy, an “easy installer ISO” for Arch. So, I took a look. => Anarchy Installer (HTTP)

Since late last year, the base Arch Linux distro has come prebuilt with accessibility stuff. Just press Down Arrow once while booting, then Enter, and the Arch Linux ISO will come up talking. So, maybe Anarchy would do the same.

I got the ISO, flashed it to a flash drive, and booted it, doing the steps to boot in accessible mode. And it worked. The command-line interface was pretty easy to use. It left me with a system that was inaccessible out of the box (the installer had no settings to configure that), but I was able to chroot in from the ISO’s command line and set things up.

Setting up my New System

First, I enabled espeakup.service. This runs the Speakup screen reader with the eSpeak synthesizer, and that was enough to give me speech at the console. Then I installed Yay, the AUR package manager thing (I later switched to Paru). Then I installed the Mate desktop, as it’s currently the only desktop environment accessible enough for easy use. Hopefully Gnome gets back into the game with Gnome 40, but I’m not holding my breath.

Then, I added these lines to my .xinitrc:

export GTK_MODULES=gail:atk-bridge

exec mate-session

And then I could get going. I started the X session (startx) and ran Orca from the run dialog (Alt + F2). But still, some programs weren’t accessible. So, I went to the System menu, down to Preferences, then Personal, then “Assistive Technology,” checked that box, and things were pretty smooth after that.
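For anyone following along, here’s the whole dance as shell commands. This is only a sketch: the root-only lines are commented out, the package names are how I remember them (check your repositories), and the script writes the .xinitrc lines to a scratch file so it’s harmless to run as-is.

```shell
# Console speech, as described above (root-only; uncomment on a real system):
#   systemctl enable --now espeakup.service
# Desktop and screen reader (package names as I remember them; check yours):
#   pacman -S mate orca

# The ~/.xinitrc from above, written to a scratch file here so this sketch
# is safe to run anywhere; use "$HOME/.xinitrc" for real.
xinitrc=$(mktemp)
cat > "$xinitrc" <<'EOF'
export GTK_MODULES=gail:atk-bridge
exec mate-session
EOF
cat "$xinitrc"
# startx    # then Alt + F2, type "orca", press Enter
```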

My Experiences so far

I don’t think I’ll be going back to Windows any time soon. There are problems: ALSA sends espeakup through my speakers even when headphones are plugged in; I need to learn more about PulseAudio so I can add more than one LADSPA effect at a time, and attach them to whatever sound card I’m using instead of creating a new one; and I do miss the sound packs created for MUDs that only run on MUSHclient. But there are things about Linux that I do love:

  • Emacspeak: The more I use it, the more I love it.
  • GPodder: A Podcasting client that not only is accessible, it even allows me to get Youtube channels as podcasts! I mean, that’s amazing!
  • Mutt: I’m really starting to like this simple Email client. Sure, the hotkey list bar at the top is a little annoying and I wish I could just make that go away and just reference keyboard commands when I need them, but overall I love it and wish I could use it with more accounts.
  • Audio Game Manager: I probably wouldn’t be on Linux for this long without this tool. It brings audio games from Windows to Linux with Wine and preconfigured settings.
  • Retroarch: Now that it’s accessible, I love playing Dissidia Final Fantasy on it. Although, trying to “record a movie” on it really slows things down. I wonder if streaming would do the same.
  • BRLTTY: This has saved my butt on multiple occasions when Alsa couldn’t find any audio devices or something and I had to fiddle with Pulseaudio to fix it. I don’t know much about audio on Linux really, I just revert any change I made on behalf of something like Mycroft or whatever. Oh, BRLTTY is basically a screen reader for braille displays, meaning I don’t need audio to use it.
  • Emacs: What can I say? Most of my work is done inside Emacs. Most of my play is done inside Emacs. Nearly all of my writing and reading is done inside Emacs. I’m considering having my window manager inside Emacs. One day, my brain will be inside Emacs. No Microsoft text editor can compare with Emacs and Emacspeak’s ability to give as much information as possible, even syntax highlighting, bold, italics, just everything!
  • The command line: Sure, we have this in Windows, but it’s more of an afterthought, a bolted-on feature at this point. In Linux, it’s a first-class citizen. I’m not a power user by any stretch of the imagination, but I can navigate the file system, run commands with arguments, all the basic stuff. I can do this in audio and braille. I can use nano a bit to edit files, and I know the general layout of config files and am not as scared of them as I used to be, although I need to learn to read the manual before I dive into them.

Also, in my experience, Linux breeds creativity. You could use it as a regular desktop user, but if you dig just a tad, you see the building blocks. And it makes you want to learn about them, to play with them, to maybe break them a bit but then try to fix them. And some things you can’t make work: like the fact that my laptop, having just a USB C port, can’t display video over Thunderbolt. (I have a Thunderbolt dock at work connected through Display port to a monitor.) But some things you can do. You can script things using Python, then put them in your bin folder to run from anywhere. You can make your own programs! You can turn your Linux machine into a Bluetooth speaker to listen to books from your iPhone on your laptop! There is just so much possible with Linux, and even more possible with coding knowledge.

Flaws in the Utopia

This isn’t to say that Linux is perfect. It is made by people, and mostly hobbyists at that. This isn’t to say their code is sloppy, or that they don’t care. It does mean that they aren’t held to any kind of company standard, especially regarding accessibility. Linux is more of a community effort, so users will need (me included) to interact with the community to get things fixed, or even just to remind them that blind users actually do exist. We do have our own IRC server, a little corner of the Internet, but we won’t get anywhere by just staying in that corner.

  • The graphical interface can be tricky to use, like remembering that you have to press Control + Tab to reach some parts of Fractal, and there are still unlabeled buttons in official Gnome apps like it. But there will be a complete rewrite of the interface, so hopefully accessibility is considered in the process.
  • There are fewer games, and far fewer accessible games, on Linux. I’ll begin reaching out to game developers to see if anything can be done about this. In the meantime, there is the Audio Game Manager for playing accessible Windows games.
  • You’ll have to Google things, a lot: There aren’t many blind people who use Linux. That number grows by one or two per year, and the Audio Games forum has a few members who use Linux, but there aren’t many outside that.
  • Sound isn’t as convenient as on Windows, where you have enhancements, bit rate and format control, all in one place. And the PulseEffects package makes things very laggy, whereas loading a LADSPA module directly produces no lag.
  • Sound can be slightly rough when first booting up the computer.

Looking Forward

I’ll probably stick with Linux, as long as this laptop survives. I’ve had it for about five years, and it’s still pretty well up to the task. It performs well and has a good enough keyboard with a number pad, but a few ports, especially the headphone jack, are becoming loose. I’ll have to see about getting a USB sound card or something, unless ports can be tightened. And a new battery would be good too. I ordered one, so we’ll see if it can be replaced.

I’ll still reach out to developers to see if the accessibility of apps can be improved. Hopefully, indie game developers will be receptive as well. Eventually, I’d love to have more blind people come to Linux, and not just jump into the blindness servers to moan and groan, but continue to push for greater accessibility on Matrix, on IRC, and on the forums of desktop environments and graphical toolkits like GTK. Linux makes me feel passionate about technology, about open source, about what’s possible, whereas Windows just felt contrived, the accessibility team preaching and preaching on their Twitter account, saying all the right things. Saying all the right words. But when it comes time to deliver, they fall short. Windows 10’s Mail app is still a pain to use with screen readers: when I put keyboard focus on a thread, it automatically expands, and I have to collapse it just to move down quickly to another message or thread, and when I press Control + R to reply, nothing is spoken to let me know the command succeeded. Not even Thunderbird, even though it locks up every few minutes, has those kinds of problems. And the only other good email client is Mutt.

So, Linux feels more “real” to me. It doesn’t try to hide its accessibility issues with warm words and “we hear you!” tweets. It could do better. Earlier today I suggested to people involved with the PinePhone that accessibility could be a greater focus, and essentially got back “maybe you can focus on Linux desktop accessibility first.” I guess I’ll have to. I’m not a developer, but if that’s what people want, sure. Why not. Maybe I’ll even learn to enjoy it. But for now, I’m more of a writer than a programmer. I’ve made one script that’s used in “production,” and I find programming easier to learn and enjoy now, but it’ll be some time, a lot of time, before I’m able to deal with low-level stuff in Linux.

But, until then, I’ll keep exploring, learning, and trying my best to get the word out, to keep people cognizant that accessibility is an issue, and that they don’t have to be an expert to help.


You can always subscribe to my posts through Email or Mastodon. Have a great day, and thanks for reading!

It’s been a while since I’ve written a blog post. But my entry into Gemini space prompts me to finally write about what’s been going on with me. The simplicity of writing in Gemini, and the “cool new thing” feel, is quite inviting. And because the people hosting it have given me a space to serve this, I have direct access to the files, the processes that go into how things look, everything.

Static Site Generators and my disillusionment with them

I like Hugo, I really do. But a theme problem got in the way, leaving me unable to actually build the site. So, I looked for another one, finding Nikola. It worked well, but I couldn’t customize it that much. It had a great plugin that took the text of an article and made the whole blog into a podcast using ESpeak to speak the articles, but I had no idea how to customize the theme, put in my usual “reading time” functionality, or any of that.

So, I just left the blog as it is, a basic Nikola site on Github Pages. I didn’t want to mess with it anymore. I didn’t want to have to deal with config files, running scripts, all that. Besides that, I’ve been very busy with work-related stuff.

Python for lunch!

For a while now, I’ve wanted to write a script that grabs the lunch menu from my job’s Moodle page, gets the menu for today, and shows it, or speaks it, to the user. A few weeks ago, I completed it. What I’ve learned:

  • Python is easier for me when I have a project to work on. I’ll start using the Automate the Boring Stuff book more for this.
  • I learned about the “try” and “except” functionality easily, lending credence to my idea that I learn best with projects.
  • Emacs’ Python mode is pretty great, and voice-lock-mode of Emacspeak has gotten me out of a few situations I wouldn’t have found easily otherwise.
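As a rough illustration of what that script does (the real one scrapes my job’s Moodle page; the HTML here is a made-up stand-in so the sketch is self-contained), including the try/except pattern I mentioned:

```python
# A minimal sketch of the lunch-menu idea, standard library only.
# The page layout (h2 weekday headings over li menu items) is hypothetical.
from html.parser import HTMLParser

class MenuParser(HTMLParser):
    """Collect list-item text keyed by the weekday heading above it."""
    def __init__(self):
        super().__init__()
        self.menus = {}          # e.g. {"Monday": ["Pizza", "Salad"]}
        self.current_day = None
        self.in_heading = False
        self.in_item = False

    def handle_starttag(self, tag, attrs):
        if tag == "h2":
            self.in_heading = True
        elif tag == "li":
            self.in_item = True

    def handle_endtag(self, tag):
        if tag == "h2":
            self.in_heading = False
        elif tag == "li":
            self.in_item = False

    def handle_data(self, data):
        text = data.strip()
        if not text:
            return
        if self.in_heading:
            self.current_day = text
            self.menus[text] = []
        elif self.in_item and self.current_day is not None:
            self.menus[self.current_day].append(text)

def todays_menu(html, day):
    """Return the menu items for `day`, or a friendly message on failure."""
    try:
        parser = MenuParser()
        parser.feed(html)
        return parser.menus[day]
    except KeyError:             # the try/except I mentioned learning
        return ["No menu found for " + day]

page = "<h2>Monday</h2><ul><li>Pizza</li><li>Salad</li></ul>"
print(todays_menu(page, "Monday"))   # → ['Pizza', 'Salad']
```

The real script fetches the page first (with something like urllib); everything past that point is the same parse-then-look-up step.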

Entry into Gemini space

So, Gemini is this cool new thing that is like the web, but with simple “Gemini files” instead of HTML, JavaScript, and CSS. There are only headings, lists, links, paragraphs, and preformatted blocks in Gemini, and no CSS and JavaScript. It’s basically just the information of the web; no web apps, no need to control looks and reactions, just sweet, simple, plain text.
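To show how simple that is, here’s a hypothetical Gemini page (the file name and link are made up); aside from preformatted blocks, every line type the format supports appears below:

```
# My Gemlog

Welcome to my corner of Gemini space.

## Posts

=> 2021-01-10-linux.gmi Switching to Linux

* Headings, lists, links, and paragraphs
* Nothing else to style or script
```

A client renders each line on its own; there is no inline markup at all.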

At first, I was afraid that there would be lots of ASCII graphics. These are never understandable to screen readers. And there are some, but not as many as I’d feared. Then I found a Gemini browser for Emacs called Elpher, which is pretty good. It isn’t optimized for Emacspeak use, and it doesn’t show the alt text of preformatted blocks, but it’s good enough for my use.

So, I jumped at the chance to host my blog in Gemini space. Plain text, no need for a static site generator; everything is human-readable text, no JavaScript or CSS required. Everything is in directories, and the index file is plain, with links to whatever you want to show. And for drafts, I’ll just work on them, and when they’re ready, link to them from the Gemlog index. I think, finally, that I’ve found my home.

Switching back to Emacs

A while back, I wrote an article about “Switching Tools”, where I talked about switching from Mac and Emacspeak to Windows and VS Code. Well, turns out that VS Code being a memory-hogging Electron app, and it really just being another edit field, made that kinda fall through. Now, I’m on Linux (I’ll write about that, I promise), and using Emacspeak again. Reasons include:

  • VS Code on my laptop was quite unresponsive. Emacspeak on my (now Linux) laptop is snappy.
  • It looks like VS Code won’t be using sounds for events, like reaching a line with an error, any time soon.

So, since I had nothing more to lose, and because Linux was calling my name, I switched, and I’m pretty happy with it now, actually. I don’t know if it’s my continuing maturation, or Linux accessibility improvements, but I’m finding that I’m mostly able to do anything from Linux, and even more, since there is an actually good podcast client for Linux, GPodder.


You can always subscribe to my posts through Email or Mastodon. Have a great day, and thanks for reading!


At the launch of the iPhone 3GS, Apple unveiled VoiceOver on the iPhone. Blind users and accessibility experts had been used to screen readers on computers, and even rudimentary screen readers for smartphones that used a keyboard, trackball, or quadrants of a touch screen for navigation and usage. But here was a screen reader that not only came prepackaged on a modern, off-the-shelf device that was relatively inexpensive compared to the competition, but also allowed the user to use the touch screen as what it is: a touch device.

This year, VoiceOver added a feature called “VoiceOver recognition.” This feature allows VoiceOver to utilize the machine learning coprocessor in newer iPhone models to describe images with near-human quality, make apps more accessible using ML models, and read the text in images.

This article will explore these new features, go into their benefits, compare VoiceOver Recognition to other options, and discuss the history of these features, and what’s next.

VoiceOver Recognition, the features

VoiceOver Recognition, as discussed before, contains three separate features: Image Recognition, Screen Recognition, and Text recognition. All three work together to bring the best experience. In accessible apps and sites, though, Image and Text recognition do the job fine. All three features must be downloaded and turned on in VoiceOver settings. Image recognition acts upon images automatically, employing Text recognition when text is found in an image.

Screen recognition makes inaccessible apps as good as currently possible with the ML (Machine Learning) model. It is still great, though. It allows me to play Final Fantasy Record Keeper quite easily. It is not perfect, but it is only the beginning!

Benefits of VoiceOver Recognition

Imagine, if you are sighted, that you have never seen a picture before, or if you have, that you’ve never seen a picture you’ve taken yourself. Imagine that all the pictures you have viewed on social media have been blurry and vague. Sure, you can see some movies, but they are few and far between. And apps? You can only access a few, relative to the total number of apps. And games are laughably simple and forgettable.

That is how digital life is for blind people. Now, however, we have a tool that helps with that immensely. VoiceOver Recognition gives amazing descriptions for photos. Not perfect, and sometimes when playing a game, I just get “A photo of a video game” as a description, but again, this is the first version. And photos in news articles and on websites, and in apps, are amazingly accurate. If I didn’t know better, I would think someone at Apple is busy describing all the images I come across. While Screen Recognition can fail spectacularly sometimes, especially with apps that do not look native to iOS, it has allowed me to get out of sticky situations in some apps and has allowed me to press the occasional button that VoiceOver can’t press due to poor app coding and such. And I can play a few text-heavy games with it, like Game of Thrones, a tale of crows.

Even my ability to take pictures is greatly enhanced with image recognition. With this feature, I can open the Camera app, put VoiceOver focus on the “view finder,” and it will describe what is in the camera view! When it changes, I must move focus away and back to the View Finder, but that’s a small price to pay for a “talking camera” that is actually accurate.

Comparing VO Recognition to Other Options

Blind people may then say, “Okay, what about Narrator on Windows? It does the same thing, right?” No. First, the photo is sent to a server owned by Microsoft. On iOS, the photo is captioned using the ML coprocessor. What Microsoft needs an Internet connection and a remote server to do, Apple does far better with the chip on your device!

You may then say, “Well, how does it give better results?” First, it’s automatic. Land on an image, and it works! Second, it is not shy about what it thinks it sees. If it is confident in its description, it will simply describe it. Narrator and Seeing AI always say “Image may contain:” before giving a guess. And with more complex images, Narrator fails, and so does Seeing AI. I have read that this is set to improve, but I’ve not seen the improvements yet. Only when VoiceOver Recognition isn’t confident in what it sees does it say “Photo contains,” followed by a list of objects it is surer of. This happens far less frequently than with Narrator and Seeing AI, though.

You may also say, “Okay, so how is this better than NVDA’s OCR? You can use it to click on items in an app.” Yes, and that is great, it really is, and I thank the NVDA developers every time I use VMWare with Linux, because there always seems to be something going on with it. But with VoiceOver Recognition, you get an actual, natively “accessible” app. You don’t have to click on anything, and you know what VoiceOver thinks the item type of something is: a button, text field, etc., and can interact with the item accordingly. With NVDA, you have a sort of mouse. With VO Recognition, you have an entire app experience.

The history of these features

Using AI to bolster the accessibility of user Interfaces is not a new idea. It has been floating around the blind community for a while now. I remember discussing it on an APH (American Printing House for the Blind) mailing list around a decade ago. Back then, however, it was just a toy idea. No one thought it could be done with current, at the time, Android 2.3 era hardware or software. It continued to be brought up by blind people who dreamed bigger than I, but never really went anywhere.

Starting with the iPhone XR, Apple began shipping a machine learning coprocessor in their iPhones. Then, in iOS 13, VoiceOver gained the ability to describe images. This did not use the ML chip, however, since older phones could take advantage of it too. I thought they might improve this, but I had no idea they would do as great a job as they are doing with iOS 14.

What’s Next?

As I’ve said a few times now, this is only version one. I suspect Apple will continue building on their huge success this year, fleshing out Screen Recognition, perhaps having VoiceOver automatically speak what’s in the camera view when preparing to take a picture, and perhaps adding even more that I cannot imagine now. I suspect, however, that this is leading to an even larger reveal for accessibility in the next few years: augmented and virtual reality. Apple Glasses, after all, would be very useful if they could describe what’s around a blind person.


You can always subscribe to my posts through Email or Mastodon. Have a great day, and thanks for reading!

This is basically a test post. I’ve switched from Emacs to VS Code, and I’ll detail why below. The gist is that Emacs is unhelpful, only easy to set up on Mac and Linux, and Emacs packages are not standard, and Emacspeak, the speech extension for Emacs, just can’t keep up with extensions like LanguageTool, and probably won’t because coding is Emacs’ main use case, not writing.

Why I used Emacs

Emacs has been my work tool for about a year now. I went along with its strange commands, and even got to liking them. I memorized strange terminology in order to get the most of the editor. Don’t get me wrong, Emacs is a wonderful tool, and Emacspeak allows me to use it with confidence and even enjoyment.

Before the end, I was writing blog posts, both here and on a Wordpress blog, using Git and GitHub, and even reading EBooks. I also adore Org-mode, which I still find superior to anything else for note taking, compiling quick reports, and just about anything writing-related. Seriously, being able to export just one part of a file, instead of the whole large file containing every bit of work-related notes, is huge, and I’ll now have to use folders, subfolders, and folders under those to come close to achieving that level of productivity. And no, the Org-mode extension for VS Code doesn’t have a third of the ability of the native Emacs Org-mode.

But, Emacs was founded on the do-it-yourself mentality, and it’ll stay that way. If you don’t know what to look for, Emacs will just sit there, without any guidance for you. I’ll get more into that as I compare it with VS Code.

Making Good out of Bad

One day, my MacBook, which is what I run Emacs on, ran very low on battery. It was morning, and I have a Windows computer too, so I decided to see if I could get things done on it. I’d tried writing on it before, using Markdown in Word, or even VS Code. But my screen reader, NVDA, wouldn’t read indentation like Emacspeak did, or pause between reading formatting symbols in Markdown, play sounds for quick alerts of actions, or even have a settings interface like Emacs did, and it definitely didn’t have a voice like Alex on the Mac. Those were my thoughts when I’d tried it before. I’ll tackle them all, now that I’ve used VS Code for almost a week.

So, I managed to get Markdown support close to how I used it in Emacs, minus the quick jumping between headings with a single keyboard command. I still miss that. The LanguageTool extension works perfectly, although I had to learn that to access the corrections it gave I have to press Control + . (period). Every extension I’ve installed so far has worked with NVDA. I cannot say that for Emacs with Emacspeak. Since the web is so standardized, there isn’t too much an extension could do to not be accessible. Sometimes I wish the suggestions didn’t pop up all the time in some language modes, but I’ll take that any day over inaccessibility.

So, on with debunking the problems I had at first. Hopefully this will help newcomers to VS Code, or those who are cynical that basically a web app can do what they need:

NVDA doesn’t read indentation!

Yes, it can. It can either speak the indentation, or beep, starting at, I believe, low C for the baseline and moving up tones. Sometimes I have to pay a bit of attention to notice the difference between no space and one space, but that’s what having it speak is for.

NVDA doesn’t pause between formatting symbols!

This is true, and unavoidable for now. But, unlike Emacspeak, NVDA has the ability to use a braille display, which makes reading, digesting information, and learning a lot easier for those whose mind, like mine, is more like a train than a race car. In the future, NVDA’s speech refactoring may make pausing, or changing pitch for syntax highlighting, a reality.

VS Code doesn’t play sounds!

This is true too, and I’ve not found a setting or extension to make this happen. Maybe one day…

VS Code doesn’t even have a settings interface!

Before, I thought one had to edit the JSON settings file to change them. It turns out that if you press Control + , (comma), you get a simple, easy Windows interface. It is a bit rough around the edges, because you have to Tab twice to get from one setting to the next, and you can wander from one section of settings to another, but it’s easier than Emacs.
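For those who’d still rather edit the JSON directly, the settings-file equivalent of two options relevant to this post looks something like this (setting names as I understand current VS Code; double-check them in the settings UI):

```json
{
    // Keep screen reader mode on instead of relying on auto-detection
    "editor.accessibilitySupport": "on",
    // Soft-wrap long lines instead of leaving a paragraph on one long line
    "editor.wordWrap": "on"
}
```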

But what about the awful Windows voices!

Yes, Windows voices still are dry and boring, or sound fuzzy, but NVDA has many options for speech now. I’ve settled on one that I can live with. No, it doesn’t have the seeming contextual awareness of paragraphs like Alex, but it’s Windows. I can’t expect too much.

Bonus points for VS Code


Git

I’m only now starting to get Git. It’s a program that allows you to keep multiple versions of things, so you can roll back your work, or even work on separate parts of your work in separate branches of the project. Emacs just… sits there as usual, assuming you have any idea of what you’re doing. VS Code, though, actively tries to help. If you have Git, it offers an extension for that. If you open a Git repository, it asks if you’d like it to fetch changes every once in a while to make sure things are up-to-date when you commit your changes. I was able to commit a pull request in VS Code easily and with minimal fuss. In Emacs, I didn’t even know where to begin. And any program that takes guessing and meaningless work off my shoulders is a program I’ll keep.
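To make that concrete, the everyday Git loop I’m describing boils down to a handful of commands (a sketch with made-up file names; the two config lines just give the throwaway repository an identity so the commit works anywhere):

```shell
repo=$(mktemp -d)           # a throwaway directory for the example
cd "$repo"
git init -q .
git config user.name "Example"
git config user.email "example@example.com"

echo "first draft" > post.txt
git add post.txt
git commit -q -m "Start a new post"   # a snapshot you can roll back to
git branch ideas                      # a separate branch for experiments
git log --oneline                     # the history of snapshots so far
```

VS Code’s Source Control panel is doing essentially this on your behalf: stage, commit, fetch, branch.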

Suggestions while typing

VS Code is pretty good at this. If I’m writing code, it will offer suggestions as I type. Sometimes they’re helpful, sometimes they aren’t. In text modes, this doesn’t happen; it appears that this only happens in programming modes. Emacs would just let you type and type and type, and then browsing Reddit you’d find out about snippet packages that may or may not work with Emacspeak.


As mentioned before, VS Code is basically a web app. Emacs is a program written in mostly Emacs Lisp, and a bit written in C. Extensions in VS Code are written in JavaScript, whereas extensions in Emacs are written in its Lisp dialect. Since Emacs is completely text based, any kind of fancy interface must be made manually, which usually means that Emacspeak will not work with it, unless the author, or a community member, massages the data enough to make it work. This is a constant battle, and it won’t get easier for anyone involved.

VS Code is a graphical tool that has plenty of keyboard commands, and screen reader support. Its completion, correction, and terminal processes have already been created, so all extensions have to do is hook into that. This means that a lot of extensions are accessible, without even knowing it.

So, any downsides to VS Code?

VS Code is not perfect by any stretch. When screen reader support is enabled, a few features are actually disabled because Microsoft doesn’t know how to convey them to the user without using sound. Code folding is disabled, which would make navigating markdown a lot simpler. Word wrapping is disabled, meaning that a paragraph is on one very long line. I’ve found Rewrap, a third-party extension that I can use, so that’s fixed. There are no sounds, so the only way I know there are problems is by going to the next problem, or opening the issues panel.

Overall though, VS Code has impressed me. I continuously find wonderful, time-saving, mind-clearing moments where I breathe a sigh of relief: to create a list in Markdown, I can just select lines of text and choose “toggle list” from the commands panel, whereas with Emacs I had to mark the list of lines, remember some strange command like “string-insert-rectangle”, and type “*” to make all of those list items. These kinds of time-savers make me more productive, slightly offsetting the lack of features akin to those in Org-mode.


I didn’t expect this post to be so long, but it will be a good test to see if VS Code’s Hugo support is enough to replace Easy-Hugo on Emacs. While VS Code doesn’t have a book reader (at least, not one I think I’d like), a media player with TuneIn Radio support made for the blind, or many other such packages, it is a great editor, and tools like its Hugo extensions make it slightly more than an editor. I should branch out more and see what tools Windows now has for these functions anyway. I already use Foobar2000 for media; I just have to find a good book reader that doesn’t get rid of formatting info.

So, I hope you all have enjoyed reading this long test of VS Code, and an update on what I’ve been doing lately when not playing video games and other things.

In other news, I’ve been using the iOS 14 and macOS 11 public betas. I’ll report on my findings on those when the systems are released this fall.


You can always subscribe to my posts through Email or Mastodon. Have a great day, and thanks for reading!

So, I’m writing this from a Windows computer, using Notepad, with WinSCP providing SFTP access to the server. This won’t come as a surprise for those who follow me on Mastodon and such, but I want to put this in the blog, so everything is complete.

About half a year ago, I installed Linux. Sometimes, I get curious whether anything has changed in Linux, or if it’s any better than it once was. And I want to know if I can tackle it, or if it’s even worth it. Half a year ago, I installed Arch using the Anarchy installer, got the accessibility switches turned on, and got to work trying to use it.

Throughout my journey with Linux, I found myself having to forgo things that Windows users take for granted: instant access to all the audio games made for computers; regular video games which, even when accessible, use only Windows screen readers for speech; and all the tools that make life a little easier for blind people, like built-in OCR for all screen readers on the platform, different choices in email clients and web browsers, and even RSS and podcatcher clients made by blind people themselves, not to mention Twitter clients. Now, there is OCR Desktop, but it doesn’t come with Orca, and you must set up a keyboard command for it.

But I had Emacs, GPodder for podcasts, Firefox, Chromium when I wanted to deal with that, and Thunderbird for lagging my system every time it checked for email. It was usable, and a few blind people do use it as their daily driver. But I just couldn’t. I need something that’s easy to set up and use; otherwise my stress levels just keep going up as I fight not only with config files and all that, but with accessibility issues as well.

The breaking point

A few days ago, I wanted to get my Android phone talking with my Linux computer, so that I could text, get notifications, and make calls. KDE Connect wasn’t accessible, so I tried Device Connect. I couldn’t get anything out of that, so I tried GSConnect. In order to use that Gnome extension, I needed to start Gnome. I have Gnome 40, since I’m on Arch, so I logged in using that session, and got started. Except, Gnome had become much less accessible since the last time I’d tried it. The Dash was barely usable, the top panels trapped me in them until I opened a dialog from them, and I was soon just too frustrated to go much further. And then I finally opened the Gnome Extensions app, only to find that it’s not accessible at all.

There’s only so much I can take until I just give up and go back to Windows, and that was it. It doesn’t matter how powerful a thing is if one cannot use it, and while Linux is good for simple, everyday tasks, when you really start digging in, when you really start trying to make Linux your ecosystem, you start finding barriers all over the place.

Now, I’m using Windows, have Steam installed with a few accessible video games, Google Chrome, NVDA with plenty of addons, and the “Your Phone” app on Windows and Android works great, except for calls. But it still works much better than any Linux integration I could do. Also, with Windows and Android, I can open the Android phone screen in Windows, and, with NVDA or other screen readers, control the phone from the keyboard using Talkback keyboard commands. That’s definitely not something Linux developers would have thought of.


You can always subscribe to my posts through Email or Mastodon. Have a great day, and thanks for reading!

WWDC is Apple’s Worldwide Developers Conference. It’s where developers come to learn about what will be new in Apple’s operating systems (iOS, iPadOS, MacOS, etc.), and learn how to make the best of Apple’s walled garden of tools to program apps. Tech reporters also come to the event, to gather all the news and distill it into the expected bite-sized, simple pieces. Be assured, my readers, that I will not hold anything back from you in my analysis of the event.

WWDC is not here just yet. I know, many news sites are predicting and yammering and getting all giddy with “what if” excitement. I won’t bore you with such useless speculation just to fill the headlines and homepages. I fear that I lack the imagination and incentive to create such pieces. Besides, I’m more interested in what a device can do, and less about how it looks or feels.

However, I am invested in Apple’s operating systems. I do want to see Apple succeed in accessibility, and I think that, if they put enough work into it and gave the accessibility team more freedom and staff, accessibility would greatly improve. It is in that spirit that I give you my hopes, not predictions, for WWDC 2020. This “wishlist” will be separated into headings based on the operating system, and further divided into subsections of that operating system. After WWDC, I will revisit this post and update it with notes if things on the wishlist change, and then do a post containing more notes and findings from the event.


MacOS is Apple’s general computer (desktop/laptop) operating system. With many tried-and-true frameworks and programs, it is a reliable system for most people. It even has functions that Windows doesn’t: the ability to select text anywhere and have that text spoken immediately, no screen reader needed; the ability to remap keyboard modifier keys; and system-wide spell checking. These help me greatly in all of my work.

Its screen reader is VoiceOver. It’s like VoiceOver on the iPhone, but made for a complex operating system, and has some complex keyboard commands. Accessibility, like anywhere else, is not perfect on the Mac. There are bugs that have stood for a long time, and new bugs that I fear will hang around. There are also features that I’d love to see added, to make the Mac even better.

In short, MacOS accessibility isn’t a toy. I want the Mac to be treated like it’s worth something. From the many bugs to the missing features, the Mac really needs some love accessibility-wise. Many tech “reporters” say that the Mac is a grown and stable operating system. For blind people, though, the Mac is shriveled and stale.

Catalyst needs an accessibility boost

“Catalyst” is Apple’s bridge between iPad apps and Mac apps. It allows developers, Apple included, to bring iPad apps to the Mac. Started in MacOS Mojave with Apple’s own apps, Catalyst accessibility was… serviceable. It wasn’t great, but there wasn’t anything we couldn’t do with the apps. It just wasn’t the best experience. The apps were very flat, and one needed to use the VoiceOver custom actions menu, without the ability to use letter navigation, to select actions like one would using the “actions” rotor on the iPad.

Now, in Catalina, the Catalyst technology is available for third-party developers, but accessibility issues still remain. The apps don’t feel like Mac apps at all, not even Apple’s own apps. So, in MacOS 10.16, I hope to see at least Apple’s own apps be much more accessible, especially if the Messages app will be an iPad catalyst app.

VoiceOver needs a queue

Screen readers convey information through speech, usually. This isn’t new for people who are blind, but what may be new is that screen readers manage what is spoken using a queue. This means that when you’re playing a game and new text appears, the screen reader doesn’t interrupt itself from speaking the important description of the environment just to announce that an unimportant NPC just came into the area.

VoiceOver, sadly, does not have this feature, or if it does, it hardly ever uses it. Now, it looks like the speech synthesis architecture has a queue built in, so VoiceOver should be using it to great effect. But it isn’t. This means that doing anything complex in the Terminal app is unproductive. Even using web apps, which have VoiceOver speak events, can be frustrating when VoiceOver interrupts itself to say “loading new tweets” and such. It was so bad that the VoiceOver team had to add the option for a sound to play instead of the “one row added” notification in the Mail app.

This is a large oversight, and it has gone on long enough. So, in MacOS 10.16, I desperately hope that VoiceOver can finally manage speech like a true screen reader, with a speech queue.
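For illustration, here is a minimal Python sketch of the queue-versus-interrupt behavior described above. The class and method names are my own invention for the example, not any real screen reader’s API.

```python
from collections import deque

class SpeechQueue:
    """A toy model of how a screen reader can queue speech."""

    def __init__(self):
        self.pending = deque()

    def speak(self, text, interrupt=False):
        # An interrupting message flushes everything queued ahead of it;
        # a normal message simply waits its turn.
        if interrupt:
            self.pending.clear()
        self.pending.append(text)

    def next_utterance(self):
        # The synthesizer pulls utterances one at a time, in order.
        return self.pending.popleft() if self.pending else None

# Queued messages don't cut each other off.
q = SpeechQueue()
q.speak("You enter a dark cave.")
q.speak("An unimportant NPC arrives.")
print(q.next_utterance())  # → You enter a dark cave.
```

The point of the queue is that the second, less important message never steals the synthesizer from the first one mid-sentence, which is exactly what VoiceOver does today when it interrupts itself.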

Insertion point at… null

Long-time Apple fans may know what the insertion point is. For Windows, Android, and Linux users, it is the cursor, or point. It is where you insert text. On the Mac and iOS, VoiceOver calls this the insertion point, and it appears in text fields. The only problem is, VoiceOver says it appears in read-only places too, like websites in earlier versions of MacOS 10.15, and emails to this day.

VoiceOver believes that there is an insertion point in the email somewhere, but says that it is at “null”, meaning that it is at 0, or doesn’t exist. That’s because there isn’t one. This only happens when you are reading by element (VO + Right or Left Arrow), not when you are reading by line with just the up and down arrows, where there is a sort of cursor to keep track of where you are. But that cursor is, most likely, a VoiceOver construct, so VoiceOver should know that when moving by element, there practically is no insertion point besides its own “cursor” that focuses on things.

This bug is embarrassing. I wouldn’t want my supervisor seeing this kind of bug in the technology that I use to do professional work. I stress again that the Mac is not a toy. Yes, it has “novelty” voices, and yes, some blind people talk like them for fun, or use them in daily work to be silly. I don’t, though, because the Mac is my work machine. What’s a computer, Apple asks? A Mac, that’s what! I rely on this computer for my job, and if things don’t improve, I’ll probably move to Linux, which is the next best option for my workflow. Of course, things there don’t improve much either, but at least the screen reader is actually used by its creator and testers, so silly bugs like that don’t appear in a pro device. So, in MacOS 10.16, I hope that the accessibility team took a long vacation from adding stuff and spent a lot of time on fixing MacOS’ VoiceOver so that I can be proud to own a Mac again.

I need more fingers

The Mac has so many keyboard commands, and letter navigation in all menus and lists makes navigating the Mac a breeze. But some of the keyboard commands were clearly made for a desktop machine. I have a late-2019 MacBook Pro with four Thunderbolt ports, but still the same Function, Control (remapped to Escape), Option, Command, Space, Command, Option, Capslock (remapped to Control because Emacs) keyboard layout. To lock the screen with the normal keyboard layout (without remapping, due to the Touchbar and Emacs), I’d have to hold the Command key with my right thumb, hold Control with my left pinkie, and… and… how do I reach the Q? Ah, found it! I think. That may be A, or 1, though.

My point is, we blind people pretty much always use the keyboard. Sure, we can use the trackpad on a Mac, but that’s an option, not a requirement like the touch screen of an iPhone. Keyboard commands should be ergonomic for every Mac model, not just the iMac. So, in MacOS 10.16, I hope to see more ergonomic keyboard commands for MacBooks. I hope VoiceOver commands become more ergonomic as well, as pressing Control + Option + Command + 2, or even Capslock + Command + 2, gets pretty cramped. I know, the Touchbar means fewer keys, but my goodness I hate using those commands when I need to. And no, having us use the VoiceOver menu isn’t a fix; it’s a workaround. And no, having us use letter navigation to lock the screen or perform any number of hard keyboard commands is not a fix; it’s a workaround.

Find and replace Touchbar with Function keys

I’ve talked about the Touchbar in earlier articles, so I’ll just give an overview here. The Mac does not have a Touchscreen. The Touchscreen is slower for blind people to use, and so is the Touchbar. We can’t even customize it, as that part of system preferences is seemingly inaccessible to us. One Mac user said he has answers on how to use it well, but I asked him about it, and haven’t seen a reply to my query. For now, then, the Touchbar is useless to me, and blind people who, like me, use their Macs to get work done.

Now, one place where it could be useful is Pages. While in Pages, the Touchbar acts like a row of formatting buttons. But there are keyboard commands for almost all of them, except for adding a heading. If the Touchbar were that useful everywhere else, it might have a place in my workflow. But I write all of my documents, when I can help it, in Markdown or Org-mode, inside Emacs or another text editor. So the Touchbar would be better gone from my MacBook, replaced by the much more useful function keys: tactile buttons that do one predictable thing in each context, so I know what they’ll do when pressed.

So, in a new model of the MacBook, I want the option to use regular function keys, even if it costs $20 more. Either that, or give me a reason to use this useless touch strip that only acts to eliminate keys that VoiceOver can use and make keyboarding that much more limited. And no, an external keyboard is not a fix. It’s a workaround.

Text formatting with VoiceOver

This applies to both MacOS and iOS, but it’d be more useful on the Mac, so I’m putting it here. As I wrote in my Writing Richly post, formatting is important for both reading and writing. I did send Apple feedback based on this, so I hope that in 10.16, I, and all other blind people, are able to read and write with as much access to formatting as sighted people.


There’s nothing on the screen

There are many iOS apps that are very accessible. They work well with VoiceOver, and can be used fine by blind people. However, there are also many which appear blank to VoiceOver, so they cannot easily be used. VoiceOver could use its already-good text recognition technology to scan the entire screen whenever no element with an accessible label, other than the app title, can be found. Then, it could get the locations of the recognized text and items, and allow a user to feel around the screen to find them.

This could dramatically improve access to everything from games to utility apps written in an inaccessible framework, like Qt. May Qt be forgotten, forever. So, in iOS 14, I hope that Apple majorly ramps up its use of AI in VoiceOver. Besides, that would put Google, the AI company, even further to shame, since they don’t use AI at all in TalkBack to recognize inaccessible items or images.


Apple Arcade for everyone

Apple Arcade came out sometime last year. One hundred games were promised around launch time, and at $5 per month, it is an amazing deal, as you can play these games forever; there is no rotation like in Xbox Game Pass. So far, though, there have been no games that blind people can play, so I just canceled my subscription, my hope in Apple dwindling further. So, at this year’s WWDC, I hope that Apple not only adds accessible games to Apple Arcade, or even makes a few of their own, but shows them off. People should know that Apple truly cares, as much as a 1.5-trillion-dollar corporation can, about accessibility and about people who are blind, who cannot play regular, inaccessible games.


I hope this article has enlivened your imagination a bit regarding the soon-to-be WWDC 2020. I’ve detailed what I want to see in MacOS, my most often used Apple system, iOS, and Apple’s services. Now, what do you want to see? Please, let me know by commenting wherever this article is shared.

Thanks so much for reading my articles. If you have any suggestions, corrections, or other comments, please don’t hesitate to reach out to me. I eagerly await your comments.


You can always subscribe to my posts through Email or Mastodon. Have a great day, and thanks for reading!

Coding has always been hard for me. I’ve never been able to get my mind around loops, if and else, for and while, and break almost breaks me instead of the code. However, many people make it look easy, and for them, it probably is. In iOS 14, Apple may loosen their chains upon their technology enough for developers to explore the boundaries of what a pocket computer can do.

Apple is very controlling. All of its operating systems can only run on its own hardware. Its hardware can, practically speaking, only run officially sanctioned operating systems, unless a Linux user can get past the security on the Mac. And, for a long time, notwithstanding workarounds that have never been so easy, apps on iOS have only been usable if they were downloaded through Apple’s own App Store. In iOS 14, however, things may change for the better.

Earlier this year, AppleVis released a blog post about iOS 14 possibly gaining custom text-to-speech engine support. While I won’t write about it here, as it seems a minor topic to me, I will say that this is something the community of blind people has been asking for since VoiceOver revolutionized our lives. Furthermore, it is greater evidence that Apple is beginning to open up, just a tad. It isn’t, however, the first time we’ve seen Apple open up a bit for accessibility reasons. Apple allows us, in iOS 13, to change VoiceOver commands, and it uses the Liblouis braille tables to display languages in braille that weren’t available before.

In this article, I will discuss and theorize about the availability of Xcode on iOS, which is supposedly going to be released this year, and how it could help people learn to code, bring sideloading to many more people, and bring emulation in full force to iOS.

Learning to code on iOS

As I’ve said before, coding has never been easy for me. My skills are still very much at the beginner level. I can write “print” statements in Python, and maybe in Swift, but languages like Quorum, Java, and C++ are so verbose and require much more forethought than Python. Swift seems a bit like Python, although just as complex as Java and more verbose languages when one becomes more advanced.

With Xcode on the Mac, accessibility isn’t great. Editing text is okay, but even viewing output seems impossible on first look, and I’m still not sure if it can even be done. This means that the Intro to App Development with Swift Playground materials are inaccessible. This has been verified today with the Xcode 10 version. Sure, we can read the source code, but we cannot directly activate the “next” link to move to the next page. And no, workarounds are not equal access. Furthermore, neither teachers nor students should have to look for workarounds to use a course for iOS created by Apple, one of the richest companies in the world, whose accessibility team is great.

Because of this, I expect Xcode for iOS will be a new beginning, of sorts, for all teams who work on it, not just the accessibility team. It will be a way for new, young developers to come to coding on their phone, or more probably their iPad, without the history of workarounds that many blind developers on the Mac know today. It will also allow blind developers to create powerful, accessible apps. If it is true that Macs will run Apple’s own “A” processors someday, then perhaps this Xcode for iOS will move to the Mac, as Apple TV is attempting to do. Hopefully, by then, iOS apps on the Mac will actually be usable, instead of messes, accessibility-wise.

Windows users also cannot currently officially code for iOS. Most blind users have a Windows computer and an iPhone. Having Xcode on iOS will allow more blind people who are good at coding to try their hand at developing iOS apps. This could also bring more powerful apps, as blind Windows users are used to the power of programs like Foobar2000, NVDA addons, and lots of choice.

Another benefit of having Xcode on iOS is that, because of the number of users, there will be even more people working on open source projects, which they could easily download and import into Xcode. For example, perhaps PPSSPP’s user interface accessibility could be improved, or the Delta emulator could become completely accessible and groundbreaking. Of course, closed source app development could be aided by this as well, but it is harder to join, or make, a closed source development team than it is to contribute to an open source one.

Sideloading with Xcode

Sideloading is the process of running apps on iOS which are not accepted by the iOS App Store. These include video game console emulators, torrent downloaders, and apps which allow users to watch “free” movies and TV shows. The last set of apps, I agree, shouldn’t be on the App Store, but the first two are not illegal; they simply could facilitate illegal operations, pun intended.

Sideloading can be done in a few ways. You can load the project into Xcode for Mac, build it, and send it to your own device; this must be renewed every seven days, and is technically the most difficult method. You can sign up for a third-party app store, which lets you download apps hosted elsewhere that may not be the latest version, but there is a good chance that the certificate used to sign the apps will be revoked by Apple. Finally, there are a few apps which automate the signing of apps and push them to the device.

Two of these methods, however, require a Mac computer. Many people, especially blind people, only use a Windows computer and an iPhone. This usually isn’t a problem, as most blind people either do much of what they do on their phone, or do much of it on their computer. However, it means that people who have Windows, but not a Mac, cannot use two of the three sideloading methods. So, if a blind person creates an extension to alert you that your screen curtain isn’t on (meaning a VoiceOver user doesn’t have the feature enabled that blanks the screen), that app cannot be distributed on the App Store, and cannot be sideloaded by Windows users. And I highly doubt a third-party app store would host such a niche app.

Emulating with Xcode

Emulators were once a legal gray area. They allow gamers to play video games from consoles like the PlayStation Portable on computers, tablets, or phones. They became legal, however, after Sony’s lawsuits against emulator developers failed. While emulation is legal, downloading games from the Internet, unless, some say, you own the game, is not. Steve Jobs himself, at the 1999 Macworld conference, showed off an emulator for playing PlayStation games. Now, emulators are not allowed onto the iOS App Store unless they have been made by the developers of the games being emulated.

Xcode on iOS would also help emulator use. The more people use emulators, the more their use will spread. iPhones are also definitely powerful enough to run emulators; the newer the iPhone, the faster the emulation. An iPhone XR, for example, is powerful enough to run a PlayStation Portable game at full speed, even without being optimized for the hardware, and while being interpreted. It’s like running nearly a PS3 game using Python. A video I made demonstrates this. The game, Dissidia Duodecim, isn’t as accessible as its predecessor. However, it runs, as far as I could tell, at full speed. This spectacularly shows that the computers in our pockets, the ones we use to drone over Facebook, be riled up by news sites, or play Pokemon Go, are much more powerful, and capable of far more, than what we use of them.

Also, since blind people will have access to the code they build with Xcode, fixes to sound and the user interface, and even enhancements to both, are possible. PSP games could be enhanced using Apple’s 3D audio effects. Games could be described using Apple’s machine learning vision technology. This applies to more than accessibility, however. Since more users will be learning to code, or will finally have the ability to code for iOS, bugs in iOS ports of open source software can be resolved more quickly.


In this article, I have discussed the possibility of Xcode for iOS, and how it could improve learning to code, sideloading apps, and the emulation of video games. I hope it has been informative, and has enlivened the imaginations of my readers.

Now, what do you all think? Are you a blind person who wants to learn to code in an accessible environment? Are you a sighted person who wants to play Final Fantasy VII on your phone? Or are you one who wants to help fix accessibility issues in apps? Discussion is very welcome, anywhere this post is shared to. I welcome any feedback, input, or corrections. And, as always, thank you so much for reading this article.


You can always subscribe to my posts through Email or Mastodon. Have a great day, and thanks for reading!

Whenever you read a text message, forum post, tweet, or Facebook status, have you ever seen someone surround a word with stars, like *this*? Have you noticed someone surround a phrase with two stars? This is part of Markdown, a way of formatting text for web usage.

I believe, however, that Markdown deserves more than just web usage. I can write in Markdown in this blog, I can use it on Github, and even in a few social networks. But wouldn’t it be even more useful everywhere? If we could write in Markdown throughout the whole operating system, couldn’t we be more expressive? And for accessibility issues, Markdown is great because a blind person can just write to format, instead of having to deal with clunky, slow graphical interfaces.

So, in this article, I will discuss the importance of rich text, how Markdown could empower people with disabilities, and how it could work system-wide throughout all computers, even the ones in our pockets.

What’s this rich text and who needs all that?

Have you ever written in Notepad? It’s pretty plain, isn’t it? That is plain text. No bold, no italics, no underline, nothing. Just, if you like that, plain, simple text. If you don’t like plain text, you find yourself wanting more power, more ability to link things together, more ways to describe your text and make the medium, in some ways, a way to get the message across.

Because of this need, rich text was created. One can use it in WordPad, Microsoft Word, Google Docs, LibreOffice, or any other word processor worth something. When I speak of rich text, to keep things simple, I mean anything that is not plain text, including HTML, as it describes rich text. Rich text is in a lot of places now, yes, but it is not everywhere, and it is not the same in all the places it is in.

So, who needs all that? Why not just stick with plain text? I mean come on man, you’re blind! You can’t see the rich text. In a way, this is true. I cannot see the richness of text, but in a moment, we’ll get to how that can be done. But for sighted people, which text message is better?

Okay, but how’s your day going?

Okay, but how’s *your* day going?

For blind people, the second message has the word “your” italicized. Sure, we may have gotten used to stars surrounding words meaning something, but that is a workaround, and not nearly the optimal outcome of rich text.

So what can you do with Markdown? Plenty. You could use it as simply as leaving one blank line between blocks of text to show paragraphs in your journal. You could use it to create headings for chapters in your book. You could use it to make links to websites in your email. You could even use it just to italicize an emphasized word in a text. Markdown can be as little or as much as you need it to be. And if you don’t add any stars, hashes, dashes, brackets, or HTML markup, it’s just what it is: plain text.
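As a rough illustration of how those markers map to rich text, here is a minimal Python sketch that converts headings and star-based emphasis into HTML. A real Markdown processor handles far more cases; this toy function is just my own example of the idea.

```python
import re

def tiny_markdown(line):
    """Convert a single line of two common Markdown constructs to HTML."""
    # A leading "# " marks a first-level heading.
    if line.startswith("# "):
        return "<h1>" + line[2:] + "</h1>"
    # Handle **strong** before *emphasis*, so double stars
    # are not mistaken for two single-star markers.
    line = re.sub(r"\*\*(.+?)\*\*", r"<strong>\1</strong>", line)
    line = re.sub(r"\*(.+?)\*", r"<em>\1</em>", line)
    return "<p>" + line + "</p>"

print(tiny_markdown("Okay, but how's *your* day going?"))
# → <p>Okay, but how's <em>your</em> day going?</p>
```

The whole trick of Markdown is right there: a few plain characters in the source carry the formatting, and a converter (or a screen reader, if it understood Markdown) can turn them into real emphasis.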

Also, it doesn’t have to be hard. Even Emacs, an advanced text editor, asks you questions when you add a link, like “Link text,” “Link address,” and so on. Questions like that can be asked of you; you simply fill in the information, and the Markdown is created for you.

Okay but what about us blind people?

To put it simply, Markdown shows us rich text. In the next section, I’ll talk about how, but for now, let’s focus on why. With nearly all screen readers, text formatting is not shown to us. Only Narrator on Windows 10 shows formatting with minimal configuration, and JAWS can be used to show formatting using a lot of configuration of speech and sound schemes.

But, do we want that kind of information? I think so. Why wouldn’t we want to know exactly what a sighted person sees, in a way that we can easily, and quickly, understand? Why would we not want to know what an author intended us to know in a book? We accept formatting symbols in Braille, and even expect it. So, why not in digital form?

NVDA on Windows can be set to speak formatting information as we read, but it can be bold on quite arduous to hear italics on all this italics off as we read what we write bold off. Orca can speak formatting like NVDA, as well. VoiceOver on the Mac can be set to speak formatting, like NVDA, and also has the ability to make a small sound when it encounters formatting. This is better, but how would one distinguish bold, italics, or underline from a simple color change?

Even VoiceOver on iOS, which arguably gets much more attention than its Mac sibling, cannot read formatting information. The closest we get is the phrase being separated from the rest of the paragraph into its own item, showing that it’s different, in Safari and other web apps. But how is it different? What formatting was applied to this “different” text? Otherwise, text is plain, so blind people don’t even know that there is a possibility of formatting, let alone that that formatting isn’t made known to us by the program tasked with giving us this information. In some apps, like Notes, one can get some formatting information by reading line by line in the note’s text field, but what if one simply wants to read the whole thing?

Okay, but what about writing rich text? I mean, you just hit a hotkey and it works, so what could be better than that? First, when you press Control + I to italicize, there is no guarantee that “italics on” will be spoken. In fact, that is the case in LibreOffice for Windows: you do not know if the toggle key turned the formatting on or off. You could write some text, select it, then format it, but again, you don’t know if you just italicized that text or removed the italics. You may be able to check formatting with your screen reader’s command, but that’s slow, and you would hate to do that all throughout a document.

Furthermore, with spoken formatting as it is, reading your formatted text takes time. Hearing descriptions of formatting changes tires the mind, which must interpret the fast-paced speech, register that formatting flipped from off to on, and quickly return to interpreting text instead of text-formatting instructions. And because all formatting changes are spoken just like the text surrounding them, you may have to slow down your speech just to keep from growing tired of the relentless text streaming through your mind. The same could be said of hearing “star star bold or italics star star,” and if screen readers made finer use of a speech synthesizer’s pauses, a lot of this exhausting sifting through rapid-fire information would be lessened, but I don’t see much of that happening any time soon.

Even on iOS, where things are simpler, one must deal with the same problems as on other systems, except knowing whether formatting is turned on or off before writing. There is also the problem of using the touch screen, digging through menus just to format a line as a heading. This can be worked around with a Bluetooth keyboard, if the program you’re working in even has a keyboard command for making a heading, but not everyone has, or wants, one of those.

Markdown fixes, at least, most of this. We can write in Markdown, controlling our formatting exactly, and read in Markdown, getting much more information than we ever have before, while also getting less excessive textual information: hearing “star” instead of “italics on” and “italics off” does make a difference. “Star” is not usually read surrounding words, and has already become, in a sense, a formatting term. “Italics on” sounds like plain text, is not a symbol, and, while it is a formatting term, has many syllables and just takes time to say. Coupled with the helpfulness of Markdown for people without disabilities, adding it across an entire operating system would be useful for everyone: not just the few people with disabilities, and not just for the majority without.

So, how could this work?

Operating systems, the programs which sit between you and the programs you run, have many layers and parts working together to make the experience as smooth as the programmers know how. In order for Markdown to be understood, there must be a part of the operating system that translates it into something the layer that displays text understands. Furthermore, that layer must be able to display the resulting rich text, the Markdown interpretation, throughout the whole system: not just in Google Docs, not just in Pages, not just in Word, but in Notepad, in Messages, in Notes, in a search box.
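As a toy sketch of what such a translation layer might do, here is a few lines of Python that turn a tiny subset of Markdown into HTML-style rich text. Nothing here reflects any real operating system’s API; the function names are made up for illustration, and a real system would use a full parser, not regular expressions.

```python
import re

def render_inline(text: str) -> str:
    # Handle bold first, so "**" is not half-consumed by the single-star rule.
    text = re.sub(r"\*\*(.+?)\*\*", r"<strong>\1</strong>", text)
    text = re.sub(r"\*(.+?)\*", r"<em>\1</em>", text)
    return text

def render_line(line: str) -> str:
    # A leading hash marks a heading, as in the book-chapter example above.
    if line.startswith("# "):
        return "<h1>" + render_inline(line[2:]) + "</h1>"
    return render_inline(line)

print(render_line("Okay, but how's *your* day going?"))
# → Okay, but how's <em>your</em> day going?
```

The point is only that the translation is mechanical: once one layer of the system understands it, every text box above that layer gets rich text for free.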

With that implemented, though, how should it be used? I think that there should be options. It’s about time some companies released their customers from the “one size fits all” mentality anyway. There should be a mode that replaces Markdown formatting with rich text unless the line the formatting is on has input focus, a mode that shows only the Markdown and no rich text, and a mode that shows both.

For sighted people, I imagine seeing Markdown would be distracting. They want to see a heading, not the hash mark that makes the line a heading. So, hide Markdown unless that heading line is navigated to.

For blind people, or for people who find plain text easier to work with, and for whom the display of text in different sizes and font faces is jarring or distracting, having Markdown only would be great, while being translated for others to see as rich text. Blind people could write in Markdown, and others can see it as rich text, while the blind person sees simply what they wrote, in Markdown.

For some people, being able to see both would be great. Being able to see the Markdown they write, along with the text that it produces, could be a great way for users to become more comfortable with Markdown. It could be used for beginners to rich text editing, as well.
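The three modes described above could be modeled as a single system-wide setting. This is purely a hypothetical sketch, with made-up names, of how a system might represent that choice:

```python
from enum import Enum, auto

class MarkdownDisplay(Enum):
    RICH_ONLY = auto()      # hide the markup unless the line has input focus
    MARKDOWN_ONLY = auto()  # always show the raw markup; others see rich text
    BOTH = auto()           # show the markup alongside the rendered result

def show_markup(mode: MarkdownDisplay, line_has_focus: bool) -> bool:
    """Should the raw Markdown be visible on this line right now?"""
    if mode is MarkdownDisplay.RICH_ONLY:
        return line_has_focus
    return True  # MARKDOWN_ONLY and BOTH always show the markup
```

The sighted default would be `RICH_ONLY`; a blind user, or anyone who prefers plain text, could flip the same setting to `MARKDOWN_ONLY` without changing what anyone else sees.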

But, which version of Markdown should be used?

As with every open source, or heatedly debated, thing in this world, there are many ways of doing things. Markdown is no different. There is:

and probably many others. I think that Pandoc’s Markdown would be the best, most extended variant to use, but I know that most operating system developers will stick with their own. Apple will stick with Swift Markdown, Microsoft may stick with GitHub-Flavored Markdown, and the Linux developers may use Pandoc, if Pandoc is available as a package on the user’s architecture; and if not, then it’s someone else’s issue.


In this article, I have attempted to communicate the importance of rich text, why Markdown would make editing rich text easy for everyone, including people with disabilities, and how it could be implemented. So now, what do you all think? Would Markdown be helpful for you? Would writing blog posts, term papers, journal entries, text messages, notes, or Facebook posts be enhanced by Markdown rich text? For blind people, would reading books, articles, or other text, and hearing the Markdown for bold, italics, and other such formatting make the text stand out more, make it more beautiful to you, or just get in your way? For developers, what would it take to add Markdown support to an operating system, or even your writing app? How hard will it be?

Please, let me know your thoughts, using the Respond popup, or replying to the posts on social media made about this article. And, as always, thank you so much for reading this post.


You can always subscribe to my posts through Email or Mastodon. Have a great day, and thanks for reading!

This article will be something rather different from my normal postings. I’ve decided to begin doing news posts, rather than just my ramblings. Oh, there will still be rambles, as I have an opinion on everything, and readers might as well know the person I am, to understand more about my viewpoint, to gauge the content relative to the content writer.

The scope of the news will vary, but I expect it to be mostly open source technology relevant to the blind community. This may change, as readers may always contact me to request articles or news items on particular subjects. I will let the folks at Blind Bargains chase after Humanware, Vispero, HIMS, and other such “big names” in the Assistive Technology world. I want my content to be different, meaningful, and lacking the comedic nature of podcasts for the blind. Yes, I do have a slight grudge against larger sites which can dictate, pretty well without fail, what readers know about. After all, if a blind person only listens to the Blind Bargains podcast, or even reads their news posts, will they know about advancements like Retroarch accessibility, Stormux, and so on? In any case, with that out of the way, let’s be on with the news.

Retroarch is accessible

Retroarch, the program that brings many video game emulators together into one unified interface, was made accessible in December 2019. Along with its ability to grab text from the screen of games and speak it, this brings accessibility to many games on all three major desktop and laptop operating systems. No, Android and iOS cannot benefit from this yet. Also, there is more to come.

For a detailed page on using Retroarch for the blind, see this guide.

GTK 4 could be more accessible

This year, folks from GTK met with some accessibility advocates. They came up with this roadmap for better accessibility. GTK is the toolkit some Linux apps use to build their graphical interfaces: buttons, check boxes, and so on. As I always say, the operating system is the root of accessibility, and the stronger that root is, the more enjoyable it will be for blind people to use Linux.

I hope that this will bring much more accessibility to GTK programs, and get rid of a lot of reasons to stick with Mac or Windows for many more technically inclined blind people, like myself. Yes, even I have reservations about using it. Will it be good enough? Will I be able to get work done? Will I be able to play the game I like most? Will it require a lot of work? At least with better GTK accessibility, a few of those questions will be better answered affirmatively.

Mate 1.24 brings greater accessibility

Last month, Mate released version 1.24 of its desktop environment, which is basically like a version of the Windows desktop, handling the start menu, task bar, and other such aspects of a graphical interface. Mate uses a layout more like Windows XP’s, while other desktops, like Gnome, take newer approaches.

Just search for “accessibility” on the linked page, and you’ll find quite a few improvements. This is a great sign; I really like it when organizations, or companies, display their accessibility commitment proudly in updates, and not just the bland “bug fixes and performance improvements” mantra tiredly used in most change logs today.

Stormux: a distribution which might stick around

After the quiet death of F123, a contributor to the blind Linux community, Storm, created a fork of it, calling it Stormux. The project is new, and still has a few problems, but is designed to be a jumping-off point into Arch Linux, which is a more advanced, but very up-to-date, variant of Linux. It is only available for the Raspberry Pi 4 computer for now, and I will have a chance to test it soon. The website is as new as the software, so the downloads section is not linked from the main page, nor is much else. In the coming months, both the website and the operating system should see some development.


This has been my first news article on this blog. I hope to write more of these, along with my normal posts, as new developments occur. However, I cannot know about everything, so if one of my readers finds or creates something new, and wishes for it to be written about and possibly read, please let me know. I will not turn away anyone because of obscurity or lack of general perceived interest.


You can always subscribe to my posts through Email or Mastodon. Have a great day, and thanks for reading!
