devinprater

Devin Prater's blog

WWDC is Apple’s “Worldwide Developers Conference.” It’s where developers come to learn about what will be new in Apple’s operating systems (iOS, iPadOS, MacOS, etc.), and to learn how to make the best of Apple’s walled garden of tools to program apps. Tech reporters also come to the event, to gather all the news and distill it into the expected bite-sized, simple pieces. Be assured, my readers, that I will not hold anything back from you in my analysis of the event.

WWDC is not here just yet. I know, many news sites are predicting and yammering and getting all giddy with “what if” excitement. I won’t bore you with such useless speculation just to fill the headlines and homepages. I fear that I lack the imagination and incentive to create such pieces. Besides, I’m more interested in what a device can do, and less about how it looks or feels.

However, I am invested in Apple’s operating systems. I do want to see Apple succeed in accessibility, and I think that if they put enough work into it, and gave the accessibility team more freedom and staff, accessibility would greatly improve. It is in that spirit that I give you my hopes, not predictions, for WWDC 2020. This “wishlist” will be separated into headings based on the operating system, and further divided into subsections of that operating system. After WWDC, I will revisit this post and update it with notes if things change on the wishlist, and then do a post containing more notes and findings from the event.

MacOS

MacOS is Apple’s general-purpose computer (desktop/laptop) operating system. With many tried-and-true frameworks and programs, it is a reliable system for most people. It even has functions that Windows doesn’t: the ability to select text anywhere and have it spoken immediately, no screen reader needed; remappable keyboard modifier keys; and system-wide spell checking. These help me greatly in all of my work.

Its screen reader is VoiceOver. It’s like VoiceOver on the iPhone, but made for a complex operating system, and has some complex keyboard commands. Accessibility, like anywhere else, is not perfect on the Mac. There are bugs that have stood for a long time, and new bugs that I fear will hang around. There are also features that I’d love to see added, to make the Mac even better.

In short, MacOS accessibility isn’t a toy. I want the Mac to be treated like it’s worth something. From the many bugs to the missing features, the Mac really needs some love accessibility-wise. Many tech “reporters” say that the Mac is a grown and stable operating system. For blind people, though, the Mac is shriveled and stale.

Catalyst needs an accessibility boost

“Catalyst” is Apple’s bridge between iPad apps and Mac apps. It allows developers, Apple included, to bring iPad apps to the Mac. Started in MacOS Mojave with Apple’s own apps, Catalyst accessibility was… serviceable. It wasn’t great, but there wasn’t anything we couldn’t do with the apps. It just wasn’t the best experience. The apps were very flat, and one needed to use the VoiceOver custom actions menu, without the ability to use letter navigation, to select actions like one would using the “actions” rotor on the iPad.

Now, in Catalina, the Catalyst technology is available for third-party developers, but accessibility issues still remain. The apps don’t feel like Mac apps at all, not even Apple’s own apps. So, in MacOS 10.16, I hope to see at least Apple’s own apps be much more accessible, especially if the Messages app will be an iPad catalyst app.

VoiceOver needs a queue

Screen readers convey information through speech, usually. This isn’t new for people who are blind, but what may be new is that they manage what is spoken using a queue. This means that when you’re playing a game and new text appears, the screen reader doesn’t interrupt itself from speaking the important description of the environment just to speak that an unimportant NPC just came into the area.

VoiceOver, sadly, does not have this feature, or if it does, it hardly ever uses it. Now, it looks like the speech synthesis architecture has a queue built in, so VoiceOver should be using it to great effect. But it isn’t. This means that doing anything complex in the Terminal app is unproductive. Even using web apps, which have VoiceOver speak events, can be frustrating when VoiceOver interrupts itself to say “loading new tweets” and such. It was so bad that the VoiceOver team had to offer the option of a sound playing instead of the “one row added” announcement for the Mail app.
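For a sketch of what that queue could look like: Apple’s own speech synthesis API already queues utterances by default, with each call to speak waiting for earlier utterances to finish instead of cutting them off. Here is a minimal Swift example; the game announcements are made-up stand-ins, and this is not VoiceOver’s actual internals:

```swift
import AVFoundation

// A minimal sketch of queued speech, assuming AVSpeechSynthesizer's
// documented behavior: speak(_:) enqueues utterances in order rather
// than interrupting the one currently being spoken.
let synthesizer = AVSpeechSynthesizer()

func announce(_ text: String, interrupting: Bool = false) {
    if interrupting {
        // What self-interruption looks like: throw away the queue.
        _ = synthesizer.stopSpeaking(at: .immediate)
    }
    synthesizer.speak(AVSpeechUtterance(string: text))
}

// Queued: the environment description finishes before the NPC line.
announce("You stand in a torch-lit hall.")
announce("A villager wanders into the area.")
```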

This is a large oversight, and it has gone on long enough. So, in MacOS 10.16, I desperately hope that VoiceOver can finally manage speech like a true screen reader, with a speech queue.

Insertion point at… null

Longtime Apple fans may know what the insertion point is. For Windows, Android, and Linux users, it is the cursor, or point. It is where you insert text. On Mac and iOS, VoiceOver calls this the insertion point, and it appears in text fields. The only problem is, VoiceOver says it appears in read-only places, like websites in earlier versions of MacOS 10.15, and emails to this day.

VoiceOver believes that there is an insertion point in the email somewhere, but says that it is at “null”, meaning that it is at 0, or doesn’t exist. That’s because there isn’t one. This only happens when you are reading by element (VO + Right or Left Arrow), not when you are reading by line with just the Up and Down Arrows, where there is a sort of cursor to keep track of where you are. But that cursor is, most likely, a VoiceOver construct, so VoiceOver should know that, when moving by element, there practically is no insertion point besides its own “cursor” that is focusing on things.

This bug is embarrassing. I wouldn’t want my supervisor seeing this kind of bug in the technology that I use to do professional work. I stress again that the Mac is not a toy. Yes, it has “novelty” voices, and yes, some blind people talk like them for fun, or use them in daily work to be silly. I don’t, though, because the Mac is my work machine. What’s a computer, Apple asks? A Mac, that’s what! I rely on this computer for my job, and if things don’t improve, I’ll probably move to Linux, which is the next best option for my workflow. Of course, things there don’t improve much either, but at least the screen reader is actually used by its creator and testers, so silly bugs like this don’t appear on a pro device. So, in MacOS 10.16, I hope that the accessibility team takes a long vacation from adding stuff and spends a lot of time fixing MacOS’ VoiceOver, so that I can be proud to own a Mac again.

I need more fingers

The Mac has so many keyboard commands, and letter navigation in all menus and lists makes navigating the Mac a breeze. But some of the keyboard commands were clearly made for a desktop machine. I have a late 2019 MacBook Pro with four Thunderbolt ports, but it still has the same laptop keyboard layout: Function, Control (remapped to Escape), Option, and Command to the left of the Space bar, then Command and Option to its right, with Caps Lock remapped to Control, because Emacs. In order to lock the screen with the normal layout, without my remappings, I’d have to hold the Command key with my right thumb, hold Control with my left pinkie, and… and… how do I reach the Q? Ah, found it! I think. That may be A, or 1, though.

My point is, we blind people pretty much always use the keyboard. Sure, we can use the trackpad on a Mac, but that’s an option, not a requirement like the touch screen of an iPhone. Keyboard commands should be ergonomic for every Mac model, not just the iMac. So, in MacOS 10.16, I hope to see more ergonomic keyboard commands for MacBooks. I hope VoiceOver commands become more ergonomic as well, as pressing Control + Option + Command + 2, or even Capslock + Command + 2, gets pretty cramped. I know, the Touch Bar means fewer keys, but my goodness I hate using those commands when I need to. And no, having us use the VoiceOver menu isn’t a fix. It’s a workaround. And no, having us use letter navigation to lock the screen or do any number of hard keyboard commands is not a fix, it’s a workaround.

Find and replace the Touch Bar with function keys

I’ve talked about the Touch Bar in earlier articles, so I’ll just give an overview here. The Mac does not have a touch screen. A touch screen is slower for blind people to use, and so is the Touch Bar. We can’t even customize it, as that part of System Preferences is seemingly inaccessible to us. One Mac user said he has answers on how to use it well, but I asked him about it and haven’t seen a reply to my query. For now, then, the Touch Bar is useless to me, and to blind people who, like me, use their Macs to get work done.

Now, one place where it could be useful is Pages. While in Pages, the Touch Bar acts as a row of formatting buttons. But there are keyboard commands for almost all of them, except for adding a heading. If the Touch Bar were that useful everywhere else, it might have a place in my workflow. But I write all of my documents, when I can help it, in Markdown or Org-mode, inside Emacs or another text editor. So the Touch Bar would be better gone from my MacBook, replaced by the much more useful function keys: tactile buttons that do one thing in each context, so that I know what they’ll do when pressed.

So, in a new model of the MacBook, I want the option to use regular function keys, even if it costs $20 more. Either that, or give me a reason to use this useless touch strip that only acts to eliminate keys that VoiceOver can use and make keyboarding that much more limited. And no, an external keyboard is not a fix. It’s a workaround.

Text formatting with VoiceOver

This applies to both MacOS and iOS, but it’d be more useful on the Mac, so I’m putting it here. As I wrote in my Writing Richly post, formatting is important for both reading and writing. I did send Apple feedback based on this, so I hope that in 10.16, I, and all other blind people, are able to read and write with as much access to formatting as sighted people.

iOS

There’s nothing on the screen

There are many iOS apps that are very accessible. They work well with VoiceOver, and can be used just fine by blind people. However, there are also many which appear blank to VoiceOver, so they cannot easily be used. VoiceOver could use its already-good text recognition technology to scan the entire screen whenever no element with an accessible label, other than the app title, can be found. Then, it could get the location of the recognized text and items, and allow a user to feel around the screen to find them.
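Something like this is already possible with Apple’s Vision framework, which reports recognized text along with where it sits on screen. Here is a rough Swift sketch of just the recognition step, with the assumption that wiring the results into VoiceOver is the part Apple would have to build:

```swift
import CoreGraphics
import Vision

// A hedged sketch, not VoiceOver's actual implementation: run text
// recognition over a screenshot and report each string with its
// on-screen bounding box, the information a screen reader would need
// to let a user feel around the screen for the results.
func recognizeScreenText(in screenshot: CGImage) throws {
    let request = VNRecognizeTextRequest { request, _ in
        let observations = request.results as? [VNRecognizedTextObservation] ?? []
        for observation in observations {
            guard let best = observation.topCandidates(1).first else { continue }
            // boundingBox is normalized (0 to 1), origin at the bottom left.
            print(best.string, observation.boundingBox)
        }
    }
    try VNImageRequestHandler(cgImage: screenshot).perform([request])
}
```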

This could dramatically improve access to everything from games, to utility apps written in an inaccessible framework, like Qt. May Qt be forgotten, forever. So, in iOS 14, I hope that Apple majorly ramps up its use of AI in VoiceOver. Besides, that would put Google, the AI company, even further to shame, since they don’t use AI at all in TalkBack to recognize inaccessible items or images.

Services

Apple Arcade for everyone

Apple Arcade came out sometime last year. Around 100 games were promised at launch, and at $5 per month it is an amazing deal, as you can play these games forever; there is no rotation like in Xbox Game Pass. For now, though, there have been no games that blind people can play, so I just canceled my subscription, my hope in Apple dwindling further. So, at this year’s WWDC, I hope that Apple not only adds accessible games to Apple Arcade, or even makes a few of their own, but shows them off. People should know that Apple truly cares, as much as a 1.5 trillion dollar corporation can, about accessibility and people who are blind, who cannot play regular, inaccessible games.

Conclusion

I hope this article has enlivened your imagination a bit regarding the upcoming WWDC 2020. I’ve detailed what I want to see in MacOS, my most-used Apple system, in iOS, and in Apple’s services. Now, what do you want to see? Please, let me know by commenting wherever this article is shared.

Thanks so much for reading my articles. If you have any suggestions, corrections, or other comments, please don’t hesitate to reach out to me. I eagerly await your comments.

Discuss...

You can always subscribe to my posts through Email or Mastodon. Have a great day, and thanks for reading!

Coding has always been hard for me. I’ve never been able to get my mind around loops, if and else, for and while, and break almost breaks me instead of the code. However, many people make it look easy, and for them, it probably is. In iOS 14, Apple may loosen their chains upon their technology enough for developers to explore the boundaries of what a pocket computer can do.

Apple is very controlling. All of its operating systems run only on its own hardware. Its hardware can, practically speaking, only run officially sanctioned operating systems, unless a Linux user can get past the security on the Mac. And, for a long time, notwithstanding workarounds that have never been so easy, apps on iOS have only been usable if they were downloaded through Apple’s own App Store. In iOS 14, however, things may change for the better.

Earlier this year, Applevis released a blog post about iOS 14 possibly gaining custom text-to-speech engine support. While I won’t write about it here, as it seems a minor topic to me, I will say that this is something that the community of blind people has been asking for since VoiceOver revolutionized our lives. Furthermore, it is greater evidence that Apple is beginning to open up, just a tad. It isn’t, however, the first time we’ve seen Apple open up a bit for accessibility reasons. Apple allows us, in iOS 13, to change VoiceOver commands, and it uses the Liblouis braille tables to display languages in braille that weren’t available before.

In this article, I will discuss and theorize about the availability of Xcode on iOS, which is supposedly going to be released this year, and how it could help people learn to code, bring sideloading to many more people, and bring emulation in full force to iOS.

Learning to code on iOS

As I’ve said before, coding has never been easy for me. My skills are still very much at the beginner level. I can write “print” statements in Python, and maybe in Swift, but languages like Quorum, Java, and C++ are so verbose and require much more forethought than Python. Swift seems a bit like Python, although it becomes just as complex as Java and other more verbose languages once one gets more advanced.

With Xcode on the Mac, accessibility isn’t great. Editing text is okay, but even viewing output seems impossible on first look, and I’m still not sure if it can even be done. This means that the Intro to App Development with Swift playground materials are inaccessible; this has been verified today with the Xcode 10 version. Sure, we can read the source code, but we cannot directly activate the “next” link to move to the next page. And no, workarounds are not equal access. Furthermore, neither teachers nor students should have to look for workarounds to use a course for iOS created by Apple, one of the richest companies in the world, whose accessibility team is great.

Because of this, I expect Xcode for iOS to be a new beginning, of sorts, for all the teams who work on it, not just the accessibility team. It will be a way for new, young developers to come to coding on their phone, or more probably their iPad, without the history of workarounds that many blind developers on the Mac know today. It will also allow blind developers to create powerful, accessible apps. If it is true that Macs will run Apple’s own “A” processors someday, then perhaps this Xcode for iOS will move to the Mac, as Apple TV is attempting to do. Hopefully, by then, iOS apps on the Mac will actually be usable, instead of accessibility messes.

Windows users also cannot currently, officially, code for iOS. Most blind users have a Windows computer and an iPhone. Having Xcode on iOS will allow more blind people who are good at coding to try their hand at developing iOS apps. This could also bring more powerful apps, as blind Windows users are used to the power of programs like Foobar2000, NVDA addons, and lots of choice.

Another benefit of having Xcode on iOS is that, because of the number of users, there will be even more people working on open source projects, which they could easily download and import into Xcode. For example, perhaps PPSSPP’s user interface accessibility could be improved, or the Delta emulator could become completely accessible and groundbreaking. Of course, closed source app development could be aided by this as well, but it is harder to join, or make, a closed source development team than it is to contribute to an open source one.

Sideloading with Xcode

Sideloading is the process of running apps on iOS which are not accepted by the iOS App Store. These include video game console emulators, torrent downloaders, and apps which allow users to watch “free” movies and TV shows. The last set of apps, I agree, shouldn’t be on the App Store, but the first two are not illegal; they simply could facilitate illegal operations; pun intended.

Sideloading can be done in a few ways. You can load the Xcode project into Xcode for Mac, build it, and send it to your own device; this must be renewed every seven days, and is the most technically difficult method. You can sign up for a third-party app store, which allows you to download apps which are hosted elsewhere and may not be the latest versions, but there is a good chance that the certificate used to sign the apps will be revoked by Apple. Finally, there are a few apps which automate the signing of apps and push them to the device.

Two of these methods, however, require a Mac computer. Many people, especially blind people, only use a Windows computer and an iPhone. This usually isn’t a problem, as most blind people do most of what they do either on their phone or on their computer. However, it means that people who have Windows, but not a Mac, cannot use two of the three sideloading methods. So, if a blind person creates an extension to alert you that your screen curtain isn’t on (screen curtain being the VoiceOver feature that keeps the screen blank), that app cannot be distributed on the App Store, and cannot be sideloaded by Windows users. And I highly doubt a third-party app store would host such a niche app.

Emulating with Xcode

Emulators were once a legal gray area. They allow gamers to play video games, from game consoles like the PlayStation Portable, on computers, tablets, or phones. They have become legal, however, due to Sony’s lawsuits against emulator developers. While emulation is legal, downloading games from the Internet is not, unless, some say, you own the game. Steve Jobs himself, at the 1999 MacWorld conference, showed off an emulator for playing PlayStation games. Now, emulators are not allowed onto the iOS App Store, unless they have been made by the developers of the games being emulated.

Xcode on iOS would also help emulator use. The more people use emulators, the more their use will spread. iPhones are also definitely powerful enough to run emulators; the newer the iPhone, the faster the emulation. An iPhone XR, for example, is powerful enough to run a PlayStation Portable game at full speed, even though the emulator is not optimized for the hardware and runs interpreted. It’s like running nearly a PS3 game using Python. A video I made demonstrates this. The game, Dissidia Duodecim, isn’t as accessible as its predecessor. However, it runs, as far as I could tell, at full speed. This spectacularly shows that the computers in our pockets, the ones we use to drone over Facebook, be riled up by news sites, or play Pokemon Go, are much more powerful, and capable of far more, than what we use of them.

Also, since blind people will have access to the code they run through Xcode, fixes to sound and the user interface, and even enhancements to both, are possible. PSP games could be enhanced using Apple’s 3D audio effects. Games could be described using Apple’s machine learning Vision technology. This applies to more than accessibility, however. Since more users will be learning to code, or will finally have the ability to code for iOS, bugs in iOS ports of open source software can be resolved more quickly.

Conclusion

In this article, I have discussed the possibility of Xcode for iOS, and how it could improve learning to code, sideloading apps, and emulation of video games. I hope that this article has been informative, and has enlivened the imaginations of my readers.

Now, what do you all think? Are you a blind person who wants to learn to code in an accessible environment? Are you a sighted person who wants to play Final Fantasy VII on your phone? Or are you one who wants to help fix accessibility issues in apps? Discussion is very welcome anywhere this post is shared. I welcome any feedback, input, or corrections. And, as always, thank you so much for reading this article.

Discuss...

You can always subscribe to my posts through Email or Mastodon. Have a great day, and thanks for reading!

Whenever you read a text message, forum post, tweet, or Facebook status, have you ever seen someone surround a word with stars, like *this*? Have you noticed someone surround a phrase with two stars? This is a part of Markdown, a form of formatting text for web usage.

I believe, however, that Markdown deserves more than just web usage. I can write in Markdown in this blog, I can use it on Github, and even in a few social networks. But wouldn’t it be even more useful everywhere? If we could write in Markdown throughout the whole operating system, couldn’t we be more expressive? And for accessibility issues, Markdown is great because a blind person can just write to format, instead of having to deal with clunky, slow graphical interfaces.

So, in this article, I will discuss the importance of rich text, how Markdown could empower people with disabilities, and how it could work system-wide throughout all computers, even the ones in our pockets.

What’s this rich text and who needs all that?

Have you ever written in Notepad? It’s pretty plain, isn’t it? That is plain text. No bold, no italics, no underline, nothing. Just, if you like that, plain, simple text. If you don’t like plain text, you find yourself wanting more power, more ability to link things together, more ways to describe your text and make the medium, in some ways, a way to get the message across.

Because of this need, rich text was created. One can use it in WordPad, Microsoft Word, Google Docs, LibreOffice, or any other word processor worth something. When I speak of rich text, to make things simple, I mean anything that is not plain text, including HTML, as it describes rich text. Rich text is in a lot of places now, yes, but it is not everywhere, and is not the same in all the places it is in.

So, who needs all that? Why not just stick with plain text? I mean come on man, you’re blind! You can’t see the rich text. In a way, this is true. I cannot see the richness of text, but in a moment, we’ll get to how that can be done. But for sighted people, which text message is better?

Okay, but how’s your day going?

Okay, but how’s *your* day going?

For blind people, the second message has the word “your” italicized. Sure, we may have gotten used to stars surrounding words meaning something, but that is a workaround, and not nearly the optimal outcome of rich text.

So what can you do with Markdown? You can do plenty of stuff. You could simply use one blank line between blocks of text to separate paragraphs in your journal. You could use it to create headings for chapters in your book. You could use it to make links to websites in your email. You could even simply use it to italicize an emphasized word in a text. Markdown can be as little or as much as you need it to be. And if you don’t add any stars, hashes, dashes, brackets, or HTML markup, it’s just as it is, plain text.
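For example, here is a small sample of those constructs, nothing more than plain text with a few extra symbols:

```
# Chapter One

A heading above, an *emphasized* word, a **strong** phrase, and a
[link to a website](https://example.com).

- A list item
- Another list item
```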

Also, it doesn’t have to be hard. Even Emacs, an advanced text editor, asks you questions when you add a link, like “Link text” and “Link address.” Questions like that can be asked of you, you simply fill in the information, and the Markdown is created for you.

Okay, but what about us blind people?

To put it simply, Markdown shows us rich text. In the next section, I’ll talk about how, but for now, let’s focus on why. With nearly all screen readers, text formatting is not shown to us. Only Narrator on Windows 10 shows formatting with minimal configuration, and JAWS can be used to show formatting using a lot of configuration of speech and sound schemes.

But, do we want that kind of information? I think so. Why wouldn’t we want to know exactly what a sighted person sees, in a way that we can easily, and quickly, understand? Why would we not want to know what an author intended us to know in a book? We accept formatting symbols in Braille, and even expect it. So, why not in digital form?

NVDA on Windows can be set to speak formatting information as we read, but it can be “bold on” quite arduous to hear “italics on” all this “italics off” as we read what we write “bold off”. Orca can speak formatting like NVDA, as well. VoiceOver on the Mac can be set to speak formatting, like NVDA, and also has the ability to play a small sound when it encounters formatting. This is better, but how would one distinguish bold, italics, or underline from a simple color change?

Even VoiceOver on iOS, which arguably gets much more attention than its Mac sibling, cannot read formatting information. The closest we get is a phrase separated from the rest of the paragraph into its own item, showing that it’s different, in Safari and other web apps. But how is it different? What formatting was applied to this “different” text? Otherwise, text is presented as plain, so blind people don’t even know that there is a possibility of formatting, let alone what that formatting is, since the program tasked with giving us this information doesn’t. In some apps, like Notes, one can get some formatting information by reading line by line in the note’s text field, but what if one simply wants to read the whole thing?

Okay, but what about writing rich text? I mean, you just hit a hotkey and it works, so what could be better than that? First, when you press Control + I to italicize, there is no guarantee that “italics on” will be spoken. In fact, that is the case in LibreOffice for Windows: you do not know if the toggle key toggled the formatting on or off. You could write some text, select it, then format it, but again, you don’t know if you just italicized that text or removed the italics. You may be able to check formatting with your screen reader’s command, but that’s slow, and you would hate to do that all throughout a document.

Furthermore, with spoken formatting as it is, it takes some time to read your formatted text. Hearing descriptions of formatting changes tires the mind, as it must interpret the fast-paced speech, get a sense of formatting flipped from off to on, and quickly return to interpreting text instead of text formatting instructions. And because all formatting changes are spoken like the text surrounding them, you may have to slow down your speech just to stay far enough ahead of things not to grow tired from the relentless text streaming through your mind. This could also be the case with “star star bold or italics star star”, and if screen readers made finer use of a speech synthesizer’s pauses, a lot of the exhausting sifting through information rapidly fired at us would be lessened, but I don’t see much of that happening any time soon.

Even on iOS, where things are simpler, one must deal with the same problems as on other systems, except knowing whether formatting is turned on or off before writing. There is also the problem of using the touch screen, navigating menus just to format a heading. This can be worked around with a Bluetooth keyboard, if the program you’re working in even has a keyboard command to make a heading, but not everyone has, or wants, one of those.

Markdown fixes most of this, at least. We can write in Markdown, controlling our formatting exactly, and read in Markdown, getting much more information than we ever have before, while also getting less excessive textual information; hearing “star” instead of “italics on” and “italics off” does make a difference. “Star” is not usually read surrounding words, and has already become, in a sense, a formatting term. “Italics on” sounds like plain text, is not a symbol, and while it is a formatting term, has many syllables and just takes time to say. Coupled with the helpfulness of Markdown for people without disabilities, adding it across an entire operating system would be useful for everyone: not just the few people with disabilities, and not just the majority without.

So, how could this work?

Operating systems, the programs which sit between you and the programs you run, have many layers and parts working together to make the experience as smooth as the programmers know how. In order for Markdown to be understood, there must be a part of the operating system that translates it into something that the component that displays text understands. Furthermore, that component must be able to display the resulting rich text, or Markdown interpretation, throughout the whole system: not just in Google Docs, not just in Pages, not just in Word, but in Notepad, in Messages, in Notes, in a search box.
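To make that concrete, here is a toy Swift sketch of such a translation layer, handling only single-star emphasis; the function name and details are made up for illustration, and a real system-wide version would cover the full syntax:

```swift
import AppKit

// A toy translation layer: turn single-star Markdown emphasis into
// italic rich text, the kind of conversion the system could run for
// any text view. Illustrative only, not a real system component.
func renderEmphasis(_ markdown: String) -> NSAttributedString {
    let base = NSFont.systemFont(ofSize: 14)
    let italic = NSFontManager.shared.convert(base, toHaveTrait: .italicFontMask)
    let result = NSMutableAttributedString(string: markdown,
                                           attributes: [.font: base])

    // Match *span* sequences that do not cross lines.
    let pattern = try! NSRegularExpression(pattern: "\\*([^*\\n]+)\\*")
    let whole = NSRange(markdown.startIndex..., in: markdown)
    // Walk matches last to first so earlier ranges stay valid as we edit.
    for match in pattern.matches(in: markdown, range: whole).reversed() {
        let inner = (markdown as NSString).substring(with: match.range(at: 1))
        result.replaceCharacters(in: match.range,
                                 with: NSAttributedString(string: inner,
                                                          attributes: [.font: italic]))
    }
    return result
}
```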

With that implemented, though, how should it be used? I think that there should be options. It’s about time some companies released their customers from the “one size fits all” mentality anyway. There should be an option to replace formatting done with Markdown with rich text unless the line the formatting is on has input focus, a mode for simply showing the Markdown only and no rich text, and an option for showing both.
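Those three options could be as simple as a single setting; a hypothetical sketch:

```swift
// The three proposed display modes as one hypothetical setting.
enum MarkdownDisplay {
    case renderedUnlessLineFocused // rich text, raw Markdown on the focused line
    case markdownOnly              // always show the plain Markdown source
    case bothSourceAndRendered     // show the Markdown and its rendering together
}
```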

For sighted people, I imagine seeing Markdown would be distracting. They want to see a heading, not the hash mark that makes the line a heading. So, hide Markdown unless that heading line is navigated to.

For blind people, or for people who find plain text easier to work with, and for whom the display of text in different sizes and font faces is jarring or distracting, having Markdown only would be great, while being translated for others to see as rich text. Blind people could write in Markdown, and others can see it as rich text, while the blind person sees simply what they wrote, in Markdown.

For some people, being able to see both would be great. Being able to see the Markdown they write, along with the text that it produces, could be a great way for users to become more comfortable with Markdown. It could be used for beginners to rich text editing, as well.

But, which version of Markdown should be used?

As with every open source, or heatedly debated, thing in this world, there are many ways of doing things. Markdown is no different. There is:

* the original Markdown
* Github Flavored Markdown
* Pandoc’s Markdown
* Swift Markdown

and probably many others. I think that Pandoc’s Markdown would be the best, most extended variant to use, but I know that most operating system developers will stick with their own. Apple will stick with Swift Markdown, Microsoft may stick with Github Markdown, and the Linux developers may use Pandoc, if Pandoc is available as a package on the user’s architecture; if not, then it’s someone else’s issue.

Conclusion

In this article, I have attempted to communicate the importance of rich text, why Markdown would make editing rich text easy for everyone, including people with disabilities, and how it could be implemented. So now, what do you all think? Would Markdown be helpful for you? Would writing blog posts, term papers, journal entries, text messages, notes, or Facebook posts be enhanced by Markdown rich text? For blind people, would reading books, articles, or other text, and hearing the Markdown for bold, italics, and other such formatting make the text stand out more, make it more beautiful to you, or just get in your way? For developers, what would it take to add Markdown support to an operating system, or even your writing app? How hard will it be?

Please, let me know your thoughts, using the Respond popup, or replying to the posts on social media made about this article. And, as always, thank you so much for reading this post.

Discuss...

You can always subscribe to my posts through Email or Mastodon. Have a great day, and thanks for reading!

This article will be something rather different from my normal postings. I’ve decided to begin doing news posts, rather than just my ramblings. Oh, there will still be rambles, as I have an opinion on everything, and readers might as well know the person I am, to understand more about my viewpoint, to gauge the content relative to the content writer.

The scope of the news will vary, but I expect it to be mostly open source technology relevant to the blind community. This may change, as readers may always contact me requesting that I write articles or news items about a subject. I will let the folks at Blind Bargains chase after Humanware, Vispero, HIMS, and other such “big names” in the assistive technology world. I want my content to be different, meaningful, and lacking the comedic nature of podcasts for the blind. Yes, I do have a slight grudge against larger sites, which can dictate, pretty well without fail, what readers know about. After all, if a blind person only listens to the Blind Bargains podcast, or even reads their news posts, will they know about advancements like Retroarch accessibility, Stormux, and so on? In any case, with that out of the way, let’s be on with the news.

Retroarch is accessible

Retroarch, the program that brings many video game emulators together into one unified interface, was made accessible in December 2019. Along with its ability to grab text from the screen of games and speak it, this brings accessibility to many games on all three major desktop and laptop operating systems. No, Android and iOS cannot benefit from this yet. Also, there is more to come.

For a detailed page on using Retroarch for the blind, see this guide.

GTK 4 could be more accessible

This year, folks from GTK met with some accessibility advocates, and they came up with this roadmap for better accessibility. GTK is how some Linux apps draw their graphical interfaces: buttons, check boxes, and so on. As I always say, the operating system is the root of accessibility, and the stronger that root is, the more enjoyable Linux will be for blind people to use.

I hope that this will bring much more accessibility to GTK programs, and get rid of a lot of reasons to stick with Mac or Windows for many more technically inclined blind people, like myself. Yes, even I have reservations about using it. Will it be good enough? Will I be able to get work done? Will I be able to play the game I like most? Will it require a lot of work? At least with better GTK accessibility, a few of those questions will be better answered affirmatively.

Mate 1.24 brings greater accessibility

Last month, Mate released version 1.24 of its desktop environment, which is basically like the Windows desktop, handling the start menu, task bar, and other such aspects of a graphical interface. Mate follows a scheme more like Windows XP’s, while other desktops, like Gnome, take newer approaches.

Just search for “accessibility” on the linked page, and you’ll find quite a few improvements. This is a great sign; I really like it when organizations, or companies, display their accessibility commitment proudly in updates, and not just the bland “bug fixes and performance improvements” mantra tiredly used in most change logs today.

Stormux: a distribution which might stick around

After the quiet death of F123, a contributor to the blind Linux community, Storm, created a fork of it, calling it Stormux. The project is new, and still has a few problems, but it is designed to be a jumping-off point into Arch Linux, a more advanced, but very up-to-date, variant of Linux. It is only available for the Raspberry Pi 4 computer for now, and I will have a chance to test it soon. The website is as new as the software, so the downloads section is not linked from the main page, and neither is much else. In the coming months, both the website and the operating system should see some development.

Conclusion

This has been my first news article on this blog. I hope to write more of these, along with my normal posts, as new developments occur. However, I cannot know about everything, so if one of my readers finds or creates something new, and wishes for it to be written about and possibly read, please let me know. I will not turn away anyone because of obscurity or lack of general perceived interest.

Discuss...

You can always subscribe to my posts through Email or Mastodon. Have a great day, and thanks for reading!

Earlier this year, my Airpods Pro began making a clicking sound when in Noise Cancellation or Transparency mode. I didn’t think much of it, and just used them regularly, until the sound began distorting after a while of listening. I’ve simply stopped using them, as I shudder to think how much a cab ride to the nearest Apple Store, potentially an hour away, would cost. This is only one problem with the Apple ecosystem: being locked into Apple’s wireless headphones, other Bluetooth headphones, or other workarounds, with Apple Stores being far away. That lock-in is what I’ll be focusing on in this article. I will show, in the following paragraphs, how Apple’s handling of its ecosystem affects its hardware and software where accessibility is concerned. These matters may affect some in the general population, but people with disabilities are affected much more acutely.

Hardware

Apple’s hardware has usually been very well built. Reviewers often talk about nothing else. From the iPhone’s camera, the iPad’s screen, and the Mac’s CPU and RAM, to the Watch’s health sensors and the Airpods’ H1 chip, hardware is a big part of Apple’s products, and reviewers focus on that. But how does that help or hinder accessibility?

The Touch Bar on the Mac

In late 2016, Apple’s MacBook Pro gained the Touch Bar, a touch strip across the top of the keyboard, replacing the function keys. The reason was to add variable icons which could visibly change functions across the operating system. Many people may have liked this change, as they could use hand-eye coordination to perform functions they otherwise would have used the trackpad and menus for. These types of users would not have known about keyboard shortcuts, using the function keys, and other easy ways of getting the same things done without needing yet another touch input.

Blind people, however, are a bit different. We usually know many keyboard shortcuts, use the function keys without a problem, and do not always need a touch screen. The Touch Bar can be used, but it is much slower, as we have no tactile way of finding just one distinct item on the touch bar, like the play/pause button, or the volume slider. Once we have found the function we want, we must tap it twice to activate, like a sighted person must left click twice, once to focus the item, the next to activate it. In fact, VoiceOver, the screen reader for the Mac, had to adopt a command to raise or lower the volume via keyboard, since it is slower to do so on the Touch Bar. On the other hand, most operating system and application features can be accessed via keyboard commands, so I only need to use the Touch Bar for system functions like volume, brightness of the screen, and media playback when I’m not in the media player.

If a blind person wants to use their Mac as a Windows machine also, through Bootcamp, they must attach an external keyboard or simply not use the function keys, as Windows screen readers have no notion of a Touch Bar function key row, and thus will not read what a user is selecting, and will also not let a user explore the Touch Bar to find a function before activating it, so one touch activates an item, even if it isn’t the one the user wants. See this Applevis forum post for more information.

I feel that Apple should have made this change on the MacBook Air, for regular consumers, and left the Pro machines alone. Yes, they could have made the power button into the Touch ID button on the Pro machines, and I hope that, just as they revived the scissor-switch keyboards, they revive the function keys as well. It would make even simple tasks easier: pausing, skipping, and rewinding audio, and handling volume and brightness more quickly.

There is still hope, however. This year, Apple released the MacBook Air refresh with the new keyboard. It has an Escape key, at least. Now, they just need to add back the other twelve keys on that row, and things will be back to normal.

The headphone jack

In 2016’s iPhone 7 and 7 Plus, Apple removed the headphone jack, replacing it with their own Airpods, other Bluetooth headphones, and Lightning audio. They did not add another Lightning port onto the phone so that one could listen to wired headphones and charge the phone at the same time; people were left to choose among wireless options if they wanted to listen and charge the phone at once.

For most people, this isn’t an issue. They don’t usually need headphones, only using them when listening to music or movies, or playing games. Even then, some people just listen on the speakers built into their phone, or use external speakers, like the HomePod. They also do not have to worry about latency. Music is not affected by it, and videos are usually delayed so that the picture synchronizes with the audio.

For blind people, however, headphones are important. In order to use an iPhone, most blind people use a screen reader, which speaks information out loud using a voice like the one Siri uses. Using a screen reader without headphones means that anyone nearby can hear what the user’s phone is saying, which can reveal sensitive information like the phone numbers of people who call or text the person, user passwords, and even the pass code to their phone. This means that headphones are quite necessary. Some blind people own braille displays, which get output from a screen reader and display it in braille, but these devices are expensive, starting at $600 and going up to nearly $6000, so they are out of most blind people’s price range.

Wireless headphones, using Bluetooth, often have large lags when being used. If you play a game using them, you’ll surely notice it. A blind person who uses Bluetooth headphones must deal with that for all interactions with the phone. Imagine having to deal with a screen that lags behind what you’re doing on the phone, even by 300 milliseconds. Some Bluetooth headphones are better, but none can match wired ones. Apple’s Airpods 2 and Airpods Pro come closer, but have their own problems: they still must be charged, have limited battery life, and cost a lot for the sound quality they offer.

To solve all of these problems, I bought a $10 Lightning to 3.5 millimeter headphone adapter, and use that with the headphones I already have. Sure, I have to take my iPhone with me in my pocket wherever I go, but I usually do that anyway now that my Apple Watch is broken too. Sure, my Lightning connector isn’t free, but I have a charging mat that I use to charge the phone. There is no lag when using VoiceOver, the sound quality is very good, and I don’t have to charge my headphones.

Hope is not lost, however. There is a rumor that iPhones could be completely wireless. Of course, one still must plug the iPhone into a computer, so it could be like the older MacBook products with a magnetic spot to plug dongles into. In this case, a third-party dongle could add the Lightning and headphone jack back to the iPhone.

The Home button and TouchID

In 2017, Apple shipped the iPhone X, the first iPhone without a home button. This was meant to extend the screen completely across the face of the phone, even though they had to notch it at the top. Along with the removal of the home button, they added FaceID, which replaced TouchID as the authentication method for unlocking the device in general usage.

Most users do not have a problem with FaceID. They raise the phone to look at it, and as they look at the camera, the phone unlocks. They can then swipe the lock screen away from the bottom, revealing the home screen. For sighted users, this is a quick, easy, and intuitive motion.

For blind people, it isn’t so simple. We do not have to look at our phones in order to use them. In fact, users with braille displays or hardware Bluetooth keyboards do not have to touch their phone. These users can easily and quickly enter their pass codes, however, so they usually are not affected by this. Most users must pick up the phone, wait for the unlock sound from the screen reader, then put it back down on the surface they were using it on before. If FaceID doesn’t work, they must angle the phone away and back again for another try. If it fails a few more times, they must enter their pass code, with headphones in if they seek to preserve their privacy around others.

Hope is not lost, however. There is a rumor that a new iPhone SE type device, the iPhone 9, could be released this year with a home button, TouchID, and the A13 CPU. This would be something that I myself may purchase, as I doubt the iPhone 12, coming later this year, will offer much greater features.

Software

Apple’s software usually comes last in reviews. Reviewers may talk about the smooth animations, camera machine learning effects, or updates to apps. For users of Apple’s accessibility services, however, software is the core experience of a device, and what sets MacOS apart from Windows and Linux, and iOS apart from Android. I have covered Apple’s accessibility options extensively elsewhere, so I will use this section to highlight parts of the software which affect accessibility indirectly.

Gatekeeper on MacOS

For a pro machine, the Mac lately has become a mess of confirmation dialog boxes and hindrances to opening software not blessed by Apple or its notarization process. For most users, even most blind users, this won’t be much of an issue. If you use Apple’s apps, or apps from the App Store, you’ll be fine. But what happens when you want to use, say, Emacs for editing text, or Retroarch for playing video games?

Blind people sometimes use specialized software to complete tasks. We use apps on our phones for recognizing pictures, money, and images of text, since these are not usually accessible to us. On the Mac, I use Emacs for editing text, using the Emacspeak extension, because I find it much easier and more enjoyable than Text Edit, Pages, and other alternatives. In fact, I am using Emacs right now, to write, and publish, this blog post. However, this program is not notarized by Apple’s processes, so instead of just being able to open it, I must open it from the contextual menu, press “Cancel,” then open it again, and press “Open.” My laptop is a pro machine; I should be treated as a professional. These features, as with the Touch Bar, should be left to MacBook Air users, or left for iPad users, when, or if, the iPad becomes a general-purpose computer.

Conclusion

In this article, I’ve explored how some of Apple’s decisions across its ecosystem have affected accessibility. Hardware has changed much, with software mainly being usable aside from accessibility bugs and overbearing security. More about direct accessibility in software and services can be found in other articles. Other, smaller issues include the lack of Apple Stores in smaller cities, turning on an iPhone not producing a vibration, sound, or other way for a blind person to immediately know it has turned on, and the Mac’s startup chime being disabled by default.

Now, what do you think, readers? I’d love to have your feedback, and thank you for reading.

Discuss...

You can always subscribe to my posts through Email or Mastodon. Have a great day, and thanks for reading!

In a previous blog post called “Apple’s Accessibility Consistency”, I talked about Apple having a few problems to fix. Last month, they fixed one of them: Apple Research. The hearing study will now have accessible hearing tests and questions. Focus is still a little jumpy in the Heart and Movement study questions, and my watch screen has become a moving part, so I can’t participate in that study completely, or track my sleep accurately, but getting transportation to the Apple Store is something I’ve covered well on Twitter already.

So, thanks so much to the people at Apple who handled the Research accessibility to this point, and may it become even better, reversing the trend started with the inaccessibility of Apple Arcade.

Discuss...

You can always subscribe to my posts through Email or Mastodon. Have a great day, and thanks for reading!


What if all of your software were free, like NVDA? What if the only thing asked of you by software makers was to donate or contribute? How would this affect your life, and the lives of developers? In this article, I will explain what open source is, what it is currently used for, my experiences with it, and how you can make it better.

What is Open Source?

Open source is a splinter of the Free Software movement. The Free Software movement believes that everyone should be able to view a program’s code, and modify it if needed. The thing which sets open source apart is that it doesn’t mind working with companies which create closed source, or proprietary, software which cannot be modified or have its source code seen by the user.

When free and open source spokespeople talk about freedom, they mean free as in free speech, not as in free things. This talk of freedom upsets businesses, so the term “open source” is used instead. Much open source software is free of cost, with the developers asking for donations instead of demanding payment.

What is Open Source used for?

Open source software is just about everywhere, and often comes with a tightly knit community of users. Examples of open source in the blind community include NVDA, LibreOffice, the Orca screen reader, BrailleBlaster, Liblouis, and Emacspeak.

Examples of closed source include:

=> https://www.freedomscientific.com/products/software/jaws/ JAWS (HTTP)
=> https://support.microsoft.com/en-us/help/17173/windows-10-hear-text-read-aloud Narrator (HTTP)
=> https://www.apple.com/accessibility/iphone/vision/ VoiceOver (HTTP)
=> https://support.google.com/accessibility/android/answer/6283677?hl=en TalkBack (HTTP)
=> https://duxburysystems.com Duxbury Braille Translator (HTTP)

iOS, Windows, and plenty of apps you may have on your iPhone or Android phone.

Interestingly, some projects are a mixture of both. JAWS incorporates Liblouis for braille translation, and so do Narrator and VoiceOver. Apple uses plenty of open source tools: Python, command line shells, and many command line tools on MacOS. Microsoft makes BRLTTY and Liblouis available for download to interface with Narrator.

Linux, which has spawned many offshoots, is an entire operating system built on open source ideals. Blind people began customizing Linux for use with speech, and work is ongoing to make Linux an accessible operating system. This began with

=> https://wiki.vinuxproject.org Vinux (HTTP)

It started up talking, something no other system had done before. One could use it with speech or braille, and it used the eSpeak voices.

That operating system, or distribution of Linux as they are called, is now abandoned, not having been updated in years. Another project

=> https://distrowatch.com/table.php?distribution=sonar Sonar Gnu Linux (HTTP)

also came and went. It was based on Arch Linux, and was my favorite distribution. People now use

=> https://talkingarch.info/download.html Talking Arch (HTTP)

or

=> https://tarch.org Tarch (HTTP)

if they are adventurous and

=> https://slint.fr/wiki/doku.php?id=en:installation Slint Linux (HTTP)

if they aren’t. These are the most popular Linux distributions for those who are blind. If I’ve missed anything, let me know. Some distributions which were not made for the blind are also accessible.

=> https://getfedora.org Fedora (HTTP) => https://trisquel.info Trisquel (HTTP) => https://www.debian.org Debian (HTTP) => https://ubuntu.com Ubuntu (HTTP)

are also able to be installed, but the user must know the correct keyboard command to turn on the screen reader.

Most open source software can be found on

=> https://github.com Github (HTTP)

That’s where NVDA, Orca, and many other tools, even for the blind, are. But how reliable are these tools? What about the operating system? Could one get rid of Windows with this software founded on ideals?

My experiences with Open Source

Linux

Accessibility is a software issue, so the root of software, the operating system, will make or break any accessibility. My experiences with Linux began, mainly, with an old operating system called Vinux. I didn’t stick with it for long, and soon forgot about it; it is now abandoned. Linux can run many different desktops, which give users the major system functions of accessing apps and system utilities. Gnome and Mate are accessible; just about everything else, including KDE, isn’t for now. Vinux used Gnome 2, which is basically what Mate is now.

I came back to Linux for a short while with Sonar. I really liked it, but missed the games and speech options Windows had. I liked all the software that we have access to on Windows, and browsing the Internet with Linux wasn’t that good back then. I soon got into the Apple ecosystem with an iPhone and such, and already had a Mac for quite a while. Still, Linux called to me.

I’m never satisfied with the workflow I have. I always want to be more efficient, quicker, more capable in what I do. I always want better sound, even if 3D effects and virtual surround sound aren’t actually necessary or real. Like a sighted person wants great graphics, I want great sound. On Linux, there is a way to enable virtual surround sound, but it offers little reward for much configuration, crackles, and doesn’t augment stereo audio as the options on other systems do. The Mac has a third-party option,

=> https://www.globaldelight.com/boom/ Boom 3D (HTTP)

and Windows has

=> https://www.windowscentral.com/how-use-windows-sonic-windows-10-creators-update Windows Sonic for Headphones (HTTP)

Both of these require nearly no configuration and augment much more audio, and only Boom 3D causes a bit of sluggishness.

I also want a faster way of doing things: many keyboard shortcuts, letter navigation of items in lists and menus, and ways of getting only the information I want. I have much of this on the Mac, with the Mail app allowing me, through table navigation, to speed through subjects instead of having to hear the row titles and contents and all before what I really want to hear, and letting me go to the previous or next message in a thread without needing to close the window. Linux has some of this, but many times things are unclear, with Orca, the Linux screen reader, just speaking the items, and not what type of items they are. This is clear in the area of Audacious’ settings where you choose sound effects.

Even so, Linux has such an appeal to me. I have tried Fedora Linux, Slint, Ubuntu, Debian, Arch, and found that there is always something missing. Accessibility isn’t that good in the graphical interface, and much still takes a lot of configuring and asking the community. And I really hate asking for help.

Recently, the

=> https://mate-desktop.org Mate Desktop (HTTP)

team has released a version with accessibility fixes. This is important, as many companies, like app developers, Apple, and Google, rarely share that there are accessibility fixes in minor updates, and don’t even share all the new features in major releases. This gives me some hope that the open source community at large just needs more blind people telling them about our needs. Then again, this is probably just another of my excuses to bash my head against the hardened wall of Linux, yet again. Plus, everything in the open source world moves slowly, and this is doubly true for open source assistive technology.

There are, however, blind people who use Linux, just as there are some in the blind community who use Android. In fact, there is an entire

=> https://linux-a11y.org Linux Accessibility Site (HTTP)

However, the site does have links to abandoned software, and doesn’t link to all accessibility initiatives, like

=> https://stormux.org Stormux (HTTP)

Both Linux-a11y and Stormux ask for donations, so there is also duplicated effort and decentralization even in the blind Linux user community.

Now, I use a Mac. It contains enough open source technology to support

=> https://brew.sh Homebrew (HTTP)

which is a

=> https://en.wikipedia.org/wiki/Package_manager package manager (HTTP)

I can run Emacs, with Emacspeak on it, along with just about any command line program I’d use on Linux. The Mac’s graphical interface is good enough for mail and some web browsing, just not so good with Google Docs, and I can probably do anything on it that a Linux user can do.
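
For anyone curious, that setup takes only a couple of terminal commands. This is a minimal sketch: the install one-liner is the standard one published on brew.sh, and emacs is the stock formula name in Homebrew’s repository; Emacspeak itself is fetched and built separately, following its own documentation.

```
# Install the Homebrew package manager (standard one-liner from brew.sh):
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# Then install Emacs, and nearly any other command line program you'd use on Linux:
brew install emacs
```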

And yet, sometimes, Linux calls to me still. VoiceOver isn’t the best screen reader out there, and Linux has the appeal of being run by people, not corporations. And yet, looking at the

=> https://www.gnu.org/accessibility/accessibility.en.html GNU accessibility statement (HTTP)

you’d think it was updated in 2006 or so. It may have been, which is a slap in the face for any accessibility advocate. The GNU project, with this statement, tells us that we’re only worth a quick page, detailing the inaccessibility of old technologies, left unmaintained. It tells us that we’re a good poster to hang up in their trophy room of “people aided by our courageous stand for the minorities who desperately need our help,” but then discarded for the “community” to handle. After all, the GNU project doesn’t know anything about helping the blind, does it? Can the GNU project be expected to enforce accessibility among its projects? Doesn’t the government take care of the poor blind people? Blind people have their Vinux and Sonar, why not just use those? No, that is definitely not segregation, not at all!

### Open Source Programs

I began using NVDA around high school. No one had ever heard of it at that point, in a day when people called all screen readers either “JAWS” or “Microsoft.” I’ve not stopped using it ever since. Its features have grown, its user base growing even faster. It now has a community of programmers, translators, and writers. It is, in my opinion, the most versatile Windows screen reader. JAWS still works okay for some things, like malformed spreadsheets, but for everything else on Windows, I use NVDA.

BrailleBlaster is also a great project, making braille translation, embossing, and transcription free. I use it for translating ebooks into well-formatted braille files for reading on my iPhone using the BARD Mobile app. Now, I don’t even use Duxbury, even though it is provided on my work computer.

I’ve found that open source programs, built upon closed source operating systems, are the best compromise. NVDA, BrailleBlaster, TDSR, and many other tools built for the blind community run on Windows or Mac. Having a great foundation in accessibility makes all the difference for users.

## How can you help?

Github, as stated earlier, is a hub of open source projects. One great thing about the service is that anyone can contribute. Just make an account, and you’re ready to help.

If you can program, you can

=> https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-a-pull-request collaborate by modifying code (HTTP)

If you try the software and find accessibility problems, you can tell developers about

=> https://help.github.com/en/github/managing-your-work-on-github/creating-an-issue bugs or features (HTTP)

that need fixing or adding. If you find a project you like, they may have a Patreon to which you can donate, or you can simply spread the word.

One large project which has become accessible through the efforts of blind people reaching out is Retroarch. An issue was created asking for accessibility, support was released in the very next version, and even more work is being done to make even more games accessible. Open source collaboration is great for even more than just programming. See the projects I’m working on, all text-based, on

=> https://devinprater.github.io/about/ the About page of my original blog (HTTP)

Another bit of news is that GTK, a toolkit with which programs are written and displayed, has had a

=> https://blog.gtk.org/2020/02/17/gtk-hackfest-2020-roadmap-and-accessibility/ Hackfest (HTTP)

where accessibility was extensively discussed. It is hoped that this means accessibility will become a larger priority in Linux, and that blind people will one day be able to use Linux as confidently as they use Windows and Mac now.

## Conclusion

As time goes by, I find myself drawn to open source. Its promise of a better way of making software, its community of helpful people, and the freedom it offers give me hope. While the Linux operating system does not come close to satisfying the hope I have for accessibility, programs and initiatives on top of Windows and Mac have thrived. While the poor accessibility statement of the GNU project shows that the community at large does not yet care much about accessibility, the community of blind people working for our own future, rather than that of a corporation, gives me hope of a bright future of digital accessibility for blind people.

What do you think, reader? Does open source call to you as well? Do you just use whatever system you’re given? Have you made peace with Linux’s shortcomings around accessibility? Please, let me know. I am glad to receive feedback. If you’d like, you may even suggest, via email or Twitter, articles you feel passionate about that need coverage. I will consider all that you send me, and thank you for reading.


You can always subscribe to my posts through Email or Mastodon. Have a great day, and thanks for reading!

This article will explore Apple’s consistent attention to accessibility, and how other tech companies with commitments to accessibility, like Microsoft and Google, compare to Apple in their accessibility efforts. It also shows where these companies can improve their consistency, and that no company is a perfect assistive technology provider yet.

Introduction

Apple has shown a commitment to accessibility since the early days of the iPhone, and since Mac OS X Tiger. Its VoiceOver screen reader was the first built-in screen reader of any usability on a personal computer and smartphone. Now, VoiceOver is on every Apple product, even the HomePod. It is so prevalent that people I know have begun calling any screen reader “VoiceOver.” This level of consistency should be congratulated in a company of Apple’s size and wealth. But is this a continual trend, and what does this mean for competitors?

This will be an opinion piece. I will not stick only to the facts as we have them, and won’t give sources for everything I present as fact. This article is a testament to how accessibility can be made a fundamental part of a brand’s experience for affected people, so feelings and opinions will be involved.

The trend of accessibility

The following sections will explore companies’ accessibility trends so far. The focus is on Apple, but I’ll also show some of what its competitors have done over the years. As Apple has a greater following of blind people, and AppleVis has documented so much of Apple’s progress, I can show more of it than I can its competitors’, whose progress, as written up by their followers, is scattered and thus harder to search for.

Apple

Apple has a history of accessibility, shown by an article written just under a decade ago that goes over the previous decade’s advancements. As that article has done, I will focus little on a company’s talk of accessibility, and more on its software releases and services.

Apple is, by numbers and satisfaction, the leader in accessibility for users of its mobile operating systems, but not in general purpose computer operating systems, where Microsoft’s Windows is used far more than Apple’s MacOS. Beyond that, and services, Apple has made its VoiceOver screen reader on iOS much more powerful, and more flexible, than its competitor, Google’s TalkBack.

iOS

As iPhones were released each year, so were newer versions of iOS. In iOS 6, accessibility settings began working together, VoiceOver’s Rotor gained a few new abilities, new braille displays worked with VoiceOver, and bugs were fixed. In iOS 7, we gained the ability to have more than one high quality voice, more Rotor options, and the ability to write text using handwriting.

Next, iOS 8 was pretty special to me, personally, as it introduced the method of writing text that I almost always use now: Braille Screen Input. This lets me type on the screen of my phone in braille, making my typing exponentially faster. Along with typing, I can delete text, a word or a character at a time, and now, send messages from within the input mode. I can also change braille contraction levels, and lock orientation into one of two typing modes. Along with this, Apple added the Alex voice, its most natural yet, which was previously available only on the Mac. For those who do not know braille or handwriting, a new “direct touch typing” method allows a user to type as quickly as a sighted person, if they can memorize exactly where the keys are, or have spell check and autocorrection enabled.

In iOS 9, VoiceOver users became able to choose Siri voices to speak with VoiceOver, as an extension of the list of Vocalizer voices and Apple’s Alex voice. One can now control speech rate more easily, and the speed of speech can be greater than previously possible. There is also control over the time a double tap should take, a better method of selecting text, braille screen input improvements, and braille display fixes and new commands.

Then, iOS 10 arrived, with a new way to organize apps, a pronunciation dictionary, even more voices, reorganized settings, new sounds for actions, a way to navigate threaded email, and some braille improvements. One great thing about the pronunciation editor is that it does not only apply to the screen reader, as in many Windows screen readers, but to the entire system speech. So, if you use VoiceOver, but also Speak Screen, both will speak as you have set them to. This is a testament to Apple’s attention to detail, and control of the entire system.

With the release of iOS 11, we gained the ability to type to Siri, new Siri voices, verbosity settings, the ability to have subtitles read or brailled, and the ability to change the speaking pitch of the voice used by VoiceOver. VoiceOver can now describe some images, which will be greatly expanded later. We can now find misspelled words, which will also be expanded later. One can now add and change commands used by braille displays, which, yes, will be expanded upon later. A few things which haven’t been expanded upon yet are the ability to read formatting, however imprecise, with braille “status cells,” and the “reading” of Emoji. Word wrap and a few other braille features were also added.

Last year, in iOS 12, Apple added commands to jump to formatted text for braille display users, new Siri voices, verbosity options, confirmation of rotor actions and sent messages, expansion of the “misspelled” rotor option for correcting the misspelled word, and the ability to send VoiceOver to an HDMI output.

Finally, in iOS 13, Apple moved accessibility to the main settings list, out of the General section, provided even more natural Siri voices, and added haptics for VoiceOver, to accompany or replace the sounds already present, along with the ability to modify them or turn them off. A “vertical scroll bar” has also been added, as another method of scrolling content. VoiceOver can now give even greater suggestions for taking pictures, aligning the camera, and, with the iPhone 11, what will be in the picture. One can also customize commands for the touch screen, braille display, and keyboard, expanding the ability braille users already had. One can even assign Siri shortcuts to a VoiceOver command, as Mac users have been able to do with AppleScript. One can now have VoiceOver interpret charts and graphs, either via explanations of data, or by an audible representation of them. This may prove extremely useful in education, and for visualizing data of any type. Speaking of detected text, that feature has improved over the versions to include detecting text in unlabeled controls, and can now attempt to describe images as well. Braille users now have access to many new braille tables, like Esperanto and several other languages, although braille no longer switches languages along with speech.

MacOS

MacOS has not seen so much improvement in accessibility over the years. VoiceOver isn’t a bad screen reader, though. It can be controlled using a trackpad, which no other desktop screen reader can boast. It can be used to navigate and activate items with only the four arrow keys. It uses the considerable amount of voices available on the Mac and for download. It simply isn’t updated nearly as often as VoiceOver for iOS.

OS X 10.7, 10.8, and 10.9 saw a few new features, like more VoiceOver voices, braille improvements, and other things. I couldn’t find much before Sierra, so we’ll start there.

In Sierra, Apple added VoiceOver commands for controlling volume, to offset the absence of the physical function keys in new MacBook models. VoiceOver can also now play a sound for row changes in apps like Mail, instead of interrupting itself to announce “one row added,” because Apple’s speech synthesis server on the Mac doesn’t innately support a speech queue. This means that neither does VoiceOver, so interruptions must be worked around. Some announcements were changed, HTML content became web areas, and interaction became “in” and “out of” items. There were also bug fixes in this release.

In High Sierra, one can now type to Siri, and VoiceOver can switch languages when reading multilingual text, as VoiceOver on the iPhone has been able to do since at least iOS 5. This release also brought improved braille editing and PDF reading support, image descriptions, and improved HTML5 support.

In MacOS Mojave, Apple added the beginnings of iPad apps on the Mac. These apps work poorly with VoiceOver, even still in Catalina. There were no new reported VoiceOver features in this release.

This year, in MacOS Catalina, Apple added more control of punctuation, and Xcode 11’s text editor is now a little more accessible, even though the Playgrounds function isn’t, and the Books app can now, after years of being on the Mac, be used for basic reading of books. Braille tables from iOS 13 are also available in MacOS.

The future of Apple accessibility

All of these changes, however, were discovered by users. Apple doesn’t really talk about all of its accessibility improvements, just some of the highlights. While I see great potential in accessible diagrams and graphs, Apple didn’t mention this; users had to find it. Consequently, there may be fixes and features that we still haven’t found, three versions of iOS 13 later. Feedback between Apple and its customers has never been great, and this is only to Apple’s detriment. Since Apple rarely responds to feedback, users feel that their feedback doesn’t mean anything, so they stop sending it. Also of note is that on VoiceOver’s Mac accessibility page, the “Improved PDF, web, and messages navigation” section is from macOS 10.13, two versions behind what is currently new in VoiceOver.

Another point is that services haven’t been the most accessible. Chief among them is Apple Arcade, which has no accessible games so far. Apple Research, I’ve found, has some questions whose answers are simply unlabeled buttons. While Apple TV Plus has audio description for all of its shows, this is a minor glimmer of light, shrouded by the inaccessibility of Apple Arcade, which now features over one hundred games, none of which I can play with any success. In all fairness, a blind person who is patient may be able to play a game like Dear Reader, which has some accessible items. The main goal of that game, though, is to find a word in a different color and correct it, which is completely at odds with total blindness, but could be handled using speech parameter changes, audio cues, or other signals of font, color, or style changes.

Time will tell if this new direction, taking responsibility neither for work done by other developers and flaunted by Apple, nor for the Mac itself, will become the norm. After all, Apple Arcade is an entire tab of the App Store; its inaccessibility is in plain view. As a counterpoint, the first iPhone software, and even the second version, was inaccessible to blind people, but now the iPhone is the most popular smartphone, in developed nations, for blind people.

Perhaps next year, Apple Arcade will have an accessible game or two. I can only hope that this outcome comes true, and not a steady stepping back by Apple from one of its founding blocks: accessibility. We cannot know, as no one at Apple tells us their plans. We aren’t the only ones kept in the dark, though, as mainstream technology media shows. We must grow accustomed to waiting on Apple to show new things, and reacting accordingly, but also to providing feedback, and pushing back against encroaching inaccessibility and the decay of macOS.

Apple’s competitors

In this blog post, I compare operating systems. To me, an operating system is the root of all software, and thus, the root of all digital accessibility. With this in mind, the reader may see why it is imperative that the operating system be as accessible, as easy and delightful to use, and as conducive to productivity as possible. Microsoft and Google are Apple’s largest competitors in the closed source operating system space, so they are what I will compare Apple to in the following sections.

Google

Google is the main contributor to the Android and Chromium projects. While both are open source, both are simply a base to be worked from, not the end result. Not even Google’s phones run “pure” Android, but have Google services and probably other things on the phone as well. Both, though, have varying accessibility. While Apple pays great attention to its mobile operating system’s accessibility, Google does not seem to put many resources toward that. However, its Chrome OS, which is widely used in education, is much more easily accessible, and even somewhat of an enjoyable experience for a lightweight operating system.

Android

Android was released one year after iOS. TalkBack was released as part of Android 1.6. Back then, it only supported navigation via a keyboard, trackpad, or scroll ball. It wasn’t until version 4 that touch screen access was implemented in TalkBack for phones, and to this day, it only supports commands done with one finger, with two-finger gestures being passed through to Android as one-finger commands. TalkBack has worked around this issue by recently, in Android version 8, gaining the ability to use the fingerprint sensor, if available, as a gesture pad for setting options, and the ability to switch spoken language, if using Google TTS, when reading text in more than one language. TalkBack otherwise uses graphical menus for setting options, or for performing actions, like deleting email. It can be used with a Bluetooth keyboard. By default, it uses Google TTS, a lower quality, offline version of the speech used for things like Google Translate, Google Maps, and the Google Home. TalkBack cannot use the higher quality Google TTS voices. Instead, voices from other vendors are downloaded for more natural sound.

BrailleBack, discussed on its Google Support page, is an accessibility service which, when used with TalkBack running, provides rudimentary braille support to Android. Commands are rough, meaningless, and unfamiliar to users of other screen readers, and TalkBack’s speech cannot be turned off while using BrailleBack, meaning that, as one person helpfully provided, one must plug in a pair of headphones and not wear them, or turn down the phone’s volume, to use one’s phone silently with braille. Silent reading is one of braille’s main selling points, but accessibility, if not given the resources necessary, can become a host of workarounds. Furthermore, BrailleBack must be installed onto the phone, providing another barrier to entry for many deaf-blind users, so some simply buy iPods for braille if they wish to use an Android phone for customization or contrarian reasons, or simply stick with the iPhone as most blind people do.

Now, though, many have moved to a new screen reader created by a Chinese developer, called Commentary. This screen reader does, however, have the ability to decrypt your phone if you have encryption enabled. For braille users, BRLTTY provides braille support. This level of customization, offset by the level of access which apps have to do anything they wish to your phone, is an edge that some enjoy living on. It does allow things like third-party, and perhaps better, screen readers, text to speech engines, apps for blind people like The vOICe, which gives blind people artificial vision, and other gray area apps like emulators, which iOS will not accept on the App Store. Users who are technically inclined do tend to thrive on Android, finding workarounds a joy to discover and use. People who are not, or who are but do not want to fiddle with replacements for first-party apps which do not meet their needs, and with unoptimized settings, find themselves doing more configuring of the phone than using it.

Third party offerings, like launchers, mail apps, web browsers, and file managers, all have variable accessibility, which can change from version to version. Therefore, one must navigate a shifting landscape of first party tools which may be sort of good enough, third party tools which are accessible enough but may not do everything you need, and tools for which users have found workarounds. Third party speech synthesizers are also hit or miss, with some not working at all, others, like Eloquence, now being unsupported, and more, like eSpeak, sounding unnatural. The only good braille keyboard which is free hasn’t been updated in years, and Google has not made one of its own.

Because of all this, it is safe to say that Android can be a powerful tool, but it has not received the focus needed to become a great accessibility tool as well. Google has begun locking down its operating system, taking away some things that apps could do before. This may come to inhibit the third party tools which blind people now use to give Android better accessibility. I feel that it is better to be on iOS, where things are locked down more, but you have, at least somewhat, a clear expectation of fairness on Apple’s part. Android is not a big income source for Google, so Google does not have to answer to app developers.

Chrome OS

Chrome OS is Google’s desktop operating system, running Chrome as the browser, with support for running Android apps. Its accessibility has improved plenty over the years, with ChromeVox gaining many features which make it a good screen reader. One of the main successes of ChromeVox is its braille support. It is normal for most first-party screen readers to support braille nowadays. When one plugs a braille display into a Chromebook with ChromeVox enabled, ChromeVox begins using that display automatically, if it is supported. The surprise here is that if one plugs it in when ChromeVox is off, ChromeVox will automatically turn on, and begin using the display. This is beyond what other screen readers can do. ChromeVox, and indeed TalkBack, do not yet support scripting, editing punctuation and pronunciation of speech, and do not have “activities” as VoiceOver for iOS and Mac has, but ChromeVox feels much more polished and ready for use than TalkBack.

The future of Google accessibility

Judging by the past, Google may add a few more features to TalkBack, but fewer than Apple adds to iOS. They have much to catch up on, however, as it was only two years ago that they added the ability for TalkBack to detect and switch languages, and to use the fingerprint sensor like VoiceOver’s rotor. I have not seen much change over the two years since, except the turning of a focus-tracking mode from a toggle into a mandatory feature. I suspect that, in time, they will remove the option to disable explore by touch, if they’ve not already.

With Chrome OS, and Google Chrome in general, I hope that the future brings better things, now that Microsoft is involved in Chromium development. It could become even more tied to web standards. Perhaps ChromeVox will gain better sounding offline voices than Android’s lower quality Google TTS ones, or gain sounds rendered in spatial audio for deeper immersion.

Microsoft

Microsoft makes only one overarching operating system, with variations for Xbox, HoloLens, personal computers, and other types of hardware. Windows has always been the dominant operating system for general purpose computing for blind people. It hasn’t always been accessible, though, and it is only in recent years that Microsoft has actively turned its attention to accessibility on Windows and Xbox.

Now, Windows’ accessibility increases with each update, and Narrator becomes a more useful screen reader. I feel that, in a year or so, blind people may be trained to use Narrator instead of other screen readers on Windows.

Windows

In the early days of Windows, there were many different screen readers competing for dominance. JAWS, Job Access with Speech, was the most dominant, with Window-Eyes, now abandoned, in second place. They gathered information from the graphics card to describe what was on the screen. There were no accessibility interfaces back then.

Years later, when MSAA, Microsoft Active Accessibility, was created, Window-Eyes decided to lean on that, while JAWS continued to use video intercept technology to gather information. In Windows 2000, Microsoft shipped a basic screen reader, Narrator. It wasn’t meant to be a full, useful screen reader, but one made so that a user could set up a more powerful one.

Now we have UI Automation, which is still not a very mature technology, as screen readers still do not use it for everything, like Microsoft Office. GW Micro, makers of Window-Eyes, merged with AI Squared, producers of the ZoomText magnifier, which was bought by Freedom Scientific, who promptly abandoned Window-Eyes. These days, JAWS is being challenged by NVDA, Nonvisual Desktop Access, a free and open source screen reader, and by Microsoft’s own Narrator.

In Windows 8, Microsoft began adding features to Narrator. Now, in Windows 10, four years later, Narrator has proven itself useful, and in some situations, helpful in ways that all other screen readers have not been. For example, one can install, set up, and begin using Windows 10 using Narrator. Narrator is the only Windows screen reader which can, with little configuration, show formatting not by describing it, but by changing its speech parameters to “show” formatting by sound. The only other access technology which does this automatically is Emacspeak, the “complete audio desktop.” Narrator’s braille support must be downloaded and installed, for now, but is still better than Android’s support. Narrator cannot, however, use a laptop’s trackpad for navigation. Instead, Microsoft decided to add such spatial navigation to touchscreens, meaning that a user must reach up and feel around a large screen, instead of using the flat trackpad as a smaller, more manageable area.

Speaking of support, Microsoft’s support system is better in a few ways. First, unlike Apple’s, their feedback system allows more communication between the community and Microsoft developers. Users can comment on issues, and developers can ask questions, a bit like on Github. Windows Insider builds come with announcements from Microsoft of what is new, changed, fixed, and broken. If anything changes regarding accessibility, it is in the release notes. Microsoft is vocal about what is new in Windows accessibility, in an era when many other companies seem almost ashamed to mention it in release notes. This is much better than Apple’s silence on many builds of their beta software, with no notice of accessibility improvements and features at all. Microsoft’s transparency is a breath of fresh air to me, and I am much more confident in their commitment to accessibility for it.

Their commitment, however, doesn’t seem to pervade the whole company. The Microsoft Rewards program is hard for me to use, and contains quizzes where answers must be dragged and dropped. This may be fun for sighted users, but I cannot do them with any level of success, so they aren’t fun for me at all. Another problem is the quality of speech. While Apple has superb speech options like MacinTalk Alex, Vocalizer, or the Siri voices, Microsoft’s offline voices sound bored, pause for too long, and have a robotic buzzing sound as they speak. I think that a company of Microsoft’s size could invest in better speech technology, or make its online voices available for download for offline use. Feedback has been given about this issue, so perhaps the next version of Windows will have more pleasant speech.

Windows has a few other downsides, too. It doesn’t support sound through its Linux subsystem, meaning I cannot use Emacs with Emacspeak. Narrator does not yet report when a program opens, or when a new window appears, or other visual system events. Many newer Universal Windows apps can be tricky to navigate, and the Mail app still automatically expands threads as I arrow to them, which I do not want to happen, making the app annoying to use.

The future of Microsoft accessibility

I think that the future of Microsoft, regarding accessibility, is very bright. They seem dedicated to the cause, seeking feedback much more aggressively than Apple or Google, and many in the blind community love giving it to them. Windows will improve further, possibly with Narrator gaining the ability to play interface sounds in immersive audio using Windows Sonic for Headphones, braille becoming a deeper, built-in part of Narrator, and higher quality speech made available for download. Since Microsoft is also a gaming company, it could work on creating soundscapes for different activities: browsing the web, writing text, coding, reading, to aid in focus or creativity. Speech synthesis could be given even more parameters for speaking even more types of formatting or interface item types. Really, with Microsoft’s attention to feedback, I feel that their potential for accessibility is considerable. Then again, it is equally possible that Apple will implement these features, but Apple isn’t as inviting as Microsoft has been when it comes to sharing what I’d love to see in an operating system, so I now just report bugs, not giving Apple new ideas.

Conclusion

It may be interesting to note the symmetry of accessibility: Apple’s phone is the dominant phone, but Microsoft’s Windows platform is the dominant laptop and desktop system among blind people. Apple’s iPhone is more accessible than Google’s Android, but Google’s Chrome OS is more polished and updated, accessibility-wise, than Apple’s MacOS. Personally, I use a Mac because of its integration with iOS Notes, Messages, Mail, and other services; because the Mail app is a joy to breeze through email with; and because open source tools like Emacs with Emacspeak do not work as well on Windows. Also, speech matters to me, and I’d probably fall asleep much more often hearing Microsoft’s buzzing voices than the somewhat energetic sound of Alex on the Mac, who speaks professionally, calmly, and never gets bored. I do, however, use Windows for heavy usage of the web, especially Google web apps and services, and for gaming.

Time will tell if these companies continue on their paths, Apple forging ahead, Microsoft burning bright, and Google… being Google. I hope, nevertheless, that this article has been useful for the reader, and that my opinions have been as fair as possible towards the companies. It should be noted that the accessibility teams at each company are individuals, have their own ideas of what accessibility is, means, and should be, and should be treated with care. After all, this past decade has been a long journey of, probably, most effort spent convincing managers that the features we now have are worth spending time on, and answering user complaints of “my phone is talking to me and I want it turned off right now!”

This does not excuse them for the decay of Android and Mac accessibility, and the lack of great speech options on Windows. It does not excuse them for Apple Arcade’s lack of accessible games, or Microsoft Rewards’ inaccessible quizzes. We must give honest, complete, and critical feedback to these people. After all, they do not know what we need, what will be useful, or, if we dare tell, what will be delightful for us to use, unless we give them this feedback. This applies to all software, whether it be Apple’s silent gathering of feedback, Microsoft’s open arms and inviting offers, or open source software’s issue trackers, Discord servers, mailing lists, and Github repositories. If we want improvement, we must ask for it. If we want a better future, we must make ourselves heard in the present. Let us all remember the past, so that we can influence the future.

Now, what do you think of all this? Do you believe Apple will continue to march ahead regarding accessibility, or do you think that Microsoft, or even Google, has something bigger planned? Do you think that Apple is justified in its silence, or do you hope that it begins speaking more openly about its progress, at least in release notes? Do you like how open Microsoft is about accessibility, or do you feel they still don’t talk about accessibility for blind users enough? I’d love to hear your comments, corrections, and constructive criticism, either in the comments, on Twitter, or anywhere else you can find me. Thanks so much for reading!


You can always subscribe to my posts through Email or Mastodon. Have a great day, and thanks for reading!

In this post, I’ll detail my experiences of advocating for accessibility in open source software, why it is important, and how others can help. I’ve not been doing it for long, but at least by now, I’ve done a bit. I’ll also touch upon why I think open source software, on all operating systems, is important, and what closed source software and closed feedback systems cannot offer that open source grants. On the other hand, there are things which closed source somewhat grants, but which have faltered slightly in recent days. I will attempt to denote what is fact and what is opinion; this goes for any post of a commentary or informative nature.

The Appeal of Open Source

Open source, or free software, basically means that a person can view and change the source code of software that they download or own. While this doesn’t mean much to users, it does mean that many different people can work on a project to make it better. This has no value on its own, as the “Heartbleed” SSL bug and its aftermath showed, but as with SSL, things can obviously improve when given an incentive.

For now, open source technology is used in many closed source operating systems. For example, the Liblouis braille tables are used in iOS, macOS, and most Linux distributions through BRLTTY. While the software is not perfect, it is often made for more than one operating system, has a helpful community of users, and, best of all for accessibility, developers who are more likely to consider accessibility. This is helped greatly by platforms for open source development, like Github and Gitlab, which allow users to post “issues” on projects, including accessibility ones.

The Appeal of Closed Source

People like getting paid. I should know, as a working blind person who does love getting paid for time and effort well spent. People love keeping things hidden while they’re being worked on. I wouldn’t want a reader reading an incomplete blog post, after all, and spreading the word that “Devin just kind of wrote a few words and that’s all I got from the blog.” People love being able to claim their work as theirs, instead of having to share the credit with other people or companies. I don’t have direct experience with this, because I need all the help I can get, but in my opinion, it is a factor in choosing to create on your own, as a user or a company. Another great thing about closed source is that your competitors can’t copy what you’re doing as you do it, and when you’re an important company, with allegiance to your shareholders, you must do anything to keep making money. But what about accessibility?

Open Source Accessibility

The accessibility of open source projects varies a lot. For example, before Retroarch was made accessible, its interface was not usable by blind people. Now, though, I can use it easily. However, current versions of the KDE Plasma desktop do not work well with the Orca screen reader. The following quote is from the release notes for KDE’s latest desktop version:

> KDE is an international technology team that creates free and open source software for desktop and portable computing. Among KDE’s products are a modern desktop system for Linux and UNIX platforms, comprehensive office productivity and groupware suites and hundreds of software titles in many categories including Internet and web applications, multimedia, entertainment, educational, graphics and software development. KDE software is translated into more than 60 languages and is built with ease of use and modern accessibility principles in mind. KDE’s full-featured applications run natively on Linux, BSD, Solaris, Windows and Mac OS X.


“Modern accessibility principles,” you say? In my opinion, we seem to be talking about different definitions of “accessibility.” Yes, there are multiple definitions: one is accessibility in the sense of being able to be accessed, another is the ability to be found, and another is the quality of being easy to deal with. As stated in the About section of this site, I use accessibility to mean being able to be used completely by blind people. This carries with it the implication that every single function, and all needed visual information, must be conveyed to a blind person in order for something to be accessible. This rules out the “good enough” approach that so many blind people accept as the status quo. Luckily for blind people who would love to use KDE, there is work being done on this issue.

GNU, the project behind much of Linux, also has an accessibility statement, which seems to be very out of date, as it references Flash Player and Silverlight, which are no longer in common use, and does not reference Apple’s iOS, Google’s Android, and other modern technologies which are not open source (or are, but might as well not be because of the necessity of closed-source services), but which include assistive technologies. I encourage every adventurous blind person to make themselves available for testing open source software and operating systems; user testing was mentioned by the KDE team as something blind people could do to help. Believe me, having an environment which is a “joy to use” is a dream of mine.

Gnome and Mate accessibility is okay, but neither comes close to the accessibility of Windows and Mac systems. For a good example, if you press Alt + F1 in Gnome, and probably Mate too (tested: Mate works a lot better than Gnome), you may only hear “window.” Advanced users will know to type something in Gnome, or use the arrow keys in Mate, but regular users should not have to learn to hunt around due to bad accessibility. The fact that less technically inclined users do use Linux is a testament to blind people’s ingenuity and ability to adapt, rather than to the accessibility of the platform.

Open source accessibility is so hit and miss because there are so many standards. There is the GTK framework for building graphical apps, which does have some accessibility support, but developers must label the items in their programs with text; a sketch of what that looks like follows below. There is the Qt framework, which seems to have poorer accessibility support. Basically, developers can do anything they want, which is good for freedom, but often not great for accessibility. Also, much of the community has not heard of accessibility practices, does not know that blind people use computers, or thinks that we must use braille interfaces to interact with computers and digital devices. This is a failure on our part, as we do not “get out there” on the Internet enough. With the advent of an accessible Reddit client, this may begin to change. Further work must be done to give blind users an accessible Reddit interface on the web, for users on computers, not iPhones. However, Github is very accessible, and there is nothing stopping one from submitting issues.
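
To make the labeling point concrete, here is a minimal sketch in Python, assuming GTK 3 with the PyGObject bindings; the window, icon, and “Play” name are my own illustration, not taken from any real app.

```python
import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk

window = Gtk.Window(title="Player")
window.connect("destroy", Gtk.main_quit)

# An icon-only button carries no text, so Orca announces it as just
# "button" unless the developer gives it an accessible name.
button = Gtk.Button.new_from_icon_name("media-playback-start", Gtk.IconSize.BUTTON)

# One line fixes that: Orca can now announce "Play, button."
button.get_accessible().set_name("Play")

window.add(button)
window.show_all()
Gtk.main()
```

A fix this small is exactly the kind of thing an issue report can point a sighted developer toward.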

Closed Source Accessibility

“Okay, but what about Windows? And Apple? You like Apple, right?” Basically, it’s hard to tell. Software doesn’t write itself; it is written, for now, by people. People can make mistakes, ignore guidelines, or simply not care about accessibility. However, those guidelines do exist, and are usually one standard, like the iOS accessibility guidelines. This means that companies can develop accessible software easily, and be held accountable by managers to uphold accessibility. But even the best of accessible companies do not always do the right thing. Apple, for example, has created two services, Apple Arcade and Apple Research. Apple Arcade contains no games which a blind gamer can play without expending much more effort than a sighted gamer. Apple Research contains some questions with answer buttons which are not labeled, or cannot be activated. Does Apple think that blind people do not want to game, or that we don’t care about our hearing, heart, or, for women, their reproductive health? Apple has also created Swift Playgrounds, an app for children to learn to code. This is accessible. But what about adults? Shouldn’t blind adults, who are usually technically inclined enough, be given a chance to learn to code? I’ll probably rant about this in a future article.

Microsoft has been on an accessibility journey for a few years now, but even they have a few problems. First, the voices in Windows 10 are poor for screen reading tasks. They pause far too long at the end of clauses and sentences, leading me, at least, to press Down Arrow to move to the next line before the last line was actually done being spoken, all because it paused just long enough to make me think that there was no more text to speak. Microsoft’s Xbox Game Pass is great, but I could not find any accessible games in the free rotations. Sure, there’s Killer Instinct, which many blind people enjoy playing, but I found it not only inaccessible, as the menus do not speak, but boring, as the characters all seemed to simply do the same thing. I know that games do not have to be accessible to be fun, but I expect companies who showcase games, like Apple with Arcade, to have at least one accessible game for blind people to enjoy. And I also know that neither Apple nor Microsoft makes these games, but they do choose to advertise them, endorse them even, and it shows that, for Apple Arcade at least, video games are not something which they expect blind people to play. Microsoft is proving them wrong, with the release of Halo with screen reader usability in menus, and the possibility that the new Halo game will be accessible.

Another problem with Microsoft is that not all of their teams are onboard. Like Apple with Arcade and Research, Microsoft has the Rewards team. Their quizzes require one to move items around to reorder answers to get the quiz correct. This may be easy, and perhaps fun, for sighted people, but it is simply frustrating for blind people. Other problems include the release of the new Microsoft Edge, which, for most screen reader users, requires turning off UI Automation in order to read some items on the web. On the other hand, if Microsoft’s upcoming foldable phone comes with greatly enhanced accessibility relative to pure Android, and with the Narrator screen reader optimized and made great and enjoyable for a mobile experience, I think Microsoft could take plenty of mobile phone market share back from Apple. (Update: it’s barely any better than any other Android phone, so Apple still wins.) They already have most general purpose computer users who are blind, so taking from Apple would be a huge win for them regarding accessibility. But, on that, we’ll have to wait and see how far Microsoft takes their commitment to accessibility. The more cynical side of me says that Microsoft will simply slap Android on a folding phone and release it, because why fight Apple?

Reporting Bugs

So, what can we do to make accessibility better? Just about all open source software, previously including the stuff making up this blog, is hosted on Github. Just about all companies behind closed source software claim to want your feedback. So, I recommend giving them any feedback you have. I know that giving feedback to Apple is like throwing $100 bills into the ocean: giving your valuable time to something which may offer no results, and just gets you the robotic “thanks” message. I know that talking to Microsoft’s accessibility team may sometimes seem unproductive, because they lead you from Twitter to one of a number of feedback locations. I know that feedback to open source projects may take a lot of time, explaining and promoting accessibility to a community which has never considered it before, but it all may help.

For a great, and successful, Github issue regarding accessibility, see this issue on the accessibility of Retroarch. You can see that I approached the Retroarch team respectfully, with knowledge of basic accessibility and computer terminology. Note that I gave what should happen, what was happening, and what could be done to fix the problem. As the saying goes, if you do not contribute to a solution to a problem, you are a part of the problem. Blind people will need to remember to give solutions, not just whine that something doesn’t work and that they can’t play Poke A Man like everyone else.
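
If you have never written one, the shape of such an issue is simple. The following is a hypothetical sketch, not the actual Retroarch issue text, and the libraries named at the end are only examples of possible fixes:

```
Title: Menu items are not spoken to screen readers

What should happen: As I arrow through the menus, each item's text
should be sent to my screen reader (NVDA, VoiceOver, or Orca).

What actually happens: The menus are silent, so I cannot tell which
item is focused.

Possible fix: Expose menu text through each platform's accessibility
API, or speak it directly through a library such as Tolk on Windows
or speech-dispatcher on Linux.
```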

Also, share links to your feedback with other blind people who can vote, thumb up, or comment on it. And if you do comment, please remember that feedback does not net instant results. I’m still waiting on Webcamoid to have an accessible interface. But at least I’ll know when something changes, and I could even pay for features to be implemented.

This is opposed to the closed source model, where feedback is “passed on to the team,” or you are thanked, by your iPhone, for your feedback, but do not hear anything back from developers, and you most definitely cannot pay for specific features to be worked on, or donate to projects that you feel deserve it. You must hope and have faith that large companies with more than a billion users care enough to hear you. For perspective, if every blind person stopped using an iPhone, Apple would not miss the lost sales much, compared to its billions of sighted users. However, the engineers who work on iOS accessibility are people too, with deadlines, lives, and feelings, and we should also respect that they are probably tightly restricted in answering feedback, fixing bugs, and creating new, exciting features.


As for me, I will continue to support open source software. I’ll keep using this Mac and iPhone because they work the best for me and what I do for work and writing. But, believe me, when something better comes along, I’ll jump ship quickly. As blind people, I feel, we cannot afford to develop brand loyalty. Apple, Microsoft, or Google, I think, could drop accessibility tomorrow, and there we’d be, left in the cold. I highly doubt they will. They may let it lie stagnant, but they probably won’t remove it. I do not write this to scare you in the least, but to make you think about how much control you actually have over what you use, how companies and developers view us, and how we can improve the situation for ourselves. If sighted people notice a bug or want a feature in iOS or Windows, they can gather their tech press and pressure Apple or Microsoft. If we find an accessibility bug, do we have enough clout, or unity, to pressure these companies? Writing feedback, testing software, trying new things, writing guides and fixing documentation, or, if able, translating software into other languages are all things that any blind person can do. I’m not saying that I’m perfect at any of this. I just think that we as a community can grow tremendously if we strike out from our comfortable Windows PCs, Microsoft Word, audio games, TeamTalk, and old speech synthesizers.

I’ll give some projects you could try out and give feedback on:

Element chatting service


You can always subscribe to my posts through Email or Mastodon. Have a great day, and thanks for reading!