
Devin Prater's blog

New Mastodon Instance

I now have my own instance. “There can’t be a safe space on the Internet” my big fat ass. I guess if you’re not the owner of an instance there can’t be, but about $6 a month is a small price to pay for a space where I can be myself and not just a subset of myself. @devinprater@devin.masto.host is where I post now. I think it’ll be a good home moving forward.

It’s still a new instance, and so hasn’t federated with all the ones I remember from the dragon’s cave, but it serves its purpose. I don’t know if I have all the followers from there on here. But I don’t care. The people who give a damn will find me.

A Chromebook

We got a bit more equipment at work a few weeks ago, one of which is a new Chromebook. I got to take it home, and I’ve installed Linux, and some Android apps, on it. It works pretty well, as it should with 16 GB RAM and some Intel processor.

Discuss...

You can always subscribe to my posts through Email or Mastodon. Have a great day, and thanks for reading!

If you have sight, imagine that in every digital interface, the visuals are beamed directly into your eyes, into the center and peripheral vision, blocking out much else, and demanding your attention. The “visuals” are mostly text, with a few short animations every once in a while, and only on some interfaces. You can’t move it, unless you want to move everything else, like videos and games. You can’t put it in front of you, to give you a little space to think and consider what you’re reading. You can’t put it behind you. You can make it softer, but there comes a point where it’s too soft and blurry to see.

Also imagine that there is a form of art that 95% of other humans can produce and consume, but that for you is either blank or filled with meaningless letters and numbers ending in .JPEG, .PNG, .BMP, or other computer jargon, and the only way to perceive it is to humbly ask that the image be converted to the only form of input your digital interface can understand: straight, plain text. This same majority of people have access to everything digital technology has to offer. You, though, have access to very little in comparison. Your interface cannot interpret anything that isn’t created in a standards-compliant way. And this culture, full of those who need to stand out, doesn’t like standards.

There is, though, a digital interface built by Apple which uses machine learning to try to understand this art, but that’s Apple only, and they love control too much to share that with interfaces on other companies’ systems. And there are open source machine learning models, but the people who could use them are too busy fixing their interface to cope with breaks in operating system behaviour and UI bugs to research that. Or you could pay $1099, or $100 per year, for an interface that can describe the art, by sending it to online services of course, and get a tad more beauty out of the pervasive drab of plain text.

Now, you can lessen the problem of eye strain, blocked out noise, and general information fatigue by using a kind of projector, but other people see it too, and it’s very annoying to those who don’t need this interface, with its bright, glaring lights, moving quickly, dizzyingly fast. It moves in a straight line, hypnotically predictable, but you must keep up, you must understand. Your job relies on it. You rely on it for everything else too. You could save up for one of those expensive interfaces that show things more like print on a page… if the page had only one small line and was rather slow to read, but even that is dull. No font, no true headings, no beauty. Just plain, white on black text, everywhere. Lifeless. Without form and void. Deformed and desolate. Still, it would make reading a little easier, even if it is slower. But you don’t want to be a burden to others or annoy them, and you’ve gotten so used to the close, direct, heavy mode of the less disruptive output that you’re almost great at it. But is that the best for you? Is that all technology can do? Can we not do better?


This is what blind people deal with every day. From the ATM to the desktop workstation, screen readers output mono, flat, immovable, unchanging, boring speech. There is no HRTF for screen readers. Only one can describe images without needing to send them to online services. Only a few more can describe images at all. TalkBack, a mobile screen reader for Android, and ChromeVox, the screen reader on Chromebooks, can’t even detect text in images, let alone describe images. All of them read from top to bottom, left to right, unless they are told otherwise. And they have to be specifically told about everything, or it’s not there. We can definitely do better than this.

Discuss...

You can always subscribe to my posts through Email or Mastodon. Have a great day, and thanks for reading!

Braille

For a while now, I’ve been curious about which platform’s accessibility is, at its foundation, more secure, more “future proof”, and better able to be extended. Today, I’m looking into the TalkBack source code that is currently on Github, which I cloned just today. I’ll go through the source, to see if I can find anything interesting.

First of all, according to this file:

This whole project was started in 2015. Of course, we then have this one:

Which shows that it is copyright 2020. The first just seems to wrap Liblouis in Java, but what about this one?

Ah, it seems to be the thing that translates the table files and such into Java things. So that’s kind of where the Braille keyboard gets its back-end. Before we look at the front-end, here’s roughly what “wrapping Liblouis in Java” means in practice.
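This is just a sketch with names I made up, not the actual class from the repository: a thin wrapper that loads the native liblouis library and exposes translation as native methods.

// Hypothetical example, not from the TalkBack source: a minimal JNI-style
// wrapper around liblouis for translating between print text and braille.
public final class LouisTranslator {
    static {
        // Load the native liblouis wrapper bundled with the app (assumed name).
        System.loadLibrary("louiswrap");
    }

    /** Translates print text to braille using the given table, e.g. "en-ueb-g2.ctb". */
    public static native String translate(String tableName, String text);

    /** Back-translates braille into print text using the same table. */
    public static native String backTranslate(String tableName, String braille);

    private LouisTranslator() {}
}

The real wrapper is more involved than that, but it’s the same general shape. Now let’s look at the front-end.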

So, this was made in 2019. I do like seeing that they have been working on this stuff. Now, here, we have:

/** Stub implementation of analytics used by the open source variant. */

Yeah, figured I wouldn’t get much out of this file.

Now here’s where we might just get something:

https://github.com/google/talkback/blob/master/brailleime/src/main/java/com/google/android/accessibility/brailleime/dialog/ContextMenuDialog.java#L4

This part was made in 2020. When they need to crank out a feature, they really get rolling. I just hope they give us a good few features this year.

Oh, now this is pretty considerate, a dialog will show if the Braille keyboard is opened and TalkBack is off:

And this one for if a device doesn’t support enough touch points (like an iPhone!):
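Neither file is quoted here, but both checks can be done with standard Android APIs. Here’s a rough, hypothetical sketch of the idea, using my own class and message names rather than anything from the TalkBack source:

import android.accessibilityservice.AccessibilityServiceInfo;
import android.app.AlertDialog;
import android.content.Context;
import android.content.pm.PackageManager;
import android.view.accessibility.AccessibilityManager;

/** Hypothetical pre-flight checks a braille keyboard might run before opening. */
public class BrailleKeyboardChecks {

    /** Returns true if any spoken-feedback service (such as TalkBack) is enabled. */
    public static boolean isScreenReaderOn(Context context) {
        AccessibilityManager manager =
                (AccessibilityManager) context.getSystemService(Context.ACCESSIBILITY_SERVICE);
        return manager != null
                && manager.isEnabled()
                && !manager.getEnabledAccessibilityServiceList(
                        AccessibilityServiceInfo.FEEDBACK_SPOKEN).isEmpty();
    }

    /** Returns true if the touch screen can track five or more fingers at once. */
    public static boolean supportsEnoughTouchPoints(Context context) {
        return context.getPackageManager()
                .hasSystemFeature(PackageManager.FEATURE_TOUCHSCREEN_MULTITOUCH_JAZZHAND);
    }

    /** Shows a plain dialog explaining why the keyboard can't be used yet. */
    public static void warn(Context context, String message) {
        new AlertDialog.Builder(context)
                .setTitle("Braille keyboard")
                .setMessage(message)
                .setPositiveButton(android.R.string.ok, null)
                .show();
    }
}

If either check fails, the keyboard could call warn() with something like “Turn on TalkBack first” or “This screen can’t track enough fingers.”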

Okay, so this next one seems to allow the braille keyboard to grab braille from a file, then turn it into something else:

This could lead to a sort of substitutions list, or spell checking or braille correction facility.
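I’m speculating here, but a substitutions list could be as simple as reading “wrong=right” pairs from a text file and applying them to the back-translated text. None of the names below come from the TalkBack source:

import java.io.BufferedReader;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.LinkedHashMap;
import java.util.Map;

/** Hypothetical substitutions list: loads "typo=correction" pairs and applies them. */
public class SubstitutionList {
    private final Map<String, String> substitutions = new LinkedHashMap<>();

    /** Loads one "typo=correction" pair per line, skipping blanks and # comments. */
    public void load(Path file) throws IOException {
        try (BufferedReader reader = Files.newBufferedReader(file, StandardCharsets.UTF_8)) {
            String line;
            while ((line = reader.readLine()) != null) {
                line = line.trim();
                if (line.isEmpty() || line.startsWith("#")) {
                    continue;
                }
                int eq = line.indexOf('=');
                if (eq > 0) {
                    substitutions.put(line.substring(0, eq), line.substring(eq + 1));
                }
            }
        }
    }

    /** Applies every loaded substitution to the given text, in file order. */
    public String apply(String text) {
        for (Map.Entry<String, String> entry : substitutions.entrySet()) {
            text = text.replace(entry.getKey(), entry.getValue());
        }
        return text;
    }
}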

Ah, now this is a good summary of the interface. Also yeah, looks like lots of geometry.

Okay, this part is rather interesting:

/** Reads saved points from SharedPreference. */

So, does that mean it can remember where I usually put my fingers when typing?
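Going only by that comment, I’d guess something like the following; the class, preference file, and key names are my assumptions, not what the source actually uses:

import android.content.Context;
import android.content.SharedPreferences;
import android.graphics.PointF;

/** Hypothetical store for the calibrated position of each braille dot. */
public class CalibrationStore {
    private static final String PREFS = "braille_keyboard";

    /** Saves one calibration point under a per-dot key, e.g. "dot1". */
    public static void savePoint(Context context, String key, PointF point) {
        SharedPreferences prefs = context.getSharedPreferences(PREFS, Context.MODE_PRIVATE);
        prefs.edit()
                .putFloat(key + "_x", point.x)
                .putFloat(key + "_y", point.y)
                .apply();
    }

    /** Reads a saved point back, or returns null if it has never been stored. */
    public static PointF readPoint(Context context, String key) {
        SharedPreferences prefs = context.getSharedPreferences(PREFS, Context.MODE_PRIVATE);
        if (!prefs.contains(key + "_x")) {
            return null;
        }
        return new PointF(prefs.getFloat(key + "_x", 0f), prefs.getFloat(key + "_y", 0f));
    }
}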

And here’s the jackpot:

Here, we learn how Google is working around Explore by Touch to give us a braille keyboard that bypasses TalkBack’s own touch interaction model.
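I can’t say for certain that this is exactly what TalkBack does here, but since Android 11 an accessibility service has had an API for this kind of workaround: it can mark a region of the screen as a touch exploration passthrough, so raw touches reach the braille keyboard instead of being swallowed by Explore by Touch. A simplified sketch, not TalkBack’s actual code:

import android.accessibilityservice.AccessibilityService;
import android.graphics.Rect;
import android.graphics.Region;
import android.util.DisplayMetrics;
import android.view.Display;
import android.view.accessibility.AccessibilityEvent;

/** Hypothetical service showing the touch exploration passthrough idea. */
public class PassthroughExampleService extends AccessibilityService {

    /** Lets raw touches through to the keyboard instead of Explore by Touch. */
    public void enableBrailleKeyboardPassthrough() {
        DisplayMetrics metrics = getResources().getDisplayMetrics();
        Region wholeScreen = new Region(new Rect(0, 0, metrics.widthPixels, metrics.heightPixels));
        setTouchExplorationPassthroughRegion(Display.DEFAULT_DISPLAY, wholeScreen);
    }

    /** Restores normal Explore by Touch handling when the keyboard closes. */
    public void disableBrailleKeyboardPassthrough() {
        setTouchExplorationPassthroughRegion(Display.DEFAULT_DISPLAY, new Region());
    }

    @Override
    public void onAccessibilityEvent(AccessibilityEvent event) {}

    @Override
    public void onInterrupt() {}
}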

Discuss...

You can always subscribe to my posts through Email or Mastodon. Have a great day, and thanks for reading!

A few weeks ago, Apple’s app review team dropped the ball by refusing to allow an update to a Hangman game for blind people, which eventually got resolved after tech news sites posted about it. About a week ago, we learned that Apple had disabled a few Siri commands that are important for blind people, and for people in general who prefer to use Siri for checking voicemail, email, and missed calls. That too was walked back: at first, Apple said it was done to get blind people to use VoiceOver, but now they say it’ll be fixed in the coming weeks and months.

This post isn’t a hate letter to Apple, although these two events were the final straw, and made me decide to deal with Android if it means having something I can loosely control. This is to remind myself, and the blind community, and anyone else who cares to read it, that if we depend on Apple, Google, Facebook, Microsoft, Twitter, and other big tech companies, we depend on people who, for the most part, care nothing for us.

It’s mostly our fault

As a community, we blind people seem, to me, more eager than most to lean on anything that claims to help us. Whether it be free government assistance or built-in screen readers and voice assistants from big corporations, we put our livelihoods into the hands of these people. Then stuff like this happens, or money mismanagement strikes government agencies, or good people quit, leaving us with poor replacements or unsteady leadership, or one carelessly discontinued feature leaves people with reduced functionality, and we’re tempted to place all the blame on the people in power. And we should rightfully let them know our concerns.

When Assistive Technology instructors or service providers get calls throughout the week from elderly people asking “What did I do wrong?” or “What did I break?”, that’s a problem. But then, we should ask ourselves: “Why did this happen? Why is it that these people rely on Siri, rely on Apple?”

We have placed all of our technical hope into a few big corporations. Of course, there’s also Humanware, Freedom Scientific, and other such companies who thrive off of selling out-of-date technology at a very high price. And they’re part of the problem too. But so many of us rely on Apple. So many of us use an iPhone as our only computer. Sighted people can get away with it because the big problems for them are usually visual issues or minor functionality problems. If we only use an iPhone for our job, though, the sometimes major issues could mean the difference between being able to confidently do a job, or finding so many workarounds that we aren’t even competent at the job anymore.

So, why not just use VoiceOver? Why use Siri for all that? Because elderly people don’t need all of the functionality on an iPhone, and shouldn’t have to learn touch screen commands that, through bad memory or unsteady hands, they may not even be able to do. To ask them to use VoiceOver instead of the thing that has worked for them for years is near the peak of privileged behavior, and should not be tolerated. But these are supposed accessibility experts, or Apple experts, representing a top tech company full of supposedly smart people. Surely they know what they’re talking about, right? Right?

A hard solution

We need to build for ourselves. No one else will do it for us. We see this in so many areas, like the lack of a great and wonderful braille experience, which for now, across all operating systems, is bland, with no spatial separation for paragraphs and headings, and no formatting. We don’t even have formatting info through speech changes, something the Emacspeak audio desktop has had for decades now, although Narrator is trying, but constrained by the rigid text-to-speech engine used. I know we can’t affect services like Uber, Lyft, Walmart, Amazon, and other apps and sites. But the operating system and tools we use should work with us, and for us; never should they work against us. And more and more, as these big companies realize that they have us on their hook, that they’ve reeled us in, they’ll take away what we need, whether by accident or on purpose. They take our freedom to do what we want with our devices day by day, some companies more than others. And yes, while Apple does give us image descriptions and stuff like that, we could do the same thing with open source Python libraries and tools if we wanted. We just have to stop giving ourselves away, and either make these companies work for our money again, or make stuff ourselves, so we never are beholden, and trapped under, them again.

What can we do?

As users, we can vote with our dollars. We can use more open platforms, like Android, Windows, and ChromeOS, and give feedback to these companies, rewarding them for giving us the option to use something other than what they allow on the device. We can give money to NV Access and other blind creators of software. We can raise awareness on social networks about how we use more open platforms, and what we can do with them. And we can encourage the many talented blind developers by openly supporting, and funding, any of their work to create helpful, open software. If we can pay for Apple Music, Apple News, Apple Fitness Plus, and Apple AirPods Max, we can afford to give blind developers much more than what they get now. We could give NV Access enough money to hire a developer to work on a great braille experience, for example. Or a few Android developers to work on improvements to TalkBack, adding braille support and improving its features.

Now, I know there aren’t many of us, and even fewer blind people with money. But I’ll begin doing my part, donating even more to NV Access when I have a bit more money, since I’ve canceled all my Apple subscriptions. It’s time we have a say in the technology that means the most to us. It’s time we expect more from developers, from ourselves, and much less from big companies who hide in the shadows and expect us to hold them up. No more of that.

Discuss...

You can always subscribe to my posts through Email or Mastodon. Have a great day, and thanks for reading!

Today, I came across an article called Why Linux Is More Accessible Than Windows and macOS. Here, I will respond to each point of the article. While I applaud the author’s wish to promote Linux, I think the points given are rather shallow and very general in nature, and could be made about almost any operating system comparison.

1. The Power of Customization

In this section, the author argues that, while closed source systems do have accessibility options, people with disabilities (whom the author calls “differently abled,” a term some people with disabilities would consider ableist because it feels more like inspiration porn) have to compromise on what modifications they can make to their closed source operating systems. This can be true, but from my experience using MacOS, Windows, iOS, Android, and Linux, closed source systems have a wider community of people with disabilities using them, and thus have addons and extensions that allow for as few compromises to the user’s experience as possible.

Another point that must be kept in mind is that Linux is not the most user-friendly OS yet. The modifications that can be made with Linux are more than in MacOS and Windows, yes. But I, for example, want to hold down the Space bar and have that register as holding the Control key. I probably cannot do that in Windows and MacOS. I surely can do it in Linux, but it would take a lot of learning about key codes and how to change keyboard maps throughout the console and X/Wayland sessions. The GUI will not provide this ability. The best I can do with the GUI is change Capslock to Control.

Also, let’s say a new user installs a distribution like Fedora Linux, and needs a screen reader, or any accessibility service. The user has done a little homework, so knows to turn on Orca with Alt + Super + S. The user then launches Firefox from the “run application” dialog. And it doesn’t work. Nothing reads. Or the user runs a media player, and gets the same result. Why is this? I’ll spare you the arduous digging needed to find the answer. In the personalization area of a desktop’s system menu, or in the Assistive Technologies dialog, there is a checkbox that must be checked before assistive technologies will work correctly with the rest of the system. The user has to know that it’s there, how to get to it in the chosen desktop environment, and how to check the box and close the dialog. All of this before doing anything else with their system.

This means that, out of the box, on almost all Linux distributions, this one checkbox shows that the Linux GUI, by nature of needing the box to be checked, is hostile to people with disabilities. Can distribution maintainers check this box by default? Yes. Do they? No. Does this box even need to be there? No. Assistive technologies could be enabled by default, and advanced users, after being warned in the comments of a configuration file, could disable them there, and only there.

2. Linux Is Stable and Reliable

About fifteen minutes ago, I was using Gmail within the latest Google Chrome on Fedora Linux. Suddenly, the screen reader, Orca, stopped responding as I tried to move to the next heading in an email. I switched windows, and nothing happened. Speech came back after a good 20 seconds, but that shows that Linux isn’t quite as stable as the author may believe. At least, not every distribution.

My experience is my own; I do not claim to be an expert in Linux usage or administration. But this is still my experience; while Linux is stable, and I can use it for work purposes, it is not as stable, especially in the accessibility department, as Windows or MacOS. I would say, though, that it is more usable than MacOS, where just about anything in Safari, the web browser, results in Safari going unresponsive for a good five seconds or more.

Another important point is that while many developers hammer away at the core of Linux, how many people maintain ATSPI, the Linux bridge between programs and accessibility services? How many people make sure the screen reader is as lean and performant as possible? How many people make sure that GTK is as quick to give information on accessibility as it is to draw an image? How many people make sure that when a user starts a desktop, that focus is set somewhere sensible so that a screen reader reads something besides “window”? My point is, open source is full of people that work on what they want to work on. If a developer isn’t personally impacted by accessibility needs, that developer is much less likely to code with accessibility in mind. So let’s stop kidding ourselves into thinking that overall work on Linux includes even half the needed work on accessibility specifically.

While Linux’s accessibility crawls towards betterment at about one fix every month or two, Windows and MacOS have actual teams of people working specifically on accessibility, and a community of disabled developers working on third-party solutions to any remaining problems. Do all the problems get fixed? No, especially not in MacOS. But the principle that more eyes on a problem means more things get noticed applies especially to accessibility.

3. Linux Runs on Older Hardware

This section is one I can agree with completely. Linux running on old hardware is what will drive further adoption when Windows 11 begins getting more features than Windows 10. This is even more important for people with disabilities, who usually have much less money than people without disabilities, so cannot upgrade computers every year, or even every three or five years.

4. Linux Offers Complete Control to the Users

This is true if the user is an advanced Linux user. If the user is just starting out with Linux, or even just starting out with computers in general, it is far from true. How would it feel to be trapped in a place without a gate, without walls, without doors, without windows? That’s how a new computer user would feel when dealing with Linux, especially one who is blind and thus needs to know how to use the keyboard, what the words the speech is saying mean, and what all the terminology means, while not even knowing where the Space bar is, or how to turn the computer on.

This is a huge issue for every operating system, but MacOS somewhat solved it by adding a wonderful tutorial for VoiceOver, its screen reader, and by guiding the user to turn it on when the computer starts, without the user having to touch a single key.

As for this piece:

#+beginquote On the other hand, Linux shares every line of code with the user, providing complete control and ownership over the platform. You can always try new technologies on Linux, given its inherent nature, compatibility, and unending support for each of its distros. #+endquote

In practice, this is wrong. First, new Linux users won’t understand the code that Linux “shares” with them. New Linux users will not know where to look to find this code. So, this really doesn’t help them. Open source or closed, the OS is going to be a black box to any new user. And new users are what count. If new users do not want to stay on Linux, they will not spend the time to become old users, who can then teach newer users. Also, good luck trying new technologies on Debian.

Accessibility Comparison Between Linux and Windows

Here, the author compares a few access methods: the thing the author calls the “screen reader” on Linux, which I hope they know is called Orca, versus Windows Narrator, the worst option on Windows, but built in.

The author doesn’t mention NVDA on Windows, which is far more powerful than Narrator and has several addons to enhance its functionality even further. One can add many different “voice modules” to Windows, and NVDA has plenty of addon voices as well, many of which are not available on Linux, like DECtalk, Softvoice, and Acapela TTS.

Accessible-Coconut: An Accessible Linux Distribution

I’m going to be blunt here: this distribution is based off of an old LTS version of Ubuntu, so it will lack the latest versions of Orca, ATSPI, GTK, and everything else. If you want something approaching good, try Slint Linux. That’s about the most user-friendly distribution for the blind out there right now. Fedora’s Mate spin is what I use, but Orca doesn’t come on at startup, and assistive technology support isn’t enabled either.

Linux Distros Cater to Every User Type

This summary continues the points expressed in the article, and ends with the author inviting “you” to try Linux if “you” want your computer to be more accessible. I suppose the author is pointing people to try Accessible Coconut. At this point, I would rather users do a ton of reading about Linux, the command line, Orca, all the accessibility documentation they can find, try Windows Subsystem for Linux, and then, if they want more, put Linux on a separate hard drive and try it that way. I would definitely start with Slint, or Fedora, but never with a lackluster distro like Accessible Coconut.

Discuss...

You can always subscribe to my posts through Email or Mastodon. Have a great day, and thanks for reading!

At least one person has wanted me to go more into detail about why I’ve left a simple, mainly left-leaning Mastodon instance, and gone over to a free-for-all free speech instance, where I’ve actually seen a pedophilia status. I hope it was a joke. Good thing there’s a block function. So, here, I’ll explain it, as best I can.

I am not always a happy person. Sometimes, I’m a very depressed person. Sometimes, I even dance around thoughts of suicide. It doesn’t last forever. But one thing that always helps is human interaction, like that on Mastodon. All the wonderful people I’ve met there and interacted with, all those wonderful, beautiful people. The people I felt safe with. The people that made me smile and laugh. And now…

A few months ago

Mayana (@mayana@dragonscave.space) is one of the moderators who run dragonscave.space, the instance I was on. A few months ago, work became a lot more hectic than usual. I’ve gotten more used to it now, so things are better, slightly. I was quite depressed during that time, so I posted about it. I didn’t really care if anyone replied or opened the content warning or not. All I cared about was that I was reading things from people I cared about, and communicating. Getting myself out of my head for once.

Mayana, seriously thinking she was helping, privately told me to see a therapist. Coming from an instance moderator especially, this made me feel as though I shouldn’t talk about my feelings. I still did, sometimes, kind of forgetting what had happened because of the lovely people and safe feelings. Mayana did apologize for this yesterday, but by that time, it barely meant anything. I still accept it, though.

Yesterday (the breaking point)

Now, we come to yesterday, when I woke up to a message from Mayana stating that I shouldn’t publicly talk about my sexual kinks. I won’t list them here, because I plan on posting this to my old instance to settle things and move on. But they’re pretty out there as far as Southern United States standards are concerned. And, apparently, for others too. I don’t know if my kink posts were actually seen as disturbing by others on the instance, or if that was a worry of the moderators. But that’s the reason they gave. It wasn’t in their rules, because they thought they wouldn’t need it. I’m glad, at least, that no future occupant of the instance will come in expecting to be able to show their complete self.

And this is where the cold, empty void of displaced feeling and betrayal and hurt happens. Imagine a place where you felt like you’ve finally found home. Imagine somewhere that, for the first time in your life, you may have found people who could stomach the sight of you; actually enjoying your company. Imagine settling in and getting familiar with the place. Then, well, someone gently lets you know that you cannot show your entire self here anymore. To enter this place again, you’ll have to leave parts of yourself at the door.

Because only the clean or interesting parts of me are worthy, I suppose. The blind person, the accessibility advocate, maybe even the food lover. But the darkness and the sexuality, well, no. That must stay out in the wild, in the untamed parts of the federation where the undesirables live.

Extreme free speech

So, that morning, a friend recommended a free speech instance. I had reservations at first, because I’m sure few instances federate with it. By the afternoon, though, I was wondering “why not? If the ‘civilized’ world doesn’t want me, why not go there? After all, if I don’t go to a free speech instance, if I just switch to another one, what else will people have a problem with? My ‘radical’ accessibility advocacy? My kinks? The fact that I don’t have a profile picture?” So, I decided to go with it. Who knows, though. Maybe I’ll just quit the federation altogether and just post here every once in a while. It would probably result in the same amount of human interaction anyway. Thankfully, I’m in a few Telegram groups with other blind people who actually enjoy talking about just about anything, so maybe my human interaction can come from there. But all the people I’ve met on the fediverse… I’ll miss them deeply. All the wonderful people. But I must move on. If I am not wanted in all of myself, why should I stay and be hurt further?

Discuss...

You can always subscribe to my posts through Email or Mastodon. Have a great day, and thanks for reading!

Microsoft announced Windows 11 a few weeks ago, and, from my searches at least, still doesn’t have an audio described version of the announcement. Update: There’s one now. Anyways, they also released a post on their Windows blog about how Windows 11 was going to be the most accessible, inclusive, amazing, delightful thing ever! So, I thought I’d analyze it heading by heading to try to figure out what’s fluff and what’s actual new stuff worthy of announcement.

Beyond possible, efficient and yes, delightful

So, they’re trying to reach what the CEO, in his book “Hit Refresh,” called the “delightful” experience he wanted to work towards. His gist was that Windows was pretty much required now, but he wanted to make it delightful. Well, the only user interface that is delightful to me is Emacspeak. MacOS and iOS come close. What makes them delightful is a few things: sound, quality speech, and speech parameter changes. I won’t go over all that here; my site has plenty on all that already. But it’s safe to say that Microsoft isn’t going near that anytime soon.

Instead of trying to offload the cognitive strain of parsing speech all day, they put even more on us. Microsoft Edge has “page loading. Loading complete.” Teams has similar textual descriptions of what’s going on. And while I appreciate knowing what’s going on, speech takes a second to happen, be heard, and be processed. Sound happens a lot quicker, and over time, a blind user can get pretty good at recognizing what’s going on. But whenever I brought this up to the VS Code team, they said something about not having the ability to add sounds, so they’d have to drag in some other dependency, and bring that up with the team, and all that. Well, they won’t become the most delightful editor for the blind any time soon. Just the easiest to use.

And, while this is partly the fault of screen reader developers who just won’t focus on sound or speech parameter changes for things like text formatting, Microsoft could be leading the way with Narrator. And yeah, they’ve got a few sounds, and their voices can change a little for text formatting, but their TTS is just too limited to make it really flexible and enjoyable. Instead of changing pitch, rate, and intonation, they change pitch, rate, and volume, and sometimes it’s jarring, like the volume changes. But there’s not really much else they can do with their current technology. I guess they’ll have to change the speech synthesis engine a bit, if they’re even able to. In the past six years, I’ve not seen any new, or better, first-party voices for US English on Windows. Sure, they have their online voices, which are rather good, but they haven’t shown any inclination to bring that quality to the Windows OneCore voices.

People fall asleep listening to Microsoft David. He’s boring and should not be the default voice. While this is anecdotal, I’ve heard quite a few complaints about it, and if you listened to him for a long time, you’d probably get bored too. This is seriously not a good look, or rather sound, for people who are newly blind and learning to use a computer without sight, for someone who doesn’t know that there are other voices, or even for Microsoft demonstrating Narrator to people who haven’t used it before. And while NVDA users can use a few other voices, the defaults should really be good enough. Apple has had the Alex voice for years. Over ten years, in fact. He’s articulate and can parse whole sentences and clauses at a time, allowing him to intone very close to the way humans speak, with context. He’s also not the most lively voice, but he sounds professional. And Alex is the default voice on MacOS. David, on Windows, just sounds bored. And so blind people, particularly those used to Siri and VoiceOver on iOS, just plain fall asleep. It’s nowhere near delightful.

Windows 11 is the most inclusively designed version of Windows

Okay, sure. Even though from what I’ve heard from everyone else, it’s just the next release of Windows 10. But sure, hype it up, Microsoft, and watch the users be disappointed when they figure out that, yeah, it’s the same old bullcrap. Bullcrap that works okay, yeah, but still bullcrap.

#+beginquote People who are blind, and everyone, can enjoy new sound schemes. Windows 11 includes delightful Windows start-up and other sounds, including different sounds for more accessible Light and Dark Themes. People with light sensitivity and people working for extended periods of time can enjoy beautiful color themes, including new Dark themes and reimagined High Contrast Themes. The new Contrast Themes include aesthetically pleasing, customizable color combinations that make apps and content easier to see. #+endquote

Okay, cool, new sounds. But are there more sounds? Are there sounds for animations? Are there sounds for when copying or other processes complete? Are there sounds that VS Code and other editors can use? Are there sounds for when auto-correct or completion suggestions appear? Are there sounds for when an app launches in the background, or a system dialog appears? Are there sounds for when windows flash to get users’ attention?

#+beginquote And, multiple sets of users can enjoy Windows Voice Typing, which uses state-of-the-art artificial intelligence to recognize speech, transcribe and automatically punctuate text. People with severe arthritis, repetitive stress injuries, cerebral palsy and other mobility related disabilities, learning differences including with severe spelling disabilities, language learners and people that prefer to write with their voice can all enjoy Voice Typing. #+endquote

Um, yeah, this has been on Windows for years. Windows + H. I know. I get it.

#+beginquote …design and user experience. It is modern, fresh, clean and beautiful. #+endquote

Okay, but is it fresh, clean, and beautiful for screen readers? Are there background sounds to help us focus, or maybe support for making graphs audible for blind people, or support for describing images offline? Oh wait, wrong OS, haha. Funny how Apple’s OSes are more modern when it comes to accessibility than Microsoft’s.

Windows accessibility features are easier to find and use

Okay, this whole section has been talked about before, because it’s no different from the latest Windows Insider build. Always note that if companies have to fill blog posts with stuff they’ve had for months or a year now, it means they really, really don’t have anything new to show, or say. They just talk because not doing so would hurt them even more. Contrast this with Apple’s blog post on Global Accessibility Awareness Day, where everything they talked about was new or majorly improved. And all Microsoft did that day was “listen”. There’s a point where listening has gathered enough data, and it’s time to act! Microsoft passed that point long ago.

#+beginquote Importantly, more than improving existing accessibility features, introducing new features and making users’ preferred assistive technology compatible with Windows 11, we are making accessibility features easier to find and use. You gave us feedback that the purpose of the “Ease of Access” Settings and icon was unclear. And you said that you expected to find “Accessibility” settings. We listened and we changed Windows. We rebranded Ease of Access Settings to Accessibility and introduced a new accessibility “human” icon. We redesigned the Accessibility Settings to make them easier to use. And of course, Accessibility features are available in the out of box experience and on the Log on and Lock screens so that users can independently setup and use their devices, e.g., with Narrator. #+endquote

So, the most important thing they’ve done this year is what they’ve already done. Got it. Oh and they changed Windows. Just for us guys. They did all that hard work of changing a name and redoing an icon, just for us! Oh so cringeworthy. This “courage” thing is getting out of hand. Also, if changing Windows is so hard, maybe it’s time to talk to the manager. Seriously. If it’s so hard to do your job that changing a label and icon is hard work, there’s something seriously wrong, and I almost feel bad for the Windows Accessibility team now.

Windows accessibility just works in more scenarios

#+beginquote Windows 11 is a significant step towards a future in which accessibility “just works,” without costly plug-ins or time-consuming work by Information Technology administrators. With Windows 10, we made it possible for assistive technologies to work with secure applications, like Word, in Windows Defender Application Guard (WDAG). With Windows 11, we made it possible for both Microsoft and partner assistive technologies to work with applications like Outlook hosted in the cloud… #+endquote

Okay, so, on Twitter, Joseph Lee has complained that the Windows UI team isn’t writing proper code to let screen readers read and interact with apps in Windows 11’s Insider builds. So right there, we’re still going to need Windows App Essentials, an NVDA add-on that makes Windows 11 a lot easier to use. This add-on is mostly for the first-party apps, like Weather and Calculator. So, um, what’s all this about again? Nothing seems to be new. We will still need “costly” addons and plugins and junk, because I don’t see Microsoft fixing those UI issues by release. System admins, keep that list of NVDA addons around, because they’ll still be needed in Windows 11.

#+beginquote …Remote Application Integrated Locally (RAIL) using Narrator. While that may sound like a lot of jargon to most people, the impact is significant. People who are blind will have access to applications like Office hosted in Azure when they need it. #+endquote

Yeah because people with disabilities are dumb and can’t understand tech speak. Sure. Okay. Keep dumbing us down, Microsoft. We really enjoy the slap in the face. Just explain the terms, like RAIL. With a quick Google search, it looks like Azure supports Ruby on Rails, so, I guess that’s what it is. Which doesn’t make much sense because Rails makes web apps, from what I understand. Ah well. Keep lording your tech knowledge over us, oh great Elites at Microsoft.

What I want to see is Electron apps getting OS-level support for accessibility, so that VS Code doesn’t have to feel like a web app, because it shouldn’t feel like that on Microsoft’s own OS.

Now, being able to host Office on a server and have Narrator, and hopefully other screen readers (because Narrator is still not good enough), support it, is nice. But that’s not really a user-facing feature. Users probably won’t know Word is hosted on a server.

#+beginquote Windows 11 will also support Linux GUI apps like gedit through the Windows Subsystem for Linux (WSL) on devices that meet the app system requirements. And, we enabled these experiences to be accessible. For example, people who are blind can use Windows with supported screen readers within WSL. In some cases, the assistive technology experience is seamless. For example, Color Filters, “just work.” Importantly, the WSL team prioritized accessibility from the start and committed to enable accessible experiences at launch. They are excited to share more with Insiders and to get feedback to continue to refine the usability of their experiences. #+endquote

In some cases… Wanna elaborate a bit, Microsoft? Will I be able to use gedit with a screen reader? Or Kate? Or Emacs? I have gotten Emacs with Emacspeak working on WSLg in Windows Insider builds. But it’s too sluggish to be used productively. So yeah, if that’s the same experience as using a screen reader with it, I don’t see myself using it much, if at all.

#+beginquote …experiences we introduced last week like our partnership with Amazon to bring Android apps to Windows in the coming months. #+endquote

Okay, well I’m waiting. I suspect they’ll use something similar to what they did with the Your Phone app: just pipe accessibility events through to the screen reader, via the title bar, I think. That’ll be okay, I guess, but no sound feedback would mean the experience isn’t quite up to TalkBack standards, as low as those are.

Modern accessibility platform is great for the assistive technology ecosystem

#+beginquote …closely with assistive technology industry leaders to co-engineer what we call the “modern accessibility platform.” Windows 11 delivers a platform that enables more responsive experiences and more agile development, including access to application data without requiring changes to Windows. #+endquote

I’m not going to pretend to understand that last bit, but if the UI problems found by Joseph Lee are any indication, a lot more has been broken than has been fixed or added. Also, which assistive technology industry leaders? And what biases do they have?

#+beginquote We embraced feedback from industry partners that we need to make assistive technology more responsive by design. We embraced the design constraints of making local assistive technology like Narrator “just work” with cloud hosted apps over a network. We invented and co-engineered new Application Programming Interfaces (APIs) to do both; to improve the communication between assistive technologies like Narrator and applications like Outlook that significantly improve Narrator responsiveness in some scenarios. The result is that Narrator feels more responsive and works over a network with cloud-hosted apps. #+endquote

I, as a user, don’t care about cloud-hosted apps. Office may at some point become a cloud-hosted app, and that’s what they may be preparing for, but I don’t care about that. Responsiveness is cool and good, but NVDA is very responsive, and some people still fall asleep using it. Why? Because it sounds boring! The voices in Windows suck. No audible animations or anything to make Windows delightful.

#+beginquote We also embraced feedback from industry partners that we need to increase assistive technology and application developer agility to increase the pace of innovation and user experience improvements. We made it possible for application developers, like Microsoft Office, to expose data programmatically without requiring Windows updates. With Windows 11, application developers will be able to implement UI Automation custom extensions, including custom properties, patterns and annotations that can be consumed by assistive technologies. For users, this means we can develop usability and other improvements at the speed of apps. #+endquote

At the speed of apps. That’s pure marketing crap. A lot is said in this article that is pure marketing, and not measurable fact. I want real, factual updates, not this. And the fact that they don’t provide that is a hint that they have nothing to provide. Now, having “custom” roles and states and such is nice for developers who have to reinvent the wheel and the atoms that make up that wheel, so maybe new applications have a chance of being accessible. But accessibility won’t happen with developers unless it’s in their face. They probably won’t know about these abilities, or even care in many cases.

Try Windows 11 and give us feedback

I’ve read feedback from those who have tried the Windows 11 Preview. I myself can’t try it because my machine has no TPM chip, and I don’t feel like being rolled back to Windows 10 when 11 is released. The feedback I’ve gotten so far from others is, well, very little, actually. From what I’ve heard, it’s still just Windows 10.

Conclusion

So, why should I even care about Windows 11? Not much is new, changed, or fixed for accessibility, as this article full of many empty words shows. Six years of development, and the Mail app still has that annoying bug of expanding threads whenever keyboard focus lands on them, instead of waiting for the user to manually expand them. The Reply box still doesn’t alert screen readers that it’s opened, so the screen reader thinks it’s still in the message being replied to, not the reply edit field. The Microsoft voices still sound pretty bad, even worse than Google’s offline TTS now, and that’s pretty bad.

Will any of this change? I doubt it. I’ve lost a lot of confidence in Microsoft, first because of their do-nothing stance on Global Accessibility Awareness Day, then their event without audio description, which Apple did perfectly, and now this article which tells us very little, and is almost a slap in the face when it talks about Windows being “delightful” because really, it’s not, and it won’t change substantially enough before release to be so.

Discuss...

You can always subscribe to my posts through Email or Mastodon. Have a great day, and thanks for reading!