Writing with Sound

I sense that there is a change happening in my writing, because there is a change in how I write. I used to write, like most people, silently. My eyes and fingers worked together to lay down words on the page.

When you compose with your eyes, you read over what you’ve written with your internal voice. You supply all of the missing elements of speech: tone, emphasis, pause, and all the other things that add texture and life to the words on the page.

When I used to write with my eyes, I would become so familiar with the way I read a piece of text in my head that I could not imagine it any other way. I never considered that those invisible auditory elements would not be immediately evident to any other reader. At least, not until I came back to a piece of writing a few days later, when I had forgotten the finer details, the shape and flow of each individual sentence. Then I would inevitably puzzle over a phrase or paragraph until I realized how I had meant it to be read. I found these pitfalls lurking in the writing of others, too—places where the emphasis or tone was important but not apparent, waiting like rocks in tall grass to trip up unsuspecting readers.

Now I write with VoiceOver. It is Apple’s main accessibility feature, which gives immediate audio feedback for every character, word and sentence that I write, and reads me paragraphs so I can remind myself of their flow and argument. It is a multi-sensory experience, since I still see the words appearing on my screen, sometimes zoomed in close enough to read them but most often not. I used to find the constant wash of letters and words distracting and intrusive, but in the few weeks since I’ve started writing this way exclusively, it has become natural and I feel adrift when it is turned off.

When VoiceOver reads every word, line, sentence, and paragraph, it supplies its own intonation, emphasis, and rhythm. Computer voices today are not the abrasive robot voices of the eighties and nineties. They are getting closer and closer to the sound of natural speech. No one would mistake them for human, but developers are focused on improving their realism and fluency in reading long passages. So the computer adds pauses, shifts its tone, and inflects words up or down based on how it interprets the context. All those choices I used to make in composing, usually without even thinking about it, are now made by the computer. The computer becomes a controlling voice in my writing, because if it reads something awkwardly, I often change the text.

Of course, I don’t always cede to the computer. Sometimes it is obviously wrong (as when it mispronounces a homograph, reading “read” as red instead of reed) or its interpretation is clumsily rule-bound (as when it assumes “Mr.” marks the end of a sentence). But if the computer’s reading is misleading, ambiguous, or simply awkward, I will often rewrite the passage. I just can’t bear to hear the voice stumble over the same sentence dozens of times as I write and edit, so I alter the text to accommodate the computer’s flow.

The big question is, how will this affect my writing? Will it become more stilted or robotic as I adapt to a computerized interpretation of natural language? I hope not. My hope is that the instant feedback will remind me to consider the auditory dimensions of language more carefully than I did before. VoiceOver, imperfect as it is, is an audience that is present and interactive at every stage of my writing process, letting me know what I’ve written and at least one way it can be read. If it makes a mistake, there’s a chance another reader would also have made that mistake. VoiceOver was developed by people, after all, and those people defined the interpretive choices it makes.

VoiceOver is changing my writing, but how remains to be seen. I guess I’ll just have to wait for you, my human readers, to let me know.

 

Hebrew Braille: First Impressions

An image of my twenty-volume Hebrew Bible in Braille, sitting on my bookshelf.

I finally took my first stab at reading a second language in Braille.

My twenty-volume Bible in Hebrew Braille has been sitting around for five months, ever since Jewish Braille International graciously sent it to me, free of charge. This particular copy is used. It once belonged to a certain Nancy Ellen Jaslow, presented to her “on the wonderful occasion of her Bat Mitzvah, October 11, 1963.” So thank you, Ms. Jaslow, for your Bible. I hope I will put it to good use.

I cracked it this weekend and read through the introductory material. The project of creating a Braille system for Hebrew and transcribing the Bible was conducted by a team of blind and sighted Jewish rabbis and scholars from New York, London, and Vienna. They began in the early 1930s and finally published in 1954, hindered by “the stringencies of the time,” as the introduction so euphemistically admits. It’s not a scholarly edition of the text, but I was impressed to see that well-known biblical scholars like H. L. Ginsberg and Theodore Gaster had reviewed the text and notes.

On Monday morning, I perused the key to the text and began to read. At this time, I have read exactly one page of Hebrew in Braille. Since some of you have asked, I thought I would share some of my first impressions here.

First Thing: What Is It?

All Braille, everywhere and in every language, is made up of cells, each consisting of six or eight dots arranged in two columns, like so:

⠿   ⣿

It has to be embossed very precisely and uniformly; there are no fonts or scripts or cursive in Braille. Braille already pushes the fingers to their perceptive limits, and there is no room for fanciful embellishments. Eight-dot Braille is mostly reserved for musical and mathematical notation, while every language that I know of uses six-dot cells.

Six dots allow for sixty-three different combinations of dots, not counting the blank cell. Every language has the exact same stock of cells to choose from, and each language gets to choose how it will use those cells. Since English has only 26 letters, it uses the rest of the cells to represent punctuation, common letter combinations, or whole words. Chinese, which has thousands of characters, has to get more creative: it uses combinations of two or three cells to represent each character. Hebrew is like English in that it has fewer than 63 letters in its alphabet: 22 consonants (5 of which have a second form that appears at the end of words) and 15 or so vowels. This means one cell can be used to represent each letter or vowel, and there will still be some left over for punctuation.
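For the curious, here is that arithmetic spelled out. This is just a quick illustration in Python of where the sixty-three comes from; it has nothing to do with any real Braille table.

```python
from itertools import product

# Each of the six dots is either raised or flat, so a cell is a six-item
# pattern of 0s and 1s: 2**6 = 64 patterns, minus the all-flat blank cell.
cells = [dots for dots in product((0, 1), repeat=6) if any(dots)]
print(len(cells))  # 63

# Every language draws on this same stock of 63 cells and simply assigns
# its own meanings to them: letters, vowels, punctuation, contractions.
```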

But regardless of how a language uses Braille, it’s still just combinations of those same 63 cells. So no matter how different two languages are, and no matter how different their written scripts look, in Braille the cells look exactly the same, and lines of text look very similar.

⠠⠓ ⠁ ⠝⠊⠉⠑ ⠐⠙

⠚⠪⠍⠂ ⠝⠊⠋⠄⠇⠣⠁

See? One of the lines above is Hebrew, the other English. Can you tell which is which? The first line says “Have a nice day” in English, the second says יום נפלא “have a wonderful day” in Hebrew.

Before I started learning the Hebrew Braille system, I worried that I would sometimes not know what language I was reading in. No one would ever mistake a page of printed Hebrew for English, because the scripts are just too different. But since the Braille script is universal, and reading it with fingers doesn’t allow for that same full-page first impression you get with printed text, I thought sometimes I might get really confused for a while.

It turns out this is not a problem. It could be confusing for one letter, maybe two, but then it becomes completely incomprehensible. If I tried to read the Hebrew sentence above as English, it would be “jowm, nif’lgha”—no confusion there!

I guess it’s like looking at a page of German or French. They use the same letters as English, but you immediately know that it’s not English.

So, one less thing to worry about.

Second Thing: How Does It Compare to Reading Printed Hebrew?

I knew that reading Hebrew in Braille would be a different experience from reading it printed on a page or written on a manuscript. It’s written from left to right, like English, so some people have asked me if it’s more like reading Hebrew transliterated into English characters. So far, I would say it’s not like reading transliteration or Hebrew script. It’s like reading Hebrew in Braille.

Classical Hebrew, the Hebrew of the Bible, was originally written with only consonants. This is a fine way of writing for people who grew up speaking the language, but once it fell out of everyday use, readers needed help remembering proper pronunciation. Scribes and copyists added in vowels and other pronunciation aids, in the form of small dots and marks surrounding the consonants. Now when you see Hebrew, it looks like this:

וְלֹא־לְמַרְאֵה עֵינָיו יִשְׁפּוֹט

And transliterated Hebrew looks like this:

wᵉlōʾ lᵉmarʾēh ʿênāyw yišpôṭ

Both Hebrew script and transliteration include marks above and below the letter: vowel points in Hebrew and diacritical marks in transliteration. In Braille, it is impossible to modify a letter by placing something above or below it. Everything has to be linear. Each of those marks needs to be represented by a character that either precedes or (more often) follows the letter it modifies.

This has a couple of effects. For one, it somewhat obscures the similarities between related vowels. One example is that of holem and holem waw (the ō and ô in the transliteration above). These two vowels make the same sound and are interchangeable in the spelling of many Hebrew words. The transliteration and their writing in Hebrew script make this similarity apparent. In Braille, holem is ⠕ and holem waw is ⠪—two completely different cells. For those who know Hebrew, the same principle applies to shureq and qibbutz, hireq and hireq yod, and the hatef vowels.

The feel of reading Hebrew (pardon the pun) also changes, because the vowels don’t play second fiddle to the consonants the way they do in print. They are given equal weight on the page. Apart from making words feel longer, though, I’m not sure how this will affect my experience of reading Hebrew. Let me get back to reading and I’ll let you know.

And of course, until next time, “jowm, nif’lgha!”

The Value of Photographs

An image of my daughter at the kitchen table, grinning over her breakfast. She is in striped pajamas and has wild, messy bed-head.

I spent a long time looking at this photograph today. Many minutes, because I had to, and because it was worth it.

Kristin took it at breakfast this morning. I was sitting right there, and I was enjoying the moment, but seeing the photograph was a different experience entirely.

In general, I look at a lot fewer photographs these days. It takes me longer to make sense of what’s going on in them. The colors and lines just won’t resolve into recognizable objects the way they used to. Sometimes it takes a few seconds, or even a few minutes, before I realize who or what is in a picture.

But there’s a flip side to this, because I don’t see life as quickly as I used to either. It takes time for me to make sense of what I see, for my brain to construct an image out of the faulty and partial information my eyes pass along. And life doesn’t always stick around waiting for me to make sense of it. 

My daughter is a toddler right now, and she doesn’t stick around long enough for almost anything. Sometimes, if the light is just right and at just the right angle, and she moves in just the right way so she’s framed in it perfectly, and if my eyes feel like behaving themselves at just the right moment, I get this clear, fleeting glimpse of her face, and I get to see just how beautiful she is. These moments stick in my mind, but they are rare, and all too brief. I miss so many details so much of the time.

But when Kristin takes a photo like this—a crystal-clear, stunning capture of a living moment—the world slows down. It stops, allowing me all the time I need to pore over the scene, working out and appreciating every feature and every nuance.

I know that a day will come when I won’t be able to see her this way, to know her face in this much detail, even from photographs. And I know that some day after that, I won’t know what she looks like at all.

And I will miss it.

It won’t affect my love for her, or my care for her. I will still play with her and laugh with her and teach her and share life with her. I will know a million things about her that are more important than her physical appearance.

But I will still miss it.

I’ll miss that shining grin and those sparkling blue eyes, those looks of joy, inquisitiveness, mischief, and wonder.

So for now, I will treasure the photographs, and I will gladly take all the time I need to etch every detail into my mind and into my memory.

My Quest for the Perfect Word Processor: Act Two

Photograph of a winding path through a dark forest. This is a quest, after all.

In Act One of this epic tale, our hero had fallen on dark days. Forced away from Mellel, his comfortable word-processing home, he began to wander the land seeking new possibilities and brighter horizons.

Now we see him revisiting familiar territory. Microsoft Word for Mac 2011 is already installed on his machine, after all. But it too offers only disappointment. Hazardous to navigate and full of unmarked and unlabeled dangers, it is a VoiceOver nightmare. 

He considers other options: Pages, VoiceDream Writer. These are friendly and accessible, but nowhere near full-featured enough for a dissertation. He falls to using TextEdit—at least it works well with VoiceOver. Perhaps he will write his whole dissertation in plain text and typeset it with LaTeX. But of course this is absurd. Navigating a document as long as a dissertation in plain text would be next to impossible. Plus he would have to learn LaTeX, so…

And then at last, on the verge of despair, he finds hope. There is a new version of Microsoft Word for Mac, and it has been substantially rebuilt and reconfigured. Word has always had features galore, of course, and is capable of handling large projects like books and dissertations. In the new 2016 version, the development team has increased VoiceOver compatibility and improved support for Hebrew (as long as the Hebrew keyboard is used). 

Almost all of the buttons, tabs, and menus are clearly labeled for VoiceOver, and navigating the interface is relatively easy. Setting VoiceOver Hotspots for the ribbon and main text pane makes it even more painless. The only problem with this is that the Hotspots for the ribbon are document-specific, so if you have two documents open at the same time, you have to make sure you go to the correct ribbon. 

Navigating long documents can also be cumbersome. You can navigate by page or line, but it would be very useful to be able to navigate within your document structure. The VoiceOver rotor could come in handy here, connecting the headings menu to document headings and allowing users to skip back and forth that way.

The biggest bug in Word for Mac 2016 comes when documents get long and cover multiple pages. If you make changes to early pages in the document that affect later pages, VoiceOver can get confused about what it should be reading. When you use the “Read Line” or “Read Paragraph” commands, it will read the wrong line or paragraph, or start or stop too early. When this happens, closing and reopening the document solves the problem. It is not insurmountable, but it does get very tedious.

Track Changes and Comments—two critical tools in academic work—are also difficult to use, but these are acknowledged issues that the Word team is working to improve.

So our hero takes up this tool, imperfect though it is, and sets his hand to the work. But his vigilance remains constant, and from afar he hears rumours of a new kind of tool: a powerful writing suite with deep VoiceOver compatibility. Tune in next time, brave readers, as our hero encounters…the Scrivener.

 

(This epic post reviewed MS Word for Mac 2016, Version 15.24. Any subsequent improvements to accessibility in later versions are not covered.)

Connection and Collaboration

For the past several years, being a blind person in the academy has seemed like a very lonely and often discouraging road. I was able to find a few stories about blind professors online, but I had difficulty finding any in real life. Faculty and students at my university were supportive and willing to help, but no one (including myself!) was well acquainted with the particular problems of blindness or how to address them.

A couple weeks ago, however, in a crazy series of events, I suddenly began to find the blind academic camaraderie I’ve wanted. My wife Kristin, who is an interaction and usability designer, decided on a whim to attend an accessibility event in San Francisco, hosted by Cordelia McGee-Tubb. The event was great, and while she was there, she fell into conversation with Jennifer Sutton, an accessibility consultant who also happens to be something of an expert on Braille, and also happens to be blind (go see what she does on Twitter or on LinkedIn). Kristin took her card, and told me I should get in touch with her.

So I gave her a call, and I couldn’t be happier that I did. Jennifer was generous with her time and resources. In a phone call and a number of subsequent emails, she connected me to blind scholars and academics in a variety of fields—a professor of linguistics at Rice, an English professor at Berkeley. Through these connections, I found an email group of blind academics. There are smart blind people doing great academic work in practically every field imaginable—English, history, art, psychology, human-computer interaction, you get the idea. They share knowledge and advice freely, and I have already learned a great deal from this group.

Perhaps most important of all, she connected me with a small group of people who are working to increase access to biblical language study for people who are blind. They have transcribed the Hebrew Bible, New Testament, and some other original language documents into Braille. Now they are working on scholarly research tools, grammars, and other ancient language materials. 

Check out night-light.org for more information, and this blog post in particular to see the current state of the project. My skills at Braille pale in comparison to theirs, but I hope to contribute to this work as I improve.

It’s been exciting and slightly overwhelming to realize how many other talented blind people there are out in the world and in the academy. Those I have met are kind, generous, and resourceful. They are determined to succeed and eager to help others do the same. They have thought through and worked around many of the difficulties associated with academic work, and they are happy to give advice and encouragement. Of course, there will still be many challenges in my future, many problems left to solve, but the task feels a little lighter.

My Quest for the Perfect Word Processor: Act One

An image of a big, unlabeled red button.

“Button. You are currently on a button. To click this button, press Control-Option-Space.”

Uh oh. This is the sound of VoiceOver non-compliance, and the first time I heard it, my heart sank. I am still new at VoiceOver, but I was even newer then, just learning basic commands and navigation skills. I was testing the various apps I use on a regular basis, experimenting to see how I would use them when I could no longer use my sight. VoiceOver is the main accessibility feature on Mac—it identifies objects and reads text on the screen, and allows the user to control everything with the keyboard and trackpad. So what is the problem? It’s helpful to know you’re on a button, right? It is, but it would also be nice to know what the button does. Exploring sloppy apps like this is like breaking into a super-villain’s secret lair. There are buttons—oh so many buttons—but none of them are labelled. Does this button open a trap door to the dungeons, or order minions to bring coffee? Does this button save my file, or delete it?

What that button above should have said was something like “Save button. Save. You are currently on a button. To click this button, press Control-Option-Space.” See? Proper labelling makes everything so much clearer.

The problem app in this case—the app that made my heart sink—was Mellel, my favorite word processor. It is the word processor of choice in my field because it was developed by an Israeli team and handles right-to-left languages (Hebrew, Aramaic) without a hitch. It also includes a robust set of options for formatting, structuring, and managing citations in long documents like academic papers and dissertations. In short, it was the perfect tool while I had sight.

But the developers had not considered blind users and had not put in the effort to make Mellel VoiceOver compatible by labelling buttons and ensuring that the menus and palettes were navigable. It would not even read the text I had written back to me. Now I came face-to-face with the realization that I couldn’t use this familiar tool to write my dissertation. Worse, everything I’d written for the last eight years was inaccessible.

For now, I can muddle through. I can still see enough to spot-read, navigate the on-screen geography of buttons and banners, and zoom in to read the smaller text. But this is getting harder, and it certainly won’t last forever. I need a tool that will still work when I can no longer see at all. So I need a word processor that is

  • VoiceOver Compatible
  • Robust enough to handle a dissertation-sized project
  • Capable of dealing with all the languages I use

That may be a tall order. We’ll see. In upcoming posts, I’ll talk about some of my experiments and experiences with other word processors. As it turns out, I’ve just found one that I think is going to work. Stay tuned for my review, and in the meantime, feel free to share with me any recommendations for accessible word processors that have worked for you!

My Vision in Experiential Terms

(This is part two of a series on my vision at the start of this blog. To read the first part, see My Vision in Medical Terms.)

Image of an old TV set. It is displaying fireworks. The color is muted and the picture is grainy. A metaphor for my vision.

In the last post, I described how my photoreceptor cells are dying one by one and in turn taking with them what remains of my vision. I wrote that photoreceptors are like the “eye’s pixels,” and some of you may have imagined that a retina full of dead photoreceptors looks like an iPhone screen full of dead pixels. It would be nice if that were the case, but the truth is much more bizarre.

The Low-Res Retina

The pixel analogy holds in one important respect: fewer cones and rods mean less detail, just as fewer pixels on a screen do. In a sense, I was born with lower resolution vision than most. Think of an old, first-generation color TV from the 1950s. Broadcast resolution was low, and the images that showed up on these sets were blurry and indistinct. In this analogy, people with normal vision are like the latest-generation iPad, which has about nine times as many pixels as a broadcast TV. The difference is stark.

Comparing retinal resolution is the concept behind the common method of expressing visual acuity as “20/xx.” This method compares the distances at which two people can see the same object. The first “20” represents the patient: more precisely, the text the patient can read from 20 feet away. The second number is the distance from which a normally-sighted person can read the same text. Since people with higher-resolution vision can see something clearly from farther away, the second number is usually higher for RP patients.

The last time I had my eyes checked, about a year ago, my right eye (which is my better eye) had an acuity of 20/250, meaning I can read at twenty feet what a normally-sighted person can see at 250 feet. To put this in more concrete terms, if there were a sign on the goal line of an American football field, and I could read it from the 10-yard line, a normally-sighted person could still read it while sitting in the stands behind the opposite goal post. Keep in mind that this is with glasses.
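(If you want to check that football-field arithmetic, here is my own back-of-the-envelope sketch in Python. The distances are just the ones from the example above, not a clinical formula.)

```python
# 20/250 acuity: a normally-sighted person resolves the same detail from
# 250 / 20 = 12.5 times farther away than I can.
ratio = 250 / 20

# A sign on the goal line, read from the 10-yard line (10 yards = 30 feet).
my_distance_ft = 10 * 3
normal_distance_ft = my_distance_ft * ratio

print(normal_distance_ft)      # 375.0 feet
print(normal_distance_ft / 3)  # 125.0 yards: past the far goal post, into the stands
```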

Mind the Gaps

Poor resolution isn’t the only problem with my vision. There are also the gaps left by groups of dead photoreceptors. On a phone screen, dead pixels don’t give up their spot. They leave little black squares, constant reminders that there is a small part of the image you cannot see. The brain, on the other hand, doesn’t much like admitting there are gaps in your vision. It tries to fill in these gaps as best it can to create a cohesive picture of the world.

Have you ever tried that experiment where you use your finger to find your blind spot? You close your left eye and stare at a dot or letter or something on a piece of paper. Then you put a finger on the dot and slowly move it to the right, while keeping your eye on the dot. When your finger gets an inch or two away from the dot, it disappears!

This happens because everyone’s retinas have a barren spot with no photoreceptors where the optic nerve leaves the back of the eye. Your brain doesn’t like to let on that it’s missing anything, though, so usually it fills in the hole with the image from your other eye. Even when one eye is closed, you don’t see a big blank space. Your vision seems continuous; your finger is just missing.

Now imagine that, instead of one blind spot in each eye, you had a few dozen spread out across your visual field. Some are big, some are small, but your brain tries to hide them all.

I have a lot of gaps in my vision. In fact, my last visual fields exam said I had “central vision with peripheral islands,” meaning it might be more gap than vision. Most of these, my brain tries to hide from me. When I try to read long words with my eyes, some letters will often just not be there. I’ll have to look again or move my eyes slightly to see what I’m missing. The gaps across the bottom of my visual field explain my antagonistic relationship with “Wet Floor” signs, which have caused me much more grief than wet floors themselves ever have.

One hole has finally developed that my brain cannot hide. It is located near the middle of my vision in my left eye, probably spreading out from around the optic nerve. It looks like a smudge of something on my glasses, with blurry edges that fade into clearer vision. It is always there, but I think my brain still tries to hide it away. In the daytime it is white or light grey, but at night it goes dark. It’s not very good camouflage, but it still tries.

The Fireworks

A screen with dying pixels fades into darkness, and many people think of blindness the same way, as vision fading into darkness. The Bard himself described “looking on darkness which the blind do see.”

Sorry, Shakespeare, but going blind from RP is a constant chaos of color and light.

I mentioned some alarming visual phenomena in the last post, which I started to notice when I was working at an archaeological site in Israel. After a full day of digging in the blazing sun, we would go back to the air-conditioned hotel to wash up and eat. In the dim, cool room I would notice intricate, tightly-packed patterns that flashed and twinkled in my left eye. Sometimes it would look like the pattern of bumps on a basketball, sometimes like raindrops on glass. Other times it looked like a repeating tile mosaic. In every form, the pattern would flicker and flash brightly and gradually fade.

To this day, my left eye provides a near-constant psychedelic light show. The right has joined in, just a bit tamer. The patterns are still there, joined by other flashes, sparks, and twinkles. Electric whips crack across my vision. Fireworks blaze and pop, sometimes so large they blot out everything else. My entire field of vision strobes very quickly from bright red to bright blue, like I’m staring at flashing police lights from about six inches away. It must be a real party in there, because someone even turned on the smoke machine. Mist covers everything some days, like a thick morning fog.

The colors are brilliant, even beautiful, but of course it’s not a party. It is the anguished thrashing of half-dead photoreceptors, the aimless firing of bored and bereft neurons, the brain trying to synthesize an image when the signal is lost or senseless. It changes every day, every hour, every minute. I don’t know today what the world will look like tomorrow, this morning what it will look like in the afternoon, at 10 AM what it will look like at 10:05.

What I see—what I can and can’t see—is impossible to predict, difficult to communicate, and risky to trust. Vision is still useful, and many days I am grateful for what I have left, but it is not as useful as it once was, and far less dependable. I can no longer rely on my old ways of doing things. I have to change and adapt to keep moving forward through the mist and the fireworks.

Appendix A: Videos!

As I was writing this, Kristin reminded me of a video I had shown her, which she found really helpful in understanding what RP looks like.

Aaron Morse made this video called “How I See the World with my RP Eyes” (bonus: cute baby!). His tunnel vision is a more classic pattern of RP than I have, but it gives a good general idea. Another YouTuber simulated some of the lights he sees in “My RP Visual Flashes.” I found it interesting that he too sees tightly-packed repeating patterns. Unlike mine, his are very angular.

(Photo credit: King-of-Herrings)

My Vision in Medical Terms

(This is Part One in a series of posts about the state of my vision at the start of this blog. The next post will deal with my personal experience of my vision. There’s a lot of technical medical information in this post, but I’ll try not to make it too boring.)

I have had very poor eyesight since birth—poor enough that I was deemed “low vision.” I got my first pair of glasses before I was two. No one could identify exactly what was wrong with my eyes, though, and not for lack of trying. My parents carted me around to practically every expert in the state of Colorado, and none of them could provide an answer. To all appearances, my eyes were perfect; I just couldn’t see well. The closest approximation of the truth was probably given by Dr. Alexander, my favorite childhood optometrist, who said “You may just have been born with fewer cones and rods [retinal photoreceptors] than other people.”

For the first thirty years of my life, this was about all that could be said about it. My vision was poor, but it seemed at least to be stable. In 2012, though, while I was working at an archaeological excavation in Israel, I started to notice strange visual phenomena that prompted me to visit an ophthalmologist when I returned home to Massachusetts. This ophthalmologist gave me a preliminary diagnosis of Retinitis Pigmentosa and referred me to the Massachusetts Eye and Ear Infirmary for confirmation.

The doctors at MEEI took high-res photographs of my retinas and subjected me to a battery of tests, including color discrimination, visual fields, dark adaptation, and the notorious ERG. The ERG, or electroretinogram, measures the electrical activity of your retina (like an EKG for your eyeball). You put on a giant contact lens with electrodes implanted in it and stare at a strobe light for what feels like an hour (probably five minutes). People with healthy eyes produce a graph with a nice wave of ups and downs. Mine? Flatline.

A flat ERG response is the hallmark of advanced Retinitis Pigmentosa. There’s no measurable electrical activity going on in the retina, but most of us can still see to some extent. As the doctor told me, “it’s like there are people who don’t register a pulse, but they’re still up and walking around.”

So what is this strange form of retinal zombiism known as Retinitis Pigmentosa? There’s a boilerplate description that appears with a little variation across the internet, and goes a little something like this:

Retinitis Pigmentosa (RP) refers to a group of inherited diseases causing retinal degeneration. The cell-rich retina lines the back inside wall of the eye. It is responsible for capturing images from the visual field. People with RP experience a gradual decline in their vision because photoreceptor cells (rods and cones) die.

(I took this version from the article at http://www.blindness.org/retinitis-pigmentosa — the rest of the article is quite good also, and the Foundation Fighting Blindness is an excellent resource for learning about retinal disorders and new research.)

Photoreceptor death explains the diminished ERG response. Fewer cells mean less electrical activity and a weaker signal traveling from the eyes to the brain. You can imagine these photoreceptor cells as the eyes’ “pixels”—the fewer there are, the less detail there is in the image. Healthy eyes contain over 120 million photoreceptors—way too many for the optic nerve and brain to handle at once, so much of the signal is just discarded. For people with RP, vision loss only begins once the number of photoreceptors falls below the maximum number the brain can interpret at one time.

A hand-drawn graph showing the loss of photoreceptors over time in RP, as it relates to the capacity of the optic nerve and brain. There is a horizontal line representing the amount of input the optic nerve and brain can handle at one time. A second line starts in the upper left and slopes downward. It crosses the horizontal line about halfway and ends in the lower right. This line represents the number of photoreceptors that someone with RP has, which decrease across their lifespan.

Graph of photoreceptor loss in RP

This explanation is a bit simplistic, since degradation does not occur evenly across the visual field. Most people with RP develop dead spots in some areas of their vision, while other areas remain relatively clear for much longer. In the majority of patients, degeneration happens from the outside of the retina in. They lose their peripheral vision and night vision first, and only lose their central vision in the final stage of the disease.

Since my vision has always been poor, I was probably born with a reduced number of photoreceptors across the board, as Dr. Alexander surmised so many years ago. That means my graph might look more like this:

A second hand-drawn graph representing my personal case. The same horizontal line is there representing the brain/optic nerve capacity, but this time the line representing the number of photoreceptors I have starts below the line on the left and slopes gradually downward toward the bottom right.

Graph of how I imagine my photoreceptor loss

Now more of my photoreceptors are dying, but not in the usual pattern. I am losing cells from the center of my retinas outward. My detailed central vision is going first, and the periphery will follow.

It may seem strange that I stray so far from the norm, but this leads to an important point: RP is not one disease, but many.

When I first started seeing that standard description of RP four years ago, it did not mention a “group” of inherited diseases, but in the past several years it has become very clear just how many forms RP can take. It can strike at any age, progress at any speed, and carry with it a number of other symptoms. Some of the nastier varieties, like Usher’s Syndrome, cause deafness as well as blindness. They all culminate in the death of photoreceptors, but can differ substantially in how long this takes and the path they take to get there.

The variety in RP patterns probably results from the variety of its causes. It is commonly considered an inherited genetic disorder, but the details get pretty complicated. Genetic testing of RP patients has linked the disease to mutations in more than 250 genes at this time, and that number keeps growing. Certain mutations are quite common and well-understood, while others are rare and more tenuously linked to disease symptoms.

With this many genetic links identified, it is a startling fact that the genetic cause of RP cannot be determined in about 45% of patients. Almost half of patients’ RP is caused by either an unidentified genetic factor or something else entirely. This is my lot—despite the cutting-edge genetic testing I received from the Ocular Genomics Institute in Massachusetts, a genetic cause for my RP has not been found. 

It is possible that an as-yet-unidentified gene mutation or combination of gene mutations is causing my vision loss, but it could also be something completely unrelated to genes. No one else in my extended family has suffered from any similar malady, which points away from an inherited cause. It could have been some accident of development in the womb, a vascular event that starved my retinas of oxygen or nutrients at a critical moment. At this point, who knows?

So have I learned anything from being diagnosed with RP? I have confirmed the suspicion that I don’t have enough photoreceptors, and have learned that more are dying every day. I’ve learned that this condition is progressive, and my vision will continue to deteriorate. I have passed the boundary into legal blindness, and without medical treatment I will end up totally blind at some point in the future. This won’t happen right away—my last photoreceptor may not blink out for another twenty or thirty years—but with the current state of technology, it is inevitable.

So it’s a good thing technology changes. This diagnosis has also clued me in to a world of exciting research on retinal deterioration and rejuvenation. There is nothing on the market right now that can slow or stop what is happening in my eyes, but research teams are working on treatments and cures using gene therapy and stem cells. All of these are still at least 5 to 10 years away from widespread deployment, but clinical trials are going strong. Hope may not be right around the corner, but it is on its way.

 

(Note: The information in this post comes from my research on the topic of RP and from personal conversations with retinal specialists. I am not an expert in the field, so there may be a few inaccuracies. I alone am responsible for all of them, and I welcome corrections!)

C[ong]ratul[ation]s!

Eric reading Braille

“Congratulations, you have completed the study of contracted Braille!” said the dots to my fingers earlier this week. Or rather, “C[ong]ratul[ation]s, [you] [have] [com]plet[ed] [the] [st]udy [of] [con]tract[ed] brl!”

I’ve been studying Braille for a year. I learned Grade One quickly and easily enough — it’s what most people think of when they think of Braille, where each cell represents one letter or one punctuation mark. Grade Two, or contracted Braille, is another story. Various contractions are used to shorten common words or series of letters, so one or two cells can represent two, three, four or more letters. In the quote above, everything within brackets is contracted. There are dozens of these contractions, and many of the signs play multiple roles, depending on context. So Grade Two Braille took a little longer, due to its complexity and, well, life getting in the way.

I feel proud of this little milestone (and relieved that there are no more lists of brain-breaking contractions left to learn), but I also know I’ve got a lot of milestones left ahead of me.

I am slowly going blind, and slowly learning to be blind and work as a blind scholar. I am not at the very beginning, but neither am I anywhere near the end. I have a long path ahead of me as I gain the skills I need to conduct my research, finish my dissertation, and teach what I have learned.

I know all the contractions now, but I also know I need to speed up. I timed myself to see my current pace: sixteen minutes and twenty-five seconds for one page—just shy of fourteen words per minute. It’s not bad for a beginner, but I feel like a six-year-old. I want to fly through academic prose; instead, I’m struggling through the simple stories in my Braille primer.

So now I am shifting to work on speed and technique. “Elite” Braille readers usually read around 130–150 words per minute, and I’ve heard rumors that some have reached 400 words per minute. They use three fingers on each hand, reading with both hands. I have a lot of work ahead to master Braille, and that is just English Braille. I will probably end up using it for German, French, and Hebrew as well.

The state of my Braille is much like the state of my journey into blindness as a whole. I’ve made progress, but there is still a lot of work and learning to do. I’m starting this blog in the middle, not the beginning. I hope to make it a space to share the process—not only with Braille but with all the other strange adventures of blind scholarship: exploration, experimentation, collaboration, frustration, and hopefully a few moments of exhilaration. I’ll get into the nitty gritty of multi-lingual Braille reading, my quest for the perfect word processor, adventures with assistive technologies, and much, much more. I’ll also use the space to share more general thoughts on life, blindness, my research, and everything else besides.

Please read along and tell me what you think. Whether a lifelong friend, another blind person on a similar path, or just a curious stranger, I look forward to hearing from you!

(Addendum: As of January 1, 2016, Grade One and Grade Two Braille are outdated terms. The new Unified English Braille standard is now the most prevalent form of Braille, as it combines and streamlines literary and computer Braille codes. The primer I used to learn Braille used the old system. Those in the know may have noticed the [ation] contraction, which no longer exists in UEB.)