
Working Group Meeting Notes 2022


Notes from 2022 are kept here. Dividing notes by year will make them easier to edit. Previously the notes had gotten so long that editing was becoming an issue.

2022-12-13 eBraille Working Group Meeting Notes

13 December 2022

Present: Richard Orme, Avneesh Singh, Francisco J. Martínez Calvo, Ashley Nashleanas, Jennifer Dunnam, Susan Osterhaus, Dan Gardner, William Freeman, Manfred Muchenberger, John Ylioja, Danielle Montour, Marty McKenzie, James Bowden, Steve Noble (Pearson), Mischa and Marlies at SBS, Orbit Research, Sara Larkin, Basile Mignonneau (Association Valentin Haüy), Ka, Jen Goulden, Robert, Matt Garrish, Svetlana Vasilyeva, George Kerscher (He/Him, Missoula, Montana), Charles LaPierre (Benetech, San Jose, CA), Michael Hingson, Matthew Horspool (UK Association of Accessible Formats), Mike Paciello, Bert, Jenna Gorlewicz, Tkáčik Michal, Jennifer Sutton

Agenda

  1. Housekeeping
  • Please remember to say your name before speaking.
  • Notes for all meetings and for the tactile graphics taskforce are available on GitHub.
  2. Minimum Viable Product document
  3. Granularity of Use Cases

Discussion of Organizational Representation

Richard: We have been approached by some of the braille standards groups, which want to discuss how this initiative might relate to their work. He outlined a five-part way of working together:

  • Understand how this initiative will interact with braille-standards bodies/authorities. Examples are World Braille Council, BANA, International Council of English Braille, etc.
  • The approach is to develop the future braille and graphics file formats through a consensus approach.
  • Contributions from braille standards bodies are warmly welcomed; those who wish to participate will need to complete the application form on the DAISY website. If we're officially informed that someone is from a braille standards body, we will note in the meeting notes that they are participating in an official representative capacity.
  • On request, we will provide a written status report for the meetings and newsletters of these entities.
  • On request, we will field someone to the meetings of braille standards bodies to discuss the initiative and answer questions.

Francisco is representing the Spanish Braille Commission.

James Bowden mentioned that many participants are wearing multiple hats and that the contributions of hardware and software developers are as relevant as those from braille authorities. Will we indicate their representation?

Richard suggests we do that and put that info at the top of the meeting notes.

William said he would like to keep a running list so that he and Lara Kirwan can keep track of this information in the notes.

Matthew mentions we could include the organizational representations in a separate document on the website. He also mentions that we should ensure we reach out to braille organizations. Richard would like guidance on how to know who to invite; multiple people recommend asking Judy Dixon.

Jen talks about having more than one role. She indicates she is participating thanks to the support of the company she works for but that she is officially representing BANA, not her employer.

Richard asks why we would approach Judy. She is president of ICEB, has good relationships with English braille authorities, was involved with the World Braille Council (WBC), and has a "fantastic phonebook." (Kim, who has attended these meetings, is chairman of the WBC.)

It is agreed to put the attendees and their organizations in a separate document.

Minimum Viable Product (MVP) Document

Making Change Suggestions

William: This document is on GitHub. Its purpose is to determine the bare minimum needed for us to consider the specification complete—even if we add to it later. This helps us declare what our purpose/priorities are. He wrote a draft and then worked with people from DAISY and with Anja to refine it. Thanks to those who have already entered comments in the document. Does anyone want to add anything or ask questions about it?

Language Changes

George mentions the need to include something about the mechanism for changing the language, as is done in, e.g., EPUB or HTML documents.

William: Says it makes sense and would be easy to add something about language changes to the MVP. Right now, it includes only braille code changes and braille grade changes. There has been discussion about this topic on GitHub. William thinks the creation software would handle language changes through the same mechanism as code and grade changes.

Matthew asks if we really need a language change. If we already have code changes, is a language change necessary? The code change would already address the language change. William indicates he doesn't know enough about international braille codes; he asks if changing the code automatically changes the language or if there are instances where a separate language change would be needed. If not, it should still be possible to back-translate.

James from the UK is concerned about the emphasis on backtranslation, since this is a braille-first file format and some codes cannot be backtranslated with 100% accuracy. George says that people complain about the accuracy of automatic translations to braille, and he echoes James's concerns.

James Bowden says it makes sense to have this kind of markup as "advisory."

Michal says they have to deal with similar issues. When their Slovak braille contains another language, such as for names and streets, they either use Slovak letters or, if there is some longer text, switch to a different code.

Svetlana thinks that language changes are still important, as multi-language documents are used a lot. It's not complicated for them because theirs are legend-based languages, though sometimes it can get quite complex.

James Bowden says the braille file contains braille characters no matter the print script; what advantage is there to marking the language change in the braille, apart from backtranslation?

Manfred, who says he is not a braille expert, said that since we will be using Unicode the letters are less complicated. He still thinks that marking the language is important, especially if the book has more than one language in it.

Matthew Horspool is still not convinced and thinks the code switches would handle that. From a user's perspective, though, it would facilitate automatic cataloging; e.g., if a person wants the book in SEB, they would know right away if the braille is in UEB.

William mentions that this info can come in metadata. James concurs and reminds the group that this is advisory.

George says that we have the lang attribute in every markup language, so it "comes for free" as an attribute for that word, sentence, or paragraph. Then it's up to the reading system to do what is appropriate. He has settings on his machine to switch to a different language as needed. Other reading systems will be similar.
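As an illustration of George's point, here is a minimal sketch (not from the meeting; the fragment and element names are illustrative assumptions, since eBraille markup has not been defined) of how a reading system built on an XML/HTML-style format could pick up language changes from the standard xml:lang attribute:

```python
# Minimal sketch: extracting language changes from a hypothetical XHTML-style fragment.
import xml.etree.ElementTree as ET

fragment = '<p xml:lang="en">The French word <span xml:lang="fr">bonjour</span> means hello.</p>'

# The "xml" prefix is predeclared in XML, so xml:lang resolves to this namespace.
XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"

root = ET.fromstring(fragment)
for elem in root.iter():
    lang = elem.get(XML_LANG)
    if lang is not None:
        # A reading system could switch its TTS voice or braille table at this point.
        print(f"{elem.tag}: lang={lang!r}, text={(elem.text or '').strip()!r}")
```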

William asks about—for the MVP—the possibility of identifying language, code, and grade changes in markup without requiring them. Users/customers could demand them if they need to, but the specification would not require them.

Manfred: RE the MVP in general, and perhaps in this case specifically: Should we have a mechanism in place that does not prevent use cases from going further? While we don't have to have the language in there, we should not prevent our new standard from including it.

William clarifies that the group has not settled on whether we should allow these things but not require them, and he asks if this could be something that a group could require locally. Does that over-complicate things?

Dan Gardner: This could be optional, not required; this is how he does project-planning.

Richard: The MVP as a document could include something about language markup, but having it would not be a requirement in produced documents.

Avneesh: Clarifies that the purpose of the MVP is to pinpoint what this group should be working on. Later, we can discuss which features will be mandatory and which will be optional. Do we want to invest time in creating the content for 1) switching braille codes, 2) switching braille grade, and 3) switching language? Language will be free, but we'll need to work on braille code and grade changes.

William: Thinks it would be useful to include braille code and grade changes. These will be useful in many parts of the world, even if not everyone uses them. Otherwise, backtranslation will be much harder for everyone.

Dan asks if the MVP includes anything about providing the actual equivalent text so you don't have to rely on backtranslation.

William: Yes, there has been discussion about this, and there is a use case for things like text-to-speech (TTS) and interlined embossing.

Robert: Asks whether the spec indicates where this information is included in the file.

George points out that this is in the metadata. He then says that the more markup we allow (though not necessarily require), the better off we'll be in twenty years.

George asks whether, whenever there is a transition in the braille, like going to Nemeth or music braille, there is a standardized way to identify those transitions. William says that's true for UEB but not necessarily for other braille codes.

James concurs and says sometimes there are just layout differences that indicate braille code changes.

Richard reminds the group that we're discussing the MVP of the specification—not the file format that will be produced later. Braille code and grade changes are already in the MVP, and the group seems to be saying we should include language changes in the specification; further on we can decide whether these are required or optional in the files that are produced.

William names the categories currently in the MVP:

  • navigation,
  • semantic markup (SM), with list of different types of SM
  • formatting,
  • tactile graphics,
  • metadata—long section on this and what kinds of metadata we're including, and
  • encoding (indicates we are going to use Unicode/UTF-8)

If you have questions, notice omissions, etc., you can put comments in the MVP or discuss them now, especially if it's something controversial.

Type forms/Emphasis

James asks William to say more about type forms and emphasis—are they for showing/hiding text? Wouldn't we have to specify that as well as use indicators for these things? Why are we including them?

William says the purpose is primarily backtranslation; he doesn't think it will be necessary to include indicators for showing/hiding text. He gives examples from UEB where there's markup for type forms and emphasis, such as bold and script, and says it would be useful to have markup for these even though there are indicators in the braille to represent them.

James mentions that there are different ways italics can be shown in different languages and that's a lot to expect from readers.

Matthew: Is in favor of hiding the indicators. Asks James if he wants to hide the text between the indicators.

James: Not the text, just the markup. This is in one of the use cases.

Matthew: The indicators will be shown as braille dots, so do we need to say what the indicator is, or can we just say this is bold or this is italic or this is general, etc.? He was curious because he was wondering about the possibility of hiding these indicators. What are the things that one might want to hide or not hide? Matthew says you'd have just a generic indicator type and then separately localize them for each braille code.
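A sketch of the "generic type plus per-code localization" idea Matthew describes (this is an illustration only; the indicator cells below are placeholders, not actual UEB or other code indicators):

```python
# Hypothetical sketch: markup carries only a generic typeform name; each braille
# code supplies its own opening/closing indicator cells. Dot patterns are placeholders.
INDICATORS = {
    "UEB":   {"bold": ("⠘⠂", "⠘⠄"), "italic": ("⠨⠂", "⠨⠄")},
    "OTHER": {"bold": ("⠸⠸", "⠸⠄"), "italic": ("⠨⠨", "⠨⠄")},
}

def render_emphasis(code: str, typeform: str, cells: str) -> str:
    """Wrap braille cells in the code-specific indicators for a generic typeform."""
    start, end = INDICATORS[code][typeform]
    return f"{start}{cells}{end}"

# The same markup ("bold") renders with different indicators per braille code.
print(render_emphasis("UEB", "bold", "⠓⠑⠇⠇⠕"))
print(render_emphasis("OTHER", "bold", "⠓⠑⠇⠇⠕"))
```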

Robert doesn't think you'd need the indicators in braille when backtranslating the text and doesn't think you'd want the indicators in print.

James says that is not entirely true; often the print is in a different typeface and it is deliberately ignored in braille. Example: In print, most headings are bold, but in braille the bold is ignored since centering the heading is sufficient.

William was thinking that the markup would only indicate when emphasis was used in braille and not worry about times when it was suppressed.

Robert talks about how odd it would be to include markup for something that is not included in braille.

Jennifer wonders if we want the styles to do some of that—whether we can allow the styles to handle the backtranslation.

Matthew says he agrees. He says when he was a transcriber he would remove emphasis early in the transcription process. If he had to put this information into one of these new eBooks, he'd have to change his workflow substantially. He would not be in favor of including emphasis info that isn't currently in the braille already.

William says as a transcriber he wouldn't want this either. There aren't enough transcribers right now, and they already have a difficult job following all the rules.

Handling Further Discussion of the MVP

Manfred asks about what the preferred method is for discussing the MVP further.

William says you can edit directly, comment on the ticket, or email the mailing list.

Avneesh prefers commenting on GitHub so that this one issue will be in the same thread. If it's difficult, then send an email.

George says that on GitHub if you sign up for notifications, you'll get an email when someone makes a comment; if you reply to that email it will be inserted as another comment in the issue.

William suggests that when you want to reply to the email you select the entire email message, delete that message, and then draft your reply. Otherwise, the original email will be included in the comment on GitHub—there will be a lot of unnecessary text, which may prove problematic visually and with a screen reader.

How Software Reads Files/Formatting

Basile asks about use cases and mentions how, in the DAISY reader, files read differently depending on which table is used. Duxbury and LibLouis do not interpret the code the same way. Should we work on another table, or should we unify the existing tables RE these new codes (not the text—just the code)?

William asks how applicable this will be since this is a braille-first document and it will happen at document creation. Is this accurate? LibLouis doesn't do markup at all, right?

James asks if this is where the CSS comes in. The HTML markup behaves differently in, e.g., Duxbury and LibLouis.

William doesn't think LibLouis addresses style or markup.

James: LibLouis uses Unified Tactile Document Markup Language (UTDML) for markup.

William says that this is happening at document creation, so we shouldn't run into this issue. James is wondering if this is a CSS issue. Basile clarifies that a non-breaking space will get interpreted differently by LibLouis and Duxbury.

William said we will address this issue with a validator and exemplars, which can be opened by reading software. Assuming you can open these files, work with them, and meet these requirements, then you can say your reading software supports this file type.

Next Meetings

We will stick with our plan to meet the second Tuesday of every month, only discussing meeting dates when there's an exception. We will keep two meetings on your calendars at all times. The next two meetings are:

  • January 10, 2023, 10:00 a.m. ET
  • February 14, 2023, 10:00 a.m. ET

Use Case Granularity

See Avneesh's ticket about the MVP

Avneesh created an issue (#42) that makes one of the issues about the MVP more granular. Check that out on GitHub, and we can discuss that if there are questions.

William added an issue (#11) about opening files quickly. The overall purpose of the ticket is to discuss volume divisions, but "open files quickly" is a more granular use case of that subject. We can create separate tickets that go into more detail about different aspects/use cases that fall under the umbrella of volume divisions. More granular tickets could address having a spine; allowing reading systems to sort through the files, know how they're related, and know which to open; and the need for the reading system to remember your last position. The group may think of others related to opening files more quickly.

Jennifer Sutton asked if there's an example of the standard way to indicate dependencies.

William: Avneesh's ticket entitled Breaking Navigation Further talks about how navigation should be broken down in a more granular way. In this ticket, he refers to the related ticket by typing its number, #41, and GitHub automatically made that a link to issue 41.

George said you can add a label too.

Avneesh: Asked William to add an issue about backtranslation.

If you have more granular topics for use cases already on GitHub, please add them. The point is to get into the implications of the topic. Try not to worry about doing this incorrectly; if something is done incorrectly, it can be corrected. Once some more granular tickets get posted, William will send the mailing list links to those issues.

2022-11-29 eBraille Working Group Meeting Notes

Present: Richard Orme, Michael Hunsaker, Tkáčik Michal, Tina Herzberg, William Freeman, Samuel Proulx, Samuel Desmecht, Susan Osterhaus, John Ylioja, Matthew Horspool, Jen Goulden, Mike Paciello, Danielle Montour, Anja Lehmann, Avneesh Singh, Jennifer Sutton, Jennifer Dunnam, Bert, Steve Noble (Pearson), Matt Garrish, George Kerscher (He/Him, Missoula, Montana), Nicole Gaines, Ka, Dan Gardner, Francisco J. Martínez Calvo, Svetlana Vasilyeva, Andrew Flatres, Kim Charlson, Caryn Navy (the person behind the video in today's meeting is named David), Leona Holloway, Manfred Muchenberger, Venkatesh Chari, Sara Larkin, Ron Miller, Basile Mignonneau (Association Valentin Haüy), Amanda Lannan (ala300), Charles LaPierre (Benetech, San Jose, CA), Jenna Gorlewicz

Lara Kirwan, who was not present, will watch the recording and take notes from there.

Tactile Graphics Task force

The eBraille group decided to form a tactile graphics (TG) task force that will meet separately, in addition to attending the eBraille meetings. They will select their own chair, manage their own schedule, and meet until they come up with a standard for working with TGs. William will be on that task force, and he asked who else would want to join the group. Volunteers: Venkatesh Chari, George Kerscher, Michael Hunsaker, Tina Herzberg, Ka from NNELS, Susan Osterhaus, Dan Gardner and Christian (either) from ViewPlus, Leona Holloway, Mike Paciello (or someone else from Pearson), Avneesh Singh, Michal Tkacik from Slovakia, Sara Larkin, Elita Group (Svetlana Vasilyeva), Andrew Flatres (HW). If anyone else wants to join the task force, they should send a message to the main email group. If someone else from your company wants to join in your place, notify the group via email.

William doesn’t expect to find a perfect solution for eBraille to handle dynamic TGs, but the goal is to find a way for the file format to handle the TGs “gracefully” today and into the future.

Braille Encoding

What kind of encoding do we want to use for braille? Unicode, ASCII, something else currently in use, or do we want to enable multiple encodings?

Input:

Francisco: Unicode because the group wants to be international. He wants to avoid the issues they’ve historically had with tables and different languages.

Michal: Unicode. Thinks one standard would be best rather than using multiple types of encoding.

John Ylioja: A con for Unicode is the size issue.

William: Unicode is three times larger than ASCII, but the TGs will be our main issue when it comes to size, so the encoding shouldn't matter very much. How will Unicode be handled on lower-end devices?
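For context on the size claim (a quick check, not from the meeting): braille cells in the Unicode block U+2800–U+28FF each take three bytes in UTF-8, versus one byte per cell in Braille ASCII, so pure braille text roughly triples in size:

```python
# Each Unicode braille pattern (U+2800-U+28FF) encodes to 3 bytes in UTF-8.
ascii_line = "HELLO"      # five cells in Braille ASCII: 5 bytes
unicode_line = "⠓⠑⠇⠇⠕"    # the same five cells as Unicode braille patterns

print(len(ascii_line.encode("utf-8")))    # 5
print(len(unicode_line.encode("utf-8")))  # 15
```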

David from Duxbury supports unambiguous ASCII with Unicode as a backup. Mentioned the situation where multiple print characters represent one braille character.

Manfred: Strongly supports Unicode for sharing across multiple languages. He did tests four years ago, distributing files in both ASCII and Unicode; some older braille displays can't handle Unicode at all.

Jen Goulden would not argue against Unicode. How much more work is it to have ASCII as an option? William mentions that this isn't difficult.

Matt Garrish: Unicode has three separate encodings—UTF-8, UTF-16, and UTF-32. This has come up in EPUB discussions, where they really should have only used UTF-8. ASCII is a subset of UTF-8 encoding.

Jennifer Sutton: Asked whether we have historically and currently focused on ASCII. William: Yes, though there are other similar encodings in use. Aside from size, what other pros would ASCII have?

David from Duxbury wants us to make sure the ASCII is unambiguous. As an example, in Japan there are 47 different Japanese characters that are closely tied to particular braille cells, and there are 16 or so other cells in braille that aren't used, besides space. Manuals for Japanese transcribers show those 47 characters as the representative Japanese characters, and Duxbury has a display mode that uses these Japanese standards. If we are international, we will learn about many different ways of representing braille characters on computers around the world.

Jennifer Dunnam: Thinking about two things—1) find a simple way to convert existing BRF documents to Unicode so they can be used in this new file format, and 2) ensure the interface would allow users to easily search this file format (like they currently do with six-key interfaces) and that it would support Unicode. William talks about how this issue isn't difficult to address.

Converting from ASCII to Unicode is not difficult (it isn't in BrailleBlaster [BB]). It also is not complicated to view an eBraille file in an editor on your braille display.
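As a sketch of how simple the conversion is (assuming the standard North American Braille ASCII table used in BRF files; the helper name is hypothetical, not an API from any of the tools discussed):

```python
# Index = 6-dot pattern (dot 1 = bit 0 ... dot 6 = bit 5); value = Braille ASCII character.
BRAILLE_ASCII = " A1B'K2L@CIF/MSP\"E3H9O6R^DJG>NTQ,*5<-U8V.%[$+X!&;:4\\0Z7(_?W]#Y)="

# Unicode braille patterns start at U+2800 and use the same dot-to-bit layout.
ASCII_TO_UNICODE = {ch: chr(0x2800 + dots) for dots, ch in enumerate(BRAILLE_ASCII)}

def brf_to_unicode(text: str) -> str:
    """Convert Braille ASCII to Unicode braille, leaving other characters (e.g. newlines) alone."""
    return "".join(ASCII_TO_UNICODE.get(ch.upper(), ch) for ch in text)

print(brf_to_unicode(",HELLO"))  # ⠠⠓⠑⠇⠇⠕
```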

William mentions how Unicode could change how we share braille within our community for the better. Right now, if you send braille to someone and they don't have your braille font installed, they will see the ASCII, not the braille. With Unicode, we wouldn't have to worry about all the braille fonts.

Tina: Asked if BB, Duxbury, and Braille 2000 (B2000) support Unicode.

  • William: BB does in Preview mode, and APH is currently adding the capability to the braille view.
  • William: LibLouis displays Unicode. Plan to make ASCII optional.
  • David: In Duxbury, you specify the encoding yourself, and Unicode is one of the options. Duxbury will make sure they can read and write these files. They have also made an internal commitment to take fixed braille files, turn them into reflowable files, and produce correct files in this new format.
  • Dan: ViewPlus also supports Unicode in Tiger Software Suite.

George: ASCII is teeny, and 3X teeny is still very small. Doesn't think size will be an issue here.

William, Jen Goulden, and Jennifer Sutton: Do we lose anything if we go with Unicode only?

Matthew Horspool: Does not support having multiple encodings—then we're tied to them forevermore. Transitioning to Unicode should be easy enough, and we should mandate that Unicode is the encoding we will support. He suggested we write tools that will help people transition their old files over to Unicode.

Charles LaPierre: agrees 100% with Matthew. Thinks it would be a mistake to go backwards. UTF-8 is already a standard and ASCII is a subset of that. We're not losing anything by going with Unicode.

Dan G: Older devices can't read Unicode. Matthew: Older devices that can’t display Unicode wouldn't be able to open eBraille files anyway. Dan: He wants to capture this information as a “known known” so the group doesn’t have to revisit the subject later. We could have a converter for all devices, so they can “drop it back down” if they needed to.

William talks about how backwards compatibility and conversion tools would help us with older devices. APH is working on a converter. Indicates it sounds like the group wants to go with Unicode, and asked that if anyone objects to say so now.

Venkatesh: Supports using Unicode and thinks it is the right way to go. He wants to make sure the group doesn’t lose sight of the fact that size is a factor. While there are gobs of memory on modern computers, Orbit still has some low-end systems with limited memory. For those systems the file size is a factor. This is not just a matter of available storage space on SD cards. Remember that when you're searching files, for example, if something is three times as large, it will take three times longer to do that search.

Avneesh: If we are talking about eBraille as an HTML/XML-based format, the in-memory difference between ASCII and Unicode is not so huge, because the parsed objects are there either way. Avneesh recommends settling on Unicode and then evaluating any hardware issues that come up with it.

William: The key is addressing backwards compatibility and offering file-conversion tools that can address the size issues. He doesn't want hardware issues to be problematic when this file format comes out and garners wide support; e.g., he doesn't want people to have to buy something new to be able to use this file format.

The group decided to go forward with Unicode.

Goals for the next month

The next phase is identifying the technical requirements and the content document. This requires diving deeper into the use cases and building the requirements that will be used for the spec, potentially making more granular use cases.

Avneesh: Agrees with William about putting together the content document. He recommends forming a metadata task force. Before we do those things, though, he believes we should define and go deeper into the use cases.

Richard: Important to set up a graphics task force. Decouple that work so it isn't a dependency for getting through the eBraille discussion.

William: The tactile graphics task force will meet separately and go over which file type(s) to support and recommend and how we'll deal with them. Then they'll bring their recommendations to the main group. In addition, the plan is for this meeting to occur once a month; we'll primarily do our work via the mailing list and GitHub.

George: The tactile graphics task force can meet whenever they decide, and the main content group whenever they decide? He asked if the whole group would meet monthly. If the main group needs to discuss the content document or technical requirements, they could meet more often than that.

Avneesh: The main group will address content, metadata, and other things. Do we want a metadata task force in addition to the TG task force? Is once a month sufficient, given that they will have a lot to work on?

Andrew: Told William he’d like to be on the TG task force.

Go to once a month for eBraille

Matthew is fine with once a month; then we can re-evaluate after the holidays.

Manfred: If we have very specific tasks, it would probably be more efficient to have smaller groups—maybe a metadata task force.

Richard: likes monthly meetings. Over the next month, he would like the co-chairs to evaluate what the forward plan is, see if the group is proceeding at the correct pace, and then determine if a monthly meeting is a sufficient timeframe.

William, Anja, and DAISY leadership can clarify the plan, including milestones, and then get feedback from the group via email. Once milestones for progress are established, they can determine if monthly is a doable time frame. Anja: You can always email the group and use GitHub to keep things moving. Because everyone is so busy, this may be helpful: rather than trying to align everyone's schedules, people can work on things when it's convenient for them.

Michael and Jen: Agree with this plan.

At William's request, Avneesh said he would help start the conversation about working on the content document. He referred to Matt and his familiarity with the mainstream standards.

William: How can the group contribute before we meet next?

Matt and Avneesh found that the use cases are not yet at the right stage for making the content document.

Matt: Start constructing an idea for outlining the spec, which requires an understanding of the technical details of what needs to be included in the content document. This means they will have to go through the use cases again and determine the key requirements. Need to plan out the specifications doc and how it will line up with the use cases. What are the impediments to it being

Avneesh: We have the technical document, and it would be good to prioritize for a minimum viable product (MVP) and then go back to the use cases and define them further so we can get into the detail of the technical requirements. Matt believes this will take the most work at this stage.

William volunteers to draft the MVP on a wiki and allow everyone to review it by adding comments and making changes. Then, with that document in place, the group can go back to the use cases, making them more granular and providing info to flesh out the technical requirements.

Duxbury (David): Recommends going through a category of the use case area in time for the next meeting, just to give everybody an idea of what information is needed so this can be filled out much faster.

Avneesh: If everyone is available in two weeks, it's a possibility. Making decisions via a teleconference expedites things, but that means we need to make sure everyone is available.

William: Because of the upcoming holidays, he recommends meeting December 13 and then beginning to meet in the middle of each month. They could provide a draft of the MVP and go through a category of the use cases to give everyone an idea of what's expected. He clarifies that it will not be perfect; it will be a starting point from which people can create a better version.

Dan, Jennifer: Like the plan, because those who are more experienced can take a category and model how we want to present more granular use cases as we work on them together.

George: December 13 and January 10 meetings? Richard, Dan, William: yes to both. Make them the 2nd Tuesday unless that poses a problem for anyone.

Action Items

  • William will make a draft MVP statement on GitHub
  • William and Anja meet with DAISY before sharing with the group and go over the category of use cases to address.
  • William will email the Tactile Graphics taskforce and start the conversation about its meeting schedule, tasks, etc. Do we need our own mailing list? DAISY has volunteered to make one if needed.
  • Jennifer: Likes the idea of more experienced people going through a category of use cases and modeling how to work on them for those with less experience. If they can demonstrate how to define more granular use cases, she feels she can work more independently.
  • Co-chairs and DAISY will meet to discuss which category of use cases to cover at the next meeting.

Next meeting is December 13th and then January 10th.

2022-10-18 eBraille Problem Statement Use Cases

Meeting Notes

Present: Avneesh Singh, Nicole Gaines, William Freeman, George Kerscher, Anja Lehmann, Matt Garrish, Jennifer Dunnam, James Bowden, Charles LaPierre, Francisco J. Martínez Calvo, Matthew Horspool, Dancing Dots, Jen Goulden, Jennifer Sutton, John Ylioja, Peter Sullivan, Ka, Michael Hunsaker, Mike Paciello, Orbit Research, Peter Tucic, Rianne LaPaire, Svetlana Vasilyeva, Tkáčik Michal, Bert, Lara Kirwan

Use Cases

6- or 8-dot braille or both?

James will write this use case. What does the group think?

So far none of the use cases for the ebraille file format state which dot configuration we need. This affects how we encode the braille. ASCII is mainly for 6-dot braille, so 8-dot would be clunky. Do we want to focus on 6-dot, 8-dot, or both?

Peter: If this is an international group, we should account for both since many countries use 8-dot.

Jennifer: both.

Francisco: Why not have both? What about Unicode, since there are codes for both 6 and 8?

James: Clarified that we aren’t discussing the encoding, rather we’re discussing whether it should be 6- or 8-dot or both. We need a meta thing saying it’s 6 or 8, because if it’s 6, you don’t want to have blank lines between rows of braille.

William: Asked if anyone disagrees about using both 6- and 8-dot.

Dancing Dots (DD): Supports both because we need to... He dislikes 8-dot braille and does not think it serves blind people well, but said it is "a fact of life" that we need to accommodate. Mentioned that Louis Braille experimented with a lot of dot configurations and concluded that 6-dot was the least ambiguous. Suggests this effort could be a way to focus on 6-dot braille that's smarter.

James: Most literary codes are 6-dot. DD: Music code is too.
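One way to see how the two configurations relate (a sketch using Unicode's braille block, not a decision from the meeting): Unicode encodes dots 7 and 8 as the two high bits of the cell, so 6-dot cells occupy U+2800–U+283F, and a file's dot configuration can be detected or declared from the code points alone:

```python
# Unicode braille patterns: U+2800 + bit mask, with dot 1 = bit 0 ... dot 8 = bit 7.
# 6-dot cells therefore fall in U+2800-U+283F; dots 7/8 set bits 0x40/0x80.

def uses_eight_dots(cell: str) -> bool:
    """True if this braille cell uses dot 7 or dot 8."""
    offset = ord(cell) - 0x2800
    return 0 <= offset <= 0xFF and bool(offset & 0xC0)

print(uses_eight_dots("⠓"))  # False: dots 1-2-5 only (a 6-dot cell)
print(uses_eight_dots("⣿"))  # True: all eight dots set (U+28FF)
```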

Tactile graphics

William: Concerned there isn’t a use case for tactile graphics.

George: Added one for having a hotspot label you could press to see what the graphic is. He also mentioned one related to the "tour" (aka an audio description) that can be used in a diagram or picture. There are at least two for tactile graphics; he submitted them right after the APH conference.

Peter S: Wants to see those use cases. He has two people he’d like to invite to these conversations. William welcomed that.

James: There are other issues involving graphics, but they aren’t marked as use cases. An example is the text description for graphics.

William: It’s marked as issue #24, content spec, high priority and use case. The deadline for adding use cases was last week, but should we keep the window open a little longer?

Jennifer: High-level idea that may be covered by existing use cases, but she would like to add something about geometry and maps. They aren't exactly use cases, but they are areas we should cover to see if people can think of use cases when considering geometry and maps.

Nicole: How will we indicate whether images are information-bearing versus "eye candy," and whether an image has been "reworked" from being aimed at a visual learner to being displayed in a tactile format? She is concerned that there will be an expectation that one could automatically generate this format from another source file format and that it would be immediately renderable without further info or the efforts of a skilled transcriber. Concerned about what it will look like if the groundwork is not laid to provide quality graphics. What workflows do we need to lay out? How will the graphics be generated?

Avneesh: Those are very good points. He’s concerned that graphics are a wide subject that needs a task force that focuses just on graphics, with the participation of the hardware manufacturers. With their help we can figure out device limitations, how the algorithms interpret the JPEG and PNG images, and whether they can be automated or not.

William: Per Nicole's comment: He assumed we were talking about a prepared file. We need to point out now that there is the possibility that someone will create a "dirty" eBraille file that was machine-prepared, where the graphics are just the images that were in the starting file (NIMAS XML, EPUB, etc.), and now the file isn't that usable—graphics in particular will have very little (if any) use. It may be worthwhile to have a use case about the quality of the TGs and whether or not they've been prepared in a specific way.

Peter T: From HumanWare’s perspective, the TGs need to be prepared, and we would need something like a disclaimer to indicate that.

Charles L: We need a use case for maps. Maybe Mike May from Good Maps could provide one.

George: We will need metadata on TGs, since some will be 3D models for external printing or for displaying on one of the newer dynamic tactile displays. For example, images in JPEG format will be terrible, so "you need to know what you're getting."

Peter S: It may be useful for the reader, depending on their available hardware, to render the TG and have available more than one version of an image to work with.

George: Asked Charles if it would be useful for the group to see all of the work they did at the DIAGRAM Center with maps that could be produced. We didn't have a dynamic tactile display—just 2D, 3D, tours, and metadata associated with grade level. Said that work may need to be resurrected.

William: We will give people one more week to submit use cases about TGs and other areas, and William will send an announcement to the mailing list. Even if something is added that ends up being about the reading system and not the spec, it would still be useful for a TG task force to address.

Nicole: Avneesh, what do we need to have ready by the November DAISY board meeting? Are we on track to provide information at that meeting?

Avneesh: 90% of the technical analysis document will go to the board, but they are not technical experts. RE use cases: Said we’ve done a wonderful job with the use cases. At the same time, we need to cover the use cases more granularly. The board needs only a high-level view of what’s going on with this project, but this group needs to go into more technical detail.

Francisco: What’s the best way to send comments about the document itself (not the use cases)?

Technical analysis document review

Avneesh: This document is on OneDrive, and we need to give the team editing rights to it. This document doesn't go into technical intricacies, but rather gives a high-level/strategic view. Main objective of the document: provide high-level direction, provide recommendations, and define where we're going. We have analyzed the use cases, pulled out the technical requirements for them, and figured out which of the existing specs support these technical requirements (PEF, PDF, HTML, DAISY XML, etc.). In conclusion, we compared the various specs to determine a good way forward. For example, for packaging we identify the specs that are already there. We are not providing a recommendation at this time, because when we get into more granular use cases we'll be able to figure out the best packaging format; we're providing the direction that there are three options to choose from in the future. Similarly, we are not providing specific recommendations about graphics, only that we recommend creating a TG task force that works closely with hardware and software manufacturers.

James: Linear braille formats are one file type not mentioned in the technical document. He isn't recommending them, just mentioning them in case we want to "close them off" if we don't want to use them. BANA and Duxbury both have linear formats, and there is a Danish one as well.

Peter from Duxbury: Theirs was intended to be based on the BANA format but he can pretty definitively say that no one uses it.

Jennifer S: Asked James and Peter to explain what this is.

James: It is a halfway house based on the US ASCII computer code (a BANA standard), and it has markers in the text indicating this is a heading, a table, etc. It meets some but not all requirements for eBraille.

Peter: The markers are pure text.

James: We should mention this format of braille and then indicate that we will not include it.

Jennifer: Would it help to go through the BANA specs and make sure we cover what that document covers? This is not necessary. Peter: No, it's not worth the effort to investigate. If linear braille is needed, it can be done on the fly.

Jennifer D: The linear braille specs are not maintained any longer, and other things cover the issues better now.

Avneesh: We will mention this and then close it.

William: What are thoughts about leveraging the recommendation related to HTML/XHTML and CSS for this work and keeping DAISY XML as a backup?

James: A concern with XML is that it's verbose and memory intensive. A lot of braille displays don't have much memory or processing power.

Peter (Duxbury): While James is correct, that cannot be the primary concern, because pre-processing to generate a more specific format for a particular device is always possible on a PC.

James: Yes. He would like to compare the sizes of a typical book as it currently is in standard BRF format, in PEF format, and as an EPUB, just as a size comparison.

William: Size and complexity need to be considered, and we have agreed we won't limit ourselves to only what a low-powered device can handle. While NIMAS XML files have a lot of images and can get very, very large, not all the images would need to be reproduced; that throws off the size comparison a bit. He will share some of the work in this area on the mailing list and asked the team to share other info if they have it.

Francisco: Said he tends to defend PEF because "we've" been very happy with it. The only problem with that format is that it's underdeveloped, but it has potential. It's XML, and you can do with it the same things you can do with a DAISY book. It isn't that these file types are not capable... they just need to be implemented. EPUB and DAISY are more developed than PEF. How long would it take to develop EPUB or PEF into the standard we're looking for? It's hasty to discount PEF because it isn't developed yet. It's hasty to discount HTML at this point.

James: Correct that PEF handles the internationalization issue, but it does not handle word-wrapping or document semantics, and it doesn't handle graphics, to his knowledge. That's three out of four. Do you agree with that?

Francisco agrees, but says it has potential.

James: Reiterated that he isn't advocating this, but DAISY, EPUB, and HTML already have document semantics, graphics, and word-wrapping. The only thing they don't have is internationalization of braille. That's why the document leans towards the XML-based systems.

Avneesh: Two options: Keep PEF as the base and put everything that DAISY and xHTML provide into PEF. Or we keep xHTML or DAISY XML as the base and put PEF qualities into DAISY and xHTML. In our analysis, we found that putting PEF qualities into these formats is much easier than putting all the qualities of these formats into PEF.

Francisco: He just doesn't want to lose whatever we already have with PEF. He doesn't mind whether we go for PEF or EPUB, but PEF has something, and he isn't sure that something is in the four options/big groups that were mentioned. If whatever we get has the functionality that PEF has today plus everything else, he's happy with that.

William: That’s a good point.

Jen: RE HTML having all these advantages but being challenging to update: Do the benefits of HTML outweigh the challenges?

William: The document lays out those challenges, but it may not go into specifics.

Avneesh: Extending HTML so the browsers get all of these capabilities is a huge challenge; it involves players like Mozilla, Microsoft, Google, etc. The workaround is the same as what we did in the EPUB world: we added extensions to HTML and CSS, so if you open those files in the browser, they will render properly. The browsers know what is "garbage" and won't use it, but braille reading systems will know about and use the attributes.

Bert: We cannot enhance PEF to include everything, but do we need to choose between HTML and PEF and CSS, or can we bundle them in the same file in a zip package?

George: Concerned about the architecture. Having all of the capabilities we want may be impossible to implement. While it’s necessary to be able to add TGs to PEF, we are going to have files that are focused on embossers that have different capabilities.

Peter S: George, we need to be careful to stipulate there are two different kinds of implementations. One is to produce and the other is to render. Asked if George was specifically asking about producing.

George: I'm afraid of the reading system that will need to be built to use these features, but production/the authoring side of things is a big issue too.

Peter S: If you’re talking about having different versions of the file formats or devices with different capabilities, it’s very important to include metadata that will help users to find the different versions of the files.

Avneesh: Bert is the principal engineer of the DAISY pipeline project, and he is the person who is starting the transform of PEF and digital XML.

William: Should we post the technical analysis document on GitHub and edit it? If you aren’t comfortable using GitHub, you can ask questions via the mailing list.

Michael H: thinks having it on GitHub is a good idea, especially since it's version controlled. It helps us see the thought processes that happen as we come to a consensus.

Avneesh: Would it be better to put it on a wiki page instead, with markdown? The team decided that the wiki would be the better place to house this document.

James asked if we need to state that we won’t cover markdown. Avneesh said there are many challenges with markdown.

Dancing Dots: Please explain the difference between points 1 and 8: the reflow of information and the dynamic spatial arrangement of things.

James: Explained each of these items. Reflow is about ordinary text/an ordinary paragraph. If you create it for a 40-cell braille display, you want to still be able to read it on a 32-cell braille display. Line breaks will be in different places, but you don’t want “long line short line syndrome,” where one line is long, the next has 3 characters, and then you have to pan again. #8 is about how to display tables specifically. He wrote in a comment that there are many ways to display a table, not just in tabular form or what in the UK they call paragraph form (linear form in the US?).
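To make the reflow point concrete, here is a toy sketch (an assumption-laden illustration; real reading systems follow braille-specific line-breaking rules, not plain word wrap):

```python
# Reflowing the same braille paragraph for a 40-cell and a 32-cell display.
import textwrap

# A repeated placeholder phrase standing in for real braille text.
paragraph = "⠠⠓⠑⠇⠇⠕ ⠺⠕⠗⠇⠙ " * 8

for cells in (40, 32):
    print(f"--- {cells}-cell display ---")
    for line in textwrap.wrap(paragraph, width=cells):
        print(line)
```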

William: It's also things like vertically arranged math problems, e.g., nine math problems with three per row, where you need enough markup that you can reproduce them in a way that makes sense on a line length that's not necessarily the same as the line length the material was prepared for.

James: That has direct application to bar-over-bar music, where you have variable sized “chunks,” e.g., rectangles, which you need to rearrange if you don’t have enough line space.

William: That sounds like it’d be specifically interesting to Dancing Dots.

Avneesh: Is there anything else, in principle, that we need to address in the technical document (not just the wording)? Is there anything problematic that would mean we should not move ahead?

Charles: XHTML is the format we used in the EPUB specification, which we kind of regretted; he wants to make sure we don't go down the same rabbit hole.

Avneesh: That's a decision we will make later. HTML vs. XHTML—we want to avoid getting into that, because last time that discussion took a year and a half. He will indicate that HTML or XHTML is something that will be decided later. He will post it to the wiki page tomorrow.

William: Will post an analysis of the different file sizes and how they change from BRF to PEF. There isn't a lot of room to compare NIMAS XML to EPUB, but we can at least compare NIMAS XML to PEF and BRF, and so on. If others have analyses they would like to share, please send them to the list.

Two weeks from now is the DAISY board meeting, so we won't meet for another month and will do most of the work through GitHub and the mailing list.

2022-10-04 eBraille Problem Statement Use Cases

Present: Thomas Kahlisch, Avneesh Singh, Nicole Gaines, William Freeman, George Kerscher, Scott LaBarre, Tamara Rorie, Anja Lehmann, Matt Garrish, Jennifer Dunnam, James Bowden, Charles LaPierre, Alice O’Reilly, Francisco J. Martínez Calvo, Manfred Muchenberger, Richard Orme, Matthew Horspool

Agenda

  1. Housekeeping
  • Reminder about recording and notetaking policy.
  • Reminder to introduce yourself briefly when speaking.
  2. Finalize use cases and priorities—Main topic here will be to ask if there are any objections to the priorities currently in the GitHub project. Then we will discuss the following topics:
  • Number of files to be bundled (just the braille volumes, or do we need to include a TTS file and potentially other versions of the same file)
  • Metadata—what do we need to include here? Is everything adequately represented in ticket #27?
  • Semantic markup—what do we need to include here? Is everything adequately represented in ticket #28?
  3. Open for discussion of topics from members
  4. Discussion of next steps

Finalize Use Cases and Priorities on GitHub

Priorities

Avneesh defined the priorities:

  • High: required for the spec to be successful
  • Medium: may want to include in a future revision and may be able to add in the first version
  • Low: out of scope

James emailed William and Matthew about use cases. William will bring them up in the course of this meeting.

Current tickets to discuss

Manfred: Would like TTS to be a high-priority use case, rather than medium or low. William: It’s currently medium.

Three use cases to discuss:

  • Will there be a TTS file? Other versions of the file, such as uncontracted?
  • Metadata ticket #27
  • Semantic markup ticket #28

Relevance of TTS to Germany

It's not relevant in every language but is definitely relevant in German, contracted and uncontracted. It is important for the user to be able to switch between listening to the book and reading it on the braille display—for example, start the book, listen to it via built-in TTS, and then switch to reading it on the braille display. We want error-free contracted braille. To make that happen, we'll need the file of the original text and the contracted braille ASCII files. His group plans to package the braille edition and the text from the commercial ebook in an ePUB, so users can switch between the two. They were not planning to include preprocessed TTS audio—just the TXT file that can be read by a TTS engine. Manfred indicated that this is important for the user and for the Library for the Blind, so it would be convenient to be able to distribute this in one package.

Discussion

William: Can this be left up to the reading system? Can the reading system back-translate the braille in German? Manfred: It’s not possible.

Anja: There are notetakers that can back-translate, but the results are not really good. She uses the notetaker and has the notetaker translate into contracted braille.

Richard: Describes current usage of digital text in an ePUB or Word file, where users have TTS and an on-the-fly translator to read the braille. Agrees with Manfred's comment about linking use of TTS with different contractions. In this new standard, we want to include all of this in one file. The subject of the debate is whether or not to include in the first version of this new standard/approach the ability to use multiple versions in one file. Scope question: can we go straight to supporting multiple versions in the first version? Is this a solid use case? Does the format support it (not just a reading-system issue)? Do the authoring systems support it?

Manfred: While this is complicated, if we don't describe the mechanism in the standard, it will be harder to do later. Avneesh agrees.

Francisco: How difficult will this be to do in the future? Will it be easier to do from the start? Consider instances where you have multi-volume works that need several files in the same package. Thinks this is a high priority.

Avneesh: 1) What Francisco is asking for is different from what Manfred is asking for. If you have multiple volumes, the reading of the files goes from the first to the second to the third and fourth; this is marked as high priority. 2) Manfred is asking for two parallel tracks of files, essentially packaging two applications into one. For usability it is very important. It is marked medium because we want to consider it in the design, but Avneesh is concerned about whether it will be possible to do this within one to one-and-a-half years. The reason is that the multiple-rendition specification is nearly dead; it is too complex. There are technical issues that need to be resolved before an agreement can be reached, and there's an issue of synchronizing different files. Multiple rendition uses Manfred's approach, but the publishing industry has not picked up CFI software. We have to verify if this is the right approach or if we need to look at mapping IDs. This needs to be part of the design consideration, but it should not be part of version 1.

Manfred: It’s true commercial publishers won’t need this—they distribute braille files, but is that who we’re expecting to use this?

Avneesh: Multiple-file specifications are not technically mature. It may work with CFI, but we don't know if it will work with all languages. These are the issues that need to be resolved before we can move forward, and it is not feasible to address them within the year-and-a-half timeline.

Robert: Has a braille display and uses JAWS simultaneously. If we include the braille translator in the device, you could create an ePUB file while the device synchronizes. Would like to control the braille grade and mentions a use case about technical documents that could be hard to read in, for example, contracted UEB.

Matthew: 1) Need to distinguish this from packaging multiple versions of braille in the same package; when talking about multiple braille versions, he wasn't referring to their being in the same package. 2) Per Robert's point, and to ask Manfred: What are the situations in which forward-translation is not sufficient? What would be the use case for needing this in an eBraille file rather than an ePUB file?

Manfred: We already distribute ePUB files. You can read them on a braille display, but in the case of contracted German braille, there are a lot of faults with them; the quality is not good. Is our main use case print braille or just braille displays, and if just braille displays, is it worth having a new file format just for that?

Ka Li: This being medium priority makes sense. For those who want to learn different languages, it is important to have TTS with contracted braille, synchronized, and to include multiple files of text alongside good-quality transcribed braille.

William: There are a lot of good points for why to use the TTS file. Is it okay to keep it as medium priority with the caveat that we will include it in our discussion of the first draft of the specification? As we consider what we want to model this on and what already exists, we would consider the addition of the TTS file and the synchronization in the first spec. We don't want to cut ourselves off from this option in a future revision.

Avneesh: Can we consider doing this in the first version, but if we aren't successful, will that be okay? He is concerned about the technical details. Strategically, we should design eBraille so that if Manfred wants to package the eBraille files in ePUB or another specification, he is able to do it.

Manfred: If SPS is the only organization that wants this, we don’t want to stop the group from doing something, but it is fine if we can put this under consideration.

Avneesh: We seem to be agreed that we will make it part of the design. When it becomes part of the specifications depends on the workload and when we can achieve it in the timeline.

Orbit Research: Supports having this capability as part of the design, although he understands it probably cannot be part of the first version.

William: It's still medium priority, but we will definitely make sure it's part of the design from the start even if it isn't implemented in the first version.

Metadata

William: James Bowden submitted this idea. He isn't trying to make an exhaustive list. The reason for this ticket and the markup ticket is that several tickets were covering different aspects of metadata; this is meant as one ticket that covers why metadata is important, how to include it, and what to include.

This one ticket covers title, author, publisher, publication date, and ISBN. It references several other tickets: page layout information (characters per line, lines per page), braille grades, braille code, and producer name. Does anyone have anything else they want to add to this ticket? William will add it to the ticket as something that was discussed in today's meeting.

Robert: There needs to be a ticket for tactile diagrams that says what system they run on. A tactile diagram that can print on a View Plus Technologies printer or an Index printer is a different file format, as opposed to a 3D diagram you can print on capsule paper. There should be a tag that indicates those images. William added knowing the medium and hardware requirements to the ticket.

Francisco: Made a comment with a link to NLS. The metadata they use is divided into two: core metadata (8 elements) and the extended metadata elements (many are important but including them may be too much for one ticket). He recommends including the 8 core elements (or at least a few of them). While he thinks the extended metadata is important, there might not be time to include it at this phase.

Avneesh: He summarized the group’s desire to make metadata a high priority and that we should move metadata forward. Suggests that the details be addressed in a smaller group.

Charles: Agrees to set up a task force to discuss metadata. Two additional points: 1) include accessibility metadata (how to handle tactiles in this package). 2) Bookshare specifically also requires copyright date, copyright holder, and a synopsis.

The group agrees that metadata is a high priority, and we will need a group in the next phase to determine what the actual metadata requirements will be.

Matthew: Should the metadata be about the file or about the book? He wants to discuss this at some point, not necessarily now. William added this question to the note. He also added that a taskforce will be formed to discuss this subject at the next phase.

Semantic markup

This is marked as high priority. It is referenced somewhat in the problem statement. This ticket is meant to cover several use cases. The use case was more about the reading system, but these are definitely considerations we want to make sure are included in the file standard. If we include semantic markup, there are navigation possibilities to consider. There was a lot of discussion in the ticket about how to show and hide elements and whether you want the ability to back-translate at all. Also included was how braille code changes should be handled: if you don’t mark those up, how can you back-translate? We need to identify when a change would take place. We won’t discuss everything that needs to be included in the list, but are there questions about anything in the ticket?

Jennifer made two points: 1) It doesn’t seem like a semantic-markup issue, but rather an issue of showing/hiding and the potential suppression of extra blank lines. It seems like a reading-system issue. Could notetakers or braille displays with screen readers potentially manage this? Are additional blank lines out of scope?

William: Braille displays using current braille file types do suppress blank lines. With eBraille, where do those blank lines come from? If we’re using markup and separating content from its presentation, suppression of blank lines should be easier, since reading systems will be supplying the blank lines.

Jennifer: The semantic idea of marking up every single line so it can have a tag to “hang its hat on” to figure out what to suppress doesn’t make sense. Not sure if this is a semantic issue at all.

Avneesh suggested Jennifer add this as a separate issue on GitHub.

Matthew: The more markup we add to the file, the less like a traditional BRF file it looks and the harder it will be to reflect what the transcriber actually wants. Transcribers may add blank lines as part of typical braille formatting.

William: BrailleBlaster uses markup in the creation of files (Duxbury may too). Even though transcribers have the option of using markup, a lot of them simply hard-code things using generic styles. Do we leave the option to hard-code things? Do we have them rely on generic styles, even though that limits navigation possibilities? If we don’t include it, that limits the likelihood that transcribers will use this file type.

Charles: With semantic markup you can have preformatted text, add in the spaces, and put tags around it, so when a braille transcriber wants those spaces, they can put them in and they will be tagged as such. Also, tagging code could solve the issue of regular text versus code: in the semantic markup you could apply a different braille translation or not translate at all. Having it in there could solve both types of issues.
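
A rough sketch of what Charles describes, assuming an HTML-based file; the class names are invented for this illustration and are not part of any draft.

```html
<!-- Hypothetical illustration; the class names are invented for this sketch. -->

<!-- Transcriber-supplied spacing is kept exactly as entered and tagged as such: -->
<pre class="transcriber-spacing">
⠼⠁⠀⠀⠀⠀⠼⠃⠀⠀⠀⠀⠼⠉
</pre>

<!-- Tagging a span as code lets a reading system switch to computer braille
     or leave it out of back-translation: -->
<p>Type <code class="computer-braille">ls -l</code> to list the files.</p>
```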

William is adding all of this info to the ticket, preceded by “at the meeting.”

Avneesh: There may be special semantic-markup issues for braille. If we can identify what specific markup is required for braille but is not part of the DTBook XML or HTML specifications, it can be highlighted. It will make it clearer whether we can reuse XHTML or DTBook.

Manfred: We produce braille from an XML source and use some specific braille markup... that we don’t use in large print. There are several things that are used only for braille, e.g., capital letters. Not sure if this is production-related or something we can figure out in the file format.

Avneesh asked him to send him information about that. In a conversation with William a year ago, he said there are two types of paragraphs: block and simple. In the DAISY XML and HTML, we have only one kind of paragraph. If there are other differences—something we need in braille—we should have a list of these things.

Matthew: He doesn’t disagree. He doesn’t know if DTBook has it. HTML has the notion of classes and CSS. Would it be possible to use HTML with a special type of CSS that allows us to add those extras in? It’s a special type of paragraph, but it’s still a paragraph.

Avneesh: If we have a list, we can see if CSS can achieve it or not.
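
A minimal sketch of Matthew’s suggestion, assuming HTML plus CSS classes; the “block” and “simple” class names come from the two paragraph types mentioned above, and the style rules are illustrative only.

```html
<!-- Hypothetical sketch: one HTML paragraph element, two braille presentations
     distinguished by class. The rules below are illustrative, not a braille CSS profile. -->
<p class="block">A block paragraph: no first-line indent, preceded by a blank line.</p>
<p class="simple">A simple paragraph: first line indented two cells, no blank line.</p>

<style>
  p.block  { margin-top: 1em; text-indent: 0; }
  p.simple { margin-top: 0; text-indent: 2ch; }
</style>
```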

William added this to the ticket and will review previous drafts to see if something else can be added to this ticket. He asked others to add their comments as well.

Next Phase

Deadline to finalize use cases is October 11. Make sure you create any issues or add your comments to issues already in there by October 11.

Richard: Suggested that when we meet October 18, we can review use cases and see what is emerging in terms of technical requirements.

At the October 18 meeting, a future meeting schedule will be discussed.

Comments:

If you have further comments about any of this, post them on GitHub or send them to the mailing list.

2022-9-20 DAISY eBraille Meeting

In Attendance: Richard Orme, George Kerscher, William Freeman, Avneesh Singh, Matthew Horspool, Jennifer Sutton, Andrew Flatres, Anja Lehmann, Basile Mignonneau, Bert, Charles LaPierre, James Bowden, Jen Goulden, Jennifer Dunnam, Michael Ryan Hunsaker, Peter Tucic, Sarah Bradley, Svetlana Vasilyeva, Tkacik Michal, Venkatesh Chari, Lydia Smith

Introduction

Meetings are being recorded for the purpose of notetaking and will be deleted after notes are completed. Anja asked if anyone objected to this; no objections.

Since these meetings are very short, the expectation is that you will come prepared to participate. All are welcome to invite others. As much discussion as possible will take place on github and through the newsletter so meetings can be used to make decisions and reach consensus. Please let William know if he has forgotten someone or feel free to invite people yourself.

Question: Are we using this meeting just to assign priority? Richard agreed, and Avneesh clarified that for unclear use cases or use cases pointing toward content and reading systems or authoring, the cases ought to be split into two.

Issue 5 – Production of braille files

Labeled as an authoring issue pertaining to how easily a file is created. William stated the goal is for conversion to be as easy as File > Save As eBRF in braille production software. Tags will not have to be entered manually. James Bowden said that production tools like braille translation software should adopt this new standard.

Avneesh suggested that this is more of an implementation issue, not content specifications. He suggested issue 5 be skipped and returned to once work has begun on specifications.

Issue 6 – Braille without any transcription errors

Richard explained this refers to a braille-first file format. The encoding of the file needs to be consistent, such that everyone uses Unicode or ASCII, etc. Michael looked at the original post from 27 days earlier, which posited having a digital text with pre-transcribed braille in Unicode rather than ASCII; that part was deleted.

Question: If it must be Unicode, what priority? James suggested high priority. It should be a braille-first file format.

George asked if encoding was pre-transcribed for files to be converted. James said yes, the file contains characters directly read as braille characters. All agreed to assign this use case high priority.

Question: Do we close this use case now that we commit to Unicode?

There is another issue (3 or 4) about internationalization. William pointed out that Unicode files are three times as large as ASCII files for the same content (UTF-8 encodes the Unicode braille patterns in three bytes per cell, versus one byte per cell in ASCII braille). He suggested we not commit to Unicode but commit to “Unicode or ASCII.” William updated issue 6 to include more detail. Avneesh and George said this issue should be revisited once the architectural design comes into effect.

Issue 7 – Synchronized switching between braille display and TTS audio

This is also an implementation issue, outside of specifications. William says they could make the spec using Unicode and still do TTS audio. This would shift burden to screen reader developers.

Avneesh explained that his understanding was that there would be two files (braille first and HTML) and a mechanism to switch between them. He asked if this was achievable and if this needed to be packaged in the first version of specifications (as opposed to revisiting in the future).

James brought up use case 18 (having contracted and uncontracted braille in same file). Some countries have more than two braille coding standards. There could be all kinds of graduated braille codes for a learning scheme. If you have multiple files of the same text, you could have a speech file of that text. Not the same as back-translation. Multilingual files would be an issue, as well.

Jennifer proposed that TTS synchronization should be a medium priority issue. James, Richard, and others agreed.

James pointed out that if you add a whole parallel speech stream, you will double the file size. William agreed and mentioned that it will not compare to the file size of tactile graphics; it was just something to note when we talked about using Unicode.

Issue 8 – Reformat tables from spatial to linear as needed

William suggested that this was a problem for the reading software. There needs to be enough info in the file to be able to reformat from spatial to linear; the table tools in HTML and XHTML can be used so that we will know the contents of each cell and can reflow as needed.
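
For illustration only: a table marked up with ordinary HTML cells carries enough information for a reading system to render it spatially on a large display or linearly (for example, “Year: 2021; Titles: 312”) on a smaller one.

```html
<!-- Illustrative only: because each cell is tagged, a reading system can keep the
     spatial rows and columns or reflow the same data into a linear list. -->
<table>
  <tr><th>Year</th><th>Titles</th></tr>
  <tr><td>2021</td><td>312</td></tr>
  <tr><td>2022</td><td>340</td></tr>
</table>
```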

Michael mentioned that use case 23 also has this as an issue and said this seemed to be a matter of implementation. All agreed to assign high priority.

Issue 9 – Internal navigation via hyperlink

Question: Should we leave this to the reading system to remember where you were? Or should the author have to put in a link to the footnote and then a link back?

George clarified this was marked as a reading system requirement.

William asked what priority would having internal navigation links be? All agreed to assign high priority.

Issue 10 – Hide text based on user’s preferences

This use case pertains to things that only exist in print because of the medium’s physicality (like running heads or page numbers); in BANA formatting, page numbers take up 6 or 7 cells. A lot of space can be saved by hiding these elements. This is more of a reading-system issue than an implementation issue.

James mentioned there are several similar use cases and suggested changing the title to “Hiding header/footers and page numbers.”

Question: Are we saying that the semantics of a particular element would be the criterion for hiding or showing? Richard responded, “If we classify this as reading system, we’ve lost it. But there may be bearing on content specification.” James agreed there would be bearing on the file format: these elements need to be labeled for the reading system.
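
A hedged sketch of what “labeling these elements for the reading system” could look like in an HTML-based file; the class names are assumptions made for this illustration.

```html
<!-- Hypothetical labels; because the elements are identified semantically,
     a reading system could show or hide them based on user preference. -->
<header class="running-head">Chapter 7: Photosynthesis</header>
<span class="print-page-number">247</span>
<span class="braille-page-number">⠼⠃⠙⠛</span>
```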

Basile asked if there needs to be a ticket about external navigation. William pointed out that whatever system we end up basing this on, we’ll have the means to travel to an external link. This needs to be a separate issue so we can prioritize and discuss.

Question: Would hiding headers, footers, and page numbers be high or medium priority? Charles replied that adding the semantics is a high priority to get into specification at the base level; we can figure out what to do with the reading system, but getting this functional with correct semantics will be much easier to do at the beginning than later. High priority to add semantics.

Avneesh asked that the title of this use case be changed to reflect what we have discussed.

Issue 11 – Open files quickly

The goal in creating our file type is that files are not so large that they open slowly. One way to do this is separating braille into separate volumes to make them easier to open.

George recalled a single braille file containing 1,500 pages that was very difficult to open. It is part of the design of the file format to allow it to be broken up into whatever logical divisions we’re thinking of (like chapters). Avneesh clarified that if we have separate files, we need a mechanism to read the chapters in order.
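
As a purely hypothetical sketch of such a mechanism, loosely modeled on the way EPUB lists its content documents in reading order (nothing here is agreed, and the element names are invented):

```xml
<!-- Hypothetical reading-order manifest for a title split into chapter files.
     Element names are invented for this sketch. -->
<readingOrder>
  <item href="front-matter.xhtml"/>
  <item href="chapter-01.xhtml"/>
  <item href="chapter-02.xhtml"/>
  <item href="chapter-03.xhtml"/>
</readingOrder>
```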

All agreed to assign high priority.

Issue 12 – Automatic conversion from excellent source files to E-Braille

We want automatic conversion to be possible. George explained that it is an important design decision to pull from the largest arena of source files that could be automatically converted. This will give us the biggest collection of content very quickly; if we can convert EPUB 3 automatically, we will have millions of files available.

James said that this is an implementation issue, not to do with the specs. He asked if this has more to do with translation software and what formats they can handle. George explained that it is an issue of semantics being translated. Avneesh added that we need to ensure we have compatible structure markup that maps with HTML markup. There needs to be harmonization. Everything else is related to conversion and production.

Matthew said he would check for an existing use case on this issue and create one if it did not exist. He also brought up that too many semantics in the new file format could prohibit conversion.

Basile asked, “Have you tried a new system that converts EPUB to braille? This could be a good way to check if semantics would be compatible.” William explained that in his experience with BrailleBlaster, the print file types (EPUB and HTML) aren’t specific enough to do a perfect conversion, but there is enough markup to make a good conversion.

Question: Do we give this a label of authoring, since it is mostly about implementation, or is it part of the content spec such that we need to give it a priority? James stressed that we need to consider a minimal set of semantic elements we want to include in the eBraille spec. Nicole agrees. This is an implementation issue. We need to be thinking about tagging and semantics from the ground up. From a NIMAS standpoint, in K-12 we would want to ensure we can convert “straight out of the box.” There is a lot of content in NIMAS format.

William proposed labeling as an authoring issue and giving it a high priority. All agreed.

Issue 13 – Dynamic refreshable tactile graphics for use in digital source files

A tactile file format would be useful not only in this specification but wherever you want to have a refreshable tactile display. George said he thought this would be an independent spec unless we can find an existing standardized file format.

Question: Could SVG be used here with certain additional requirements? What additional requirements do we need in a tactile graphics format?

George described an instance in which he created an XML vocabulary to describe graphics. James pointed out that information could be included as metadata in an IMG tag. A JPG could be used.

Question: Do we use regular image formats and then rendering becomes a reading system thing? James replied we could use a regular file format, define it dot by dot, or many more ways.

William asked, “How much are we willing to burden the reading system when it comes to graphics?” If it is the reading system’s job to OCR the print and then translate it into braille, that could be quite a burden and limit support for the file type (among embossers especially). Matthew added that we need to preserve tactile graphics rather than render print ones.

Question: What’s the minimum viable graphic we need to include pre-transcribed graphics? The ability to render other image formats should be medium priority. For Richard, the minimum viable is tactile graphics created by an expert. For James, the minimum viable would be monochrome, pixel by pixel, and not scalable. Richard suggested that the first stage be designed in such a way that it has future extensibility.

Andrew explained that with PDFs, we can deliver meaningful graphics and braille symbols to a multiline device (either via an OCR device, by OCRing the PDF, or by capturing the content if it was written in a text environment).

Venkatesh proposed, from his experience, that popular graphic file formats should be supported as an embedded file or a link, and that we would leave the parsing of the file format to the reading system. We would not limit the types of files that could be linked or embedded. Things like labels could be added in braille, either as metadata in the graphic file or within the eBraille file (leaving it to the reading system to connect the metadata to the graphic image display). We should not limit more capable readers.

William asked that this be its own use case (do not limit file type, explain how content spec handles images, then leave it up to reading system to create the graphic).

Question: If a single-line braille display cannot represent a TG, what is the fallback? Take the description from the alt text.
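
A rough sketch of the embed-or-link approach Venkatesh describes, including the alt-text fallback mentioned above; the data-* attribute names are hypothetical.

```html
<!-- Hypothetical sketch: the graphic is linked in a common format, the data-*
     attribute names are invented, and the alt text serves as the fallback for
     single-line displays or devices that cannot render the image. -->
<figure>
  <img src="cell-diagram.svg"
       alt="Diagram of a plant cell showing the nucleus, chloroplasts, and cell wall."
       data-tactile-medium="refreshable-multiline"
       data-tactile-labels="cell-diagram-labels.brl"/>
  <figcaption>Figure 3: Plant cell</figcaption>
</figure>
```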

William proposed authoring and content spec and high priority. He also added a reading system tag. Avneesh asked that we split this use case into several separate issues. Richard asked that we revisit this at the beginning of next meeting. Basile will file this as an issue.

Conclusion

Question: Would there be any objection to having Anja, William, and DAISY personnel assigning priority to the remaining issues with the provision that their decisions will be discussed through github and at the next meeting?

All agreed. These priorities will just be a suggestion, all can comment.

2022-08-23 eBraille Problem Statement Use Cases

Present: Thomas Kahlisch, Avneesh Singh, Nicole Gaines, William Freeman, George Kerscher, Scott LaBarre, Tamara Rorie, Anja Lehmann, Matt Garrish, Jennifer Dunnam, James Bowden, Charles LaPierre, Alice O’Reilly, Francisco J. Martínez Calvo, Manfred Muchenberger, Richard Orme

Project page, mailing list

Richard published an updated version of the problem statement. Charles recommended some changes, and he also made some punctuation changes. The next step is to create a project page with info about the project, a mailing list, and the problem statement (including an HTML version of the problem statement and a link to the GitHub repository). The problem statement is on GitHub. We can continue to tweak it in the coming weeks via the issue tracker. Suggestions from Manfred will be addressed in the next few weeks or so.

Meetings: who should join and how can we moderate the attendees list?

Avneesh has created two directories for the problem statement in the GitHub repository; one is final and one is a draft directory where further iterations of the problem statement can go. He also created a directory for use cases, where the use cases can be posted.

Avneesh said he included a Drafts folder for managing use cases for now, rather than using GitHub’s branching method, because not everyone on the committee has experience using GitHub.

Richard needs to add instructions for signing up for the mailing list and a link to the meetings. Question: Should people just need to complete a form to attend these meetings, or will there be moderation in determining who can attend? James prefers we have more information about people (their organization, whether they are a hardware/software producer, whether they are a braille reader, where they are located [country], etc.)

Richard will draft an email message that will auto-reply to the person who completed the form, informing that person they’ve requested to join the initiative. Once the person confirms they do, in fact, want to join the group, the committee will be notified. We will let them know that this is a development group and that we expect them to have expertise in the area and to contribute. Once Richard drafts the message, James will review it and provide feedback before the message is finalized.

Use Cases

Avneesh sent everyone via email the link to the Use Cases markdown file in the same thread as the GitHub repository link.

George talks about how you can clone the repo from GitHub on your computer and then have everything stored locally in an updatable way.

There is a use-case template Avneesh got from the publisher. Is this template okay, or do we need to modify it?

The group reviews the template and an example for the use case template.

Use cases template: Once examples are added using the template, we can determine if it’s sufficient. Under the Scope statement, Richard recommends adding an explanatory statement.

For each use case, there will be an introduction (a statement that follows the template), the details of the use case, and a proposal for addressing the issue (if a proposal exists). Each use case will be housed in the Issue Tracker. Once a use case is discussed and finalized, it will be moved to the Use Cases file and the issue will be closed.

To be able to post a new use case, you first need to register for GitHub. Make sure you send Avneesh your GitHub ID and/or your GitHub email so he can add you to the repository. You can set up notifications in GitHub so you can keep apprised of issues without having to log into GitHub. Avneesh has received GitHub IDs from the following: Jennifer (sent hers just prior to the meeting), Richard, Matt, and Charles

If you are not comfortable using GitHub, you may put your use cases in a Word document or TXT file and email it to Avneesh, who will convert it to a GitHub issue. Once you create a GitHub account, any issues you submit will be associated with your account.

Organizing Use Cases

Possible use case: “As a braille reader/braille display user, I want to move through my documents by heading so that I can navigate much more quickly through the document.”

Rather than being specific with the word “heading” in that use case (since you can replace that word with “paragraph,” or “table,” etc.) and ending up with numerous use cases involving quick navigation, you can instead create one use case for quick navigation and replace the word “heading” with “structural elements (e.g., heading, paragraph, table).”

Currently, based on the problem statement, there are four types of use cases:

  • internationalization
  • quick navigation
  • graphics
  • line wrap/line length

Tamara asks if we should have a separate use case for following internal links and footnotes. Richard suggested we start a use case for this and then in the discussion determine if a second or third use case is needed.

James will write the navigation use case in the next day or two.

Formatting use cases

William pointed out that when you open an issue in GitHub, there is already a template in place with markdown. You can replace the template text with your own wording. You’ll type the title in the Title field and the contents of the use case in Body field. You don’t have to use markup, but you can if you want.

  • Title: A three–four-word description (basically, a hint for what’s in the body of the use case)
  • Body: The details of the use case

Reminder: You may email your use cases to Avneesh if you are not comfortable using GitHub.

No need to provide labels or assign the issues to anyone. We’ll look into those details at a later date.

Coordinating the work related to developing use cases

Volunteers? We need at least two people. These would be co-chairs/co-coordinators/co-leads.

What characteristics do we need in a co-chair?

  • Since APH initiated this venture, it would make sense for an APH colleague to serve as one of the coordinators.
  • We should include someone who produces/reads braille in a language other than English
  • Here are other needs to consider when selecting co-leaders:
      • They understand user needs.
      • They understand processes.
      • They have technical skills. (How technical do the co-leads need to be?)
      • We want to make sure this doesn’t appear to be a US/North America-focused initiative.

William volunteered to serve as co-lead. (Nicole thinks he’s the best bet for APH representation since he has more technical skills than she does.)

We will send a call for volunteers to serve in this role via the mailing list. Other people who are suitable for this role may surface as time passes. Anyone who volunteers now could, perhaps, plan to continue through the end of the calendar year. This doesn’t have to be a permanent commitment.

Aim to have volunteers in place within one week. That will give the volunteers time to get started and report back at the next meeting (two weeks from now).

End of agenda items.

Additional discussion

Should we reach out to other stakeholders now? Yes, we should invite device manufacturers, braille production organizations, etc. Suggestions:

  • HumanWare (William already invited)
  • Dolphin (William already invited)
  • Duxbury (William already invited)
  • braille display manufacturers
  • software developers
  • braille producers
  • braille standards organizations
      • BANA (already involved)
      • ICEB (already involved)
      • Spanish Braille Commission (someone from the call already involved)
      • World Braille Union
      • World Braille Council (Avneesh planning to brief chair of the WBC about this project)
  • What about tactile graphics (TG) producers to cover the TG piece of this standard?
      • Lighthouse for the Blind (San Francisco)
      • ViewPlus (John Gardiner’s company)

William will share the list of entities APH has already contacted. He will send an email to himself, blind-copying everyone, so he can maintain everyone’s confidentiality.

RE the TG side: We should keep TGs groups apprised of our efforts so they can get involved if they want to.

Charles LaPierre said they’d get in touch with Jim Allen and John Gardiner.

Avneesh pointed out we need to be careful we don’t get so many people that we get bogged down in discussions of use cases. Everyone who is aligned with our objectives should be included and should be contributing. For example, if there are people who focus on paper TGs exclusively, they may not be suited for this group. We need to make the group’s objectives clear.

William: RE reaching out to a list of people anonymously: No one knows who’s on the list. Can we publicly say what entities have been invited so that it’s clear we’ve reached out to them and we don't potentially contact the same people multiple times? William has already contacted 60+ people from various organizations. Richard suggests that those on the call let William know who they’ve reached out to, and William can make sure they’re among those he’s contacted. Richard has already contacted some people; he will send William the info he wants William to share with them.

We could include progress reports in various newsletters, with a reference to eBraille page. If people are into braille, they’ll hear about this initiative for sure.

What about sharing at the TPAC W3C meetings coming up in a couple of weeks, with the Publishing Working Group? Avneesh thinks the Accessibility Task Force would be a good venue for letting folks know about this, because we know that publishers will always prefer HTML format over pre-formatted braille. We need to educate publishers about why we need preformatted braille and why we need to put the braille in the markup. Educating them through the Accessibility Task Force is the better choice, rather than writing to everyone individually. We need to be able to make progress; having too large a group at this early stage of defining use cases could interfere with progress. We can always add more people at a later stage.

When we’re ready to add more people, we need to brief them about what’s expected. If they are okay with those expectations, then they should be allowed to join.

2022-08-12 eBraille Problem Statement Use Cases

Present: Avneesh Singh, Richard Orme, Manfred Muchenberger, James Bowden, George ?, Judy ?, Jennifer Dunnam, Scott LaBarre, Tamara Rorie, Anja Lehmann, Nicole Gaines, Basile Mignonneau, Francisco J. Martínez.

Draft of problem statement

Avneesh is absent, so Richard is running the meeting. The plan is to get consensus about whether draft problem statement is good enough to move to the next stage. Everyone agreed to move forward with the meeting.

Changes made since the last meeting (these were mentioned in the email):

  • Reference braille that is being read by a screen reader, dynamically converting the text to braille, versus reading transcribed and formatted braille where it isn’t going through a screen reader and braille-translation process.
  • Strengthen the language to explain that there isn’t a robust and sophisticated way to include graphics in text-based or digital-text formats at this time, but that this is something we can identify as being a problem that needs to be addressed.
  • Drawing on the comment from James about confusion surrounding “standards”—are they technical standards or braille standards?—he referred to coding and the file format itself.
  • Added the principles: braille first; mainstream, refreshable braille; and considering but not being limited by low-powered devices.

Regarding bullet 2

The statement now reads: “There is currently no standardized way for tactile graphics to be encoded for reading on digital devices in either text-based braille or digital-text formats.”

James Bowden said that the statement isn’t technically true, but for the purposes of this document it is “absolutely sufficient.” (He said one can incorporate graphics into programs such as Duxbury, and they will accept certain file types.) Richard asked whether including the reference to “digital devices” is helpful in further distancing ourselves from the “semi-approaches” that exist. James: Yes, the statement is sufficient as it is.

William said that if an embosser or a 20-cell braille display doesn’t support graphics, you can add alt text so that there’s something besides just a blank space. This is not necessarily related to the problem statement, but he wants everyone to be aware and keep the information in mind when writing use cases.

Regarding bullet 1:

James referred to the introductory paragraph. James: Could the next two paragraphs more strongly state that those are the two main ways, maybe by adding a “one” and “two” or “first” and “second”? Once Richard implements the suggestion, the group agreed that that part of the statement would be acceptable.

Regarding the paragraph that begins, “An improved braille file type would also make it easier to share braille across international boundaries...” Currently it reads, “A fixed file is locked to a specific region of the world due to standards and formatting practices.”

Discussion

Manfred suggests (and Richard agrees) it should say, “The file formats that are in use today are locked to a specific region of the world due to differences in encoding, braille standards, and formatting practices.”

Francisco, James, and Anja: The emphasis should be on the encoding, not the formatting.

More questions. Is it the “a” file format or “the” file format (James), and is that always the case (Scott)? Would it be better to say “often...”?

Manfred: It’s also a question of whether we state only what we can solve. What prevents file-sharing internationally is encoding (one part), and there are differences in braille standards that are sometimes contradictory. Do we want to state that? Formatting isn’t as important, but it can sometimes prevent file-sharing between countries, depending on the content. He agrees that “encoding” is the key word. James asked Manfred to elaborate on what it is about differences in braille standards and braille formatting that prevents braille from being shared.

  • One formatting issue might be that your braille file is permanently set to 40 characters wide, but the embosser allows for only 30 characters.
  • Manfred referred to issues related to music. He said that if someone in Switzerland receives a file from the US, they would just have to know what (for example) “dot 2” represents, since it means something different in Switzerland.
  • James wanted to emphasize that he doesn’t mean to suggest that braille written in German needs to be understandable by someone who doesn’t speak German.
  • William: Imagine a braille Utopia, where we are able to share files internationally because we all have a standardized file type, but the formatting could become an issue. It may also be problematic not to have a standard for formatting when you’re in a classroom.
  • Anja said that when she was a student and had to read braille from other languages, she was told what the differences were. This issue of formatting may not be necessary to solve in this group, since there are people who can help a student understand differences in formatting.
  • James: if there’s a paper size mismatch, then that’s a problem (Anja agrees).
  • Tamara: In print, format is not that big of a deal. We need to treat our blind students the same way—they need to learn to figure out differences in formatting, and it makes sense to start early.
  • William: Print formatting seems to be the same. James disagrees. He referred to placement of page numbers, drop headers, etc. He said BANA has the strictest guidelines he’s ever encountered.

Richard, incorporating what others in the meeting suggested, said the paragraph now begins with, “An improved braille file type would also make it easier to share braille across international boundaries.” The second sentence expands on that, and the next sentences explain why it’s relevant and why it needs to be addressed. The entire section now reads:

“The file formats that are in use today are often usable only in a specific region of the world due to differences in braille encoding and formatting. This compounds a key problem in the field: There aren’t enough braille transcribers and if braille could be shared throughout the world, the supply of braille available to individual readers would go up and that would also bring the cost down.”

Scott (who self-proclaimed he is not a braille expert) said that wording makes sense to him.

Principles

Braille First

Manfred commented on the principle “Braille first: The focus is on finding a modern solution for distributing information to be read as braille.” He recommends saying instead, “braille users first,” because “braille first” implies paper braille rather than other forms of braille.

George questioned the use of the word “encoding” here because the encoding is not the first thing. It’s just that the product delivered to the end user needs to be encoded.

There was brief discussion about whether the dots were or were not in the file or if they should be referred to as “representative of dots.”

Avneesh: Do we agree that the policy should be aligned with the mainstream?

Manfred referred to two paragraphs before the section that begins, “That should become the new encoding standard.” Isn’t that the braille file standard, not just the encoding? The group agreed it should be the “braille file standard.”

The sentence now reads, “The new braille file format should be based as much as possible on existing specifications, and if it is found that an existing file type can solve these problems then that should become the new braille file standard.”

The next principle: giving higher priority to refreshable-braille use cases

Avneesh said that while the group agrees that refreshable-braille use cases take higher priority now, it is possible that embossed braille will need to be the first priority two or three months from now. The principle says that if such a conflict surfaces, we will give more priority to refreshable braille displays. James: The reason for that was the “‘navigation thing’, which is not critical at all in embossed braille but is of big importance if you’re on a braille display.”

Scott asked Jennifer Dunnam (“NFB’s braille guru”) to join the group. She knows pretty much everything about braille standards and is also highly competent with technology. She will sometimes be the NFB rep on these calls.

Low-cost, low-power braille displays

We need to make sure that the braille standard works with low-cost, low-power braille displays, such as the Orbit Reader.

George: By “low-powered” do we mean single-line? James: No. We mean computer hardware capability. For example, the Orbit Reader has limited onboard memory and a low-power processor; it’s not a full Linux-based system. It doesn’t have the capacity to unzip files, for example. The Canute is another example; James asked if the Canute is based on a Raspberry Pi. There’s also the limitation in parsing power.

Manfred: He is aware there are different braille displays, where some are standalone and some work only with a screen reader. Some of the standalones can read only text (e.g., some ePUBs); there’s a big variation. The group agreed that we will aim to ensure the braille file standard works with these low-power devices, but it’s possible we cannot support all of them.

Someone asked if we would complicate things if we used the wording “low-computing-power devices.”

Scott: The problem is with the word “power,” and he recommends saying “less sophisticated” instead. James: How about “low power (memory, processor, etc.).” Avneesh agreed with James’s wording. Tamara said “less sophisticated” would be better, and Avneesh suggested: “less sophisticated (low memory, low processing power).” Avneesh and Anja agreed that “less sophisticated” is better for non-native English speakers.

Richard: Less sophisticated devices, but not limited by them. James: also knock out “affordable.” Richard: some of that relates to age. Tamara recommended that we use a footnote to explain this instead, so the language doesn’t clutter up the main document.

Richard: Here’s his understanding of the discussion: consider less sophisticated devices but not be limited by them. “We will note that some of the less sophisticated braille devices (e.g., low memory or processing power) would not be able to natively handle more sophisticated file formats; this can be explored and discussed in later stages.”

Richard: Currently the heading is “Problem Statement.” APH has coined this the “eBRF,” but he’s wondering if “ebraille” would be better, or if we shouldn’t get stuck on this distinction. George: eBRF is too limiting... how about “electronic” or “digital” braille. William: “Braille File Problem Statement”? James: “Electronic Braille Problem Statement”? Avneesh: This is braille on electronic devices, not interactive.

Richard: How about “Electronic Braille File Formats Problem Statement”? Group agreed yes.

Basile: Does saying “electronic braille” limit us since it doesn’t include paper braille?

The heading will read: “Braille File Formats Problem Statement.”

Next Steps

Avneesh: Do we need email approval from the board/group, or do we have the authority to move forward? Scott: The motion from the board was broad enough to permit us to keep moving forward. Richard: Agreed.

The next step is use cases. Richard: When we get to that phase, are we ready to expand the core group to include others who might be interested in working on the use cases (folk who create braille-production software or devices)?

Richard: Initially, we were going to start this discussion with a smaller group that would draft the problem statement. Do we now want to share the problem statement with other groups, such as device manufacturers, Duxbury, Bookshare, ABC, Global Book Service, etc., and ask them for their feedback?

Avneesh: We start by having device manufacturers join, then we begin the work, and then we have others join as we go along. We shouldn’t wait to start moving forward until everyone joins.

George: When considering the scope of work, use cases that are inappropriate are also important. The graphics file format would be usable inside the standard that we’re working on, but it would also be useful in an ePUB. You could deliver a refreshable braille/tactile unit in addition to a normal ePUB, but that will probably fall outside of our use-case development.

Avneesh: The purpose of the use cases is to narrow down the scope, so we can focus on those with the highest priority.

Summary of next steps:

  • Richard will:
      • send the problem statement to the board, so people nominated by the board can join
          • Deadline for providing further comments on the problem statement: week of 08/15/2022.
          • Currently, Richard has the only copy of the problem statement. Where does the group want to house the problem statement? Avneesh: GitHub and the DAISY website.
              • It will be posted on GitHub. And, because we want this to be a public document, it will be copied to the DAISY website.
          • We should invite device manufacturers at this early stage; then people can continue to join gradually, and we will develop use cases as we move along. Richard and Nicole agree with this plan.
          • More info on how to use GitHub will be shared via the mailing list.
  • Avneesh will set up the repository. Format: GitHub and HTML, including creating a template that you can fill out.
      • Contact George if you have questions about using GitHub (he’s been working in it for about six months).
      • George: W3C has documentation on how to use it.

Avneesh acknowledged APH for taking the initiative and starting the problem statement. William thanked the group for their help too.

Meetings: 2nd and 4th Tuesdays of each month. Next meeting: August 23 at 14:00 UTC (7:00 a.m. Pacific). Richard will send a calendar invite for the next four meetings.

2022-7-25 eBraille Working Group

In Attendance: Avneesh Singh, Richard Orme, Anja Lehmann, James Bowden, Greg Stilson, George Kerscher, Thomas Kahlisch, Tamara Rorie, Basile Mignonneau, Charles LaPierre, Davy Kager, Flavia Kippele, Manfred Muchenberger

Avneesh invited all to introduce themselves.

Avneesh reviewed the group’s last “very productive” meeting and invited Richard to explain changes made to the problem statement. Richard shared his screen to address changes in the problem statement document; the problem statement should not only cover BRF but also .brl, .bra, and .pef files. They replaced all references to BRF with “text-based braille formats.” This makes the statement more relevant to a wider audience of organizations by expanding the context to include text-based braille formats and digital-text file formats.

The third change was clarification on international sharing. It emerged in discussion that this issue was related to braille in text-based formats which are locked to regions because of different braille standards. Separating content from presentation was important, and this is clarified in problem statement.

The fourth change was to bring together benefits of producing a solution to this problem and try to capture these in navigation, reflow, ability to include graphics, and improved international sharing. Benefits are not only for multi-line braille displays but also for single line displays. Avneesh – Are all of us happy with the incorporation of this text? Any questions or do we want to improve it further?

Flavia said that the DAISY Consortium is making an effort to move toward mainstream formats. Now we want to create a new standard for braille, but in the context part of the paper we should add something to bring this in line with DAISY’s strategy of preferring the extension of a mainstream standard format over defining a new specific format. Avneesh said this can be added to the principles section.

James suggested adding a bullet point about graphics to bulleted list in the Issue section. Graphics is a limitation in current text-based braille formats. He also proposed changing the wording of the third point about braille standards from “ASCII encoding” to “braille encoding.” Braille standards are not what we’re talking about; we’re talking about how characters are encoded in file. Finally, he recalled that the document mentions using mainstream formats where possible per Flavia’s comment. George agreed with making these changes.

Richard volunteered to review the wording throughout the document to ensure the use of “standard” is not confused with “braille standard.” He will substitute the word “encoding” for clarity.

Manfred said it is not clear which users are supposed to use this standard, especially regarding graphics. Do we really want to have graphics? And one standard? James replied that quick navigation is the major reason why a user would want this new format. With our new format, it will be possible to jump to next header or chapter. This is the biggest improvement for braille display users. Current hardware can’t take full advantage of graphics, but embossers and upcoming tech will.

Manfred asked if the intention is to have users print reading materials. James responded that that is one possibility, but they can also be used on a braille display. Greg Stilson mentioned that APH is the largest provider of braille textbooks in the US, and these are either offered as difficult-to-read braille files (without any formatting) or as embossed braille. With this new standard, we would be able to offer the user choices. The eBRF would allow navigation and formatting even for single-line braille displays. Avneesh added that this problem statement is aspirational (not set in stone), but we will have to make decisions about these topics as we move further into development.

Charles asked what exactly a BRF is and wants to add a section to the problem statement defending why users need the eBRF. Why can’t the device itself translate the text into braille? James replied that we could add a glossary at the end with definitions of different file types. He also suggested adding the word “current” to modify “existing file types.” He explained that existing file formats are text-only (unable to carry semantic markup or formatting). If a braille file is meant for a 40-cell display but is read on a 20-cell display, there are big problems with format. There are also issues with dot combinations in international sharing. Graphics also cannot exist in a plain .txt file. A .pef allows semantic markup. Avneesh asked why it would not be sufficient to use EPUB or HTML files for a refreshable braille device. James explained there are two reasons: first, those are print-first formats, and second, there are issues with copyright.

Richard added that while existing digital formats offer navigation, reflow, and text-to-speech, they do not preserve spatial formatting of braille, and translators cannot specify characters to be used. Richard noted that they had not talked about copyright and asked if his paragraph in the context section was sufficient. Avneesh asked if the paragraph is more appealing to a braille expert but not to a novice. The document is supposed to answer questions like the one Charles asked. Charles asked if we need to talk about other types of braille hardware in the document, and Avneesh said they would address issues like that in the use case stage.

Richard noted that Flavia’s and Charles’s questions both pertained to why digital braille files rendered on braille displays are insufficient. This transcriber piece hasn’t come through strongly enough. We need to strengthen the case around the loss of formatted braille.

Greg asked if there is space in the document where we can build in existing use cases as examples of problems for everyday users. Avneesh reiterated the objective of the problem statement is to ensure all our needs and expectations are identified, not to be a perfect document. Efforts may be better directed elsewhere.

Flavia agrees that we should not overload the problem statement document. She asked who this paper is for. Is it for this group or the DAISY board? The audience will inform how much detail we need to include. Flavia asked why quick navigation is the most important feature. James clarified he had only meant that it was most important in the context of braille displays. With any existing braille files, you cannot jump to the next heading or navigate to a certain page on braille displays. Flavia agrees with that but asked if we all agree on priorities and, if so, whether they should be outlined in the problem statement or addressed later. Avneesh highlighted the four high priorities agreed upon at the last meeting (navigation, graphics, reflow, internationalization). Flavia asked if we must fulfill all four priorities and in what order. James explained that all four of these issues are achievable. These are expressed in the paper as benefits.

George specified that we are not laying out requirements for new specification or including use cases in the problem statement document. If you’re “converting the text on the fly”, existing specifications (EPUB, HTML) do a pretty good job. However, we don’t have the ability to preserve formatting or to include graphics on a refreshable braille display. That kind of file could be embedded in an EPUB if it was standardized (as a details element) that could be shown on demand. We do need to answer questions like Flavia’s in the future.

Richard explained that we did not include principles we’ve discussed in the problem statement document, including braille-first file formats, prioritizing digital braille over embossed braille, and not limiting the new standard to the simplest technology. In this meeting, Flavia raised the question whether we should have a principle which talks about the use of mainstream formats. Richard said this is a good fourth thing to add to the list. Avneesh asked that we make the principles very clear and visible.

Richard suggested that we may consider adding a use case to clarify why existing formats are not sufficient.

James pointed out that a mainstream screen reader cannot translate music or many other formats. There are cases where braille-first is vital. If you start shipping out ordinary EPUB files to average users, they could be used to recreate a commercial product, and that could be a copyright problem. If the file is braille-first, these problems disappear.

Avneesh asked if we wanted to add something about HTML and EPUB to the first paragraph. Flavia responded no but identified a contradiction between existing standards and braille-first which we must address. A novel is much easier than a textbook. Manfred explained this is different in each country. He mentioned George wanted to be able to listen to text and read it on his braille display. He suggested a file with options (braille or printed text), but how could we listen to the text if there is only braille? Avneesh said they are excluding technical challenges from the problem statement. Reading the text aloud is out of scope of this document. Manfred asked if this will be important from a user perspective, and the group replied that the issue will be addressed later when use cases are studied.

Charles stressed the importance of potential copyright issues. Avneesh mentioned copyright is different in each country and suggested we not address it in the first step. Basile expressed concern about having the EPUB under the BRF. James said that the hope is there will be a simple “Save As” option. Avneesh asked if there is anything else we should be doing. Richard mentioned he has added notes to the document throughout the meeting. Richard said that the file type should be able to be translated by Duxbury; we want to incorporate semantic markup in the file type. Richard will send out a link to the document so others can make further improvements.

Question – do we need to add a section clarifying audience of the document?

Avneesh says we should have this in our understanding. Greg explained that examples of when these changes will be relevant could help bolster our arguments to those who don’t know much about braille.

Richard wants to make the base text as good as possible but add stand-out text boxes that tell a story. It is clear that the document is not explaining the problem as effectively as we had hoped. We can collaborate on this document on the cloud (so we can work on it outside of meetings).

Tamara works for NLS and suggested the more descriptive information be included in an appendix. If we continue to add to the length of the document, no one will read it. Additional appendices can be added as needed.

Richard said we should have the document “good enough” at next meeting and move conversation to next stage.

The group decided when to meet next.

2022-7-15 DAISY eBraille Meeting

In Attendance:

  • Richard Orme—DAISY Consortium Chief Exec (UK)
  • George Kerscher —DAISY Consortium Chief Innovations Officer, part-time employee of Benetech (Missoula, Montana)
  • Nicole Gaines—APH Nimas Project Director, Director of Instructional Materials Access Center (NIMAC) (Louisville, Kentucky)
  • Tamara Rorie—National Library Service and BANA rep
  • William Freeman—APH Tactile Technology Product Manager (braille technology, accessibility testing)
  • Thomas Kahlisch–DAISY rep (Germany)
  • Francisco Martinez—National Organization of Spanish Blind Persons, DAISY board member (Madrid, Spain)
  • Nicolas Pavie- Software Engineer, Braillenet
  • James Bowden—RNIB, background in software and braille translation (UK)
  • Davy Kager—Dedicon, Product Manager for text and digital products (Netherlands)
  • Greg Stilson—APH Director of Global Technology Innovation (Louisville, Kentucky)
  • Avneesh Singh—DAISY

Problem Statement

Avneesh: Goal is to review the problem statement (PS) put together by APH (he complimented the statement) to make sure everyone is in sync about what this initiative is about. He and Richard are co-facilitators. Richard will be editing the PS document live.

George: RE improving the braille ready files (BRF) to include navigation markup: The need is clear, especially now that refreshable braille displays are dropping in price. Will there be a specification for graphics to be used on these devices? Can they use JPEG and SVG?

Greg: There is not a true electronic graphics standard for dynamic refreshable displays yet. APH is working on an initiative for this, but they want to start with electronic braille. eBRF can work with JPEGs and SVGs. This may need to be clarified in the PS. The goal is to replicate the “textbook scenario,” meaning we want tactile graphics artists to create the tactile graphics.

James: PS boils down to four areas:

  • Navigation
  • Graphics
  • Reflowing of text
  • Internationalization

The priority level of each varies. If you’re using refreshable braille, then navigation is the most important one.

Thomas: There’s a lot of information missing from the BRF. DAISY uses ePUB 3 because it’s a common format and you can put in the accessibility information; they think it would help to put an additional layer in the ePUB 3 to include braille. How do you do this? Will it be a direct extension to the braille displays? Can braille display producers provide this for the ePUB 3?

Avneesh: The last part of the PS says that we will use existing formatting to the extent possible, so this is already being considered.

RE the high-level statement: At another stage we will put things in ePUB 3 and Access HTML, but for now should we focus on the four parts of the PS noted above? (Thomas agrees.)

Richard: Rephrasing the context around this issue of ePUB 3 and HTML 5 may make it clearer that it is part of the overall objective. Suggested that having the BRF as the starting point shouldn’t be pressed so strongly at this point since we have other areas we want to address too. Also looking for markup, navigation, graphics, and internationalization. Want to bring down the prominence of BRF.

Davy: Speaking for the Netherlands and possibly Belgium, BRF is not used at all. They use the portable embosser format (PEF) for embossers; when in digital format, they send customers the ePUB and have their screen reader translate it into braille. That way, the customer can use text-to-speech if they prefer. The Netherlands doesn’t have contracted braille or super-complicated rules (compared to the strict structuring rules that BANA requires). He would like it if the ePUB could support tactile graphics so they can be included with the braille. Question: Including tactile graphics makes sense, but what in braille can a screen reader not read for you? Can something be added to the PS to clarify the difference here?

Nicole: BRF is really important in the US. William agrees, and adds that APH’s thinking was a braille-first standard.

Greg: A screen reader can read all of the braille, but it cannot read the formatting. APH is looking for a format that will indicate spatial orientation, which is important for math (e.g., long division, matrices) and science concepts. He would like there to be spatial tags so the transcriber can format and make decisions on behalf of the user in order to ensure the braille looks proper. For example, in math and science, it is helpful to make it clear when there are columns so that the screen reader can read the formatting correctly.

Avneesh recommends that we say BRF is “one of the popular” braille formats and not “the” braille format.

Francisco: When he first read the PS, he immediately assumed this was not the document/initiative for him, because BRF is not the format he uses (nor does much of Europe use it). In Spain, they use a text file with ASCII characters, which has all of the problems described in the PS. To make the PS more universal, it would be better to call them text-based braille files, since every country has its own text-based format. If we want to make this available to people across the world, we need to make sure that all formats are covered, not just BRF.

One thing that is missing from the list of problem areas is PEF files: regardless of the character set a country uses, a PEF file can still be embossed as intended.

It is suggested that we would like to combine the advantages of electronic and embossable braille so people across the globe can use it.

Avneesh: Would instead suggest we reword the statement to specify the features we want.

James: The point about internationalization is covered in the PS as far as he is concerned. The advantage of a PEF file over a BRF file is that it uses Unicode braille characters rather than country-specific ASCII encoding. He notes, however, that there are clashing requirements here: he asks Thomas to confirm that he would like the new file format to emboss exactly as intended on any embosser, and asserts that this clashes directly with the premise that you want to reflow the text to whatever your line length is.
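A minimal sketch of the encoding difference James describes, assuming the North American Braille ASCII table and mapping only a handful of cells for illustration; a real converter would need the full 64-character table for each regional encoding:

```python
# Illustrative only: map a few North American Braille ASCII characters to
# Unicode braille patterns (U+2800 block). PEF stores the Unicode patterns
# directly, so no country-specific table is needed to interpret the dots.
PARTIAL_ASCII_TO_UNICODE = {
    "A": "\u2801",  # dot 1
    "B": "\u2803",  # dots 1-2
    "C": "\u2809",  # dots 1-4
    " ": "\u2800",  # blank cell
}

def ascii_braille_to_unicode(text: str) -> str:
    """Convert ASCII braille to Unicode braille, '?' for unmapped cells."""
    return "".join(PARTIAL_ASCII_TO_UNICODE.get(ch.upper(), "?") for ch in text)

print(ascii_braille_to_unicode("ABC CAB"))  # ⠁⠃⠉⠀⠉⠁⠃
```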

Avneesh: This has been a discussion with the technical team, as well, during early conversations with APH. It would be difficult to have reflowable text with embossing in the same file format. We can wait until later to determine if we can put them in the same format or if we need to split them. Is having one file format that does both something that we want to achieve? This is something that is currently in the PS, but perhaps not strongly.

James: The question is whether your focus is on embossing or refreshable braille. Which is our priority?

William: The creator of the file would specify page size, line lengths, number of lines, etc. You will still have the option to reflow it. This would be useful in a classroom or testing situation, where you don’t want the student to be able to manipulate line lengths; you want them using an absolute standard that could be read by the reading software.
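A hedged sketch of what William describes, where the creator fixes the page geometry but may permit reflow; the field names below are invented for illustration, since no metadata vocabulary has been agreed:

```python
# Hypothetical layout metadata (names invented for illustration): the file
# creator records the intended page geometry, and reading software honors
# it unless reflow is explicitly allowed.
intended_layout = {
    "cells_per_line": 40,   # line length the transcriber formatted for
    "lines_per_page": 25,   # page depth the transcriber formatted for
    "allow_reflow": False,  # e.g., locked for a classroom or testing scenario
}

def effective_line_length(device_cells: int, layout: dict) -> int:
    """Use the device width only when the creator has allowed reflow."""
    return device_cells if layout["allow_reflow"] else layout["cells_per_line"]

print(effective_line_length(32, intended_layout))  # 40: the layout is locked
```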

Thomas said that the problem of embossing has already been solved with PEF files. The problem we’re addressing in this initiative is the dynamic reading of braille.

James disagrees. There is a problem with embossing. He gave an example of 40 x 25 embossing, which is standard in the US but not in the UK. He thinks reflowing text for embossing is important.

Avneesh (bringing the group’s attention back to the first line): Can we soften this line so we don’t inadvertently discourage people from continuing to read the document?

Richard: Rather than specifically talking about BRF, he would like to focus on people who access braille via different file formats. Note that these file formats do not provide all of the features they want with digital braille—navigation, semantic markup, support for graphics, internationalization, and preservation of formatting information.

The first line currently reads, “The current braille file standard...”. He proposes we change it to: “Current braille file standards, such as the popular braille-ready file (BRF) format and the portable embosser format (PEF), are used to create...”

It is suggested that this doesn’t address those who read braille via text-based formats, such as ePUB, because of the probable loss of formatting. Avneesh agrees we should address this shortcoming in that opening sentence.

Davy brought up UEB, which is not easy to transcribe automatically. In the Netherlands, they do not provide these formatting details, i.e., they don’t transcribe; rather, they give readers the basic braille and expect them to work out the formatting.

Richard doesn’t think this is a UEB issue at all. He asked Davy what they do about braille music, because there have to be specific braille characters.

Davy: Currently they only provide braille music on paper but would like to be able to offer digital music braille. While you can put Unicode braille in an ePUB, it’s not the best option. That would be a good use case for the Netherlands; most people there are used to just getting text and using computer braille.

James: Can envisage titles that are mainly text-based but still have formatted braille, such as tables, maths, and music.

William: Was thinking braille-first but is intrigued by a mixed format. This needs to be addressed in the PS.

Greg: Suggested we discuss the formats used to make hard-copy braille, its history and shortcomings, in the first paragraph of the PS. Then, in the second paragraph, note the benefits of text-based braille, such as ePUB—i.e., users have access to markup etc.—but then also identify the problem that exists when one is using a screen reader that is translating the braille.

James: Two other issues with using the screen reader to do the translations: 1) It will only do a default translation. This is fine 90% of the time, but there are always special cases, e.g., learning material where you want to use contractions. 2) Copyright holders get anxious about copyright unless it’s a specialized format, such as a braille-first format. Thomas said it’s important to have additional rendering in the ePUB, because then it differs from the standard, commercial ePUB.

George: Lloyd Rasmussen successfully converted ePUB text to grade 2 braille. The group will need to take into consideration whether you lose text-to-speech capabilities in this scenario.

Richard summarized what the first two paragraphs should cover: Paragraph 1: text-based braille formats used for embossing and their associated shortcomings. Paragraph 2: mainstream text-based formats, including their advantages (such as navigation) and disadvantages (such as when you’re using a screen reader solution, because you lose formatting and other choices made by the transcriber).

Greg suggested saying in the second paragraph, “today’s text-based mainstream standards” to clarify we’re talking about more of the publisher standards. James suggested “print-based” standard, and Greg agreed.

Avneesh, who’s coming from the standards world, is not familiar with the term “print-based standard.”

James: Using the word “print” suggests it will end up on paper—but here we’re talking about digital text.

Richard: How about: “mainstream files containing text, e.g., ePUB.” He recommends giving examples for both scenarios so that we can broaden the options, since not everyone uses BRF. Other formats include BRA, BRL, and PEF file formats. Examples for the second paragraph: ePUB, HTML, and DOCX.

Avneesh: In the first bullet point, should we replace BRF with “the traditional embosser formats”? William says yes.

Avneesh: Which is the higher priority—reflowable or embosser?

Greg: Initially, APH prioritized digital braille because markup, navigation, etc. are missing from today’s standards. They are relevant with digital braille, less relevant with embossing. He’s fine with both being of equal priority.

Davy said there are two aspects that fall together: 1) whether you want the exact same braille dots to appear regardless of where in the world you read the braille and 2) whether you want to have exact formatting, which is important for embossing, mathematics, and the other listed examples.

Richard: Bear in mind that the braille displays in use today may not be in use tomorrow. Also, we need to keep multi-line braille displays in mind.

James: Four priorities again: 1) Internationalization: This is equally important for embossing and for digital braille, and we want to make sure the dots that are intended come out for the end user. 2) Navigation: This is important for “soft” braille, but not at all important for embossing; 2-1 to soft braille. 3) Graphics: Equally important, if your hardware is capable, whether for soft braille or for embossing; 3-2 to digital. 4) Reflowable text is equally important; you want to make optimal use of your line length; 4-3 to digital. (The running score tallies how many of the priorities matter for digital/soft braille versus embossing.)

Davy suggested a fifth priority—semantics, which would cover things like italics. James asserted that semantics, which isn’t currently in the PS, is the same thing as navigation. The advantage of marking italics (and the like) would be that you could show/hide some aspects of the document. One would have to think very carefully about what to show or hide, because complicating things like this would start to “get very, very interesting difficult situations very quickly,” but it’s an interesting possibility. It’s 5-4 to digital.

Avneesh: Semantics is important in the publishing industry, e.g., for bibliographies, glossaries, dedications, etc. Thomas said that semantics are important sometimes but not important other times. As a blind user, they said it would be important to know if they were reading a glossary or if they were reading the body of the document.

Avneesh: Is it a high priority for the first iteration of the standard?

James: It’s included as a current provision in digital text-based formats, such as HTML and DOCX. Suggested this is something whose priority could be determined once we begin work on Stage 2—the use cases. Semantics is part of navigation. Example: You can’t jump to the next table if tables aren’t marked as such.
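A small sketch of James’s point that navigation depends on semantics; the block structure and role names are invented for illustration:

```python
# Illustrative only: "jump to the next table" works only if blocks carry a
# semantic role; without the "table" role this search finds nothing.
blocks = [
    {"role": "heading", "text": "Chapter 3"},
    {"role": "paragraph", "text": "Introductory text..."},
    {"role": "table", "text": "Periodic table excerpt"},
    {"role": "paragraph", "text": "Discussion of the table..."},
]

def next_block_with_role(blocks: list, start: int, role: str):
    """Return the index of the next block with the given role, or None."""
    for i in range(start + 1, len(blocks)):
        if blocks[i]["role"] == role:
            return i
    return None

print(next_block_with_role(blocks, 0, "table"))  # 2
```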

Davy said navigation derives from semantics.

Richard said we’ve talked about the advantages of the text-based braille formats, in that they preserve the choices the transcriber makes. And we have the advantages of the digital text-based formats, which enable one to choose what Davy mentioned, e.g., what code is used. We need something that has the advantages of both.

Avneesh: What does the group think about text-to-speech, which George mentioned? Do we want it in the PS?

Thomas said that our target is braille—not speech output. James agrees. He believes this should be a braille-first standard.

Davy, RE text-to-speech: With ePUB you can read either way, but you don’t get fantastic braille. Later, in a technical stage, we could discuss a multi-layer approach, where you have multiple versions (e.g., print, online, small screen, and braille) and can switch between them, but he agrees this is not in the scope of the PS.

Avneesh: Are there views on the priority of using this file format for embossing?

Greg: The goal is to cover both reflowable and embossable braille; however, reflowable braille is the first priority.

James: “One could almost say that to emboss the file, we take the reflowable text, reflow it for particular line widths, and then emboss it.”

Thomas: “The highest priority is to improve the reading in a digital way, but we also need to have a concept to emboss this format.”

Richard is just “catching the points” in the following bullets; he will polish them later. Priorities:

  • braille-first: Whilst there may be some advantages that come from text, the focus is always on the braille reading experience.
  • Both reflowable and embossable braille are in scope, but the first priority is reflowable digital braille.

Avneesh: Asked William and Greg whether, from APH’s perspective, the BRF file is locked to one region of the world or whether it is more a matter of formatting.

William: It’s both formatting and encoding, i.e., it uses ASCII instead of an international standard like Unicode, and the formatting differs as well. James: ASCII is used across the English-speaking world. William: But the formatting is still very different.

James, addressing Greg and William and coming back to the formatting question: Does it matter whether the file can be reformatted between, say, the UK and Germany?

William suggests that if we could have a shared language for the markup, the presentation could vary based on what braille region the reader is in. In the original drafts, this was done using CSS, and that still needs to be fleshed out.
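A sketch of this idea in Python rather than CSS, purely for illustration; the regional style values shown are invented examples, not actual braille authority requirements:

```python
# Shared markup, regional presentation: the same marked-up heading is
# rendered differently depending on the reader's braille region. The
# style values are invented for illustration.
REGION_STYLES = {
    "us": {"heading_indent": 4, "blank_line_before": True},
    "uk": {"heading_indent": 2, "blank_line_before": False},
}

def render_heading(text: str, region: str) -> str:
    """Apply region-specific presentation to the same semantic heading."""
    style = REGION_STYLES[region]
    prefix = "\n" if style["blank_line_before"] else ""
    return prefix + " " * style["heading_indent"] + text

print(repr(render_heading("CHAPTER 1", "us")))  # '\n    CHAPTER 1'
print(repr(render_heading("CHAPTER 1", "uk")))  # '  CHAPTER 1'
```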

James asks, what about braille music? It would be hard to reformat that.

Thomas responds that this cannot be done easily and that it would be easier to retranscribe the file.

William asks: do we want to make it clear in the PS that there are numerous issues we won’t be able to resolve with reformatting, but that we’ll solve as many as we can?

James: There are cases both for and against reformatting, and both are valid.

Avneesh: Should we address this when we get to the use cases?

Thomas said it’s important to be clear as soon as possible about what we can expect to do with these file formats. They said that the music example James and Thomas discussed is both extreme and not very common, whereas formatting is really important in maths, science, learning exercises, etc. Are we talking about plain text? More than plain text? It is important to specify this really soon.

James, returning to the subject of braille dots first: thinks it would be out of scope to take, for example, some English text in UEB and retranscribe it on the fly into Nemeth or another code.

Francisco asks what we can do when we talk about reflowing the text. The size of the line is very important. If you change that for plain text, it may work out fine, but with certain books it may be a problem. We have to be clear about what kind of text we expect to use for these files. Will it be good for any kind of text or any kind of book?

Avneesh: The main question we should ask as we proceed is what will this improve for the end user?

James defined flowable versus reflowable text again.

Thomas said that we won’t change Nemeth code into another code.

Francisco said this is different from localization, which he defined as: if they send this file to someone in another region of the world, that person will still be able to read it. James added “the same dots.”

Richard, per Francisco’s comment: This is not limited to text-based materials, just that there should be formatted braille and the formatting should be retained.

Francisco mentioned that graphics are not reflowable, so we would probably lose the graphics.

James: Graphics can come as re-scalable, but not reflowable. In some cases, you want to be able to rescale an image, but in others you do not want to do that.

Nicole, circling back to where APH began: The core use case we’re looking for is providing K-12 instructional materials on a multi-line braille display with graphics that are integrated and re-scalable. She referred to a project APH is working on where students would be able to zoom in and out of graphics, so scalability is really important for APH’s use cases.

Greg: It depends on the type of graphic. For those created by a TG artist, we don’t want the student to zoom in because we want the student to learn about the TG at the size provided. However, you do want them to be able to rescale a PNG file or other file type, where reading it at a particular scale is not important.

Davy from the Netherlands: Graphics integration is the buy-in for the Netherlands.

Avneesh: There is a paragraph about providing graphics on a multi-line braille display. Do we want to specify where graphics are important, such as in science, maths, arts, and educational materials? Richard: We could add a list that gives examples of areas where graphics are needed, then talk about the advantages to the end user.

James: Another thing to consider, possibly not for this meeting, is the usage of graphics. There would have to be a default when you emboss. He says we need to include a list of applicable situations, e.g., curricular areas and access to graphics.

Avneesh: What benefit would this file format be on a single-line braille display? It would be helpful for navigation, reflowable text, and internationalization. There’s disagreement on this point about whether this needs to be mentioned in the PS (con: Thomas, pro: Tamara, Richard).

Richard: What computing power would be needed?

Avneesh: Right now there are braille displays that can only open plain txt files.

James mentions you can make a plain text file with semantic information; you just use control codes.

James: We need hardware manufacturers, software developers, and braille-production agencies to be on board with this initiative.

William: Someone recommended we include simpler, text-based formats in the zipped bundle (BRF, PEF, etc.) so that older devices can support this file format in the interim, until we transition to this being the main file format. James: You’ve already blown older displays out the window, because they don’t have the processing power needed. Avneesh: Does this need to be added as a “good to have” option? James: We might constrain ourselves if we include this. Richard: This is a question to be answered, rather than something that needs to be part of the PS. We need to ensure that we don’t constrain ourselves to older hardware, and that is generally agreed upon.
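A minimal sketch of the packaging suggestion, assuming a zip container; the file names and layout are invented, and the placeholder contents are not real BRF or PEF data:

```python
# Sketch only: bundle legacy fallbacks (BRF, PEF) alongside the main
# marked-up content in one zip container so older devices have something
# they can open. Paths and contents are placeholders.
import zipfile

def build_bundle(path: str) -> None:
    with zipfile.ZipFile(path, "w", zipfile.ZIP_DEFLATED) as z:
        z.writestr("content/book.xhtml", "<html><!-- marked-up braille --></html>")
        z.writestr("fallback/book.brf", "placeholder BRF content")
        z.writestr("fallback/book.pef", "<pef><!-- placeholder --></pef>")

build_bundle("example-ebraille.zip")
```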

Next steps:

  • Richard will incorporate everything into a new PS document.
  • We need to have another conversation about this following those changes.
  • Meet on July 25.