auto-advance doesn't allow parts of canvases to advance #1632
I worry that this (and …)

Could we not meet this same need by using canvas-on-canvas annotations, with a single canvas of longer duration?
Along with #1612 about inheritance of behaviors, is @workergnome's suggestion (get rid of auto-advance and just use a single canvas with a longer duration) an easy way out of several thorny issues?
It would be an easy way out, but we would lose some of the power of the model to present the object. As a sound archive, it is important that I can convey in the model the distinct physical aspects of the real world object, as well as making it easy to navigate and experience for the web user.

Views vs structures at work for audio: each tape side in this long recording is a canvas. But they are auto-advance canvases; they keep playing. We want that continuity of sound, but we also want to convey that there are a whole load of tape sides here.

I've added some notes about this issue to this document; the heading "Problems with auto-advance" is the target of that link - I didn't want to dump it all here. This is UV specific - how the UV generates user experience from the model - but it shows why auto-advance is a very useful tool (and uncovers some unaddressed things too - the UV isn't looking for or dealing with auto-advance on ranges at all).

Adopting the canvas-on-canvas approach would mean only one canvas in the Manifest. Elsewhere the document describes how the UV already synthesises a single virtual canvas from a run of auto-advance canvases, and renders that single canvas when navigating using ranges - and only in that mode. This approach solves many usability issues for complex content.
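For concreteness, a minimal sketch of the multi-canvas shape described above, with hypothetical URIs, labels and durations, and omitting other required Manifest properties (`@context`, `label`) for brevity. Each tape side is its own Canvas carrying the `auto-advance` behavior, so playback flows on while the side structure stays visible to clients:

```json
{
  "id": "https://example.org/manifest/tapes",
  "type": "Manifest",
  "items": [
    {
      "id": "https://example.org/canvas/side1",
      "type": "Canvas",
      "label": { "en": ["Tape 1, Side A"] },
      "duration": 2700.0,
      "behavior": ["auto-advance"]
    },
    {
      "id": "https://example.org/canvas/side2",
      "type": "Canvas",
      "label": { "en": ["Tape 1, Side B"] },
      "duration": 2700.0,
      "behavior": ["auto-advance"]
    }
  ]
}
```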
I think the disagreement here is over what a canvas is. @tomcrane, you say in your doc "Canvases represent distinct views of the object." I would say that "Canvases are abstract 2D spaces for displaying content", and that time-based canvases are 3D spaces. The discrepancy between these interpretations, I think, is whether IIIF is truly presentational, or whether we're also using it to model Real World Objects. We can create the desired behavior (via a 3D space containing a series of other 3D spaces), as an abstract presentation of the content, tailored for that specific view - it's just not a one-to-one match with our conception of the Real World Object.
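One possible shape for that "3D space containing a series of other 3D spaces" - a sketch only, since canvas-on-canvas painting is not a settled, spec-blessed pattern, and the URIs and timings here are hypothetical. A single long outer Canvas has the side Canvases painted onto it at temporal offsets:

```json
{
  "id": "https://example.org/canvas/whole",
  "type": "Canvas",
  "duration": 5400.0,
  "items": [
    {
      "type": "AnnotationPage",
      "items": [
        {
          "type": "Annotation",
          "motivation": "painting",
          "body": { "id": "https://example.org/canvas/side1", "type": "Canvas" },
          "target": "https://example.org/canvas/whole#t=0,2700"
        },
        {
          "type": "Annotation",
          "motivation": "painting",
          "body": { "id": "https://example.org/canvas/side2", "type": "Canvas" },
          "target": "https://example.org/canvas/whole#t=2700,5400"
        }
      ]
    }
  ]
}
```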
I don't think this is a disagreement, I think it's two aspects of the same thing. One aspect is the model the spec gives us; the other is the application of that model by implementers of the spec, who are really keen on modelling their Real World Objects using the spec, to produce an experience of those objects for users, but not to enforce a specific user experience for that object in all contexts. That experience is usually not some high fidelity reproduction of a material object.

A IIIF client cannot be subjected to any sort of visual confirmation that it has produced the "right" result (like a CSS test). So the model is not so abstractly presentational or behavioural in the way, say, a model driving a game engine is. There is no correct user experience for IIIF, other than the implied "if you are going to implement this feature then you must respect its MUSTs" - which still doesn't prescribe a specific UI.

I agree with you that the Canvases the spec gives us are abstract 2D spaces for assembling content. Shared, and simple, abstract spaces. The spec is therefore presentational. But then that model, driven by use cases, existing practice and emerging requirements, is applied to the creation of digital surrogates for Real World Objects, where those RWOs often comprise "a series of pages, surfaces, or extents of time"[1]. People want to do that a lot, and the spec is not so abstract that it doesn't go out of its way to make that as simple as possible and no simpler, for a stack of known scenarios.

Even with a split between the spec (presentational; minimal examples for the purpose of syntax) and cookbook (lots of examples of how to apply the spec to RWOs, useful patterns, encouragement of common practice, by and for the benefit of the community), the language of the spec itself is still full of mentions of RWOs to convey what the spec is for. The community is opinionated that it wants to use this spec to model RWOs, and sometimes born-digital dimensioned content... to repeat my user story:
Maybe I should rephrase that:
We have the means to do this, through the (still unrelentingly abstract!) Canvas. A Manifest's Canvases are discrete, dimensioned extents. Community practice, encouraged by shared recipes, uses these discrete extents in particular ways for different kinds of commonly encountered content. And complex viewers like the UV and Mirador use the discrete extents for one kind of navigation/representation of the Manifest. Pages are the obvious discrete extents for books; tape sides are good candidates to map to discrete extents, and we're saying something about the object if we model them that way (see the sketch below).

It's not the spec saying you must do this; the spec, as we agree, is just providing these abstract discrete extents. If we want to produce the specific user experience of these 10 sides of tape played as a single extent of time, we can certainly do that in the way you describe, but we've then asserted just one extent in the Manifest.

That is a separate issue from the meaning of [1] from the Introduction, which I think is correct in its stance.
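A sketch of the discrete-extents approach (hypothetical URIs again): the tape sides stay as separate Canvases in the Manifest's `items`, and a Range in `structures` gives viewers a navigable table of contents over them:

```json
{
  "id": "https://example.org/manifest/tapes",
  "type": "Manifest",
  "structures": [
    {
      "id": "https://example.org/range/all-sides",
      "type": "Range",
      "label": { "en": ["All tape sides"] },
      "items": [
        { "id": "https://example.org/canvas/side1", "type": "Canvas" },
        { "id": "https://example.org/canvas/side2", "type": "Canvas" }
      ]
    }
  ]
}
```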
A note from the community call discussion about this issue:
Does this mean that …? That introduces other problems of state that don't apply for spatial dimensions. Not problems for the model, once we've sorted out what auto-advance means and updated the definition(s), but problems for client implementations that run into the event-related issues @workergnome mentions.

This 2D spatial example may be useful for comparison: it happens to gather the target extents of the range from their canvases and assemble them. But it could have highlighted the two page parts in their whole canvases; that would also be a legitimate rendering. It's a feature of the client (a very simple client in this case); choose (or write) a client to do what you want. Both use cases are accommodated without having to add new behaviours.

Here's the problem. From the point of view of description of content, time is just one more dimension, no different from adding a z dimension. We're just saying this content is here, in this space, at this time. Annotations work the same way for more dimensions. A static observation of that description just says where and when everything is. We can accommodate any complexity of ranges describing where that stuff is. As JSON, as data, as model, it's no problem at all. It's just stuff addressing dimensions.

But the user experience of content with a temporal dimension is fundamentally different - state is changed by the passing of time (reaching the end of canvases, entering and leaving extents that ranges point at). A client has to react to elapsing time, not just user action. This is just the way the Universe works for us! This is the source of most of the complexity we have to deal with for these complex AV use cases.

It's not a modelling issue - we can be clear about the assembly of content in space/time, about what's there. Our content doesn't have to do anything at particular locations or times; it just is there, invariant. IIIF as a model doesn't need to do anything else. Content is always just content. Instead it's an issue for interpreting the model to create user experience, which is raising these nuggets of awkwardness. I don't think they are showstoppers though.
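For the comparison being drawn here, a sketch (hypothetical URIs and coordinates, mixing a spatial and a temporal fragment in one Range purely to show both shapes side by side) of a Range addressing parts of canvases rather than whole ones. A client may gather these extents together or highlight them within their whole canvases; both are legitimate renderings of the same data:

```json
{
  "id": "https://example.org/range/excerpt",
  "type": "Range",
  "label": { "en": ["Excerpt spanning two views"] },
  "items": [
    {
      "type": "SpecificResource",
      "source": "https://example.org/canvas/page1",
      "selector": { "type": "FragmentSelector", "value": "xywh=0,900,1500,300" }
    },
    {
      "type": "SpecificResource",
      "source": "https://example.org/canvas/side1",
      "selector": { "type": "FragmentSelector", "value": "t=120,300" }
    }
  ]
}
```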
Eds call -- Currently the spec has a bug which means that auto-advancement cannot work from segments of a canvas, only at the end of a canvas. The implementation of this is complicated, but the specification needs to allow it.
Closed by #1681
A canvas might represent a long stretch of content, such as a tape recording of oral histories with several histories present. Auto-advance should allow a particular history to be pieced together by advancing from the end of a segment of the canvas, rather than only taking effect at the end of the canvas.

Thus the point at which the play-head advances is determined by the encapsulating resource (e.g. Range), not the Canvas with content, contrary to the definition here.
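A sketch of the shape this would allow (hypothetical URIs and timings; this reads `auto-advance` on a Range as #1681 was intended to permit): the Range carries the behavior and its items are temporal segments of one long Canvas, so the play-head jumps from the end of one segment to the start of the next instead of playing the canvas straight through:

```json
{
  "id": "https://example.org/range/history-1",
  "type": "Range",
  "label": { "en": ["Oral history 1"] },
  "behavior": ["auto-advance"],
  "items": [
    {
      "type": "SpecificResource",
      "source": "https://example.org/canvas/tape",
      "selector": { "type": "FragmentSelector", "value": "t=0,600" }
    },
    {
      "type": "SpecificResource",
      "source": "https://example.org/canvas/tape",
      "selector": { "type": "FragmentSelector", "value": "t=1400,2100" }
    }
  ]
}
```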