Handle canvases with "complex" scenes #9
Replies: 3 comments 3 replies
-
I think this will be one of the last ones to work on, likely pulling together all of the other DX into an opinionated render-everything component. It remains to be seen whether temporal elements will come into this too, to drive experiences like the fire demo.

**Rendering**

In terms of rendering, currently we can:

With some experiments:
Where painting here means adding to an object at x, y, w, h, with zoom etc.

**Data source**

The presentation 3 parser holds off parsing annotations and leaves them intact, so currently when working with them you are working directly with the source. The upgrade path that runs beforehand will, however, upgrade OA annotations to W3C annotations. When deciding what to render, the only thing we really need is the ordered list of annotations, plus a set of strategies for understanding the various types of annotation. The Universal Viewer is really the only viewer that currently looks at the individual content in the annotations and will change which internal viewer it uses based on that. Here is the source for that:

This works by checking the

If that is enough, there's a starting point here. It helps to disambiguate the various valid values of annotation bodies:

```js
const matcher = matchAnnotationBody({
  String: function () { ... },
  ChoiceBody: function () { ... },
  SpecificResource: function () { ... },
  ContentResource: function () { ... },
  OtherContentResource: function () { ... },
});

// ...
matcher(annotation.body); // list of parsed annotation bodies
```

From this point we are really relying on all

**Composition**

So far there have been broadly "right" answers to questions: how to identify data, and how to render specific data. Composition is where we will have to make decisions. I don't have those answers, but I do have some ideas on what we will run into:
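A minimal sketch of how a matcher like the one above could dispatch. The handler names come from the snippet; the type checks are assumptions about W3C annotation body shapes, not the real helper's implementation:

```typescript
// Hypothetical sketch: route a W3C annotation body to one of a set of
// handlers based on its shape. `matchBody` and the checks below are
// illustrative assumptions, not the actual matchAnnotationBody source.
type BodyHandlers<T> = {
  String: (body: string) => T;
  ChoiceBody: (body: { type: 'Choice'; items: unknown[] }) => T;
  SpecificResource: (body: { type: 'SpecificResource' }) => T;
  ContentResource: (body: { type: string; id: string }) => T;
  OtherContentResource: (body: unknown) => T;
};

function matchBody<T>(handlers: BodyHandlers<T>) {
  return (body: unknown): T => {
    // Plain string bodies are valid in W3C annotations.
    if (typeof body === 'string') return handlers.String(body);
    const b = body as { type?: string; id?: string };
    if (b && b.type === 'Choice') return handlers.ChoiceBody(b as any);
    if (b && b.type === 'SpecificResource') return handlers.SpecificResource(b as any);
    // Content resources carry an id and one of the known resource types.
    if (b && b.id && b.type && ['Image', 'Sound', 'Video', 'Text', 'Dataset'].includes(b.type)) {
      return handlers.ContentResource(b as { type: string; id: string });
    }
    return handlers.OtherContentResource(body);
  };
}
```

Calling `matchBody({ ... })({ type: 'Image', id: '...' })` would then land in the `ContentResource` handler, which is enough for a renderer to pick a strategy per body.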
These may manifest as default UI elements, possibly customisable, or as DOM events that can be picked up externally.
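The "DOM events" route could look something like this minimal sketch, using a plain `EventTarget` to stand in for the component. The event name `'choice'` and the payload shape are assumptions, not a defined API:

```typescript
// Hypothetical sketch: the renderer surfaces a Choice to external code as a
// DOM-style event instead of (or as well as) default UI. The 'choice' event
// name and its payload are assumptions for illustration only.
class ChoiceEvent extends Event {
  constructor(public readonly items: string[]) {
    super('choice');
  }
}

// Stand-in for the canvas component (any DOM element is an EventTarget).
const renderer = new EventTarget();

let presented: string[] = [];
renderer.addEventListener('choice', (event) => {
  // External code could build its own layer-picker UI from the items here.
  presented = (event as ChoiceEvent).items;
});

// The component would dispatch this when it encounters a Choice body:
renderer.dispatchEvent(new ChoiceEvent(['image-layer-1', 'image-layer-2']));
```

The same pattern works on a real custom element, since custom elements are `EventTarget`s too.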
-
@tomcrane is there an up-to-date P3 fire demo? I can put together a code sample that composes its elements (manually).
-
**Proposal**

From the developer's point of view this is identical to the scenarios in #1, #2 and #4. This is the point of canvas panel: it doesn't require you to evaluate the scene, just to make something visible. So a canvas that had multiple image content resources in different places, each with image services, would require no more declarative or programmatic markup. If it has choices, or linking annos, or text, or other content that you want to present more UI for or handle user interaction with, then the developer needs to do more work. But these are handled in other discussions.
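For concreteness, a canvas along those lines might look like the following, a hand-written sketch of a Presentation 3 canvas with two painting annotations targeting different regions (the ids, dimensions and regions are illustrative, not from a real manifest):

```json
{
  "id": "https://example.org/canvas/1",
  "type": "Canvas",
  "width": 2000,
  "height": 1500,
  "items": [
    {
      "id": "https://example.org/canvas/1/page",
      "type": "AnnotationPage",
      "items": [
        {
          "type": "Annotation",
          "motivation": "painting",
          "body": {
            "id": "https://example.org/images/left.jpg",
            "type": "Image",
            "service": [{ "id": "https://example.org/iiif/left", "type": "ImageService3" }]
          },
          "target": "https://example.org/canvas/1#xywh=0,0,1000,1500"
        },
        {
          "type": "Annotation",
          "motivation": "painting",
          "body": {
            "id": "https://example.org/images/right.jpg",
            "type": "Image",
            "service": [{ "id": "https://example.org/iiif/right", "type": "ImageService3" }]
          },
          "target": "https://example.org/canvas/1#xywh=1000,0,1000,1500"
        }
      ]
    }
  ]
}
```

Under this proposal, canvas panel would be expected to paint both regions with no extra markup from the developer.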
-
Where a complex scene is anything in the 0.0..01% of scenarios mentioned in #1, #2, #3.
That is, the canvas we are passing in to be rendered is not just one image filling the whole canvas, with (optionally) one image service available for that image.
The canvas has any combination of:
(and more to come)
This depends on the developer "stance".
As a developer, I have loaded a manifest, obtained a canvas, and passed the canvas to a renderer:

```html
<canvas iiif-content=... />
```

I don't want to evaluate the scene myself to determine what sub-layout I'll need to do... I want `<canvas ...>` to do that for me. I don't know what's on the canvas; my custom code stops before we get into that.
Even with Choice I can still be oblivious to what's actually on the canvas and expect the component to render a default scene.
I could choose to become aware that the Canvas has `Choice` present and react to that, if I want (see Choice #tbc).