This repository has been archived by the owner on Apr 10, 2018. It is now read-only.

Add root object for 3D opacity (and other properties) #545

Closed
lbud opened this issue Oct 20, 2016 · 9 comments

Comments

@lbud
Contributor

lbud commented Oct 20, 2016

Motivation

Currently we render individual extruded layers to a separate texture, and then render that texture at an opacity specified by that layer's paint.fill-opacity value back to the map. We do this in order to e.g. not render interior walls in buildings.

This causes problems when creating multiple extruded layers, as the full layers are painted back to the map separately with no respect to other layers' properties, defying the laws of physics and rendering incorrectly. This problem will continue to snowball as we add more 3D features.

Design Alternatives

❌ do nothing 🙅⛄️
❌ abandon the texture approach and render only at full opacity
❌ abandon the texture approach and render interior walls
✅ render all extruded/z-offset layers to the same separate texture, and then render back to the map at a root-level opacity

👇

options (flexible on wording), all starting at root level:

A.

{
  "3d-opacity": <number>
}

B.

{
  "3d-properties": {
    "opacity": <number>
  }
}

C.

{
  "3d-properties": {
    "opacity": <number>,
    "light": {
        ... light properties folded in here, since they only affect extrusions
    }
  }
}
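Filled in with concrete values, option C might look like the following (the light property names are illustrative stand-ins, not a settled spec):

```json
{
  "version": 8,
  "3d-properties": {
    "opacity": 0.8,
    "light": {
      "anchor": "viewport",
      "color": "white",
      "intensity": 0.5
    }
  },
  "sources": {},
  "layers": []
}
```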

Design

I'd prefer not to do A, as further 3D properties may continue to clutter the root object; this feels short-sighted.

  • Breaking: fill-opacity in extruded layers already created will need to be ignored

I want to prefer B. The corollary to the above is that using a 3d-properties object will leave us the option to easily add more 3d properties later without further cluttering the root object.

  • Breaking: (same as above)

However, I instinctively think C is the right answer: light properties already only affect 3d layers, so in the long run this makes the most sense, but is also the most breaking option.

  • Breaking: (same as above)
  • Breaking: light properties get moved around

Implementation

I think it shouldn't be too hard to switch to a single 3d texture that we only render back to the map after all layers are rendered.
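As a rough sketch of that render order (plain JS with assumed data shapes, not the actual gl-js painter): flat layers draw straight to the main target, every extruded layer accumulates into one shared offscreen pass, and that pass is composited back exactly once at a single root-level opacity.

```javascript
// Sketch of the proposed render order (assumed shapes, not gl-js internals).
function renderFrame(layers, rootOpacity) {
  const mainTarget = [];    // stands in for draws to the main framebuffer
  const extrusionPass = []; // stands in for the shared 3d texture
  for (const layer of layers) {
    if (layer.extruded) {
      extrusionPass.push(layer.id); // rendered opaque, depth-tested
    } else {
      mainTarget.push({ draw: layer.id });
    }
  }
  // One composite step for every 3d layer together:
  if (extrusionPass.length > 0) {
    mainTarget.push({ composite: extrusionPass, opacity: rootOpacity });
  }
  return mainTarget;
}
```

With this shape, two extruded layers collapse into a single composite step, so their walls occlude each other correctly before any transparency is applied.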

Considerations

A flat fill layer with any degree of transparency will lose its specified transparency when transitioned to an extruded fill layer via setPaintProperty.

It sucks that these are breaking changes, but it would be best to do this ASAP: at this point relatively few people (early adopters) have created styles with extrusions, they're not in Studio yet, and they aren't in default styles yet.

@mapbox/gl

@jfirebaugh
Contributor

jfirebaugh commented Oct 21, 2016

It seems to me that the ideal solution is to generate a single 3d mesh for each extruded layer, and render each mesh with the specified opacity for that layer. No intermediate texture required.

The question is, how hard an algorithmic problem is generating a mesh from a set of extrusions?

@ansis
Contributor

ansis commented Oct 21, 2016

It seems to me that the ideal solution is to generate a single 3d mesh for each extruded layer, and render each mesh with the specified opacity for that layer. No intermediate texture required.

It's more complicated than that

With 3D rendering you need to render closer features on top of further features. There are generally two ways to do this:

  • Use the depth buffer to stop yourself from drawing further fragments on top of closer fragments. Closer fragments overwrite further ones leaving you with only the closest fragments at the end.
    • Pros: fast, relatively simple
    • Cons: you can only render opaque 3D things because you're only rendering the closest fragment properly
  • Sort all the triangles across all 3D things and draw them in order
    • Pros: lets you have per-object opacity
    • Cons: expensive and complicated. You need to sort tons of triangles on every frame. Triangles that intersect need to be split.

I don't think the second approach is viable at all. Sorting is way too slow for us. The depth buffer approach limits us to opaque objects (just like most video games), which means any opacity needs to be applied to all 3d things together after they've all been rendered. This leaves us with the options @lbud outlined.
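For intuition, the sorting approach ruled out above would look something like this on every frame (a toy sketch with triangles as arrays of `{x, y, z}` points; splitting intersecting triangles is omitted entirely):

```javascript
// Toy painter's-algorithm sketch: sort every triangle back-to-front by
// view-space depth each frame so that standard alpha blending composites
// them in the right order.
function sortBackToFront(triangles) {
  // Centroid depth; larger z = further from the camera in this sketch.
  const depth = (tri) => (tri[0].z + tri[1].z + tri[2].z) / 3;
  return triangles.slice().sort((a, b) => depth(b) - depth(a)); // furthest first
}
```

That sort is O(n log n) over every triangle of every 3D layer, every frame, which is why it's ruled out.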

@lbud
Contributor Author

lbud commented Oct 21, 2016

(I hesitantly add here that I can see a case for, say, opaque floating labels atop the entire extrusion layer for something like 3D leader line labels…and I'm open to the possibility of somehow designing a way to render those as opaque on top of the texture, but I don't have a vision for how that would happen yet.)

(I also hesitate to add that I just had a vision for pitch functions, where you could transition extrusion opacity as a function of pitch. /me hides)
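That pitch-function idea could amount to something as simple as a linear ramp (the stop values below are made up for illustration):

```javascript
// Hypothetical pitch function: derive extrusion opacity from the current
// map pitch via a linear ramp between two made-up stops
// (fully transparent at pitch 0, fully opaque at pitch 60).
function extrusionOpacityForPitch(pitch, stops = [[0, 0], [60, 1]]) {
  const [[p0, o0], [p1, o1]] = stops;
  if (pitch <= p0) return o0;
  if (pitch >= p1) return o1;
  return o0 + ((o1 - o0) * (pitch - p0)) / (p1 - p0);
}
```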

@incanus
Contributor

incanus commented Oct 21, 2016

I just had a vision for pitch functions, where you could transition extrusion opacity as a function of pitch

@1ec5
Contributor

1ec5 commented Oct 21, 2016

further 3D properties may continue to clutter the root object

I don’t see B as necessarily being any more organized than A, since properties are already namespaced by hyphen-delimited prefixes. By the same logic, symbol layers should contain text and icon objects to contain the current text-… and icon-… properties, respectively.

@jfirebaugh
Contributor

It's more complicated than that

😓

If a style has multiple extruded layers interleaved with unextruded layers, and the extruded layers are rendered to their own texture, at what layer index is that texture rendered to the main scene? (Index of the first extruded layer, last extruded layer, after all non-extruded layers?)

Now I am starting to recall the old composite layer type and why it worked the way it did.

@kkaefer
Member

kkaefer commented Oct 24, 2016

FWIW, mapbox/mapbox-gl-native#6596 is a prerequisite for this.

@ansis
Contributor

ansis commented Nov 3, 2016

I wrote:

With 3D rendering you need to render closer features on top of further features. There are generally two ways to do this:

I should have added a couple qualifiers:

There are two ways to do this in webgl with regular blending.

Drawing transparent things without sorting them is called order-independent transparency. Since we can't afford to sort, we would need to use one of these approaches.

approaches not possible in webgl

With some opengl extensions it's possible to keep all the fragment values for a pixel, sort them by depth, and then blend them. Since those extensions aren't available in WebGL, this doesn't help us.

approaches that use different blending

For blending we currently use gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA) which is newColor + oldColor * (1 - newColor.a). This is the standard blending function. It combines the new color with the old color, scaling the old color down based on how transparent the new color is. This blending is order-dependent. If you change the order you draw things you get a different result, which makes it problematic for non-sorted geometry.

This is the standard blending because it fits the basic model of light shining on things and reflecting off of them. You can also blend colors by adding or by multiplying. Both of these are order-independent (because A * B * C is the same as A * C * B). When used directly, multiplicative blending simulates light shining through objects from a source at the back and additive simulates things giving off light. On their own neither lets you draw something that helps distinguish which objects are in front.
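A quick numeric check of that order (in)dependence, using a single premultiplied color channel (illustrative values, not gl-js code):

```javascript
// Single-channel, premultiplied-alpha illustration of why the standard
// blend is order-dependent while additive blending is not.
const over = (src, dst) => ({ // gl.ONE, gl.ONE_MINUS_SRC_ALPHA
  c: src.c + dst.c * (1 - src.a),
  a: src.a + dst.a * (1 - src.a),
});
const add = (src, dst) => ({ c: src.c + dst.c, a: Math.min(1, src.a + dst.a) });

const A = { c: 0.5, a: 0.5 };  // premultiplied: color already scaled by alpha
const B = { c: 0.25, a: 0.5 };
const bg = { c: 0.0, a: 1.0 }; // opaque background

const aOnTop = over(A, over(B, bg)).c; // draw B, then A on top -> 0.625
const bOnTop = over(B, over(A, bg)).c; // draw A, then B on top -> 0.5
// aOnTop !== bOnTop: standard blending depends on draw order.
// add(A, add(B, bg)).c === add(B, add(A, bg)).c: additive blending does not.
```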

But approaches like Weighted Blended Order-Independent Transparency (WBOIT) weight fragments based on distance before additively blending them, then normalize the values in a later pass. It's just an approximation, but it means closer objects contribute more to the end value than further objects. It has downsides (the ordering of objects close together can be hard to distinguish, for example), but overall it's a way of supporting opacity with some level of order distinction.
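A toy version of that weighting scheme (the weight function here is a made-up stand-in; real WBOIT implementations choose it more carefully):

```javascript
// Toy WBOIT-style resolve: accumulate colors additively with a depth-based
// weight, then normalize. Accumulation order doesn't matter, and closer
// fragments contribute more to the final color.
function wboitResolve(fragments) {
  // fragments: [{ color, alpha, depth }], depth in (0, 1], smaller = closer
  let accumColor = 0;
  let accumWeight = 0;
  for (const f of fragments) {
    const w = f.alpha / (f.depth * f.depth); // closer fragments weigh more
    accumColor += f.color * w;
    accumWeight += w;
  }
  return accumWeight > 0 ? accumColor / accumWeight : 0;
}
```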

What I think this means

  • WBOIT gives us a way of doing per-feature 3D transparency. Imagine http://www-personal.umich.edu/~yonghah/rooms3d/ with transparency
  • a global 3D opacity could still be desirable (you want to show the 2D map behind a building, but not other buildings)
  • the questions about when to composite the grouped 3D thing onto the 2D stuff still exists
  • 2D layers should NOT use WBOIT because it's an approximation and will look worse
  • 2D layers with z-offset would be doable if we have WBOIT (but this might get complicated with terrain)
  • having any semi-transparent 3D layer would add a decent but mostly-affordable performance cost

@lucaswoj

lucaswoj commented Feb 1, 2017

This issue was moved to mapbox/mapbox-gl-js#4134

@lucaswoj lucaswoj closed this as completed Feb 1, 2017