Describe numbers encoding in JSON #1574

4 changes: 4 additions & 0 deletions specification/2.0/README.md
@@ -178,6 +178,10 @@ To simplify client-side implementation, glTF has additional restrictions on JSON

> **Implementation Note:** This allows generic glTF client implementations to not have full Unicode support. Application-specific strings (e.g., values of `"name"` properties or content of `extras` fields) may use any symbols.
3. Names (keys) within JSON objects must be unique, i.e., duplicate keys aren't allowed.
4. Numbers defined as integers in the schema must be written without a fractional part (i.e., `.0`). Exporters should not produce integer values greater than 2<sup>53</sup> because some client implementations will not be able to read them correctly.
**Contributor:**

What is the purpose of prohibiting a fractional part? It may be parsed differently in a typed language, I suppose? If any mainstream JSON serializer doesn't do this, it won't be within a developer's ability to change it. If it's already universal in common libraries, as it is for JS, that's fine.

Note that JSON also allows an exponent part: https://stackoverflow.com/a/19554986/1314762. Should that be mentioned?
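For reference, a quick sketch of what exponent notation means for a parser, using plain `JSON.parse` (the sample values are arbitrary):

```ts
// JSON's number grammar allows an optional fraction and an optional
// exponent, so all of these are valid encodings of the value 25.
const samples = ['25', '2.5e1', '250e-1'];
for (const text of samples) {
  console.log(text, '->', JSON.parse(text)); // each prints 25
}
```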

**Member Author:**

From the JSON-Schema spec:

> Some programming languages and parsers use different internal representations for floating point numbers than they do for integers.
>
> For consistency, integer JSON numbers SHOULD NOT be encoded with a fractional part.

We had this issue with files produced by the Blender exporter at one point. `JSON.stringify` is fine. Note that this only affects properties marked as `"integer"` in the schema, such as:

"buffers": [
  {
    "byteLength": 25.0
  }
]

I'm fine with changing "must" to "should" to align better with JSON-Schema language. Exponent notation is usually associated with floats, so it shouldn't be used for integers either.
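As a minimal sketch of the serializer behavior being relied on here (assuming a JavaScript/TypeScript exporter; the `byteLength` value is just the example from above):

```ts
// JSON.stringify writes integer-valued numbers without a fractional
// part, so integer-typed properties come out as the schema expects.
console.log(JSON.stringify({ byteLength: 25 }));   // {"byteLength":25}
console.log(JSON.stringify({ byteLength: 25.0 })); // also {"byteLength":25}; 25.0 is the same value

// Integers above 2^53 - 1 cannot be represented exactly as doubles,
// which is why exporters are asked to stay below that limit.
console.log(Number.isSafeInteger(2 ** 53 - 1)); // true
console.log(Number.isSafeInteger(2 ** 53 + 1)); // false (the literal already rounds to 2^53)
```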

**Contributor** (@javagl, Mar 6, 2019):

This may be only remotely related, but there has been some discussion about this at KhronosGroup/glTF-Validator#8.

The point that people might not be able to modify the behavior of their JSON serializers may be an issue.

**Member Author:**

I think that loaders written in typed languages that rely on the schema's "integer" type are a bigger concern here than half-baked serializers. For example, `"componentType": 5123.0` crashes some loaders.
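A small sketch of why the failure mode depends on the language (the value comes from the example above; the code is not from any particular loader):

```ts
// JavaScript has a single number type, so after parsing, a loader
// cannot even tell whether the file said "5123.0" or "5123".
const parsed = JSON.parse('{"componentType": 5123.0}');
console.log(parsed.componentType);                   // 5123
console.log(Number.isInteger(parsed.componentType)); // true

// A statically typed loader that maps the schema's "integer" type
// directly onto an int field sees the fractional part while decoding
// and may reject the document or crash instead.
```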

**Contributor:**

OK, I'm fine with either "must" or "should", whichever you prefer. Let's also mention exponents here.

5. Floating-point numbers must be written in a way that preserves original values when these numbers are read back.
**Contributor:**

As an implementer I don't know what to do with this as a normative requirement... no one should be writing a JSON serializer from scratch, and I would not know how to evaluate this requirement on a JSON library, other than (as you suggest below) looking for something common and well-tested. Some developers (e.g. on the web) will not have a choice in their JSON implementation.

Perhaps this should just be an implementation note – that implementers should be aware of this when choosing a serialization method, and that common JSON libraries handle it automatically.

Is this something we can detect in validation?

**Member Author:**

All "reasonable" approaches (such as JSON.stringify on the Web, platform-provided JSON implementations in languages like Python, commonly-used C++ libraries) are already aligned with this requirement.

Validating it would require comparing glTF JSON with some "source" data.

**Contributor:**

Nevertheless, I think this is too vague to be a normative requirement, and should probably be an implementation note.

It is probably also worth saying at the top of this section that all of these "additional restrictions on JSON" are already implemented by common JSON libraries, even if they are not required by the JSON spec.
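As a minimal illustration of the round-trip property, assuming the stock JavaScript `JSON` implementation (which, per the comments above, already behaves this way):

```ts
// Serialize a few doubles and read them back; a round-trip-safe
// writer must emit text that parses to the bit-identical value.
const values = [0.1, 1 / 3, 16777217.0, 1.0000001];
for (const v of values) {
  const text = JSON.stringify(v);        // e.g. "0.1", "0.3333333333333333"
  const back = JSON.parse(text) as number;
  console.log(text, Object.is(v, back)); // true for every value
}
```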


> **Implementation Note:** This is typically achieved with algorithms like Grisu2 used by common JSON libraries. This restriction enables efficient and predictable data round-trip with binary JSON representations such as UBJSON.

## URIs
