All "Long" instruments in the API accept `long` as an input value. `long` does not have a well-defined width and can represent either 64-bit or 32-bit integers depending on the compiler implementation (and configuration).
Specs don't say anything about the type/size of the measurement value. The above link is explicitly for attribute values. Ideally, we can support both fixed (32-bit, 64-bit) and architecture-dependent (long, int) data types at the API surface. At the SDK level, the generated metrics can be stored as a 64-bit integer.
You're right. My mistake. I can't find anything that explicitly requires specific value types.
My goal was only to prevent unintended integer overflow on 32-bit compilers, since counters that pass the 32-bit boundary are not uncommon (e.g. timestamps, memory usage, network bytes read/written).
> My goal was only to prevent unintended integer overflow on 32-bit compilers, since counters that pass the 32-bit boundary are not uncommon (e.g. timestamps, memory usage, network bytes read/written).

Yes, this is a valid concern; we should fix this at the API level. Thanks for raising the issue.
"Long" instruments should be changed to accept `int64_t` values, which are unambiguously 64 bits wide, as requested by the API specification: https://opentelemetry.io/docs/reference/specification/common/#attribute