
Telemetry Alert Range #2205

Closed
1 task done
jmxp69 opened this issue Aug 12, 2022 · 21 comments · Fixed by #2251
Labels
enhancement ✨ New feature or request

Comments

@jmxp69

jmxp69 commented Aug 12, 2022

Is there an existing issue for this feature request?

  • I have searched the existing issues

Is your feature request related to a problem?

RSSI Low Alarm can be set no higher than 75 and no lower than 15.
RSSI Critical Alarm can be set no higher than 72 and no lower than 12.

ELRS RQLY is used for RSSI, and the low alarm can safely be set to 80[%] with the critical alarm set to 70[%]. While 70% is within range, some users may opt for an additional margin of safety and wish to use 75% or even higher.

Describe the solution you'd like

RSSI Low and Critical Alarm values should allow entries to reflect more modern protocols and not just scaled FrSky values.

Describe alternatives you've considered

Disabling telemetry alarms and using logical switch ranges with special functions instead. This is not how we behave like citizens though...(borrowing from a friend).

Additional context

No response

@jmxp69 jmxp69 added the enhancement ✨ New feature or request label Aug 12, 2022
@rburrow87

To expand on this, would it be possible for it to indicate which telemetry sensor it's using so the user can make an informed decision about the warning/critical values?

I realize this page is probably being rewritten, but as an example:

[screenshot attached to the original comment]

Or maybe even select which sensor in case they want to use RSSI/1RSS rather than LQ for the built-in alarm? 😬

@mha1
Contributor

mha1 commented Aug 14, 2022

I second that. The way the RSSI alarm is handled is a FrSky artefact misused for other protocols. Some protocols provide RSSI, some a strangely 0-100% scaled RSSI (my physics professor would have an allergic reaction), whilst some protocols provide a much more reasonable link quality indicator of some sort. Being able to select the sensor with adjustable low/critical thresholds would be much better.

The second best solution would be to allow deactivation of the built-in telemetry alarm. This would allow users to define their own mechanisms, like logical switches plus special functions, or to write a Lua script.

@raphaelcoeffic
Member

Selecting the sensor could prove a bit tricky, and I would like to avoid the mess with sensor "re-discovery" and the like. The way it is right now, each protocol provides a measurement which is automatically used (which is good).

At best (from the implementor's POV) we should do the following:

  • extend the range of configurable values (8 bits at the very least).
  • allow for dBm support as in Betaflight (one checkbox, only for CRSF / Ghost).

@jmxp69
Author

jmxp69 commented Aug 15, 2022

ELRS telemetry alarms are based on RQLY, which is expressed as a percentage/packet loss ratio. If the protocol provides RSSI, that should be based on dBm. So the label should follow the measurement submitted by the protocol, right?

I do like Rob's idea that the signal strength alarm could also indicate the sensor submitted by the protocol rather than hard-coding "low alarm". That helps to remove doubt as to what the operator is seeing. I would bet a dollar 99% of the EdgeTX user base doesn't know RQLY is the value submitted with ELRS. For myself, I always just assumed it was RSSI.

@raphaelcoeffic
Member

Some of the mess is also caused by the fact that it is not set up per module, but per model. So in theory you could have 2 different values if you have 2 modules ON in that model.

In practice, telemetry only really works when either only one module sends it, or both modules use the same protocol. Behavior with 2 modules sending telemetry is not really properly tested/defined. However, we should prepare for that, as I intend to make it work properly.

@mha1
Contributor

mha1 commented Aug 18, 2022

After further thinking I'd like to distance myself from my previous comment. To excuse myself, here's my reasoning.

The current implementation:

  • relies on telemetry, i.e. works only if the protocol in use for a given model has a telemetry capability and provides telemetry data that is suitable to evaluate the RF uplink(!).
  • data sources allowing evaluation of the RF uplink can be Receiver Signal Strength (RSSI, strength of the TX signal seen by the RX) measured in [dBm] or, much better, a receiver-generated evaluation of the signal quality (e.g. decodable packets divided by expected packets in a certain timeframe) in [%].
  • the data source used for the evaluation of the RF link is determined by the protocol-specific telemetry decoding software that is part of EdgeTX, by calling telemetryData.rssi.set(). The user-selected thresholds for low/critical warnings are applied to the value the specific protocol sends in the calls of telemetryData.rssi.set(). Examples are
    • RX_QUALITY_INDEX for Crossfire
    • Rqly for HoTT
    • QoS Frameloss for Spektrum
    • LQ for Ghost
    • RSSI for FrSky D (scaled to 0..100)
    • LQI for MLink
  • Nearly all of the protocols use a link quality indicator in [%]

Conclusion:

  • Change "RSSI" in the telemetry tab of the model settings to what it really is, to avoid confusion: "RF Link Evaluation". RF Link Evaluation is a term independent of the source. If the source is better (link quality vs. RSSI), the evaluation will be more realistic.
  • don't make the "RF Link Evaluation" source user-selectable, as it is best determined by the developer of the protocol-specific telemetry decoder, and nearly all of the telemetry-capable protocols use link quality (far superior to RSSI) with a range of 0..100%, meaning 100% super, 0% bad.
  • Some documentation on which protocol uses which sort of link quality, with proposed low/critical values, would be nice.

@raphaelcoeffic
Member

@mha1 thx for the very good summary! So the thing that would need fixing is actually the range of values that can be set, which is quite restricted right now (6 bits, as an offset from the preset 42/45).

@mha1
Contributor

mha1 commented Aug 18, 2022

> @mha1 thx for the very good summary! So the thing that would need fixing is actually the range of values that can be set, which is quite restricted right now (6 bits, as an offset from the preset 42/45).

Exactly. I'd make it a uint8_t and allow a 0..100 selection. And please change the text to stop implying the warnings are based solely on "RSSI". Make it a neutral text. I am sure you can find something better than my "RF Link evaluation"

@gagarinlg
Member

> @mha1 thx for the very good summary! So the thing that would need fixing is actually the range of values that can be set, which is quite restricted right now (6 bits, as an offset from the preset 42/45).
>
> Exactly. I'd make it a uint8_t and allow a 0..100 selection. And please change the text to stop implying the warnings are based solely on "RSSI". Make it a neutral text. I am sure you can find something better than my "RF Link evaluation"

I would prefer something like "RF uplink quality", or something like that, to make the direction clear.

My personal preference would be to also have RSSI in dBm, SNR in dB and a bit error rate available.

@mha1
Contributor

mha1 commented Aug 18, 2022

> @mha1 thx for the very good summary! So the thing that would need fixing is actually the range of values that can be set, which is quite restricted right now (6 bits, as an offset from the preset 42/45).
>
> Exactly. I'd make it a uint8_t and allow a 0..100 selection. And please change the text to stop implying the warnings are based solely on "RSSI". Make it a neutral text. I am sure you can find something better than my "RF Link evaluation"
>
> I would prefer something like "RF uplink quality", or something like that, to make the direction clear.
>
> My personal preference would be to also have RSSI in dBm, SNR in dB and a bit error rate available.

Could be done, but this would be a much bigger change, as each of the protocol telemetry decoders would have to send additional information about the type of data source it is using to feed "RF link quality" (LQI, RSSI, SNR, ...), the unit of the data source [dB, dBm, %], and the expected range of its values.

@eshifri
Contributor

eshifri commented Aug 19, 2022

Some (older) modules (e.g. FlySky) provide link quality (Sgnl) in the range of [0, 10]. Re-scaling it to 0 - 100 will not give enough resolution.

@mha1
Contributor

mha1 commented Aug 24, 2022

Doesn't look like this discussion ended in some form of implementable specification, does it?

If we want to cover all user requirements stated above I'd say it is necessary to extend each protocol telemetry decoder to communicate more details about "RF link quality":

  • type of data source (LQI, RSSI, SNR, Sgnl, ...)
  • unit of the data source [dB, dBm, %, none]
  • expected range of values of the data source, e.g. for unit x, range from good to bad:
    • dBm: -20 to -120
    • %: 100 to 0
    • Sgnl: 10 to 0
    • dB: really used?

Additionally, the UI needs to display the communicated "RF link quality" data source type with its unit, and must allow setting low/critical thresholds according to the communicated expected range.

This might be a pseudo-code example for a protocol using some sort of link quality indicator, executed in the init section of the decoder:

telemetryData.RFqualityType.set(RFLQ_TYPE_LQI);
telemetryData.RFqualityUnit.set(UNIT_PERCENT);
telemetryData.RFqualityRangeUpper.set(100);
telemetryData.RFqualityRangeLower.set(0);

For a protocol using RSSI:

telemetryData.RFqualityType.set(RFLQ_TYPE_RSSI);
telemetryData.RFqualityUnit.set(UNIT_DBM);
telemetryData.RFqualityRangeUpper.set(-20);
telemetryData.RFqualityRangeLower.set(-120);

@mha1
Contributor

mha1 commented Aug 24, 2022

@pfeerick googly eyes? what do you mean? you no like it?

Here's the list of telemetry decoders that currently set the "RSSI" value. For most of them it is clear what they use as data source; for some we need to find the right interpretation. So from the telemetry decoders' standpoint the changes shouldn't be too difficult. class TelemetryData (notice how this is located in frsky.h) needs to be updated to hold the type, unit, and upper and lower range boundaries. I'd initialize this in class TelemetryData's constructor to an LQI type with unit % and range 100..0 to catch forgotten decoders. I don't know how much of a change this would mean for the UI.

[screenshot attached to the original comment]

@pfeerick
Member

@pfeerick googly eyes? what do you mean? you no like it?

Just following the conversation 😉 I like that thinking, it looks good. 😸

@mha1
Contributor

mha1 commented Aug 25, 2022

@pfeerick I think the UI part will be the more challenging one. Twelve files use the RSSI value the telemetry decoders send with telemetryData.rssi.set(value). Remember, this is most of the time not RSSI but LQI. All of them assume it is a real RSSI and display "RSSI", sometimes with unit dB (?! - thank you, FrSky), like the range check pop-up. Some even perform some scaling (e.g. the top bar 5-bar indicator), and, even worse, implicitly or explicitly assume a 100..0 range, e.g. for luaGetRSSI() (scaled RSSI in dBm?!). Cleaning up in 2.9?

[screenshot attached to the original comment]

@pfeerick
Member

At this point, most likely 2.9 at the earliest, yes. Getting close to needing to think about bugfix only PRs and must-be-in-2.8 PRs.

Aw, so you weren't putting your hand up to work on this? 😆 Like most parts of the code, it has evolved over time (as well as being FrSky-centric), and lots of interesting code choices were made along the way. So it will be fun to clean it all up, but worth it in the end.

@mha1
Contributor

mha1 commented Aug 26, 2022

Sure, I'd be happy to work on this as it is kinda confusing right now. But I sure need some assistance for the UI parts. Not only logic but also some visible things need to be worked on.

But the most important thing is to define how things shall work. Currently "RSSI" subsumes a variety of different data sources with quite different physical properties and therefore different semantics. It's like deriving information about "fruit" without specifying whether you are talking about a banana or a tomato (yes, it's a fruit). But making the source known requires a deeper understanding of bananas and tomatoes. Example: the range check pop-up. It currently just identifies the source as a pseudo-RSSI with unit dB (doesn't seem to bother anyone) and shows a range from 0..+100. This pop-up couldn't be more wrong.

How should this pop-up be presented when tackling this? If the source is a link quality indicator, we'd have to present this pop-up as LQI [%] with range 100..0. If it was a real RSSI, we'd present the pop-up as RSSI [dBm] with a range of -something..-something_smaller, depending on protocol. If the source were SNR, we'd present the pop-up as SNR [dB], range idk, etc. Same question for the "strength bars" top bar widget.

What I am getting at is that we would expect more knowledge about how the presented values need to be interpreted and how they should be presented to the user. I suppose not everyone knows that -90 dBm means lower power than -30 dBm. Squeezing everything into one pseudo-"RSSI" creates the illusion of easy interpretation and comparability, whilst offering more insight requires more knowledge. I am the worst UI usability designer, so discussion is welcome.

@eshifri
Contributor

eshifri commented Aug 26, 2022

I think there are two different/opposite approaches:

  1. The UI always shows something in the [0, 100] range, and the user does not need to know what this "something" is - just a generic measure of link quality. It is up to the protocol/module/telemetry handler to decide and to convert the actual physical input to the relative scale. The UI becomes generic and is the same for all protocols.
  2. The user gets to know what the value being displayed is. In this case every telemetry handler, in addition to the value, must supply the name (string) of the telemetry parameter ("RQLY", "Sgnl", "RSSI", etc...) and the unit (string). This may be more informative, but the values will be different for different protocols and may be confusing for people flying with different RXs.

@gagarinlg
Member

How about updating the telemetry handlers in a way that they all produce a cooked link quality value and, where available, also the protocol-specific values with their respective units?
Then people wanting a simple quality index can use this, and people with a better understanding of RF can have the detailed information.

@eshifri
Contributor

eshifri commented Aug 26, 2022

This means implementing both options and allowing the user to choose.
Possible, of course, but this means a lot more changes for both the telemetry handlers and the UI.
And, probably, changes in the storage.

@raphaelcoeffic
Member

Let's have it in 2 steps:

The primary goal of what we discussed with @jmxp69 was to be able to use the built-in warnings for CRSF / ELRS. As of now, it does not really make sense, as you cannot set up values like 70%-80%, which are currently the advised values.
