This repository has been archived by the owner on Sep 2, 2024. It is now read-only.

Use VSS as mapping format for dbcfeeder #52

Merged: 1 commit from erikbosch/erik_dbc into eclipse-kuksa:main on Feb 16, 2023

Conversation

@erikbosch (Contributor) commented on Feb 1, 2023:

This is an implementation of #41

Apart from changing the config, there is also some general refactoring:

  • Most Python files have been moved to a dbcfeederlib folder, to simplify inclusion in unit tests
  • Added some unit tests; coverage is still quite basic, but at least syntax parsing is verified
  • Added an "on_change" config option

Testing fully performed:

  • Sending signals to KUKSA.val Databroker (native and as Docker container)
  • Sending signals to KUKSA.val Server (native and as Docker container)

Testing partially performed:

@erikbosch erikbosch force-pushed the erikbosch/erik_dbc branch 4 times, most recently from 118ae5b to e978f7e on February 3, 2023 09:01
@erikbosch erikbosch force-pushed the erikbosch/erik_dbc branch 2 times, most recently from cdf39b0 to e5a2729 on February 13, 2023 13:15
@erikbosch erikbosch force-pushed the erikbosch/erik_dbc branch 2 times, most recently from 9839dd2 to 205d0e9 on February 14, 2023 09:53
log.warning(f"Value ignored for dbc {vss_observation.dbc_name} to VSS {vss_observation.vss_name},"
f" from raw value {value} of type {type(value)}")
elif not vss_mapping.change_condition_fulfilled(value):
log.info(f"Value condition not fulfilled for VSS {vss_observation.vss_name}, value {value}")
Contributor:

I have not tested it yet, but might the log level be too high? In the past we observed that just printing some message xxx times per second can slow down the feeder a lot/generate a lot of load. And just from looking at the code, this is not an error condition but something we might expect to encounter regularly. So maybe verbose?

Contributor Author:

Could be reduced to debug.

For now, if running on info level you typically get one line every time a signal is sent to the broker (and when a value is discarded), which could be quite many. So the question is: how verbose do we want to be on info level, and do we expect info level to be usable for "real deployments" where performance is important?

Contributor:

I was thinking info should be usable for "real deployments" and might only be more verbose during initialisation etc. If this is not how things currently are, I can also live with letting "info" chat about every processed message, but then at least we should make sure the default log level when starting from the command line or from the container is set to warning.

Contributor Author:

As of today the default is INFO for dbcfeeder. I have no problem using DEBUG for recurring messages. But then, if you are interested in knowing whether any signals (at all) have been sent to the databroker, you would need to change to DEBUG, and then you would get a lot of other messages as well.
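
A minimal sketch (not from the PR) of one way to keep recurring per-signal messages at DEBUG while making the default level configurable, so command-line and container entrypoints can share the same default; the environment variable name is an assumption:

import logging
import os

# Hypothetical: read the default log level from an environment variable
log_level = os.environ.get("DBCFEEDER_LOG_LEVEL", "INFO")
logging.basicConfig(level=getattr(logging, log_level.upper(), logging.INFO))
log = logging.getLogger("dbcfeeder")

# Recurring per-signal messages stay at DEBUG so INFO remains usable
# in "real deployments" where performance matters
log.debug("Value condition not fulfilled, value discarded")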

new_val = item["to"]
vss_value = new_val
break
else:
Contributor:

Would it be cleaner/more future-proof to really check that there is a math transform? And in the else branch print something like "there seem to be transforms, I just don't understand them".

I am thinking that might be more robust if we add further transform types in the future and overlays/config files/dbcfeeder implementations get mixed.

Contributor Author:

The idea is that extract_verify_transform shall do the static check that mappings are understood, so only well-defined mappings shall exist here. But we could add an else path here anyway. As long as we do not have too-smart linters (that complain that the code is dead), it should be OK.
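
A minimal, self-contained sketch of the defensive else path discussed above. The "mapping" and "math" keys and the simplified math handling are illustrative assumptions, not the PR's actual code:

import logging

log = logging.getLogger("dbcfeeder")

def transform_value(transform: dict, value):
    """Map a raw DBC value to a VSS value; return None if the value shall be ignored."""
    if "mapping" in transform:
        # Discrete value mapping, e.g. raw 0 -> "CLOSED", raw 1 -> "OPEN"
        for item in transform["mapping"]:
            if item["from"] == value:
                return item["to"]
        return None
    if "math" in transform:
        # Simplified to a plain scale factor for this sketch; the real format
        # uses a math expression
        return value * transform["math"]
    # Defensive else path: transform types this feeder version does not know,
    # e.g. introduced by a newer mapping format or overlay, are reported
    # instead of being silently treated as math transforms
    log.warning("Unsupported transform %s, ignoring value %s", transform, value)
    return None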

then you can annotate that file and use the annotated file in both KUKSA.val and the feeder.

Annotating an existing VSS JSON file has however some drawbacks. If the JSON file is regenerated
to support a new VSS version then the annotations must be manually transferred to the new vSS JSON.
Contributor:

Typo: vSS -> VSS

The `on_change: True` argument specifies that the VSS signal shall only be sent if the DBC value has changed.
If neither is specified, it corresponds to `interval_ms: 1000, on_change: False`.
If only `on_change: True` is specified, it corresponds to `interval_ms: 0, on_change: True`.
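
A minimal sketch (not from the PR) of how the documented defaults could be resolved; the helper name is hypothetical and combinations not described above are passed through unchanged:

def resolve_filter_settings(interval_ms=None, on_change=None):
    """Apply the documented defaults for the time/change filtering settings."""
    if interval_ms is None and on_change is None:
        return 1000, False      # neither specified
    if interval_ms is None and on_change is True:
        return 0, True          # only on_change: True specified
    return interval_ms or 0, bool(on_change)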

Contributor:

If on_change is true and an interval is given (not sure that is always useful), what is the logic? I.e.:

I open the door -> set open.

Then the signal toggles millions of times within one second (interval_ms: 1000), so I assume it is not forwarded?
Then, after the 1000 ms, IF the signal comes again, is "on_change" based on the last reported or on the last observed value? Observed might be a problem, because door->open is reported to VSS, door->closed is not (because it is within the timeout), and then it will never be reported if compared to the last observed state.

Might be a corner case, but at least we should know - and preferably describe - what happens here.

Contributor Author:

I try to describe it a bit further down and will update it to make it clearer. The filtering is done in two phases. This partially depends on the fact that we have a queue in between. I assume the math transform can be costly from a performance perspective, and that is why we only do it if the time condition is fulfilled.

On_change is always compared with the last value sent to KUKSA.val. So theoretically, if you are very quick in opening/closing your door and you have an interval of 1000 ms, there is a risk that the opening will not be reported to KUKSA.val, as we only "poll/sample" values every 1000 ms.

  • If there is a time condition, the time of the observation is compared with the stored value.
    If the time difference is bigger than the explicitly or implicitly defined interval,
    the stored time for the VSS-DBC combination is updated
    and evaluation continues with the next step.
  • The DBC value is then transformed to a VSS value. If transformation fails, the signal is discarded.
  • After transformation, if there is a change condition, the stored value is compared with the
    new value. If they are equal, the signal is discarded (a sketch of this flow follows below).
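
A minimal, self-contained sketch of the two-phase filtering described above; class and attribute names are illustrative, not the actual dbcfeeder code:

class FilterState:
    """Per VSS-DBC combination state for time and change filtering."""

    def __init__(self, interval_ms: int, on_change: bool):
        self.interval_ms = interval_ms
        self.on_change = on_change
        self.last_sent_time = 0.0
        self.last_sent_value = None

    def time_condition_fulfilled(self, observation_time: float) -> bool:
        # Phase 1: only continue if the configured interval has passed;
        # the stored time is updated when the condition is fulfilled
        if self.interval_ms > 0:
            if (observation_time - self.last_sent_time) * 1000.0 < self.interval_ms:
                return False
            self.last_sent_time = observation_time
        return True

    def change_condition_fulfilled(self, vss_value) -> bool:
        # Phase 2 (after transformation): only send if the value differs
        # from the last value sent to KUKSA.val
        if self.on_change and vss_value == self.last_sent_value:
            return False
        self.last_sent_value = vss_value
        return True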


Transformation may fail. Typical reasons include that the DBC value is not numerical,
or that the transformation fails for certain values, e.g. division by zero.
If transformation fails, the signal will be discarded.
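
A small, self-contained sketch (not the actual dbcfeeder code) of catching such transformation failures so that the signal is simply discarded:

import logging

log = logging.getLogger("dbcfeeder")

def safe_transform(transform_fn, value):
    """Apply transform_fn to value; return None (signal discarded) on failure."""
    try:
        return transform_fn(value)
    except (TypeError, ValueError, ZeroDivisionError) as err:
        log.warning("Transformation failed for value %s: %s", value, err)
        return None

# Example: a transform dividing by the raw value fails for value == 0
assert safe_transform(lambda v: 100 / v, 0) is None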
Contributor:

discarded->ignored

Contributor:

Edit: I see the "discarded" wording is just used often/consistently. Will not mark every occurrence. Can live with it :) I just would have written "ignored".

"$line$": 117,
"mapping": [
{
"$file_name$": "test.vspec",
@SebastianSchildt (Contributor) commented on Feb 14, 2023:

Are the $file_name$ things an artifact from vss-tools? It looks like the "scratchpad" data the parser attaches to give better error messages. Do we consider it a bug upstream and will it be fixed there? Because in "vanilla" VSS vspec these things are cleaned up.
Although they are not in the "real" .json below, if I see correctly?!

Contributor Author:

Yes, it has been fixed in COVESA/vss-tools#211, so if you are using an older vss-tools you get them. I have not bothered to update the included JSON files here; it should not matter.

@erikbosch (Contributor Author):

PR updated. Based on our discussions on printouts I created a short status printout, also showing the max queue size so far. This would give at least some indication of whether your feeder works as expected:

2023-02-14 17:37:31,844 INFO dbcfeederlib.databroker: Vehicle.Powertrain.Transmission.IsParkLockEngaged was already registered with type BOOLEAN
2023-02-14 17:37:31,847 INFO dbcfeederlib.databroker: Vehicle.Trailer.IsConnected was already registered with type BOOLEAN
2023-02-14 17:37:31,945 INFO dbcfeeder: Starting to process CAN signals
2023-02-14 17:37:31,948 INFO dbcfeeder: Number of VSS messages sent so far: 1, queue max size: 10
2023-02-14 17:37:31,950 INFO dbcfeeder: Number of VSS messages sent so far: 2, queue max size: 10
2023-02-14 17:37:31,955 INFO dbcfeeder: Number of VSS messages sent so far: 4, queue max size: 10
2023-02-14 17:37:31,965 INFO dbcfeeder: Number of VSS messages sent so far: 8, queue max size: 10
2023-02-14 17:37:32,060 INFO dbcfeeder: Number of VSS messages sent so far: 16, queue max size: 10
2023-02-14 17:37:32,374 INFO dbcfeeder: Number of VSS messages sent so far: 32, queue max size: 10
2023-02-14 17:37:33,056 INFO dbcfeeder: Number of VSS messages sent so far: 64, queue max size: 10
2023-02-14 17:37:34,444 INFO dbcfeeder: Number of VSS messages sent so far: 128, queue max size: 10
2023-02-14 17:37:37,255 INFO dbcfeeder: Number of VSS messages sent so far: 256, queue max size: 10
2023-02-14 17:37:42,844 INFO dbcfeeder: Number of VSS messages sent so far: 512, queue max size: 10
2023-02-14 17:37:53,989 INFO dbcfeeder: Number of VSS messages sent so far: 1024, queue max size: 10
^C2023-02-14 17:38:07,494 INFO dbcfeeder: Received signal 2, stopping...
2023-02-14 17:38:07,494 INFO dbcfeeder: Shutting down...
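
The counts in the log above (1, 2, 4, 8, ...) suggest the status line is printed whenever the number of sent messages reaches a power of two. A minimal sketch of such a scheme, not the actual dbcfeeder code:

import logging

log = logging.getLogger("dbcfeeder")

class FeederStatus:
    """Log a status line each time the sent-message count reaches a power of two."""

    def __init__(self):
        self.messages_sent = 0
        self.queue_max_size = 0

    def on_message_sent(self, current_queue_size: int):
        self.messages_sent += 1
        self.queue_max_size = max(self.queue_max_size, current_queue_size)
        # n & (n - 1) == 0 holds exactly for powers of two (n > 0),
        # which keeps the INFO log volume low
        if self.messages_sent & (self.messages_sent - 1) == 0:
            log.info("Number of VSS messages sent so far: %d, queue max size: %d",
                     self.messages_sent, self.queue_max_size)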

@SebastianSchildt (Contributor):

Is this expected:

(dbc2val) scs2rng@RNG-C-001JT dbc2val % python --version
Python 3.8.13
(dbc2val) scs2rng@RNG-C-001JT dbc2val % python dbcfeeder.py
Traceback (most recent call last):
  File "dbcfeeder.py", line 45, in <module>
    from dbcfeederlib import dbc2vssmapper
  File "/Users/scs2rng/Documents/Dev/kuksa.val.feeders/dbc2val/dbcfeederlib/dbc2vssmapper.py", line 158, in <module>
    class Mapper:
  File "/Users/scs2rng/Documents/Dev/kuksa.val.feeders/dbc2val/dbcfeederlib/dbc2vssmapper.py", line 165, in Mapper
    mapping : dict[str, list[VSSMapping]] = {}
TypeError: 'type' object is not subscriptable

@SebastianSchildt (Contributor):

Does not work on Python 3.8, needs 3.10 (did not test 3.9). Is this documented?

@SebastianSchildt (Contributor):

Default log level seems to be INFO when run locally but WARNING in the container. Should probably be the same? INFO for both, now that INFO is less noisy?

@erikbosch (Contributor Author):

Our only statement on Python version is "Check that at least Python version 3 is installed". In CI we use whatever Python version is the default on ubuntu-latest. The Dockerfile uses Python 3.9, so 3.9 has been tested. I assume we can likely get it to run on Python 3.8, but we should better document which versions we intend to support. Python 3.11 is out - it will be a bit cumbersome to test all Python versions.

@SebastianSchildt (Contributor):

I think for this specifically it is fine if we state 3.9 as the minimum (I am not sure whether we would lose some typing features going back?).

I think with the current state of the feeders, what we should test is what we also put in the container. Just document it, so people know what to expect.

I was just surprised, as for the viss-client we made the choice that 3.8 should be enough (there I think it makes sense).

@erikbosch (Contributor Author):

There are some minor differences between Python 3.8 and 3.10; I sometimes run into them for vss-tools, where CI uses Python 3.8 (and I use 3.10 when running locally). This Stack Overflow answer describes the root cause. Also this time it was easy to adapt to 3.8, but I will add something on supported versions.
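
For reference, subscripting built-in types such as dict[str, list[...]] in annotations evaluated at runtime is only supported from Python 3.9 (PEP 585), which matches the traceback above. A minimal sketch of a 3.8-compatible variant, with a stub class for illustration; alternatively, `from __future__ import annotations` (PEP 563) defers annotation evaluation:

from typing import Dict, List

class VSSMapping:
    """Stub standing in for the real VSSMapping class, for illustration only."""

class Mapper:
    # typing.Dict/typing.List work on Python 3.8, unlike dict[...]/list[...]
    mapping: Dict[str, List[VSSMapping]] = {}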

@SebastianSchildt (Contributor) left a review comment:

Works for me
🐘

@SebastianSchildt SebastianSchildt merged commit c773d43 into eclipse-kuksa:main Feb 16, 2023
@erikbosch erikbosch deleted the erikbosch/erik_dbc branch February 21, 2023 09:27