[dbc2val] Support disabling strict parsing of CAN frames #57
Conversation
If I read this correctly, I am not sure this is related to the CAN frames as such, but rather to not-well-formed DBC files? I saw the option is applied on database load, and the documentation says

Not arguing it is a good option, but is the wording correct? There was another issue earlier where the actual CAN trace contained a truncated entry. In that case the DBC is valid; when you try to apply it to a CAN frame, the decoder sees a DBC entry describing more data than the CAN frame contains: see eclipse-archived/kuksa.val#374. The potential solution discussed there was setting decode = self.db.decode_message(msg.arbitration_id, msg.data, allow_truncated=True) (although in that case we opted to repair our traces instead). I guess what I am really asking is whether you fix the same problem in a different way, or whether it is another problem? Are there any examples of a DBC entry and CAN frame that work only with
I guess you are right. We have been running into problems when using a specific DBC file for trucks, which produced an error during startup of the feeder indicating that a signal did not fit into a particular message. Based on that, I will rephrase the help text of the switch and the commit message.
pyserial ~= 3.5
cantools ~= 38.0
pyyaml ~= 6.0
can-j1939 ~= 2.0
I don't know, but according to the readme can-j1939 https://github.com/juergenH87/python-can-j1939 seems to be a maintained fork of j1939.
License-wise it is fine.
I have not done a code introspection but from the README and the code examples, it looks to me as if can-j1939 has been forked off the original j1939. Also, my local tests with a CAN dump file, a custom DBC and mapping file have been successful. The API used by dbc2val also doesn't seem to be broken. So FMPOV this should be a suitable replacement. Do you have any insight pointing in another direction?
No insights. I just learned about can-j1939 from your PR. If this is newer, better supported, and something that works for you, by all means bring it in (I don't think anyone else is currently actively working on the j1939 parts, so not much danger of breaking things)
Changing to can-j1939 might actually be good; with j1939 there is a dependency problem, see #56. The same exception does not occur when using can-j1939, and no other exception either. I have, however, not fully tested the j1939 functionality.
Looks like there have been some subtle changes between j1939 and can-j1939. I am currently adapting the code accordingly ...
Ok, I have pushed some fixes and removed a lot of code which I believe to be obsolete.
@@ -1,10 +1,95 @@
pyserial
cantools
python-can
Why is python-can removed? We have some import can statements in the code which seem to rely on it. (On the other hand, it is implicitly pulled in by cantools, so it will be installed anyway.)
It is still being pulled in as a transitive dependency of can-j1939 and cantools, so it seemed reasonable to let these two deps determine the particular version that works for them.
However, I see your point that if our code explicitly depends on it, we should probably also explicitly declare the dependency. In that case, should I derive the version (range) from the version that has been resolved to work for can-j1939 and cantools?
I do not have a strong opinion. If we want to state it explicitly with a version, we can take what has been derived, but then maybe add a comment that we can typically use whatever works for can-j1939 and cantools. Alternatively, we could specify it without a version and state in a comment that the version is implicitly selected by the other packages, or just mention it in a comment so that it is understandable why it isn't listed.
I have added python-can to the requirements.in file.
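For reference, the ~= operator used in requirements.in is PEP 440's "compatible release" specifier: ~= 3.5 is equivalent to >= 3.5, < 4.0, so the major version is pinned while newer minor/patch releases are still allowed. The resulting file might look roughly like this (the python-can pin shown here is illustrative, not taken from the actual PR):

```text
# requirements.in -- top-level dependencies, pinned to a major version.
# '~= 3.5' means '>= 3.5, < 4.0' (PEP 440 compatible release).
pyserial ~= 3.5
cantools ~= 38.0
pyyaml ~= 6.0
can-j1939 ~= 2.0
python-can ~= 4.0
```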
@@ -0,0 +1,25 @@
# This file contains top level dependencies pinning a major version.
# For developing the dbc2val package it makes sense to use this file
# with 'pip install -r requirements.in' in order to benefit from more
We need to agree on how we want to handle dev dependencies. So far, requirements-dev.txt and requirements.txt for dbc2val have been quite similar. But as part of #52 I will need to add pylint and pytest; they should only be used for building/testing, are not required for running, and do not, for instance, need to be installed in the Docker container. Shall we then (as part of #52) re-introduce a file requirements-dev.in or similar that only contains the extra dependencies for build/test, i.e. when developing you would do pip install -r requirements.in -r requirements-dev.in or similar?
I am no expert in Python build systems, but something a little more sophisticated than plain pip install would probably be nice. Hatch seems to be quite popular for these purposes and also seems to provide mechanisms for easily creating (virtual) environments based on metadata defined in a standard pyproject.toml file. It also provides means for specifying additional dependencies required only for testing etc. Maybe it would be worth a shot?
In one of our other repos we use a setup.cfg file where options.extras_require is used for test dependencies.
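As a sketch, such a setup.cfg could look like the fragment below. The package names and version pins are chosen to match this PR's discussion, not taken from an actual repo:

```ini
[options]
install_requires =
    cantools ~= 38.0
    can-j1939 ~= 2.0

[options.extras_require]
test =
    pytest
    pylint
```

Installing with pip install .[test] would then pull in the test tooling on top of the runtime dependencies, while a plain pip install . stays lean.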
In kuksa-client the last thing we tried was pip-compile from pip-tools (https://pypi.org/project/pip-tools/ or https://github.com/jazzband/pip-tools).
We don't have much experience with it yet, but it was chosen as a middle ground between "naked" requirements and a full pipenv Pipfile/Pipfile.lock combo.
The requirements are generated and checked in, as in https://github.com/eclipse/kuksa.val/blob/master/kuksa-client/requirements.txt.
In that case the source was setup.cfg, but a requirements.in is also possible (although I think you then need separate requirements.in and requirements-dev.in files; with a pyproject.toml it might be better).
No hard opinion here, just one data point :) My only opinion is that a requirements file without ANY hint as to what version we would like is not a best practice 😃
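For context, pip-compile reads the loose pins from requirements.in and emits a fully resolved requirements.txt that records where each pin came from. The output looks roughly like this (header wording varies between pip-tools versions, and the versions shown are illustrative):

```text
#
# This file is autogenerated by pip-compile with Python 3.10
# by the following command:
#
#    pip-compile requirements.in
#
cantools==38.0.2
    # via -r requirements.in
python-can==4.0.0
    # via cantools
```

Re-running pip-compile after editing requirements.in keeps the checked-in requirements.txt reproducible without hand-maintaining exact versions.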
@SebastianSchildt @erikbosch
As we have merged #52 first, this PR needs to be rebased. That should be doable. As a user, there is an important change in the architecture: mappings are now expressed directly in terms of VSS overlays. Maybe important for @LuLeRoemer: basically, you would need to change the format of the mapping file, see https://github.com/eclipse/kuksa.val.feeders/blob/main/dbc2val/mapping.md. While there are no real "releases" here currently, there is a tag https://github.com/eclipse/kuksa.val.feeders/tree/0.1.2 if you need to keep using the old mapping format. We encourage you to upgrade, however, as we consider attaching mappings directly to VSS the way forward for data providers/feeders. @erikbosch, can you support the rebasing and flag potential pitfalls regarding the new mapping syntax if required?
Yes, as I am the reason for the need for a (not trivial) rebase, I can give it a try and put the result in a separate PR or a separate branch.
Ok, I will rebase and also migrate to the new way of specifying the mapping.
The current requirements.txt does not specify any version for dependencies, which results in the latest version available from PyPI being pulled in. In order to make the build/setup process reproducible, the dependencies should be pinned to specific versions. This can easily be achieved by using the 'pip-compile' command of the 'pip-tools' module, which takes dependency declarations from the 'requirements.in' file and compiles these into a 'requirements.txt' file containing specific versions of the dependencies.

Also replaced the dependency on j1939 with can-j1939. The GitHub repo of the former has not seen any updates for five years, while the repo of the latter seems to be under active development.

Signed-off-by: Kai Hudalla <kai.hudalla@bosch.io>
@sophokles73 - concerning the dev dependencies: in the recently merged PR I added dependencies on pytest/pylint. A possible way forward to take advantage of your changes while postponing a more advanced refactoring of dependency handling could be to keep the dev variant, but only for tooling needed for test/dev. A possible solution can be seen in #58 - we need to make sure that the "dev dependencies" are installed in CI. If you like, you could use the slightly modified commit in #58, or make similar changes in your own version of it. When done, I can close #58.
@erikbosch
The dbcfeeder now supports a new command line switch '--lax-dbc-parsing' which can be used to disable strict parsing of DBC files. This is helpful if the DBC file contains message length definitions that do not match the signals' bit offsets and lengths. Processing CAN frames based on such DBC message definitions might still work, so providing this switch might allow using the (erroneous) DBC file without having to fix it first.

Signed-off-by: Kai Hudalla <kai.hudalla@bosch.io>
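A minimal sketch of how such a switch can be wired up with argparse and mapped onto cantools' strict parameter. The flag name is taken from the commit message; everything else (the parser setup, the dest name) is illustrative and not the actual dbcfeeder code:

```python
import argparse

def parse_args(argv):
    """Parse feeder command line options (sketch, not the actual dbcfeeder CLI)."""
    parser = argparse.ArgumentParser(description="dbcfeeder sketch")
    parser.add_argument(
        "--lax-dbc-parsing",
        dest="strict_parsing",
        action="store_false",  # flag present -> strict_parsing becomes False
        help="disable strict parsing of DBC files whose message lengths "
             "do not match their signals' bit offsets and lengths",
    )
    return parser.parse_args(argv)

args = parse_args(["--lax-dbc-parsing"])
print(args.strict_parsing)  # False

# The value would then be forwarded to the DBC loader, e.g.:
#   db = cantools.database.load_file(dbc_file, strict=args.strict_parsing)
```

Using store_false with a dest of strict_parsing keeps the default (strict) behavior when the flag is absent, so existing invocations are unaffected.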
Yes, unless you want to solve it in some other way to make sure the CI does not fail.
I have rebased and incorporated the changes proposed by Erik. So FMPOV we should be good to merge ...
I did two sanity checks on the latest version:
- Native connection to databroker
- Build and run container against databroker

Both worked well.
@SebastianSchildt any additional thoughts, or can we merge this PR?
(Still) works for me, thanks!
lgtm 🍸
The dbcfeeder now supports a new command line switch '--lax'
which can be used to disable strict parsing of CAN frames.
This is helpful with CAN dump files that contain frames that do
not carry the default 8 bytes of data.