This repository has been archived by the owner on Sep 2, 2024. It is now read-only.

Add authorization support for KUKSA.val Databroker
Also major refactoring to move client-specific code out of main file.
erikbosch committed Apr 13, 2023
1 parent 0d83e1e commit 81ec953
Showing 17 changed files with 620 additions and 288 deletions.
2 changes: 2 additions & 0 deletions .flake8
@@ -0,0 +1,2 @@
[flake8]
max_line_length = 120
16 changes: 16 additions & 0 deletions .github/workflows/pre-commit.yml
@@ -0,0 +1,16 @@
name: pre-commit

on: [pull_request]

jobs:
  pre-commit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          # required to grab the history of the PR
          fetch-depth: 0
      - uses: actions/setup-python@v3
      - uses: pre-commit/action@v3.0.0
        with:
          extra_args: --color=always --from-ref ${{ github.event.pull_request.base.sha }} --to-ref ${{ github.event.pull_request.head.sha }}
17 changes: 17 additions & 0 deletions .pre-commit-config.yaml
@@ -0,0 +1,17 @@
# See https://pre-commit.com for more information
# See https://pre-commit.com/hooks.html for more hooks
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v3.2.0
    hooks:
      - id: trailing-whitespace
        exclude_types: ["dbc"]
      - id: end-of-file-fixer
        exclude_types: ["dbc"]
      - id: check-yaml
      - id: check-added-large-files

  - repo: https://github.com/pycqa/flake8
    rev: '6.0.0'
    hooks:
      - id: flake8
7 changes: 7 additions & 0 deletions README.md
@@ -10,3 +10,10 @@ Name | Description
[SOME/IP feeder](./someip2val) | SOME/IP feeder for KUKSA.val Databroker
[DDS Provider](./dds2val) | DDS provider for KUKSA.val Databroker
[Replay](./replay) | KUKSA.val Server replay script for previously recorded files, created by providing KUKSA.val Server with `--record` argument

## Pre-commit setup
This repository is set up to use [pre-commit](https://pre-commit.com/) hooks.
Use `pip install pre-commit` to install pre-commit.
After you clone the project, run `pre-commit install` to install pre-commit into your git hooks.
Pre-commit will now run on every commit.
Whenever you clone a project that uses pre-commit, running `pre-commit install` should always be the first thing you do.
2 changes: 1 addition & 1 deletion dbc2val/CHANGELOG.md
@@ -5,4 +5,4 @@ This file lists important changes to dbc2val
## Refactoring and changed configuration format (2023-02)

Feeder refactored and new mapping format based on VSS introduced, see [documentation](mapping.md).
-Old mapping format no longer supported.
+Old mapping format no longer supported.
6 changes: 4 additions & 2 deletions dbc2val/Dockerfile
@@ -29,9 +29,11 @@ RUN python3 -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"

RUN /opt/venv/bin/python3 -m pip install --upgrade pip \
-    && pip3 install --no-cache-dir -r requirements.txt
+    && pip3 install --pre --no-cache-dir -r requirements.txt

-RUN pip3 install wheel scons && pip3 install pyinstaller patchelf==0.17.0.0 staticx
+# staticx v0.13.8 cannot use pyinstaller 5.10.0
+# see https://github.com/JonathonReinhart/staticx/issues/235
+RUN pip3 install wheel scons && pip3 install pyinstaller==5.9.0 patchelf==0.17.0.0 staticx

# By default we use certificates and tokens from kuksa_certificates, so they must be included
RUN pyinstaller --collect-data kuksa_certificates --hidden-import can.interfaces.socketcan --clean -F -s dbcfeeder.py
64 changes: 47 additions & 17 deletions dbc2val/Readme.md
@@ -46,9 +46,11 @@ $ python -V
3. Install the needed python packages

```console
-$ pip install -r requirements.txt
+$ pip install --pre -r requirements.txt
```

*Note: `--pre` is currently needed because dbcfeeder relies on a pre-release of kuksa-client*

4. If you want to run tests and linters, you will also need to install development dependencies

```console
@@ -104,19 +106,32 @@ A smaller excerpt from the above sample, with fewer signals.

## Configuration

| Command Line Argument | Environment Variable | Config File Property | Default Value | Description |
|:----------------------|:--------------------------------|:------------------------|:---------------------------------|-----------------------|
-| *--config* | - | - | - | Configuration file |
+| *--config* | - | - | *See below* | Configuration file |
| *--dbcfile* | *DBC_FILE* | *[can].dbc* | | DBC file used for parsing CAN traffic |
| *--dumpfile* | *CANDUMP_FILE* | *[can].candumpfile* | | Replay recorded CAN traffic from dumpfile |
| *--canport* | *CAN_PORT* | *[can].port* | | Read from this CAN interface |
| *--use-j1939* | *USE_J1939* | *[can].j1939* | `False` | Use J1939 when decoding CAN frames. Setting the environment variable to any value is equivalent to activating the switch on the command line. |
| *--use-socketcan* | - | - | `False` | Use SocketCAN (overriding any use of --dumpfile) |
-| *--mapping* | *MAPPING_FILE* | *[general].mapping* | `mapping/vss_3.1.1/vss_dbc.json` | Mapping file used to map CAN signals to databroker datapoints. Take a look on usage of the mapping file |
+| *--mapping* | *MAPPING_FILE* | *[general].mapping* | `mapping/vss_3.1.1/vss_dbc.json` | Mapping file used to map CAN signals to databroker datapoints. |
| *--server-type* | *SERVER_TYPE* | *[general].server_type* | `kuksa_databroker` | Which type of server the feeder should connect to (`kuksa_val_server` or `kuksa_databroker`) |
-| - | *VDB_ADDRESS* | - | `127.0.0.1:55555` | The IP address/host name and port number of the databroker (only applicable for server type `kuksa_databroker`) |
| - | *DAPR_GRPC_PORT* | - | - | Override broker address & connect to DAPR sidecar @ 127.0.0.1:DAPR_GRPC_PORT |
-| - | *VEHICLEDATABROKER_DAPR_APP_ID* | - | - | Add dapr-app-id metadata |
+| - | *KUKSA_ADDRESS* | *[general].ip* | `127.0.0.1` | IP address for Server/Databroker |
+| - | *KUKSA_PORT* | *[general].port* | `55555` | Port for Server/Databroker |
+| *--tls* | - | *[general].tls* | `False` | Shall TLS be used for the Server/Databroker connection? |
+| - | - | *[general].token* | *Undefined* | Token path. Only needed if Databroker/Server requires authentication |
+| - | *VEHICLEDATABROKER_DAPR_APP_ID* | - | - | Add dapr-app-id metadata. Only relevant for KUKSA.val Databroker |

*Note that the [default config file](config/dbc_feeder.ini) includes default Databroker settings and must be modified if you intend to use it for KUKSA.val Server*

If `--config` is not given, the dbcfeeder will look for configuration files in the following locations:

* `/config/dbc_feeder.ini`
* `/etc/dbc_feeder.ini`
* `config/dbc_feeder.ini`

The first one found will be used.
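As an illustration, the lookup order described above could be sketched in Python as follows. This is a hedged sketch, not dbcfeeder's actual code: `find_config`, its parameters, and the assumption that all three candidates use the `.ini` extension are this example's own.

```python
import os

# Candidate locations, in the order described above (assumed here to all
# use the .ini extension).
CONFIG_CANDIDATES = [
    "/config/dbc_feeder.ini",
    "/etc/dbc_feeder.ini",
    "config/dbc_feeder.ini",
]


def find_config(explicit_path=None, candidates=CONFIG_CANDIDATES):
    """Return the config file to use: an explicit --config path wins,
    otherwise the first existing candidate, otherwise None."""
    if explicit_path:
        return explicit_path
    for path in candidates:
        if os.path.isfile(path):
            return path
    return None
```

Returning `None` for the no-config case (rather than raising) is just one possible design choice for this sketch.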

Configuration options have the following priority (highest at top).

@@ -218,6 +233,26 @@ docker run --net=host -e LOG_LEVEL=INFO dbcfeeder:latest --server-type kuksa_da
docker run --net=host -e LOG_LEVEL=INFO dbcfeeder:latest --server-type kuksa_val_server
```

### KUKSA.val Server/Databroker Authentication when using Docker

The docker container contains default certificates for KUKSA.val Server, and if the configuration file does not
specify a token file, the [default token file](https://github.com/eclipse/kuksa.val/blob/master/kuksa_certificates/jwt/all-read-write.json.token)
provided by [kuksa-client](https://github.com/eclipse/kuksa.val/tree/master/kuksa-client) will be used.

No default token is included for KUKSA.val Databroker. Instead the user must specify the token file in the config file.
The token must also be available to the running docker container, for example by mounting the directory containing it
when starting the container. The example below assumes that the token file
[provide-all.token](https://github.com/eclipse/kuksa.val/blob/master/jwt/provide-all.token) is used and that `kuksa.val`
has been cloned to `/home/user/kuksa.val`. The token can then be accessed by mounting the `jwt` folder using the `-v`
option and specifying `token=/jwt/provide-all.token` in the [default configuration file](config/dbc_feeder.ini).

```console
docker run --net=host -e LOG_LEVEL=INFO -v /home/user/kuksa.val/jwt:/jwt dbcfeeder:latest
```

*Note that authentication in KUKSA.val Databroker is deactivated by default, in which case no token needs to be given!*
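As a sketch of how the mounted token file reaches the feeder via the config, the snippet below reads the `token` setting with Python's `configparser`. This is illustrative only: `read_token_path` is a made-up helper, not dbcfeeder's actual code, though the `[general] token` setting matches the configuration table above.

```python
import configparser


def read_token_path(config_file):
    """Return the token file path from the [general] section, or None if unset."""
    parser = configparser.ConfigParser()
    parser.read(config_file)  # silently yields an empty config if the file is missing
    return parser.get("general", "token", fallback=None)
```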

## Mapping file

The mapping file describes mapping between VSS signals and DBC signals.
@@ -234,26 +269,22 @@ To set the log level to DEBUG
$ LOG_LEVEL=debug ./dbcfeeder.py
```

-Set log level to INFO, but for dbcfeeder.broker set it to DEBUG
+Set log level to INFO, but for dbcfeederlib.databrokerclientwrapper set it to DEBUG

```console
-$ LOG_LEVEL=info,dbcfeeder.broker_client=debug ./dbcfeeder.py
+$ LOG_LEVEL=info,dbcfeederlib.databrokerclientwrapper=debug ./dbcfeeder.py
```

or, since INFO is the default log level, this is equivalent to:

```console
-$ LOG_LEVEL=dbcfeeder.broker_client=debug ./dbcfeeder.py
+$ LOG_LEVEL=dbcfeederlib.databrokerclientwrapper=debug ./dbcfeeder.py
```
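A comma-separated `LOG_LEVEL` value like the ones above could be parsed along these lines. This is a sketch under assumptions: `parse_log_levels` is a hypothetical helper, not the feeder's real implementation.

```python
import logging


def parse_log_levels(spec, default="INFO"):
    """Split a LOG_LEVEL-style spec into (root_level, {logger_name: level}).

    A bare entry sets the root level; "name=level" entries override
    individual loggers.
    """
    root = default
    per_logger = {}
    for item in spec.split(","):
        item = item.strip()
        if not item:
            continue
        if "=" in item:
            name, level = item.split("=", 1)
            per_logger[name] = level.upper()
        else:
            root = item.upper()
    return root, per_logger


# Apply the parsed levels: root level first, then per-logger overrides.
root, per_logger = parse_log_levels("info,dbcfeederlib.databrokerclientwrapper=debug")
logging.basicConfig(level=root)
for name, level in per_logger.items():
    logging.getLogger(name).setLevel(level)
```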

Available loggers:
- dbcfeeder
-- dbcfeeder.broker_client
-- databroker
-- dbcreader
-- dbcmapper
- can
- j1939
+- dbcfeederlib.* (one for every file in the dbcfeederlib directory)
+- kuksa-client (to control logging provided by [kuksa-client](https://github.com/eclipse/kuksa.val/tree/master/kuksa-client))

## ELM/OBDLink support

@@ -284,4 +315,3 @@ large-sized messages that are delivered with more than one CAN frame because the
than a CAN frame's maximum payload of 8 bytes. To enable the J1939 mode, simply put `--use-j1939` in the command when running `dbcfeeder.py`.

Support for J1939 is provided by means of the [can-j1939 package](https://pypi.org/project/can-j1939/).

38 changes: 24 additions & 14 deletions dbc2val/config/dbc_feeder.ini
@@ -17,22 +17,32 @@ server_type = kuksa_databroker
# VSS mapping file
mapping = mapping/vss_3.1.1/vss_dbc.json

[kuksa_val_server]
# kuksa_val_server IP address or host name
# Same configs used for KUKSA.val Server and Databroker
# Note that the default values below correspond to Databroker
# Default values for KUKSA.val Server are commented below

# IP address for server (KUKSA.val Server or Databroker)
ip = 127.0.0.1
# ip = localhost

# Port for server (KUKSA.val Server or Databroker)
port = 55555
# port = 8090
# protocol = ws
# insecure = False
# JWT security token file
# token=../../kuksa_certificates/jwt/super-admin.json.token

[kuksa_databroker]
# kuksa_databroker IP address or host name
# ip = 127.0.0.1
# port = 55555
# protocol = grpc
# kuksa_databroker does not yet support security features
# insecure = True

# Shall TLS be used (default False for Databroker, True for KUKSA.val Server)
tls = False
# tls = True

# Token file for authorization.
# Default behavior differs between servers:
# For KUKSA.val Databroker the KUKSA.val default token is not included in packages and containers.
# If you run your Databroker so that it requires authentication you must specify a token.
# The example below works if you have cloned kuksa.val in parallel to kuksa.val.feeders
# token=../../kuksa.val/jwt/provide-all.token
# For KUKSA.val Server the default behavior is to use the token provided as part of kuksa-client,
# so you only need to specify a token if you want to use a different one,
# possibly like below
# token=../../kuksa.val/kuksa_certificates/jwt/super-admin.json.token

[can]
# CAN port, use elmcan to start the elmcan bridge
2 changes: 0 additions & 2 deletions dbc2val/createvcan.sh
@@ -68,5 +68,3 @@ fi
virtualCanConfigure

echo "createvcan: Done."


