feat: get token by token_id API and add transactions count to token return #189
Conversation
This pull request introduces 1 alert when merging 87a1666 into f9ec34d - view on LGTM.com new alerts:
@@ -51,18 +50,12 @@ def get_token_information(self, token_id: str) -> dict:
                    'field': 'address.keyword'
                }
            },
            'transaction_sum': {
This is not only unneeded, since this information is now gathered from the token index, but it was also invalid: the aggregation is limited by Elasticsearch's max aggregation size (capped at 1000 rows).
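For context, a minimal sketch of the approach this comment describes, assuming an elasticsearch-py client, an index named token, and a transactions_count field kept up to date by the indexer (the index and field names are illustrative, not confirmed by this diff):

```python
# Minimal sketch: read the pre-computed count from the token document instead
# of summing per-address buckets with an aggregation, which is capped by the
# Elasticsearch aggregation size and silently undercounts large tokens.
from elasticsearch import Elasticsearch

es = Elasticsearch()  # connection settings omitted


def get_token(token_id: str, index: str = "token") -> dict:
    body = {
        "size": 1,
        "query": {"term": {"id": token_id}},
    }
    result = es.search(index=index, body=body)
    hits = result["hits"]["hits"]
    # The document already carries transactions_count, so no aggregation runs.
    return hits[0]["_source"] if hits else {}
```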
This pull request introduces 1 alert when merging 3858cbf into f9ec34d - view on LGTM.com new alerts:
… is now returned by the token index
force-pushed from 3858cbf to 9ee1c88
force-pushed from 8284036 to de57330
force-pushed from 471f188 to 821048d
force-pushed from 821048d to ad0f604
handlers/token_api.py
Outdated
response = token_api.get_token(token_id)

if len(response['hits']) == 0:
    raise ApiError("Token not found")
This will raise a 500, right? Should we raise ApiError("not_found") to return a 404?
Yes we should, thanks!
Done in d45ebd0
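For illustration, a hedged sketch of how an error key such as "not_found" can be mapped to an HTTP status at the handler boundary; ApiError, the key-to-status table, and the handler shape below are assumptions, not this project's actual implementation:

```python
# Hypothetical sketch only: ApiError, STATUS_BY_KEY and the handler below are
# illustrative stand-ins, not this project's actual code.
import json


class ApiError(Exception):
    """Carries a short error key that the handler maps to an HTTP status."""

    def __init__(self, key: str):
        super().__init__(key)
        self.key = key


STATUS_BY_KEY = {
    "not_found": 404,        # missing token -> 404 instead of a generic 500
    "invalid_parameter": 400,
}

_TOKENS = {"00abc": {"id": "00abc", "transactions_count": 42}}  # stand-in data


def get_token(token_id: str) -> dict:
    token = _TOKENS.get(token_id)
    if token is None:
        raise ApiError("not_found")
    return token


def handler(event, context):
    try:
        token = get_token(event["pathParameters"]["token_id"])
        return {"statusCode": 200, "body": json.dumps(token)}
    except ApiError as exc:
        status = STATUS_BY_KEY.get(exc.key, 500)  # unknown keys fall back to 500
        return {"statusCode": status, "body": json.dumps({"error": exc.key})}
```

Calling handler({"pathParameters": {"token_id": "missing"}}, None) in this sketch then yields a 404 response instead of a generic 500.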
serverless.yml
Outdated
- name: request.querystring.search_text
- name: request.querystring.sort_by
- name: request.querystring.order
- name: request.querystring.search_after
- name: request.header.origin
Suggested change: replace the querystring parameters above with the token_id path parameter:
- name: request.path.token_id
- name: request.header.origin
@@ -268,6 +268,31 @@ functions:
            - name: request.querystring.search_after
            - name: request.header.origin

  # This get_token will get tokens from the elasticsearch index
  es_get_token_handler:
I know this project's style is to use snake_case for lambda names, but AWS translates them to camel case, replacing each underscore with "Underscore" in some internal names (e.g. the policy statement name). This makes those names longer and makes it easier to hit the length limit, failing the deploy.
I think we should create new lambdas with camelCase names to avoid these issues.
Wow, that's weird!
I'll open an issue to refactor lambda names to camelCase
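A rough illustration of the expansion described above; this is an approximation for clarity, not the Serverless Framework's exact naming code:

```python
# Approximate how a snake_case function name grows when underscores are
# expanded for generated names (e.g. CloudFormation logical IDs), which is why
# long names can hit length limits and fail the deploy.
def approximate_logical_id(function_name: str) -> str:
    expanded = function_name.replace("-", "Dash").replace("_", "Underscore")
    return expanded[:1].upper() + expanded[1:] + "LambdaFunction"


print(approximate_logical_id("es_get_token_handler"))
# e.g. EsUnderscoregetUnderscoretokenUnderscorehandlerLambdaFunction
print(approximate_logical_id("esGetTokenHandler"))
# e.g. EsGetTokenHandlerLambdaFunction (shorter, no expansion)
```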
Acceptance Criteria
- transactions_count attribute added to the token response
- transactions aggregation removed from the token information API

Security Checklist