
Test logger calls in tests for NodeProcessor #2463

Merged: 7 commits merged into MarkBind:master on Mar 17, 2024

Conversation

@luminousleek (Contributor) commented Mar 15, 2024

What is the purpose of this pull request?

  • Documentation update
  • Bug fix
  • Feature addition or enhancement
  • Code maintenance
  • DevOps
  • Improve developer experience
  • Others, please explain: Improve tests

Partially addresses #2099
Related to #2053

Overview of changes:

  • Mocked the logger
  • Added expectations to test if the logger is called with the correct input
  • Added constants for expected logger input

Anything you'd like to highlight/discuss:

Mocking vs Spying

I chose to mock the logger instead of spying on it, so as to decouple the logger from this test. If I had spied on the logger, its actual implementation would have been used, which may consume (slightly) more resources. Errors from winston might also interfere with the test (even though that's very unlikely).
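For context, a rough sketch of the difference (for illustration only; the spy lines are commented out since this PR mocks rather than spies, and EXPECTED_WARN_MESSAGE is a placeholder standing in for the constants added to NodeProcessor.data.ts):

// Spying: the real logger (and winston underneath) still runs.
// const logger = require('../../../src/utils/logger');
// const warnSpy = jest.spyOn(logger, 'warn');

// Mocking: the module is replaced entirely, so nothing from winston runs.
jest.mock('../../../src/utils/logger', () => ({
  warn: jest.fn(),
}));
const logger = require('../../../src/utils/logger');

// The tests can then assert on how the mock was called, e.g.
// expect(logger.warn).toHaveBeenCalledWith(EXPECTED_WARN_MESSAGE);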

One side effect of mocking the logger is that its output will not be printed to the console, as seen in the screenshot below (left is spied, right is mocked).

[Screenshot: test console output comparison (left: spied logger, right: mocked logger)]

If the logger output still needs to be printed to the console, a mock implementation of the warn method could look something like this

jest.mock('../../../src/utils/logger', () => ({
  warn: jest.fn(msg => console.log(`warn: ${msg}`)),
}));

and would result in this being printed to the console

[Screenshot: console output with the mocked warn implementation above]

which does look pretty clunky. Do let me know which approach you prefer.

Inconsistent naming in NodeProcessor.data.ts

The naming conventions of the test inputs in NodeProcessor.data.ts are inconsistent. Many of the elements there have a test that checks whether slots override the attributes of the elements, but these tests are named in two different ways:

  1. PROCESS_[ELEMENT]_ATTRIBUTES_NO_OVERRIDE
  2. PROCESS_[ELEMENT]_[ATTRIBUTE]_SLOT_TAKES_PRIORITY

I think that's why the author of #2053 wrote another constant called PROCESS_PANEL_HEADER_SLOT_TAKES_PRIORITY when the constant PROCESS_PANEL_ATTRIBUTES_NO_OVERRIDE tests for exactly the same thing. That is also why I have an expect statement after each of these two tests: in both cases the logger is called with the same input.

A follow-up PR could make the naming convention consistent and remove the unnecessary test.

Logger warnings still inconsistent

With the merging of #2053, Panels, Dropdowns, Modals and Popovers print warnings to the console when slots override the element attributes. However, other elements such as Quizzes and QOptions do not print warnings when this happens. This is because MdAttributeRenderer.ts does not check every attribute for an overriding slot, so the logger does not warn when the unchecked attributes are overridden.

A simple (though untested) fix could be to add a new method like the following

processAttribute(node, attribute, isInline, slotName = attribute) {
    // only fall back to the attribute when no slot overrides it
    if (!this.hasSlotOverridingAttribute(node, attribute, slotName)) {
        this.processAttributeWithoutOverride(node, attribute, isInline, slotName);
    }
}

and then replace the existing processAttributeWithoutOverride calls with it.
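For illustration, a hypothetical call site (the attribute name here is made up for the example; the actual attributes vary per component) would change from

this.processAttributeWithoutOverride(node, 'header', true);

to

this.processAttribute(node, 'header', true);

so that every attribute goes through the slot-override check before it is processed.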

Testing instructions:

Proposed commit message: (wrap lines at 72 characters)

Test logging calls in NodeProcessor

Previously, calls to the logger were not tested in the
NodeProcessor tests.

Let's mock the logger and test if it is being called correctly.

Checklist: ☑️

  • Updated the documentation for feature additions and enhancements
  • Added tests for bug fixes or features
  • Linked all related issues
  • No unrelated changes

Reviewer checklist:

Indicate the SEMVER impact of the PR:

  • Major (when you make incompatible API changes)
  • Minor (when you add functionality in a backward compatible manner)
  • Patch (when you make backward compatible bug fixes)

At the end of the review, please label the PR with the appropriate label: r.Major, r.Minor, r.Patch.

Breaking change release note preparation (if applicable):

  • To be included in the release note for any feature that is made obsolete/breaking

Give a brief explanation note about:

  • what was the old feature that was made obsolete
  • any replacement feature (if any), and
  • how the author should modify his website to migrate from the old feature to the replacement feature (if possible).

@kaixin-hc kaixin-hc added this to the v5.3.1 milestone Mar 15, 2024
codecov bot commented Mar 15, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 48.97%. Comparing base (8a5688f) to head (cab2eb5).
Report is 4 commits behind head on master.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #2463      +/-   ##
==========================================
+ Coverage   48.90%   48.97%   +0.06%     
==========================================
  Files         124      124              
  Lines        5247     5254       +7     
  Branches     1112     1112              
==========================================
+ Hits         2566     2573       +7     
- Misses       2373     2376       +3     
+ Partials      308      305       -3     


@LamJiuFong (Contributor) commented Mar 15, 2024

@luminousleek Thanks for the great work! I have a few ideas:

  1. I think it is fine not to print the message to the console; using toHaveBeenCalledWith() to check the logger's input is enough to ensure the correctness of the method.

  2. Since we are probably going to use the mock logger and the mock functions in other test files, do you think it is a good idea to create a manual mock instead, so that we do not have to reimplement the mock functions in other files? (A rough sketch of this idea is shown after this list.)
    For your reference, jest-manual-mocks

  3. If I'm not wrong, the mock logger is shared among all the tests in a file; in other words, the mock logger's data is not reset for every test and actually accumulates. I am thinking of adding something like:

afterEach(() => {
    jest.clearAllMocks();
});

to reset the logger to a clean state after every test. It might not be very useful for now, but I think it is safer to do so, because issues might arise in the future (e.g. checking mock.calls.length).
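A rough sketch of the manual mock suggested in point 2 (assuming the logger module lives at src/utils/logger, as the relative path used in this PR suggests; the export shape of the real module may differ):

// src/utils/__mocks__/logger.js (Jest picks this up when a test calls jest.mock)
module.exports = {
  warn: jest.fn(),
  error: jest.fn(),
  info: jest.fn(),
};

// In each test file that needs it, a single line is then enough:
jest.mock('../../../src/utils/logger');

This way the mock implementation lives in one place and does not have to be reimplemented in every test file.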

Please feel free to add your comments!

@luminousleek (Contributor, Author) replied:
Re point 2 (creating a manual mock for reuse in other test files):

Yeah I think this is a good idea, will probably do it in another PR. Thanks for the link.

Re point 3 (resetting the mock logger after every test):

Yup I agree, the length issue didn't occur to me until you mentioned it. Thanks for commenting on this PR!

@itsyme (Contributor) commented Mar 16, 2024

Re @LamJiuFong's suggestions above, in particular point 3 about clearing mocks:

Great point about clearing mocks! I think that is particularly important to note, as bugs that arise from not clearing mocks may be harder to spot when they happen. Great catch!

@yucheng11122017 (Contributor) commented:

Hi @luminousleek, thank you very much! This looks like excellent and detailed work.

  • With regards to mocking vs spying, I think your reasons make sense.
  • With regards to naming conventions, yeah, let's open an issue for this and tackle it in the future.
  • Lastly, with regards to the warnings, I'm actually not too sure why some elements have this warning and others don't. If one of the other developers can advise, that would be great. But if not, I think what you suggested (implementing warnings for all) would be the best way going forward.

@yucheng11122017 (Contributor) left a review:

LGTM!

@luminousleek (Contributor, Author) commented Mar 16, 2024

Re @LamJiuFong's point 3 about resetting the mock logger after every test:

Added the afterEach teardown hook; @yucheng11122017, please do take a look (and sorry for doing this after your review).

Edit: after looking at some code on StackOverflow regarding this, it seems that jest.clearAllMocks() is usually put in a beforeEach hook instead of an afterEach hook. I think this makes a bit more sense for the edge case where this test suite runs after another test suite that didn't reset its mocks after its tests were run.
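In code, that would just mean moving the same call, roughly:

beforeEach(() => {
    // reset recorded calls on all mocks before each test runs
    jest.clearAllMocks();
});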

@itsyme (Contributor) commented Mar 17, 2024

Re @luminousleek's edit above about moving jest.clearAllMocks() to a beforeEach hook:

Good catch! I agree with using beforeEach, as it guarantees that the mocks are cleared before each test in the file runs, rather than depending on another test file to have cleared them. Here's an example for clarity:

Assume test 1 and test 2 are in two separate files and are run sequentially,

In beforeEach:
Test 1 clear mocks -> Test 1 runs -> Test 2 clear mocks -> Test 2 runs

In afterEach:
Test 1 runs -> Test 1 clear mocks -> Test 2 runs -> Test 2 clear mocks

With afterEach, the clearing of mocks for Test 2 relies on Test 1 having cleared them, whereas with beforeEach, if Test 2 fails because mocks were not cleared, it is easier to tell that the cause is a missing beforeEach hook in the Test 2 file itself. This makes each test file responsible for clearing the mocks for its own tests.

@itsyme (Contributor) left a review:

LGTM! Good catch on the use of beforeEach!

@itsyme added the r.Patch (Version resolver: increment by 0.0.1) label on Mar 17, 2024
@yucheng11122017 merged commit f0759d3 into MarkBind:master on Mar 17, 2024
9 checks passed