
New fuzz harness + refactoring into fuzz/ #2273

Merged: 3 commits merged into eProsima:master from phretor:fuzz/new-harnesses on Oct 20, 2021

Conversation

@phretor (Contributor) commented on Oct 15, 2021

  • Added fuzz_XMLprofiles harness (an illustrative sketch of such a harness follows this list)
  • Prepared fuzz_processCDRMsg for increased coverage
  • Moved all harnesses into fuzz/
  • Refactored make files
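Illustrative sketch, not the exact code added in this PR: a libFuzzer-style harness for the XML profiles parser might look roughly like the following, assuming the `XMLProfileManager::loadXMLString(const char*, size_t)` entry point from the fastrtps xmlparser module.

```cpp
// Hedged sketch of a libFuzzer entry point for the XML profiles parser.
// Not necessarily the exact fuzz_XMLprofiles harness added by this PR; it
// assumes the XMLProfileManager::loadXMLString(const char*, size_t) API
// from the fastrtps xmlparser module.
#include <cstddef>
#include <cstdint>

#include <fastrtps/xmlparser/XMLProfileManager.h>

extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size)
{
    using eprosima::fastrtps::xmlparser::XMLProfileManager;

    // Feed the raw fuzzer input to the XML profiles parser. The return
    // value is ignored: the fuzzer only looks for crashes and sanitizer
    // reports, not for parse errors.
    XMLProfileManager::loadXMLString(reinterpret_cast<const char*>(data), size);

    return 0;
}
```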

I'll send a PR to google/oss-fuzz with these commits to conclude the integration of the refactored fuzzers. I'll keep an eye on the build process and fix it if something goes wrong.

https://github.com/google/oss-fuzz/compare/master...phretor:fast-dds/refactor?expand=1

Signed-off-by: Federico Maggi <fede@maggi.cc>

@phretor (Contributor, Author) commented on Oct 15, 2021

Once #2273 is merged, google/oss-fuzz#6607 should succeed.

@phretor (Contributor, Author) commented on Oct 15, 2021

I checked the output of all 3 failed tests (fastdds_aarch64, fastdds_mac, and fastdds_windows), but none of the failures seems to be caused by this commit.

Can someone take a look?

@MiguelCompany self-requested a review on October 18, 2021 at 05:41
@MiguelCompany (Member) left a comment:


Failed tests are not related to the changes in this PR.

Apart from the comments below, please remove uncrustify.cfg (we have it in a different repo).

Two review threads on fuzz/C++/fuzz_processCDRMsg/fuzz_processCDRMsg.cxx (outdated, resolved)

Inline review thread on these lines of fuzz_processCDRMsg.cxx:

    return true;

    #ifdef EMULATE_WRITER
@MiguelCompany (Member):

What's the purpose of this code? It seems the writer is not used after it is created ...

@phretor (Contributor, Author):

We can leave it or we can drop it. My intention was to keep it there for future testing, but right now it is too unstable in a fuzzing environment, because threading makes fuzzing runs non-deterministic. I don't know Fast-DDS well enough to bypass the threading while leaving the rest untouched, but if you can see a way to integrate a "full" RTPS system around the simple processCDRMsg, I think this could increase coverage and possibly find more bugs.

@MiguelCompany (Member):

I think that could be achieved by creating a custom transport. You will need to implement TransportInterface and TransportDescriptorInterface.

You would then need to set useBuiltinTransports = false on RTPSParticipantAttributes, and also add the new transport descriptor to userTransports.
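Illustrative sketch of the configuration described above, not code from this PR: `FuzzTransportDescriptor` is a hypothetical user-written descriptor implementing TransportDescriptorInterface (with a matching TransportInterface implementation); header paths and the `RTPSDomain::createParticipant` call assume a Fast-DDS 2.x layout.

```cpp
// Hedged sketch of the suggestion above: disable the builtin transports and
// register a custom one. FuzzTransportDescriptor is a hypothetical
// user-written class implementing TransportDescriptorInterface (with a
// companion TransportInterface implementation); it is not part of Fast-DDS.
// Header paths assume the Fast-DDS 2.x include layout.
#include <memory>

#include <fastdds/rtps/RTPSDomain.h>
#include <fastdds/rtps/attributes/RTPSParticipantAttributes.h>

#include "FuzzTransportDescriptor.hpp"  // hypothetical custom transport descriptor

using namespace eprosima::fastrtps::rtps;

RTPSParticipant* create_fuzz_participant()
{
    RTPSParticipantAttributes attrs;

    // Do not create the builtin UDP/SHM transports (and their threads).
    attrs.useBuiltinTransports = false;

    // Register the custom transport descriptor; the participant builds the
    // corresponding TransportInterface from it.
    attrs.userTransports.push_back(std::make_shared<FuzzTransportDescriptor>());

    return RTPSDomain::createParticipant(0, attrs);
}
```

The idea, as discussed above, would be to run a full RTPS participant around processCDRMsg without the non-determinism introduced by the builtin network threads.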

@phretor (Contributor, Author):

@MiguelCompany (Member):

Yeah. In particular, these lines are the ones related to setting the custom transport.

@phretor (Contributor, Author):

OK. I'm eyeing the SharedMemTransportDescriptor as a good starting point for a custom transport that doesn't involve any threading, but I won't be able to integrate it soon, unless you think SharedMemTransportDescriptor is already fit for the purpose.

In any case, since the integration of this harness is blocking the integration of the new XML harness, I think I'll just remove this extended feature from this PR so it can hopefully be merged, and postpone it to a separate PR.

- Appended `uncrustify.cfg` pattern into `.gitignore`
- Moved all harnesses into `fuzz/`
- Refactored make files

I sent a PR to google/oss-fuzz with these commits to conclude the integration of the refactored fuzzers. I'll keep an eye on the build process and fix it if something goes wrong.

https://github.com/google/oss-fuzz/compare/master...phretor:fast-dds/refactor?expand=1
Signed-off-by: Federico Maggi <fede@maggi.cc>
@phretor force-pushed the fuzz/new-harnesses branch from cba765e to 1a14650 on October 18, 2021 at 16:08
@phretor (Contributor, Author) commented on Oct 18, 2021

Updated as requested. Failing jobs should be unrelated to this PR.

@MiguelCompany (Member) left a comment:


It seems you are still keeping src/cpp/rtps/messages/fuzz_processCDRMsg.cpp

Signed-off-by: Federico Maggi <federico@maggi.cc>
@phretor (Contributor, Author) commented on Oct 19, 2021

> It seems you are still keeping src/cpp/rtps/messages/fuzz_processCDRMsg.cpp

Whoops! Too many git resets and I forgot about this :-) It should be removed now.

Signed-off-by: Federico Maggi <federico@maggi.cc>
@MiguelCompany (Member) left a comment:


LGTM

@MiguelCompany merged commit b2bbea8 into eProsima:master on Oct 20, 2021
@phretor deleted the fuzz/new-harnesses branch on October 20, 2021 at 12:33
@phretor restored the fuzz/new-harnesses branch on October 20, 2021 at 13:05