Feature Description
Hi, I'm Joyce and I'd like to suggest adopting a security policy that both allows security researchers to privately report security vulnerabilities in LLaMA C++ and informs users of common security practices they should consider when using it.
A security policy is a standard GitHub document (SECURITY.md), surfaced in the repository's "Security" tab, that instructs users on how to report vulnerabilities safely and efficiently. It also appears in the About section on the right and as a tab alongside README and LICENSE.
Motivation
This information will benefit:
the user, who will have guidelines on how to safely run a model for their application
the project, which can avoid receiving false-positive vulnerability reports while still allowing security researchers to disclose newly found vulnerabilities
Possible Implementation
I'll send a PR along with this issue with some suggestions covering topics such as data privacy, untrusted models, and untrusted inputs.
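To make the idea concrete, here is a minimal sketch of what such a SECURITY.md could look like. The section names, the reliance on GitHub's private vulnerability reporting feature, and the specific user guidance are assumptions for illustration, not the contents of the actual PR:

```markdown
# Security Policy

## Reporting a Vulnerability

Please do not open public issues for security vulnerabilities.
Instead, report them privately via GitHub's private vulnerability
reporting ("Report a vulnerability" under the Security tab).
We will acknowledge the report and coordinate disclosure with you.

## Security Considerations for Users

- **Untrusted models:** only load model files from sources you
  trust; a model file is parsed input like any other.
- **Untrusted inputs:** treat prompts and grammar files from third
  parties as untrusted data.
- **Data privacy:** be mindful of what data you send to a model,
  especially when running a network-facing server.
```

GitHub picks up a SECURITY.md placed in the repository root, `docs/`, or `.github/`, and links it from the Security tab automatically.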
Disclosure: I work on Google's Open Source Security Upstream Team, which suggests security improvements to open source projects.