Proposal: Local File Fetch #11925

Closed
kitsonk opened this issue Sep 6, 2021 · 23 comments · Fixed by #12545
Labels: ext/fetch (related to the ext/fetch), feat (new feature which has been agreed to/accepted), web (related to Web APIs)

Comments

@kitsonk
Contributor

kitsonk commented Sep 6, 2021

Context

Currently, fetch() does not allow file:// schemes to be fetched. The inability to do this is a significant usability issue, making code less portable. It has become quite common in Deno to write isomorphic code that uses import.meta.url as a base for accessing a resource when writing server code. Currently, a user has to write separate code paths depending on whether the resource they are trying to access is local or on the network.

It was requested in #2150, which is the 10th most 👍 open issue at the time of this writing.

That being said, the web platform does not support it, because of the security considerations and undefined behavior. Specifically the Fetch Living Standard says:

For now, unfortunate as it is, file URLs are left as an exercise for the
reader.

When in doubt, return a network error.

The specification does not prohibit the file protocol, but its entire behavior is undefined.

And Node.js libraries providing the fetch() API have decided not to implement local file URLs, partly because of the undefined behavior and partly because of the security concerns that come with Node.js's trust-by-default model.

Firefox

Firefox is the only mainstream browser to support local file:// URLs well, and would be the only one considered as setting a precedent for local file fetch().

Since Firefox 67, by default, local files create their own unique opaque origin, instead of sharing an origin (ref). This means the only file that can be fetched is the file itself. For example, scripts that come from file:///example/test.html can only do fetch("file:///example/test.html") or fetch("./test.html"). Any other request will display Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at … in the console and throw a network error.

If you change the config option privacy.file_unique_origin to false, then you can fetch local files from pages that have a local origin. These are the observed behaviours of fetching local files this way:

  • URLs are relative to the source file/window.location
  • The window.location.origin is opaque, irrespective of it being unique per file or not.
  • If the file is present on the file system, the fetch is resolved as a 200.
  • Irrespective of the method, the response is the same. This means HEAD returns with a body.
  • There are no headers set on the response (including no content-size).
  • If the file is not present on the file system, the Cross-Origin Request Blocked error is logged to the console, and a network error is thrown. (I suspect the behaviour around the Cross-Origin Request Blocked is part of the security mitigation to limit the ability of scripts to try to detect if the privacy.file_unique_origin is true or false).

Since you have to do advanced configuration to enable this, it feels like these behaviours shouldn't overly influence the design, as most code in the wild wouldn't expect to be able to fetch local files.

Solution

Scheme Fetch

Building upon 4.2. Scheme fetch: switch on the current URL's scheme and run the associated steps:

"file"

  1. Run these steps, but abort when the ongoing fetch is terminated:
    1. If request's method is not GET, then return a network error.
    2. Set result to the result of the resolution of op_open_async, with an options argument whose path is set to the result of pathFromURL() for the url and whose open options set read to true.
    3. If the result is an error, return a network error.
    4. Otherwise, set file to an instance of FileStream constructed with the rid from result.
    5. Let response be a new response whose status message is OK.
    6. Set response's body's stream to the file's stream.
    7. Return response.
  2. If aborted, then:
    1. Let aborted be the termination's aborted flag.
    2. If aborted is set, then return an aborted network error.
    3. Return a network error.
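
The steps above can be sketched in TypeScript. This is an illustrative model only: openFile stands in for op_open_async, and the in-memory file table is hypothetical, existing only so the sketch is self-contained.

```typescript
// Illustrative model of the "file" scheme fetch steps above.
// openFile stands in for op_open_async; the in-memory file table is
// hypothetical, not Deno's actual internals.
type FileHandle = { rid: number; chunks: Uint8Array[] };

const files = new Map<string, Uint8Array[]>([
  ["/example/test.txt", [new TextEncoder().encode("hello")]],
]);

function openFile(path: string): FileHandle | Error {
  const chunks = files.get(path);
  return chunks ? { rid: 1, chunks } : new Error("NotFound");
}

function schemeFetchFile(url: URL, method: string): Response {
  // Step 1.1: anything other than GET is a network error.
  if (method !== "GET") return Response.error();
  // Steps 1.2-1.3: resolve the open op; any failure becomes a network error.
  const result = openFile(url.pathname);
  if (result instanceof Error) return Response.error();
  // Steps 1.4-1.7: stream the file's chunks as the body of a 200 OK response.
  const stream = new ReadableStream<Uint8Array>({
    start(controller) {
      for (const chunk of result.chunks) controller.enqueue(chunk);
      controller.close();
    },
  });
  return new Response(stream, { status: 200, statusText: "OK" });
}
```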

Body used

TBC steps to close the FileStream.

FileStream

An internal class which encapsulates a success result of op_open_async and performs op_read_async to read and enqueue chunks.

class FileStream {
  constructor(rid: number);

  readonly closed: boolean;
  readonly rid: number;
  readonly readable: ReadableStream<Uint8Array>;

  close(): Promise<void>;
}

The readable should have a similar chunk queuing strategy to reading the body of a network request, in that it uses the "bytes" type and a pull() algorithm, which provides backpressure on the stream.

Considerations

  • Security considerations should be surfaced as errors thrown from op_open_async and reported in a meaningful way. Because this builds upon the Fetch specification, any error condition, including a security error or a not-found error (as for a blob URL), returns a network error.
  • Relative URLs are not supported. Determining a base is complex and is prone to a lot of issues. If we have an established standard around the root of a program, related to auto-discovery of a project configuration and have an established pattern around an implicit window.location and worker .location, then we could consider it, but using import.meta.url is the only viable near-term solution.
  • Additional features, like range requests, last modified headers, supporting HEAD requests, etc. could be added in the future. It feels appropriate to simply focus on a minimum-viable solution.
  • It is best to leave content-type up to the consumer, as it is a very opinionated thing to determine content-types based on extensions in the file system.
  • Unlike blobs, the content-length header is not set, as the implementation reads the file in a streaming fashion, meaning that the length of the content can change as the content is being read. Therefore consumers who need to "forward" the content length will need to calculate the length through some other mechanism. (Also, note that this header is not set on Firefox when fetching local files).
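
To illustrate that last point, a consumer that needs to forward a length could buffer the body and measure it. This is a hypothetical consumer-side sketch, not part of the proposal:

```typescript
// Hypothetical sketch: since the proposal sets no content-length header,
// a consumer that must forward a length can buffer the streamed body
// and measure it, at the cost of losing streaming.
async function withContentLength(response: Response): Promise<Response> {
  const body = new Uint8Array(await response.arrayBuffer());
  const headers = new Headers(response.headers);
  headers.set("content-length", String(body.byteLength));
  return new Response(body, {
    status: response.status,
    statusText: response.statusText,
    headers,
  });
}
```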
@kitsonk kitsonk added feat new feature (which has been agreed to/accepted) web related to Web APIs ext/fetch related to the ext/fetch labels Sep 6, 2021
@Jamesernator

  • It is best to leave content-type up to the consumer, as it is a very opinionated thing to determine content-types based on extensions in the file system.

I think it would be nice to at least have consistency with import for MIME types, this would only affect .js/.ts and soon .json (#7623).

This would be particularly important if Deno ever supported something like service workers, in which case Deno's fetching on import would be observable (assuming web compatible behaviour).

@kitsonk
Contributor Author

kitsonk commented Sep 6, 2021

@Jamesernator internally, for local files, a content-type/media type/mime type is not assigned to modules, and it won't be with import assertions either. So there is no consistency to be had. Setting a content-type only assists a very limited set of use cases, and is inconsistent with the one reference implementation's behaviour. In all the other cases when the content-type is set, it is never derived from the file extension: it is supplied either by a server or as part of the URL (in the case of data and blob URLs).

Setting it by extension is problematic, especially since the ts extension is registered with IANA as video/mp2t. So if we set it, we would likely have to provide a mechanism to "override" it, as it won't suit some people in some use cases.

@Jamesernator

Jamesernator commented Sep 6, 2021

Setting it by extension is problematic, especially since the extension ts in IANA is video/mp2t. So if we set it, we would have to provide a mechanism to "override" it likely, as it won't suit some people in some use cases.

To be clear, I'm not suggesting you recognize IANA types at all. But rather whenever Deno would interpret a file as having some (content) type, it should treat it as such consistently. i.e. If Deno treats file:///path/to/module.js as being JS (i.e. equivalent to content type text/javascript), it should do so everywhere (i.e. including in fetch).

internally, for local files, a content-type/media type/mime type is not assigned to modules, and it won't be with import assertions either. So there is no consistency to be had. Setting a content-type only assists a very limited set of use cases, and is inconsistent with the one reference implementation behaviour. All the other times when the content-type is set, it is never derived from the file extension

How does Deno intend to distinguish JSON modules from JS modules if not by extension? Note that the spec specifically forbids using the assertion itself to determine the module type, and for good reason: it completely defeats the point of the feature being an assertion.

@lucacasonato
Member

@Jamesernator Internally we have an enum called MediaType. This has values like "javascript", "wasm", or "typescript". Every module Deno loads gets assigned a media type. For local files we determine it based on file extension. For remote files we interpret the content-type on a response and derive a MediaType from that.

The key here is that the conversion from content-type to MediaType is one way. You can not convert back from a MediaType to a content-type. The reason this is one way, is that you would lose information in the process of content-type -> MediaType -> content-type (for example the text encoding attribute on a content-type).
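
A minimal sketch of that one-way mapping (the table values here are illustrative, not Deno's actual internal tables):

```typescript
// Illustrative one-way mapping from content-type to a MediaType-like enum.
// The values are examples, not Deno's actual internals.
type MediaType = "javascript" | "typescript" | "json" | "unknown";

function mediaTypeFromContentType(contentType: string): MediaType {
  // Parameters such as "; charset=utf-8" are dropped here, which is exactly
  // the information loss that makes the reverse mapping impossible.
  const essence = contentType.split(";")[0].trim().toLowerCase();
  switch (essence) {
    case "text/javascript":
    case "application/javascript":
      return "javascript";
    case "text/typescript":
      return "typescript";
    case "application/json":
      return "json";
    default:
      return "unknown";
  }
}
```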

@kitsonk
Contributor Author

kitsonk commented Sep 6, 2021

And to be pedantic, content-type is only one factor in determining the MediaType for a remote module: we represent .d.ts files as a separate media type from .ts files for various handling purposes, and so we also analyze the apparent extension of the URL to influence it.

How does Deno intend to distinguish JSON modules from JS modules if not extensions?

Like we do for other modules: derive the MediaType. For remote modules it is mostly content-type, for data and blob URLs it is content-type, and for local files it is the extension.

@Jamesernator

The key here is that the conversion from content-type to MediaType is one way.

This is still fine.

The observability comes up with something like service workers, and while I know this is still a proposed feature that might not happen, it would be nice to be consistent both internally and with the web.

Just to give an example suppose we have the following module and a way to run it with a service worker (say deno --service-worker ./sw.js ./main.js):

// main.js
await import("./foo.js");
await fetch(new URL("./foo.js", import.meta.url));
// sw.js
self.addEventListener("fetch", (fetchEvent) => {
    // respondWith() expects a Response (or a promise for one), so the async
    // function is invoked rather than passed in directly.
    fetchEvent.respondWith((async () => {
        const response = await fetch(fetchEvent.request);
        if (fetchEvent.request.destination === "worker") {
            if (response.headers.get('content-type') === "text/javascript") {
                // Send this info off for logging
                systemLog.log(`Fetched JS worker: ${ fetchEvent.request.url }`);
            } else if (response.headers.get('content-type') === "application/json") {
                // This makes no sense, log an error
                systemLog.error(`Tried to load JSON as worker ${ fetchEvent.request.url }`);
            }
        }
        return response;
    })());
});

Now consider 3 ways of using ./main.js:

  1. Served to Deno from the service worker
  2. Served to Deno from a remote server with the same behaviour as the service worker
  3. Served to a WebPage from the service worker

As currently stated, the first one would behave differently to the second two, as Deno is essentially doing the following (simplified based on what you've described):

// This is very simplified

const extensionMap = {
  "ts": "typescript",
  "wasm": "wasm",
  "js": "javascript",
  "json": "json",
}

const contentTypeMap = {
  "text/javascript": "javascript",
  "text/typescript": "typescript",
  "application/wasm": "wasm",
  "application/json": "json",
};

async function resolveModule(url) {
    // Note that in this condition we completely ignore the headers
    if (url.protocol === "file:") {
        const extension = url.pathname.split(".").pop();
        const mediaType = extensionMap[extension];
        return { mediaType, module: await fetch(url).then(res => res.arrayBuffer()) };
    } else if (url.protocol === "http:" || url.protocol === "https:") {
        const response = await fetch(url);
        const mediaType = contentTypeMap[response.headers.get('content-type')];
        return { mediaType, module: await response.arrayBuffer() };
    }
    // etc
}

Notice that on the web, or if main.js is received from a remote server, Deno would actually do the correct behaviour; but if a service worker were supported and Deno used it for file:///path/to/main.js, then because Deno doesn't consider the local file "fetch" to be a regular fetch, it would not go through the analytics.

This is fairly annoying, as while it is work-around-able (i.e. in this situation the service worker could DUPLICATE Deno's resolution logic), it would be more ideal if Deno put this resolution logic higher up so that service workers don't need to know how Deno will handle it.

Now obviously this issue is primarily concerned with service workers, but decisions made now may affect the ability for service workers to correctly integrate with Deno later.

@kitsonk
Contributor Author

kitsonk commented Sep 6, 2021

I think you are trying to solve problems that don't exist.

Content via fetch() will never be treated as a module by Deno. import("./foo.js"); does not return the content of ./foo.js, it evaluates the module and returns the instantiated module, fetch(new URL("./foo.js", import.meta.url)); returns the contents of the file. Those are two totally different things and the lack of a content-type with the fetch does not in the slightest impact the ability of fetch() to return the content as the body of the response.

@Jamesernator

Jamesernator commented Sep 6, 2021

import("./foo.js"); does not return the content of ./foo.js, it evaluates the module and returns the instantiated module, fetch(new URL("./foo.js", import.meta.url)); returns the contents of the file. Those are two totally different things and the lack of a content-type with the fetch does not in the slightest impact the ability of fetch() to return the content as the body of the response.

The key point here is that import("./foo.js") TRIGGERS a fetch on the web, service worker's "fetch" event is NOT only triggered by calls to fetch(), it is called for ANY resource loaded by ANY part of the browser infrastructure.

Ideally if Deno does ever support service workers such that fetches are observable, then Deno would also trigger a fetch event for import("./foo.js") in the same way that browsers do.

And yes like I've said this issue is basically contingent on Deno implementing service workers, but the point is that module loading on the web and within Deno would lead to observably different behaviour inside a service worker depending on whether it was loaded in Deno vs on the web.

Service workers are already really tricky to get right even ignoring platform differences; the more platform differences there are, the harder it would be to correctly utilize service worker features to write service workers that are polymorphic between Deno and the Web.

Now perhaps polymorphic service workers, or service workers at all, are not a goal Deno wants to pursue, but decisions such as this issue WILL affect the behaviour of, and ability to write code for, service workers if Deno were to support them later.

Now I don't know whether these things would be a small or large problem; the difficulty here is that it's really hard to evaluate what working with service workers in Deno would be like without an implementation. As such, I feel it would be considerably safer to stay as close to typical web behaviour as possible so that surprises don't come up later.

But it would be sad if Deno implemented fetch for file:/// URLs as described in the OP, later implemented service workers, and found that service workers really needed these content-type headers to be set correctly for significant use cases, but couldn't change it because code depended too much on content-type not being present or whatever. I would say that adding a content-type later, if there was demand, would probably be safe, but I have seen way too many similarly small things blocking TC39 proposals or features in Node.js that would otherwise be good solutions, but can't be implemented because significantly popular libraries rely on weird (however tiny) behaviour.

Having said that, I'm not familiar enough with Deno's policy on breaking backwards compatibility; it may be the case that Deno is happy to break code that does exceedingly fragile things, or Deno might be more cautious, like the Web is. I don't know where Deno lies, but if Deno does lie towards the more cautious side, as the Web does with backwards compatibility, then these sorts of things really need to be considered with a fine-tooth comb to ensure future compatibility.

Of course, if Deno doesn't take as cautious an approach as the Web to breaking backwards compatibility over time, then basically all of my comments here can be disregarded, as Deno won't be as constrained in revising these sorts of small decisions in the future.

@kitsonk
Contributor Author

kitsonk commented Sep 6, 2021

Ideally if Deno does ever support service workers such that fetches are observable, then Deno would also trigger a fetch event for import("./foo.js") in the same way that browsers do.

Again, you are solving problems that don't exist. No one is working on service workers anytime soon, the core team isn't going to work on them, and we would strongly caution anyone else wanting to work on them, because there are some complex issues that need to be addressed, including the fact that the way we handle module imports and the way we handle fetch are two entirely different code paths.

If service workers did come, imported file URLs would work in the same way they do today: file:// URLs use the extension exclusively to determine the MediaType, and remote URLs use a combination of content-type and the apparent path extension. It is not unreasonable that if a service worker is sending a response for a remote file that it wants to make look like a local file, or a local file that it wants to make look like a remote file, it ensures the content-type is present or not, and correct or not, as it sees fit.

@piscisaureus piscisaureus self-assigned this Sep 8, 2021
@kitsonk kitsonk self-assigned this Oct 15, 2021
@kitsonk kitsonk added this to the 1.16.0 milestone Oct 18, 2021
kitsonk added a commit to kitsonk/deno that referenced this issue Oct 26, 2021
kitsonk added a commit to kitsonk/deno that referenced this issue Oct 28, 2021
kitsonk added a commit to kitsonk/deno that referenced this issue Oct 28, 2021
kitsonk added a commit to kitsonk/deno that referenced this issue Oct 29, 2021
kitsonk added a commit to kitsonk/deno that referenced this issue Oct 29, 2021
kitsonk added a commit to kitsonk/deno that referenced this issue Nov 1, 2021
kitsonk added a commit that referenced this issue Nov 1, 2021
Closes #11925
Closes #2150

Co-authored-by: Bert Belder <bertbelder@gmail.com>
@benjamingr
Contributor

Hey, security concerns were raised when implementing this for Node's fetch. What does Deno do when I:

  • Run with --allow-net (my apps makes web requests) and --allow-read (my apps needs to read config files)
  • I get a URL from a user of my app
  • it's a file URL, so by default when I fetch it the user can read any file on my system and possibly exfiltrate data.

Isn't this a problem? What is the expected way to use this safely?

@nayeemrmn
Collaborator

  • Run with --allow-net (my apps makes web requests) and --allow-read (my apps needs to read config files)

@benjamingr --allow-read=config.json. Of course there were FS bindings before local file fetch so it's an old issue.

@benjamingr
Contributor

@nayeemrmn yes but it's a lot more likely to accept user input to fetch than for Deno.readFile. Users will need to be extra careful here assuming a lot of apps run with --allow-read?

@lucacasonato
Member

lucacasonato commented Feb 17, 2022

@benjamingr Local file fetch only requires --allow-read permissions. It does not care about network permissions (as there is no network activity happening).

If you pass a URL to your app unfiltered, then there is probably a pretty big issue in your system. You can prevent local file fetches by just checking that the scheme is only http: or https:. Fetch can also fetch blob: and data: URLs (it always has been able to), so data could be exfiltrated there too for unsanitized inputs.

@benjamingr
Contributor

If you pass a URL to your app unfiltered, then there is probably a pretty big issue in your system.

If you write a server that proxies data, or a server that accesses a URL and assesses something about it (e.g. accessibility), then there is a good chance you accept URLs as user input (happy to share more use cases if that helps).

Fetch can also fetch blob: and data: URLs (always have been able to)

Sure, though data URLs don't present more risk of exposing data from the server: if the user can put whatever they want in the target URL, data: doesn't give them additional capabilities (neither does blob:, afaict).

checking that the scheme is only http:// or https://

The question is whether requiring anyone using fetch to do this isn't a bit of a security footgun?

I'm not sure I know the answer; it's just a concern I wanted to raise with you, since people have expressed concern over this in Node.js.

@nayeemrmn
Collaborator

I'm sure the concern still exists here in some form, but Deno has always recommended that people use allow lists with --allow-read=..., especially when paired with --allow-net. I guess the security model was based on mitigating the effect of dependency-based attacks, let alone executing on unsanitised inputs directly. Needing to pass such flags is also a hint to the user that untrusted parties may be able to exercise them. Unfortunately there isn't an equivalent mitigation that can be applied to Node's file fetch.

@alexgleason
Contributor

alexgleason commented Aug 14, 2023

Totally agree @benjamingr. It's an insane choice to allow fetching file URIs. There are many, many circumstances where this is a major security vulnerability. Here are just a few:

  • Link previews - whenever a user posts a link to my site, I display a preview of that link to them. Now users can get a preview of my configuration files and secrets by supplying a file URI!

  • Media proxy - when displaying offsite content, we pass the full URL through a proxy because otherwise extensions like Privacy Badger will block that content. Now users can use the media proxy to view any files on my server by passing a file URI to it!

WTF! How on earth did this pass? #12545 should be reverted. Imagine if Chrome or Firefox let random websites fetch anything from your hard drive. The Deno experience should not be insecure by default. Deno permissions don't even help here, because of course you have to allow Deno to read your app config, thereby making it fetchable with file URIs.

Why was this solution not good enough?: https://deno.land/x/file_fetch@0.2.0

It was proposed before the merge. It makes Deno FS calls when you supply a file URI.

If you want to do weird fetch stuff, you should create a library to wrap fetch. It was already done. Now that it's merged the opposite is true: we will have to create a custom "secure" fetch wrapper function in all our projects.

EDIT: Here it is:

/**
 * Mitigates Deno's vulnerable fetch implementation.
 * https://github.com/denoland/deno/issues/11925#issuecomment-1678084346
 */
const safeFetch: typeof fetch = (...args) => {
  const url = getUrl(args[0]);
  if (url.startsWith('https://')) {
    return fetch(...args);
  } else {
    // Reject asynchronously so callers relying on .catch() still see the error.
    return Promise.reject(new Error('Invalid URL'));
  }
};

function getUrl(value: string | URL | Request): string {
  if (typeof value === 'string') return value;
  if (value instanceof URL) return value.toString();
  if (value instanceof Request) return value.url;
  throw new Error('Invalid value');
}

export { safeFetch };

EDIT2: See also: https://gitlab.com/soapbox-pub/mostr/-/merge_requests/66/diffs

@elaine-jackson

Allowing file system access through the fetch API in Deno is a significant security concern, and I would strongly urge you to reconsider this patch.

Permitting file system access in this manner exposes sensitive files and directories to unauthorized access. This is particularly concerning because the fetch API is traditionally associated with retrieving resources over a network, not direct access to the local file system.

In Deno, strict security measures are enforced through a secure sandbox environment, which necessitates explicit permission for file system access. This recent change bypasses these controls, leading to a conflict with Deno's underlying security philosophy.

If accessing files through this approach does not require the user to pass additional permissions to the Deno compiler, it violates the promised secure sandboxing and undermines the trust users have placed in Deno's security model.

@alexgleason
Contributor

I created a deno-safe-fetch package: https://gitlab.com/soapbox-pub/deno-safe-fetch

import { safeFetch } from 'https://gitlab.com/soapbox-pub/deno-safe-fetch/-/raw/develop/mod.ts';

// Use it normally:
const response = await safeFetch('https://example.com');

// This throws an error:
const file = await safeFetch('file:///etc/passwd');

Replace fetch globally:

import 'https://gitlab.com/soapbox-pub/deno-safe-fetch/-/raw/develop/load.ts';

@lucacasonato
Member

lucacasonato commented Aug 15, 2023

@alexgleason I think your concerns are the result of a misunderstanding of both fetch, and the Deno security model. Firstly, I'd like to address your concern that this results in new security vulnerabilities not present previously.

Prior to landing this PR, fetch supported fetching not just public content on the internet, hosted via HTTP or HTTPS, but additionally supported:

  • blob: urls
  • data: urls
  • http: resources on the internal network, not publicly accessible
  • https: resources on the internal network, not publicly accessible

Unless you already do filtering on the url that you pass to fetch, you are already vulnerable to classes of the attack you described above in your media proxy or link preview example. For example, if you are running on GCP or AWS, the instance metadata server is available as a "privileged" localhost endpoint that when fetched may leak sensitive information about the instance your code is running on. Unless you do filtering on the domain name / IP address in code, or use --allow-net to restrict access, you are vulnerable to the classes of attacks you describe above.

With the introduction of file:, the story does not meaningfully change. The same principles apply: unless you either limit access in user code (through filtering of url), or filtering at the runtime layer (through --allow-read), you are vulnerable to exfiltration or privileged access attacks.

The module you present (deno-safe-fetch) is a good first step - it works towards fixing the problem described above in the way that is intended (filtering the URL). While this adds a layer to your onion, I don't think this is enough for the workloads you describe. For example, you should encourage users to run with a limited --allow-net, and/or use a network namespace to restrict the isolate to only public networking, and/or perform requests to untrusted sources via an HTTP(S) proxy like smokescreen.

Finally, I'd like to mention that our behaviour is entirely compliant with the Fetch spec - grep "file: URLs are left as an exercise for the reader".

@lucacasonato
Member

In Deno, strict security measures are enforced through a secure sandbox environment, which necessitates explicit permission for file system access. This recent change bypasses these controls, leading to a conflict with Deno's underlying security philosophy.

@elaine-jackson You are misunderstanding. Deno requires --allow-read to fetch file: urls. fetch("file:///etc/passwd") requires the same permissions as Deno.readFile("/etc/passwd").

@alexgleason
Contributor

alexgleason commented Aug 15, 2023

you are already vulnerable to classes of the attack you described

Well, thanks for letting me know, but how is that acceptable?

EDIT: I updated the deno-safe-fetch module to mitigate these issues: https://gitlab.com/soapbox-pub/deno-safe-fetch/-/blob/develop/mod.ts

@alexgleason
Contributor

@ry Have you seen this? Because I think this is the setup to a talk called "Things I Regret About Deno"

@lucacasonato
Member

@alexgleason Your mitigation does not solve the problem by the way, because I can just have a DNS entry on a custom domain that points to 127.0.0.1, or CNAMEs to another internal endpoint, or anything else. You can not protect against localhost access attacks using text based filtering of the hostname.

I think you have fundamentally misunderstood what the security model of the fetch API in Deno is. Just like with SQL injection, a combination of input sanitization, proper API use, and I/O configuration are critical to ensure you are not vulnerable to attacks. For details on all three in this scenario, reference my comment above.

As the conversation has veered off the original topic, I'd urge you to open a GitHub discussion if you'd like to discuss this further.

Please also take note of our code of conduct (https://deno.co/coc) - discussion should be civil, and there is seldom a "right" answer to a technical question. Every design or implementation choice carries a trade-off and numerous costs.

8 participants