Proxy endpoint doesn't respect IPFS header policies for hijacked paths #382

Closed
hsanjuan opened this issue Apr 15, 2018 · 3 comments
Labels
exp/intermediate Prior experience is likely helpful kind/bug A bug in existing code (including security flaws) P2 Medium: Good to have, but can wait until someone steps up

Comments

@hsanjuan
Collaborator

IPFS enforces a bunch of protections regarding CORS, CRLF attacks, etc. Hijacked requests don't do any of that.

We need to let the user provide an IPFS-like API configuration for the proxy, with whatever headers apply, so that it works just like ipfs would, with the difference that we would still be hijacking the requests.

@hsanjuan hsanjuan added kind/bug A bug in existing code (including security flaws) help wanted Seeking public contribution on this issue exp/intermediate Prior experience is likely helpful status/ready Ready to be worked P2 Medium: Good to have, but can wait until someone steps up labels Apr 15, 2018
@LEonGAo1991
Contributor

@hsanjuan Can I give this one a try?

@hsanjuan
Collaborator Author

hsanjuan commented May 8, 2018

@LEonGAo1991 sure. Can you first figure out which options need to be added and what approach you plan to take (apply to all proxy requests vs. only those handled by cluster, etc.) and post it here? I haven't given this too much thought yet.

@hsanjuan hsanjuan assigned hsanjuan and unassigned hsanjuan May 8, 2018
@hsanjuan hsanjuan added this to the Composite clusters milestone Jun 26, 2018
@hsanjuan
Collaborator Author

Possible approach:

  • Store ipfs headers and values in an http.Header object.
  • If the object is nil, make an /id request to ipfs before serving any non-proxied request.
  • Extract headers from that response and store them in the Header object.
  • The list of headers to extract can be part of a pre-configured list: those related to CORS at least, and Trailer too (why does /id have a Trailer?). Since the user can configure custom headers, we might as well copy everything except Date, Server, etc.
  • Use the Header object to copy any headers from it to the headers of the http.Response in non-proxied requests.

@hsanjuan hsanjuan added status/in-progress In progress and removed help wanted Seeking public contribution on this issue status/ready Ready to be worked labels Dec 18, 2018
hsanjuan added a commit that referenced this issue Dec 18, 2018
Fix #382: Extract headers from IPFS API requests and apply them to hijacked ones.

This commit makes the proxy extract useful fixed headers (like CORS) from
the IPFS daemon API responses and then apply them to the responses
from hijacked endpoints like /add or /repo/stat.

It does this by caching a list of headers from the first IPFS API
response which has them. If no proxied request has yet provided the
headers we're interested in, the proxy tries triggering a
request to "/api/v0/version" to obtain them first.

This should fix the issues with using Cluster proxy with IPFS Companion and
Chrome.

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
@ghost ghost assigned hsanjuan Dec 18, 2018
hsanjuan added a commit that referenced this issue Dec 18, 2018
@hsanjuan hsanjuan mentioned this issue Dec 20, 2018
hsanjuan added a commit that referenced this issue Dec 20, 2018
Fix #382: Extract headers from IPFS API requests and apply them to hijacked ones.
@ghost ghost removed the status/in-progress In progress label Dec 20, 2018
hsanjuan added a commit that referenced this issue Jan 10, 2019
This changes the current strategy of extracting headers from the IPFS daemon to
use them for hijacked endpoints in the proxy. The ipfs daemon is a bit of a
mess and what we were doing is not really reliable, especially when it comes to
setting CORS headers right (which we were not doing).

The new approach is:

* For every hijacked request, make an OPTIONS request to the same path, with
the given Origin, to the IPFS daemon and extract some CORS headers from
that. Use those in the hijacked response.

* Avoid hijacking OPTIONS requests; they should always go through so that the
IPFS daemon controls all the CORS-preflight handling as it wants.

* Similar to before, have a once-only-triggered request to extract other
interesting or custom headers from a fixed IPFS endpoint. This allows the
proxy to forward other custom headers and to catch
`Access-Control-Expose-Headers`. The difference is that the endpoint used for
this and the additional headers are configurable by the user (but with hidden
configuration options, because this is quite exotic compared to regular usage).

Now the implementation:

* Replaced the standard muxer with gorilla/mux (I have also taken the chance
to update the gxed version to the latest tag). This gives us much better
matching control over routes and allows us to not handle OPTIONS requests.

* This also allows removing the extractArgument code and having proper handlers
for the endpoints that pass command arguments as the last segment of the URL. A
very simple handler wrapping the default ones can extract the argument from
the URL and put it in the query. Overall much cleaner this way.

* No longer capture interesting headers from any random proxied request.  This
made things complicated with a wrapping handler. We will just trigger the one
request to do it when we need it.

* When preparing the headers for the hijacked responses:
  * Trigger the OPTIONS request and figure out which CORS things we should set
  * Set the additional headers (perhaps triggering a POST request to fetch them)
  * Set our own headers.

* Moved all the headers stuff to a new headers.go file.

* Added configuration options (hidden by default) to:
  * Customize the extract headers endpoint
  * Customize what additional headers are extracted
  * Use HTTPs when talking to the IPFS API
    * I haven't tested this, but I did not want to have hardcoded 'http://' urls
      around, as before.

* Added extra testing for this, and tested manually a lot, comparing the
daemon's original output with our hijacked endpoint outputs while watching
the API traffic with ngrep and making sure the requests happen as expected.
Also tested with IPFS Companion in FF and Chrome.

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
hsanjuan added a commit that referenced this issue Jan 10, 2019
hsanjuan added a commit that referenced this issue Jan 10, 2019
hsanjuan added a commit that referenced this issue Jan 11, 2019
License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>
hsanjuan added a commit that referenced this issue Jan 14, 2019
Fix #382 (again): A better strategy for handling proxy headers
coder-lb pushed a commit to elastos/Elastos.Hive.Cluster that referenced this issue Jan 21, 2019
* Add the build tools download gateway.

Signed-off-by: Yi Wang <wangyi8848@gmail.com>

* add hive.cluster mark.
due to this change, hive.cluster will be isolated from typical ipfs.cluster.

Signed-off-by: Yi Wang <wangyi@storswift.com>

* Fix HTTPs with DNS multiaddresses

Before this change, we resolved all /dns*/ multiaddresses before using them.

When using HTTPs, the Go HTTP client only sees the resolved IP address
and is unable to negotiate TLS with a certificate, because the request
is not going to the hostname the certificate is signed for, but to
the IP. This leverages a recent feature in go-multiaddr-net
and uses the user-provided hostname directly.

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>

* Fix interpreting Host parameter correctly.

We should deprecate passing in Host/Port in the config,
but in the meantime, it hardcoded /dns4/, meaning that if
someone placed an ipv6 address in there things would break badly
and weirdly.

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>

* Added tests for /monitor/metrics/{metrics_type}

Added API and client tests for GET /monitor/metrics/{metrics_type}

Fixes ipfs-cluster#587

License: MIT
Signed-off-by: Kishan Mohanbhai Sagathiya <kishansagathiya@gmail.com>

* We are using https://github.com/chriscool/sharness

We aren't using https://github.com/mlafeldt/sharness, code reference
below
https://github.com/ipfs/ipfs-cluster/blob/fdfe8def9467893d451e1fcb8ea3fb980c8c1389/Makefile#L67

License: MIT
Signed-off-by: Kishan Mohanbhai Sagathiya <kishansagathiya@gmail.com>

* Sharness tests for ipfs-cluster-ctl health metrics

Added sharness tests for `ipfs-cluster-ctl health metrics <metricname>`

Fixes ipfs-cluster#587

License: MIT
Signed-off-by: Kishan Mohanbhai Sagathiya <kishansagathiya@gmail.com>

* Added tests for /monitor/metrics/{metrics_type}

Move ctl-health sharness tests to the appropriate file

Since the API is using the RPC mock to request metrics and it always
returns a mocked test metric we might just do c.Metrics("somemetricstype")
and check that there is no error. Here we just want to check that the
client is hitting an API endpoint (and understands the response).

Fixes ipfs-cluster#587

License: MIT
Signed-off-by: Kishan Mohanbhai Sagathiya <kishansagathiya@gmail.com>

* Fix ipfs-cluster#632: Handle "stream-channels" in /add endpoints

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>

* Fix ipfs-cluster#632: Test stream-channels=false in /add endpoint

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>

* Fix ipfs-cluster#632: Apply -Q option to --enc=json when adding.

This will only print the latest JSON update when adding
with -Q.

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>

* Fix ipfs-cluster#445: Implemented status filter for ipfs-cluster-ctl

License: MIT
Signed-off-by: Paul Jewell <sona1111@zoho.com>

* Fix ipfs-cluster#445: Reduced length of filter help string

License: MIT
Signed-off-by: Paul Jewell <sona1111@zoho.com>

* Fix ipfs-cluster#445:

-Fixed logic issue in match condition of 'filterStatus' function
-Added and verified success of test provided by @lanzafame
-Attempted to condense code and apply other cleanup provided by @lanzafame

License: MIT
Signed-off-by: Paul Jewell <sona1111@zoho.com>

* Fix ipfs-cluster#445:

-Changed some 'snake case' to 'camel case' in accordance with code climate

License: MIT
Signed-off-by: Paul Jewell <sona1111@zoho.com>

* Status filters for `ipfs-cluster-ctl status`

Added filter option to `ipfs-cluster-ctl status`

When the --filter is passed, it will only fetch the peer information
where the status of the pin matches the filter value.
Valid filter values are tracker status types (e.g. "pinned",
"pin_error", "unpinning"), aliases of tracker status types (e.g.
"queued" or "error"), or a comma-separated list of tracker status types
and/or their aliases (e.g. "error,pinning").

On passing an invalid filter value, no status information will be shown.

In particular, the filter would remove elements from []GlobalPinInfo
when none of the peers in GlobalPinInfo match the filter. If one peer
in the GlobalPinInfo matches the filter, the whole object is returned,
including the information for the other peers which may or not match it.

The filter option works on statusAll ("GET /pins"). When fetching pin status
for a single CID ("GET /pins/<cid>"), the filter option has no effect.

Fixes ipfs-cluster#445

License: MIT
Signed-off-by: Kishan Mohanbhai Sagathiya <kishansagathiya@gmail.com>

* Status filters for `ipfs-cluster-ctl status`

Added clients tests

Fixes ipfs-cluster#445

License: MIT
Signed-off-by: Kishan Mohanbhai Sagathiya <kishansagathiya@gmail.com>

* Status filters for `ipfs-cluster-ctl status`

Added a fail case where an invalid filter is passed in.
Update `api.rpcClient.Call` to `api.rpcClient.CallContext` and pass
in the Request context r.Context() so that context can be cancelled
when the request is cancelled by caller

Fixes ipfs-cluster#445

License: MIT
Signed-off-by: Kishan Mohanbhai Sagathiya <kishansagathiya@gmail.com>

* Status filters for `ipfs-cluster-ctl status`

Passing `make check`

Fixes ipfs-cluster#445

License: MIT
Signed-off-by: Kishan Mohanbhai Sagathiya <kishansagathiya@gmail.com>

* Status filters for `ipfs-cluster-ctl status`

Improved matching of filters and tracker status

Fixes ipfs-cluster#445

License: MIT
Signed-off-by: Kishan Mohanbhai Sagathiya <kishansagathiya@gmail.com>

* Wrap help message for less than 120 characters

License: MIT
Signed-off-by: Kishan Mohanbhai Sagathiya <kishansagathiya@gmail.com>

* Status filters for `ipfs-cluster-ctl status`

Optimized filter to tracker status matching by using bitwise
comparisons

License: MIT
Signed-off-by: Kishan Mohanbhai Sagathiya <kishansagathiya@gmail.com>

* Fix ipfs-cluster#632: Make sure StreamChannels is enabled in rest/client.

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>

* Feat ipfs-cluster#632: Keep default /add behaviour outside of conditional block

Per review comment.

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>

* Feat ipfs-cluster#445: Use TrackerStatus as filter. Simplify and small misc.

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>

* Feat ipfs-cluster#445: Clarify about 0 filter value.

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>

* add uid register and uid name changing API.

Signed-off-by: Yi Wang <wangyi@storswift.com>

* add hive api: UidNew, UidLogIn, FilesCp, FilesFlush, FilesLs

Signed-off-by: Yi Wang <wangyi@storswift.com>

* Feat ipfs-cluster#445: Catch invalid filter strings in ipfs-cluster-ctl.

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>

* Issue ipfs-cluster#445: Fix test

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>

* Feat ipfs-cluster#445: Fix string test with TrackerStatus == 0

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>

* add FilesMkdir, FilesMv, and FilesRm API.

Signed-off-by: Yi Wang <wangyi@storswift.com>

* Fix ipfs-cluster#632: Add --no-stream to ipfs-cluster-ctl

tl;dr: this solves the user's immediate need and, even if not the strictest
solution, it is the simplest one.

I think we should not have the server buffer output when we can do it rather
easily client side and have the clients use their own memory for the task even
if `stream-channels=false` would do this.

We can always change the approach, but this is the minimal solution to get a
JSON array with all the AddedOutput items.

We might not buffer at all and hack a `[`, `,` separating elements and `]`
at the end when JSON encoding is enabled, etc. But that would not be clean,
especially if we mean to support more output formats at some point.

Enabling support for stream-channels=false in the api/rest/client means adding
new methods, with tests, modifying the interface, etc., for what is
essentially a presentation issue in "ctl" in the end. Similarly, we could
buffer inside the client, but it is so trivial that I don't see the advantage.
We should also start thinking about moving to streaming API endpoints, and
when that moment arrives we shall revisit this discussion.

I have removed the hacky manual output parts by declaring a custom
addedOutputQuiet type which wraps added output and is understood by
the formatters.go helpers.

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>

* add FilesRead, FilesStat, FilesWrite.

Known bug: FilesWrite may be interrupted while writing.

Signed-off-by: Yi Wang <wangyi@storswift.com>

* Issue ipfs-cluster#632: Fix wrong name for flag. Address review comments

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>

* Fix ipfs-cluster#382 (again): A better strategy for handling proxy headers

This changes the current strategy of extracting headers from the IPFS daemon to
use them for hijacked endpoints in the proxy. The ipfs daemon is a bit of a
mess and what we were doing is not really reliable, especially when it comes to
setting CORS headers right (which we were not doing).

The new approach is:

* For every hijacked request, make an OPTIONS request to the same path, with
the given Origin, to the IPFS daemon and extract some CORS headers from
that. Use those in the hijacked response

* Avoid hijacking OPTIONS requests; they should always go through so that the
IPFS daemon controls all the CORS-preflight handling as it wants.

* Similar to before, have a once-only-triggered request to extract other
interesting or custom headers from a fixed IPFS endpoint. This allows the
proxy to forward other custom headers and to catch
`Access-Control-Expose-Headers`. The difference is that the endpoint used for
this and the additional headers are configurable by the user (but with hidden
configuration options, because this is quite exotic compared to regular usage).

Now the implementation:

* Replaced the standard muxer with gorilla/mux (I have also taken the chance
to update the gxed version to the latest tag). This gives us much better
matching control over routes and allows us to not handle OPTIONS requests.

* This also allows removing the extractArgument code and having proper handlers
for the endpoints that pass command arguments as the last segment of the URL. A
very simple handler wrapping the default ones can extract the argument from
the URL and put it in the query. Overall much cleaner this way.

* No longer capture interesting headers from any random proxied request.  This
made things complicated with a wrapping handler. We will just trigger the one
request to do it when we need it.

* When preparing the headers for the hijacked responses:
  * Trigger the OPTIONS request and figure out which CORS things we should set
  * Set the additional headers (perhaps triggering a POST request to fetch them)
  * Set our own headers.

* Moved all the headers stuff to a new headers.go file.

* Added configuration options (hidden by default) to:
  * Customize the extract headers endpoint
  * Customize what additional headers are extracted
  * Use HTTPs when talking to the IPFS API
    * I haven't tested this, but I did not want to have hardcoded 'http://' urls
      around, as before.

* Added extra testing for this, and tested manually a lot, comparing the
daemon's original output with our hijacked endpoint outputs while watching
the API traffic with ngrep and making sure the requests happen as expected.
Also tested with IPFS Companion in FF and Chrome.

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>

* ipfsproxy: add ExtractHeaderTTL option to config

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>

* Fix ipfs-cluster#382: Add TTL for cached headers

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>

* Fix typos in comments

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>

* Fix ipfs-cluster#639: Import rs/cors with Gx

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>

* Fix ipfs-cluster#639: Add CORS options to restapi

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>

* Fix ipfs-cluster#639: restapi: Handle CORS preflight requests (OPTIONS)

This adds support for handling preflight requests in the REST API
and fixes currently mostly broken CORS.

Before, we just let the user add custom response headers through the
configuration "headers" key, but this is not the best way, because
CORS headers and requests need special handling, and doing it wrong
has security implications.

Therefore, I have added specific CORS-related configuration options
which control CORS behaviour. We are forced to change the "headers"
defaults and will notify users about this in the changelog.

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>

* ipfsproxy: fix tests for new configuration keys

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>

* ipfsproxy: fix typos in comments

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>

* restapi: minor codeclimate issue

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>

* Fix ipfs-cluster#639: Do not break start by complaining of unset CORSMaxAge

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>

* Fix ipfs-cluster#639: Enforce basic auth for all requests when enabled

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>

* Sharness: update configuration files used in sharness

Maintenance

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>

* config: Fix confusing errors

The JSON parsing of the config could error, but we skipped error checking and
relied on Validate() at the end. This meant that some JSON parsing errors
were logged, but the final error when validating the configuration came from
somewhere different, creating very confusing error messages for the user.

This changes that, along with removing hardcoded section lists. This also
removes the Sharder component section because, AFAIK, it was a leftover
from the sharding branch, and in the end there is no separate sharding
component that needs a config.

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>

* sharness: Fix test typo causing an empty grep

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>

* Config: Interpret empty duration strings as duration = 0.

In practice, we allowed this already, because parsing failed
but Validate() succeeded.

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>

* config: do not handle "" durations (and do not error)

They should not be interpreted as 0, since that may overwrite
defaults which are not 0. We simply need to do nothing.

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>

* Update sharness/t0032-ctl-health.sh

Fix the pid extraction in test.

Co-Authored-By: hsanjuan <hsanjuan@users.noreply.github.com>

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>

* sharness: test should check against cluster peer ID

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>

* Release 0.8.0-rc1

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>

* gx publish 0.8.0-rc1

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>

* Changelog for 0.8.0 release

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>

* Release 0.8.0

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>

* gx publish 0.8.0

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>

* Fix changelog date

License: MIT
Signed-off-by: Hector Sanjuan <code@hector.link>

* Fix bug in filesWrite and add NamePublish.

Signed-off-by: Yi Wang <wangyi@storswift.com>

* add uid/info API

Signed-off-by: Yi Wang <wangyi@storswift.com>