Here's a quick overview of the typical process for altering or adding some functionality in the server:
- Start with `server.go`. The `MakeRouter` function is where all the endpoints
  are defined. All the handler functions are methods on the server so we can
  pass in the database reference and the logger as necessary.
- If adding a new endpoint, add it in `MakeRouter`, and make a new handler
  function; you can look at the existing ones for examples.
- If changing an existing endpoint, look up the handler function set to handle
  the endpoint in question.
- In a handler function, you probably want to interact with the database
  somehow. Put the database function in whatever file is most closely related
  (e.g. `listUsersFromDb` goes in `users.go`), and call it in the handler
  function, passing in `server.db`. For almost all the models there is one
  version which works with JSON, and one which is how entries are retrieved
  from the database; the latter are named in the pattern of `xFromQuery`.
- Do whatever logic is needed on the database results.
- Ultimately, for an endpoint returning a response, we want to put together
  something which can be marshalled into JSON. Take that and make a call like
  this:

  ```go
  _ = jsonResponseFrom(result, http.StatusOK).write(w, r)
  ```
That's it!
Let's consider the case where we've encountered some error while handling a
request. In general, functions called from the server handlers should return a
non-nil `*ErrorResponse` when some error has occurred (user-induced or
otherwise). The basic pattern for server handlers using these is something
like the following:
```go
func (server *Server) handleSomething(w http.ResponseWriter, r *http.Request) {
    errResponse := utilityFunction()
    if errResponse != nil {
        errResponse.log.write(server.logger)
        _ = errResponse.write(w, r)
        return
    }
    // handle the normal flow here...
}
```
In the functions which return an `*ErrorResponse`, you can "log" things using
`ErrorResponse.log`, which implements the logging interface (see `logging.go`)
and saves all the logs until we call `ErrorResponse.log.write(Logger)`, at
which point it writes all the saved logs out to the provided logger. This is
what happens in the pattern above, where we write the response's logs into the
server logger.
As mentioned in the previous section, for most models used in arborist there's
a pattern of having two structs to handle each model: one with JSON tags, and
another which can accept a database query. The query one, named with a
`FromQuery` suffix by convention, should have a `standardize()` method which
converts it to the JSON version. Take the `User` structs as an example (at the
time of writing):
```go
type User struct {
    Name     string   `json:"name"`
    Email    string   `json:"email,omitempty"`
    Groups   []string `json:"groups"`
    Policies []string `json:"policies"`
}

type UserFromQuery struct {
    ID       int64          `db:"id"`
    Name     string         `db:"name"`
    Email    *string        `db:"email"`
    Groups   pq.StringArray `db:"groups"`
    Policies pq.StringArray `db:"policies"`
}

func (userFromQuery *UserFromQuery) standardize() User {
    user := User{
        Name:     userFromQuery.Name,
        Groups:   userFromQuery.Groups,
        Policies: userFromQuery.Policies,
    }
    if userFromQuery.Email != nil {
        user.Email = *userFromQuery.Email
    }
    return user
}
```
The `UserFromQuery` struct is used for database operations:
```go
users := []UserFromQuery{}
err := db.Select(&users, stmt)
```
and the `User` one for returning JSON responses (where typically we get the
`User` struct by calling `standardize()` on the `UserFromQuery` version):
```go
userFromQuery, err := userWithName(server.db, username)
user := userFromQuery.standardize()
_ = jsonResponseFrom(user, http.StatusOK).write(w, r)
```
See the SQL section and read through the explanation of the migration scripts. We've taken the approach of using raw SQL plus some utility wrappers instead of an ORM, so changes may need to be made to the queries for some endpoints.
This page is a useful overview of `sqlx` usage, the package which arborist uses for the database interface.
Be careful with `sql.DB` transactions; namely, be sure to close them if
returning early because of errors or similar, since otherwise the transaction
holds its connection open. Similarly, when working with some `sql.Rows`,
always call `.Close()` so the open connection is returned to the pool.
Go's `database/sql` package handles the connection pool implicitly, though the size of the pool is configurable. See here for a bit more detail.
Reference previous migrations for examples on how to write migration scripts
correctly. The crucial points are:
- Create a subdirectory in `migrations` named in the format
  `{YYYY}-{MM}-{DD}T{HH}{MM}{SS}Z_{name}`, which is the ISO date format
  followed optionally by a human-readable name describing the migration.
- This subdirectory must contain an `up.sql` and a `down.sql`, which apply and
  revert the migration, respectively.
- The `up.sql` script must update the singular row of `db_version` to
  increment the integer version ID, and change the `version` text column to
  reflect the exact folder name.
Test a migration by applying `up.sql` and `down.sql` sequentially to ensure
both work as expected.
In the arborist deployments used in Gen3, at the time of writing, the easiest way to apply migration scripts is with a command such as the following:

```bash
gen3 psql arborist -f <(g3kubectl exec $(gen3 pod arborist) -- cat /go/src/github.com/uc-cdis/arborist/migrations/.../up.sql)
```
For another example, to redo the `2019-06-04T173047Z_resource_triggers` migration, these commands would work:

```bash
gen3 psql arborist -f <(g3kubectl exec $(gen3 pod arborist) -- cat /go/src/github.com/uc-cdis/arborist/migrations/2019-06-04T173047Z_resource_triggers/down.sql)
gen3 psql arborist -f <(g3kubectl exec $(gen3 pod arborist) -- cat /go/src/github.com/uc-cdis/arborist/migrations/2019-06-04T173047Z_resource_triggers/up.sql)
```
- The schema has some triggers to prevent the built-in groups, `anonymous` and
  `logged-in`, from being deleted. If for some reason you want to clear out
  all the groups, use this:

  ```sql
  DELETE FROM grp WHERE (name != 'anonymous' AND name != 'logged-in')
  ```
For testing an HTTP server, we use the `httptest` package to "record" requests
that we send to the handler for our server. The `httptest.ResponseRecorder`
stores the response information, including `.Code` and `.Body` (the latter can
be read as a string or bytes).
This is a basic pattern for a test that hits a server endpoint (in this
example, sending some JSON in a `POST`):
```go
// arborist-specific
server := arborist.NewServer()
// ^ plus more setup for the database, etc.
logBuffer := bytes.NewBuffer([]byte{})
handler := server.MakeRouter(logBuffer)
// dump logBuffer to see server logs, if an error happens

// generic
w := httptest.NewRecorder()
req := newRequest("POST", "/some/endpoint", nil)
handler.ServeHTTP(w, req)
```
At this point we can inspect the recorder `w` for what we care about in the
response. Suppose we expect to get some JSON in the response from this
request. Our test would look something like this (here, we use the
`testify/assert` package for convenience):
```go
// one-off inline struct to read the response into
result := struct {
    A string `json:"a"`
    B int    `json:"b"`
}{}
// try to read the response bytes into the result struct
err := json.Unmarshal(w.Body.Bytes(), &result)
if err != nil {
    t.Error("failed to read JSON")
}
assert.Equal(t, "what we expect", result.A, "result had the wrong value for a")
```
Run this to both generate a coverage output file usable by Go tools,
`coverage.out`, and open it using the Go coverage tool to visualize
line-by-line coverage:

```bash
make coverage-viz
```

The `coverage.out` file can also be used with the usual Go testing and coverage tools.