diff --git a/docs/get-started/foundations/projects/index.mdx b/docs/get-started/foundations/projects/index.mdx index 18e94c8f4..ea4667c36 100644 --- a/docs/get-started/foundations/projects/index.mdx +++ b/docs/get-started/foundations/projects/index.mdx @@ -28,7 +28,7 @@ Here are some examples of basic Nitric project structures: -```text JavaScript +```text my-project/ ├── nitric.yaml ├── nitric.aws.yaml @@ -41,7 +41,7 @@ my-project/ -```text JavaScript +```text my-project/ ├── nitric.yaml ├── nitric.aws.yaml @@ -54,7 +54,7 @@ my-project/ -```text Python +```text my-project/ ├── nitric.yaml ├── nitric.aws.yaml @@ -67,7 +67,7 @@ my-project/ -```text Go +```text my-project/ ├── nitric.yaml ├── nitric.aws.yaml diff --git a/docs/get-started/quickstart.mdx b/docs/get-started/quickstart.mdx index 5ac5e8ce9..4f0b2e402 100644 --- a/docs/get-started/quickstart.mdx +++ b/docs/get-started/quickstart.mdx @@ -94,7 +94,7 @@ Navigate to the new project directory and install the dependencies: -```bash TypeScript +```bash cd hello-world npm install @@ -104,7 +104,7 @@ npm install -```bash JavaScript +```bash cd hello-world npm install @@ -114,7 +114,7 @@ npm install -```bash Python +```bash cd hello-world uv sync @@ -131,7 +131,7 @@ If there are other dependency managers you would like to see supported, please l -```bash Go +```bash cd hello-world go mod tidy @@ -141,7 +141,7 @@ go mod tidy -```bash Dart +```bash cd hello-world dart pub get @@ -157,9 +157,9 @@ Your project should now look like this: -```txt TypeScript +```txt +--services/ -| +-- hello.ts +| +-- api.ts +--node_modules/ | ... +--package-lock.json @@ -172,9 +172,9 @@ Your project should now look like this: -```txt JavaScript +```txt +--services/ -| +-- hello.js +| +-- api.js +--node_modules/ | ... 
+--package-lock.json @@ -187,7 +187,7 @@ Your project should now look like this: -```txt Python +```txt +--services/ | +-- api.py +--.env @@ -204,7 +204,7 @@ Your project should now look like this: -```txt Go +```txt +--services/hello/ | +-- main.go +--go.mod @@ -218,9 +218,9 @@ Your project should now look like this: -```txt Dart +```txt +--services/ -| +-- hello.dart +| +-- api.dart +--analysis_options.yaml +--pubspec.lock +--pubspec.yaml @@ -270,7 +270,7 @@ Start by opening the `hello` service in your editor and adding a new route to th -```typescript !! title:services/hello.ts +```typescript !! title:services/api.ts import { api } from '@nitric/sdk' const helloApi = api('main') @@ -289,7 +289,7 @@ helloApi.get('/goodbye/:name', async (ctx) => { }) ``` -```javascript !! title:services/hello.js +```javascript !! title:services/api.js import { api } from '@nitric/sdk' const helloApi = api('main') @@ -308,7 +308,7 @@ helloApi.get('/goodbye/:name', async (ctx) => { }) ``` -```python !! title:services/hello.py +```python !! title:services/api.py from nitric.resources import api from nitric.application import Nitric @@ -357,7 +357,7 @@ func main() { } ``` -```dart !! title:services/hello.dart +```dart !! title:services/api.dart import 'package:nitric_sdk/nitric.dart'; void main() { diff --git a/docs/guides/dart/flutter.mdx b/docs/guides/dart/flutter.mdx index 907b33015..a15f40447 100644 --- a/docs/guides/dart/flutter.mdx +++ b/docs/guides/dart/flutter.mdx @@ -78,7 +78,7 @@ code word_generator Let's start by building out the backend. This will be an API with a route dedicated to getting a list of all the favorites and a route to toggle a favorite on or off. These favorites will be stored in a [key value store](/keyvalue). To create a Nitric project add the `nitric.yaml` to the Flutter template project. 
-```yaml {{ label: "nitric.yaml" }}
+```yaml title:nitric.yaml
name: word_generator
services:
  - match: lib/services/*.dart
@@ -1414,7 +1414,7 @@ nitric stack new dev aws
 
 You'll then need to edit the `nitric.dev.yaml` file to add a region.
 
-```yaml {{ label: "nitric.dev.yaml" }}
+```yaml title:nitric.dev.yaml
 provider: nitric/aws@1.11.1
 region: us-east-1
 ```
@@ -1428,7 +1428,7 @@ Because we've mixed Flutter and Dart dependencies, we need to use a [custom cont
 Flutter application, this step is unnecessary.
 
-```yaml {{ label: "nitric.yaml" }}
+```yaml title:nitric.yaml
 name: word_generator
 services:
   - match: lib/services/*.dart
@@ -1442,7 +1442,7 @@ runtimes:
 
 Create the Dockerfile at the same path as your runtime specifies. This Dockerfile is fairly straightforward.
 
-```dockerfile {{ label: "docker/flutter.dockerfile" }}
+```dockerfile title:docker/flutter.dockerfile
 FROM dart:stable AS build
 
 # The Nitric CLI will provide the HANDLER arg with the location of our service
@@ -1487,7 +1487,7 @@ ENTRYPOINT ["/app/bin/main"]
 
 We can also add a `.dockerignore` to optimize our image further:
 
-```txt {{ label: "docker/flutter.dockerignore" }}
+```txt title:docker/flutter.dockerignore
 build
 test
@@ -1505,15 +1505,15 @@ web
 windows
 ```
 
-### AWS
+### Deploy
+
+Now that the application has been configured for deployment, let's try deploying it with the `up` command. Cloud deployments incur costs and while most of these resources are available with free tier pricing, you should consider the costs of the deployment.
 
-Now that the application has been configured for deployment, let's try deploying it with the `up` command.
- ```bash nitric up diff --git a/docs/guides/dart/serverless-rest-api-example.mdx b/docs/guides/dart/serverless-rest-api-example.mdx index b12d3d8af..1fe54db05 100644 --- a/docs/guides/dart/serverless-rest-api-example.mdx +++ b/docs/guides/dart/serverless-rest-api-example.mdx @@ -7,6 +7,7 @@ tags: languages: - dart published_at: 2024-04-24 +updated_at: 2025-01-06 --- # Building a REST API with Nitric @@ -40,18 +41,10 @@ There is also an extended section of the guide that adds file operations using a ## Getting started -Start by creating a base Dart +Start by creating a new Nitric project from the dart template. ```bash -dart create -t console my-profile-api -``` - -Add the Nitric SDK and the uuid dependency by adding it to your `pubspec.yaml`. - -```yaml -dependencies: - nitric_sdk: ^1.2.0 - uuid: ^4.3.3 +nitric new my-profile-api dart-starter ``` Next, open the project in your editor of choice. @@ -63,62 +56,29 @@ cd my-profile-api The scaffolded project should have the following structure: ```text -bin/ -├── my_profile_api.dart -lib/ -├── my_profile_api.dart -test/ -├── my_profile_api_test.dart +services/ +├── api.dart .gitignore analysis_options.yaml -CHANGELOG.md -pubspec.lock +dart.dockerfile +dart.dockerfile.dockerignore +nitric.yaml pubspec.yaml README.md ``` -To create our Nitric project, we have to create a `nitric.yaml` file. The handlers key will point to where our - -```yaml -name: my_profile_api -services: - - match: bin/my_profile_api.dart - start: dart run $SERVICE_PATH -``` - -## Create a Profile class - -We will create a class to represent the profiles that we will store in the key value store. We will add `toJson` and `fromJson` functions to assist. 
-
-```dart
-class Profile {
-  String name;
-  int age;
-  String homeTown;
-
-  Profile(this.name, this.age, this.homeTown);
-
-  Profile.fromJson(Map contents)
-      : name = contents["name"] as String,
-        age = contents["age"] as int,
-        homeTown = contents["homeTown"] as String;
-
-  Map toJson() => {
-        'name': name,
-        'age': age,
-        'homeTown': homeTown,
-      };
-}
+As we will be generating IDs for each profile, add the `uuid` dependency to your `pubspec.yaml`:
+```bash
+dart pub add uuid
+```
 
 ## Building the API
 
-Applications built with Nitric can contain many APIs, let's start by adding one to this project to serve as the public endpoint. Rename `bin/my_profile_api.dart` to `bin/profiles.dart`
+Applications built with Nitric can contain many APIs, let's start by adding one to this project to serve as the public endpoint.
 
-```dart
+```dart title:services/api.dart
 import 'package:nitric_sdk/nitric.dart';
-import 'package:nitric_sdk/resources.dart';
 import 'package:uuid/uuid.dart';
 
@@ -128,14 +88,14 @@ void main() {
   // Define a key value store named 'profiles', then request get, set and delete permissions.
   final profiles = Nitric.kv("profiles").allow([
-    KeyValuePermission.get,
-    KeyValuePermission.set,
-    KeyValuePermission.delete
+    KeyValueStorePermission.get,
+    KeyValueStorePermission.set,
+    KeyValueStorePermission.delete
   ]);
 }
 ```
 
-Here we're creating an API named `public` and a key value store named `profiles`, then requesting get, set, and delete permissions which allows our function to access the key value store.
+Here we're creating an API named `public` and a key value store named `profiles`, then requesting get, set, and delete permissions which allows our service to access the key value store.
Resources in Nitric like `api` and `key value store` represent high-level @@ -143,7 +103,7 @@ Here we're creating an API named `public` and a key value store named `profiles` requests into appropriate resources for the specific [provider](https://nitric.io/docs/reference/providers). Nitric also takes care of adding the IAM roles, policies, etc. that grant the requested access. For - example the `key value stores` resource uses DynamoDB in AWS or FireStore on + example the `key value store` resource uses DynamoDB in AWS or FireStore on Google Cloud. @@ -156,13 +116,13 @@ Let's start adding features that allow our API consumers to work with profile da prefer. For simplicity we'll group them together in this guide. -```dart +```dart title:services/api.dart profileApi.post("/profiles", (ctx) async { final uuid = Uuid(); final id = uuid.v4(); - final profile = Profile.fromJson(ctx.req.json()); + final profile = ctx.req.json(); // Store the new profile in the profiles kv store await profiles.set(id, profile); @@ -176,14 +136,14 @@ profileApi.post("/profiles", (ctx) async { ### Retrieve a profile with GET -```dart +```dart title:services/api.dart profileApi.get("/profiles/:id", (ctx) async { final id = ctx.req.pathParams["id"]!; try { // Retrieve and return the profile data final profile = await profiles.get(id); - ctx.res.json(profile.toJson()); + ctx.res.json(profile); } on Exception catch (e) { print(e); ctx.res.status = 404; @@ -194,9 +154,28 @@ profileApi.get("/profiles/:id", (ctx) async { }); ``` +### Retrieve all profiles with GET + +```dart title:services/api.dart +profileApi.get("/profiles", (ctx) async { + List> profilesList = []; + final profilesIds = await profiles.keys(); + + await for (final id in profilesIds) { + final profile = await profiles.get(id); + profilesList.add(profile); + } + + ctx.res.body = jsonEncode(profilesList); + ctx.res.headers["Content-Type"] = ["application/json"]; + + return ctx; +}); +``` + ### Remove a profile with DELETE -```dart 
+```dart title:services/api.dart
 profileApi.delete("/profiles/:id", (ctx) async {
   final id = ctx.req.pathParams["id"]!;
 
@@ -221,7 +200,7 @@ Now that you have an API defined with handlers for each of its methods, it's tim
 nitric start
 ```
 
-Once it starts, the application will receive requests via the API port. You can use the Local Dashboard or any HTTP client to test the API. We'll keep it running for our tests. If you want to update your functions, just save them, they'll be reloaded automatically.
+Once it starts, the application will receive requests via the API port. You can use the [Local Dashboard](/get-started/foundations/projects/local-development) or any HTTP client to test the API. We'll keep it running for our tests. If you want to update your functions, just save them and they'll be reloaded automatically.
 
 ## Test the API
 
@@ -259,44 +238,45 @@ curl --location --request DELETE 'http://localhost:4001/profiles/[id]'
 
 ## Deploy to the cloud
 
-At this point, you can deploy the application to any supported cloud provider. Start by setting up your credentials and any configuration for the cloud you prefer:
+At this point, you can deploy what you've built to any of the supported cloud providers. To do this, start by setting up your credentials and any configuration for the cloud you prefer:
 
 - [AWS](/providers/pulumi/aws)
 - [Azure](/providers/pulumi/azure)
-- [Google Cloud](/providers/pulumi/gcp)
+- [GCP](/providers/pulumi/gcp)
+
+Next, we'll need to create a `stack`. A stack represents a deployed instance of an application, which is a collection of the resources defined in your project. You might want separate stacks for each environment, such as stacks for `dev`, `test` and `prod`. For now, let's start by creating a `dev` stack.
 
-Next, we'll need to create a `stack`. Stacks represent deployed instances of an application, including the target provider and other details such as the deployment region.
You'll usually define separate stacks for each environment such as development, testing and production. For now, let's start by creating a `dev` stack. +The `stack new` command below will create a stack named `dev` that uses the `aws` provider. ```bash -nitric stack new +nitric stack new dev aws ``` -``` -? What should we name this stack? dev -? Which provider do you want to deploy with? aws -? Which region should the stack deploy to? us-east-1 -``` +Continue by checking your stack file `nitric.dev.yaml` and adding in your preferred region, let's use `us-east-1`. -### AWS +```yaml title:nitric.dev.yaml +# The nitric provider to use +provider: nitric/aws@latest +# The target aws region to deploy to +# See available regions: +# https://docs.aws.amazon.com/general/latest/gr/lambda-service.html +region: us-east-1 +``` Cloud deployments incur costs and while most of these resource are available with free tier pricing you should consider the costs of the deployment. -In the previous step we called our stack `dev`, let's try deploying it with the `up` command. +We called our stack `dev`, let's try deploying it with the `up` command ```bash nitric up -┌───────────────────────────────────────────────────────────────┐ -| API | Endpoint | -| main | https://XXXXXXXX.execute-api.us-east-1.amazonaws.com | -└───────────────────────────────────────────────────────────────┘ ``` -When the deployment is complete, go to the relevant cloud console and you'll be able to see and interact with your API. If you'd like to make changes to the API you can apply those changes by rerunning the `up` command. Nitric will automatically detect what's changed and just update the relevant cloud resources. +When the deployment is complete, go to the relevant cloud console and you'll be able to see and interact with your API. 
-When you're done testing your application you can tear it down from the cloud, use the `down` command: +To tear down your application from the cloud, use the `down` command: ```bash nitric down @@ -310,13 +290,13 @@ If you want to go a bit deeper and create some other resources with Nitric, why Define a bucket named `profilesImg` with reading/writing permissions. -```dart +```dart title:services/api.dart final profilesImg = Nitric.bucket("profilesImg").allow([BucketPermission.read, BucketPermission.write]); ``` ### Get a URL to upload a profile image -```dart +```dart title:services/api.dart profileApi.get("/profiles/:id/image/upload", (ctx) async { final id = ctx.req.pathParams["id"]; @@ -331,7 +311,7 @@ profileApi.get("/profiles/:id/image/upload", (ctx) async { ### Get a URL to download a profile image -```dart +```dart title:services/api.dart profileApi.get("/profiles/:id/image/download", (ctx) async { final id = ctx.req.pathParams["id"]; @@ -346,7 +326,7 @@ profileApi.get("/profiles/:id/image/download", (ctx) async { You can also return a redirect response that takes the HTTP client directly to the photo URL. -```dart +```dart title:services/api.dart profileApi.get("/profiles/:id/image/view", (ctx) async { final id = ctx.req.pathParams["id"]; diff --git a/docs/guides/go/serverless-rest-api-example.mdx b/docs/guides/go/serverless-rest-api-example.mdx index 2503f3ffc..b202dc3c6 100644 --- a/docs/guides/go/serverless-rest-api-example.mdx +++ b/docs/guides/go/serverless-rest-api-example.mdx @@ -7,7 +7,7 @@ tags: languages: - go published_at: 2023-08-11 -updated_at: 2024-10-03 +updated_at: 2025-01-06 --- # Building your first API with Nitric @@ -67,7 +67,6 @@ The scaffolded project should have the following structure: +--services/ | +-- hello/ | +-- main.go -| ... 
+--nitric.yaml +--go.mod +--go.sum @@ -82,13 +81,11 @@ You can test the project to verify everything is working as expected: nitric start ``` -If everything is working as expected you can now delete all files/folders in the `services/` folder, we'll create new services in this guide. - ## Building the Profile API -Let's begin by setting up the Profiles API. First, create a new folder called `profiles` within the services directory. Inside this folder, add a file named `main.go`, and include the following code: +Applications built with Nitric can contain many APIs, let's start by adding an API and a key value store to this project to serve as the public endpoint. -```go +```go title:services/hello/main.go import ( "github.com/nitrictech/go-sdk/nitric" "github.com/nitrictech/go-sdk/nitric/keyvalue" @@ -118,7 +115,7 @@ From here, let's add some features to that function that allow us to work with p ### Create profiles with POST -```go +```go title:services/hello/main.go profilesApi.Post("/profiles", func(ctx *apis.Ctx) error { id := uuid.New().String() @@ -140,7 +137,7 @@ profilesApi.Post("/profiles", func(ctx *apis.Ctx) error { ### Retrieve a profile with GET -```go +```go title:services/hello/main.go profilesApi.Get("/profiles/:id", func(ctx *apis.Ctx) { id := ctx.Request.PathParams()["id"] @@ -158,7 +155,7 @@ profilesApi.Get("/profiles/:id", func(ctx *apis.Ctx) { ### List all profiles with GET -```go +```go title:services/hello/main.go profilesApi.Get("/profiles", func(ctx *apis.Ctx) error { keys, err := profiles.Keys(context.TODO()) if err != nil { @@ -183,7 +180,7 @@ profilesApi.Get("/profiles", func(ctx *apis.Ctx) error { ### Remove a profile with DELETE -```go +```go title:services/hello/main.go profilesApi.Delete("/profiles/:id", func(ctx *apis.Ctx) { id := ctx.Request.PathParams()["id"] @@ -263,9 +260,19 @@ nitric stack new dev aws Continue by checking your stack file `nitric.dev.yaml` and adding in your preferred region, let's use `us-east-1`. 
-### AWS
+```yaml
+# The nitric provider to use
+provider: nitric/aws@latest
+# The target aws region to deploy to
+# See available regions:
+# https://docs.aws.amazon.com/general/latest/gr/lambda-service.html
+region: us-east-1
+```
 
-Note: You are responsible for staying within the limits of the free tier or any costs associated with deployment.
+
+  Cloud deployments incur costs and while most of these resources are available
+  with free tier pricing, you should consider the costs of the deployment.
+
 
 We called our stack `dev`, let's try deploying it with the `up` command
 
@@ -289,13 +296,13 @@ If you want to go a bit deeper and create some other resources with Nitric, why
 
 Define a bucket named `profileImages` with read/write permissions
 
-```go
+```go title:services/hello/main.go
 profileImages := nitric.NewBucket("profileImages").Allow(storage.BucketRead, storage.BucketWrite)
 ```
 
 ### Get a URL to upload a profile image
 
-```go
+```go title:services/hello/main.go
 profilesApi.Get("/profiles/:id/image/upload", func(ctx *apis.Ctx) error {
   id := ctx.Request.PathParams()["id"]
   photoId := fmt.Sprintf("images/%s/photo.png", id)
@@ -313,7 +320,7 @@ profilesApi.Get("/profiles/:id/image/upload", func(ctx *apis.Ctx) error {
 
 ### Get a URL to download a profile image
 
-```go
+```go title:services/hello/main.go
 profilesApi.Get("/profiles/:id/image/download", func(ctx *apis.Ctx) error {
   id := ctx.Request.PathParams()["id"]
   photoId := fmt.Sprintf("images/%s/photo.png", id)
@@ -331,7 +338,7 @@ profilesApi.Get("/profiles/:id/image/download", func(ctx *apis.Ctx) error {
 
 You can also directly redirect to the photo URL.
-```go +```go title:services/hello/main.go profilesApi.Get("/profiles/:id/image/view", func(ctx *apis.Ctx) error { id := ctx.Request.PathParams()["id"] photoId := fmt.Sprintf("images/%s/photo.png", id) diff --git a/docs/guides/jvm/serverless-rest-api-example.mdx b/docs/guides/jvm/serverless-rest-api-example.mdx index 5c4a24dd6..7d2bf92a2 100644 --- a/docs/guides/jvm/serverless-rest-api-example.mdx +++ b/docs/guides/jvm/serverless-rest-api-example.mdx @@ -615,7 +615,7 @@ profileApi.get("/profiles/:id/image/download", (ctx) -> { -```kotlin Kotlin +```kotlin profileApi.get("/profiles/:id/image/download") { ctx -> val id = ctx.req.params["id"] diff --git a/docs/guides/nodejs/byo-database.mdx b/docs/guides/nodejs/byo-database.mdx index e9a2adb6e..757e29771 100644 --- a/docs/guides/nodejs/byo-database.mdx +++ b/docs/guides/nodejs/byo-database.mdx @@ -6,17 +6,15 @@ languages: - typescript - javascript published_at: 2022-10-13 -updated_at: 2024-06-14 +updated_at: 2024-12-30 --- # BYO Database -Nitric currently has out of the box **preview** support for [SQL databases](/sql) with AWS, however Nitric allows you to use whatever tooling and ORMs you prefer for directly interfacing with your database. Our recommendation for local development is to set up a container that runs alongside your Nitric processes. For a production environment, you can use any of the database services for your preferred cloud: +Nitric currently has out of the box **preview** support for [PostgreSQL databases](/sql), however Nitric allows you to use whatever tooling and ORMs you prefer for directly interfacing with your database. If you want to use a database that isn't managed by Nitric, we have the following recommendations: - - You can track the support for relational databases - [here](https://github.com/nitrictech/roadmap/issues/30). - +- For local development, set up a container that runs alongside your Nitric processes. 
+- For a production environment, use any of the database services offered by your preferred cloud provider: | | AWS | GCP | Azure | | ---------- | --------------------------------------------------------------------------- | ------------------------------------------------------------------------------------ | --------------------------------------------------------------------------------------- | @@ -50,7 +48,7 @@ await client.connect() To start, make sure you have a `.env` file containing your environment variables. -``` +```text title:.env POSTGRES_USER=root POSTGRES_PASSWORD=root POSTGRES_DB=my-database @@ -58,7 +56,7 @@ POSTGRES_DB=my-database You can then create a docker compose file to simplify running your docker container each time. -```yaml {{ label: "docker-compose.yml" }} +```yaml title:docker-compose.yml version: '3.6' services: postgres: @@ -75,7 +73,7 @@ services: You can add a script to your `package.json` which will run the docker compose file. -```json {{ label: "package.json" }} +```json title:package.json "scripts": { "db": "docker compose up --wait" } @@ -115,7 +113,7 @@ con.connect(function (err) { You can then create a docker compose file to simplify running your docker container each time. -```yaml {{ label: "docker-compose.yml" }} +```yaml title:docker-compose.yml version: '3.6' services: mysql: @@ -133,7 +131,7 @@ services: You can add a script to your `package.json` which will run the docker compose file. -```json {{ label: "package.json" }} +```json title:package.json "scripts": { "db": "docker compose up --wait" } @@ -165,7 +163,7 @@ await client.connect() You can then create a docker compose file to simplify running your docker container each time. -```yaml {{ label: "docker-compose.yml" }} +```yaml title:docker-compose.yml version: '3.6' services: mongodb: @@ -182,7 +180,7 @@ services: You can add a script to your `package.json` which will run the docker compose file. 
-```json {{ label: "package.json" }} +```json title:package.json "scripts": { "db": "docker compose up --wait" } diff --git a/docs/guides/nodejs/debugging.mdx b/docs/guides/nodejs/debugging.mdx index 622665e3d..c3d1259dd 100644 --- a/docs/guides/nodejs/debugging.mdx +++ b/docs/guides/nodejs/debugging.mdx @@ -51,7 +51,7 @@ Before running the debugger, it's necessary to install the project's dependencie -```bash npm +```bash npm install ``` @@ -59,7 +59,7 @@ npm install -```bash yarn +```bash yarn install ``` @@ -67,7 +67,7 @@ yarn install -```bash pnpm +```bash pnpm install ``` diff --git a/docs/guides/nodejs/expressjs.mdx b/docs/guides/nodejs/expressjs.mdx index 1a66f8f03..64ec249e8 100644 --- a/docs/guides/nodejs/expressjs.mdx +++ b/docs/guides/nodejs/expressjs.mdx @@ -7,7 +7,7 @@ languages: - typescript - javascript published_at: 2023-07-10 -updated_at: 2024-03-18 +updated_at: 2024-12-27 --- # Enhance Express.js Apps with Cloud Resources @@ -32,7 +32,7 @@ nitric new express-example js-starter Then install dependencies and add express: -``` +```bash cd express-example yarn install yarn add express @@ -42,7 +42,7 @@ You can go ahead and open this new project in your editor of choice. You should ```txt ├── services -│ ├── hello.js +│ ├── api.js ├── node_modules │ ├── ... ├── .gitignore @@ -55,17 +55,9 @@ You can go ahead and open this new project in your editor of choice. You should In this structure you'll notice the `services` folder. By default, this is where Nitric expects the entrypoint code for your application. However, that's just a convention, we can change that to anything else that suits our needs. -Let's start by replacing the default `hello.js` service with an `app.js` file ready for the Express.js application: - -```bash -rm ./services/hello.js - -touch ./services/app.js -``` - Now, let's add some express code to get things started. 
-```javascript {{ label: 'services/app.js' }} +```javascript title:services/api.js import express from 'express' import { http } from '@nitric/sdk' const app = express() @@ -100,9 +92,9 @@ Hello World! With everything working so far, now is a good time to see how we can add new resources to the Express app using Nitric. In this example, let's add a pub/sub topic which allows us to perform work in the background, but still respond quickly via the HTTP API. -You can update the `app.js` file like so: +You can update the `api.js` file like so: -```javascript {{ label: 'services/app.js' }} +```javascript title:services/api.js import express from 'express' import { http, topic } from '@nitric/sdk' @@ -125,7 +117,7 @@ touch services/worker.js Add this code to that file: -```javascript {{ label: 'services/worker.js' }} +```javascript title:services/worker.js import { topic } from '@nitric/sdk' const sleep = (ms) => new Promise((res) => setTimeout(res, ms)) @@ -166,7 +158,7 @@ nitric stack new This command will create a file named `nitric.dev.yaml`, with contents like this: -```yaml {{ label: 'nitric.dev.yaml' }} +```yaml title:nitric.dev.yaml provider: nitric/aws@1.1.0 region: us-east-1 ``` diff --git a/docs/guides/nodejs/fastify.mdx b/docs/guides/nodejs/fastify.mdx index 94edbf78e..a4d67afff 100644 --- a/docs/guides/nodejs/fastify.mdx +++ b/docs/guides/nodejs/fastify.mdx @@ -7,7 +7,7 @@ languages: - typescript - javascript published_at: 2023-07-12 -updated_at: 2024-03-18 +updated_at: 2024-12-27 --- # Add Cloud Resources to Fastify Apps @@ -42,7 +42,7 @@ You can go ahead and open this new project in your editor of choice. You should ```txt ├── services -│ ├── hello.js +│ ├── api.js ├── node_modules │ ├── ... ├── .gitignore @@ -53,19 +53,10 @@ You can go ahead and open this new project in your editor of choice. You should └── yarn.lock ``` -In this structure you'll notice the `servuces` folder. 
By default, this is where Nitric expects the entrypoint code for your application. However, that's just a convention, we can change that to anything else that suits our needs. - -Let's start by replacing the default `hello.js` service with an `app.js` file ready for the Fastify application: - -```bash -rm ./services/hello.js - -touch ./services/app.js -``` - +In this structure you'll notice the `services` folder. By default, this is where Nitric expects the entrypoint code for your application. However, that's just a convention, we can change that to anything else that suits our needs. Now, let's add some Fastify code to get things started. -```javascript {{ label: 'services/app.js' }} +```javascript title:services/api.js import Fastify from 'fastify' import { http } from '@nitric/sdk' @@ -77,13 +68,21 @@ fastify.get('/', (request, reply) => { reply.send('Hello World!') }) -http(fastify) +async function bootstrap(port: number) { + const address = await fastify.listen({ port }) + + console.log(`Server listening on ${address}`) + + return fastify.server +} + +http(bootstrap) ``` - If you're familiar with Fastify you'll notice this example doesn't call - `fastify.listen`. The Nitric `http` function takes care of that, as well as - binding the application to the correct port in each environment. + The `http` function is a Nitric helper that allows you to bootstrap your + Fastify application with Nitric. Here we're passing the port number as an + argument to the `fastify.listen` function and returning the server instance. At this point we're ready to start testing locally. @@ -94,7 +93,7 @@ nitric start Your Fastify application will now be running with Nitric acting as a proxy. We can test this in another terminal or web browser. -```bash {{ label: 'terminal' }} +```bash curl http://localhost:4001 Hello World! ``` @@ -103,9 +102,9 @@ Hello World! With everything working so far, now is a good time to see how we can add new resources to the Fastify app using Nitric. 
In this example, let's add a pub/sub topic which allows us to perform work in the background, but still respond quickly via the HTTP API. -You can update the `app.js` file like so: +You can update the `api.js` file like so: -```javascript {{ label: 'services/app.js' }} +```javascript title:services/api.js import Fastify from 'fastify' import { http, topic } from '@nitric/sdk' @@ -120,14 +119,14 @@ fastify.get('/', async (request, reply) => { }) async function bootstrap(port: number) { - await fastify.listen({ port }) + const address = await fastify.listen({ port }) + + console.log(`Server listening on ${address}`) return fastify.server } -http(bootstrap, () => { - console.log(`Application started`) -}) +http(bootstrap) ``` We'll also add a new function to do the background work: @@ -138,7 +137,7 @@ touch services/worker.js Add this code to that file: -```javascript {{ label: 'services/worker.js' }} +```javascript title:services/worker.js import { topic } from '@nitric/sdk' const sleep = (ms) => new Promise((res) => setTimeout(res, ms)) @@ -173,13 +172,13 @@ To perform the deployment we'll create a `stack`, stacks give Nitric the config The new stack command can help you create the stack by following prompts. -```bash {{ label: 'terminal' }} +```bash nitric stack new ``` This command will create a file named `nitric.dev.yaml`, with contents like this: -```yaml {{ label: 'nitric.dev.yaml' }} +```yaml title:nitric.dev.yaml provider: nitric/aws@1.1.0 region: us-east-1 ``` diff --git a/docs/guides/nodejs/graphql.mdx b/docs/guides/nodejs/graphql.mdx index 50d2367d1..ba16efcd5 100644 --- a/docs/guides/nodejs/graphql.mdx +++ b/docs/guides/nodejs/graphql.mdx @@ -7,7 +7,7 @@ languages: - typescript - javascript published_at: 2022-11-17 -updated_at: 2024-03-18 +updated_at: 2024-12-27 --- # Building a GraphQL API with Nitric @@ -56,7 +56,7 @@ The scaffolded project should have the following structure: ```text +--services/ -| +-- hello.ts +| +-- api.ts +--node_modules/ | ... 
+--nitric.yaml @@ -67,17 +67,9 @@ The scaffolded project should have the following structure: You can test the project to verify everything is working as expected: ```bash -npm run dev +nitric start ``` - - The `dev` script starts the Nitric Server using `nitric start`, which provides - local interfaces to emulate cloud resources, then runs your services and - allows them to connect. - - -If everything is working as expected you can now delete all files in the `services/` folder, we'll create new services in this guide. - ## Build the GraphQL Schema GraphQL requests are typesafe, and so they require a schema to be defined to validate queries. @@ -89,10 +81,9 @@ npm install graphql npm install uuid ``` -Create a new file named 'graphql.ts' in the services folder. We can then import `buildSchema`, and write out the schema. -```typescript +```typescript title:services/api.ts import { graphql, buildSchema } from 'graphql' import { v4 as uuid } from 'uuid' @@ -123,7 +114,7 @@ const schema = buildSchema(` We will also define a few types to mirror the schema definition. -```typescript +```typescript title:services/api.ts interface Profile { pid: string name: string @@ -138,7 +129,7 @@ type ProfileInput = Omit Lets define a KV resource for the resolvers get/set data from with some helper functions for serialization. -```typescript +```typescript title:services/api.ts import { kv } from '@nitric/sdk' const profiles = kv('profiles').allow('get', 'set') @@ -171,7 +162,7 @@ async function updateProfiles(profileList) { We can create a resolver object for use by the graphql handler. -```typescript +```typescript title:services/api.ts const resolvers = { createProfile, fetchProfiles, @@ -185,7 +176,7 @@ We can then use the KV resource within these services. 
Each resolver will receive the arguments supplied in the query.

## Create a profile

-```typescript
+```typescript title:services/api.ts
const createProfile = async ({ profile }): Promise<Profile> => {
  const profileList = await getProfiles()
  profile.pid = uuid()
@@ -197,7 +188,7 @@ const createProfile = async ({ profile }): Promise<Profile> => {

## Get all profiles

-```typescript
+```typescript title:services/api.ts
const fetchProfiles = async (): Promise<Profile[]> => {
  return await getProfiles()
}
@@ -205,7 +196,7 @@ const fetchProfiles = async (): Promise<Profile[]> => {

## Get a profile by its ID

-```typescript
+```typescript title:services/api.ts
const fetchProfile = async ({ pid }): Promise<Profile> => {
  const profileList = await getProfiles()
  const profile = profileList.find((profile) => profile.pid === pid)
@@ -219,7 +210,7 @@ We'll define an api to put our handler in. This api will only have one endpoint,

Update the imports to include api and declare the api.

-```typescript
+```typescript title:services/api.ts
import { api, kv } from '@nitric/sdk'

const profileApi = api('public')
@@ -227,7 +218,7 @@ const profileApi = api('public')

Then add the api handler.

-```typescript
+```typescript title:services/api.ts
import { graphql, buildSchema } from 'graphql'

profileApi.post('/', async (ctx) => {
@@ -249,18 +240,11 @@ Now that you have an API defined with a handler for the GraphQL requests, it's t

Test out your application with the following command:

```bash
-npm run dev
+nitric start
```

-
-  The `dev` script in the template starts the Nitric Server using `nitric start`
-  and runs your services.
-
-
Once it starts, the application will be able to receive requests via the API port.

-Pressing `ctrl + a + k` will end the application.
-
## GraphQL Queries

We can use cURL, Postman or any other HTTP client to test our application, however it's better if the client has GraphQL support.
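For clients without GraphQL support, it helps to remember that a GraphQL request over HTTP is just a POST with a JSON body holding a `query` string (and optional `variables`). A small sketch of building that payload, using a hypothetical profile query:

```javascript
// Build the JSON body a GraphQL client would POST to the API endpoint
const payload = {
  query: `query { fetchProfiles { pid name } }`,
  variables: {},
}

const body = JSON.stringify(payload)

// The server parses the same shape back out of the request body
const parsed = JSON.parse(body)
console.log(parsed.query.includes('fetchProfiles')) // true
```

Any HTTP client that can send this body (cURL with `--data`, Postman's raw JSON mode) can exercise the endpoint.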
@@ -370,31 +354,34 @@ curl --location -X POST \

## Deploy to the cloud

-Setup your credentials and any other cloud specific configuration:
+At this point, you can deploy what you've built to any of the supported cloud providers. In this example we'll deploy to AWS. Start by setting up your credentials and configuration for the [nitric/aws provider](/providers/pulumi/aws).

-- [AWS](/providers/pulumi/aws)
-- [Azure](/providers/pulumi/azure)
-- [GCP](/providers/pulumi/gcp)
+Next, we'll need to create a `stack file` (deployment target). A stack is a deployed instance of an application. You might want separate stacks for each environment, such as stacks for `dev`, `test`, and `prod`. For now, let's start by creating a file for the `dev` stack.

-Create a stack - a collection of resources identified in your project which will be deployed.
+The `stack new` command below will create a stack named `dev` that uses the `aws` provider.

```bash
-nitric stack new
+nitric stack new dev aws
```

-```
-? What should we name this stack? dev
-? Which provider do you want to deploy with? aws
-? Which region should the stack deploy to? us-east-1
-```
+Edit the stack file `nitric.dev.yaml` and set your preferred AWS region, for example `us-east-1`.

-You can then deploy using the following command:
+### AWS
+
+  You are responsible for staying within the limits of the free tier or any
+  costs associated with deployment.
+
+Let's try deploying the stack with the `up` command:

```bash
nitric up
```

-To undeploy run the following command:
+When the deployment is complete, go to the relevant cloud console and you'll be able to see and interact with your GraphQL application.
+
+To tear down your application from the cloud, use the `down` command:

```bash
nitric down
```
diff --git a/docs/guides/nodejs/nestjs.mdx b/docs/guides/nodejs/nestjs.mdx
index 2264f6ab5..8e6103289 100644
--- a/docs/guides/nodejs/nestjs.mdx
+++ b/docs/guides/nodejs/nestjs.mdx
@@ -47,7 +47,7 @@ This will create a src directory with the Nest application code and a test file.

Within our src directory we want to replace the base App controller code with a Profile controller.

-```typescript {{ label: 'profile/profile.controller.ts' }}
+```typescript title:profile/profile.controller.ts
import { Controller, Get } from '@nestjs/common'
import { ProfileService } from './profile.service'
@@ -64,7 +64,7 @@ export class ProfileController {

Then update the App service with the Profile service.

-```typescript {{ label: 'profile/profile.service.ts' }}
+```typescript title:profile/profile.service.ts
import { Injectable } from '@nestjs/common'

@Injectable()
@@ -77,7 +77,7 @@ export class ProfileService {

We will then need to update the imports for the base App module. We will keep this as the AppModule as it's going to act as the entrypoint to our application.

-```typescript {{ label: 'app.module.ts' }}
+```typescript title:app.module.ts
import { Module } from '@nestjs/common'
import { ProfileController } from './profile/profile.controller'
import { ProfileService } from './profile/profile.service'
@@ -100,7 +100,7 @@ yarn add @nitric/sdk

Firstly, we will define our profiles [key value store](/keyvalue) and a Profile type at the top of our `ProfileService` file. The key value store will have permissions for getting, setting, and deleting as we will be using all those features for our Profile service.
-```typescript {{ label: 'profile/profile.service.ts' }}
+```typescript title:profile/profile.service.ts
import { Injectable, NotFoundException } from '@nestjs/common';
import { kv } from '@nitric/sdk';
@@ -124,7 +124,21 @@ export class ProfileService {

We can then create some handlers for creating profiles. This will accept a create profile request and return the newly created profile.

-```typescript {{ label: 'profile/profile.service.ts' }}
+Install the `uuid` package to generate unique ids for the profiles.
+
+```bash
+yarn add uuid
+```
+
+Add the import to the top of the file.
+
+```typescript title:profile/profile.service.ts
+import { v4 as uuidv4 } from 'uuid'
+```
+
+Then add the handler changes.
+
+```typescript title:profile/profile.service.ts
@Injectable()
export class ProfileService {
  async createProfile(createProfileReq: Omit<Profile, 'id'>): Promise<Profile> {
@@ -141,7 +155,7 @@ export class ProfileService {

Our next handler will be getting an individual profile by its `id`. This will accept an `id` and will return either the found profile or throw a not found exception.

-```typescript {{ label: 'profile/profile.service.ts' }}
+```typescript title:profile/profile.service.ts
@Injectable()
export class ProfileService {
  ...
@@ -160,7 +174,7 @@ export class ProfileService {

The final handler we will write is for deleting an individual profile by its `id`.

-```typescript {{ label: 'profile/profile.service.ts' }}
+```typescript title:profile/profile.service.ts
@Injectable()
export class ProfileService {
  ...
@@ -179,7 +193,7 @@ To run the application locally using Nitric, and to eventually deploy the applic
name: nest-x-nitric
services:
  - match: src/main.ts
-    start: yarn dev
+    start: yarn start

We then want to add the `http` type to our main function, and pass in the Nest.js application. The `http` wrapper will use the `bootstrap` function when starting the application and pass it the port.
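As an aside on the `uuid` dependency added above: recent Node versions ship `crypto.randomUUID()`, which generates the same v4-style ids without a third-party package. A quick sketch (a design alternative, not what the guide's diff uses):

```javascript
import { randomUUID } from 'node:crypto'

// Generate a v4 UUID without any third-party dependency
const id = randomUUID()

// UUIDs are 36 characters: 32 hex digits in five hyphen-separated groups
const uuidShape = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/
console.log(uuidShape.test(id)) // true
```

Either approach works for profile ids; the package keeps parity with the other guides, while the built-in avoids a dependency.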
@@ -190,7 +204,7 @@ We then want to add the `http` type to our main function, and pass in the Nest.j set. -```typescript {{ label: 'app.module.ts' }} +```typescript title:app.module.ts import { http } from '@nitric/sdk' import { NestFactory } from '@nestjs/core' import { AppModule } from './app.module' @@ -207,7 +221,7 @@ http(bootstrap) Now that the service is built, we can use the `ProfileService` with our `ProfileController` routing. This involves correctly extracting the parameters from the request object to then pass into the services. -```typescript {{ label: 'profile/profile.controller.ts' }} +```typescript title:profile/profile.controller.ts import { Controller, Get, Post, Delete, Param, Req } from '@nestjs/common' import { Profile, ProfileService } from './profile.service' import { Request } from 'express' diff --git a/docs/guides/nodejs/nitric-and-supabase.mdx b/docs/guides/nodejs/nitric-and-supabase.mdx index 389a481f3..043bbf17d 100644 --- a/docs/guides/nodejs/nitric-and-supabase.mdx +++ b/docs/guides/nodejs/nitric-and-supabase.mdx @@ -137,7 +137,7 @@ With those steps complete, your project should look like this: Open `services/welcome.js` in your editor and replace the file contents with this: -```javascript {{ label: 'services/welcome.js' }} +```javascript title:services/welcome.js import 'dotenv/config' import sendgrid from '@sendgrid/mail' import { api } from '@nitric/sdk' @@ -181,13 +181,13 @@ Let's breakdown what the code above is doing. First, we import `dotenv/config` so that environment variables are automatically loaded from `.env` files. -```javascript {{ label: 'services/welcome.js' }} +```javascript title:services/welcome.js import 'dotenv/config' ``` Next, we setup the SendGrid client by providing it with an API key stored in the `SENDGRID_API_KEY` environment variable, which we'll create later. We also retrieve the sender email address from the `SENDGRID_SENDER` variable, which is used as the _from_ email address. 
-```javascript {{ label: 'services/welcome.js' }}
+```javascript title:services/welcome.js
import sendgrid from '@sendgrid/mail'

sendgrid.setApiKey(process.env.SENDGRID_API_KEY)
@@ -196,7 +196,7 @@ const sender = process.env.SENDGRID_SENDER

After that, we create a new Nitric API and set it up with a handler for _POST_ requests to `/welcome`. This handler checks for an API key in the request headers, which helps make sure that only _our_ Supabase project can make these requests. Then, it pulls the user's email address and name from the `record` object in the POST request body and uses those values to send the user an email.

-```javascript {{ label: 'services/welcome.js' }}
+```javascript title:services/welcome.js
import { api } from '@nitric/sdk'

const payApi = api('notifications')
diff --git a/docs/guides/nodejs/serverless-api-with-planetscale-and-prisma.mdx b/docs/guides/nodejs/serverless-api-with-planetscale-and-prisma.mdx
index b4cb51dfe..5bfe2f22a 100644
--- a/docs/guides/nodejs/serverless-api-with-planetscale-and-prisma.mdx
+++ b/docs/guides/nodejs/serverless-api-with-planetscale-and-prisma.mdx
@@ -114,7 +114,7 @@ This gives you a new prisma schema in a folder called `prisma` and a new `.env`

Overwrite the contents of `schema.prisma` with the schema below. We'll use this to initialize our database.

-```prisma {{ label: "prisma/schema.prisma" }}
+```prisma title:prisma/schema.prisma
// This is your Prisma schema file,
// learn more about it in the docs: https://pris.ly/d/prisma-schema
@@ -168,7 +168,7 @@ npx prisma generate

Finally, let's make it easy to import an instance of the prisma client by creating the file `prisma/index.ts` and adding this code:

-```typescript {{ label: 'prisma/index.ts' }}
+```typescript title:prisma/index.ts
import { PrismaClient } from './client'

export * from './client'
@@ -217,7 +217,7 @@ Apps built with Nitric define their resources in code, you can write this in the

First, let's declare an API gateway.
Create a new file called `apis.ts` in a new folder called `resources` and add this code:

-```typescript {{ label: "resources/apis.ts" }}
+```typescript title:resources/apis.ts
import { api } from '@nitric/sdk'

export const memeApi = api('meme')
@@ -227,7 +227,7 @@ This creates a new `api` resource with the name "meme" and exports it as a resou

Next, let's also create some buckets to store our meme image files. Create a new file called `buckets.ts` under `resources` and populate it with the following:

-```typescript {{ label: "resources/buckets.ts" }}
+```typescript title:resources/buckets.ts
import { bucket } from '@nitric/sdk'

export const templates = bucket('templates')
@@ -242,7 +242,7 @@ Now that the resources are declared, let's create the first service. This servic

In the `/services` directory create a new file called `templates.ts` and populate it with the following code:

-```typescript {{ label: "services/templates.ts" }}
+```typescript title:services/templates.ts
import Jimp from 'jimp'
import prisma, { MemeTemplate, TextPosition } from '../prisma'
import { memeApi } from '../resources/apis'
@@ -317,7 +317,7 @@ The incoming `context` object _(which has been destructured into `req` and `res`

Similar to the `templates` example, we'll create another new file `services/memes.ts`, with the code below:

-```typescript {{ label: "services/memes.ts" }}
+```typescript title:services/memes.ts
import Jimp from 'jimp'
import prisma, { Meme } from '../prisma'
import { memes, templates } from '../resources/buckets'
diff --git a/docs/guides/nodejs/serverless-rest-api-example.mdx b/docs/guides/nodejs/serverless-rest-api-example.mdx
index 17e4eadd5..9d3a78b02 100644
--- a/docs/guides/nodejs/serverless-rest-api-example.mdx
+++ b/docs/guides/nodejs/serverless-rest-api-example.mdx
@@ -8,7 +8,7 @@ languages:
- typescript
- javascript
published_at: 2023-06-16
-updated_at: 2024-05-15
+updated_at: 2025-01-06
---

# Building a REST API with Nitric
@@ -19,6 +19,7 @@ The API will provide
the following routes: | **Method** | **Route** | **Description** | | ---------- | -------------- | -------------------------------- | +| `GET` | /profiles | Get all profiles | | `GET` | /profiles/[id] | Get a specific profile by its Id | | `POST` | /profiles | Create a new profile | | `DELETE` | /profiles/[id] | Delete a profile | @@ -63,7 +64,7 @@ The scaffolded project should have the following structure: ```text services/ -├── hello.ts +├── api.ts node_modules/ nitric.yaml package.json @@ -76,8 +77,6 @@ You can test the project to verify everything is working as expected: nitric start ``` -If everything's working you can now delete all files in the `services/` folder, we'll create new services in this guide. - ## Building the API This example uses UUIDs to create unique IDs to store profiles against, let's start by adding a library to help with that: @@ -86,9 +85,9 @@ This example uses UUIDs to create unique IDs to store profiles against, let's st npm install uuid ``` -Applications built with Nitric can contain many APIs, let's start by adding one to this project to serve as the public endpoint. Create a file named `profiles.ts` in the services directory and add the following code to that file. +Applications built with Nitric can contain many APIs, let's start by adding an API and a key value store to this project to serve as the public endpoint. -```typescript title:services/profiles.ts +```typescript title:services/api.ts import { api, kv } from '@nitric/sdk' import { v4 as uuid } from 'uuid' @@ -127,7 +126,7 @@ Next we will add features that allow our API consumers to work with profile data prefer. For simplicity we'll group them together in this guide. 
-```typescript title:services/profiles.ts +```typescript title:services/api.ts profileApi.post('/profiles', async (ctx) => { const id = uuid() const { name, age, homeTown } = ctx.req.json() @@ -156,7 +155,7 @@ profileApi.post('/profiles', async (ctx) => { ### Retrieve a profile with GET -```typescript title:services/profiles.ts +```typescript title:services/api.ts profileApi.get('/profiles/:id', async (ctx) => { const { id } = ctx.req.params @@ -177,7 +176,7 @@ profileApi.get('/profiles/:id', async (ctx) => { ### Remove a profile with DELETE -```typescript title:services/profiles.ts +```typescript title:services/api.ts profileApi.delete('/profiles/:id', async (ctx) => { const { id } = ctx.req.params @@ -200,26 +199,18 @@ profileApi.delete('/profiles/:id', async (ctx) => { ### List all profiles with GET -```typescript title:services/profiles.ts +```typescript title:services/api.ts profileApi.get('/profiles', async (ctx) => { - try { - const profilesList = [] - // Get a profile by id - const keys = profiles.keys() + const profilesList = [] + const keys = profiles.keys() - for await (const key of keys) { - const profile = await profiles.get(key) - profilesList.push(profile) - } - - // Set a JSON HTTP response - ctx.res.json(profilesList) - } catch (error) { - ctx.res.status = 404 - ctx.res.json({ - msg: `Profiles not found.`, - }) + for await (const key of keys) { + const profile = await profiles.get(key) + profilesList.push(profile) } + + // Set a JSON HTTP response + ctx.res.json(profilesList) }) ``` @@ -278,10 +269,19 @@ At this point, you can deploy the application to any supported cloud provider. S Next, we'll need to create a `stack`. Stacks represent deployed instances of an application, including the target provider and other details such as the deployment region. You'll usually define separate stacks for each environment such as development, testing and production. 
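The simplified listing handler above works because `keys()` returns an async iterable, which `for await` consumes one key at a time. A stdlib-only sketch of that pattern with a stand-in store (the names here are illustrative, not the Nitric SDK):

```javascript
// Stand-in for a key value store: a Map plus an async key generator
const store = new Map([
  ['a1', { name: 'Alice' }],
  ['b2', { name: 'Bob' }],
])

async function* keys() {
  for (const key of store.keys()) yield key
}

// Collect every value by iterating keys asynchronously, as the handler does
const profilesList = []
for await (const key of keys()) {
  profilesList.push(store.get(key))
}

console.log(profilesList.length) // 2
```

Because each `get` is awaited inside the loop, the handler keeps memory bounded to one pending lookup at a time.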
```bash
-nitric stack new dev
+nitric stack new dev aws
```

-### AWS
+Continue by checking your stack file `nitric.dev.yaml` and adding your preferred region; let's use `us-east-1`.
+
+```yaml title:nitric.dev.yaml
+# The nitric provider to use
+provider: nitric/aws@latest
+# The target aws region to deploy to
+# See available regions:
+# https://docs.aws.amazon.com/general/latest/gr/lambda-service.html
+region: us-east-1
+```

  Cloud deployments incur costs and while most of these resources are available
@@ -310,7 +310,7 @@ If you want to go a bit deeper and create some other resources with Nitric, why

Define a bucket named `profilesImg` with reading/writing permissions.

-```typescript title:services/profiles.ts
+```typescript title:services/api.ts
import { bucket } from '@nitric/sdk'

const profilesImg = bucket('profilesImg').allow('read', 'write')
@@ -318,7 +318,7 @@ const profilesImg = bucket('profilesImg').allow('read', 'write')

### Get a URL to upload a profile image

-```typescript title:services/profiles.ts
+```typescript title:services/api.ts
profileApi.get('/profiles/:id/image/upload', async (ctx) => {
  const { id } = ctx.req.params
@@ -335,7 +335,7 @@ profileApi.get('/profiles/:id/image/upload', async (ctx) => {

### Get a URL to download a profile image

-```typescript title:services/profiles.ts
+```typescript title:services/api.ts
profileApi.get('/profiles/:id/image/download', async (ctx) => {
  const { id } = ctx.req.params
@@ -352,7 +352,7 @@ profileApi.get('/profiles/:id/image/download', async (ctx) => {

You can also return a redirect response that takes the HTTP client directly to the photo URL.
-```typescript
+```typescript title:services/api.ts
profileApi.get('/profiles/:id/image/view', async (ctx) => {
  const { id } = ctx.req.params
diff --git a/docs/guides/nodejs/stripe.mdx b/docs/guides/nodejs/stripe.mdx
index 9f24f8859..c8d5cbda8 100644
--- a/docs/guides/nodejs/stripe.mdx
+++ b/docs/guides/nodejs/stripe.mdx
@@ -71,7 +71,7 @@ npm install stripe

We will also add a utils file for our Stripe object and some environment variables. We'll put ours in `common/utils.ts`

-```ts {{ label: 'common/utils.ts' }}
+```ts title:common/utils.ts
import Stripe from 'stripe'
import dotenv from 'dotenv'
@@ -104,7 +104,7 @@ We'll first create a handler for creating checkout sessions. We will define a `P

There is also an optional choice for a success and cancel URL for where to redirect the user after the checkout has been completed.

-```typescript {{ label: 'services/hello.ts' }}
+```typescript title:services/hello.ts
import { api } from '@nitric/sdk'
import { stripe } from '../common/utils'
@@ -143,7 +143,7 @@ Creating a webhook that connects to stripe means that you can have reactive logi

This route is simple, but quite long. We will break it down, but here is the full example:

-```typescript {{ label: 'services/hello.ts' }}
+```typescript title:services/hello.ts
import { api } from '@nitric/sdk';
import Stripe from 'stripe';
import { stripe, stripeWebhookSecret } from '../common/utils';
@@ -198,7 +198,7 @@ const handleOrder = (session: Stripe.Event.Data) => {

The first step is to verify that the event signature is correct, thus validating that the event came from Stripe. We extract the stripe-signature from the request headers, then compare the signature against our Stripe webhook secret. The webhook secret comes from the environment variables and can be set when we start testing. We use the `stripe.webhooks.constructEvent` function for doing this comparison. It will throw an error if the signature doesn't match.
-```typescript {{ label: 'services/hello.ts' }} +```typescript title:services/hello.ts const buf = Buffer.from(ctx.req.data) const sig = ctx.req.headers['stripe-signature'] let event: Stripe.Event @@ -218,7 +218,7 @@ try { After the event is constructed, we can write the logic to handle it based on the type. This will compare the event type in a switch statement. In this case, we only have the event type `checkout.session.completed`, but this is where you would put further event handling features. -```typescript {{ label: 'services/hello.ts' }} +```typescript title:services/hello.ts try { switch (event.type) { case 'checkout.session.completed': diff --git a/docs/guides/nodejs/testing.mdx b/docs/guides/nodejs/testing.mdx index a4c434e22..d67eb731f 100644 --- a/docs/guides/nodejs/testing.mdx +++ b/docs/guides/nodejs/testing.mdx @@ -21,13 +21,13 @@ In this example we'll write a small API, with some tricks that make unit testing We'll start by defining the Nitric resources for our application in their own files, separate from handlers and other code. This separation helps with resource reuse and can provide isolation that makes unit testing easier. -```javascript {{ label: "resources/apis.js" }} +```javascript title:resources/apis.js import { api } from '@nitric/sdk' export const helloApi = api('main') ``` -```javascript {{ label: "resources/buckets.js" }} +```javascript title:resources/buckets.js import { bucket } from '@nitric/sdk' export const imageBucket = bucket('images') @@ -35,7 +35,7 @@ export const imageBucket = bucket('images') We want to be able to test individual API route handler callbacks without running the entire API or application. Since testing anonymous services in the route callback can be difficult, we'll separate the callbacks into their own files to help with isolation. 
-```javascript {{ label: "handlers/hello.js" }} +```javascript title:handlers/hello.js import { imageBucket } from '../resources/buckets' const imageWriter = imageBucket.allow('write') @@ -54,7 +54,7 @@ export const handleAddImage = async (ctx) => { } ``` -```javascript {{ label: "services/hello.js" }} +```javascript title:services/hello.js import { helloApi } from '../resources/apis' import { handleHello, handleAddImage } from '../handlers/hello' @@ -70,7 +70,7 @@ Next create a `test` directory, then add the example test file as shown below. T In this example we're testing that if the function is passed a context with a set name parameter, it should return the same context with a body added to the response. Since the `handleHello` function is async we use an async test and await the results. -```javascript {{ label: "test/example.test.js" }} +```javascript title:test/example.test.js import { handleHello } from '../handlers/hello' describe('Testing Hello Service', () => { @@ -95,7 +95,7 @@ The next function `handleAddImage` is a bit more challenging to test since it re We'll go over it section by section, but here is the full example: -```javascript {{ label: "test/example.test.js" }} +```javascript title:test/example.test.js ... describe('Given name is valid', () => { @@ -237,13 +237,13 @@ npm install --save-dev jest supertest @types/jest @types/supertest Next, we'll write a small API to do some testing on. Just like in the Unit Testing example, we'll start by creating some Nitric resources. 
-```javascript {{ label: "resources/apis.js" }}
+```javascript title:resources/apis.js
import { api } from '@nitric/sdk'

export const helloApi = api('main')
```

-```javascript {{ label: "resources/buckets.js" }}
+```javascript title:resources/buckets.js
import { bucket } from '@nitric/sdk'

export const imageBucket = bucket('images')
@@ -251,7 +251,7 @@ export const imageBucket = bucket('images')

Next, let's create an API by defining our routes and their callback functions:

-```javascript {{ label: "services/hello.js" }}
+```javascript title:services/hello.js
import { imageBucket } from '../resources/buckets'
import { helloApi } from '../resources/apis'
@@ -278,7 +278,7 @@ helloApi.post('/:name', handleAddImage)

Now we can start writing the test. First create a `test` directory, then add a test file to it named `integration.test.js`. For the test, we want to create an agent using supertest, then point the agent at the URL of our API.

-```javascript {{ label: "tests/integration.test.js" }}
+```javascript title:tests/integration.test.js
import supertest from 'supertest'

describe('Testing Hello Api', () => {
@@ -290,7 +290,7 @@ We can then add tests that make requests to the API. We'll start with testing th

This request has the name parameter set to 'test'. This means we expect the response to be 'Hello test' and the status code to be 200. We're provided with a `done` function in the test callback; we call it when the test is resolved or encounters errors. This is to stop timeouts on the test, as we are testing an async operation.
-```javascript {{ label: "tests/integration.test.js" }}
+```javascript title:tests/integration.test.js
import supertest from 'supertest'
import assert from 'assert'
diff --git a/docs/guides/nodejs/twilio.mdx b/docs/guides/nodejs/twilio.mdx
index fde2c0fa5..9bbaa4c69 100644
--- a/docs/guides/nodejs/twilio.mdx
+++ b/docs/guides/nodejs/twilio.mdx
@@ -40,7 +40,7 @@ Once you've gone through the account creation and verification, you'll arrive at

The messenger class acts as a helper wrapper for sending text messages. It accepts the account SID and the auth token and creates a client. This client is then used in the single method `send` which accepts a text message object and creates it.

-```typescript {{ label: "common/messenger.ts" }}
+```typescript title:common/messenger.ts
import twilio, { Twilio } from 'twilio'

class Messenger {
@@ -73,7 +73,7 @@ npm install twilio

For the API we will have a single POST route `send`. This is done by creating an API resource using the Nitric SDK and defining a new route.

-```typescript {{ label: "services/text.ts" }}
+```typescript title:services/text.ts
import { api } from '@nitric/sdk'

const textApi = api('text')
@@ -97,7 +97,7 @@ TWILIO_PHONE_NUMBER=+1234567890

The `dotenv.config` call will load the variables from the `.env` file into the `process.env` object. You can then use the variables to construct the messenger.
-```typescript {{ label: "services/text.ts" }}
+```typescript title:services/text.ts
import { api } from '@nitric/sdk'
import Messenger from '../common/messenger'
@@ -140,7 +140,7 @@ Secondly, convert this data to an SMS object:

Finally, putting all these components together, we get a functional text messaging endpoint ready for testing:

-```typescript {{ label: "services/text.ts" }}
+```typescript title:services/text.ts
textApi.post('/send', async (ctx) => {
  const messenger = new Messenger(twilioAccountSID, twilioAuthToken)
diff --git a/docs/guides/nodejs/websockets.mdx b/docs/guides/nodejs/websockets.mdx
index 69301b6b5..3f4856727 100644
--- a/docs/guides/nodejs/websockets.mdx
+++ b/docs/guides/nodejs/websockets.mdx
@@ -60,7 +60,7 @@ In this structure you'll notice the `services` folder. By default, this is where

Let's update our `hello.ts` file with some websocket code to get started.
You can update the `hello.ts` file like so:

-```typescript {{ label: 'services/hello.ts' }}
+```typescript title:services/hello.ts
import { websocket, kv } from '@nitric/sdk'

// Initialize KV store for connections and a WebSocket
@@ -159,14 +159,14 @@ nitric stack new

This command will create a file named `nitric.dev.yaml`, with contents like this:

-```yaml {{ label: 'nitric.dev.yaml' }}
+```yaml title:nitric.dev.yaml
provider: nitric/aws@1.1.0
region: us-east-1
```

With the stack file in place we can run the deployment:

-```bash {{ label: 'terminal' }}
+```bash
nitric up
```
diff --git a/docs/guides/python/ai-podcast-part-1.mdx b/docs/guides/python/ai-podcast-part-1.mdx
index e4705a11b..63b3a2e3d 100644
--- a/docs/guides/python/ai-podcast-part-1.mdx
+++ b/docs/guides/python/ai-podcast-part-1.mdx
@@ -607,7 +607,7 @@ We'll also add a dockerignore file to try and keep the image size down.

touch torch.dockerfile.dockerignore
```

-```gitignore title: torch.dockerfile.dockerignore
+```text title:torch.dockerfile.dockerignore
.mypy_cache/
.nitric/
.venv/
@@ -620,7 +620,7 @@ model.zip

We'll also need to update the `python.dockerfile` to ignore the `.model` directory.
-```gitignore title: python.dockerfile.dockerignore
+```text title:python.dockerfile.dockerignore
.mypy_cache/
.nitric/
.venv/
diff --git a/docs/guides/python/ai-podcast-part-2.mdx b/docs/guides/python/ai-podcast-part-2.mdx
index 7d37fd1fd..4bab95f5f 100644
--- a/docs/guides/python/ai-podcast-part-2.mdx
+++ b/docs/guides/python/ai-podcast-part-2.mdx
@@ -374,7 +374,7 @@ We can also add a `.dockerignore` file to prevent unnecessary files from being i

touch llama.dockerfile.dockerignore
```

-```plaintext title:llama.dockerfile.dockerignore
+```text title:llama.dockerfile.dockerignore
.mypy_cache/
.nitric/
.venv/
diff --git a/docs/guides/python/blender-render.mdx b/docs/guides/python/blender-render.mdx
index d2728008e..777fc9df4 100644
--- a/docs/guides/python/blender-render.mdx
+++ b/docs/guides/python/blender-render.mdx
@@ -11,6 +11,7 @@ featured:
languages:
  - python
published_at: 2024-11-13
+updated_at: 2025-01-06
---

# Use Cloud GPUs for rendering your Blender projects
@@ -879,12 +880,17 @@ Next, we'll need to create a stack file (deployment target). A stack is a deploy

The `stack new` command below will create a stack named `dev` that uses the `aws` provider.

-```
+```bash
nitric stack new dev aws
```

Edit the stack file `nitric.dev.yaml` and set your preferred AWS region, for example `us-east-1`.

+```yaml title:nitric.dev.yaml
+provider: nitric/aws@latest
+region: us-east-1
+```
+

  You are responsible for staying within the limits of the free tier or any
  costs associated with deployment.
diff --git a/docs/guides/python/create-histogram.mdx b/docs/guides/python/create-histogram.mdx
index 46f43d4db..4e46ecc4c 100644
--- a/docs/guides/python/create-histogram.mdx
+++ b/docs/guides/python/create-histogram.mdx
@@ -6,7 +6,7 @@ tags:
languages:
  - python
published_at: 2022-12-20
-updated_at: 2024-10-17
+updated_at: 2025-01-06
---

# Building a data visualization API with Nitric
@@ -17,7 +17,7 @@ We'll be making a serverless application which can take information from a HTTP

## Prerequisites

-- [Pipenv](https://pypi.org/project/pipenv/) - for simplified dependency management
+- [uv](https://docs.astral.sh/uv/#getting-started) - for Python dependency management
- The [Nitric CLI](/get-started/installation)
- _(optional)_ Your choice of an [AWS](https://aws.amazon.com), [GCP](https://cloud.google.com) or [Azure](https://azure.microsoft.com) account
@@ -26,7 +26,7 @@ We'll be making a serverless application which can take information from a HTTP

We'll start by creating a new project for our API.

```bash
-nitric new histogram-api py-starter-pipenv
+nitric new histogram-api py-starter
```

Next, open the project in your editor of choice.

```bash
cd histogram-api
```

-Make sure all dependencies are resolved using Pipenv:
+Make sure all dependencies are resolved using `uv`:

```bash
-pipenv install --dev
+uv sync
```

-Starting from scratch in the `hello.py` service file, lets start by importing and defining our api.
+Starting from scratch in the `api.py` service file, let's start by importing and defining our api.

-```python
+```python title:services/api.py
from nitric.resources import api
from nitric.application import Nitric
from nitric.context import HttpContext
@@ -55,7 +55,7 @@ Nitric.run()

We can then define our api route which will accept the histogram data and return it as an image.
-```python +```python title:services/api.py from nitric.resources import api from nitric.application import Nitric from nitric.context import HttpContext @@ -68,7 +68,6 @@ async def create_histogram(ctx: HttpContext) -> None: pass Nitric.run() - ``` The route will be registered for a GET method so that it is a link that can be embedded. For example, once the API is complete, it will be able to be used in image sources like: @@ -81,7 +80,7 @@ The route will be registered for a GET method so that it is a link that can be e We can then write the logic for creating the histogram from the request. As seen in the above URL, there are 4 query parameters that we want a user to be able to configure: the data, y-label, x-label, and title. -```python +```python title:services/api.py @main_api.get("/histogram") async def create_histogram(ctx: HttpContext) -> None: # Extract the comma-delimited data and split it into an array @@ -111,10 +110,10 @@ async def create_histogram(ctx: HttpContext) -> None We will also want to make the bins for our histogram equal to the range of the histogram. For this we will use numpy, as it will support using floats as well as integers. ```bash -pipenv install numpy +uv add numpy ``` -```python +```python title:services/api.py import numpy as np ... @@ -125,7 +124,7 @@ plt.hist(x=data, bins=np.arange(min(data), max(data) + 2, 1)) At this point the plot will be created, however, nothing but a 200 status will be returned to the user. To actually return the data as an image to the user, we will need to first get the image data. -```python +```python title:services/api.py import io with io.BytesIO() as buffer: # use buffer memory @@ -137,7 +136,7 @@ with io.BytesIO() as buffer: # use buffer memory This will convert the plot to a png and store it in the buffer. We can then return it in the body of our response and set the header to the correct content type. At the end we want to reset the plot. 
The plot not being cleared will only affect local reruns, as once deployed the state is ephemeral. -```python +```python title:services/api.py with io.BytesIO() as buffer: # use buffer memory plt.savefig(buffer, format='png') buffer.seek(0) @@ -165,25 +164,37 @@ Browsing to this URL should produce a histogram like: ## Deploy to the cloud -Setup your credentials and any other cloud specific configuration: +At this point, you can deploy what you've built to any of the supported cloud providers. In this example we'll deploy to AWS. Start by setting up your credentials and configuration for the [nitric/aws provider](/providers/pulumi/aws). -- [AWS](/providers/pulumi/aws) -- [Azure](/providers/pulumi/azure) -- [GCP](/providers/pulumi/gcp) +Next, we'll need to create a stack file (deployment target). A stack is a deployed instance of an application. You might want separate stacks for each environment, such as stacks for `dev`, `test`, and `prod`. For now, let's start by creating a file for the `dev` stack. -Create your stack. This is an environment configuration file for the cloud provider for which your project will be deployed. +The `stack new` command below will create a stack named `dev` that uses the `aws` provider. ```bash -nitric stack new +nitric stack new dev aws +``` + +Edit the stack file `nitric.dev.yaml` and set your preferred AWS region, for example `us-east-1`. + +```yaml title:nitric.dev.yaml +provider: nitric/aws@latest +region: us-east-1 ``` -You can then deploy using the following command: + + You are responsible for staying within the limits of the free tier or any + costs associated with deployment. + + +Let's try deploying the stack with the `up` command: ```bash nitric up ``` -To undeploy run the following command: +When the deployment is complete, go to the relevant cloud console and you'll be able to see and interact with your application. 
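Before or after deploying, the query handling described earlier in this guide is easy to sanity check in isolation. The dependency-free sketch below mirrors the comma-delimited data parsing and the `np.arange(min(data), max(data) + 2, 1)` bin-edge logic the service uses; the function names here are illustrative, not part of the guide's code:

```python
def parse_data(raw: str) -> list[float]:
    # "1,2,2,3" -> [1.0, 2.0, 2.0, 3.0], ignoring empty segments
    return [float(v) for v in raw.split(",") if v.strip()]

def bin_edges(data: list[float]) -> list[float]:
    # Unit-width bins spanning the data range, equivalent to
    # np.arange(min(data), max(data) + 2, 1) from the service code
    edges, edge = [], min(data)
    stop = max(data) + 2
    while edge < stop:
        edges.append(edge)
        edge += 1
    return edges
```

For the sample URL above, `parse_data("1,2,2,3")` yields four points and `bin_edges` produces one bin per integer value in the range, matching what numpy computes in the deployed service.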
+ +To tear down your application from the cloud, use the `down` command: ```bash nitric down diff --git a/docs/guides/python/graphql.mdx b/docs/guides/python/graphql.mdx index f7eca8ab4..0764d77d7 100644 --- a/docs/guides/python/graphql.mdx +++ b/docs/guides/python/graphql.mdx @@ -6,7 +6,7 @@ tags: languages: - python published_at: 2022-04-14 -updated_at: 2024-10-17 +updated_at: 2025-01-06 --- # Building a GraphQL API with Nitric @@ -31,7 +31,7 @@ Here's a video of this guide built with Node.js: ## Prerequisites -- [Pipenv](https://pypi.org/project/pipenv/) - for simplified dependency management +- [uv](https://docs.astral.sh/uv/#getting-started) - for Python dependency management - The [Nitric CLI](/get-started/installation) - _(optional)_ Your choice of an [AWS](https://aws.amazon.com), [GCP](https://cloud.google.com) or [Azure](https://azure.microsoft.com) account @@ -40,7 +40,7 @@ Here's a video of this guide built with Node.js: We'll start by creating a new project for our API. ```bash -nitric new my-profile-api py-starter-pipenv +nitric new my-profile-api py-starter ``` Next, open the project in your editor of choice. @@ -49,35 +49,27 @@ Next, open the project in your editor of choice. cd my-profile-api ``` -Make sure all dependencies are resolved: - -Using Pipenv: +Make sure all dependencies are resolved using `uv`: ```bash -pipenv install --dev +uv sync ``` The scaffolded project should have the following structure: ```text ++--.venv/ +--services/ -| +-- hello.py +| +-- api.py ++--.env ++--.gitignore +--nitric.yaml -+--Pipfile -+--Pipfile.lock ++--.python-version ++--pyproject.toml ++--uv.lock +--README.md ``` -You can test the project to verify everything is working as expected: -Start the Nitric server to emulate cloud services on your machine: -```bash -nitric start -``` -If everything is working as expected you can now delete all files in the `services/` folder, we'll create new services in this guide. 
- ## Build the GraphQL Schema GraphQL requests are typesafe, and so they require a schema to be defined to validate queries. @@ -85,13 +77,12 @@ GraphQL requests are typesafe, and so they require a schema to be defined to val Let's first add the [Ariadne library](https://ariadnegraphql.org/): ```bash -pipenv install ariadne +uv add ariadne ``` -Create a new file named 'graphql.py' in the services folder. -We can then import our dependencies, and write out the schema. +We'll then import our dependencies, and write out the schema. -```python +```python title:services/api.py from ariadne import MutationType, QueryType, gql, make_executable_schema, graphql from uuid import uuid4 @@ -128,7 +119,7 @@ type_defs = gql(""" Let's define a key value store resource for the resolvers to get/set data from. -```python +```python title:services/api.py from nitric.resources import api, kv from nitric.application import Nitric @@ -139,7 +130,7 @@ profiles = kv('profiles').allow('get','set') We'll need to map our resolvers to mutations or queries using Ariadne's QueryType or MutationType. 
-```python +```python title:services/api.py query = QueryType() mutation = MutationType() ``` @@ -152,7 +143,7 @@ An example of this is converting the GraphQL query function into Python: updateProfile(pid: String!, profile: ProfileInput!): Profile ``` -```python +```python title:services/api.py @mutation.field("updateProfile") async def update_profiles(obj, info, pid, profile): pass @@ -160,7 +151,7 @@ async def update_profiles(obj, info, pid, profile): ## Create a profile -```python +```python title:services/api.py @mutation.field("createProfile") async def resolve_create_profile(obj, info, profile): pid = str(uuid4()) @@ -173,7 +164,7 @@ async def resolve_create_profile(obj, info, profile): ## Update a profile -```python +```python title:services/api.py @mutation.field("updateProfile") async def update_profiles(obj, info, pid, profile): profile = await profiles.get(pid) @@ -188,7 +179,7 @@ async def update_profiles(obj, info, pid, profile): ## Get a profile by its ID -```python +```python title:services/api.py @query.field("getProfile") async def resolve_get_profile(obj, info, pid): profile = await profiles.get(pid) @@ -197,11 +188,11 @@ async def resolve_get_profile(obj, info, pid): ## GraphQL Handler -We'll define an API to put our handler in. This API will only have one endpoint, which will handle all the requests. +We'll define an API to serve our GraphQL handler. This API will only have one endpoint, which will handle all the requests. First load the schema with our queries and mutations. -```python +```python title:services/api.py from nitric.resources import kv, api from nitric.application import Nitric @@ -213,7 +204,7 @@ Nitric.run() Then add the API handler. -```python +```python title:services/api.py @graph_api.post("/") async def profile_handler(ctx: HttpContext) -> None: query = ctx.req.json @@ -233,18 +224,10 @@ Nitric.run() Now that you have an API defined with a handler for the GraphQL requests, it's time to test it out locally. 
-Start your Nitric server: - ```bash nitric start ``` -Then test out your service with the following command in a new terminal: - -```bash -pipenv run dev -``` - Once it starts, the service will be able to receive requests via the API port. ## GraphQL Queries @@ -356,25 +339,37 @@ curl --location -X POST \ ## Deploy to the cloud -Setup your credentials and any other cloud specific configuration: +At this point, you can deploy what you've built to any of the supported cloud providers. In this example we'll deploy to AWS. Start by setting up your credentials and configuration for the [nitric/aws provider](/providers/pulumi/aws). -- [AWS](/providers/pulumi/aws) -- [Azure](/providers/pulumi/azure) -- [GCP](/providers/pulumi/gcp) +Next, we'll need to create a stack file (deployment target). A stack is a deployed instance of an application. You might want separate stacks for each environment, such as stacks for `dev`, `test`, and `prod`. For now, let's start by creating a file for the `dev` stack. -Create your stack. This is an environment configuration file for the cloud provider for which your project will be deployed. +The `stack new` command below will create a stack named `dev` that uses the `aws` provider. ```bash -nitric stack new +nitric stack new dev aws ``` -You can then deploy using the following command: +Edit the stack file `nitric.dev.yaml` and set your preferred AWS region, for example `us-east-1`. + +```yaml title:nitric.dev.yaml +provider: nitric/aws@latest +region: us-east-1 +``` + + + You are responsible for staying within the limits of the free tier or any + costs associated with deployment. + + +Let's try deploying the stack with the `up` command: ```bash nitric up ``` -To undeploy run the following command: +When the deployment is complete, go to the relevant cloud console and you'll be able to see and interact with your application. 
+ +To tear down your application from the cloud, use the `down` command: ```bash nitric down diff --git a/docs/guides/python/podcast-transcription.mdx b/docs/guides/python/podcast-transcription.mdx index 795889038..3b7cbec4d 100644 --- a/docs/guides/python/podcast-transcription.mdx +++ b/docs/guides/python/podcast-transcription.mdx @@ -9,7 +9,7 @@ featured: image: /docs/images/guides/podcast-transcription/featured.png image_alt: 'Podcast Transcription featured image' published_at: 2024-11-15 -updated_at: 2024-11-15 +updated_at: 2025-01-06 --- # Transcribing Podcasts using OpenAI Whisper @@ -352,7 +352,7 @@ preview: - batch-services ``` -### Testing the project +## Testing the project Before deploying our project, we can test that it works as expected locally. You can do this using `nitric start` or if you'd prefer to run the program in containers use `nitric run`. Either way you can test the transcription by first uploading an audio file to the podcast bucket. @@ -380,7 +380,7 @@ Once that's done, the batch job will be triggered so you can just sit back and w curl -sL http://localhost:4002/transcript/serial ``` -### Requesting a G instance quota increase +## Requesting a G instance quota increase Most AWS accounts **will not** have access to on-demand GPU instances (G Instances), if you'd like to run models using a GPU you'll need to request a quota increase for G instances. @@ -411,9 +411,31 @@ To request a quota increase for G instances in AWS you can follow these steps: Once you've requested the quota increase it may take time for AWS to approve it. -### Deploy the project +## Deploy the project -Once the above is complete, we can deploy the project to the cloud using: +At this point, you can deploy what you've built to any of the supported cloud providers. In this example we'll deploy to AWS. Start by setting up your credentials and configuration for the [nitric/aws provider](/providers/pulumi/aws). 
+ +Next, we'll need to create a stack file (deployment target). A stack is a deployed instance of an application. You might want separate stacks for each environment, such as stacks for `dev`, `test`, and `prod`. For now, let's start by creating a file for the `dev` stack. + +The `stack new` command below will create a stack named `dev` that uses the `aws` provider. + +```bash +nitric stack new dev aws +``` + +Edit the stack file `nitric.dev.yaml` and set your preferred AWS region, for example `us-east-1`. + +```yaml title:nitric.dev.yaml +provider: nitric/aws@latest +region: us-east-1 +``` + + + You are responsible for staying within the limits of the free tier or any + costs associated with deployment. + + +Let's try deploying the stack with the `up` command: ```bash nitric up @@ -426,7 +448,11 @@ nitric up Once the project is deployed you can try out some transcriptions, just add a podcast to the bucket and the bucket notification will be triggered. -You can destroy the project once it is finished using `nitric down`. +To tear down your application from the cloud, use the `down` command: + +```bash +nitric down +``` ## Summary diff --git a/docs/guides/python/scheduled-report.mdx b/docs/guides/python/scheduled-report.mdx index 4085c23b9..a6d21aae5 100644 --- a/docs/guides/python/scheduled-report.mdx +++ b/docs/guides/python/scheduled-report.mdx @@ -8,7 +8,7 @@ tags: languages: - python published_at: 2024-04-16 -updated_at: 2024-10-17 +updated_at: 2025-01-06 --- # Generate a report with Google Sheets and share it with Google Drive @@ -21,7 +21,7 @@ We'll create a scheduled service which will run on a daily basis to create and s ## Prerequisites - The [Nitric CLI](/get-started/installation) -- [Pipenv](https://pypi.org/project/pipenv/) - for simplified dependency management +- [uv](https://docs.astral.sh/uv/#getting-started) - for Python dependency management - A [Google Cloud](https://cloud.google.com) account with Sheets and Drive APIs enabled. 
- Credentials for a Google service account. @@ -30,9 +30,8 @@ We'll create a scheduled service which will run on a daily basis to create and s First, we'll create a new nitric project and install the necessary Python packages. ```bash -nitric new reports py-starter-pipenv -pipenv install google-auth google-api-python-client -pipenv install --dev +nitric new reports py-starter +uv add google-auth google-api-python-client ``` You can now delete all files in the `services/` folder, we'll create new services in this guide. @@ -160,7 +159,7 @@ Nitric.run() We can now set environment variables with the values needed for the scheduled reporting to run. Create a file named `.env` in the root of your project and set the variables below, substituting the correct values for your setup. -```bash +```text title:.env GOOGLE_APPLICATION_CREDENTIALS=/path/to/your/service-account-file.json ADMIN_EMAIL=admin@example.com ``` @@ -179,25 +178,37 @@ nitric start ## Deploy to the cloud -Without creating a separate IaC project, we can immediately deploy our application to the cloud. To do this start by setting up credentials and any configuration for the cloud you prefer: +At this point, you can deploy what you've built to any of the supported cloud providers. In this example we'll deploy to AWS. Start by setting up your credentials and configuration for the [nitric/aws provider](/providers/pulumi/aws). -- [AWS](/providers/pulumi/aws) -- [Azure](/providers/pulumi/azure) -- [GCP](/providers/pulumi/gcp) +Next, we'll need to create a stack file (deployment target). A stack is a deployed instance of an application. You might want separate stacks for each environment, such as stacks for `dev`, `test`, and `prod`. For now, let's start by creating a file for the `dev` stack. -To do this, we'll need to create a `stack`. A stack represents a deployed instance of an application, which is a collection of resources defined in the project. 
+The `stack new` command below will create a stack named `dev` that uses the `aws` provider. ```bash -nitric stack new +nitric stack new dev aws +``` + +Edit the stack file `nitric.dev.yaml` and set your preferred AWS region, for example `us-east-1`. + +```yaml title:nitric.dev.yaml +provider: nitric/aws@latest +region: us-east-1 ``` -Let's try deploying it with the `up` command + + You are responsible for staying within the limits of the free tier or any + costs associated with deployment. + + +Let's try deploying the stack with the `up` command: ```bash nitric up ``` -To tear down the application from the cloud, use the `down` command: +When the deployment is complete, go to the relevant cloud console and you'll be able to see and interact with your application. + +To tear down your application from the cloud, use the `down` command: ```bash nitric down diff --git a/docs/guides/python/serverless-rest-api-example.mdx b/docs/guides/python/serverless-rest-api-example.mdx index 7c07496bc..c01b58350 100644 --- a/docs/guides/python/serverless-rest-api-example.mdx +++ b/docs/guides/python/serverless-rest-api-example.mdx @@ -7,7 +7,7 @@ tags: languages: - python published_at: 2022-09-11 -updated_at: 2024-10-17 +updated_at: 2025-01-06 --- # Building your first API with Nitric @@ -19,10 +19,10 @@ updated_at: 2024-10-17 | **Method** | **Route** | **Description** | | ---------- | -------------- | -------------------------------- | +| `GET` | /profiles/ | Get all profiles | | `GET` | /profiles/[id] | Get a specific profile by its Id | | `POST` | /profiles | Create a new profile | | `DELETE` | /profiles/[id] | Delete a profile | -| `PUT` | /profiles/[id] | Update a profile | 3. Run locally for testing 4. Deploy to a cloud of your choice @@ -45,29 +45,33 @@ updated_at: 2024-10-17 We'll start by creating a new project for our API. ```bash -nitric new my-profile-api py-starter-pipenv +nitric new my-profile-api py-starter ``` Next, open the project in your editor of choice. 
```bash -> cd my-profile-api +cd my-profile-api ``` -Make sure all dependencies are resolved using Pipenv: +Make sure all dependencies are resolved using `uv`: ```bash -pipenv install --dev +uv sync ``` The scaffolded project should have the following structure: ```text ++--.venv/ +--services/ -| +-- hello.py +| +-- api.py ++--.env ++--.gitignore +--nitric.yaml -+--Pipfile -+--Pipfile.lock ++--.python-version ++--pyproject.toml ++--uv.lock +--README.md ``` @@ -77,13 +81,18 @@ Start the Nitric server to emulate cloud services on your machine: nitric start ``` -If everything is working as expected you can now delete all files in the `services/` folder, we'll create new services in this guide. - ## Building the Profile API -Let's start building our profiles API. Create a file named 'profiles.py' in the services directory and add the following: +This example uses UUIDs to create unique IDs to store profiles against. Python's built-in `uuid` module covers this, so there's nothing extra to install. -```python + +Applications built with Nitric can contain many APIs; let's start by adding an API and a key value store to this project to serve as the public endpoint. + +```python title:services/api.py +import json from uuid import uuid4 from nitric.resources import api, kv, bucket @@ -94,7 +103,7 @@ from nitric.context import HttpContext profile_api = api("public") # Access profile key value store with permissions -profiles = kv('profiles').allow('get', 'set') +profiles = kv('profiles').allow('get', 'set', 'delete') Nitric.run() ``` @@ -106,7 +115,10 @@ Here we're creating: From here, let's add some features to that service that allow us to work with profiles. -> _Note:_ You could separate some or all of these request handlers their own services if you prefer. For simplicity we'll group them together in this guide. + + You could separate some or all of these request handlers into their own + services if you prefer. For simplicity we'll group them together in this guide. 
+ ### Create profiles with POST @@ -123,6 +135,21 @@ async def create_profile(ctx: HttpContext) -> None: ctx.res.body = { 'msg': f'Profile with id {pid} created.'} ``` +### Retrieve all profiles with GET + +```python +@profile_api.get("/profiles") +async def get_all_profile(ctx: HttpContext) -> None: + profile_list = [] + + async for id in profiles.keys(): + d = await profiles.get(id) + profile_list.append(d) + + ctx.res.body = json.dumps(profile_list) + ctx.res.headers['Content-Type'] = 'application/json' +``` + ### Retrieve a profile with GET ```python @@ -131,7 +158,7 @@ async def get_profile(ctx: HttpContext) -> None: pid = ctx.req.params['id'] d = await profiles.get(pid) - ctx.res.body = f"{d}" + ctx.res.body = json.dumps(d) ctx.res.headers['Content-Type'] = 'application/json' ``` @@ -186,6 +213,12 @@ curl --location --request POST 'http://localhost:4001/profiles' \ curl --location --request GET 'http://localhost:4001/profiles/[id]' ``` +### Fetch all Profiles + +```bash +curl --location --request GET 'http://localhost:4001/profiles' +``` + ### Delete Profile ```bash @@ -194,38 +227,35 @@ curl --location --request DELETE 'http://localhost:4001/profiles/[id]' ## Deploy to the cloud -At this point, you can deploy what you've built to any of the supported cloud providers. To do this start by setting up your credentials and any configuration for the cloud you prefer: +At this point, you can deploy what you've built to any of the supported cloud providers. In this example we'll deploy to AWS. Start by setting up your credentials and configuration for the [nitric/aws provider](/providers/pulumi/aws). -- [AWS](/providers/pulumi/aws) -- [Azure](/providers/pulumi/azure) -- [GCP](/providers/pulumi/gcp) +Next, we'll need to create a stack file (deployment target). A stack is a deployed instance of an application. You might want separate stacks for each environment, such as stacks for `dev`, `test`, and `prod`. For now, let's start by creating a file for the `dev` stack. 
-Next, we'll need to create a `stack`. A stack represents a deployed instance of an application, which is a collection of resources defined in your project. You might want separate stacks for each environment, such as stacks for `dev`, `test` and `prod`. For now, let's start by creating a `dev` stack. +The `stack new` command below will create a stack named `dev` that uses the `aws` provider. ```bash -nitric stack new +nitric stack new dev aws ``` -``` -? What should we name this stack? dev -? Which provider do you want to deploy with? aws -? Which region should the stack deploy to? us-east-1 -``` +Edit the stack file `nitric.dev.yaml` and set your preferred AWS region, for example `us-east-1`. -### AWS +```yaml title:nitric.dev.yaml +provider: nitric/aws@latest +region: us-east-1 +``` You are responsible for staying within the limits of the free tier or any costs associated with deployment. -We called our stack `dev`, let's try deploying it with the `up` command +Let's try deploying the stack with the `up` command: ```bash nitric up ``` -When the deployment is complete, go to the relevant cloud console and you'll be able to see and interact with your API. +When the deployment is complete, go to the relevant cloud console and you'll be able to see and interact with your application. To tear down your application from the cloud, use the `down` command: @@ -245,8 +275,6 @@ Define a bucket named `profilesImg` with reading/writing permissions photos = bucket("photos").allow('read','write') ``` -> Earlier versions of the Nitric SDK used 'reading', 'writing', etc. permissions. The latest version uses 'read', 'write', etc. 
- Add imports for time and date so that we can set up caching/expiry headers ```python diff --git a/docs/guides/python/text-prediction.mdx b/docs/guides/python/text-prediction.mdx index ccafe1d00..681e0dbea 100644 --- a/docs/guides/python/text-prediction.mdx +++ b/docs/guides/python/text-prediction.mdx @@ -5,7 +5,7 @@ tags: languages: - python published_at: 2022-12-20 -updated_at: 2024-10-17 +updated_at: 2025-01-06 --- # Building serverless text prediction from training to deployment @@ -16,7 +16,7 @@ We'll be making a simple text based prediction model based off of the book Pride ## Prerequisites -- [Pipenv](https://pypi.org/project/pipenv/) - for simplified dependency management +- [uv](https://docs.astral.sh/uv/#getting-started) - for Python dependency management - The [Nitric CLI](/get-started/installation) - _(optional)_ Your choice of an [AWS](https://aws.amazon.com), [GCP](https://cloud.google.com) or [Azure](https://azure.microsoft.com) account @@ -25,7 +25,7 @@ We'll be making a simple text based prediction model based off of the book Pride We'll start by creating a new project for our API. ```bash -nitric new prediction-api py-starter-pipenv +nitric new prediction-api py-starter ``` Next, open the project in your editor of choice. @@ -34,10 +34,10 @@ Next, open the project in your editor of choice. cd prediction-api ``` -Make sure all dependencies are resolved using Pipenv: +Make sure all dependencies are resolved using `uv`: ```bash -pipenv install --dev +uv sync ``` ## Exploring our data @@ -50,7 +50,7 @@ To start we can manually remove the headers and footers. The header starts with We can then either manually remove the section headers, or do it programmatically. -```python +```python title:prediction/preprocess.py def remove_section_headers(lines: list[str]): section = False new_lines = [] @@ -67,7 +67,7 @@ def remove_section_headers(lines: list[str]): Removing the chapters. 
-```python +```python title:prediction/preprocess.py import re def remove_chapters(data: str): @@ -76,7 +76,7 @@ def remove_chapters(data: str): Remove contractions. -```python +```python title:prediction/preprocess.py def remove_contractions(data: str) -> str: return (data. replace("shan't", "shall not"). @@ -91,7 +91,7 @@ def remove_contractions(data: str) -> str: Remove punctuation. -```python +```python title:prediction/preprocess.py import string def remove_punctuation(data: str) -> str: @@ -101,12 +101,12 @@ def remove_punctuation(data: str) -> str: Convert to numbers using num2words. This will mean we have to install it. ```bash -pipenv install num2words +uv add num2words ``` We can then write our convert numbers function. -```python +```python title:prediction/preprocess.py from num2words import num2words def convert_numbers(data: str) -> str: @@ -121,7 +121,7 @@ def convert_numbers(data: str) -> str: Putting it all together we can get our cleaned data. -```python +```python title:prediction/preprocess.py # Open text data and read it into array file = open("data.txt", "r") lines = [] @@ -144,12 +144,12 @@ with open('clean_data.txt', 'w') as f: Before we are done, we will want to tokenize the data so that it can be processed by the model. After it's fit to the text, we will save it so we can use it later. To tokenize the data, we will use keras' pre-processing module. For this we require the keras module. ```bash -pipenv install keras +uv add keras ``` We can then create and fit the tokenizer to the text. We will initialize the Out of Vocabulary (OOV) token as ``. -```python +```python title:prediction/preprocess.py import pickle from keras.preprocessing.text import Tokenizer @@ -168,7 +168,7 @@ To train the model, we will be using a Bi-Directional Long-Short Term Memory Rec Start by loading the tokenizer from the pre-processing stage. 
-```python +```python title:prediction/training.py import pickle with open('tokenizer.pickle', 'rb') as handle: @@ -178,12 +178,12 @@ with open('tokenizer.pickle', 'rb') as handle: We can then create all the input sequences to train our model. This works by getting every 6 word combination in the text. First add numpy as a dependency. ```bash -pipenv install numpy +uv add numpy ``` Then we'll write the function to create the input sequences from the data. -```python +```python title:prediction/training.py import numpy as np from keras.utils import pad_sequences @@ -204,7 +204,7 @@ def create_input_sequences(data: list[str], n_gram_size=6): We'll then split the input sequences into labels, training, and testing data. -```python +```python title:prediction/training.py from keras.utils import to_categorical, pad_sequences from sklearn.model_selection import train_test_split @@ -220,7 +220,7 @@ def create_training_data(input_sequences): The next part is fitting, compiling, and training the model. We will use the X training data and y training data, as well as the sizes of our data. We are using an ADAM optimizer, a reduce learning rate on plateau callback, and a save model on checkpoint callback. -```python +```python title:prediction/training.py # Create callbacks checkpoint = ModelCheckpoint("model.h5", monitor='loss', verbose=1, save_best_only=True, mode='auto') @@ -232,7 +232,7 @@ optimizer = Adam(learning_rate=0.01) Then we will add layers to the sequential model. -```python +```python title:prediction/training.py # Create model model = Sequential() model.add(Embedding(total_words, 100, input_length=max_sequence_len-1)) @@ -244,7 +244,7 @@ model.summary() Putting it all together and compiling the model using the training data. 
-```python +```python title:prediction/training.py from keras.models import Sequential from keras.layers import LSTM, Dense, Embedding, Bidirectional from keras.optimizers import Adam @@ -281,7 +281,7 @@ def train_model(X_train, y_train, total_words, max_sequence_len): With all the functions defined, we can train our model with the cleaned data. -```python +```python title:prediction/training.py data = open('clean_data.txt', 'r').read().split(' ') total_words = len(tokenizer.word_index) + 1 @@ -295,9 +295,9 @@ The model checkpoint save callback will save the model as `model.h5`. We will th ## Predicting text -Starting with the `hello.py` file, we will first load the model and tokenizer. This is done with dynamic imports so that it will reduce the cold start time when its deployed. +Starting with the `api.py` file, we will first load the model and tokenizer. This is done with dynamic imports so that it will reduce the cold start time when it's deployed. -```python +```python title:services/api.py import pickle import importlib @@ -323,7 +323,7 @@ def load_model(): Once the model is loaded, we can write a function to predict the next 3 most likely words. This uses the tokenizer to create the same token list that was used to train the model. We can then get a prediction of all the most likely words, which we will reduce down to 3. We'll then get the actual word from the map of tokens by finding the word in the dictionary. The tokenizer word index is in the form `{ "word": token_num }`, e.g. `{ "the": 1, "and": 2 }`. The predictions we receive will be an array of the token numbers. -```python +```python title:services/api.py # Predict text based on a set of seed text # Returns a list of 3 top choices for the next word def predict_text(seed_text: str) -> list[str]: @@ -361,7 +361,7 @@ def predict_text(seed_text: str) -> list[str]: Using the predictive text function, we can create our API. First we will make sure that the necessary modules are imported. 
-```python +```python title:services/api.py from nitric.resources import api from nitric.application import Nitric from nitric.context import HttpContext @@ -369,7 +369,7 @@ from nitric.context import HttpContext We will then define the API and our first route. -```python +```python title:services/api.py mainApi = api("main") @mainApi.get("/prediction") @@ -381,7 +381,7 @@ Nitric.run() Within this function block we want to define the code that is run on a request. We will accept the prompt to predict from via the query parameters. This will mean requests are in the form: `/prediction?prompt=where should I`. -```python +```python title:services/api.py @mainApi.get("/prediction") async def create_prediction(ctx: HttpContext) -> None: prompt = ctx.req.query.get("prompt") @@ -396,7 +396,7 @@ Nitric.run() With the user's prompt we can then run the prediction and return the result to the user. -```python +```python title:services/api.py @mainApi.get("/prediction") async def create_prediction(ctx: HttpContext) -> None: ... @@ -426,37 +426,40 @@ What should I ['have', 'think', 'say'] ## Deploy to the cloud -Setup your credentials and any other cloud specific configuration: +At this point, you can deploy what you've built to any of the supported cloud providers. In this example we'll deploy to AWS. Start by setting up your credentials and configuration for the [nitric/aws provider](/providers/pulumi/aws). -- [AWS](/providers/pulumi/aws) -- [Azure](/providers/pulumi/azure) -- [GCP](/providers/pulumi/gcp) +Next, we'll need to create a stack file (deployment target). A stack is a deployed instance of an application. You might want separate stacks for each environment, such as stacks for `dev`, `test`, and `prod`. For now, let's start by creating a file for the `dev` stack. -Create your stack. This is an environment configuration file for the cloud provider for which your project will be deployed. 
+The `stack new` command below will create a stack named `dev` that uses the `aws` provider. ```bash -nitric stack new +nitric stack new dev aws ``` -This project will run perfectly fine with a default memory configuration of 512 MB. However, to get instant predictions we will amend the memory to be 1 GB. In the newly created stack file we want to add some config. +This project will run perfectly fine with a default memory configuration of 512 MB. However, to get instant predictions we will increase the memory to 1 GB. Edit the stack file `nitric.dev.yaml` and set your preferred AWS region and memory configuration. -```yaml -name: dev -provider: gcp -region: us-west2 -project: gcp-project-123456 +```yaml title:nitric.dev.yaml +provider: nitric/aws@latest +region: us-east-1 config: default: + lambda: memory: 1024 ``` -You can then deploy using the following command: + + You are responsible for staying within the limits of the free tier or any + costs associated with deployment. + + +Let's try deploying the stack with the `up` command: ```bash nitric up ``` -To undeploy run the following command: +When the deployment is complete, go to the relevant cloud console and you'll be able to see and interact with your application. + +To tear down your application from the cloud, use the `down` command: ```bash nitric down diff --git a/docs/index.mdx b/docs/index.mdx index d82660822..0ce770c9b 100644 --- a/docs/index.mdx +++ b/docs/index.mdx @@ -249,7 +249,7 @@ We have several providers built-in with IaC from [Pulumi](https://www.pulumi.com Projects built with Nitric don't have many restrictions. You can use most languages, libraries, tools, clouds, services, mostly anything you like. But, you need to have a `nitric.yaml` file in the root of your project.
-```yaml {{ label:"nitric.yaml" }} +```yaml title:nitric.yaml name: example services: - match: services/*.ts diff --git a/docs/providers/custom/create.mdx b/docs/providers/custom/create.mdx index 5ca3ba38e..84b70e409 100644 --- a/docs/providers/custom/create.mdx +++ b/docs/providers/custom/create.mdx @@ -120,7 +120,7 @@ type NitricPulumiProvider interface { Before you override any of the methods, you'll need to create a provider interface. -```go {{ label: "deploy/deploy.go" }} +```go title:deploy/deploy.go import ( "github.com/nitrictech/nitric/cloud/common/deploy" "github.com/nitrictech/nitric/cloud/common/deploy/provider" @@ -148,7 +148,7 @@ func NewNitricCustomPulumiProvider() *NitricCustomPulumiProvider { The `Init` method is used to initialize the provider with the required attributes. This method is called before any of the resource creation and is not a part of the Pulumi context. This is where you will validate stack file attributes and add them into the provider. Below is some boilerplate code for the `Init` method and a helper method for converting stack attributes to a configuration object. -```go {{ label: "deploy/deploy.go" }} +```go title:deploy/deploy.go import ( ... @@ -179,7 +179,7 @@ var err error ``` -```go {{ label: "deploy/config.go" }} +```go title:deploy/config.go package deploy import "github.com/mitchellh/mapstructure" @@ -205,7 +205,7 @@ func ConfigFromAttributes(attributes map[string]interface{}) (*CustomConfig, err The `Pre` method is called before any resources are created, but after the pulumi context has been established. This is where you create global pulumi resources that must exist before all other resources. This is where a unique stack id can be created, as well as global resources like resource groups or service accounts. -```go {{ label: "deploy/deploy.go"}} +```go title:deploy/deploy.go import ( ...
"github.com/nitrictech/nitric/cloud/common/deploy/pulumix" @@ -247,7 +247,7 @@ func (a *NitricCustomPulumiProvider) Pre(ctx *pulumi.Context, resources []*pulum The `Config` method is where you can create the Pulumi ConfigMap with provider specific information. Below the pulumi config map is used to set the Pulumi Docker version. For the AWS provider it is used to set the region. The use case will be highly dependent on what pulumi provider you are using. -```go {{ label: "deploy/deploy.go" }} +```go title:deploy/deploy.go import ( ... "github.com/pulumi/pulumi/sdk/v3/go/auto" @@ -311,7 +311,7 @@ func (*NitricDefaultOrder) Order(resources []*deploymentspb.Resource) []*deploym The `Post` method is called after all resources have been created, but before the pulumi context has been concluded. This is where you can put cleanup if required. -```go {{ label: "deploy/deploy.go" }} +```go title:deploy/deploy.go func (a *NitricCustomPulumiProvider) Post(ctx *pulumi.Context) error { return nil } @@ -321,7 +321,7 @@ func (a *NitricCustomPulumiProvider) Post(ctx *pulumi.Context) error { The `Result` method is the last to be called. This is where you can get any output information from the resources (like generated API endpoints) and return them as `stdout`. -```go {{ label: "deploy/deploy.go" }} +```go title:deploy/deploy.go func (a *NitricCustomPulumiProvider) Result(ctx *pulumi.Context) (pulumi.StringOutput, error) { outputs := []interface{}{} @@ -356,7 +356,7 @@ Each of the resource methods are where you can implement the custom deployment c Below is the stub for an unimplemented bucket resource. 
-```go {{ label: "deploy/bucket.go" }} +```go title:deploy/bucket.go import ( deploymentspb "github.com/nitrictech/nitric/core/pkg/proto/deployments/v1" "github.com/pulumi/pulumi/sdk/v3/go/pulumi" @@ -369,7 +369,7 @@ func (n *NitricCustomPulumiProvider) Bucket(ctx *pulumi.Context, parent pulumi.R For example, if you wanted your implementation to use a bucket with the Digital Ocean Pulumi provider, it would look like this: -```go {{ label: "deploy/bucket.go" }} +```go title:deploy/bucket.go import ( deploymentspb "github.com/nitrictech/nitric/core/pkg/proto/deployments/v1" "github.com/pulumi/pulumi-digitalocean/sdk/v4/go/digitalocean" @@ -395,7 +395,7 @@ func (a *NitricDOPulumiProvider) Bucket(ctx *pulumi.Context, parent pulumi.Resou Below is how we would change the provider interface to add references to the Bucket. -```go {{ label: "deploy/deploy.go" }} +```go title:deploy/deploy.go import ( "github.com/nitrictech/nitric/cloud/common/deploy" "github.com/nitrictech/nitric/cloud/common/deploy/provider" @@ -427,7 +427,7 @@ The most complicated resource to create is the `Service` resource. This is becau [here](https://github.com/nitrictech/nitric/blob/main/cloud/aws/deploy/service.go). -```go {{ label: "deploy/service.go" }} +```go title:deploy/service.go package deploy import ( @@ -543,7 +543,7 @@ If you have been using the [custom provider skeleton](https://github.com/nitrict The implementation details for each of the runtime resources will be highly dependent on what provider you are using. For example, if you are using S3 as your Bucket implementation, your runtime provider will use an S3 client. An example of an S3 implementation is shown here: -```go {{ label: "runtime/storage/storage.go"}} +```go title:runtime/storage/storage.go package storage import ( @@ -631,7 +631,7 @@ The first step is creating the runtime entrypoint file. This sets up which plugi The Nitric runtime server is referred to as the `membrane`.
-```go {{ label: "cmd/runtime/main.go"}} +```go title:cmd/runtime/main.go package main import ( @@ -701,7 +701,7 @@ func main() { We'll then create the deployment provider which will embed the runtime provider. We create the custom provider and use `provider.NewPulumiProviderServer` to wrap the provider so it can be used as a deployment gRPC server. `providerStack.Start()` starts the gRPC server. -```go {{ label: "cmd/deploy/main.go"}} +```go title:cmd/deploy/main.go package main import ( @@ -732,7 +732,7 @@ func main() { We then need a way to use our provider with Nitric projects. The following `makefile` has default scripts to build our runtime binary and our deployment binary, as well as a script `make install` to put them in the provider directory. The `go build` output files match the binaries we were embedding into our `cmd` files earlier. -```makefile {{ label: "makefile" }} +```makefile title:makefile binaries: deploybin # build runtime binary @@ -771,7 +771,7 @@ This will build the runtime provider and the deployment provider, packaging them To use the custom provider you can use the following stack configuration file. If you added any additional attribute config, this is where it will go. -```yaml {{ label: "nitric.xxxx.yaml"}} +```yaml title:nitric.xxxx.yaml provider: custom/extension@0.0.1 region: us-east-1 ``` diff --git a/docs/providers/custom/extend.mdx b/docs/providers/custom/extend.mdx index 1bb116004..ae66f71ab 100644 --- a/docs/providers/custom/extend.mdx +++ b/docs/providers/custom/extend.mdx @@ -105,7 +105,7 @@ We will then scaffold our project structure. This isn't necessary if you are bui We can start by creating the deployment interface. This will be an interface that embeds the Nitric AWS provider, that way we only have to build the services that we want to replace.
-```go {{ label: "deploy/deploy.go" }} +```go title:deploy/deploy.go package deploy import ( @@ -139,7 +139,7 @@ Now we can create an extension configuration to allow adding digital ocean confi Start by defining the type of configuration we want. To deploy to digital ocean we require setting a Digital Ocean token as well as a spaces key, secret, and region. -```go {{ label: "deploy/config.go" }} +```go title:deploy/config.go package deploy import ( @@ -162,7 +162,7 @@ type SpacesConfig struct { The attributes from the stack file are sent to the provider as a `map[string]interface{}`. We'll write a helper function to convert and validate the `config`. -```go {{ label: "deploy/config.go" }} +```go title:deploy/config.go func ConfigFromAttributes(attributes map[string]interface{}) (*ExtensionConfig, error) { // Get our extension config extensionConfig := &ExtensionConfig{} @@ -190,7 +190,7 @@ func ConfigFromAttributes(attributes map[string]interface{}) (*ExtensionConfig, Now that we have the helper, we can override the AWS configuration `Init` function to take config from our `ConfigFromAttributes` function. The `Init` function is what is run before any pulumi resource creation to populate a provider with the required attributes. -```go {{ label: "deploy/deploy.go"}} +```go title:deploy/deploy.go import ( ... "google.golang.org/grpc/codes" @@ -222,7 +222,7 @@ var err error We can then replace the `Config` function to add our digital ocean token and spaces access key. The `Config` function is used to provide pulumi with configuration variables. These are standardized based on the provider. You can find the Digital Ocean Pulumi config [here](https://www.pulumi.com/registry/packages/digitalocean/installation-configuration/#configuring-credentials). -```go {{ label: "deploy/deploy.go" }} +```go title:deploy/deploy.go import ( ...
common "github.com/nitrictech/nitric/cloud/common/deploy" @@ -253,7 +253,7 @@ Now that we have our provider initialized with all the configuration we require, The service resource will need to be changed slightly to add our spaces credentials as environment variables. This will allow our Lambdas to interact with our Spaces bucket at runtime. -```go {{ label: "deploy/service.go"}} +```go title:deploy/service.go package deploy import ( @@ -280,7 +280,7 @@ As we aren't deploying S3 buckets to AWS, the base provider's policy (which incl methods. You can get this dependency with `go get "github.com/samber/lo"` -```go {{ label: "deploy/policy.go" }} +```go title:deploy/policy.go package deploy import ( @@ -320,7 +320,7 @@ func (a *AwsExtensionProvider) Policy(ctx *pulumi.Context, parent pulumi.Resourc With that done, we can create our Bucket. This is done using the [pulumi SDK](https://www.pulumi.com/registry/packages/digitalocean/api-docs/spacesbucket/). -```go {{ label: "deploy/bucket.go" }} +```go title:deploy/bucket.go import ( deploymentspb "github.com/nitrictech/nitric/core/pkg/proto/deployments/v1" "github.com/pulumi/pulumi-digitalocean/sdk/v4/go/digitalocean" @@ -349,7 +349,7 @@ With the deployment code created, we can now do the runtime implementation for t Spaces is compatible with the S3 APIs, so we can use the AWS S3 implementation and just change out a few AWS specific features. We'll start by creating the interface. -```go {{ label: "runtime/storage/spaces.go" }} +```go title:runtime/storage/spaces.go import ( "github.com/nitrictech/nitric/cloud/aws/ifaces/s3iface" "github.com/nitrictech/nitric/cloud/aws/runtime/env" @@ -369,7 +369,7 @@ var _ storagepb.StorageServer = (*SpacesStorageService)(nil) We can then write a function to create a new `SpacesStorageService` with the S3 client authenticated using our digital ocean credentials. We have access to these environment variables because of the augmentation we did to the `Services` deployment code.
-```go {{ label: "runtime/storage/spaces.go" }} +```go title:runtime/storage/spaces.go import ( ... "github.com/aws/aws-sdk-go-v2/aws" @@ -403,7 +403,7 @@ func New(provider resource.AwsResourceProvider) (*SpacesStorageService, error) { A complete Nitric bucket implementation has the following functions that need to be implemented. -```go {{ label: "runtime/storage/spaces.go" }} +```go title:runtime/storage/spaces.go import ( ... storagepb "github.com/nitrictech/nitric/core/pkg/proto/storage/v1" @@ -421,7 +421,7 @@ func (s *SpacesStorageService) PreSignUrl(ctx context.Context, req *storagepb.St If you don't want to implement all the functions you can return an unimplemented exception. -```go {{ label: "runtime/storage/spaces.go" }} +```go title:runtime/storage/spaces.go import ( "google.golang.org/grpc/status" "google.golang.org/grpc/codes" @@ -435,7 +435,7 @@ func (s *SpacesStorageService) PreSignUrl(ctx context.Context, req *storagepb.St However, for this guide we will be implementing every feature. Let's start with the `Read` function. `Read` will get the contents of a blob from the bucket. -```go {{ label: "runtime/storage/spaces.go" }} +```go title:runtime/storage/spaces.go import ( ... @@ -472,7 +472,7 @@ func (s *SpacesStorageService) Read(ctx context.Context, req *storagepb.StorageR `Write` creates or updates a blob in a bucket. -```go {{ label: "runtime/storage/spaces.go" }} +```go title:runtime/storage/spaces.go func (s *SpacesStorageService) Write(ctx context.Context, req *storagepb.StorageWriteRequest) (*storagepb.StorageWriteResponse, error) { bucketName := &req.BucketName @@ -493,7 +493,7 @@ func (s *SpacesStorageService) Write(ctx context.Context, req *storagepb.Storage `Delete` deletes a blob from a bucket. 
-```go {{ label: "runtime/storage/spaces.go" }} +```go title:runtime/storage/spaces.go func (s *SpacesStorageService) Delete(ctx context.Context, req *storagepb.StorageDeleteRequest) (*storagepb.StorageDeleteResponse, error) { bucketName := &req.BucketName @@ -510,7 +510,7 @@ func (s *SpacesStorageService) Delete(ctx context.Context, req *storagepb.Storag `Exists` checks the existence of a single blob, returning true if it exists, false if it does not. -```go {{ label: "runtime/storage/spaces.go" }} +```go title:runtime/storage/spaces.go func (s *SpacesStorageService) Exists(ctx context.Context, req *storagepb.StorageExistsRequest) (*storagepb.StorageExistsResponse, error) { bucketName := &req.BucketName @@ -532,7 +532,7 @@ func (s *SpacesStorageService) Exists(ctx context.Context, req *storagepb.Storag `ListBlobs` will list all the blobs in a bucket. -```go {{ label: "runtime/storage/spaces.go" }} +```go title:runtime/storage/spaces.go func (s *SpacesStorageService) ListBlobs(ctx context.Context, req *storagepb.StorageListBlobsRequest) (*storagepb.StorageListBlobsResponse, error) { var prefix *string = nil if req.Prefix != "" { @@ -565,7 +565,7 @@ func (s *SpacesStorageService) ListBlobs(ctx context.Context, req *storagepb.Sto `PreSignUrl` generates a signed URL which can be used to perform direct operations on a file. It is useful for large file uploads/downloads so they can bypass application code and work directly with S3. A pre-signed url request can either be for a download URL or an upload URL. An expiry time can also be specified. -```go {{ label: "runtime/storage/spaces.go" }} +```go title:runtime/storage/spaces.go func (s *SpacesStorageService) PreSignUrl(ctx context.Context, req *storagepb.StoragePreSignUrlRequest) (*storagepb.StoragePreSignUrlResponse, error) { bucketName := &req.BucketName @@ -610,7 +610,7 @@ The first step is creating the runtime entrypoint file. This sets up which plugi The Nitric runtime server is referred to as the `membrane`. 
-```go {{ label:"cmd/runtime/main.go"}} +```go title:cmd/runtime/main.go package main import ( @@ -700,7 +700,7 @@ func main() { We'll then create the deployment provider which will embed the runtime provider. We create the AWS extension provider and use `provider.NewPulumiProviderServer` to wrap the provider so it can be used as a deployment gRPC server. `providerStack.Start()` starts the gRPC server. -```go {{ label:"cmd/deploy/main.go"}} +```go title:cmd/deploy/main.go package main import ( @@ -738,7 +738,7 @@ func main() { We then need a way to use our provider with Nitric projects. The following `makefile` has default scripts to build our runtime binary and our deployment binary, as well as a script `make install` to put them in the provider directory. The `go build` output files match the binaries we were embedding into our `cmd` files earlier. -```makefile {{ label: "makefile" }} +```makefile title:makefile .PHONY: install # build runtime binary @@ -780,7 +780,7 @@ To use the custom extension you can use the following stack configuration file. - [spaces_key](https://cloud.digitalocean.com/account/api/spaces) - [spaces_secret](https://cloud.digitalocean.com/account/api/spaces) -```yaml {{ label: "nitric.xxxx.yaml"}} +```yaml title:nitric.xxxx.yaml provider: custom/extension@0.0.1 region: us-east-1 token: `digital_ocean_token`