This repository has been archived by the owner on Aug 8, 2023. It is now read-only.

Commit

Create/Delete remote RHOSAK cluster
Fixes #1

Signed-off-by: Fred Bricon <fbricon@gmail.com>
fbricon committed Aug 23, 2021
1 parent bb5b7a9 commit d730300
Showing 18 changed files with 659 additions and 23 deletions.
1 change: 1 addition & 0 deletions .vscodeignore
@@ -10,5 +10,6 @@ vsc-extension-quickstart.md
**/*.map
**/*.ts
*.vsix
doc/**
webpack.config.js
node_modules
5 changes: 5 additions & 0 deletions CHANGELOG.md
@@ -2,6 +2,11 @@

All notable changes to the `Red Hat OpenShift Application Services` extension will be documented in this file.

## 0.0.3 (23/08/2021)

- Create and delete remote RHOSAK clusters [#3](https://github.com/redhat-developer/vscode-rhoas/pull/3).
- Update logo

## 0.0.2 (07/07/2021)

- Use `@rhoas/kafka-management-sdk` to interact with RHOSAK clusters [#4](https://github.com/redhat-developer/vscode-rhoas/pull/4).
9 changes: 7 additions & 2 deletions USAGE_DATA.md
@@ -4,11 +4,16 @@

## What's included in the `Red Hat OpenShift Application Services` telemetry data

* A telemetry event is sent every time new Kafka clusters have been added
* A telemetry event is sent every time new Kafka clusters have been added to the Kafka explorer.
- includes the number of added clusters
- the error message if there was an error
* A telemetry event is sent every time the "Red Hat OpenShift Streams for Apache Kafka" dashboard is opened, whether from the command palette or by clicking a button
- includes the reason why it was opened
* A telemetry event is sent every time an error occurs while fetching Kafka clusters.
* A telemetry event is sent every time a new "Red Hat OpenShift Streams for Apache Kafka" cluster is created. Data includes:
- the cloud provider
- the cloud region
- whether the cluster is multizone
- the error message if there was an error
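
For illustration, here is a minimal sketch of how such a creation event might be assembled and sent with the `TelemetryService` API from `@redhat-developer/vscode-redhat-telemetry` (a dependency the extension already uses); the event name and property keys are illustrative rather than the exact ones the extension emits:

```typescript
import { TelemetryEvent, TelemetryService } from '@redhat-developer/vscode-redhat-telemetry/lib';

// Illustrative only: the real event name and property keys are defined in the
// extension's command handlers, not in this sketch.
export function reportClusterCreation(
    telemetryService: TelemetryService,
    provider: string,
    region: string,
    multiAz: boolean,
    error?: Error
): Promise<void> {
    const event: TelemetryEvent = {
        name: 'rhoas.create.rhosak.cluster',
        properties: {
            provider,               // cloud provider
            region,                 // cloud region
            multi_az: `${multiAz}`  // multizone or not
        }
    };
    if (error) {
        // The error message is only attached when creation failed.
        event.properties.error = error.message;
    }
    return telemetryService.send(event);
}
```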

## What's included in the general telemetry data

Binary file added doc/images/create-rhosak-command.png
Binary file added doc/images/create-rhosak.gif
Binary file added doc/images/delete-rhosak.png
Binary file added doc/images/import-rhosak.gif
Binary file modified doc/images/new-rhosak-cluster.png
Binary file modified doc/images/no-cluster.png
Binary file added doc/images/rhosak-menus.png
41 changes: 35 additions & 6 deletions doc/kafkaSupport.md
@@ -9,7 +9,7 @@ You can either:
<img title="Discover Kafka Providers" src="images/discover-cluster-providers.png" width="650" />


### Adding a cluster
### Adding an existing cluster
Once `Red Hat OpenShift Application Services` is installed, open the Kafka view. Clicking on the `Add new cluster` button (or `+`) in the Kafka Explorer will bring up the options to create new clusters. Select `Red Hat OpenShift Streams For Apache Kafka`:

![](images/new-rhosak-cluster.png)
@@ -24,7 +24,7 @@ This will open a browser page to sign into https://sso.redhat.com. Once you sign

This first sign-in operation will allow the discovery of your existing `Red Hat OpenShift Streams For Apache Kafka` clusters.

If no clusters have been created yet, a message will provide you with a link to open the [dashboard](https://cloud.redhat.com/beta/application-services/streams/kafkas):
If no clusters have been created yet, a pop-up will let you [create a new remote cluster](#create-a-new-remote-cluster) or open the [dashboard](https://cloud.redhat.com/beta/application-services/streams/kafkas):

![](images/no-cluster.png)

@@ -34,22 +34,51 @@ If you already created your Apache Kafka cluster, the extension will require to

That second sign-in is transparent, though: you won't need to log in manually.


![Signed into Red Hat](images/signedin2.png)

Finally, your cluster(s) will automatically be added to the Kafka Explorer.

![](images/cluster-added.png)

Here's an example of the workflow when the user is already logged in to their Red Hat account:
![](images/import-rhosak.gif)

Please read the [`Tools for Apache Kafka` documentation](https://github.com/jlandersen/vscode-kafka/blob/master/docs/README.md) to learn how to manage your Kafka clusters and topics, and how to produce or consume messages.

### Opening the Red Hat OpenShift Streams For Apache Kafka dashboard
### Create a new remote cluster

You can create a new remote cluster by clicking the `Create new remote cluster` button shown after the extension attempts to discover existing clusters. You will then need to select:
- a cluster name,
- the cloud provider (only `Amazon Web Services` is available at the moment),
- the cloud region (only `us-east-1` is available at the moment),
- multizone (`true` is the only choice at the moment).

Creating the cluster usually takes 3 to 4 minutes.
A progress dialog will be displayed while the cluster is being created.
In the meantime, the web dashboard will be opened to show you the status of the cluster.
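
For the curious, here is a minimal sketch of the kind of prompts the wizard runs through, built on the standard VS Code `window.showInputBox`, `window.showQuickPick` and `window.withProgress` APIs. The actual implementation lives in `src/wizard.ts` (not rendered in this diff); `provisionCluster` below is a hypothetical placeholder for the call made through `@rhoas/kafka-management-sdk`:

```typescript
import { ProgressLocation, window } from 'vscode';

// Hypothetical placeholder for the call made through @rhoas/kafka-management-sdk.
async function provisionCluster(name: string, provider: string, region: string, multiAz: boolean): Promise<void> {
    // ... request the cluster from the RHOSAK management API ...
}

export async function promptForNewCluster(): Promise<void> {
    const name = await window.showInputBox({ prompt: 'Cluster name' });
    if (!name) {
        return;
    }
    // Only one choice is currently offered for each of the remaining settings.
    const provider = await window.showQuickPick(['Amazon Web Services'], { placeHolder: 'Cloud provider' });
    const region = await window.showQuickPick(['us-east-1'], { placeHolder: 'Cloud region' });
    const multiZone = await window.showQuickPick(['true'], { placeHolder: 'Multizone?' });
    if (!provider || !region || !multiZone) {
        return;
    }
    // Creation takes a few minutes, so surface progress to the user.
    await window.withProgress(
        { location: ProgressLocation.Notification, title: `Creating cluster '${name}'...` },
        () => provisionCluster(name, provider, region, multiZone === 'true')
    );
}
```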

Here's an example of the workflow:

![](images/create-rhosak.gif)

Alternatively, you can create a remote cluster from the `Red Hat: Create a Red Hat OpenShift Streams For Apache Kafka cluster` command, available from the command palette (F1):

`Red Hat OpenShift Streams For Apache Kafka` clusters display a unique menu when you right-click on them, to open their online dashboard page:
![](images/open-dashboard-command.png)
![](images/create-rhosak-command.png)




### Red Hat OpenShift Streams For Apache Kafka specific menus

`Red Hat OpenShift Streams For Apache Kafka` clusters display specific menus when you right-click on them:
![](images/rhosak-menus.png)

You can either open the Dashboard:
![](images/cluster-dashboard.png)

Or delete the remote cluster:
![](images/delete-rhosak.png)
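
Both menu entries map to regular VS Code commands contributed in `package.json` (`rhoas.open.RHOSAKDashboard` and `rhoas.delete.RHOSAKCluster`), so they can also be triggered programmatically. A minimal sketch for the delete case, assuming the cluster `id` and `name` were obtained elsewhere (the command accepts any object exposing those fields):

```typescript
import { commands } from 'vscode';

// Minimal sketch: invoke the contributed delete command outside the context menu.
// The id and name are placeholders for values obtained from your own code.
export async function deleteRhosakCluster(id: string, name: string): Promise<void> {
    await commands.executeCommand('rhoas.delete.RHOSAKCluster', { id, name });
}
```

Note that the command still asks for confirmation before deleting anything.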

### About ephemeral clusters

Please be aware that ephemeral [`Red Hat OpenShift Streams For Apache Kafka`](https://cloud.redhat.com/beta/application-services/streams/kafkas) clusters are not automatically purged from the Kafka settings after they have been deprovisioned. You will need to [delete](https://github.com/jlandersen/vscode-kafka/blob/master/docs/Explorer.md#delete) them manually from the Kafka Explorer.
12 changes: 6 additions & 6 deletions package-lock.json

Some generated files are not rendered by default.

21 changes: 19 additions & 2 deletions package.json
@@ -36,6 +36,16 @@
"command": "rhoas.open.RHOSAKDashboard",
"category": "Red Hat",
"title": "Open Red Hat OpenShift Streams for Apache Kafka Dashboard"
},
{
"command": "rhoas.create.RHOSAKCluster",
"category": "Red Hat",
"title": "Create a Red Hat OpenShift Streams for Apache Kafka cluster"
},
{
"command": "rhoas.delete.RHOSAKCluster",
"category": "Red Hat",
"title": "Delete remote cluster"
}
],
"menus": {
@@ -44,6 +54,11 @@
"command": "rhoas.open.RHOSAKDashboard",
"when": "view == kafkaExplorer && viewItem =~ /^cluster-rhosak$|^selectedCluster-rhosak$/ && !listMultiSelection",
"group": "0_rhosak"
},
{
"command": "rhoas.delete.RHOSAKCluster",
"when": "view == kafkaExplorer && viewItem =~ /^cluster-rhosak$|^selectedCluster-rhosak$/ && !listMultiSelection",
"group": "0_rhosak"
}
]
}
@@ -54,7 +69,9 @@
],
"main": "./dist/extension.js",
"activationEvents": [
"onCommand:rhoas.open.RHOSAKDashboard"
"onCommand:rhoas.open.RHOSAKDashboard",
"onCommand:rhoas.create.RHOSAKCluster",
"onCommand:rhoas.delete.RHOSAKCluster"
],
"scripts": {
"vscode:prepublish": "npm run package",
@@ -85,6 +102,6 @@
},
"dependencies": {
"@redhat-developer/vscode-redhat-telemetry": "0.2.0",
"@rhoas/kafka-management-sdk": "0.9.0"
"@rhoas/kafka-management-sdk": "0.12.2"
}
}
63 changes: 61 additions & 2 deletions src/commands.ts
@@ -1,7 +1,11 @@
import { TelemetryService } from '@redhat-developer/vscode-redhat-telemetry/lib';
import { commands, ExtensionContext, Uri } from 'vscode';
import { TelemetryEvent, TelemetryService } from '@redhat-developer/vscode-redhat-telemetry/lib';
import { authentication, commands, ExtensionContext, Uri, window } from 'vscode';
import { rhosakService } from './rhosakService';
import { createRemoteCluster } from './wizard';

export const OPEN_RHOSAK_DASHBOARD_CMD = 'rhoas.open.RHOSAKDashboard';
export const DELETE_RHOSAK_CLUSTER_CMD = 'rhoas.delete.RHOSAKCluster';
export const CREATE_RHOSAK_CLUSTER_CMD = 'rhoas.create.RHOSAKCluster';
const LANDING_PAGE = 'https://cloud.redhat.com/beta/application-services/streams';

export function registerCommands (context: ExtensionContext, telemetryService:TelemetryService ) {
@@ -17,6 +21,61 @@ export function registerCommands (context: ExtensionContext, telemetryService:Te
openRHOSAKDashboard(telemetryService, reason, clusterId);
})
);
context.subscriptions.push(
commands.registerCommand(DELETE_RHOSAK_CLUSTER_CMD, async (clusterItem?: any) => {
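// The command may be invoked from the Kafka Explorer (a tree item wrapping a cluster)
// or with a raw cluster object, so resolve the cluster id and name from either shape.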
let clusterId:string|undefined;
if (clusterItem?.cluster?.id) {
clusterId = clusterItem.cluster.id;
} else if (clusterItem?.id) {
clusterId = clusterItem.id;
}
if (!clusterId) {
return;
}
let name: string|undefined;
if (clusterItem?.cluster?.name) {
name = clusterItem.cluster.name;
} else if (clusterItem?.name) {
name = clusterItem.name;
}
const deleteConfirmation = await window.showWarningMessage(`Are you sure you want to physically delete remote cluster '${name}'?`, 'Cancel', 'Delete');
if (deleteConfirmation !== 'Delete') {
return;
}

const session = await authentication.getSession('redhat-account-auth', ['openid'], { createIfNone: true });
if (session) {
let event = {
name: "rhoas.delete.rhosak.cluster",
properties: {}
} as TelemetryEvent;
try {
await rhosakService.deleteKafka(clusterId!, session.accessToken);
} catch (error) {
event.properties.error = error.message;
window.showErrorMessage(`Failed to delete remote Kafka cluster '${name}': ${error.message}`);
}
telemetryService.send(event);
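// Whether or not the remote deletion succeeded, ask Tools for Apache Kafka
// to remove the entry from its Kafka Explorer.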
const deleteRequest = {
clusterIds: [clusterId]
};
return commands.executeCommand("vscode-kafka.api.deleteclusters", deleteRequest);
}
})
);
context.subscriptions.push(
commands.registerCommand(CREATE_RHOSAK_CLUSTER_CMD, async () => {
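// Run the interactive creation wizard; on success, register the new cluster(s)
// with Tools for Apache Kafka so they appear in the Kafka Explorer.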
try {
const clusters = await createRemoteCluster(telemetryService);
if (clusters && clusters.length > 0){
return commands.executeCommand("vscode-kafka.api.saveclusters", clusters);
}
} catch(error) {
console.log(error);
window.showErrorMessage(error.message);
}
})
);
}

export async function openRHOSAKDashboard(telemetryService:TelemetryService, reason: string, clusterId?:string) {
11 changes: 7 additions & 4 deletions src/extension.ts
@@ -1,13 +1,14 @@
/* eslint-disable @typescript-eslint/naming-convention */
import { getRedHatService, TelemetryService } from "@redhat-developer/vscode-redhat-telemetry";
import { authentication, commands, ExtensionContext, ProgressLocation, window } from 'vscode';
import { openRHOSAKDashboard, registerCommands } from './commands';
import { CREATE_RHOSAK_CLUSTER_CMD, openRHOSAKDashboard, registerCommands } from './commands';
import { rhosakService } from './rhosakService';
import { convertAll } from './utils';
import { Cluster, ClusterProviderParticipant, ClusterSettings, ConnectionOptions, KafkaConfig, KafkaExtensionParticipant } from './vscodekafka-api';

const RHOSAK_LABEL = "Red Hat OpenShift Streams for Apache Kafka";
const OPEN_DASHBOARD = 'Open Dashboard';
const CREATE_CLUSTER = 'Create a new remote cluster';

export async function activate(context: ExtensionContext): Promise<KafkaExtensionParticipant> {
let telemetryService: TelemetryService = await (await getRedHatService(context)).getTelemetryService();
Expand Down Expand Up @@ -49,7 +50,7 @@ async function configureClusters(clusterSettings: ClusterSettings, telemetryServ
});
} catch (error) {
let event = {
name: "rhoas.add.rhosak.clusters.failure",
name: "rhoas.add.rhosak.clusters",
properties: {
"error": `${error}`
}
@@ -85,8 +86,10 @@ async function configureClusters(clusterSettings: ClusterSettings, telemetryServ
window.showInformationMessage(`All ${RHOSAK_LABEL} clusters have already been added`);
} else {
// No clusters yet: offer to create one or open the landing page
const action = await window.showWarningMessage(`No ${RHOSAK_LABEL} cluster available!`, OPEN_DASHBOARD);
if (action === OPEN_DASHBOARD) {
const action = await window.showWarningMessage(`No ${RHOSAK_LABEL} cluster available!`, CREATE_CLUSTER, OPEN_DASHBOARD);
if (action === CREATE_CLUSTER) {
commands.executeCommand(CREATE_RHOSAK_CLUSTER_CMD);
} else if (action === OPEN_DASHBOARD) {
openRHOSAKDashboard(telemetryService, "No clusters");
}
}