Docs: Handle Throwing Expression in SpeziLLMOpenAI.md's LLMOpenAIDemo Example (#61)

# Handle Throwing Expression in SpeziLLMOpenAI.md's LLMOpenAIDemo Example

## ♻️ Current situation & Problem

After copying a code example from the SpeziLLM documentation, I noticed
that ...
- a throwing expression wasn’t being handled. To make life easier for
future readers, add a simple `do … catch` for handling any errors
that may be thrown. Also, happy to take suggestions for a more elegant
way to do this!
- some imports were missing. To make life easier for future readers, add
the missing imports. I hope I didn’t miss any.
- I was getting an error around the use of `@EnvironmentObject`. Fix the
error by substituting `@EnvironmentObject` with `@Environment`, which is
also in line with the code in `TestApp`.
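Taken together, the three fixes amount to the pattern below. This is a sketch assembled from the updated snippets in the diff; the `LLMOpenAISchema` configuration (e.g., the model parameter) is illustrative only and may differ from your setup:

```swift
import SpeziLLM
import SpeziLLMOpenAI
import SwiftUI

struct LLMOpenAIDemoView: View {
    // `@Environment` instead of `@EnvironmentObject`, in line with `TestApp`
    @Environment(LLMRunner.self) var runner
    @State var responseText = ""

    var body: some View {
        Text(responseText)
            .task {
                // Illustrative schema configuration; adjust to your app
                let llmSession: LLMOpenAISession = runner(
                    with: LLMOpenAISchema(
                        parameters: .init(modelType: .gpt4o)
                    )
                )

                // Wrap the throwing async stream in `do … catch`
                do {
                    for try await token in try await llmSession.generate() {
                        responseText.append(token)
                    }
                } catch {
                    // Handle errors here. E.g., you can use `ViewState` and
                    // `viewStateAlert` from SpeziViews.
                }
            }
    }
}
```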

## ⚙️ Release Notes 
- Update documentation

## 📚 Documentation
- Update documentation


## 📝 Code of Conduct & Contributing Guidelines 

By creating this pull request, you agree to follow our [Code
of
Conduct](https://github.com/StanfordSpezi/.github/blob/main/CODE_OF_CONDUCT.md)
and [Contributing
Guidelines](https://github.com/StanfordSpezi/.github/blob/main/CONTRIBUTING.md):
- [x] I agree to follow the [Code of
Conduct](https://github.com/StanfordSpezi/.github/blob/main/CODE_OF_CONDUCT.md)
and [Contributing
Guidelines](https://github.com/StanfordSpezi/.github/blob/main/CONTRIBUTING.md).

---------

Co-authored-by: Paul Schmiedmayer <PSchmiedmayer@users.noreply.github.com>
paulhdk and PSchmiedmayer authored Aug 19, 2024
1 parent f52fe38 commit e53bc15
Showing 7 changed files with 78 additions and 19 deletions.
32 changes: 26 additions & 6 deletions README.md
@@ -127,8 +127,12 @@ struct LLMLocalDemoView: View {
)
)
for try await token in try await llmSession.generate() {
responseText.append(token)
do {
for try await token in try await llmSession.generate() {
responseText.append(token)
}
} catch {
// Handle errors here. E.g., you can use `ViewState` and `viewStateAlert` from SpeziViews.
}
}
}
@@ -150,6 +154,10 @@ In order to use OpenAI LLMs within the Spezi ecosystem, the [SpeziLLM](https://s
See the [SpeziLLM documentation](https://swiftpackageindex.com/stanfordspezi/spezillm/documentation/spezillm) for more details.
```swift
import Spezi
import SpeziLLM
import SpeziLLMOpenAI
class LLMOpenAIAppDelegate: SpeziAppDelegate {
override var configuration: Configuration {
Configuration {
@@ -171,6 +179,10 @@ The code example below showcases the interaction with an OpenAI LLM through the
The `LLMOpenAISchema` defines the type and configurations of the to-be-executed `LLMOpenAISession`. This transformation is done via the [`LLMRunner`](https://swiftpackageindex.com/stanfordspezi/spezillm/documentation/spezillm/llmrunner) that uses the `LLMOpenAIPlatform`. The inference via `LLMOpenAISession/generate()` returns an `AsyncThrowingStream` that yields all generated `String` pieces.
```swift
import SpeziLLM
import SpeziLLMOpenAI
import SwiftUI
struct LLMOpenAIDemoView: View {
@Environment(LLMRunner.self) var runner
@State var responseText = ""
@@ -189,8 +201,12 @@ struct LLMOpenAIDemoView: View {
)
)
for try await token in try await llmSession.generate() {
responseText.append(token)
do {
for try await token in try await llmSession.generate() {
responseText.append(token)
}
} catch {
// Handle errors here. E.g., you can use `ViewState` and `viewStateAlert` from SpeziViews.
}
}
}
@@ -263,8 +279,12 @@ struct LLMFogDemoView: View {
)
)
for try await token in try await llmSession.generate() {
responseText.append(token)
do {
for try await token in try await llmSession.generate() {
responseText.append(token)
}
} catch {
// Handle errors here. E.g., you can use `ViewState` and `viewStateAlert` from SpeziViews.
}
}
}
8 changes: 6 additions & 2 deletions Sources/SpeziLLMFog/LLMFogSession.swift
@@ -50,8 +50,12 @@ import SpeziLLM
/// )
/// )
///
/// for try await token in try await llmSession.generate() {
/// responseText.append(token)
/// do {
/// for try await token in try await llmSession.generate() {
/// responseText.append(token)
/// }
/// } catch {
/// // Handle errors here. E.g., you can use `ViewState` and `viewStateAlert` from SpeziViews.
/// }
/// }
/// }
8 changes: 6 additions & 2 deletions Sources/SpeziLLMFog/SpeziLLMFog.docc/SpeziLLMFog.md
@@ -100,8 +100,12 @@ struct LLMFogDemoView: View {
)
)

for try await token in try await llmSession.generate() {
responseText.append(token)
do {
for try await token in try await llmSession.generate() {
responseText.append(token)
}
} catch {
// Handle errors here. E.g., you can use `ViewState` and `viewStateAlert` from SpeziViews.
}
}
}
8 changes: 6 additions & 2 deletions Sources/SpeziLLMLocal/LLMLocalSession.swift
@@ -46,8 +46,12 @@ import SpeziLLM
/// )
/// )
///
/// for try await token in try await llmSession.generate() {
/// responseText.append(token)
/// do {
/// for try await token in try await llmSession.generate() {
/// responseText.append(token)
/// }
/// } catch {
/// // Handle errors here. E.g., you can use `ViewState` and `viewStateAlert` from SpeziViews.
/// }
/// }
/// }
8 changes: 6 additions & 2 deletions Sources/SpeziLLMLocal/SpeziLLMLocal.docc/SpeziLLMLocal.md
@@ -111,8 +111,12 @@ struct LLMLocalDemoView: View {
)
)
for try await token in try await llmSession.generate() {
responseText.append(token)
do {
for try await token in try await llmSession.generate() {
responseText.append(token)
}
} catch {
// Handle errors here. E.g., you can use `ViewState` and `viewStateAlert` from SpeziViews.
}
}
}
12 changes: 10 additions & 2 deletions Sources/SpeziLLMOpenAI/LLMOpenAISession.swift
@@ -33,6 +33,10 @@ import SpeziSecureStorage
/// The example below demonstrates a minimal usage of the ``LLMOpenAISession`` via the `LLMRunner`.
///
/// ```swift
/// import SpeziLLM
/// import SpeziLLMOpenAI
/// import SwiftUI
///
/// struct LLMOpenAIDemoView: View {
/// @Environment(LLMRunner.self) var runner
/// @State var responseText = ""
@@ -51,8 +55,12 @@ import SpeziSecureStorage
/// )
/// )
///
/// for try await token in try await llmSession.generate() {
/// responseText.append(token)
/// do {
/// for try await token in try await llmSession.generate() {
/// responseText.append(token)
/// }
/// } catch {
/// // Handle errors here. E.g., you can use `ViewState` and `viewStateAlert` from SpeziViews.
/// }
/// }
/// }
21 changes: 18 additions & 3 deletions Sources/SpeziLLMOpenAI/SpeziLLMOpenAI.docc/SpeziLLMOpenAI.md
@@ -65,6 +65,10 @@ In order to use OpenAI LLMs, the [SpeziLLM](https://swiftpackageindex.com/stanfo
See the [SpeziLLM documentation](https://swiftpackageindex.com/stanfordspezi/spezillm/documentation/spezillm) for more details.

```swift
import Spezi
import SpeziLLM
import SpeziLLMOpenAI

class LLMOpenAIAppDelegate: SpeziAppDelegate {
override var configuration: Configuration {
Configuration {
@@ -86,6 +90,10 @@ The ``LLMOpenAISession`` contains the ``LLMOpenAISession/context`` property whic
Ensure the property always contains all necessary information, as the ``LLMOpenAISession/generate()`` function executes the inference based on the ``LLMOpenAISession/context``

```swift
import SpeziLLM
import SpeziLLMOpenAI
import SwiftUI

struct LLMOpenAIDemoView: View {
@Environment(LLMRunner.self) var runner
@State var responseText = ""
@@ -104,8 +112,12 @@ struct LLMOpenAIDemoView: View {
)
)

for try await token in try await llmSession.generate() {
responseText.append(token)
do {
for try await token in try await llmSession.generate() {
responseText.append(token)
}
} catch {
// Handle errors here. E.g., you can use `ViewState` and `viewStateAlert` from SpeziViews.
}
}
}
@@ -125,10 +137,12 @@ The ``LLMOpenAIAPITokenOnboardingStep`` provides a view that can be used for the
First, create a new view to show the onboarding step:

```swift
import SpeziLLMOpenAI
import SpeziOnboarding
import SwiftUI

struct OpenAIAPIKey: View {
@EnvironmentObject private var onboardingNavigationPath: OnboardingNavigationPath
@Environment(OnboardingNavigationPath.self) private var onboardingNavigationPath: OnboardingNavigationPath

var body: some View {
LLMOpenAIAPITokenOnboardingStep {
@@ -142,6 +156,7 @@ This view can then be added to the `OnboardingFlow` within the Spezi Template Ap

```swift
import SpeziOnboarding
import SwiftUI

struct OnboardingFlow: View {
@AppStorage(StorageKeys.onboardingFlowComplete) var completedOnboardingFlow = false
