
Docs: Handle Throwing Expression in SpeziLLMOpenAI.md's LLMOpenAIDemo Example #61

Merged · 9 commits · Aug 19, 2024
README.md (26 additions, 6 deletions)
````diff
@@ -127,8 +127,12 @@ struct LLMLocalDemoView: View {
                 )
             )
 
-            for try await token in try await llmSession.generate() {
-                responseText.append(token)
+            do {
+                for try await token in try await llmSession.generate() {
+                    responseText.append(token)
+                }
+            } catch {
+                // Handle errors here. E.g., you can use `ViewState` and `viewStateAlert` from SpeziViews.
             }
         }
     }
````
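The wrapping is needed because of the `AsyncThrowingStream` contract: a `for try await` loop rethrows whatever error the stream finishes with, so an unhandled failure would escape the loop. A minimal, self-contained sketch of that behavior (the tokens and the error are made up for illustration):

```swift
import Foundation

// Hypothetical demo: a stream that yields two tokens and then fails.
// `for try await` rethrows the stream's terminal error, which is why the
// PR wraps the loop in `do`/`catch` instead of letting the error escape.
func demo() async {
    let stream = AsyncThrowingStream<String, Error> { continuation in
        continuation.yield("Hello, ")
        continuation.yield("world")
        continuation.finish(throwing: URLError(.notConnectedToInternet))
    }

    var responseText = ""
    do {
        for try await token in stream {
            responseText.append(token)
        }
    } catch {
        // Partial output is still available here; surface the error to the user.
        print("Generation failed after '\(responseText)': \(error)")
    }
}
```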
````diff
@@ -150,6 +154,10 @@ In order to use OpenAI LLMs within the Spezi ecosystem, the [SpeziLLM](https://s
 See the [SpeziLLM documentation](https://swiftpackageindex.com/stanfordspezi/spezillm/documentation/spezillm) for more details.
 
 ```swift
+import Spezi
+import SpeziLLM
+import SpeziLLMOpenAI
+
 class LLMOpenAIAppDelegate: SpeziAppDelegate {
     override var configuration: Configuration {
         Configuration {
````
````diff
@@ -171,6 +179,10 @@ The code example below showcases the interaction with an OpenAI LLM through the
 The `LLMOpenAISchema` defines the type and configurations of the to-be-executed `LLMOpenAISession`. This transformation is done via the [`LLMRunner`](https://swiftpackageindex.com/stanfordspezi/spezillm/documentation/spezillm/llmrunner) that uses the `LLMOpenAIPlatform`. The inference via `LLMOpenAISession/generate()` returns an `AsyncThrowingStream` that yields all generated `String` pieces.
 
 ```swift
+import SpeziLLM
+import SpeziLLMOpenAI
+import SwiftUI
+
 struct LLMOpenAIDemoView: View {
     @Environment(LLMRunner.self) var runner
     @State var responseText = ""
````
````diff
@@ -189,8 +201,12 @@ struct LLMOpenAIDemoView: View {
                 )
             )
 
-            for try await token in try await llmSession.generate() {
-                responseText.append(token)
+            do {
+                for try await token in try await llmSession.generate() {
+                    responseText.append(token)
+                }
+            } catch {
+                // Handle errors here. E.g., you can use `ViewState` and `viewStateAlert` from SpeziViews.
             }
         }
     }
````
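The `// Handle errors here` comment points at SpeziViews. A hedged sketch of what that could look like, assuming SpeziViews' `ViewState`, `viewStateAlert(state:)`, and `AnyLocalizedError` APIs; the schema configuration is illustrative and abbreviated, not copied from the repo:

```swift
import SpeziLLM
import SpeziLLMOpenAI
import SpeziViews
import SwiftUI

struct LLMOpenAIDemoView: View {
    @Environment(LLMRunner.self) var runner
    @State var responseText = ""
    @State var viewState: ViewState = .idle

    var body: some View {
        Text(responseText)
            // Presents an alert whenever `viewState` becomes `.error`.
            .viewStateAlert(state: $viewState)
            .task {
                // Illustrative schema configuration; see the README for the real one.
                let llmSession: LLMOpenAISession = runner(
                    with: LLMOpenAISchema(parameters: .init(modelType: .gpt4_turbo))
                )

                do {
                    for try await token in try await llmSession.generate() {
                        responseText.append(token)
                    }
                } catch {
                    // Wrap the thrown error so `viewStateAlert` can display it.
                    viewState = .error(AnyLocalizedError(error: error))
                }
            }
    }
}
```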
````diff
@@ -263,8 +279,12 @@ struct LLMFogDemoView: View {
                 )
             )
 
-            for try await token in try await llmSession.generate() {
-                responseText.append(token)
+            do {
+                for try await token in try await llmSession.generate() {
+                    responseText.append(token)
+                }
+            } catch {
+                // Handle errors here. E.g., you can use `ViewState` and `viewStateAlert` from SpeziViews.
             }
         }
     }
````
Sources/SpeziLLMFog/LLMFogSession.swift (6 additions, 2 deletions)
````diff
@@ -50,8 +50,12 @@ import SpeziLLM
 ///                 )
 ///             )
 ///
-///             for try await token in try await llmSession.generate() {
-///                 responseText.append(token)
+///             do {
+///                 for try await token in try await llmSession.generate() {
+///                     responseText.append(token)
+///                 }
+///             } catch {
+///                 // Handle errors here. E.g., you can use `ViewState` and `viewStateAlert` from SpeziViews.
 ///             }
 ///         }
 ///     }
````
Sources/SpeziLLMFog/SpeziLLMFog.docc/SpeziLLMFog.md (6 additions, 2 deletions)
````diff
@@ -100,8 +100,12 @@ struct LLMFogDemoView: View {
                 )
             )
 
-            for try await token in try await llmSession.generate() {
-                responseText.append(token)
+            do {
+                for try await token in try await llmSession.generate() {
+                    responseText.append(token)
+                }
+            } catch {
+                // Handle errors here. E.g., you can use `ViewState` and `viewStateAlert` from SpeziViews.
             }
         }
     }
````
Sources/SpeziLLMLocal/LLMLocalSession.swift (6 additions, 2 deletions)
````diff
@@ -46,8 +46,12 @@ import SpeziLLM
 ///                 )
 ///             )
 ///
-///             for try await token in try await llmSession.generate() {
-///                 responseText.append(token)
+///             do {
+///                 for try await token in try await llmSession.generate() {
+///                     responseText.append(token)
+///                 }
+///             } catch {
+///                 // Handle errors here. E.g., you can use `ViewState` and `viewStateAlert` from SpeziViews.
 ///             }
 ///         }
 ///     }
````
Sources/SpeziLLMLocal/SpeziLLMLocal.docc/SpeziLLMLocal.md (6 additions, 2 deletions)
````diff
@@ -111,8 +111,12 @@ struct LLMLocalDemoView: View {
                 )
             )
 
-            for try await token in try await llmSession.generate() {
-                responseText.append(token)
+            do {
+                for try await token in try await llmSession.generate() {
+                    responseText.append(token)
+                }
+            } catch {
+                // Handle errors here. E.g., you can use `ViewState` and `viewStateAlert` from SpeziViews.
             }
         }
     }
````
Sources/SpeziLLMOpenAI/LLMOpenAISession.swift (10 additions, 2 deletions)
````diff
@@ -33,6 +33,10 @@ import SpeziSecureStorage
 /// The example below demonstrates a minimal usage of the ``LLMOpenAISession`` via the `LLMRunner`.
 ///
 /// ```swift
+/// import SpeziLLM
+/// import SpeziLLMOpenAI
+/// import SwiftUI
+///
 /// struct LLMOpenAIDemoView: View {
 ///     @Environment(LLMRunner.self) var runner
 ///     @State var responseText = ""
````
````diff
@@ -51,8 +55,12 @@ import SpeziSecureStorage
 ///                 )
 ///             )
 ///
-///             for try await token in try await llmSession.generate() {
-///                 responseText.append(token)
+///             do {
+///                 for try await token in try await llmSession.generate() {
+///                     responseText.append(token)
+///                 }
+///             } catch {
+///                 // Handle errors here. E.g., you can use `ViewState` and `viewStateAlert` from SpeziViews.
 ///             }
 ///         }
 ///     }
````
Sources/SpeziLLMOpenAI/SpeziLLMOpenAI.docc/SpeziLLMOpenAI.md (18 additions, 3 deletions)
````diff
@@ -65,6 +65,10 @@ In order to use OpenAI LLMs, the [SpeziLLM](https://swiftpackageindex.com/stanfo
 See the [SpeziLLM documentation](https://swiftpackageindex.com/stanfordspezi/spezillm/documentation/spezillm) for more details.
 
 ```swift
+import Spezi
+import SpeziLLM
+import SpeziLLMOpenAI
+
 class LLMOpenAIAppDelegate: SpeziAppDelegate {
     override var configuration: Configuration {
         Configuration {
````
````diff
@@ -86,6 +90,10 @@ The ``LLMOpenAISession`` contains the ``LLMOpenAISession/context`` property whic
 Ensure the property always contains all necessary information, as the ``LLMOpenAISession/generate()`` function executes the inference based on the ``LLMOpenAISession/context``
 
 ```swift
+import SpeziLLM
+import SpeziLLMOpenAI
+import SwiftUI
+
 struct LLMOpenAIDemoView: View {
     @Environment(LLMRunner.self) var runner
     @State var responseText = ""
````
````diff
@@ -104,8 +112,12 @@ struct LLMOpenAIDemoView: View {
                 )
             )
 
-            for try await token in try await llmSession.generate() {
-                responseText.append(token)
+            do {
+                for try await token in try await llmSession.generate() {
+                    responseText.append(token)
+                }
+            } catch {
+                // Handle errors here. E.g., you can use `ViewState` and `viewStateAlert` from SpeziViews.
             }
         }
     }
````
````diff
@@ -125,10 +137,12 @@ The ``LLMOpenAIAPITokenOnboardingStep`` provides a view that can be used for the
 First, create a new view to show the onboarding step:
 
 ```swift
+import SpeziLLMOpenAI
 import SpeziOnboarding
+import SwiftUI
 
 struct OpenAIAPIKey: View {
-    @EnvironmentObject private var onboardingNavigationPath: OnboardingNavigationPath
+    @Environment(OnboardingNavigationPath.self) private var onboardingNavigationPath: OnboardingNavigationPath
 
     var body: some View {
         LLMOpenAIAPITokenOnboardingStep {
````
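The `@EnvironmentObject` to `@Environment(OnboardingNavigationPath.self)` change reflects the move from Combine's `ObservableObject` to the Observation framework. A generic sketch of the two injection styles (the `Router` type is hypothetical, for illustration only):

```swift
import SwiftUI

// Observation-based model (iOS 17+): no `ObservableObject` conformance,
// no `@Published` properties.
@Observable
class Router {
    var path: [String] = []
}

struct RootView: View {
    var body: some View {
        ChildView()
            .environment(Router())   // injected via `.environment(_:)`, not `.environmentObject(_:)`
    }
}

struct ChildView: View {
    // Read with `@Environment(Type.self)` instead of `@EnvironmentObject`.
    @Environment(Router.self) private var router

    var body: some View {
        Button("Continue") { router.path.append("next") }
    }
}
```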
````diff
@@ -142,6 +156,7 @@ This view can then be added to the `OnboardingFlow` within the Spezi Template Ap
 
 ```swift
 import SpeziOnboarding
+import SwiftUI
 
 struct OnboardingFlow: View {
     @AppStorage(StorageKeys.onboardingFlowComplete) var completedOnboardingFlow = false
````