Unofficial Kotlin multiplatform variant of the Anthropic SDK.
Because I believe that coding agentic AI should be as easy as possible. I come from the creative coding community, where we teach artists without prior programming experience how to express their creations through code as a medium. I want to give creators of all kinds this extremely powerful tool, so that you can turn your own machine into an outside window through which the AI system can perceive your world and your needs, and act upon this information.
There is no official Anthropic SDK for Kotlin, the de facto standard language for Android development. The one for Java is also lacking. Even if they appear one day, we can expect them to be autogenerated by the Stainless API bot, which both Anthropic and OpenAI use to automate SDK development as their APIs evolve. While such an approach seems to work with dynamically typed languages, it can fall short with statically typed languages like Kotlin, sacrificing typical language idioms in favor of over-verbose constructs. This library is Kotlin multiplatform, therefore AI agents developed with it can be used seamlessly on Android, the JVM, JavaScript, iOS, WebAssembly, and many other environments.
Add to your `build.gradle.kts`:

```kotlin
dependencies {
    implementation("com.xemantic.anthropic:anthropic-sdk-kotlin:0.3.1")
}
```
And in the case of the JVM:

```kotlin
dependencies {
    implementation("io.ktor:ktor-client-java:3.0.0") // or the latest ktor version
}
```
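Other Kotlin targets need their corresponding Ktor client engines instead. As an illustration (engine artifact names come from the Ktor distribution; pick only the ones matching your targets, and in a multiplatform build place them in the respective target source sets):

```kotlin
dependencies {
    implementation("io.ktor:ktor-client-okhttp:3.0.0") // Android
    implementation("io.ktor:ktor-client-darwin:3.0.0") // iOS / macOS
    implementation("io.ktor:ktor-client-js:3.0.0")     // JavaScript
}
```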
If you are planning to use tools, you will also need:
```kotlin
plugins {
    // ... other plugins like kotlin jvm or multiplatform
    kotlin("plugin.serialization") version "2.0.21"
}

dependencies {
    implementation("org.jetbrains.kotlinx:kotlinx-serialization-core:1.7.3")
}
```
The simplest code looks like:

```kotlin
fun main() {
    val anthropic = Anthropic()
    val response = runBlocking {
        anthropic.messages.create {
            +Message {
                +"Hello, Claude"
            }
        }
    }
    println(response)
}
```
Streaming is also possible:
```kotlin
fun main() {
    val client = Anthropic()
    runBlocking {
        client.messages.stream {
            +Message { +"Write me a poem." }
        }
            .filterIsInstance<ContentBlockDeltaEvent>()
            .map { (it.delta as Delta.TextDelta).text }
            .collect { delta -> print(delta) }
    }
}
```
If you want to write AI agents, you need tools, and this is where this library shines:
```kotlin
@AnthropicTool(
    name = "get_weather",
    description = "Get the weather for a specific location"
)
data class WeatherTool(val location: String) : UsableTool {
    override fun use(
        toolUseId: String
    ) = ToolResult(
        toolUseId,
        "The weather is 73f" // it should use some external service
    )
}
```
```kotlin
fun main() = runBlocking {
    val client = Anthropic {
        tool<WeatherTool>()
    }

    val conversation = mutableListOf<Message>()
    conversation += Message { +"What is the weather in SF?" }

    val initialResponse = client.messages.create {
        messages = conversation
        useTools()
    }
    println("Initial response:")
    println(initialResponse)

    conversation += initialResponse.asMessage()

    val tool = initialResponse.content.filterIsInstance<ToolUse>().first()
    val toolResult = tool.use()
    conversation += Message { +toolResult }

    val finalResponse = client.messages.create {
        messages = conversation
        useTools()
    }
    println("Final response:")
    println(finalResponse)
}
```
The advantage comes not only from reduced verbosity: the class annotated with `@AnthropicTool` will also have its JSON schema automatically sent to the Anthropic API when defining the tool to use. For reference, check the equivalent examples in the official Anthropic SDKs.
None of them takes advantage of automatic schema generation, which becomes crucial when maintaining agents that expect more complex and structured input from the LLM.
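To picture what such a generated schema looks like, here is a simplified, self-contained sketch. The `toolSchema` helper is purely illustrative and not part of this library (which derives the schema from the annotated data class via kotlinx.serialization):

```kotlin
// Illustrative only: builds the kind of JSON schema that automatic
// generation derives from a tool data class's fields, e.g.
// data class BookingTool(val city: String, val nights: Int).
fun toolSchema(properties: Map<String, String>): String {
    val props = properties.entries.joinToString(",") { (name, type) ->
        """"$name":{"type":"$type"}"""
    }
    val required = properties.keys.joinToString(",") { "\"$it\"" }
    return """{"type":"object","properties":{$props},"required":[$required]}"""
}

fun main() {
    // Even two typed fields make a hand-written schema tedious to maintain.
    println(toolSchema(mapOf("city" to "string", "nights" to "integer")))
}
```

With a hand-written schema, every field added to the data class would have to be mirrored manually; with generation, the data class is the single source of truth.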
Tools can be provided with dependencies, for example singleton services offering facilities like an HTTP client to connect to the internet, or a DB connection pool to access a database.
```kotlin
@AnthropicTool(
    name = "query_database",
    description = "Executes SQL on the database"
)
data class DatabaseQueryTool(val sql: String) : UsableTool {

    @Transient
    internal lateinit var connection: Connection

    override fun use(
        toolUseId: String
    ) = ToolResult(
        toolUseId,
        text = connection.prepareStatement(sql).use { statement ->
            statement.executeQuery().use { resultSet ->
                resultSet.toString() // a real tool would format the rows
            }
        }
    )
}
```
```kotlin
fun main() = runBlocking {
    val client = Anthropic {
        tool<DatabaseQueryTool> {
            connection = DriverManager.getConnection("jdbc:...")
        }
    }

    val response = client.messages.create {
        +Message { +"Select all the users who never logged in to the system" }
        useTools()
    }

    val tool = response.content.filterIsInstance<ToolUse>().first()
    val toolResult = tool.use()
    println(toolResult)
}
```
After the `DatabaseQueryTool` is decoded from the API response, it is processed by the lambda passed to the tool definition. In the example above, the lambda injects a JDBC connection into the tool.
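This injection mechanism can be pictured with a plain-Kotlin sketch (the `ToolRegistry` and `FakeTool` names are hypothetical, not this library's API): an initialization lambda is registered per tool and applied to every instance decoded from a response.

```kotlin
// Hypothetical sketch: a registry stores one configuration lambda per tool
// name and applies it after an instance is decoded from the API response.
class ToolRegistry {
    private val initializers = mutableMapOf<String, Any.() -> Unit>()

    fun <T : Any> register(name: String, initialize: T.() -> Unit) {
        @Suppress("UNCHECKED_CAST")
        val init = initialize as Any.() -> Unit
        initializers[name] = init
    }

    fun <T : Any> configure(name: String, instance: T): T {
        initializers[name]?.invoke(instance) // inject dependencies, if any
        return instance
    }
}

// Stand-in for a decoded tool with a dependency to be injected.
class FakeTool {
    lateinit var connectionUrl: String
}

fun main() {
    val registry = ToolRegistry()
    registry.register<FakeTool>("fake") { connectionUrl = "jdbc:..." }
    val tool = registry.configure("fake", FakeTool())
    println(tool.connectionUrl) // the lambda injected the dependency
}
```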
More sophisticated code examples targeting various Kotlin platforms can be found in the anthropic-sdk-kotlin-demo project.

- anthropic-sdk-kotlin-demo: more complex examples and use cases
- claudine: Claudine, the only AI assistant you will ever need, and the actual reason why anthropic-sdk-kotlin came into being, to let me build Claudine and other AI agents.
```shell
export ANTHROPIC_API_KEY=your-key-goes-here
./gradlew build
```
Many unit tests are actually integration tests calling the Anthropic APIs and asserting against the results. This is why they might be flaky from time to time, for example when a test image is misinterpreted or Claude randomly hallucinates too much.