diff --git a/_community_members/chull.md b/_community_members/chull.md
new file mode 100644
index 0000000000..296a3e38cc
--- /dev/null
+++ b/_community_members/chull.md
@@ -0,0 +1,35 @@
+---
+# short_name: 'chull'
+short_name: charliehull
+name: 'Charlie Hull'
+photo: '/assets/media/community/members/charliehull.jpg'
+job_title_and_company: 'Marketing Director at Open Source Connections'
+primary_title: 'Charlie Hull'
+title: 'OpenSearch Community Member: Charlie Hull'
+breadcrumbs:
+ icon: community
+ items:
+ - title: Community
+ url: /community/index.html
+ - title: Members
+ url: /community/members/index.html
+ - title: "Charlie Hull Profile"
+ url: '/community/members/charlie-hull.html'
+mastodon:
+ url: https://hachyderm.io/@flaxsearch
+ name: flaxsearch
+twitter: 'flaxsearch'
+github: flaxsearch
+linkedin: 'charliehullsearch'
+session_track:
+ - conference_id: "2024-europe"
+ name: "Workshops"
+permalink: '/community/members/charlie-hull.html'
+personas:
+ - conference_speaker
+ - author
+conference_id:
+ - "2024-europe"
+---
+
+**Charlie Hull** is the Marketing Director at OSC and also leads client projects. He keeps a strategic view on developments in the search industry and is in demand as a speaker at conferences across the world.
diff --git a/_community_members/epugh.md b/_community_members/epugh.md
new file mode 100644
index 0000000000..ff9e193c59
--- /dev/null
+++ b/_community_members/epugh.md
@@ -0,0 +1,35 @@
+---
+# short_name: 'epugh'
+short_name: ericpugh
+name: 'Eric Pugh'
+photo: '/assets/media/community/members/ericpugh.jpg'
+job_title_and_company: 'Co-founder and CEO of Open Source Connections'
+primary_title: 'Eric Pugh'
+title: 'OpenSearch Community Member: Eric Pugh'
+breadcrumbs:
+ icon: community
+ items:
+ - title: Community
+ url: /community/index.html
+ - title: Members
+ url: /community/members/index.html
+ - title: "Eric Pugh Profile"
+ url: '/community/members/eric-pugh.html'
+twitter: 'dep4b'
+github: epugh
+linkedin: 'epugh'
+session_track:
+ - conference_id: "2024-europe"
+ name: "Workshops"
+permalink: '/community/members/eric-pugh.html'
+personas:
+ - conference_speaker
+ - author
+conference_id:
+ - "2024-europe"
+---
+
+**Eric Pugh** is the co-founder and CEO of Open Source Connections. Today he helps OSC’s clients, especially those in the ecommerce space, build their own search teams and improve their search maturity, both by leading projects and by acting as a trusted advisor.
diff --git a/_events/2024-0409-fossasia-summit-2024.markdown b/_events/2024-0409-fossasia-summit-2024.markdown
index 902650eaec..3989a86f20 100644
--- a/_events/2024-0409-fossasia-summit-2024.markdown
+++ b/_events/2024-0409-fossasia-summit-2024.markdown
@@ -14,4 +14,4 @@ Be sure to catch the talk from Senior Researcher, OpenSearch Project - [Aparna S
"**User Perception as it Informs AI Perception of Performance**"
-Testing is an important aspect of improvement AI models. An untapped and less understood are of product development is to test results in a way the provides greater opportunities to frame AI responses. In this talk, I offer an example of user testing, take aways and how we improved the user experience of an AI Assistant. I cover discoveries at the early stages of development, the way in which this users use design heuristics in evaluating AI. User make judgments on information presented by AI in a way that can overwhelm, undermine or boost trust or confidence in the AI model. I conclude by providing design decisions that can help the user disambiguate the information offered by the AI model.
+Testing is an important aspect of improving AI models. An untapped and less understood area of product development is to test results in a way that provides greater opportunities to frame AI responses. In this talk, I offer an example of user testing, takeaways, and how we improved the user experience of an AI assistant. I cover discoveries from the early stages of development and the way in which users apply design heuristics when evaluating AI. Users make judgments on information presented by AI in a way that can overwhelm, undermine, or boost trust or confidence in the AI model. I conclude by providing design decisions that can help the user disambiguate the information offered by the AI model.
diff --git a/_includes/downloads/opensearch-docker.markdown b/_includes/downloads/opensearch-docker.markdown
index a6fa7b3542..4a21000de5 100644
--- a/_includes/downloads/opensearch-docker.markdown
+++ b/_includes/downloads/opensearch-docker.markdown
@@ -5,7 +5,7 @@ The best way to try out OpenSearch is to use [Docker Compose](https://docs.docke
- **Linux**: Ensure `vm.max_map_count` is set to at least 262144 as per the [documentation](/docs/opensearch/install/important-settings/).
2. Download [docker-compose.yml](/samples/docker-compose.yml) into your desired directory
- **Note for OpenSearch 2.12 or later:**
- When setting up the security demo configuration, set the initial admin password using the `OPENSEARCH_INITIAL_ADMIN_PASSWORD` environment variable. For more information, see [Setting up a demo configuration](https://opensearch.org/docs/latest//security/configuration/demo-configuration/).
+ When setting up the security demo configuration, set the initial admin password using the `OPENSEARCH_INITIAL_ADMIN_PASSWORD` environment variable. For more information, see [Setting up a demo configuration](https://opensearch.org/docs/latest/security/configuration/demo-configuration/).
3. Run `docker-compose up`
4. Have a nice coffee while everything is downloading and starting up
5. Navigate to [http://localhost:5601/](http://localhost:5601) for OpenSearch Dashboards
diff --git a/_opensearchcon_workshops/2024-europe-think-like-a-relevance-engineer.md b/_opensearchcon_workshops/2024-europe-think-like-a-relevance-engineer.md
new file mode 100644
index 0000000000..1ca27fc379
--- /dev/null
+++ b/_opensearchcon_workshops/2024-europe-think-like-a-relevance-engineer.md
@@ -0,0 +1,34 @@
+---
+primary_presenter: charliehull
+conference_id: 2024-europe
+speaker_talk_title: 'Think Like a Relevance Engineer'
+primary_title: 'Think Like a Relevance Engineer'
+title: 'OpenSearchCon 2024 Training: Think Like a Relevance Engineer by Open Source Connections'
+breadcrumbs:
+ icon: community
+ items:
+ - title: OpenSearchCon
+ url: /events/opensearchcon/index.html
+ - title: 2024
+ url: /events/opensearchcon/2024/index.html
+ - title: Europe
+ url: /events/opensearchcon/2024/europe/index.html
+ - title: Workshops
+ url: /events/opensearchcon/2024/europe/workshops/index.html
+session_time: '2024-05-06 - 9:00am-5:30pm'
+session_room: 'Vilnius – Workshop'
+session_track: 'Workshop'
+presenters:
+ - charliehull
+ - ericpugh
+permalink: '/events/opensearchcon/2024/europe/workshops/think-like-a-relevance-engineer-training.html'
+---
+
+The training is a one-day intensive course based on Think Like a Relevance Engineer (TLRE) for OpenSearch, taught by the team at Open Source Connections.
+
+- Part One - Managing, Measuring, and Testing Search Relevance: Understand how working on relevance requires different thinking than other engineering problems. We teach you to measure search quality, take a hypothesis-driven approach to search projects, and safely ‘fail fast’ towards ever-improving business KPIs.
+- Part Two - Engineering Relevance with OpenSearch: This part of the training demonstrates relevance tuning techniques that actually work. Relevance can’t be achieved by just tweaking field weights: boosting strategies, synonyms, and semantic search are discussed.
+
+## [Purchase your ticket with Open Source Connections here](https://www.eventbee.com/v/opensearch-tlre-intensive-at-opensearchcon-eu-24/event?eid=276614264#/tickets){:target="_blank"}.
+
+Entry to OpenSearchCon on Monday 6th May and Tuesday 7th May is included with the purchase of a training ticket.
diff --git a/_partners/hostkey.md b/_partners/hostkey.md
new file mode 100644
index 0000000000..ea5aa858f1
--- /dev/null
+++ b/_partners/hostkey.md
@@ -0,0 +1,67 @@
+---
+name: 'Hostkey'
+name_long: 'HOSTKEY B.V.'
+# upload your logo to the following directory - must be square
+logo: '/assets/media/partners/hostkey.png'
+link: 'https://hostkey.com/'
+logo_large: '/assets/media/partners/hostkey/hostkey-logo.png'
+description: 'We offer a wide range of dedicated servers from entry-level up to high-performance GPU servers and private cloud solutions. Our servers are hosted in TIER III data centers in the Netherlands, Germany, Finland, Iceland, Turkey and the USA.'
+business_type: 'Server infrastructure provider.'
+region: 'North America, Europe, Middle East, Africa, Asia Pacific, Australia'
+contact: 'support@hostkey.com'
+opensearch_tech: 'We provide pre-installed and ready-to-use OpenSearch for our clients in our App Marketplace.'
+industries: 'business services, consumer services'
+multiple_office_locations:
+ - name: 'Main office - Netherlands'
+ location: |
+ Willem Frederik Hermansstraat 91 1011DG
+ Amsterdam, Netherlands
+ - name: 'euNetworks - data center in Netherlands'
+ location: |
+ Paul van Vlissingenstraat 16
+ Amsterdam, Netherlands
+ - name: 'Frankfurt 1 Data Center'
+ location: |
+ Eschborner Landstraße 100
+ Frankfurt, Germany
+ - name: 'Digita - data center in Finland'
+ location: |
+ 5 Uutiskatu
+ Helsinki, Finland
+ - name: 'Long Island Interconnect - data center in the USA'
+ location: |
+ 1025 Old Country Road, Westbury
+ New York, USA
+ - name: 'Verne Global - data center in Iceland'
+ location: |
+ Valhallarbraut 868, 262 Reykjanesbaer
+ Keflavik, Iceland
+ - name: 'TI Sparkle Turkey Telekomünikasyon A.S. - data center in Turkey'
+ location: |
+ Çobançeşme, Kımız Sokaği No:30, 34196
+ Bahçelievler/İstanbul, Türkiye
+resources:
+ - url: 'https://hostkey.com/blog/'
+ title: 'HOSTKEY Blog'
+ type: 'blog'
+ - url: 'https://hostkey.com/documentation/'
+ title: 'Knowledge base'
+ type: 'Documentation & FAQ'
+social_links:
+ - url: 'https://twitter.com/Hostkey'
+ icon: 'twitter'
+ - url: 'https://www.linkedin.com/company/hostkey.com/'
+ icon: 'linkedin'
+ - url: 'https://www.facebook.com/people/Hostkey/100030190886222/'
+ icon: 'facebook'
+products:
+ - url: 'https://hostkey.com/vps/'
+ name: 'VPS Servers'
+ description: 'VPS hosting in multiple locations.'
+ - url: 'https://hostkey.com/dedicated-servers/'
+ name: 'Dedicated servers'
+ description: 'Dedicated servers hosting in multiple locations.'
+ - url: 'https://hostkey.com/apps/databases/opensearch/'
+ name: 'OpenSearch on the Apps Marketplace'
+ description: 'Pre-installed, ready-to-use OpenSearch available from the HOSTKEY Apps Marketplace.'
+---
diff --git a/_posts/2023-12-21-customer-expectations-of-an-intelligent-dashboard-assistant.md b/_posts/2023-12-21-customer-expectations-of-an-intelligent-dashboard-assistant.md
index 436fa97de5..7e5a846da0 100644
--- a/_posts/2023-12-21-customer-expectations-of-an-intelligent-dashboard-assistant.md
+++ b/_posts/2023-12-21-customer-expectations-of-an-intelligent-dashboard-assistant.md
@@ -72,7 +72,7 @@ Trust in technology was a key theme for users of all types. Users communicated a
We presented the findings of this research at the Graylog GO 2023 user conference.
-{%include youtube-player.html id="zBmEkTN7Jb8" %}
+{%include youtube-player.html id="aJawKuFl7PU" %}
**References**
diff --git a/_posts/2024-02-23-enhanced-multi-vector-support-in-opensearch-knn.md b/_posts/2024-02-23-enhanced-multi-vector-support-in-opensearch-knn.md
new file mode 100644
index 0000000000..f97ccc0305
--- /dev/null
+++ b/_posts/2024-02-23-enhanced-multi-vector-support-in-opensearch-knn.md
@@ -0,0 +1,191 @@
+---
+layout: post
+title: "Enhanced multi-vector support for OpenSearch k-NN search with nested fields"
+authors:
+- heemin
+- vamshin
+- dylantong
+date: 2024-03-28 00:00:00 -0700
+categories:
+- technical-posts
+meta_keywords: OpenSearch multi vector, OpenSearch k-NN nested field
+meta_description: Improvements to OpenSearch k-NN search with nested fields, specifically focusing on multi-vector support
+excerpt: In OpenSearch 2.12, users can now obtain diverse search results from a k-NN index, even when multiple nearest vectors belong to just a few documents. This is expected to enhance both the efficiency and quality of search outcomes.
+---
+
+OpenSearch 2.12 significantly enhances k-NN indexes that use the HNSW algorithm and Faiss or Lucene engines, boosting result diversity by intelligently handling multi-vector support. This improvement ensures better search outcomes by effectively eliminating duplicate vectors from the same document during k-NN searches.
+
+## Why is this important?
+
+With the rise of large language models (LLMs), many users are turning to vector databases for indexing, storing, and retrieving information, particularly for building retrieval-augmented generation (RAG) systems. Vector databases rely on text embedding models to convert text to embeddings, preserving semantic information. However, models have limitations on the number of tokens to consider for embedding generation, thus requiring large documents to be chunked and stored as multiple embeddings in a single document. Without multi-vector support, every chunk of the document needs to be treated as a separate entity, which leads to duplication of document metadata across multiple chunks unless the user separates the document and its metadata. This duplication of metadata can result in significant storage and memory overhead. Furthermore, users must independently devise a merging mechanism to ensure that only a single document is retrieved from multiple chunks during search. Multi-vector support simplifies handling large documents, alleviating these challenges.
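+
+To make the data-modeling difference concrete, here is a minimal, purely illustrative Python sketch. The field names (`title`, `author`, `chunks`, `embedding`) are invented for this example and are not part of any OpenSearch API; the point is only to contrast one-document-per-chunk storage with a single document holding multiple vectors:
+
+```python
+# Hypothetical illustration only; field names are made up for this sketch.
+metadata = {"title": "A long report", "author": "Jane Doe"}
+chunk_embeddings = [[0.12, 0.56], [0.33, 0.48], [0.91, 0.07]]  # one embedding per chunk
+
+# Without multi-vector support: one document per chunk, metadata duplicated
+# three times, and the application must merge hits from the same source
+# document on its own at query time.
+per_chunk_docs = [{**metadata, "embedding": emb} for emb in chunk_embeddings]
+
+# With a nested vector field: one document, metadata stored once, and the
+# OpenSearch 2.12 k-NN search deduplicates hits per document for you.
+single_doc = {**metadata, "chunks": [{"embedding": emb} for emb in chunk_embeddings]}
+
+print(len(per_chunk_docs), "chunk documents vs.", 1, "document with nested vectors")
+```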
+
+## Understanding previous limitations
+
+There were two limitations in k-NN search with nested fields prior to OpenSearch 2.12.
+
+### Fewer results
+In previous versions of OpenSearch, when searching for nested k-NN fields with a specified number of nearest neighbors (k value), the search might return fewer than k documents. This occurred because the search operated at the nested field level or chunk level rather than the document level. In the worst-case scenario, it was possible that k neighbors could be the chunks belonging to a single document, returning just one document in the search results. While increasing the k value could result in a greater number of retrieved documents, it introduced unnecessary search overhead.
+
+For example, consider three documents labeled 1, 2, and 3, each containing two vectors in the nested k-NN field. Document 1 contains vectors A and B, document 2 contains vectors C and F, and document 3 contains vectors D and E. Let's say the two nearest vectors are A and B, both belonging to document 1. When searching for the nested k-NN field, the search returns only document 1, even if the k value is set to 2, as shown in the following image.
+
+![Fewer than k results are returned when the nearest vectors belong to the same document](/assets/media/blog-images/2024-02-23-multi-vector-support-in-knn/multi-vector-before-1.png)
+
+### Low recall
+Returning fewer results can also lead to low recall. In OpenSearch, an index consists of multiple shards, each containing multiple segments. Search operates at the segment level, with results aggregated at the shard level. After this, the results from all shards are aggregated and returned to the user. Consequently, if a single segment contains all of the top k documents but returns fewer than k documents, the final results will contain k documents but will not represent the true top k documents. This is because the search algorithm operates at the segment level, potentially missing relevant documents from other segments that should have been included in the top k results.
+
+To illustrate this, consider the following example. Suppose there are seven indexed documents, each containing either one or two vectors. Let's assume that, when searching the nested vector field, the order of vectors from nearest to farthest is as follows: A, B, C, D, E, F, G, H, I, and J. The search occurs at the segment level, and with a k value of 2, only document 1 is returned from segment 1. Consequently, the final aggregated results contain document 1 and document 3, whereas the expected results should include document 1 and document 2, as shown in the following image.
+
+![Low recall when a segment returns fewer than k documents and the shard-level merge misses a closer document](/assets/media/blog-images/2024-02-23-multi-vector-support-in-knn/multi-vector-before-2.png)
+
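+The same effect can be sketched in a few lines of Python. This is a hypothetical mini-simulation of the pre-2.12 behavior described above, not engine code; each tuple is `(doc_id, vector_id, distance)`, and a lower distance means nearer to the query:
+
+```python
+# Hypothetical illustration of the recall problem: each segment returns its
+# k nearest *vectors* (pre-2.12 behavior), duplicates collapse to fewer
+# documents, and the shard-level merge can then miss a closer document.
+segment_1 = [("doc1", "A", 0.1), ("doc1", "B", 0.2), ("doc2", "C", 0.3)]
+segment_2 = [("doc3", "D", 0.4), ("doc3", "E", 0.5)]
+k = 2
+
+def per_segment_pre_212(segment):
+    """Take the k nearest vectors, then collapse them to documents."""
+    nearest_vectors = sorted(segment, key=lambda t: t[2])[:k]
+    docs = {}
+    for doc_id, _, dist in nearest_vectors:
+        docs[doc_id] = min(dist, docs.get(doc_id, float("inf")))
+    return docs  # may contain fewer than k documents
+
+merged = {}  # shard-level merge of the per-segment results
+for segment in (segment_1, segment_2):
+    merged.update(per_segment_pre_212(segment))
+
+top_docs = sorted(merged.items(), key=lambda item: item[1])[:k]
+print(top_docs)  # [('doc1', 0.1), ('doc3', 0.4)] -- doc2 (0.3) is missed
+```
+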
+## Improvements in OpenSearch 2.12
+
+In version 2.12, OpenSearch uses document and vector mapping data to deduplicate search results. This process occurs when vectors belonging to the same document are already collected in the search queue. During deduplication, OpenSearch keeps only the nearest vector to the query vector for each document. The distance value of the selected vector is then converted back to an OpenSearch search score, ultimately becoming the document's score.
+
+Let’s take a look at an example, presented in the following diagrams. Consider two documents, document 1 and document 2. Document 1 contains vectors A and B, while document 2 contains vectors C and D (Fig. 1). During the search, vector A is found and added to the search queue (Fig. 2). Subsequently, vector B is found. Because both vectors A and B belong to document 1, their distances are compared. The distance between the query vector and vector B is 0.9, which is less than the distance between the query vector and vector A (1.0). As a result, vector A is removed from the search queue, and vector B is added (Fig. 3).
+
+After that, vector D is found. Because vector D belongs to document 2, whose vector is not in the search queue, vector D is added to the queue (Fig. 4). Then vector C is found. Because vector C belongs to document 2, whose vector is already in the search queue, the distances between the query vector and vectors C and D are compared. The distance between the query vector and vector D (1.1) is less than the distance between the query vector and vector C (1.2), so vector D remains in the search queue (Fig. 5).
+
+Both document 1 and document 2 are returned, and their scores are calculated based on the distances between the query vector and the vectors collected in the search queue.
+
+![Deduplication keeps only the nearest vector per document in the search queue (Fig. 1-5)](/assets/media/blog-images/2024-02-23-multi-vector-support-in-knn/multi-vector-after-1.png)
+
+Now, even with a k value of 2, the search returns two documents, regardless of whether the two nearest vectors belong to one or multiple documents. Additionally, this enhancement improves recall because each segment now returns the k nearest documents instead of just the k nearest vectors belonging to the documents.
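+
+To make the deduplication step concrete, here is a minimal Python sketch. It is an illustration of the idea only (the real logic lives in the k-NN plugin and the underlying engines): keep the nearest vector per document, then return the k nearest documents, which reproduces the outcome of the preceding example:
+
+```python
+# Illustrative sketch of per-document deduplication, not engine code.
+import heapq
+
+def dedup_top_k(candidates, k):
+    """candidates: (doc_id, vector_id, distance) tuples found during traversal."""
+    best_per_doc = {}  # doc_id -> (distance, vector_id)
+    for doc_id, vector_id, distance in candidates:
+        current = best_per_doc.get(doc_id)
+        if current is None or distance < current[0]:
+            best_per_doc[doc_id] = (distance, vector_id)  # nearer vector replaces the farther one
+    # Return the k nearest documents, each represented by its single nearest vector.
+    return heapq.nsmallest(k, ((dist, doc_id) for doc_id, (dist, _) in best_per_doc.items()))
+
+# The example from the diagrams: document 1 holds vectors A and B,
+# document 2 holds vectors C and D.
+found = [("doc1", "A", 1.0), ("doc1", "B", 0.9), ("doc2", "D", 1.1), ("doc2", "C", 1.2)]
+print(dedup_top_k(found, k=2))  # [(0.9, 'doc1'), (1.1, 'doc2')]
+```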
+
+## How to use a nested field to store a multi-vector
+
+Let's dive into the process of creating a k-NN index with nested fields and conducting searches on it.
+
+First, create a k-NN index by setting the `knn` value to `true` in the index settings. Additionally, set the `type` to `knn_vector` within the nested field. All other parameters of the nested `knn_vector` field remain the same as those for a regular `knn_vector` type:
+```json
+PUT my-knn-index
+{
+ "settings": {
+ "index": {
+ "knn": true
+ }
+ },
+ "mappings": {
+ "properties": {
+ "my_vectors": {
+ "type": "nested",
+ "properties": {
+ "my_vector": {
+ "type": "knn_vector",
+ "dimension": 2,
+ "method": {
+ "name": "hnsw",
+ "space_type": "l2",
+ "engine": "faiss"
+ }
+ }
+ }
+ }
+ }
+ }
+}
+```
+
+Next, insert your data. The number of data entries within a nested field is not fixed; you can index a different number of nested field items for each document. For instance, in this example, document 1 contains three nested field items and document 2 contains two nested field items:
+```json
+PUT _bulk?refresh=true
+{ "index": { "_index": "my-knn-index", "_id": "1" } }
+{"my_vectors":[{"my_vector":[1,1]},{"my_vector":[2,2]},{"my_vector":[3,3]}]}
+{ "index": { "_index": "my-knn-index", "_id": "2" } }
+{"my_vectors":[{"my_vector":[10,10]},{"my_vector":[20,20]}]}
+```
+
+When you search the data, note that the query structure differs slightly from a regular k-NN search. Wrap your query in a nested query with the specified path. Additionally, your field name should specify both the nested field name and the `knn_vector` field name, separated by a dot (in the following example, `my_vectors.my_vector`):
+```json
+GET my-knn-index/_search
+{
+ "query": {
+ "nested": {
+ "path": "my_vectors",
+ "query": {
+ "knn": {
+ "my_vectors.my_vector": {
+ "vector": [1,1],
+ "k": 2
+ }
+ }
+ }
+ }
+ }
+}
+```
+
+When specifying a k value of 2, you'll retrieve two documents instead of one, even though the three nearest vectors all belong to document 1:
+```json
+{
+ "took": 1,
+ "timed_out": false,
+ "_shards": {
+ "total": 1,
+ "successful": 1,
+ "skipped": 0,
+ "failed": 0
+ },
+ "hits": {
+ "total": {
+ "value": 2,
+ "relation": "eq"
+ },
+ "max_score": 1.0,
+ "hits": [
+ {
+ "_index": "my-knn-index",
+ "_id": "1",
+ "_score": 1.0,
+ "_source": {
+ "my_vectors": [
+ {
+ "my_vector": [
+ 1,
+ 1
+ ]
+ },
+ {
+ "my_vector": [
+ 2,
+ 2
+ ]
+ },
+ {
+ "my_vector": [
+ 3,
+ 3
+ ]
+ }
+ ]
+ }
+ },
+ {
+ "_index": "my-knn-index",
+ "_id": "2",
+ "_score": 0.006134969,
+ "_source": {
+ "my_vectors": [
+ {
+ "my_vector": [
+ 10,
+ 10
+ ]
+ },
+ {
+ "my_vector": [
+ 20,
+ 20
+ ]
+ }
+ ]
+ }
+ }
+ ]
+ }
+}
+```
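+
+If you prefer a client library, the same flow can be reproduced with the `opensearch-py` client. The following is a sketch under a few assumptions: it targets a locally reachable cluster on port 9200, and the connection settings are placeholders (with the security plugin enabled you would also pass `use_ssl`, `verify_certs`, and `http_auth`):
+
+```python
+# Sketch of the same index/search flow using the opensearch-py client.
+# Connection details are placeholders; adjust them for your cluster.
+from opensearchpy import OpenSearch, helpers
+
+client = OpenSearch(hosts=[{"host": "localhost", "port": 9200}])
+
+# Same mapping as the PUT my-knn-index request shown earlier.
+client.indices.create(index="my-knn-index", body={
+    "settings": {"index": {"knn": True}},
+    "mappings": {"properties": {"my_vectors": {"type": "nested", "properties": {
+        "my_vector": {"type": "knn_vector", "dimension": 2,
+                      "method": {"name": "hnsw", "space_type": "l2", "engine": "faiss"}}}}}}
+})
+
+# Same documents as the bulk request shown earlier.
+helpers.bulk(client, [
+    {"_index": "my-knn-index", "_id": "1",
+     "my_vectors": [{"my_vector": [1, 1]}, {"my_vector": [2, 2]}, {"my_vector": [3, 3]}]},
+    {"_index": "my-knn-index", "_id": "2",
+     "my_vectors": [{"my_vector": [10, 10]}, {"my_vector": [20, 20]}]},
+], refresh=True)
+
+# Same nested k-NN query; two documents come back even though the three
+# nearest vectors all belong to document 1.
+response = client.search(index="my-knn-index", body={
+    "query": {"nested": {"path": "my_vectors", "query": {
+        "knn": {"my_vectors.my_vector": {"vector": [1, 1], "k": 2}}}}}
+})
+print([hit["_id"] for hit in response["hits"]["hits"]])  # ['1', '2']
+```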
+
+## Summary
+
+In OpenSearch 2.12, you can now obtain more accurate search results from a k-NN index, even when multiple nearest vectors belong to just a few documents. This represents a significant stride toward establishing OpenSearch as a competitive vector database. Additional enhancements that will support multi-vector functionality, such as inner hit support [[#1447](https://github.com/opensearch-project/k-NN/issues/1447)] and automatic chunking [[#548](https://github.com/opensearch-project/neural-search/issues/548)], are currently in the pipeline. If you want to see these features implemented, please upvote the corresponding GitHub issue. As always, feel free to submit new issues for any other ideas or requests regarding the OpenSearch k-NN functionality in the [k-NN repository](https://github.com/opensearch-project/k-NN).
+
diff --git a/assets/media/blog-images/2024-02-23-multi-vector-support-in-knn/multi-vector-after-1.png b/assets/media/blog-images/2024-02-23-multi-vector-support-in-knn/multi-vector-after-1.png
new file mode 100644
index 0000000000..1a6b700b76
Binary files /dev/null and b/assets/media/blog-images/2024-02-23-multi-vector-support-in-knn/multi-vector-after-1.png differ
diff --git a/assets/media/blog-images/2024-02-23-multi-vector-support-in-knn/multi-vector-before-1.png b/assets/media/blog-images/2024-02-23-multi-vector-support-in-knn/multi-vector-before-1.png
new file mode 100644
index 0000000000..59896faa9b
Binary files /dev/null and b/assets/media/blog-images/2024-02-23-multi-vector-support-in-knn/multi-vector-before-1.png differ
diff --git a/assets/media/blog-images/2024-02-23-multi-vector-support-in-knn/multi-vector-before-2.png b/assets/media/blog-images/2024-02-23-multi-vector-support-in-knn/multi-vector-before-2.png
new file mode 100644
index 0000000000..171aadaec3
Binary files /dev/null and b/assets/media/blog-images/2024-02-23-multi-vector-support-in-knn/multi-vector-before-2.png differ
diff --git a/assets/media/community/members/charliehull.jpg b/assets/media/community/members/charliehull.jpg
new file mode 100644
index 0000000000..08b639f674
Binary files /dev/null and b/assets/media/community/members/charliehull.jpg differ
diff --git a/assets/media/community/members/ericpugh.jpg b/assets/media/community/members/ericpugh.jpg
new file mode 100644
index 0000000000..a3ebf620d5
Binary files /dev/null and b/assets/media/community/members/ericpugh.jpg differ
diff --git a/assets/media/partners/hostkey.png b/assets/media/partners/hostkey.png
new file mode 100644
index 0000000000..a09bf9b4f5
Binary files /dev/null and b/assets/media/partners/hostkey.png differ
diff --git a/assets/media/partners/hostkey/hostkey-logo.png b/assets/media/partners/hostkey/hostkey-logo.png
new file mode 100644
index 0000000000..29ec1f79fd
Binary files /dev/null and b/assets/media/partners/hostkey/hostkey-logo.png differ
diff --git a/events/opensearchcon/2024/europe/index.md b/events/opensearchcon/2024/europe/index.md
index 0d0c86b146..fb237815f3 100644
--- a/events/opensearchcon/2024/europe/index.md
+++ b/events/opensearchcon/2024/europe/index.md
@@ -29,8 +29,8 @@ conference_sections_button_stack:
url: /events/opensearchcon/2024/europe/speakers/index.html
- label: Sessions
url: /events/opensearchcon/2024/europe/sessions/index.html
-# - label: Unconference
-# url: /events/opensearchcon/2024/europe/unconference/index.html
+ - label: Unconference
+ url: /events/opensearchcon/2024/europe/unconference/index.html
- label: Workshops
url: /events/opensearchcon/2024/europe/workshops/index.html
related_articles:
diff --git a/events/opensearchcon/2024/europe/unconference/index.md b/events/opensearchcon/2024/europe/unconference/index.md
index 8861706e7f..90acb13592 100644
--- a/events/opensearchcon/2024/europe/unconference/index.md
+++ b/events/opensearchcon/2024/europe/unconference/index.md
@@ -14,13 +14,52 @@ breadcrumbs:
- title: Unconference
url: /events/opensearchcon/2024/europe/unconference/index.html
speaker_talk_title: Unconference
-#session_time: '2023-09-27 - 1:00pm-5:30pm'
-#session_room: Grand Sheraton Seattle Willow Room
+session_time: '2024-05-06 - 1:00pm-5:00pm'
+session_room: Asgabat - Second Stage
session_track: Unconference
-#primary_speaker_name: nknize
+primary_speaker_name: krisfreedain
# hero_banner_image: /assets/media/opensearchcon/Uncon_web-1399x627.png
presenters:
- krisfreedain
conference_id: 2024-europe
permalink: /events/opensearchcon/2024/europe/unconference/index.html
---
+
+We had a fantastic time in Seattle last year, so we're bringing the Unconference to Berlin!
+Make plans to join us and come ready to pitch
+your favorite speaking topic! This is an opportunity for the community to
+come together and kick off OpenSearchCon Europe with an action-packed afternoon
+of sharing and discovery. With no pre-planned talks, what you want to hear
+about will be determined by you and your fellow conference-goers.
+
+### Attendees/Speakers
+
+This is a first-come, first-served event; the room holds up to 100 individuals
+and anyone registered for OpenSearchCon is welcome to attend until the
+room is full. We advise you to show up a little early to make sure you get
+your spot.
+
+Each speaker has 15 minutes to do with as they see fit! Want to talk for 10
+minutes and have 5 minutes of questions? Great! Have a lot to say and want
+to talk for the whole 15? That works too! Are you a maintainer and want to
+hold a lightning round of questions with the audience? Fantastic! You get the
+picture. Just be mindful of your 15 minutes!
+
+### Voting
+
+At 1:00 PM, each attendee will receive a card and three gold-star stickers.
+Those who would like to give a talk will write their title and brief description
+on the card, put their name on the back, and post them to the board for
+voting.
+
+At 1:10, you'll have 10 minutes to walk the board and place your gold stars on
+the talks you would like to hear (one star per talk). Two rules: please do not
+vote for yourself, and if a card already has all five spaces filled, you can no
+longer vote for it.
+
+At 1:20, Kris and a lucky volunteer will collect and sort the cards, then select
+the day's talks from the cards receiving the most votes while ensuring the
+widest range of topics is covered.
+
+At 1:30, we will return the cards to the board in the order each talk will be
+given. Be ready to talk and participate!
\ No newline at end of file
diff --git a/events/opensearchcon/2024/europe/workshops/index.md b/events/opensearchcon/2024/europe/workshops/index.md
index 4c6b7271a8..8e60e7ef53 100644
--- a/events/opensearchcon/2024/europe/workshops/index.md
+++ b/events/opensearchcon/2024/europe/workshops/index.md
@@ -1,13 +1,13 @@
---
layout: opensearchcon_workshops
-primary_title: OpenSearchCon 2024 Workshops
-title: OpenSearchCon 2024 Workshops
+primary_title: "OpenSearchCon 2024 Workshops"
+title: "OpenSearchCon 2024 Workshops"
breadcrumbs:
icon: community
items:
- title: OpenSearchCon
url: /events/opensearchcon/index.html
- - title: '2024'
+ - title: 2024
url: /events/opensearchcon/2024/index.html
- title: Europe
url: /events/opensearchcon/2024/europe/index.html