[FEATURE] OpenSearch Plugin Connectors and other commands #178
Comments
@uriofferup Are you willing to try https://github.com/opensearch-project/opensearch-go/blob/main/guides/json.md ?
Hey @uriofferup, thanks for your recommendation. I had an offline sync-up with @rblcoder; the terraform provider should be handling the CRUD operations, having users directly execute the […]. Also, just curious, is this […]? Thanks.
Hi there, I just realized my issue #203 was a duplicate of this, so I closed it. One thing I will add is that registering the connector as a model also needs to be supported, because downstream tasks that are supported in tf will need to point to the registered model. I'm looking to set up a full neural search pipeline in tf, and I believe these are the last necessary components, so I would appreciate it if validation could include that as a requirement. Also, curious if this feature is on the roadmap?
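For context, registering the connector as a model is a separate ML Commons call (`POST /_plugins/_ml/models/_register` with `function_name: remote` and a `connector_id`). Below is a minimal sketch of how the generic command resource proposed later in this issue might cover it; the resource type, its arguments, and all values are hypothetical placeholders, not an existing provider API:

```hcl
# Hypothetical resource and arguments, following the proposal in this issue
# (not an existing provider API). Registers a "remote" model that points at
# an already-created connector; the connector ID is a placeholder.
resource "opensearch_execute_command" "register_remote_model" {
  method = "POST"
  path   = "/_plugins/_ml/models/_register"

  body = jsonencode({
    name          = "my-remote-model"           # placeholder model name
    function_name = "remote"                    # marks the model as connector-backed
    description   = "Model backed by an external ML connector"
    connector_id  = "REPLACE_WITH_CONNECTOR_ID" # ID returned when the connector was created
  })
}
```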
Is your feature request related to a problem?
I'm trying to automate deployments of OpenSearch with its associated applications. In the past I tried using `awscurl` to create indices, but that has its limitations. Using the OpenSearch Terraform Provider helped, but I still found some other required initial configurations that are hard to automate.

What solution would you like?
I'm looking to create something like an `opensearch_execute_command` resource that helps simplify many configurations as an initial step before creating specialized resources that correctly manage the lifecycle. This could be set in a resource as something like the sketch below.
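A minimal sketch of what such a resource could look like; the resource type and its arguments (`method`, `path`, `body`) are hypothetical, since this feature does not exist yet, and the cluster-settings payload is only an arbitrary example:

```hcl
# Illustrative only: resource type and argument names (method, path, body) are
# hypothetical, sketched from this proposal rather than an existing provider API.
resource "opensearch_execute_command" "example" {
  method = "PUT"
  path   = "/_cluster/settings"

  body = jsonencode({
    persistent = {
      "plugins.ml_commons.only_run_on_ml_node" = false
    }
  })
}
```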
Then the resource could expose a result with the response body.
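For example (again with a hypothetical attribute name):

```hcl
# Hypothetical computed attribute holding the raw response body.
output "command_response" {
  value = opensearch_execute_command.example.result
}
```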
What alternatives have you considered?
Using a null resource with `awscurl`, but that has its challenges with authentication and parsing the results.

Do you have any additional context?
For example, following the documentation for implementing external ML models, it's required to send something like the request sketched below.
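A hedged sketch of that call expressed through the proposed resource; the payload loosely follows the shape of the OpenSearch ML Commons connector blueprints, and the resource arguments, endpoint, model, and credential values are all placeholders:

```hcl
# Illustrative only: the resource and its arguments are hypothetical, and the
# payload mirrors the general shape of the OpenSearch ML Commons connector
# blueprints. Endpoint, model, and credential values are placeholders.
resource "opensearch_execute_command" "ml_connector" {
  method = "POST"
  path   = "/_plugins/_ml/connectors/_create"

  body = jsonencode({
    name        = "External chat model connector"
    description = "Connector to an external chat completion service"
    version     = 1
    protocol    = "http"
    parameters = {
      endpoint = "api.openai.com"
      model    = "gpt-3.5-turbo"
    }
    credential = {
      openAI_key = "REPLACE_WITH_API_KEY"
    }
    actions = [{
      action_type  = "predict"
      method       = "POST"
      url          = "https://$${parameters.endpoint}/v1/chat/completions"
      headers      = { Authorization = "Bearer $${credential.openAI_key}" }
      request_body = "{ \"model\": \"$${parameters.model}\", \"messages\": $${parameters.messages} }"
    }]
  })
}
```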