PLNSRVCE-769: Add minio tenant for tekton results #443
Conversation
I'll start working on the shellcheck / yamllint items
/assign @AndrienkoAleksandr
static checks clean
Hmm, can't seem to assign @AndrienkoAleksandr or add him as a Reviewer.
OK, CI failed with
There is an issue with my GitHub ID and the pipeline-service CI cluster, and I cannot see the full logs. @Roming22 is working with me on Slack to see if we can sort it out. Depending on the outcome, we'll see about local repros, others looking at the logs and suggesting changes, etc.
I have a debug commit up based on the full logs @Roming22 provided me (he's reached out to someone else to see if they can fix my CI cluster auth issues).
Well, the fact that we can't dump events from these scripts should be adjusted for debug down the road:
I'll see about getting the rest of the logs tomorrow, either with my hopefully fixed ID on the test cluster or by getting somebody to pastebin them again.
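Something like the following is the kind of failure-time dump the scripts could grow; this is a hedged sketch, not code from the repository - the function name `dump_debug_info` and the hard-coded `tekton-results` example are illustrative, and it assumes `kubectl` access to the test cluster.

```shell
#!/usr/bin/env bash
# Illustrative sketch: dump Warning events, pod status, and recent logs for a
# namespace when CI fails. Names here are hypothetical, not from the repo.
set -o errexit -o nounset -o pipefail

dump_debug_info() {
    local namespace="$1"
    # Warning events usually explain SCC / permission rejections directly.
    kubectl get events -n "$namespace" --field-selector type=Warning \
        --sort-by=.lastTimestamp
    # Pod status, plus a short log tail for every pod in the namespace.
    kubectl get pods -n "$namespace" -o wide
    for pod in $(kubectl get pods -n "$namespace" -o name); do
        kubectl logs -n "$namespace" "$pod" --all-containers --tail=50 || true
    done
}

# Example invocation (requires cluster access):
#   dump_debug_info tekton-results
```

Wiring a call like this into the scripts' error paths would have avoided the pastebin round-trips described above.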
Looks like @AndrienkoAleksandr's workaround from #419 (comment)
I'll see about bumping the minio level to pick up that change that got merged up there. That way, we can leverage OpenShift's randomized setting of the UID. Various other items were also noted as forbidden in that event. It looks like we'll need to associate an SCC with some additional permissions to get that Pod's creation to pass, but I'll fix the UID first, then see. @adambkaplan FYI ^^
UPDATE - only 4.5.7 is available through the normal channel. At least for now you have to pay for 4.5.8 (which has the PR 1403 change), per my attempt to install from the console on my personal 4.11 cluster for the day. Conversely, @AndrienkoAleksandr's alternative minio PR has not been merged. IIRC the UID range itself may be somewhat random, but I'll try something within that range. Otherwise, we may have to bump permissions on this thing beyond non-root.
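The "leverage OpenShift's randomized setting of the UID" direction amounts to not pinning a UID at all. A hedged sketch of what that looks like in a pod/container security context (field placement within the minio Tenant CR depends on the operator version, so treat this as illustrative):

```yaml
# Sketch only: under the restricted SCC, OpenShift injects a runAsUser from
# the namespace's annotated UID range. Hard-coding runAsUser/runAsGroup only
# works on the one cluster whose range happens to contain those IDs, which is
# the failure mode discussed above.
securityContext:
  runAsNonRoot: true
  # Intentionally no runAsUser / runAsGroup / fsGroup: let the SCC assign them.
```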
OK, we have passing tests @adambkaplan @Roming22 @AndrienkoAleksandr !! I'm going to squash a couple of commits specific to minio, then generalize my debug additions a bit, and then post here about final reviews / merging. I suspect we'll minimally want more testing, but at first blush I suggest doing that in follow-up PRs, as this one and @AndrienkoAleksandr's #419 have been the lucky recipients of multiple rebases with other PRs merging ahead.
OK, this is ready for review @adambkaplan @Roming22 @AndrienkoAleksandr
Generally looks good. I ask that the debug commits be squashed into a single one so it's clearer that we're adding RBAC to debug failing tests.
Thanks, and done @adambkaplan
@gabemontero Hello. Thanks a lot for the help. But, sorry, I think my workaround with the security context was wrong. If I understood the docs correctly, OpenShift provides a different range for userId and groupId on each cluster. So a manually provided group and user id can work on one cluster and fail on another. It looks like we need to provide something more complex in the security context... but I need to read more docs and play with this stuff to figure out a more universal solution.
With @AndrienkoAleksandr's latest comment, I'm going to put a "WIP" in the PR description so we know not to merge it at the moment.
Yeah, I think you may be correct @AndrienkoAleksandr, and that ties in with the suspicion I noted in #443 (comment). So I'll engage on a couple of items for this, short term / today
Yeah, if 1403/4.5.8 is in fact not addressing things, then we push on your PR 1407: give them the errors we see on 4.11 with 4.5.8, provide yamls so that they can reproduce, and perhaps give them instructions for bringing up a 4.11 cluster through OKD or one of our free tiers at cloud.openshift.com if that is possible. Now, to "officially" determine the state of 4.5.8 before it lands in the stable channel, I'm not sure how we test it in our CI here. But maybe there is some operator hub nuance I'm missing that @adambkaplan may be aware of (he has messed with that a lot more than I have). @Roming22 FYI. Based on where the findings land, if we have an acceptable workaround for this pod permission item, I'll remove the WIP tag and we'll go from there. Thanks, everyone.
hmmm... now it is failing in the
There has been 1 commit since yesterday, though I'm not sure if it has any bearing here. Got sidetracked; I'll look into this after lunch.
+1 to controlling the operator version. It will save us from unexpected bugs.
fyi @adambkaplan - @AndrienkoAleksandr and I finally had an opportunity to have a voice-to-voice conversation. Net:
Assuming CI passes, I'll remove the WIP designation, make sure @Roming22 and team have no more comments, and we'll merge. @AndrienkoAleksandr can then pursue any tuning / post-startup workarounds needed in follow-up PRs.
OK, all green @Roming22 - from the API team perspective we are good here per #443 (comment); do you want one more round of infra team review, or can one of us hit the merge button?
@gabemontero perhaps squash commits, then I'll let @Roming22 review so we're in sync.
Commits squashed @adambkaplan
kind: Tenant
metadata:
  name: storage
  namespace: tekton-results
Does it make more sense to make the Tenant a dev overlay?
It's unclear to me what the role of operator/gitops/argocd/pipeline-service/tekton-results/overlays/dev is.
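For context on the "Tenant as a dev overlay" suggestion, a minimal kustomize layout could look like the following. This is an illustrative sketch, not the repository's actual paths: the base would deploy tekton-results without any storage backend, and only the dev overlay would pull in the minio Tenant.

```yaml
# overlays/dev/kustomization.yaml (hypothetical paths/filenames)
# The base stays storage-agnostic; minio is added only for the dev flow,
# leaving room for a different overlay (e.g. S3) elsewhere.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
  - minio-tenant.yaml
```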
if [ ! -e "$results_kustomize" ]; then
  results_dir="$(dirname "$results_kustomize")"
  mkdir -p "$results_dir"
  if [[ -z $TEKTON_RESULTS_DATABASE_USER || -z $TEKTON_RESULTS_DATABASE_PASSWORD ]]; then
    printf "[ERROR] Tekton results database variable is not set, either set the variables using \n \
the config.yaml under tekton_results_db \n \
Or create '%s' \n" "$results_minio_secret" >&2
Didn't you mix up $results_minio_secret and $results_secret in this block and the following block?
Again, I'll need to defer to @AndrienkoAleksandr to respond and comment on his original intent here.
@gabemontero remove this line, please.
removed / pushed
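Per the reviewer's point, the error message should name the database-credential secret file the script is about to create, not the minio one. A hedged sketch of the corrected check (the helper name `check_db_credentials` is illustrative; the script itself inlines this logic):

```shell
# Sketch: validate the Tekton Results DB env vars, naming "$results_secret"
# (the file this branch creates) in the error, not "$results_minio_secret".
check_db_credentials() {
    local results_secret="$1"
    if [[ -z "${TEKTON_RESULTS_DATABASE_USER:-}" || -z "${TEKTON_RESULTS_DATABASE_PASSWORD:-}" ]]; then
        printf "[ERROR] Tekton results database variables are not set; either set them \n \
in config.yaml under tekton_results_db, or create '%s'\n" "$results_secret" >&2
        return 1
    fi
}
```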
    Or create '%s' \n" "$results_secret" >&2
    exit 1
  fi

  kubectl create namespace tekton-results --dry-run=client -o yaml > "$results_namespace"
  kubectl create secret generic -n tekton-results tekton-results-database --from-literal=DATABASE_USER="$TEKTON_RESULTS_DATABASE_USER" --from-literal=DATABASE_PASSWORD="$TEKTON_RESULTS_DATABASE_PASSWORD" --dry-run=client -o yaml > "$results_secret"

  yq e -n '.resources += ["namespace.yaml", "tekton-results-secret.yaml"]' > "$results_kustomize"
  echo "---
We avoid inlining yaml files as it prevents linting. Have a template that you modify on the fly, preferably using yq.
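One way to follow that suggestion is to keep a lint-able template file checked in and patch it at generation time rather than echoing YAML from the script. The file name `kustomization.template.yaml` and the patching command in the comment are illustrative, not from the repository:

```yaml
# kustomization.template.yaml (hypothetical): this file can be yamllint-ed,
# unlike an inline heredoc. The generation script would then adjust it, e.g.:
#   yq e '.resources += ["minio-tenant.yaml"]' \
#     kustomization.template.yaml > "$results_kustomize"
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespace.yaml
  - tekton-results-secret.yaml
```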
@@ -23,6 +23,11 @@ SCRIPT_DIR="$(
  pwd
)"

ROOT_DIR=$(
If possible, please call it PROJECT_DIR.
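The requested rename is a small change; a sketch of the renamed variable alongside the existing SCRIPT_DIR pattern (the `../..` depth is an assumption about where the script sits in the tree):

```shell
# Compute the repository root once, relative to the script's own location,
# under the reviewer's preferred name PROJECT_DIR (instead of ROOT_DIR).
SCRIPT_DIR="$(
    cd "$(dirname "${BASH_SOURCE[0]:-$0}")" >/dev/null
    pwd
)"
# NOTE: the ../.. here is illustrative; adjust to the script's real depth.
PROJECT_DIR="$(
    cd "$SCRIPT_DIR/../.." >/dev/null
    pwd
)"
```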
OK, I've attempted to address @adambkaplan's #443 (comment), though I'm not crazy confident in it. Given @Roming22's additional comments - some of which get even deeper into @AndrienkoAleksandr's original changes/commits from #419 (which had received prior reviews from various folks) that I just pulled over - and with the new sprint, my new assignment there, plus the openshift work I have to pick up during this sprint's timeframe, it is beginning to feel like my offer to assist @AndrienkoAleksandr may have reached the end of its viability. Either he takes this back over full time, assuming he has the cycles, or we apply other resources to help, or something else. But I'll report back one more time today after the current CI run completes, and we'll go from there.
Yeah, that is a question for @AndrienkoAleksandr to answer.
…ot security context constraint, and dumping subscription information; add access to events for debug; add access to pod/logs for debug; add warning events to deployment/pod error debug
We wanted to provide minio only for the dev flow. For production we wanted to use Amazon S3 log storage. So for me it was natural to split out all the minio-related changes and move them to the dev folder. I thought that we would create one more overlay folder in the staging cluster project, describing the AWS S3 storage secret and the env variables the tekton-results API server needs to connect to that storage. If I misunderstood how it should be, you can simply revert the overlays for tekton-results. Then we can define the AWS changes in follow-up PRs...
My attempt to move the application of
I'll try just reverting the folder altogether per @AndrienkoAleksandr's comment ^^ and we can see where we are at tomorrow AM.
So the revert worked. I'm about to push fixes for some of @Roming22's minor comments. On a couple of @Roming22's questions, I have had to defer to @AndrienkoAleksandr to explain his original intent; I made comments in those threads where needed. The in-line yaml and renaming a bash env var are a bigger lift; I'll address those if time permits today while I get back to my official assignments.
@AndrienkoAleksandr If minio is for dev only, and is used to satisfy a prerequisite to deploy pipeline-service, then the manifests should be in
Your comment about adding an overlay for the staging cluster is worrisome. You are mistaking this repository for a gitops repository managing pipeline-service clusters, which it is not. This repository should not be aware of any cluster operating pipeline-service. Any configuration related to the staging cluster should go in infra-deployments.
@Roming22, for the staging cluster configuration we have a separate issue, and we are not going to keep staging configuration in the pipeline-service project. I only wanted to describe the flow in general - what we want to see in the end after some number of pull requests. For dev purposes we want to have minio; for staging we want to use another S3 storage. In both cases we need to overlay tekton-results to apply the desired storage.
Trying to help out @AndrienkoAleksandr propagate #419 while he has electricity issues
Simply rebased off of main and squashed the commits.
Let's see how CI does here.
@adambkaplan @Roming22 FYI