I've been trying to configure kube-web-view for multiple clusters via the --kubeconfig-path argument. The kubeconfig file has a context for each cluster and a token for the kube-web-view service account in each one, including the cluster the pod is running on.
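For reference, a kubeconfig of the shape described might look like this minimal sketch (cluster names, server URLs, and token values are placeholders, not taken from the original report):

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: cluster-a            # placeholder cluster name
    cluster:
      server: https://cluster-a.example.org
  - name: cluster-b
    cluster:
      server: https://cluster-b.example.org
users:
  - name: kube-web-view-a
    user:
      token: "<service-account-token-for-cluster-a>"
  - name: kube-web-view-b
    user:
      token: "<service-account-token-for-cluster-b>"
contexts:
  - name: cluster-a
    context: {cluster: cluster-a, user: kube-web-view-a}
  - name: cluster-b
    context: {cluster: cluster-b, user: kube-web-view-b}
current-context: cluster-a
```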
This pattern works fine with kube-ops-view and kube-resource-report, but kube-web-view only shows me the cluster the pod is running on, labeled "local". It appears to ignore the kubeconfig file I pass to it.
Looking at the code, it appears that the logic is like this:
In my case, since the pod is running in a valid cluster, step 3 succeeds and the kubeconfig file given by --kubeconfig-path is never consulted. If I'm reading it correctly, step 4 can never be reached when running inside a cluster, only when running outside of one.
If that's right, could the logic be flipped, so the service account is only used to discover the local cluster when no kubeconfig file is provided?
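The proposed flip could be sketched like this (a hypothetical illustration, not kube-web-view's actual API; the function name and return values are made up, though the token path is the standard in-cluster mount location):

```python
import os

# Standard path where Kubernetes mounts the service account token in a pod.
SERVICE_ACCOUNT_TOKEN = "/var/run/secrets/kubernetes.io/serviceaccount/token"

def discover_clusters(kubeconfig_path=None):
    """Hypothetical discovery order: prefer an explicitly provided
    kubeconfig, and only fall back to the in-cluster service account
    when no kubeconfig is given."""
    if kubeconfig_path and os.path.exists(kubeconfig_path):
        # A kubeconfig was provided: load all clusters from it,
        # including the local one if it has a context there.
        return ("kubeconfig", kubeconfig_path)
    if os.path.exists(SERVICE_ACCOUNT_TOKEN):
        # No kubeconfig given: discover only the local cluster
        # via the pod's service account.
        return ("in-cluster", None)
    raise RuntimeError("no cluster configuration found")
```

With this ordering, running in a cluster no longer shadows an explicitly supplied kubeconfig.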
I have this working with a Helm chart; maybe I'll contribute the chart as suggested in the other ticket.
@mkanoap did you check https://kube-web-view.readthedocs.io/en/latest/setup.html ?
OK, this is a bug! Will fix!
Should be fixed in 19.9.4: https://codeberg.org/hjacobs/kube-web-view/releases