Jump to: Complete Features | Incomplete Features | Complete Epics | Incomplete Epics | Other Complete | Other Incomplete
Note: this page shows the Feature-Based Change Log for a release
These features were completed when this image was assembled
1. Proposed title of this feature request
Add runbook_url to alerts in the OCP UI
2. What is the nature and description of the request?
If an alert includes a runbook_url label, then it should appear in the UI for the alert as a link (see the example rule after this list).
3. Why does the customer need this? (List the business requirements here)
Customers can easily reach the alert runbook and address their issues.
4. List any affected packages or components.
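For illustration, here is a minimal sketch of an alerting rule that carries a runbook_url, as referenced in item 2 above. The rule name, expression, and the placement of runbook_url (shown as a label, following the wording above; many platform alerts carry it as an annotation instead) are assumptions, not the final implementation.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-runbook-alert          # hypothetical name
  namespace: openshift-monitoring
spec:
  groups:
  - name: example
    rules:
    - alert: ExampleAlertWithRunbook   # hypothetical alert name
      expr: up == 0                    # illustrative expression
      for: 5m
      labels:
        severity: warning
        runbook_url: https://example.com/runbooks/example-alert.md   # illustrative URL
      annotations:
        summary: Example alert that links to its runbook.
```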
OCP/Telco Definition of Done
Epic Template descriptions and documentation.
Rebase openshift-controller-manager to k8s 1.24
As a user I would like to see all the events that the autoscaler creates, even duplicates. Having the CAO set this flag will allow me to continue to see these events.
We have carried a patch for the autoscaler that would enable the duplication of events. This patch can now be dropped because the upstream added a flag for this behavior in https://github.com/kubernetes/autoscaler/pull/4921
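For reference, a sketch of how the CAO could set such a flag on the cluster-autoscaler container; the flag name is my reading of the upstream PR linked above and should be treated as an assumption.

```yaml
# Fragment of the cluster-autoscaler Deployment as the CAO might render it (illustrative)
spec:
  template:
    spec:
      containers:
      - name: cluster-autoscaler
        args:
        - --record-duplicated-events=true   # assumed flag name from kubernetes/autoscaler#4921
```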
When this image was assembled, these features were not yet completed. Therefore, only the Jira Cards included here are part of this release
We have a set of images that should become multiarch images. This should be done both upstream and downstream.
As a reference, we have built those images internally as multiarch and made them available as
They can be consumed by the Assisted Service pod via the following environment variables:
- name: AGENT_DOCKER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:latest
- name: CONTROLLER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-reporter-rhel8:latest
- name: INSTALLER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-rhel8:latest
OLM would have to support a mechanism like podAffinity which allows multiple architecture values to be specified, enabling it to pin operators to worker nodes of the matching architecture.
Ref: https://github.com/openshift/enhancements/pull/1014
Cut a new release of the OLM API and update OLM API dependency version (go.mod) in OLM package; then
Bring the upstream changes from OLM-2674 to the downstream olm repo.
A/C:
- New OLM API version release
- OLM API dependency updated in OLM Project
- OLM Subscription API changes downstreamed
- OLM Controller changes downstreamed
- Changes manually tested on Cluster Bot
As a user, I should be able to configure the CSI driver to have a storage topology.
We drive OpenShift cross-market customer success and new customer adoption with constant improvements and feature additions to the existing capabilities of our OpenShift Core Networking (SDN and Network Edge). This feature captures that natural progression of the product.
There are definitely grey areas, but in general:
Questions to be addressed:
Goal: Provide queryable metrics and telemetry for cluster routes and sharding in an OpenShift cluster.
Problem: Today we test OpenShift performance and scale with best-guess or anecdotal evidence for the number of routes that our customers use. The best practice for a large number of routes in a cluster is to shard; however, we have no visibility into whether and how customers are using sharding.
Why is this important? These metrics will inform our performance and scale testing, documented cluster limits, and how customers are using sharding for best practice deployments.
Dependencies (internal and external):
Prioritized epics + deliverables (in scope / not in scope):
Not in scope:
Estimate (XS, S, M, L, XL, XXL):
Previous Work:
Open questions:
Acceptance criteria:
Epic Done Checklist:
Description:
As described in the Design Doc, the following information needs to be exported from the Cluster Ingress Operator:
Design 2 will be implemented as part of this story.
Acceptance Criteria:
Description:
As described in the Metrics to be sent via telemetry section of the Design Doc, the following metrics need to be sent from the OpenShift cluster to Red Hat premises:
The metrics should be allowlisted on the cluster side.
The steps described in Sending metrics via telemetry need to be followed. Specifically step 5.
Depends on CFE-478.
Acceptance Criteria:
In the console-operator repo we need to add the `capability.openshift.io/console` annotation to all the manifests that the operator either contains or creates on the fly (see the sketch below).
Manifests are currently present in /bindata and /manifest directories.
Here is an example of the insights-operator change.
Here is the overall enhancement doc.
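A sketch of what one annotated console-operator manifest could look like; the exact annotation key and value follow the cluster-capabilities convention described in the enhancement doc above, so treat the key/value shown here as an assumption.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: console
  namespace: openshift-console
  annotations:
    # Assumed key/value per the capabilities convention referenced above
    capability.openshift.io/name: Console
```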
This is an epic bucket for all activities surrounding the creation of a declarative approach to release and maintain OLM catalogs.
When working on this Epic, it's important to keep in mind this other potentially related Epic: https://issues.redhat.com/browse/OLM-2276
Enhance the veneer rendering to be able to read the input veneer data from stdin, via a pipe, in a manner similar to https://dev.to/napicella/linux-pipes-in-golang-2e8j
Then the command could be used in a manner similar to many k8s examples, like:
```shell
opm alpha render-veneer semver -o yaml < infile > outfile
```
Upstream issue link: https://github.com/operator-framework/operator-registry/issues/1011
Jira Description
As an OPM maintainer, I want to downstream the PR for OCP 4.12 and backport it to OCP 4.11 so that IIB will NOT be impacted by the changes when it upgrades the OPM version to use the next/future opm upstream release (v1.25.0).
Summary / Background
IIB (the downstream service that manages the indexes) uses the upstream version. If they bump the OPM version to the next/future release (v1.25.0) with this change before the downstream images are updated, then the process to manage the indexes downstream will face issues and it will impact the distributions.
Acceptance Criteria
Definition of Ready
Definition of Done
Feature Overview
Provide CSI drivers to replace all the in-tree cloud provider drivers we currently have. These drivers will probably be released as Tech Preview versions first before being promoted to GA.
Goals
Requirements
Requirement | Notes | isMvp? |
---|---|---|
Framework for CSI driver | TBD | Yes |
Drivers should be available to install both in disconnected and connected mode | | Yes |
Drivers should upgrade from release to release without any impact | | Yes |
Drivers should be installable via CVO (when in-tree plugin exists) | | |
Out of Scope
This work will only cover the drivers themselves; it will not include:
Background and strategic fit
In a future Kubernetes release (currently 1.21), in-tree cloud provider drivers will be deprecated and replaced with CSI equivalents. We need the drivers created so that we continue to support the ecosystems in an appropriate way.
Assumptions
Customer Considerations
Customers will need to be able to use the storage they want.
Documentation Considerations
This Epic is to track the GA of this feature
As an OCP user, I want images for GCP Filestore CSI Driver and Operator, so that I can install them on my cluster and utilize GCP Filestore shares.
We need to continue to maintain specific areas within storage, this is to capture that effort and track it across releases.
Goals
Requirements
Requirement | Notes | isMvp? |
---|---|---|
Telemetry | | No |
Certification | | No |
API metrics | | No |
Out of Scope
n/a
Background and strategic fit
With the expected scale of our customer base, we want to keep the load of customer tickets / BZs low
Assumptions
Customer Considerations
Documentation Considerations
Notes
In progress:
High prio:
Unsorted
Traditionally we did these updates as bugfixes, because we did them after the feature freeze (FF). We are trying no-feature-freeze in 4.12. We will try to do as much as we can before FF, but we're quite sure something will slip past FF as usual.
Update all OCP and kubernetes libraries in storage operators to the appropriate version for OCP release.
This includes (but is not limited to):
Operators:
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
There is a new driver release 5.0.0 since the last rebase that includes snapshot support:
https://github.com/kubernetes-sigs/ibm-vpc-block-csi-driver/releases/tag/v5.0.0
Rebase the driver on v5.0.0 and update the deployments in ibm-vpc-block-csi-driver-operator.
There are no corresponding changes in ibm-vpc-node-label-updater since the last rebase.
Update all CSI sidecars to the latest upstream release.
This includes update of VolumeSnapshot CRDs in https://github.com/openshift/cluster-csi-snapshot-controller-operator/tree/master/assets
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
This includes ibm-vpc-node-label-updater!
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
The end of general support for vSphere 6.7 will be on October 15, 2022, so vSphere 6.7 will be deprecated in 4.11.
We want to encourage vSphere customers to upgrade to vSphere 7 in OCP 4.11 since VMware is ending general support for vSphere 6.7 in Oct 2022.
We want to set the cluster to Upgradeable=false and have a strong alert pointing to our docs / requirements.
related slack: https://coreos.slack.com/archives/CH06KMDRV/p1647541493096729
This Epic tracks the GA of this feature
Epic Goal
On new installations, we should make the StorageClass created by the CSI operator the default one.
However, we shouldn't do that on an upgrade scenario. The main reason is that users might have set a different quota on the CSI driver Storage Class.
Exit criteria:
On new installations, we should make the StorageClass created by the CSI operator the default one.
However, we shouldn't do that on an upgrade scenario. The main reason is that users might have set a different quota on the CSI driver Storage Class.
Exit criteria:
tldr: three basic claims, the rest is explanation and one example
While bugs are an important metric, fixing bugs is different from investing in maintainability and debuggability. Investing in fixing bugs will help alleviate immediate problems, but doesn't improve the ability to address future problems. You (may) get a code base with fewer bugs, but when you add a new feature, it will still be hard to debug problems and interactions. This pushes a code base towards stagnation where it gets harder and harder to add features.
One alternative is to ask teams to produce ideas for how they would improve future maintainability and debuggability instead of focusing on immediate bugs. This would produce designs that make problem determination, bug resolution, and future feature additions faster over time.
I have a concrete example of one such outcome of focusing on bugs vs quality. We have resolved many bugs about communication failures with ingress by finding problems with point-to-point network communication. We have fixed the individual bugs, but have not improved the code for future debugging. In so doing, we chase many hard-to-diagnose problems across the stack. The alternative is to create a point-to-point network connectivity capability. This would immediately improve bug resolution and stability (detection) for kuryr, ovs, legacy sdn, network-edge, kube-apiserver, openshift-apiserver, authentication, and console. Bug fixing does not produce the same impact.
We need more investment in our future selves. Saying "teams should reserve this" doesn't seem to be universally effective. Perhaps an approach that directly asks for designs and impacts, and then follows up by placing the items directly in planning and prioritizing against PM feature requests, would give teams the confidence to invest in these areas and give broad exposure to systemic problems.
Relevant links:
Epic Template descriptions and documentation.
Enable the chaos plugin https://coredns.io/plugins/chaos/ in our CoreDNS configuration so that we can use a DNS query to easily identify what DNS pods are responding to our requests.
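A conceptual sketch of the kind of Corefile stanza this would add, shown here inside the dns-default ConfigMap managed by the DNS operator; the server block and the other plugins listed are illustrative, not the operator's actual output.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: dns-default
  namespace: openshift-dns
data:
  Corefile: |
    .:5353 {
        chaos        # answers CH-class queries such as hostname.bind, identifying the responding pod
        errors
        cache 900
        forward . /etc/resolv.conf
    }
```

A query along the lines of `dig +short CH TXT hostname.bind @<dns-pod-ip> -p 5353` would then identify which DNS pod answered.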
Requirement | Notes | isMvp? |
---|---|---|
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
This Section:
This Section: What does the person writing code, testing, documenting need to know? What context can be provided to frame this feature.
Questions to be addressed:
As a developer, I want to make status.HostIP for Pods visible in the Pod details page of the OCP Web Console. Currently there is no way to view the node IP for a Pod in the OpenShift Web Console. When viewing a Pod in the console, the field status.HostIP is not visible.
Acceptance criteria:
As a console user I want to have the option to:
For Deployments we will add the 'Restart rollout' action button. This action will PATCH the Deployment object's 'spec.template.metadata.annotations' block by adding the 'openshift.io/restartedAt: <actual-timestamp>' annotation. This will restart the deployment by creating a new ReplicaSet.
For DeploymentConfigs we will add the 'Retry rollout' action button. This action will PATCH the latest revision of the ReplicationController object's 'metadata.annotations' block by setting 'openshift.io/deployment/phase: "New"' and removing openshift.io/deployment.cancelled and openshift.io/deployment.status-reason.
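For illustration, minimal sketches of the two patch bodies described above; the timestamp is a placeholder and the annotation keys are taken verbatim from the description.

```yaml
# Deployment 'Restart rollout': strategic merge patch body
spec:
  template:
    metadata:
      annotations:
        openshift.io/restartedAt: "2022-08-15T10:00:00Z"   # placeholder timestamp
---
# DeploymentConfig 'Retry rollout': patch applied to the latest ReplicationController
metadata:
  annotations:
    openshift.io/deployment/phase: "New"   # keys as written in the description above
    # openshift.io/deployment.cancelled and openshift.io/deployment.status-reason are removed
```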
Acceptance Criteria:
BACKGROUND:
OpenShift console will be updated to allow rollout restart deployment from the console itself.
Currently, from the OpenShift console, for the resource “deploymentconfigs” we can only start and pause the rollout, and for the resource “deployment” we can only resume the rollout. Neither resource (deployment & deployment config) has an option to restart the rollout. That is why the customer wants this functionality, so the same action can be performed from the OpenShift console as from the CLI.
The customer wants developers who are not fluent with the oc tool and terminal utilities to be able to use the console instead of the terminal to restart a deployment, just like it is done through the CLI with the command “oc rollout restart deploy/<deployment-name>“.
Usually, when developers change the config map that a deployment uses, they have to restart the pods. Currently, developers have to use the oc rollout restart deployment command. The customer wants a button/menu that performs the same action from the console as well.
Design
Doc: https://docs.google.com/document/d/1i-jGtQGaA0OI4CYh8DH5BBIVbocIu_dxNt3vwWmPZdw/edit
When OCP is performing a cluster upgrade, the user should be notified about this fact.
There are two possibilities for how to surface the cluster upgrade to users:
AC:
Note: We need to decide whether we want to distinguish this particular notification with a different color. ccing Ali Mobrem
Created from: https://issues.redhat.com/browse/RFE-3024
4.11 MVP Requirements
Out of scope use cases (that are part of the Kubeframe/factory project):
Questions to be addressed:
As an OpenShift infrastructure owner, I want to deploy a cluster zero with RHACM or MCE and have the required components installed when the installation is completed
BILLI makes it easier to deploy a cluster zero. BILLI users know at installation time what the purpose of their cluster is when they plan the installation. Day-2 steps are necessary to install operators, and users, especially when automating installations, want to finish the installation flow with their required components already installed.
As a customer, I want to be able to:
so that I can achieve
Description of criteria:
We are only allowing the user to provide extra manifests to install MCE at this time. We are not adding an option to "install mce" on the command line (or UI)
This requires/does not require a design proposal.
This requires/does not require a feature gate.
As a customer, I want to be able to:
so that I can achieve
Description of criteria:
We are only allowing the user to provide extra manifests to install MCE at this time. We are not adding an option to "install mce" on the command line (or UI)
This requires/does not require a design proposal.
This requires/does not require a feature gate.
As an OpenShift infrastructure owner, I want to deploy OpenShift clusters with dual-stack IPv4/IPv6
As an OpenShift infrastructure owner, I want to deploy OpenShift clusters with single-stack IPv6
IPv6 and dual-stack clusters are requested often by customers, especially Telco customers. Working with dual-stack clusters is a requirement for many, but it is also a transition into single-stack IPv6 clusters, which for some of our users is the final destination.
Karim's work proving how agent-based can deploy IPv6: IPv6 deploy with agent-based installer
For dual-stack installations the agent-cluster-install.yaml must have both an IPv4 and an IPv6 subnet in the networking.MachineNetwork or assisted-service will throw an error. This field is in InstallConfig but it must be added to agent-cluster-install in its Generate().
For IPv4-only and IPv6-only installs, setting up the MachineNetwork is not needed, but it also does not cause problems if it's set, so it should be fine to set it at all times.
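A minimal sketch of the relevant agent-cluster-install.yaml fragment for a dual-stack install; the CIDRs and name are illustrative.

```yaml
apiVersion: extensions.hive.openshift.io/v1beta1
kind: AgentClusterInstall
metadata:
  name: example-agent-cluster-install      # illustrative
spec:
  networking:
    machineNetwork:
    - cidr: 192.168.111.0/24               # IPv4 subnet
    - cidr: fd2e:6f44:5dd8:c956::/120      # IPv6 subnet
```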
Set the ClusterDeployment CRD to deploy OpenShift in FIPS mode and make sure that after deployment the cluster is set in that mode.
In order to install FIPS-compliant clusters, we need to make sure that installconfig + agentconfig based deployments take the FIPS config in installconfig into account.
This task is about passing the config to agentclusterinstall so it makes it into the ISO. Once there, AGENT-374 will give it to assisted-service.
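For reference, FIPS is requested in install-config.yaml as in the fragment below; this task is about carrying that value through to agent-cluster-install so it ends up in the generated ISO.

```yaml
# install-config.yaml (fragment)
apiVersion: v1
metadata:
  name: example-cluster   # illustrative
fips: true
```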
Add GA support for deploying OpenShift to IBM Public Cloud
Complete the existing gaps to make OpenShift on IBM Cloud VPC (Next Gen2) General Available
This epic tracks the changes needed to the ingress operator to support IBM DNS Services for private clusters.
Currently in OpenShift we do not support distributing hotfix packages to cluster nodes. In time-sensitive situations, a RHEL hotfix package can be the quickest route to resolving an issue.
Before we ship OCP CoreOS layering in https://issues.redhat.com/browse/MCO-165 we need to switch the format of what is currently `machine-os-content` to be the new base image.
The overall plan is:
After https://github.com/openshift/os/pull/763 is in the release image, teach the MCO how to use it. This is basically:
As an OCP CoreOS layering developer, having telemetry data about the number of clusters using osImageURL will help understand how broadly this feature is getting used, and improve it accordingly.
Acceptance Criteria:
Assumption
Doc: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
CNCC was moved to the management cluster and it should use proxy settings defined for the management cluster.
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumption
cluster-snapshot-controller-operator is running on the CP.
More information here: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
As an OpenShift developer, I want cluster-csi-snapshot-controller-operator to use existing controllers in library-go, so I don’t need to maintain yet more code that does the same thing as library-go.
Note: if this refactoring introduces any new conditions, we must make sure that 4.11 snapshot controller clears them to support downgrade! This will need 4.11 BZ + z-stream update!
Similarly, if some conditions become obsolete / not managed by any controller, they must be cleared by 4.12 operator.
Exit criteria:
As HyperShift Cluster Instance Admin, I want to run cluster-csi-snapshot-controller-operator in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumption
Run cluster-storage-operator (CSO) + AWS EBS CSI driver operator + AWS EBS CSI driver control-plane Pods in the management cluster, run the driver DaemonSet in the hosted cluster.
More information here: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
As HyperShift Cluster Instance Admin, I want to run cluster-storage-operator (CSO) in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
As an OCP support engineer, I want the same guest cluster storage-related objects in the output of "hypershift dump cluster --dump-guest-cluster" as in "oc adm must-gather", so I can debug storage issues easily.
must-gather collects: storageclasses, persistentvolumes, volumeattachments, csidrivers, csinodes, volumesnapshotclasses, volumesnapshotcontents
hypershift collects none of this, the relevant code is here: https://github.com/openshift/hypershift/blob/bcfade6676f3c344b48144de9e7a36f9b40d3330/cmd/cluster/core/dump.go#L276
Exit criteria:
As HyperShift Cluster Instance Admin, I want to run AWS EBS CSI driver operator + control plane of the CSI driver in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
oc-mirror is a GA product as of OpenShift 4.11.
The goal of this feature is to address any future customer requests for new features or capabilities in oc-mirror.
Pre-Work Objectives
Since some of our requirements from the ACM team will not be available for the 4.12 timeframe, the team should work on anything we can get done in the scope of the console repo so that when the required items are available in 4.13, we can be more nimble in delivering GA content for the Unified Console Epic.
Overall GA Key Objective
Providing our customers with a single simplified User Experience (Hybrid Cloud Console) that is extensible, can run locally or in the cloud, and is capable of managing the fleet as well as deep diving into a single cluster.
Why do customers want this?
Why do we want this?
Phase 2 Goal: Productization of the united Console
As a developer I would like to disable clusters like *KS that we can't support for multi-cluster (for instance because we can't authenticate). The ManagedCluster resource has a vendor label that we can use to know if the cluster is supported.
cc Ali Mobrem Sho Weimer Jakub Hadvig
UPDATE: 9/20/22 : we want an allow-list with OpenShift, ROSA, ARO, ROKS, and OpenShiftDedicated
Acceptance criteria:
RHEL CoreOS should be updated to RHEL 9.2 sources to take advantage of newer features, hardware support, and performance improvements.
Requirement | Notes | isMvp? |
---|---|---|
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
Questions to be addressed:
PROBLEM
We would like to improve our signal for RHEL9 readiness by increasing internal engineering engagement and external partner engagement on our community OpenShift offering, OKD.
PROPOSAL
Adding OKD to run on SCOS (a CentOS stream for CoreOS) brings the community offering closer to what a partner or an internal engineering team might expect on OCP.
ACCEPTANCE CRITERIA
Image has been switched/included:
DEPENDENCIES
The SCOS build payload.
RELATED RESOURCES
OKD+SCOS proposal: https://docs.google.com/presentation/d/1_Xa9Z4tSqB7U2No7WA0KXb3lDIngNaQpS504ZLrCmg8/edit#slide=id.p
OKD+SCOS work draft: https://docs.google.com/document/d/1cuWOXhATexNLWGKLjaOcVF4V95JJjP1E3UmQ2kDVzsA/edit
Acceptance Criteria
A stable OKD on SCOS is built and available to the community every sprint.
This comes up when installing ipi-on-aws on arm64 with the custom payload build at quay.io/aleskandrox/okd-release:4.12.0-0.okd-centos9-full-rebuild-arm64 that is using SCOS as the machine-os-content image.
```
[root@ip-10-0-135-176 core]# crictl logs c483c92e118d8
2022-08-11T12:19:39+00:00 [cnibincopy] FATAL ERROR: Unsupported OS ID=scos
```
The probable fix has to land on https://github.com/openshift/cluster-network-operator/blob/master/bindata/network/multus/multus.yaml#L41-L53
The MCO should properly report its state in a way that's consistent and able to be understood by customers, troubleshooters, and maintainers alike.
Some customer cases have revealed scenarios where the MCO state reporting is misleading and therefore could be unreliable to base decisions and automation on.
In addition to correcting some incorrect states, the MCO will be enhanced for a more granular view of update rollouts across machines.
The MCO should properly report its state in a way that's consistent and able to be understood by customers, troubleshooters, and maintainers alike.
For this epic, "state" means "what is the MCO doing?" – so the goal here is to try to make sure that it's always known what the MCO is doing.
This includes:
While this probably crosses a little bit into the "status" portion of certain MCO objects, as some state is definitely recorded there, this probably shouldn't turn into a "better status reporting" epic. I'm interpreting "status" to mean "how is it going" so status is maybe a "detail attached to a state".
Exploration here: https://docs.google.com/document/d/1j6Qea98aVP12kzmPbR_3Y-3-meJQBf0_K6HxZOkzbNk/edit?usp=sharing
https://docs.google.com/document/d/17qYml7CETIaDmcEO-6OGQGNO0d7HtfyU7W4OMA6kTeM/edit?usp=sharing
The current property description is:
configuration represents the current MachineConfig object for the machine config pool.
But in a 4.12.0-ec.4 cluster, the actual semantics seem to be something closer to "the most recent rendered config that we completely leveled on". We should at least update the godocs to be more specific about the intended semantics. And perhaps consider adjusting the semantics?
This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were completed when this image was assembled
This story only covers API components. We will create a separate story for other utility functions.
Today we are generating documentation for Console's Dynamic Plugin SDK in frontend/packages/dynamic-plugin-sdk. We are missing ts-doc for a set of hooks and components.
We are generating the markdown from the dynamic-plugin-sdk using `yarn generate-doc`.
Here is the list of the API that the dynamic-plugin-sdk is exposing:
https://gist.github.com/spadgett/0ddefd7ab575940334429200f4f7219a
Acceptance Criteria:
Out of Scope:
This enhancement introduces support for provisioning and upgrading heterogeneous architecture clusters in phases.
We need to scan through the compute nodes and build a set of supported architectures from those. Each node on the cluster has a label for architecture, e.g. `kubernetes.io/arch=arm64`, `kubernetes.io/arch=amd64`, etc. Based on the set of supported architectures, the console will need to surface only those operators in the Operator Hub which are supported on our nodes. Each operator's PackageManifest contains labels that indicate the operator's supported architectures, e.g. `operatorframework.io/arch.s390x: supported`. An operator can be supported on multiple architectures.
AC:
OS and arch filtering: https://github.com/openshift/console/blob/2ad4e17d76acbe72171407fc1c66ca4596c8aac4/frontend/packages/operator-lifecycle-manager/src/components/operator-hub/operator-hub-items.tsx#L49-L86
@jpoulin is good to ask about heterogeneous clusters.
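A minimal sketch of the two labels involved, taken from the description above (the architecture values are illustrative):

```yaml
# Node: architecture label set by the kubelet
metadata:
  labels:
    kubernetes.io/arch: arm64
---
# PackageManifest: label advertising that the operator supports an architecture
metadata:
  labels:
    operatorframework.io/arch.arm64: supported
```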
This enhancement introduces support for provisioning and upgrading heterogeneous architecture clusters in phases.
We need to scan through the compute nodes and build a set of supported architectures from those. Each node on the cluster has a label for architecture, e.g. kubernetes.io/arch=arm64, kubernetes.io/arch=amd64, etc. Based on the set of supported architectures, the console will need to surface only those operators in the Operator Hub which are supported on our nodes.
AC:
@jpoulin is good to ask about heterogeneous clusters.
An epic we can duplicate for each release to ensure we have a place to catch things we ought to be doing regularly but can tend to fall by the wayside.
As a developer, I want to be able to clean up the css markup after making the css / scss changes required for dark mode and remove any old unused css / scss content.
Acceptance criteria:
1. Proposed title of this feature request
Basic authentication for Helm Chart repository in helmchartrepositories.helm.openshift.io CRD.
2. What is the nature and description of the request?
As of v4.6.9, the HelmChartRepository CRD only supports client TLS authentication through spec.connectionConfig.tlsClientConfig.
3. Why do you need this? (List the business requirements here)
Basic authentication is widely used by many chart repository managers (Nexus OSS, Artifactory, etc.).
The Helm CLI also supports it with the helm repo add command.
https://helm.sh/docs/helm/helm_repo_add/
4. How would you like to achieve this? (List the functional requirements here)
Probably by extending the CRD:
spec:
  connectionConfig:
    username: username
    password:
      secretName: secret-name
The secret namespace should be openshift-config to align with the tlsClientConfig behavior (see the secret sketch after this list).
5. For each functional requirement listed in question 4, specify how Red Hat and the customer can test to confirm the requirement is successfully implemented.
Trying to pull Helm charts from remote private chart repositories that have disabled anonymous access and offer basic authentication.
E.g.: https://github.com/sonatype/docker-nexus
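Following up on the CRD extension sketched in item 4, here is a sketch of the referenced secret; the key name and secret type are assumptions.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-name
  namespace: openshift-config       # aligned with the tlsClientConfig behavior noted above
type: Opaque
stringData:
  password: "<chart-repository-password>"   # assumed key name
```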
As an OCP user I would like to be able to install Helm charts from repos added to ODC with basic authentication fields populated.
We need to support Helm installs for repos that have the basic authentication secret name and namespace.
Updating the ProjectHelmChartRepository CRD has already been done in a different story.
Supporting the HelmChartRepository CR: this feature will be scoped first to project/namespace-scoped repos.
<Defines what is included in this story>
If the new fields for basic auth are set in the repo CR, then use those credentials when making API calls to Helm to install/upgrade charts. We will error out if the logged-in user does not have access to the secret referenced by the repo CR. If the basic auth fields are not present, we assume it is not an authenticated repo.
None
NA
I can list, install and update charts on authenticated repos from ODC
Needs Documentation both upstream and downstream
Needs new unit test covering repo auth
Dependencies identified
Blockers noted and expected delivery timelines set
Design is implementable
Acceptance criteria agreed upon
Story estimated
Unknown
Verified
Unsatisfied
ACCEPTANCE CRITERIA
NOTES
ACCEPTANCE CRITERIA
NOTES
We plan to build Ironic Container Images using RHEL9 as base image in OCP 4.12
This is required because the ironic components have abandoned support for CentOS Stream 8 and Python 3.6/3.7 upstream during the most recent development cycle that will produce the stable Zed release, in favor of CentOS Stream 9 and Python 3.8/3.9
More info on RHEL8 to RHEL9 transition in OCP can be found at https://docs.google.com/document/d/1N8KyDY7KmgUYA9EOtDDQolebz0qi3nhT20IOn4D-xS4
update ironic software to pick up latest bug fixes
This is an API change and we will consider this as a feature request.
https://issues.redhat.com/browse/NE-799 Please check this for more details
https://issues.redhat.com/browse/NE-799 Please check this for more details
No
N/A
We need tests for the ovirt-csi-driver and the cluster-api-provider-ovirt. These tests help us to
Also, having dedicated tests on lower levels with a smaller scope (unit, integration, ...) has the following benefits:
Integration tests need to be implemented according to https://cluster-api.sigs.k8s.io/developer/testing.html#integration-tests using envtest.
As a user, I would like to be informed in an intuitive way when quotas have been reached in a namespace
Refer below for more details
As a user, in the topology view, I would like to be updated intuitively if any of the deployments have reached quota limits
Refer below for more details
Provide a form driven experience to allow cluster admins to manage the perspectives to meet the ACs below.
We have heard the following requests from customers and developer advocates:
As an admin, I should be able to see a code snippet that shows how to add user perspectives
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, the cluster admin can add user perspectives
To support the cluster admin in configuring the perspectives correctly, the developer console should provide a code snippet for the customization of the yaml resource (Console CRD).
Customize Perspective Enhancement PR: https://github.com/openshift/enhancements/pull/1205
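A hypothetical example of the kind of snippet the console could surface, based on the ODC-6730/6732 proposals; the exact field names are defined by the enhancement and may differ.

```yaml
apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  name: cluster
spec:
  customization:
    perspectives:
    - id: dev               # hypothetical perspective id
      visibility:
        state: Disabled     # hypothetical field names
```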
Previous work:
As an admin, I want to be able to use a form driven experience to hide user perspective(s)
As an admin, I want to hide the admin perspective for non-privileged users or hide the developer perspective for all users
Based on the https://issues.redhat.com/browse/ODC-6730 enhancement proposal, it is required to extend the console configuration CRD to enable the cluster admins to configure this data in the console resource
Previous customization work:
As an admin, I want to hide user perspective(s) based on the customization.
Customers don't want their users to have access to some/all of the items which are available in the Developer Catalog. The request is to change access for the cluster, not per user or persona.
Provide a form-driven experience to allow cluster admins to easily disable the Developer Catalog, or one or more of the sub-catalogs in the Developer Catalog.
Multiple customer requests.
We need to consider how this will work with subcatalogs which are installed by operators: VMs, Event Sources, Event Catalogs, Managed Services, Cloud based services
As an admin, I want to hide/disable access to specific sub-catalogs in the developer catalog or the complete dev catalog for all users across all namespaces.
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, it is required to extend the console configuration CRD to enable the cluster admins to configure this data in the console resource
Extend the "customization" spec type definition for the CRD in the openshift/api project
Previous customization work:
As a cluster-admin, I should be able to see a code snippet that shows how to enable sub-catalogs or the entire dev catalog.
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, the cluster admin can add sub-catalog(s) from the Developer Catalog or the Dev catalog as a whole.
To support the cluster admin in configuring the sub-catalog list correctly, the developer console should provide a code snippet for the customization yaml resource (Console CRD).
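A hypothetical example of such a snippet, again assuming field names along the lines of the enhancement proposal; the sub-catalog identifiers are illustrative.

```yaml
apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  name: cluster
spec:
  customization:
    developerCatalog:
      types:
        state: Disabled     # hypothetical field names
        disabled:
        - HelmChart         # illustrative sub-catalog identifiers
        - EventSource
```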
Previous work:
As an admin, I want to hide sub-catalogs in the developer catalog or hide the developer catalog completely based on the customization.
OCP/Telco Definition of Done
Epic Template descriptions and documentation.
Add a SOCKS proxy to cluster-network-operator so egress IP can use gRPC to reach worker nodes.
With the introduction of gRPC as the means for determining the state of a given egress node, HyperShift should
be able to leverage the SOCKS proxy and become able to know the state of each egress node.
References relevant to this work:
1281-network-proxy
https://coreos.slack.com/archives/C01C8502FMM/p1658427627751939
https://github.com/openshift/hypershift/pull/1131/commits/28546dc587dc028dc8bded715847346ff99d65ea
This Epic is here to track the rebase we need to do when kube 1.25 is GA https://www.kubernetes.dev/resources/release/
Keeping this in mind can help us plan our time better. At the time of writing, GA is planned for August 23.
https://docs.google.com/document/d/1h1XsEt1Iug-W9JRheQas7YRsUJ_NQ8ghEMVmOZ4X-0s/edit --> this is the link for rebase help
We need to rebase cloud network config controller to 1.25 when the kube 1.25 rebase lands.
This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were not completed when this image was assembled
This epic tracks "business as usual" requirements / enhancements / bug fixing of the Insights Operator.
Today the links point at a rule-scoped page, but that page lacks information about recommended resolution. You can click through by cluster ID to your specific cluster and get that recommendation advice, but it would be more convenient and less confusing for customers if we linked directly to the cluster-scoped recommendation page.
We can implement by updating the template here to be:
fmt.Sprintf("https://console.redhat.com/openshift/insights/advisor/clusters/%s?first=%s%%7C%s", clusterID, ruleIDStr, rec.ErrorKey)
or something like that.
unknowns
request is clear, solution/implementation to be further clarified
Following https://coreos.slack.com/archives/C011BL0FEKZ/p1650640804532309, it would be useful for us (network observability team) to have access to ResourceIcon in dynamic-plugin-sdk.
Currently ResourceLink is exported but not ResourceIcon
AC:
Currently the ConsolePlugins API version is v1alpha1. Since we are going GA with dynamic plugins we should be creating a v1 version.
This would require updates in following repositories:
AC:
NOTE: This story does not include the conversion webhook change which will be created as a follow on story
The extension `console.dashboards/overview/detail/item` doesn't constrain the content to fit the card.
The details-card has an expectation that a <dd> item will be the last item (for spacing between items). Our static details-card items use a component called 'OverviewDetailItem'. This isn't enforced in the extension and can cause undesired padding issues if they just do whatever they want.
I feel our approach here should be making the extension take the props of 'OverviewDetailItem' where 'children' is the new 'component'.
Move `frontend/public/components/nav` to `packages/console-app/src/components/nav` and address any issues resulting from the move.
There will be some expected lint errors relating to cyclical imports. These will require some refactoring to address.
We should have a global notification or the `Console plugins` page (e.g., k8s/cluster/operator.openshift.io~v1~Console/cluster/console-plugins) should alert users when console operator `spec.managementState` is `Unmanaged` as changes to `enabled` for plugins will have no effect.
Based on API review CONSOLE-3145, we have decided to deprecate the following APIs:
cc Andrew Ballantyne Bryan Florkiewicz
Currently our `api.md` does not generate docs with "tags" (aka `@deprecated`) – we'll need to add that functionality to the `generate-doc.ts` script. See the code that works for `console-extensions.md`
when defining two proxy endpoints,
apiVersion: console.openshift.io/v1alpha1
kind: ConsolePlugin
metadata:
  ...
  name: forklift-console-plugin
spec:
  displayName: Console Plugin Template
  proxy:
  - service:
      basePath: /
I get two proxy endpoints
/api/proxy/plugin/forklift-console-plugin/forklift-inventory
and
/api/proxy/plugin/forklift-console-plugin/forklift-must-gather-api
but both proxy to the `forklift-must-gather-api` service
e.g.
curl to:
[server url]/api/proxy/plugin/forklift-console-plugin/forklift-inventory
will point to the `forklift-must-gather-api` service, instead of the `forklift-inventory` service
We neither use nor support static plugin nav extensions anymore so we should remove the API in the static plugin SDK and get rid of related cruft in our current nav components.
AC: Remove static plugin nav extensions code. Check the navigation code for any references to the old API.
During the development of https://issues.redhat.com/browse/CONSOLE-3062, it was determined additional information is needed in order to assist a user when troubleshooting a Failed plugin (see https://github.com/openshift/console/pull/11664#issuecomment-1159024959). As it stands today, there is no data available to the console to relay to the user regarding why the plugin Failed. Presumably, a message should be added to NotLoadedDynamicPlugin to address this gap.
AC: Add `message` property to NotLoadedDynamicPluginInfo type.
The console has good error boundary components that are useful for dynamic plugins.
Exposing them will enable the plugins to get the same look and feel for handling React errors as the console.
The minimum requirement right now is to expose the ErrorBoundaryFallbackPage component from
https://github.com/openshift/console/blob/master/frontend/packages/console-shared/src/components/error/fallbacks/ErrorBoundaryFallbackPage.tsx
`@openshift-console/plugin-shared` (NPM) is a package that will contain shared components that can be upversioned separately by the Plugins so they can keep core compatibility low but upversion and support more shared components as we need them.
This isn't documented today. We need to do that.
Acceptance Criteria: Add missing API docs for *Icon and *Status components in the API docs.
To align with https://github.com/openshift/dynamic-plugin-sdk, plugin metadata field dependencies as well as the @console/pluginAPI entry contained within should be made optional.
If a plugin doesn't declare the @console/pluginAPI dependency, the Console release version check should be skipped for that plugin.
As a user, I want to be able to:
so that I can achieve
Description of criteria:
Detail about what is specifically not being delivered in the story
This is a follow-up Epic to https://issues.redhat.com/browse/MCO-144, which aimed to get in-place upgrades working for Hypershift. This epic aims to capture additional work to focus on using CoreOS/OCP layering in Hypershift, which has benefits such as:
- removing or reducing the need for ignition
- maintaining feature parity between self-driving and managed OCP models
- adding additional functionality such as hotfixes
Currently not implemented, and will require the MCD hypershift mode to be adjusted to handle disruptionless upgrades like regular MCD
Right now in https://github.com/openshift/hypershift/pull/1258 you can only perform one upgrade at a time. Multiple upgrades will break due to controller logic
Properly create logic to handle manifest creation/updates and deletion, so the logic is more bulletproof
Changes made in METAL-1 open up opportunities to improve our handling of images by cleaning up redundant code that generates extra work for the user and extra load for the cluster.
We only need to run the image cache DaemonSet if there is a QCOW URL to be mirrored (effectively this means a cluster installed with 4.9 or earlier). We can stop deploying it for new clusters installed with 4.10 or later.
Currently, the image-customization-controller relies on the image cache running on every master to provide the shared hostpath volume containing the ISO and initramfs. The first step is to replace this with a regular volume and an init container in the i-c-c pod that extracts the images from machine-os-images. We can use the copy-metal -image-build flag (instead of -all used in the shared volume) to provide only the required images.
Once i-c-c has its own volume, we can switch the image extraction in the metal3 Pod's init container to use the -pxe flag instead of -all.
The machine-os-images init container for the image cache (not the metal3 Pod) can be removed. The whole image cache deployment is now optional and need only be started if provisioningOSDownloadURL is set (and in fact should be deleted if it is not).
Description of the problem:
Cluster Installation fail if installation disk has lvm on raid:
Host: test-infra-cluster-3cc862c9-master-0, reached installation stage Failed: failed executing nsenter [--target 1 --cgroup --mount --ipc --pid -- mdadm --stop /dev/md0], Error exit status 1, LastOutput "mdadm: Cannot get exclusive access to /dev/md0:Perhaps a running process, mounted filesystem or active volume group?"
How reproducible:
100%
Steps to reproduce:
1. Install a cluster while the master nodes have a disk with LVM on RAID (reproduces using test: https://gitlab.cee.redhat.com/ocp-edge-qe/kni-assisted-installer-auto/-/blob/master/api_tests/test_disk_cleanup.py#L97)
Actual results:
Installation failed
Expected results:
Installation success
Description of the problem:
When running assisted-installer on a machine where there is more than one volume group per physical volume, only the first volume group will be cleaned up. This leads to problems later and will lead to errors such as:
Failed - failed executing nsenter [--target 1 --cgroup --mount --ipc --pid -- pvremove /dev/sda -y -ff], Error exit status 5, LastOutput "Can't open /dev/sda exclusively. Mounted filesystem?
How reproducible:
Set up a VM with more than one volume group per physical volume. As an example, look at the following sample from a customer cluster.
List block devices:
/usr/bin/lsblk -o NAME,MAJ:MIN,SIZE,TYPE,FSTYPE,KNAME,MODEL,UUID,WWN,HCTL,VENDOR,STATE,TRAN,PKNAME
NAME MAJ:MIN SIZE TYPE FSTYPE KNAME MODEL UUID WWN HCTL VENDOR STATE TRAN PKNAME
loop0 7:0 125.9G loop xfs loop0 c080b47b-2291-495c-8cc0-2009ebc39839
loop1 7:1 885.5M loop squashfs loop1
sda 8:0 894.3G disk sda INTEL SSDSC2KG96 0x55cd2e415235b2db 1:0:0:0 ATA running sas
|-sda1 8:1 250M part sda1 0x55cd2e415235b2db sda
|-sda2 8:2 750M part ext2 sda2 3aa73c72-e342-4a07-908c-a8a49767469d 0x55cd2e415235b2db sda
|-sda3 8:3 49G part xfs sda3 ffc3ccfe-f150-4361-8ae5-f87b17c13ac2 0x55cd2e415235b2db sda
|-sda4 8:4 394.2G part LVM2_member sda4 Ua3HOc-Olm4-1rma-q0Ug-PtzI-ZOWg-RJ63uY 0x55cd2e415235b2db sda
`-sda5 8:5 450G part LVM2_member sda5 W8JqrD-ZvaC-uNK9-Y03D-uarc-Tl4O-wkDdhS 0x55cd2e415235b2db sda
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sda5
sdb 8:16 894.3G disk sdb INTEL SSDSC2KG96 0x55cd2e415235b31b 1:0:1:0 ATA running sas
`-sdb1 8:17 894.3G part LVM2_member sdb1 6ETObl-EzTd-jLGw-zVNc-lJ5O-QxgH-5wLAqD 0x55cd2e415235b31b sdb
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdb1
sdc 8:32 894.3G disk sdc INTEL SSDSC2KG96 0x55cd2e415235b652 1:0:2:0 ATA running sas
`-sdc1 8:33 894.3G part LVM2_member sdc1 pBuktx-XlCg-6Mxs-lddC-qogB-ahXa-Nd9y2p 0x55cd2e415235b652 sdc
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdc1
sdd 8:48 894.3G disk sdd INTEL SSDSC2KG96 0x55cd2e41521679b7 1:0:3:0 ATA running sas
`-sdd1 8:49 894.3G part LVM2_member sdd1 exVSwU-Pe07-XJ6r-Sfxe-CQcK-tu28-Hxdnqo 0x55cd2e41521679b7 sdd
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdd1
sr0 11:0 989M rom iso9660 sr0 Virtual CDROM0 2022-06-17-18-18-33-00 0:0:0:0 AMI running usb
Now run the assisted installer and try to install an SNO node on this machine; you will find that the installation fails with a message indicating that it could not exclusively access /dev/sda.
Actual results:
The installation will fail with a message that indicates that it could not exclusively access /dev/sda
Expected results:
The installation should proceed and the cluster should start to install.
Suspected Cases
https://issues.redhat.com/browse/AITRIAGE-3809
https://issues.redhat.com/browse/AITRIAGE-3802
https://issues.redhat.com/browse/AITRIAGE-3810
Same thing as we've had in assisted-service: we sometimes fail to install golangci-lint by fetching release artifacts from GitHub directly. That's usually because the same IP address (the CI build cluster) accesses GitHub at a high rate, leading to 429 (Too Many Requests) errors.
The way we fixed it for assisted-service was to change the installation to use a quay.io image that already contains the binary.
Example for such a failure: https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_release/30788/rehearse-30788-periodic-ci-openshift-assisted-installer-agent-release-ocm-2.6-subsystem-test-periodic/1551879759036682240
Filter for all recent failures: https://search.ci.openshift.org/?search=golangci%2Fgolangci-lint+crit+unable+to+find&maxAge=168h&context=1&type=build-log&name=.*assisted.*&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job
Section 5 of PRD: https://docs.google.com/document/d/1fF-Ajdzc9EDDg687FzTrX577hvY9NdK0/edit#heading=h.gjdgxs
Testing and collaboration with NVIDIA: https://docs.google.com/spreadsheets/d/1LHY-Af-2kQHVwtW4aVdHnmwZLTiatiyf-ySffC8O5NM/edit#gid=0
Deploying Nvidia Patches: https://docs.google.com/document/d/1yR4lphjPKd6qZ9sGzZITl0wH1r4ykfMKPjUnlzvWji4/edit#
This is the continuation of https://issues.redhat.com/browse/NHE-273 but now the focus is on the remaining flows.
Description of problem:
check_pkt_length cannot be offloaded without 1) sFlow offload patches in Open vSwitch and 2) hardware driver support. Since 1) will not be done anytime soon, we need a workaround for the check_pkt_length issue.
Version-Release number of selected component (if applicable):
4.11/4.12
How reproducible:
Always
Steps to Reproduce:
1. Any flow that has check_pkt_len():
   - 5-b: Pod -> NodePort Service traffic (Pod Backend - Different Node)
   - 6-b: Pod -> NodePort Service traffic (Host Backend - Different Node)
   - 4-b: Pod -> Cluster IP Service traffic (Host Backend - Different Node)
   - 10-b: Host Pod -> Cluster IP Service traffic (Host Backend - Different Node)
   - 11-b: Host Pod -> NodePort Service traffic (Pod Backend - Different Node)
   - 12-b: Host Pod -> NodePort Service traffic (Host Backend - Different Node)
Actual results:
Poor performance due to upcalls when check_pkt_len() is not supported.
Expected results:
Good performance.
Additional info:
https://docs.google.com/spreadsheets/d/1LHY-Af-2kQHVwtW4aVdHnmwZLTiatiyf-ySffC8O5NM/edit#gid=670206692
OCP/Telco Definition of Done
Epic Template descriptions and documentation.
As a developer building container images on OpenShift
I want to specify that my build should run without elevated privileges
So that builds do not run as root from the host's perspective with elevated privileges
No QE required for Dev Preview. OpenShift regression testing will verify that existing behavior is not impacted.
We will need to document how to enable this feature, with sufficient warnings regarding Dev Preview.
This likely warrants an OpenShift blog post, potentially?
Make sure that the CSI driver automatically updates oVirt credentials when they are updated in OpenShift.
In the CSI driver operator we should add the withSecretHashAnnotation call from library-go, like this: https://github.com/openshift/aws-ebs-csi-driver-operator/blob/53ed27b2a0eaa655338da180a79897855b366ac7/pkg/operator/starter.go#L138
OCP/Telco Definition of Done
Epic Template descriptions and documentation.
We have been running into a number of problems with configure-ovs and nodeip-configuration selecting different interfaces in OVNK deployments. This causes connectivity issues, so we need some way to ensure that everything uses the same interface/IP.
Currently configure-ovs runs before nodeip-configuration, but since nodeip-configuration is the source of truth for IP selection regardless of CNI plugin, I think we need to look at swapping that order. That way configure-ovs could look at what nodeip-configuration chose and not have to implement its own interface selection logic.
I'm targeting this at 4.12 because even though there's probably still time to get it in for 4.11, changing the order of boot services is always a little risky and I'd prefer to do it earlier in the cycle so we have time to tease out any issues that arise. We may need to consider backporting the change though since this has been an issue at least back to 4.10.
As an admin, I would like openshift-* namespaces with an operator to be labeled with security.openshift.io/scc.podSecurityLabelSync=true to ensure the continual functioning of operators without manual intervention. The label should only be applied to openshift-* namespaces with an operator (the presence of a ClusterServiceVersion resource) IF the label is not already present. This automation will help smooth functioning of the cluster and avoid frivolous operational events.
Context: As part of the PSA migration period, Openshift will ship with the "label sync'er" - a controller that will automatically adjust PSA security profiles in response to the workloads present in the namespace. We can assume that not all operators (produced by Red Hat, the community or ISVs) will have successfully migrated their deployments in response to upstream PSA changes. The label sync'er will sync, by default, any namespace not prefixed with "openshift-", of which an explicit label (security.openshift.io/scc.podSecurityLabelSync=true) is required for sync.
A/C:
- OLM operator has been modified (downstream only) to label any unlabelled "openshift-" namespace in which a CSV has been created
- If a labeled namespace containing at least one non-copied csv becomes unlabelled, it should be relabelled
- The implementation should be done in a way to eliminate or minimize subsequent downstream sync work (it is ok to make slight architectural changes to the OLM operator in the upstream to enable this)
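The target state, for illustration (the namespace name is hypothetical):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-example-operator     # hypothetical operator namespace containing a CSV
  labels:
    security.openshift.io/scc.podSecurityLabelSync: "true"
```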
Goal
Provide an indication that advanced features are used
Problem
Today, customers and RH don't have the information on the actual usage of advanced features.
Why is this important?
Prioritized Scenarios
In Scope
1. Add a boolean variable in our telemetry to mark if the customer is using advanced features (PV encryption, encryption with KMS, external mode).
Not in Scope
Integrate with subscription watch - will be done by the subscription watch team with our help.
Customers
All
Customer Facing Story
As a compliance manager, I should be able to easily see if all my clusters are using the right amount of subscriptions
What does success look like?
A clear indication in subscription watch for ODF usage (either essential or advanced).
1. Proposed title of this feature request
2. What is the nature and description of the request?
3. Why does the customer need this? (List the business requirements here)
4. List any affected packages or components.
_____________________
Link to main epic: https://issues.redhat.com/browse/RHSTOR-3173
We migrated most components as part of https://issues.redhat.com/browse/RHSTOR-2165
We now have a few components remaining, roughly 15 to 20%. This epic targets:
1) Add support for in-tree modal launcher
This epic tracks network tooling improvements for 4.12
A new framework and process should be developed to make sharing network tools with devs, support, and customers convenient. We are going to add some tools for OVN troubleshooting before OVN-K goes default, some tools that we got from customer cases, and some more to help analyze and debug collected logs based on the stable must-gather/sosreport format we now get thanks to the 4.11 Epic.
Our estimation for this Epic is 1 engineer * 2 Sprints
WHY:
This epic is important to help improve the time it takes our customers and our team to understand an issue within the cluster.
A focus of this epic is to develop tools to quickly allow debugging of a problematic cluster. This is crucial for the engineering team to help us scale. We want to provide a tool to our customers to help lower the cognitive burden to get at a root cause of an issue.
Alert if any of the OVN controllers has been disconnected from the southbound database for a period of time, using the metric ovn_controller_southbound_database_connected.
The metric updates every 2 minutes, so please be mindful of this when creating the alert.
If the controller is disconnected for 10 minutes, fire an alert (see the sketch below).
DoD: Merged to CNO and tested by QE
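A minimal sketch of what such an alert could look like as a PrometheusRule; the rule name, group name, and severity shown here are illustrative and not the final CNO manifest:
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: ovn-controller-disconnect          # illustrative name
  namespace: openshift-ovn-kubernetes
spec:
  groups:
  - name: ovn-controller.rules             # illustrative group name
    rules:
    - alert: OVNControllerDisconnectedSouthboundDatabase
      # the metric only updates every ~2 minutes, so a 10m 'for' spans several updates
      expr: ovn_controller_southbound_database_connected == 0
      for: 10m
      labels:
        severity: warning                  # illustrative severity
      annotations:
        summary: ovn-controller has been disconnected from the southbound database for more than 10 minutes.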
This section includes Jira cards that are not linked to either an Epic or a Feature. These tickets were completed when this image was assembled
Description of problem
`oc-mirror` does not work as expected with relative path for OCI format copy
How reproducible:
always
Steps to Reproduce:
Copy the operator image with OCI format to localhost with relative path my-oci-catalog;
cat imageset-copy.yaml
kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v1alpha2
mirror:
  operators:
`oc mirror --config imageset-copy.yaml --use-oci-feature --oci-feature-action=copy oci://my-oci-catalog`
Actual results:
2. A directory named my-file-catalog is created, but the user-specified directory my-oci-catalog is not used:
ls -tl
total 20
drwxr-xr-x. 3 root root 4096 Dec 6 13:58 oc-mirror-workspace
drwxr-xr-x. 3 root root 4096 Dec 6 13:58 olm_artifacts
drwxr-x---. 3 root root 4096 Dec 6 13:58 my-file-catalog
drwxr-xr-x. 2 root root 4096 Dec 6 13:58 my-oci-catalog
-rw-r--r--. 1 root root 206 Dec 6 12:39 imageset-copy.yaml
Expected results:
2. The user-specified directory should be used.
Additional info:
`oc-mirror --config config-operator.yaml oci:///home/ocmirrortest/noo --use-oci-feature --oci-feature-action=copy` with a full path works well.
This is a clone of issue OCPBUGS-3018. The following is the description of the original issue:
—
Description of problem:
When running an overnight run in dev-scripts (COMPACT_IPV4) with repeated installs I saw this panic in WaitForBootstrapComplete occur once. level=debug msg=Agent Rest API Initialized E1101 05:19:09.733309 1802865 runtime.go:79] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference) goroutine 1 [running]: k8s.io/apimachinery/pkg/util/runtime.logPanic({0x4086520?, 0x1d875810}) /home/stack/go/src/github.com/openshift/installer/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:75 +0x99 k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc00056fb00?}) /home/stack/go/src/github.com/openshift/installer/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:49 +0x75 panic({0x4086520, 0x1d875810}) /usr/local/go/src/runtime/panic.go:838 +0x207 github.com/openshift/installer/pkg/agent.(*NodeZeroRestClient).getClusterID(0xc0001341e0) /home/stack/go/src/github.com/openshift/installer/pkg/agent/rest.go:121 +0x53 github.com/openshift/installer/pkg/agent.(*Cluster).IsBootstrapComplete(0xc000134190) /home/stack/go/src/github.com/openshift/installer/pkg/agent/cluster.go:183 +0x4fc github.com/openshift/installer/pkg/agent.WaitForBootstrapComplete.func1() /home/stack/go/src/github.com/openshift/installer/pkg/agent/waitfor.go:31 +0x77 k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x1d8fa901?) /home/stack/go/src/github.com/openshift/installer/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:157 +0x3e k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0001958c0?, {0x1a53c7a0, 0xc0011d4a50}, 0x1, 0xc0001958c0) /home/stack/go/src/github.com/openshift/installer/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:158 +0xb6 k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0009ab860?, 0x77359400, 0x0, 0xa?, 0x8?) /home/stack/go/src/github.com/openshift/installer/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:135 +0x89 k8s.io/apimachinery/pkg/util/wait.Until(...) /home/stack/go/src/github.com/openshift/installer/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:92 github.com/openshift/installer/pkg/agent.WaitForBootstrapComplete({0x7ffd7fccb4e3?, 0x40d7e7?}) /home/stack/go/src/github.com/openshift/installer/pkg/agent/waitfor.go:30 +0x1bc github.com/openshift/installer/pkg/agent.WaitForInstallComplete({0x7ffd7fccb4e3?, 0x5?}) /home/stack/go/src/github.com/openshift/installer/pkg/agent/waitfor.go:73 +0x56 github.com/openshift/installer/cmd/openshift-install/agent.newWaitForInstallCompleteCmd.func1(0xc0003b6c80?, {0xc0004d67c0?, 0x2?, 0x2?}) /home/stack/go/src/github.com/openshift/installer/cmd/openshift-install/agent/waitfor.go:73 +0x126 github.com/spf13/cobra.(*Command).execute(0xc0003b6c80, {0xc0004d6780, 0x2, 0x2}) /home/stack/go/src/github.com/openshift/installer/vendor/github.com/spf13/cobra/command.go:876 +0x67b github.com/spf13/cobra.(*Command).ExecuteC(0xc0013b0a00) /home/stack/go/src/github.com/openshift/installer/vendor/github.com/spf13/cobra/command.go:990 +0x3b4 github.com/spf13/cobra.(*Command).Execute(...) 
/home/stack/go/src/github.com/openshift/installer/vendor/github.com/spf13/cobra/command.go:918 main.installerMain() /home/stack/go/src/github.com/openshift/installer/cmd/openshift-install/main.go:61 +0x2b0 main.main() /home/stack/go/src/github.com/openshift/installer/cmd/openshift-install/main.go:38 +0xff panic: runtime error: invalid memory address or nil pointer dereference [recovered] panic: runtime error: invalid memory address or nil pointer dereference [signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x33d3cd3]
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-10-25-210451
How reproducible:
Occurred on the 12th run; all previous installs were successful
Steps to Reproduce:
1. Set up dev-scripts for AGENT_E2E_TEST_SCENARIO=COMPACT_IPV4, no mirroring
2. Run 'make clean; make agent' in a loop
3. After repeated installs, the failure occurred
Actual results:
Panic in WaitForBootstrapComplete
Expected results:
No failure
Additional info:
It looks like clusterResult is used here even on failure, which causes the dereference - https://github.com/openshift/installer/blob/master/pkg/agent/rest.go#L121
Description of problem:
The customer is no longer able to provision new bare-metal nodes in 4.10.35 using the same rootDeviceHints used in 4.10.10. The customer uses HP DL360 Gen10 servers with external SAN storage that the system sees as a multipath device. Recent IPA versions implement changes to avoid wiping shared disks, and this seems to affect what should be provided as rootDeviceHints. They used to set /dev/sda as the rootDeviceHints; in 4.10.35 this no longer makes IPA write the image to the disk because IPA sees the disk as part of a multipath device. We tried using the multipath device on top, /dev/dm-0: the system is then able to write the image to the disk but gets stuck when it tries to issue a partprobe command. Rebooting the system to boot from the disk does not seem to help complete the provisioning. No workaround so far.
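For reference, the rootDeviceHints discussed here are set on the BareMetalHost resource roughly as follows; all names, addresses, and credentials below are illustrative placeholders:
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: worker-0                                        # placeholder
  namespace: openshift-machine-api
spec:
  online: true
  bootMACAddress: "aa:bb:cc:dd:ee:ff"                   # placeholder
  bmc:
    address: redfish://192.0.2.1/redfish/v1/Systems/1   # placeholder
    credentialsName: worker-0-bmc-secret                # placeholder
  rootDeviceHints:
    deviceName: /dev/sda    # hint that worked on 4.10.10; /dev/dm-0 is the multipath device also tried on 4.10.35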
Version-Release number of selected component (if applicable):
How reproducible:
By trying to provision a bare-metal node with a multipath device.
Steps to Reproduce:
1. Create a new BMH using a multipath device as rootDeviceHints 2. 3.
Actual results:
The node does not get provisioned
Expected results:
the node gets provisioned correctly
Additional info:
Description of problem
`oc-mirror` hits an error when using a docker destination without a namespace for the OCI-format mirror
How reproducible:
always
Steps to Reproduce:
Copy the operator image with OCI format to localhost;
cat copy.yaml
apiVersion: mirror.openshift.io/v1alpha2
kind: ImageSetConfiguration
mirror:
  operators:
`oc-mirror --config mirror.yaml --use-oci-feature --oci-feature-action=mirror --dest-skip-tls docker://localhost:5000`
Actual results:
2. Hit error:
`oc-mirror --config mirror.yaml --use-oci-feature --oci-feature-action=mirror --dest-skip-tls docker://localhost:5000`
……
info: Mirroring completed in 30ms (0B/s)
error: mirroring images "localhost:5000//multicluster-engine/mce-operator-bundle@sha256:e7519948bbcd521390d871ccd1489a49aa01d4de4c93c0b6972dfc61c92e0ca2" is not a valid image reference: invalid reference format
Expected results:
2. No error
Additional info:
`oc-mirror --config mirror.yaml --use-oci-feature --oci-feature-action=mirror --dest-skip-tls docker://localhost:5000/ocmir` works well.
The job was in terrible shape even before, but it looks like the upgrade started failing more consistently around Oct 2-4.
It looks like we fully lose the API (service unavailable), no artifacts get gathered, and mass disruption is reported.
Description of problem:
When spot instances with taints are added to the cluster on AWS, the machine-api-termination-handler daemonset pods do not launch on these instances because of the taints. machine-api-termination-handler is used to check for instance-termination notifications, so if it doesn't launch properly, application pods on spot instances could be stopped without the normal shutdown procedure. It is common to use taints and tolerations to pin specific workloads to spot instances, because that does not require changing the application manifests of other workloads.
Version-Release number of selected component (if applicable):
How reproducible:
100%
Steps to Reproduce:
1. Creating ROSA cluster 2. Adding spot instances with taints on OCM 3. oc get daemonset machine-api-termination-handler -n openshift-machine-api
Actual results:
machine-api-termination-handler pods do not launch on spot instances
Expected results:
machine-api-termination-handler pods launch on spot instances
Additional info:
Adding the following tolerations to the machine-api-termination-handler daemonset could resolve the problem:
tolerations:
- operator: Exists
This is a clone of issue OCPBUGS-1557. The following is the description of the original issue:
—
Seen in an instance created recently by a 4.12.0-ec.2 GCP provider:
"scheduling": { "automaticRestart": false, "onHostMaintenance": "MIGRATE", "preemptible": false, "provisioningModel": "STANDARD" },
From GCP's docs, they may stop instances on hardware failures and other causes, and we'd need automaticRestart: true to auto-recover from that. Also from GCP docs, the default for automaticRestart is true. And on the Go provider side, we document:
If omitted, the platform chooses a default, which is subject to change over time, currently that default is "Always".
But the implementing code does not actually propagate the setting. This seems like a regression, which is part of 4.10:
$ git clone https://github.com/openshift/machine-api-provider-gcp.git
$ cd machine-api-provider-gcp
$ git log --oneline origin/release-4.10 | grep 'migrate to openshift/api'
44f0f958 migrate to openshift/api
But that's not where the 4.9 and earlier code is located:
$ git branch -a | grep origin/release
remotes/origin/release-4.10
remotes/origin/release-4.11
remotes/origin/release-4.12
remotes/origin/release-4.13
Hunting for 4.9 code:
$ oc adm release info --commits quay.io/openshift-release-dev/ocp-release:4.9.48-x86_64 | grep gcp
gcp-machine-controllers https://github.com/openshift/cluster-api-provider-gcp c955c03b2d05e3b8eb0d39d5b4927128e6d1c6c6
gcp-pd-csi-driver https://github.com/openshift/gcp-pd-csi-driver 48d49f7f9ef96a7a42a789e3304ead53f266f475
gcp-pd-csi-driver-operator https://github.com/openshift/gcp-pd-csi-driver-operator d8a891de5ae9cf552d7d012ebe61c2abd395386e
So looking there:
$ git clone https://github.com/openshift/cluster-api-provider-gcp.git
$ cd cluster-api-provider-gcp
$ git log --oneline | grep 'migrate to openshift/api'
...no hits...
$ git grep -i automaticRestart origin/release-4.9 | grep -v '"description"\|compute-gen.go'
origin/release-4.9:vendor/google.golang.org/api/compute/v1/compute-api.json: "automaticRestart": {
It's not actually clear to me how that code is structured. So 4.10 and later GCP machine-API providers are impacted, and I'm unclear about 4.9 and earlier.
Description of problem:
The field networks is unset in the topology of each failureDomain, but platform.vsphere.vcenters is defined.
In install-config.yaml:
vcenters: - server: xxx user: xxx password: xxx datacenters: - IBMCloud - datacenter-2 failureDomains: - name: us-east-1 region: us-east zone: us-east-1a topology: datacenter: IBMCloud computeCluster: /IBMCloud/host/vcs-mdcnc-workload-2 datastore: multi-zone-ds-shared server: ibmvcenter.vmc-ci.devcluster.openshift.com - name: us-east-2 region: us-east zone: us-east-2a topology: datacenter: IBMCloud computeCluster: /IBMCloud/host/vcs-mdcnc-workload-2 datastore: multi-zone-ds-shared server: ibmvcenter.vmc-ci.devcluster.openshift.com - name: us-east-3
Launch installer to create cluster, get panic error
sh-4.4$ ./openshift-install create cluster --dir ipi --log-level debug DEBUG OpenShift Installer 4.12.0-0.nightly-2022-09-25-071630 DEBUG Built from commit 1fb1397635c89ff8b3645fed4c4c264e4119fa84 DEBUG Fetching Metadata... ... DEBUG Reusing previously-fetched Master Ignition Config DEBUG Generating Master Machines... panic: runtime error: index out of range [0] with length 0goroutine 1 [running]: github.com/openshift/installer/pkg/asset/machines/vsphere.getDefinedZones(0xc0003bec80) /go/src/github.com/openshift/installer/pkg/asset/machines/vsphere/machinesets.go:122 +0x4f8 github.com/openshift/installer/pkg/asset/machines/vsphere.Machines({0xc0011ca0b0, 0xd}, 0xc001080c80, 0xc0005cad50, {0xc000651d10, 0x13}, {0x4ab5773, 0x6}, {0x4ad49bb, 0x10}) /go/src/github.com/openshift/installer/pkg/asset/machines/vsphere/machines.go:37 +0x250 github.com/openshift/installer/pkg/asset/machines.(*Master).Generate(0xc001118bd0, 0x5?)
The field platform.vsphere.failureDomains.topology.networks is not marked as required in the documentation.
sh-4.4$ ./openshift-install explain installconfig.platform.vsphere.failureDomains.topology
KIND: InstallConfig
VERSION: v1
RESOURCE: <object>
Topology describes a given failure domain using vSphere constructs
FIELDS:
    computeCluster <string> -required-
      computeCluster as the failure domain This is required to be a path
    datacenter <string> -required-
      datacenter is the vCenter datacenter in which virtual machines will be located and defined as the failure domain.
    datastore <string> -required-
      datastore is the name or inventory path of the datastore in which the virtual machine is created/located.
    folder <string>
      folder is the name or inventory path of the folder in which the virtual machine is created/located.
    networks <[]string>
      networks is the list of networks within this failure domain
    resourcePool <string>
      resourcePool is the absolute path of the resource pool where virtual machines will be created. The absolute path is of the form /<datacenter>/host/<cluster>/Resources/<resourcepool>.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-09-25-071630
How reproducible:
Always, when setting platform.vsphere.vcenters and leaving platform.vsphere.failureDomains.topology.networks unset. It works if platform.vsphere.vcenters is not set and platform.vsphere.failureDomains.topology.networks is set.
Steps to Reproduce:
1. Configure zones in install-config.yaml: set platform.vsphere.vcenters and leave platform.vsphere.failureDomains.topology.networks unset
2. Install an IPI cluster
Actual results:
installer get panic error
Expected results:
installation is successful.
Additional info:
We need to merge the new retry code from https://github.com/ovn-org/ovn-kubernetes/pull/3171 downstream on master, so that we can also merge the retry logic applied to the ovn-k node.
Sample archive with both resources:
archives/compressed/3c/3cc4318d-e564-450b-b16e-51ef279b87fa/202209/30/200617.tar.gz
Sample query to find more archives:
with t as (
  select cluster_id, file_path, json_extract_scalar(content, '$.kind') as kind
  from raw_io_archives
  where date = '2022-09-30' and file_path like 'config/storage/%'
)
select cluster_id, count(*) as cnt
from t
group by cluster_id
order by cnt desc;
Description of problem:
i18n translation missing in "Remove component node from application" modal
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. Navigate to the dev console and create a workload under an Application group 2. On the Topology view, remove the workload from the Application group 3. See the i18n error in the console
Actual results:
Missing i18n key "Remove component node from application" in namespace "topology" and language "en." in console
Expected results:
No i18n error should be shown in the console.
Additional info:
This is a clone of issue OCPBUGS-3524. The following is the description of the original issue:
—
Description of problem:
Install a fully private cluster on Azure against 4.12.0-0.nightly-2022-11-10-033725; the storage account (sa) for the CoreOS image has public access.
$ az storage account list -g jima-azure-11a-f58lp-rg --query "[].[name,allowBlobPublicAccess]" -o tsv
clusterptkpx True
imageregistryjimaazrsgcc False
With the same profile on 4.11.0-0.nightly-2022-11-10-202051, the sa for the CoreOS image is not publicly accessible.
$ az storage account list -g jima-azure-11c-kf9hw-rg --query "[].[name,allowBlobPublicAccess]" -o tsv
clusterr8wv9 False
imageregistryjimaaz9btdx False
Checked that terraform-provider-azurerm version is different between 4.11 and 4.12.
4.11: v2.98.0
4.12: v3.19.1
In terraform-provider-azurerm v2.98.0, the property allow_blob_public_access is used to manage storage account public access; its default value is false.
In terraform-provider-azurerm v3.19.1, the property allow_blob_public_access is renamed to allow_nested_items_to_be_public, and its default value is true.
https://github.com/hashicorp/terraform-provider-azurerm/blob/main/CHANGELOG.md#300-march-24-2022
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-11-10-033725
How reproducible:
always on 4.12
Steps to Reproduce:
1. Install fully private cluster on azure against 4.12 payload 2. 3.
Actual results:
sa for coreos image is publicly accessible
Expected results:
sa for coreos image should not be publicly accessible
Additional info:
only happened on 4.12
Description of problem:
Setting up a GitHub App from the console lacks the required permissions
Events and Permissions: https://pipelinesascode.com/docs/install/github_apps/
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Always
Steps to Reproduce:
1. Setup Github App from administrator perspective.
2. Create Repository and configure it to use the Github App method.
Actual results:
Creates Github App with limited permission.
Expected results:
The created GitHub App should contain all the required permissions and should trigger the pipeline run successfully on git events.
Additional info:
The console needs to update the default_events and default_permissions so that they match the CLI (refer to the Events and Permissions documentation linked above).
We also need to update the "See GitHub permissions" section in the UI.
Description of problem:
When providing the openshift-install agent create command with installconfig + agentconfig manifests that contain the InstallConfig Proxy section, the Proxy configuration does not get applied.
Version-Release number of selected component (if applicable):
4.12
How reproducible:
100%
Steps to Reproduce:
1. Define InstallConfig with a Proxy section
2. openshift-install agent create image
3. Boot the ISO
4. Check /etc/assisted/manifests for the InfraEnv to contain its Proxy section
Actual results:
Missing proxy
Expected results:
Proxy present and matching InstallConfig's
Additional info:
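For reference, the InstallConfig Proxy section referred to in the steps above is a stanza of install-config.yaml of roughly this shape; the values here are illustrative placeholders:
proxy:
  httpProxy: http://proxy.example.com:3128
  httpsProxy: http://proxy.example.com:3128
  noProxy: .cluster.local,.svc,10.0.0.0/16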
The dependency on openshift/api needs to be bumped in openshift/kubernetes in order to pull in the fix from https://issues.redhat.com/browse/OCPBUGS-3635.
Tracker issue for bootimage bump in 4.12. This issue should block issues which need a bootimage bump to fix.
The previous bump was OCPBUGS-1941.
This is a clone of issue OCPBUGS-3096. The following is the description of the original issue:
—
While the installer binary is statically linked, the terraform binaries shipped with it are dynamically linked.
This can cause issues when running the installer on Linux, depending on the GLIBC version the specific Linux distribution has installed. It becomes a risk when switching the base image of the builders from ubi8 to ubi9 and trying to run the installer on cs8 or rhel8.
For example, building the installer on cs9 and trying to run it in a cs8 distribution leads to:
time="2022-10-31T14:31:47+01:00" level=debug msg="[INFO] running Terraform command: /root/test/terraform/bin/terraform version -json" time="2022-10-31T14:31:47+01:00" level=error msg="/root/test/terraform/bin/terraform: /lib64/libc.so.6: version `GLIBC_2.32' not found (required by /root/test/terraform/bin/terraform)" time="2022-10-31T14:31:47+01:00" level=error msg="/root/test/terraform/bin/terraform: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by /root/test/terraform/bin/terraform)" time="2022-10-31T14:31:47+01:00" level=debug msg="[INFO] running Terraform command: /root/test/terraform/bin/terraform version -json" time="2022-10-31T14:31:47+01:00" level=error msg="/root/test/terraform/bin/terraform: /lib64/libc.so.6: version `GLIBC_2.32' not found (required by /root/test/terraform/bin/terraform)" time="2022-10-31T14:31:47+01:00" level=error msg="/root/test/terraform/bin/terraform: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by /root/test/terraform/bin/terraform)" time="2022-10-31T14:31:47+01:00" level=debug msg="[INFO] running Terraform command: /root/test/terraform/bin/terraform init -no-color -force-copy -input=false -backend=true -get=true -upgrade=false -plugin-dir=/root/test/terraform/plugins" time="2022-10-31T14:31:47+01:00" level=error msg="/root/test/terraform/bin/terraform: /lib64/libc.so.6: version `GLIBC_2.32' not found (required by /root/test/terraform/bin/terraform)" time="2022-10-31T14:31:47+01:00" level=error msg="/root/test/terraform/bin/terraform: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by /root/test/terraform/bin/terraform)" time="2022-10-31T14:31:47+01:00" level=error msg="failed to fetch Cluster: failed to generate asset \"Cluster\": failure applying terraform for \"cluster\" stage: failed to create cluster: failed doing terraform init: exit status 1\n/root/test/terraform/bin/terraform: /lib64/libc.so.6: version `GLIBC_2.32' not found (required by /root/test/terraform/bin/terraform)\n/root/test/terraform/bin/terraform: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by /root/test/terraform/bin/terraform)\n"
How reproducible:
Always
Steps to Reproduce:
1. Build the installer on cs9
2. Run the installer on cs8 until the terraform binaries are started
3. Look at the terraform binary with ldd or file; you can see it is not a statically linked binary, and the error above might occur depending on the glibc version you are running
Actual results:
Expected results:
The terraform and provider binaries have to be statically linked, just as the installer is.
Additional info:
This comes from a build of OKD/SCOS that is happening outside of Prow on a cs9-based builder image. One can use the Dockerfile at images/installer/Dockerfile.ci and replace the builder image with one like https://github.com/okd-project/images/blob/main/okd-builder.Dockerfile
Console should be using the v1 version of the ConsolePlugin model rather than the old v1alpha1.
CONSOLE-3077 was updating this version but did not make the cut for the 4.12 release. Based on discussion with Samuel Padgett we should backport this to 4.12.
The risk should be minimal since we are only updating the model itself + validation + Readme
Description of problem:
When providing the openshift-install agent create command with installconfig + agentconfig manifests that contain the InstallConfig Proxy section, the Proxy configuration does not get configured cluster-wide.
Version-Release number of selected component (if applicable):
4.12
How reproducible:
100%
Steps to Reproduce:
1. Define InstallConfig with a Proxy section
2. openshift-install agent create image
3. Boot the ISO
4. Check /etc/assisted/manifests for agent-cluster-install.yaml to contain the Proxy section
Actual results:
Missing proxy
Expected results:
Proxy should be present and match with the InstallConfig
Additional info:
Description of problem:
metal3 pod does not come up on SNO when creating Provisioning with provisioningNetwork set to Disabled. The issue is that on SNO there is no Machine and no BareMetalHost, and the code looks at Machine objects to populate the provisioningMacAddresses field. However, when provisioningNetwork is Disabled, provisioningMacAddresses is not used anyway. You can work around this issue by populating provisioningMacAddresses with a dummy address, like this:
kind: Provisioning
metadata:
  name: provisioning-configuration
spec:
  provisioningMacAddresses:
  - aa:aa:aa:aa:aa:aa
  provisioningNetwork: Disabled
  watchAllNamespaces: true
Version-Release number of selected component (if applicable):
4.11.17
How reproducible:
Try to bring up Provisioning on SNO in 4.11.17 with provisioningNetwork set to Disabled:
apiVersion: metal3.io/v1alpha1
kind: Provisioning
metadata:
  name: provisioning-configuration
spec:
  provisioningNetwork: Disabled
  watchAllNamespaces: true
Steps to Reproduce:
1. 2. 3.
Actual results:
controller/provisioning "msg"="Reconciler error" "error"="machines with cluster-api-machine-role=master not found" "name"="provisioning-configuration" "namespace"="" "reconciler group"="metal3.io" "reconciler kind"="Provisioning"
Expected results:
metal3 pod should be deployed
Additional info:
This issue is a result of this change: https://github.com/openshift/cluster-baremetal-operator/pull/307 See this Slack thread: https://coreos.slack.com/archives/CFP6ST0A3/p1670530729168599
Description of problem:
When installing a private cluster, the install first fails; then you need to run
ibmcloud is security-group-rule-add "${infra}-sg-kube-api-lb" inbound tcp --port-min 6443 --port-max 6443 --remote $sg
and then run openshift-install wait-for again.
Version-Release number of selected component (if applicable):
How reproducible:
always
Steps to Reproduce:
1. Try to create a cluster with BYON, with publish: Internal in install-config.yaml; the install fails
Actual results:
The first time, the install fails
Expected results:
The install should only need to run once; no manual security-group-rule-add should be needed.
Additional info:
https://coreos.slack.com/archives/C01U40AM37F/p1664439142279079?thread_ts=1663769891.358229&cid=C01U40AM37F
This issue blocks setting up a private cluster automatically.
ovnkube-trace: ofproto/trace fails for IPv6
[akaris@linux go-controller (fix-ovnkube-trace-ipv6)]$ oc exec -ti ovn-trace-two -n ovn-tests-two -- ovnkube-trace -src-namespace ovn-tests-two -src ovn-trace-two -dst-ip 2404:6800:4003:c06::69 -tcp I1021 12:16:56.478752 3356 ovs.go:90] Maximum command line arguments set to: 191102 ovn-trace from pod to IP indicates success from ovn-trace-two to 2404:6800:4003:c06::69 F1021 12:16:57.075803 3356 ovnkube-trace.go:601] ovs-appctl ofproto/trace pod to IP error command terminated with exit code 2 stdOut: stdErr: Bad openflow flow syntax: in_port=73af56a18042ab9, tcp, dl_src=0a:58:17:2b:b6:42, dl_dst=0a:58:69:bd:ba:d8, nw_src=fd01:0:0:5::13, nw_dst=2404:6800:4003:c06::69, nw_ttl=64, tcp_dst=80, tcp_src=12345: bad value for nw_src (fd01:0:0:5::13: invalid IP address) ovs-appctl: ovs-vswitchd: server returned an error command terminated with exit code 1 [akaris@linux go-controller (fix-ovnkube-trace-ipv6)]$ oc exec -ti ovn-trace-two -n ovn-tests-two -- ovnkube-trace -src-namespace ovn-tests-two -src ovn-trace-two -dst-namespace ovn-tests -dst ovn-trace -udp I1021 12:17:26.695325 3386 ovs.go:90] Maximum command line arguments set to: 191102 ovn-trace source pod to destination pod indicates success from ovn-trace-two to ovn-trace ovn-trace destination pod to source pod indicates success from ovn-trace to ovn-trace-two F1021 12:17:27.708822 3386 ovnkube-trace.go:601] ovs-appctl ofproto/trace source pod to destination pod error command terminated with exit code 2 stdOut: stdErr: Bad openflow flow syntax: in_port=73af56a18042ab9, udp, dl_src=0a:58:17:2b:b6:42, dl_dst=0a:58:69:bd:ba:d8, nw_src=fd01:0:0:5::13, nw_dst=fd01:0:0:5::14, nw_ttl=64, udp_dst=80, udp_src=12345: bad value for nw_src (fd01:0:0:5::13: invalid IP address) ovs-appctl: ovs-vswitchd: server returned an error command terminated with exit code 1
Description of problem:
In a completely disconnected cluster, the developer catalog takes too much time to load
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. A complete disconnected cluster
2. In add page go to the All services page
3.
Actual results:
It takes too much time to load
Expected results:
Time taken should be reduced
Additional info:
Attached a gif for reference
Description of problem:
vSphere 4.12 CI jobs are failing with: admission webhook "validation.csi.vsphere.vmware.com" denied the request: AllowVolumeExpansion can not be set to true on the in-tree vSphere StorageClass https://search.ci.openshift.org/?search=can+not+be+set+to+true+on+the+in-tree+vSphere+StorageClass&maxAge=48h&context=1&type=bug%2Bissue%2Bjunit&name=&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job
Version-Release number of selected component (if applicable):
4.12 nightlies
How reproducible:
consistently in CI
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
This appears to have started failing in the past 36 hours.
Description of problem:
We got feedback from the support team that it is confusing to see a switch in the Notifications column for an alerting rule which has no alerts associated with it, as the user cannot silence the alerting rule.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. oc apply -f https://gist.githubusercontent.com/vikram-raj/727629797eb9d9bfcfa2721cae2ade86/raw/7c2305e14115a1a4f4f88ebb74cdad32cbec4132/Alerting%2520rule%2520without%2520alert 2. navigate to the Developer perspective Observe -> Alerts 3. Try to silence the VersionAlert alerting rule, nothing will happen
Actual results:
Silencing the alerting rule using the switch does nothing
Expected results:
No switch to silence the alerting rule should be visible if no alerts are associated with it.
Additional info:
Description of problem:
NPE on topology when creating a k8s Service and a KSVC which has no metadata in its template
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. create a KSVC from admin -> serving -> create service 2. create a k8s svc from search service (create)
Actual results:
topology breaks (see attached screenshot)
Expected results:
topology shouldn't break
Additional info:
This is a clone of issue OCPBUGS-3441. The following is the description of the original issue:
—
Update the cluster-authentication-operator to not go degraded when it can’t determine the console url. This risks masking certain cases where we would want to raise an error to the admin, but the expectation is that this failure mode is rare.
Risk could be avoided by looking at ClusterVersion's enabledCapabilities to decide if missing Console was expected or not (unclear if the risk is high enough to be worth this amount of effort).
AC: Update the cluster-authentication-operator to not go degraded when console config CRD is missing and ClusterVersion config has Console in enabledCapabilities.
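For reference, a minimal sketch of the ClusterVersion capabilities status such a check would consult; the capability entries listed are illustrative:
apiVersion: config.openshift.io/v1
kind: ClusterVersion
metadata:
  name: version
status:
  capabilities:
    knownCapabilities:
    - Console
    - Insights
    enabledCapabilities:
    - Console        # the check described in the AC looks for this entry
    - Insights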
Test output:
=== RUN TestAll/serial/TestCanaryRoute
canary_test.go:78: failed to create pod openshift-ingress-canary/canary-route-check: pods "canary-route-check" is forbidden: violates PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "curl" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "curl" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "curl" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "curl" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Description of problem:
Upgrade OCP 4.11 --> 4.12 fails with one 'NotReady,SchedulingDisabled' node and MachineConfigDaemonFailed.
Version-Release number of selected component (if applicable):
Upgrade from OCP 4.11.0-0.nightly-2022-09-19-214532 on top of OSP RHOS-16.2-RHEL-8-20220804.n.1 to 4.12.0-0.nightly-2022-09-20-040107. Network Type: OVNKubernetes
How reproducible:
Twice out of two attempts.
Steps to Reproduce:
1. Install OCP 4.11.0-0.nightly-2022-09-19-214532 (IPI) on top of OSP RHOS-16.2-RHEL-8-20220804.n.1. The cluster is up and running with three workers: $ oc get clusterversion NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.11.0-0.nightly-2022-09-19-214532 True False 51m Cluster version is 4.11.0-0.nightly-2022-09-19-214532 2. Run the OC command to upgrade to 4.12.0-0.nightly-2022-09-20-040107: $ oc adm upgrade --to-image=registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-09-20-040107 --allow-explicit-upgrade --force=true warning: Using by-tag pull specs is dangerous, and while we still allow it in combination with --force for backward compatibility, it would be much safer to pass a by-digest pull spec instead warning: The requested upgrade image is not one of the available updates.You have used --allow-explicit-upgrade for the update to proceed anyway warning: --force overrides cluster verification of your supplied release image and waives any update precondition failures. Requesting update to release image registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-09-20-040107 3. The upgrade is not succeeds: [0] $ oc get clusterversion NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.11.0-0.nightly-2022-09-19-214532 True True 17h Unable to apply 4.12.0-0.nightly-2022-09-20-040107: wait has exceeded 40 minutes for these operators: network One node degrided to 'NotReady,SchedulingDisabled' status: $ oc get nodes NAME STATUS ROLES AGE VERSION ostest-9vllk-master-0 Ready master 19h v1.24.0+07c9eb7 ostest-9vllk-master-1 Ready master 19h v1.24.0+07c9eb7 ostest-9vllk-master-2 Ready master 19h v1.24.0+07c9eb7 ostest-9vllk-worker-0-4x4pt NotReady,SchedulingDisabled worker 18h v1.24.0+3882f8f ostest-9vllk-worker-0-h6kcs Ready worker 18h v1.24.0+3882f8f ostest-9vllk-worker-0-xhz9b Ready worker 18h v1.24.0+3882f8f $ oc get pods -A | grep -v -e Completed -e Running NAMESPACE NAME READY STATUS RESTARTS AGE openshift-openstack-infra coredns-ostest-9vllk-worker-0-4x4pt 0/2 Init:0/1 0 18h $ oc get events LAST SEEN TYPE REASON OBJECT MESSAGE 7m15s Warning OperatorDegraded: MachineConfigDaemonFailed /machine-config Unable to apply 4.12.0-0.nightly-2022-09-20-040107: failed to apply machine config daemon manifests: error during waitForDaemonsetRollout: [timed out waiting for the condition, daemonset machine-config-daemon is not ready. status: (desired: 6, updated: 6, ready: 5, unavailable: 1)] 7m15s Warning MachineConfigDaemonFailed /machine-config Cluster not available for [{operator 4.11.0-0.nightly-2022-09-19-214532}]: failed to apply machine config daemon manifests: error during waitForDaemonsetRollout: [timed out waiting for the condition, daemonset machine-config-daemon is not ready. 
status: (desired: 6, updated: 6, ready: 5, unavailable: 1)] $ oc get co NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.12.0-0.nightly-2022-09-20-040107 True False False 18h baremetal 4.12.0-0.nightly-2022-09-20-040107 True False False 19h cloud-controller-manager 4.12.0-0.nightly-2022-09-20-040107 True False False 19h cloud-credential 4.12.0-0.nightly-2022-09-20-040107 True False False 19h cluster-autoscaler 4.12.0-0.nightly-2022-09-20-040107 True False False 19h config-operator 4.12.0-0.nightly-2022-09-20-040107 True False False 19h console 4.12.0-0.nightly-2022-09-20-040107 True False False 18h control-plane-machine-set 4.12.0-0.nightly-2022-09-20-040107 True False False 17h csi-snapshot-controller 4.12.0-0.nightly-2022-09-20-040107 True False False 19h dns 4.12.0-0.nightly-2022-09-20-040107 True True False 19h DNS "default" reports Progressing=True: "Have 5 available node-resolver pods, want 6." etcd 4.12.0-0.nightly-2022-09-20-040107 True False False 19h image-registry 4.12.0-0.nightly-2022-09-20-040107 True True False 18h Progressing: The registry is ready... ingress 4.12.0-0.nightly-2022-09-20-040107 True False False 18h insights 4.12.0-0.nightly-2022-09-20-040107 True False False 19h kube-apiserver 4.12.0-0.nightly-2022-09-20-040107 True True False 18h NodeInstallerProgressing: 1 nodes are at revision 11; 2 nodes are at revision 13 kube-controller-manager 4.12.0-0.nightly-2022-09-20-040107 True False False 19h kube-scheduler 4.12.0-0.nightly-2022-09-20-040107 True False False 19h kube-storage-version-migrator 4.12.0-0.nightly-2022-09-20-040107 True False False 19h machine-api 4.12.0-0.nightly-2022-09-20-040107 True False False 19h machine-approver 4.12.0-0.nightly-2022-09-20-040107 True False False 19h machine-config 4.11.0-0.nightly-2022-09-19-214532 False True True 16h Cluster not available for [{operator 4.11.0-0.nightly-2022-09-19-214532}]: failed to apply machine config daemon manifests: error during waitForDaemonsetRollout: [timed out waiting for the condition, daemonset machine-config-daemon is not ready. status: (desired: 6, updated: 6, ready: 5, unavailable: 1)] marketplace 4.12.0-0.nightly-2022-09-20-040107 True False False 19h monitoring 4.12.0-0.nightly-2022-09-20-040107 True False False 18h network 4.12.0-0.nightly-2022-09-20-040107 True True True 19h DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2022-09-20T14:16:13Z... node-tuning 4.12.0-0.nightly-2022-09-20-040107 True False False 17h openshift-apiserver 4.12.0-0.nightly-2022-09-20-040107 True False False 18h openshift-controller-manager 4.12.0-0.nightly-2022-09-20-040107 True False False 17h openshift-samples 4.12.0-0.nightly-2022-09-20-040107 True False False 17h operator-lifecycle-manager 4.12.0-0.nightly-2022-09-20-040107 True False False 19h operator-lifecycle-manager-catalog 4.12.0-0.nightly-2022-09-20-040107 True False False 19h operator-lifecycle-manager-packageserver 4.12.0-0.nightly-2022-09-20-040107 True False False 19h service-ca 4.12.0-0.nightly-2022-09-20-040107 True False False 19h storage 4.12.0-0.nightly-2022-09-20-040107 True True False 19h ManilaCSIDriverOperatorCRProgressing: ManilaDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods... [0] http://pastebin.test.redhat.com/1074531
Actual results:
OCP 4.11 --> 4.12 upgrade fails.
Expected results:
OCP 4.11 --> 4.12 upgrade success.
Additional info:
Attached logs of the NotReady node - [^journalctl_ostest-9vllk-worker-0-4x4pt.log.tar.gz]
This is a clone of issue OCPBUGS-4724. The following is the description of the original issue:
—
Description of problem: Installing OCP 4.12 on top of OpenStack 16.1 following the multi-availability-zone installation creates a cluster where the egressIP annotations ("cloud.network.openshift.io/egress-ipconfig") are created with an empty value for the workers:
$ oc get nodes NAME STATUS ROLES AGE VERSION ostest-kncvv-master-0 Ready control-plane,master 9h v1.25.4+86bd4ff ostest-kncvv-master-1 Ready control-plane,master 9h v1.25.4+86bd4ff ostest-kncvv-master-2 Ready control-plane,master 9h v1.25.4+86bd4ff ostest-kncvv-worker-0-qxr5g Ready worker 8h v1.25.4+86bd4ff ostest-kncvv-worker-1-bmvvv Ready worker 8h v1.25.4+86bd4ff ostest-kncvv-worker-2-pbgww Ready worker 8h v1.25.4+86bd4ff $ oc get node ostest-kncvv-worker-0-qxr5g -o json | jq -r '.metadata.annotations' { "alpha.kubernetes.io/provided-node-ip": "10.196.2.156", "cloud.network.openshift.io/egress-ipconfig": "null", "csi.volume.kubernetes.io/nodeid": "{\"cinder.csi.openstack.org\":\"8327aef0-c6a7-4bf6-8f8f-d25c9abd9bce\",\"manila.csi.openstack.org\":\"ostest-kncvv-worker-0-qxr5g\"}", "k8s.ovn.org/host-addresses": "[\"10.196.2.156\",\"172.17.5.154\"]", "k8s.ovn.org/l3-gateway-config": "{\"default\":{\"mode\":\"shared\",\"interface-id\":\"br-ex_ostest-kncvv-worker-0-qxr5g\",\"mac-address\":\"fa:16:3e:7e:b5:70\",\"ip-addresses\":[\"10.196.2.156/16\"],\"ip-address\":\"10.196.2.156/16\",\"next-hops\":[\"10.196.0.1\"],\"next-hop\":\"10.196.0.1\",\"node-port-enable\":\"true\",\"vlan-id\":\"0\"}}", "k8s.ovn.org/node-chassis-id": "fd777b73-aa64-4fa5-b0b1-70c3bebc2ac6", "k8s.ovn.org/node-gateway-router-lrp-ifaddr": "{\"ipv4\":\"100.64.0.6/16\"}", "k8s.ovn.org/node-mgmt-port-mac-address": "42:e8:4f:42:9f:7d", "k8s.ovn.org/node-primary-ifaddr": "{\"ipv4\":\"10.196.2.156/16\"}", "k8s.ovn.org/node-subnets": "{\"default\":\"10.128.2.0/23\"}", "machine.openshift.io/machine": "openshift-machine-api/ostest-kncvv-worker-0-qxr5g", "machineconfiguration.openshift.io/controlPlaneTopology": "HighlyAvailable", "machineconfiguration.openshift.io/currentConfig": "rendered-worker-31323caf2b510e5b81179bb8ec9c150f", "machineconfiguration.openshift.io/desiredConfig": "rendered-worker-31323caf2b510e5b81179bb8ec9c150f", "machineconfiguration.openshift.io/desiredDrain": "uncordon-rendered-worker-31323caf2b510e5b81179bb8ec9c150f", "machineconfiguration.openshift.io/lastAppliedDrain": "uncordon-rendered-worker-31323caf2b510e5b81179bb8ec9c150f", "machineconfiguration.openshift.io/reason": "", "machineconfiguration.openshift.io/state": "Done", "volumes.kubernetes.io/controller-managed-attach-detach": "true" }
Furthermore, Below is observed on openshift-cloud-network-config-controller:
$ oc logs -n openshift-cloud-network-config-controller cloud-network-config-controller-5fcdb6fcff-6sddj | grep egress I1212 00:34:14.498298 1 node_controller.go:146] Setting annotation: 'cloud.network.openshift.io/egress-ipconfig: null' on node: ostest-kncvv-worker-2-pbgww I1212 00:34:15.777129 1 node_controller.go:146] Setting annotation: 'cloud.network.openshift.io/egress-ipconfig: null' on node: ostest-kncvv-worker-0-qxr5g I1212 00:38:13.115115 1 node_controller.go:146] Setting annotation: 'cloud.network.openshift.io/egress-ipconfig: null' on node: ostest-kncvv-worker-1-bmvvv I1212 01:58:54.414916 1 node_controller.go:146] Setting annotation: 'cloud.network.openshift.io/egress-ipconfig: null' on node: ostest-kncvv-worker-0-drd5l I1212 02:01:03.312655 1 node_controller.go:146] Setting annotation: 'cloud.network.openshift.io/egress-ipconfig: null' on node: ostest-kncvv-worker-1-h976w I1212 02:04:11.656408 1 node_controller.go:146] Setting annotation: 'cloud.network.openshift.io/egress-ipconfig: null' on node: ostest-kncvv-worker-2-zxwrv
Version-Release number of selected component (if applicable):
RHOS-16.1-RHEL-8-20221206.n.1 4.12.0-0.nightly-2022-12-09-063749
How reproducible:
Always
Steps to Reproduce:
1. Run AZ job on D/S CI (Openshift on Openstack QE CI) 2. Run conformance/serial tests
Actual results:
conformance/serial test cases are failing because they do not find the egressIP annotation on the workers
Expected results:
Tests passing
Additional info:
Links provided on private comment.
This is a clone of issue OCPBUGS-2891. The following is the description of the original issue:
—
Deprovisioning can fail with the error:
level=warning msg=unrecognized elastic load balancing resource type listener arn=arn:aws:elasticloadbalancing:us-west-2:460538899914:listener/net/a9ac9f1b3019c4d1299e7ededc92b42b/a6f0655da877ddd4/45e05ee69d99bab0
Further background is available in this write up:
https://docs.google.com/document/d/1TsTqIVwHDmjuDjG7v06w_5AAbXSisaDX-UfUI9-GVJo/edit#
Incident channel:
incident-aws-leaking-tags-for-deleted-resources
Description of problem:
The current KafkaSink description in ODC is `Kafka Sink is Addressable, it receives events and send them to a Kafka topic.` and it should be `A KafkaSink takes a CloudEvent, and sends it to an Apache Kafka Topic. Events can be specified in either Structured or Binary mode.`, as provided by the Serverless team
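For context, a KafkaSink resource looks roughly like the following; the name, namespace, and broker address are illustrative, and the contentMode field reflects the Structured/Binary modes mentioned in the proposed description:
apiVersion: eventing.knative.dev/v1alpha1
kind: KafkaSink
metadata:
  name: my-kafka-sink                        # illustrative
  namespace: default
spec:
  topic: my-topic                            # target Apache Kafka topic
  bootstrapServers:
  - my-cluster-kafka-bootstrap.kafka:9092    # illustrative broker address
  contentMode: structured                    # or binary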
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. Install Serverless operator 2. Create CR for knativeKafka in knative-eventing ns 3. go to dev perspective -> add -> event sink 4. Check the description of kafka sink
Actual results:
Expected results:
Update the description to the one provided by the Serverless team
Additional info:
Since 4.11, OCP ships with an OperatorHub definition which declares a capability
and enables all catalog sources. For OKD we want to enable just community-operators,
as users may not have a Red Hat pull secret set.
This commit would ensure that the OKD version of the marketplace operator gets
its own OperatorHub manifest with a custom set of operator catalogs enabled.
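A minimal sketch of what such an OKD-specific OperatorHub manifest could look like, keeping only community-operators enabled; the exact shape shipped by the marketplace operator may differ:
apiVersion: config.openshift.io/v1
kind: OperatorHub
metadata:
  name: cluster
spec:
  disableAllDefaultSources: true
  sources:
  - name: community-operators
    disabled: false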
This is a clone of issue OCPBUGS-4950. The following is the description of the original issue:
—
Description of problem:
A PR bumping OLM's k8s dependencies to 1.25 wasn't merged into openshift 4.12
Version-Release number of selected component (if applicable):
openshift-4.12
How reproducible:
Always
Steps to Reproduce:
1. Check OLM's repository for k8s dependencies in the 4.12 branch
Actual results:
Has 1.24 k8s dependencies
Expected results:
Has 1.25 k8s dependencies
Additional info:
This bug is a backport clone of [Bugzilla Bug 2100429](https://bugzilla.redhat.com/show_bug.cgi?id=2100429). The following is the description of the original bug:
—
Description of problem:
[apiserver-auth] The default restricted SCC's allowed volumes don't include "ephemeral", causing a deployment with Generic Ephemeral Volumes to be stuck at Pending
Version-Release number of selected component (if applicable):
Cluster version is 4.11.0-0.nightly-2022-06-22-190830
$ oc version
Client Version: 4.11.0-0.nightly-2022-05-11-054135
Kustomize Version: v4.5.4
Server Version: 4.11.0-0.nightly-2022-06-22-190830
Kubernetes Version: v1.24.0+284d62a
How reproducible:
Always
Steps to Reproduce:
1. Set up a AWS OCP cluster with 4.11 nightly
2. Create a deployment with Generic Ephemeral Volumes
3. Waiting for the deployment ready and check the volume could write and read data
Test data:
wangpenghao@MacBook-Pro ~ cat temp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-dep
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-dep
  template:
    metadata:
      labels:
        app: my-dep
    spec:
      containers:
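The test data above is truncated at the container list. Purely for illustration, a self-contained pod spec using a generic ephemeral volume looks roughly like this; the image, names, and size are placeholders:
apiVersion: v1
kind: Pod
metadata:
  name: ephemeral-volume-demo                # placeholder
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi:latest   # placeholder image
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/scratch
  volumes:
  - name: scratch
    ephemeral:                               # generic ephemeral volume
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 1Gi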
Actual results:
In Step 3: the deployment is stuck at Pending, caused by "unable to validate against any security context constraint"
Expected results:
In Step 3: the deployment should become ready with the default restricted SCC; the default restricted SCC should allow
volumes:
Additional info:
Generic ephemeral volumes are the safer option of these two - they just create/delete PVCs on behalf of users, and most users can already create PVCs.
The ephemeral volume type is not in the scc.volumes list definition:
https://docs.openshift.com/container-platform/4.10/authentication/managing-security-context-constraints.html#authorization-cont[…]ing-internal-oauth
So currently, if customers want to use the ephemeral volume type, they have to use an SCC with:
volumes:
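The volumes list above is cut off in this report. Purely as an illustration (not the actual SCC from the bug), an SCC that permits generic ephemeral volumes would list "ephemeral" among its allowed volumes, for example:
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: restricted-with-ephemeral            # hypothetical SCC name
allowPrivilegedContainer: false
runAsUser:
  type: MustRunAsRange
seLinuxContext:
  type: MustRunAs
fsGroup:
  type: MustRunAs
supplementalGroups:
  type: RunAsAny
volumes:
- configMap
- downwardAPI
- emptyDir
- ephemeral
- persistentVolumeClaim
- projected
- secret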
Discuss record: https://coreos.slack.com/archives/CB48XQ4KZ/p1655465586780419
Generic Ephemeral Volumes docs:
https://kubernetes.io/blog/2020/09/01/ephemeral-volumes-with-storage-capacity-tracking/#generic-ephemeral-volumes
Master Log:
Node Log (of failed PODs):
PV Dump:
PVC Dump:
StorageClass Dump (if StorageClass used by PV/PVC):
This is a clone of issue OCPBUGS-1627. The following is the description of the original issue:
—
Description of problem:
Two issues when setting a user-defined folder in failureDomain.
1. The installer gets an error when setting folder to the path of a user-defined folder in failureDomain.
failureDomains setting in install-config.yaml:
failureDomains: - name: us-east-1 region: us-east zone: us-east-1a server: xxx topology: datacenter: IBMCloud computeCluster: /IBMCloud/host/vcs-mdcnc-workload-1 networks: - multi-zone-qe-dev-1 datastore: multi-zone-ds-1 folder: /IBMCloud/vm/qe-jima - name: us-east-2 region: us-east zone: us-east-2a server: xxx topology: datacenter: IBMCloud computeCluster: /IBMCloud/host/vcs-mdcnc-workload-2 networks: - multi-zone-qe-dev-1 datastore: multi-zone-ds-2 folder: /IBMCloud/vm/qe-jima - name: us-east-3 region: us-east zone: us-east-3a server: xxx topology: datacenter: IBMCloud computeCluster: /IBMCloud/host/vcs-mdcnc-workload-3 networks: - multi-zone-qe-dev-1 datastore: workload_share_vcsmdcncworkload3_joYiR folder: /IBMCloud/vm/qe-jima - name: us-west-1 region: us-west zone: us-west-1a server: ibmvcenter.vmc-ci.devcluster.openshift.com topology: datacenter: datacenter-2 computeCluster: /datacenter-2/host/vcs-mdcnc-workload-4 networks: - multi-zone-qe-dev-1 datastore: workload_share_vcsmdcncworkload3_joYiR
Error message in terraform after completing ova image import:
DEBUG vsphereprivate_import_ova.import[0]: Still creating... [1m40s elapsed] DEBUG vsphereprivate_import_ova.import[3]: Creation complete after 1m40s [id=vm-367860] DEBUG vsphereprivate_import_ova.import[1]: Creation complete after 1m49s [id=vm-367863] DEBUG vsphereprivate_import_ova.import[0]: Still creating... [1m50s elapsed] DEBUG vsphereprivate_import_ova.import[2]: Still creating... [1m50s elapsed] DEBUG vsphereprivate_import_ova.import[2]: Still creating... [2m0s elapsed] DEBUG vsphereprivate_import_ova.import[0]: Still creating... [2m0s elapsed] DEBUG vsphereprivate_import_ova.import[2]: Creation complete after 2m2s [id=vm-367862] DEBUG vsphereprivate_import_ova.import[0]: Still creating... [2m10s elapsed] DEBUG vsphereprivate_import_ova.import[0]: Creation complete after 2m20s [id=vm-367861] DEBUG data.vsphere_virtual_machine.template[0]: Reading... DEBUG data.vsphere_virtual_machine.template[3]: Reading... DEBUG data.vsphere_virtual_machine.template[1]: Reading... DEBUG data.vsphere_virtual_machine.template[2]: Reading... DEBUG data.vsphere_virtual_machine.template[3]: Read complete after 1s [id=42054e33-85d6-e310-7f4f-4c52a73f8338] DEBUG data.vsphere_virtual_machine.template[1]: Read complete after 2s [id=42053e17-cc74-7c89-f5d1-059c9030ecc7] DEBUG data.vsphere_virtual_machine.template[2]: Read complete after 2s [id=4205019f-26d8-f9b4-ac0c-2c073fd70b35] DEBUG data.vsphere_virtual_machine.template[0]: Read complete after 2s [id=4205eaf2-c727-c647-ad44-bd9ad7023c56] ERROR ERROR Error: error trying to determine parent targetFolder: folder '/IBMCloud/vm//IBMCloud/vm' not found ERROR ERROR with vsphere_folder.folder["IBMCloud-/IBMCloud/vm/qe-jima"], ERROR on main.tf line 61, in resource "vsphere_folder" "folder": ERROR 61: resource "vsphere_folder" "folder" { ERROR ERROR failed to fetch Cluster: failed to generate asset "Cluster": failure applying terraform for "pre-bootstrap" stage: failed to create cluster: failed to apply Terraform: exit status 1 ERROR ERROR Error: error trying to determine parent targetFolder: folder '/IBMCloud/vm//IBMCloud/vm' not found ERROR ERROR with vsphere_folder.folder["IBMCloud-/IBMCloud/vm/qe-jima"], ERROR on main.tf line 61, in resource "vsphere_folder" "folder": ERROR 61: resource "vsphere_folder" "folder" { ERROR ERROR
2. The installer panics when setting folder to a user-defined folder name in the failure domains.
failure domain in install-config.yaml
failureDomains: - name: us-east-1 region: us-east zone: us-east-1a server: xxx topology: datacenter: IBMCloud computeCluster: /IBMCloud/host/vcs-mdcnc-workload-1 networks: - multi-zone-qe-dev-1 datastore: multi-zone-ds-1 folder: qe-jima - name: us-east-2 region: us-east zone: us-east-2a server: xxx topology: datacenter: IBMCloud computeCluster: /IBMCloud/host/vcs-mdcnc-workload-2 networks: - multi-zone-qe-dev-1 datastore: multi-zone-ds-2 folder: qe-jima - name: us-east-3 region: us-east zone: us-east-3a server: xxx topology: datacenter: IBMCloud computeCluster: /IBMCloud/host/vcs-mdcnc-workload-3 networks: - multi-zone-qe-dev-1 datastore: workload_share_vcsmdcncworkload3_joYiR folder: qe-jima - name: us-west-1 region: us-west zone: us-west-1a server: xxx topology: datacenter: datacenter-2 computeCluster: /datacenter-2/host/vcs-mdcnc-workload-4 networks: - multi-zone-qe-dev-1 datastore: workload_share_vcsmdcncworkload3_joYiR
panic error message in installer:
INFO Obtaining RHCOS image file from 'https://rhcos.mirror.openshift.com/art/storage/releases/rhcos-4.12/412.86.202208101039-0/x86_64/rhcos-412.86.202208101039-0-vmware.x86_64.ova?sha256='
INFO The file was found in cache: /home/user/.cache/openshift-installer/image_cache/rhcos-412.86.202208101039-0-vmware.x86_64.ova. Reusing...
panic: runtime error: index out of range [1] with length 1goroutine 1 [running]:
github.com/openshift/installer/pkg/tfvars/vsphere.TFVars({{0xc0013bd068, 0x3, 0x3}, {0xc000b11dd0, 0x12}, {0xc000b11db8, 0x14}, {0xc000b11d28, 0x14}, {0xc000fe8fc0, ...}, ...})
/go/src/github.com/openshift/installer/pkg/tfvars/vsphere/vsphere.go:79 +0x61b
github.com/openshift/installer/pkg/asset/cluster.(*TerraformVariables).Generate(0x1d1ed360, 0x5?)
/go/src/github.com/openshift/installer/pkg/asset/cluster/tfvars.go:847 +0x4798
Based on the explanation of the folder field, it looks like using a folder name should be OK. If using a folder name is not allowed, the installer needs to validate the folder, and the explain output should be updated.
sh-4.4$ ./openshift-install explain installconfig.platform.vsphere.failureDomains.topology.folder
KIND: InstallConfig
VERSION: v1
RESOURCE: <string>
folder is the name or inventory path of the folder in which the virtual machine is created/located.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-09-20-095559
How reproducible:
always
Steps to Reproduce:
see description
Actual results:
The installation fails when a user-defined folder is set.
Expected results:
The installation succeeds when a user-defined folder is set.
Additional info:
Description of problem:
Each LB created for a Service of type LoadBalancer results in 1 client rule and <# of public subnets> health rules being created. The rules-per-SG quota in AWS is quite small: 60 by default, and 200 hard max. OCP has about 40 rules OOTB. Assuming an HA cluster in 3 AZs, that is 4 rules per LB. With the default AWS quota, only ~5 LBs can be created, and with the hard max of 200, only ~40 LBs can be created.
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Always
Steps to Reproduce:
1. Create Service type LoadBalancer and observe increase in master-sg and worker-sg rules sets 2. 3.
Actual results:
4 rules are created
Expected results:
1 rule is created when the client rule is a superset of the per-subnet health rules
Additional info:
This allows roughly 4x the number of Services of type LoadBalancer. This is required for Hypershift.
Description of problem:
Network policy code has some problems, most of them races, therefore it can be difficult to reproduce and verify. Here is the list:
1. All kinds of add/delete port to/from default deny port group failures, possible symptoms:
- port should’ve been added to default deny port group, but wasn’t: connections that should’ve been dropped are allowed
- port should’ve been deleted from default deny port group, but wasn’t: connections that should be allowed are dropped
- db ops failures when an attempt to add/delete port to/from default deny port group fails, e.g. because this operation already was done
2. Default deny port group was overwritten when 2 network policies are created in a namespace at the same time. Can lead to ports not being added to the default deny port group => denied connections will be allowed.
3. Handle error when getting local pod from the cache fails, possible symptoms:
- "Failed to get LSP after multiple retries for pod %s/%s for networkPolicy" log message
- pod is not added to netpol port groups, network policy is not applied
4. Creating deleted namespace via ensureNamespaceLocked, symptoms:
- namespace was deleted, but address set is present in the db
5. Policy acl loglevel update wasn’t applied, possible symptoms:
- netpol acl log level isn’t set/updated to namespace loglevel
6. Netpol cleanup failures, symptoms:
- network policy failed to be deleted, something is still left in the db, with error messages like "failed to destroy network policy" or "Rollback of default port groups and acls for policy: %s/%s failed, Unable to ensure namespace for network policy"
7. Concurrent write to sets.String - this will panic, you won’t miss it.
8. Retry for network policy handler after the network policy was deleted; you should see failures saying that some network policy related object is nil or doesn’t exist, e.g. "peer AddressSet is nil, cannot add <object>"
9. Host network and completed pods selected by network policy can produce error logs, no real harm: "Failed to get LSP for pod <namespace>/<name> for networkPolicy %s refetching err"
10. Namespace pod handlers are never stopped, which can affect memory usage and look like a memory leak.
11. Add local pod failure, since the netpol port group is not committed to the db yet; the error looks like "Failed to create *factory.localPodSelector <name>, error: object not found"
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
Example 1
1. Create a network policy with an [in/e]gress selector that applies to a namespace labeled project: myproject:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: test
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: myproject
2. Use oc apply to delete the network policy and create a pod in the project: myproject namespace at the same time
3. Check ovnkube-master logs for "peer AddressSet is nil, cannot add peer pod(s)"; this should retry with the same error 15 times
4. This may not work on the first try, since we need to hit a specific order of network policy delete and pod add handling
5. With the new version no error messages should be present
Example 2
1. Create a network policy that applies to namespace test:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: test
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
2. Create a host network pod in namespace test
3. Check for 15 logs saying "Failed to get LSP for pod %s/%s for networkPolicy %s refetching err: "
4. Check the final log "Failed to get LSP after multiple retries for pod %s/%s for networkPolicy"
5. With the new version no error message should be present
All the other cases are difficult to reproduce; maybe just running some standard network policy tests and making sure everything works will be a good verification.
Actual results:
Expected results:
Additional info:
Description of problem:
See https://github.com/metal3-io/baremetal-operator/issues/1045
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
Description of the problem:
In case we are installing a cluster using the kubeapi the installer fails to send the logs due to a missing volume mount of the caCert
time="2022-07-06T08:25:59Z" level=info msg="failed executing nsenter [--target 1 --cgroup --mount --ipc --pid -- podman run --rm --privileged --net=host --pid=host -v /run/systemd/journal/socket:/run/systemd/journal/socket -v /var/log:/var/log quay.io/edge-infrastructure/assisted-installer-agent@sha256:20d9e31e37f881fcd34aed44b2ee9f143382f87cbf4b634325d2260f8dffe6c2 logs_sender -cluster-id 4d4be932-42a8-4d37-b5d2-41f42a487821 -url https://assisted-service-assisted-installer.apps.ostest.test.metalkube.org -host-id 17babad0-f2d0-419f-a69b-8c6895df26f4 -infra-env-id 37c26d69-6416-4888-bd2e-aec610f241b3 -pull-secret-token <SECRET> -insecure=false -bootstrap=true -cacert=/etc/assisted-service/service-ca-cert.crt], env vars [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin TERM=xterm container=oci http_proxy= https_proxy= NO_PROXY= OPENSHIFT_BUILD_NAME=assisted-installer PULL_SECRET_TOKEN=<SECRET> no_proxy= HTTP_PROXY= HTTPS_PROXY= OPENSHIFT_BUILD_NAMESPACE=ci-op-8wiv6td6 BUILD_LOGLEVEL=0 HOME=/root HOSTNAME=extraworker-0], error exit status 1, waitStatus 1, Output \"time=\"06-07-2022 08:25:59\" level=fatal msg=\"Failed to initialize connection: &{%!e(string=open) %!e(string=/etc/assisted-service/service-ca-cert.crt) %!e(syscall.Errno=2)}\" file=\"send_logs.go:92\"\ntime=\"2022-07-06T08:25:59Z\" level=warning msg=\"lstat /sys/fs/cgroup/devices/machine.slice/libpod-8b070b62a9482fc0add228b77844b2c4e0a614e2b171ca87f76f56a4305a6ee7.scope: no such file or directory\"\"" time="2022-07-06T08:25:59Z" level=error msg="upload installation logs failed executing nsenter [--target 1 --cgroup --mount --ipc --pid -- podman run --rm --privileged --net=host --pid=host -v /run/systemd/journal/socket:/run/systemd/journal/socket -v /var/log:/var/log quay.io/edge-infrastructure/assisted-installer-agent@sha256:20d9e31e37f881fcd34aed44b2ee9f143382f87cbf4b634325d2260f8dffe6c2 logs_sender -cluster-id 4d4be932-42a8-4d37-b5d2-41f42a487821 -url https://assisted-service-assisted-installer.apps.ostest.test.metalkube.org -host-id 17babad0-f2d0-419f-a69b-8c6895df26f4 -infra-env-id 37c26d69-6416-4888-bd2e-aec610f241b3 -pull-secret-token <SECRET> -insecure=false -bootstrap=true -cacert=/etc/assisted-service/service-ca-cert.crt], Error exit status 1, LastOutput \"... :92\"\ntime=\"2022-07-06T08:25:59Z\" level=warning msg=\"lstat /sys/fs/cgroup/devices/machine.slice/libpod-8b070b62a9482fc0add228b77844b2c4e0a614e2b171ca87f76f56a4305a6ee7.scope: no such file or directory\"\""
How reproducible:
100%
Steps to reproduce:
1. Install a cluster using the kubeapi
2. Look for the host logs after the host reboots or the installation completes
3.
Actual results:
no host logs
Expected results:
...
In order to have more information for debugging router issues in SNO, we want to see whether the router is healthy from the node network point of view, and enable router access logs.
Let's revert this once the cause of https://bugzilla.redhat.com/show_bug.cgi?id=2097041 is found.
Description of problem:
TestUnmanagedDNSToManagedDNSInternalIngressController E2E test is failing on the error: { unmanaged_dns_test.go:272: failed to verify connectivity with workload with reqURL http://10.0.128.7 using external client: timed out waiting for the condition
How reproducible:
About 75% of the time.
Version-Release number of selected component (if applicable):
4.12
How reproducible:
75%
Steps to Reproduce:
1. Run CI E2E tests on cluster-ingress-operator or make test-e2e TEST=TestUnmanagedDNSToManagedDNSInternalIngressController
Actual results:
E2E test fails about 75% of the time
Expected results:
E2E should always pass
Additional info:
Description of problem: upon attempting to install OCP 4.10 UPI on baremetal ppc64le, the openshift-install gather command returns `panic: unsupported platform "none"`
Version-Release number of selected component (if applicable):
OCP 4.10.16
openshift-install 4.10.24
How reproducible:
easily
Steps to Reproduce:
1. create install config
2. create manifests
3. create ignition configs
4. openshift-install gather bootstrap --log-level "debug"
Actual results:
DEBUG OpenShift Installer 4.10.24
DEBUG Built from commit d63a12ba0ec33d492093a8fc0e268a01a075f5da
DEBUG Fetching Bootstrap SSH Key Pair...
DEBUG Loading Bootstrap SSH Key Pair...
DEBUG Using Bootstrap SSH Key Pair loaded from state file
DEBUG Reusing previously-fetched Bootstrap SSH Key Pair
DEBUG Fetching Install Config...
DEBUG Loading Install Config...
DEBUG Loading SSH Key...
DEBUG Loading Base Domain...
DEBUG Loading Platform...
DEBUG Loading Cluster Name...
DEBUG Loading Base Domain...
DEBUG Loading Platform...
DEBUG Loading Networking...
DEBUG Loading Platform...
DEBUG Loading Pull Secret...
DEBUG Loading Platform...
DEBUG Loading Install Config from both state file and target directory
DEBUG On-disk Install Config matches asset in state file
DEBUG Using Install Config loaded from state file
DEBUG Reusing previously-fetched Install Config
panic: unsupported platform "none"
goroutine 1 [running]:
github.com/openshift/installer/pkg/terraform/stages/platform.StagesForPlatform({0x146f2d0a, 0x1619aa08})
/go/src/github.com/openshift/installer/pkg/terraform/stages/platform/stages.go:55 +0x2ff
main.runGatherBootstrapCmd({0x14d8e028, 0x1})
/go/src/github.com/openshift/installer/cmd/openshift-install/gather.go:115 +0x2d6
main.newGatherBootstrapCmd.func1(0xc001364500, {0xc0005a0b40, 0x2, 0x2})
/go/src/github.com/openshift/installer/cmd/openshift-install/gather.go:65 +0x59
github.com/spf13/cobra.(*Command).execute(0xc001364500, {0xc0005a0b20, 0x2, 0x2})
/go/src/github.com/openshift/installer/vendor/github.com/spf13/cobra/command.go:860 +0x5f8
github.com/spf13/cobra.(*Command).ExecuteC(0xc001334c80)
/go/src/github.com/openshift/installer/vendor/github.com/spf13/cobra/command.go:974 +0x3bc
github.com/spf13/cobra.(*Command).Execute(...)
/go/src/github.com/openshift/installer/vendor/github.com/spf13/cobra/command.go:902
main.installerMain()
/go/src/github.com/openshift/installer/cmd/openshift-install/main.go:72 +0x29e
main.main()
/go/src/github.com/openshift/installer/cmd/openshift-install/main.go:50 +0x125
Expected results:
I'm not really sure what I expected to happen. I've never used that gather before.
I would assume at least no panicking.
Additional info:
Description of problem:
csi-snapshot-controller-operator logs:
The operator misses some RBAC:
2022-09-07T12:13:09.457345845Z F0907 12:13:09.457292 1 cmd.go:138] unable to load configmap based request-header-client-ca-file: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:openshift-cluster-storage-operator:csi-snapshot-controller-operator" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
Surprisingly, standalone OCP jobs work well.
Description of problem:
OVNKubernetesControllerDisconnectedSouthboundDatabase alert seems to fire in the e2e-aws-ovn-serial CI job. Note that something funny happens in the job itself, which is that a set of ovnkube-node pods get created, then deleted, then recreated again, and the test runs. But the alert gets fired for the first set of pods that got deleted. From the initial screening of artifacts alone, it's not clear what happened to the old pods. This needs investigation.
Version-Release number of selected component (if applicable):
4.12 OCP
How reproducible:
Seems like always
Steps to Reproduce:
1. https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/27043/pull-ci-openshift-origin-master-e2e-aws-ovn-serial/1568166237639282688
2. https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/27043/pull-ci-openshift-origin-master-e2e-aws-ovn-serial/1567913444936519680
Actual results:
Alert is fired
Expected results:
The alert shouldn't fire. If this is expected in the serial job, then we need to silence that alert for that job, or make it flaky and not fail hard when that alert fires.
Additional info:
Description of problem:
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
The Application dropdown menu uses a custom component with a configuration to favorite applications, similar to how the Project selection menu favorites projects, but its UX is inconsistent in the way it looks and behaves.
The Project selection UI element uses the PatternFly Menu component. It would be better to have the Application dropdown menu's look and behavior be consistent with the PatternFly Menu component.
Description of problem:
When a user selects an installed operator (for example, OpenShift Elasticsearch) in OperatorHub and navigates to the installed operator page from the operator information page
with the help of the "view it here" option, a "404 Not found" message is wrongly shown, although it eventually navigates to the installed operator.
Version-Release number of selected components (if applicable):
4.12.0-0.nightly-2022-08-15-150248
How reproducible:
Always
Steps to Reproduce:
Actual results:
Wrong message "404: Not found" while the user selects an installed operator and navigates from operator hub to installed operator page.
Browser console log indicate as below
main-chunk-525818b154a57a9b220a.min.js:1 unhandled error: Uncaught TypeError: Cannot read properties of undefined (reading 'firstElementChild') TypeError: Cannot read properties of undefined (reading 'firstElementChild') at c (https://console-openshift-console.apps.jmekkatt-dob.ibmcloud.qe.devcluster.openshift.com/static/vendors~main-chunk-40fab65853dff2fbc413.min.js:118:125992) at HTMLDivElement.l (https://console-openshift-console.apps.jmekkatt-dob.ibmcloud.qe.devcluster.openshift.com/static/vendors~main-chunk-40fab65853dff2fbc413.min.js:118:126387) TypeError: Cannot read properties of undefined (reading 'firstElementChild') at c (vendors~main-chunk-40fab65853dff2fbc413.min.js:72303:1) at HTMLDivElement.l (vendors~main-chunk-40fab65853dff2fbc413.min.js:72303:1) window.onerror @ main-chunk-525818b154a57a9b220a.min.js:1 vendors~main-chunk-40fab65853dff2fbc413.min.js:72303 Uncaught TypeError: Cannot read properties of undefined (reading 'firstElementChild') at c (vendors~main-chunk-40fab65853dff2fbc413.min.js:72303:1) at HTMLDivElement.l (vendors~main-chunk-40fab65853dff2fbc413.min.js:72303:1) c @ vendors~main-chunk-40fab65853dff2fbc413.min.js:72303 l @ vendors~main-chunk-40fab65853dff2fbc413.min.js:72303 scroll (async) componentWillUnmount @ vendor-patternfly-core-chunk-006bb1499791fa7cfea7.min.js:38397 hs @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 bs @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 hs @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 bs @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 Oc @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 t.unstable_runWithPriority @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171690 Hi @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 Ac @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 pc @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 (anonymous) @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 t.unstable_runWithPriority @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171690 Hi @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 Vi @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 qi @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 De @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 Yt @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 main-chunk-525818b154a57a9b220a.min.js:1 GET https://console-openshift-console.apps.jmekkatt-dob.ibmcloud.qe.devcluster.openshift.com/api/kubernetes/apis/operators.coreos.com/v1alpha1/clusterserviceversions/elasticsearch-operator.5.5.0 404 (Not Found)
Expected results:
Installed operator details should show without any error when the user selects an installed operator and navigates from operator hub to installed operator page.
Additional info:
Reproduced in both chrome[103.0.5060.114 (Official Build) (64-bit)] and firefox[91.11.0esr (64-bit)] browsers
Attached screen share for the same issue InstalledOperatorNavigation404.mp4
Description of problem:
Pod and PDB list pages just report "Not found" when no resources are found
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-10-15-094115
How reproducible:
Always
Steps to Reproduce:
1. A normal user has a new empty project
2. The normal user visits the PDB list page via Workloads -> PodDisruptionBudgets
3.
Actual results:
2. it just reports 'Not found'
Expected results:
2. For other workloads, the page reports "No <resource> found", for example "No HorizontalPodAutoscalers found", "No StatefulSets found", "No Deployments found". So for the Pods and PodDisruptionBudgets list pages, when no resource can be found, it would be better to also report "No pods found" and "No PodDisruptionBudgets found".
Additional info:
Description of problem:
To address: 'Static Pod is managed but errored" err="managed container xxx does not have Resource.Requests'
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
Already merged in https://github.com/openshift/cluster-kube-apiserver-operator/pull/1398
Description of problem:
When the user selects Serverless as an import strategy and tries to import a Devfile, the import fails because of an invalid Deployment.
This could already be reproduced in 4.11, but it's even more prominent in 4.12, where the console automatically selects the resource type Serverless when the Serverless operator is installed.
Version-Release number of selected component (if applicable):
Works on 4.10
Failed on 4.11 and 4.12 master
How reproducible:
Always
Steps to Reproduce:
1. Install and set up the Serverless operator
2. Switch to the dev perspective, navigate to Add > Import from Git
3. Enter a non-Devfile git URL like https://github.com/jerolimov/nodeinfo
4. On 4.11 select resource type Serverless (on 4.12 this should be selected automatically)
5. Update the git URL to a repo with a Devfile like https://github.com/nodeshift-starters/devfile-sample
6. Press create
Actual results:
Import fails with error:
Error "Invalid value: "": name part must be non-empty" for field "spec.template.labels".
Expected results:
Devfile should be imported
Additional info:
As an OpenShift operator, I would like to be able to add labels to my MachineSets and nodes which contain unique values, while also using the cluster autoscaler's ability to balance similar node groups. Being able to specify additional labels through the ClusterAutoscaler CRD would allow me to do that.
Something that has arisen during the investigation of https://bugzilla.redhat.com/show_bug.cgi?id=2001027 is the notion that each CSI driver could create its own zone topology labels, and that they do not have to be consistent with the well known kubernetes label.
It is possible, although not entirely confirmed, that a CSI driver might add these labels even when not in use (although running in the cluster).
Additionally, users may need the option to specify more labels to ignore (as illustrated in the discussion of the bug).
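A minimal sketch (illustrative only, with hypothetical helper names; not cluster-autoscaler code) of what "balance similar node groups while ignoring a configurable set of labels" means in practice:

package main

import "fmt"

// labelsEquivalent compares two node-group label sets while ignoring the
// given keys, so groups that differ only in those labels (for example a
// CSI driver's own zone topology label) are still treated as similar.
func labelsEquivalent(a, b map[string]string, ignored map[string]bool) bool {
    filter := func(m map[string]string) map[string]string {
        out := map[string]string{}
        for k, v := range m {
            if !ignored[k] {
                out[k] = v
            }
        }
        return out
    }
    fa, fb := filter(a), filter(b)
    if len(fa) != len(fb) {
        return false
    }
    for k, v := range fa {
        if fb[k] != v {
            return false
        }
    }
    return true
}

func main() {
    a := map[string]string{"node-role": "worker", "rack": "r1"}
    b := map[string]string{"node-role": "worker", "rack": "r7"}
    fmt.Println(labelsEquivalent(a, b, map[string]bool{"rack": true})) // true
}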
With the CSISnapshot capability disabled, the Azure Disk CSI Driver Operator gets Degraded.
The reason is that the cluster-csi-snapshot-controller-operator does not create the VolumeSnapshotClass CRD, which the driver operator expects to exist.
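A minimal sketch (assuming in-cluster credentials and an apiextensions clientset; not the operator's actual code) of checking whether the VolumeSnapshotClass CRD exists before depending on it:

package main

import (
    "context"
    "fmt"

    apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
    apierrors "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/rest"
)

// volumeSnapshotClassCRDExists reports whether the VolumeSnapshotClass CRD is
// installed, so callers can skip snapshot-related manifests when the
// CSISnapshot capability is disabled.
func volumeSnapshotClassCRDExists(ctx context.Context, cfg *rest.Config) (bool, error) {
    client, err := apiextclient.NewForConfig(cfg)
    if err != nil {
        return false, err
    }
    _, err = client.ApiextensionsV1().CustomResourceDefinitions().Get(
        ctx, "volumesnapshotclasses.snapshot.storage.k8s.io", metav1.GetOptions{})
    if apierrors.IsNotFound(err) {
        return false, nil
    }
    if err != nil {
        return false, err
    }
    return true, nil
}

func main() {
    cfg, err := rest.InClusterConfig() // assumes the check runs inside a pod
    if err != nil {
        panic(err)
    }
    ok, err := volumeSnapshotClassCRDExists(context.Background(), cfg)
    if err != nil {
        panic(err)
    }
    fmt.Println("VolumeSnapshotClass CRD present:", ok)
}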
This is a clone of issue OCPBUGS-4401. The following is the description of the original issue:
—
Description of problem:
cluster-policy-controller has unnecessary permissions and is able to operate on all leases in the KCM namespace. This also applies to the namespace-security-allocation-controller, which was moved some time ago and does not need the lock mechanism.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
Description of problem:
In ZTP input, we can put AdditionalNTPSources in order to have assisted-service mix the provided sources with those the nodes receive from DHCP. AdditionalNTPSources in AgentConfig needs to be generated in InfraEnv in order for it to be applied in the installation
Version-Release number of selected component (if applicable):
4.11 MVP patch 2
How reproducible:
100%
Steps to Reproduce:
1. Create AgentConfig with AdditionalNTPSources, for example "0.fedora.pool.ntp.org"
2. Generate ISO
3. Deploy
4. Check /etc/chrony.conf on the resulting cluster nodes
Actual results:
chrony.conf only contains the DHCP-provided NTP sources (if not a static network deployment)
Expected results:
/etc/chrony.conf in all the cluster nodes should have at least a server listed: server 0.fedora.pool.ntp.org iburst
Additional info:
Description of problem:
cloud-network-config-controller pod crashloops in proxy deployments as it tries to reach Openstack keystone API directly (not through the proxy) and there is no connectivity. NAMESPACE NAME READY STATUS RESTARTS AGE openshift-cloud-network-config-controller cloud-network-config-controller-c4867b748-vlq9h 0/1 CrashLoopBackOff 158 (2m10s ago) 13h $ oc -n openshift-cloud-network-config-controller logs -p cloud-network-config-controller-c4867b748-vlq9h W0927 05:48:18.678947 1 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. I0927 05:48:18.680269 1 leaderelection.go:248] attempting to acquire leader lease openshift-cloud-network-config-controller/cloud-network-config-controller-lock... I0927 05:48:26.754377 1 leaderelection.go:258] successfully acquired lease openshift-cloud-network-config-controller/cloud-network-config-controller-lock I0927 05:48:26.755413 1 openstack.go:121] Custom CA bundle found at location '/kube-cloud-config/ca-bundle.pem' - reading certificate information F0927 05:48:28.233519 1 main.go:101] Error building cloud provider client, err: Get "https://10.46.44.10:13000/": dial tcp 10.46.44.10:13000: connect: no route to host goroutine 51 [running]: k8s.io/klog/v2.stacks(0x1) /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/klog/v2/klog.go:860 +0x8a k8s.io/klog/v2.(*loggingT).output(0x37696c0, 0x3, 0x0, 0xc000636000, 0x1, {0x2cbcbd8?, 0x1?}, 0xc000438400?, 0x0) /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/klog/v2/klog.go:825 +0x686 k8s.io/klog/v2.(*loggingT).printfDepth(0x37696c0, 0x237798a?, 0x0, {0x0, 0x0}, 0x7fff81041af7?, {0x23a20d0, 0x2d}, {0xc00052c050, 0x1, ...}) /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/klog/v2/klog.go:630 +0x1f2 k8s.io/klog/v2.(*loggingT).printf(...) /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/klog/v2/klog.go:612 k8s.io/klog/v2.Fatalf(...) /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/klog/v2/klog.go:1516 main.main.func1({0x26e5638, 0xc00016c040}) /go/src/github.com/openshift/cloud-network-config-controller/cmd/cloud-network-config-controller/main.go:101 +0x26d created by k8s.io/client-go/tools/leaderelection.(*LeaderElector).Run /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:211 +0x11bgoroutine 1 [select]: k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00052bb60?, {0x26cee20, 0xc000581740}, 0x1, 0xc00052bb60) /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x135 k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00016c080?, 0x60db88400, 0x0, 0x20?, 0x7fea470ec108?) /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x89 k8s.io/apimachinery/pkg/util/wait.Until(...) 
/go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 k8s.io/client-go/tools/leaderelection.(*LeaderElector).renew(0xc0000a8120, {0x26e5638?, 0xc00016c040?}) /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:268 +0xd0 k8s.io/client-go/tools/leaderelection.(*LeaderElector).Run(0xc0000a8120, {0x26e5638, 0xc00025fcc0}) /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:212 +0x12f k8s.io/client-go/tools/leaderelection.RunOrDie({0x26e5638, 0xc00025fcc0}, {{0x26e7430, 0xc00062afa0}, 0x1fe5d61a00, 0x18e9b26e00, 0x60db88400, {0xc00065e630, 0xc000634810, 0x0}, ...}) /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:226 +0x94 main.main() /go/src/github.com/openshift/cloud-network-config-controller/cmd/cloud-network-config-controller/main.go:86 +0x450
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-09-26-050728
How reproducible:
Always
Steps to Reproduce:
1. Install OCP with proxy
Actual results:
Bootstrap failure and pod crashloop
Expected results:
Successful installation
Additional info:
Please find the must-gather here.
Description of problem:
Agent-based installation fails during a 3+1 deployment. I found that the machine-api-operator degrades because the minimum worker replica count is 2, while a 3+1 deployment defines only one worker node.
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Create agent.iso (openshift-install agent create image) using install-config.yaml and agent-config.yaml (PFA sample files)
2. Deploy a 3+1 cluster using agent.iso
3. Execute the "openshift-install agent wait-for install-complete" command to wait for install complete.
Actual results:
Getting below error: ERROR Cluster operator kube-controller-manager Degraded is True with GarbageCollector_Error: GarbageCollectorDegraded: error fetching rules: Get "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host INFO Cluster operator machine-api Progressing is True with SyncingResources: Progressing towards operator: 4.12.0-0.nightly-2022-10-05-053337 ERROR Cluster operator machine-api Degraded is True with SyncingFailed: Failed when progressing towards operator: 4.12.0-0.nightly-2022-10-05-053337 because minimum worker replica count (2) not yet met: current running replicas 1, waiting for [] INFO Cluster operator machine-api Available is False with Initializing: Operator is initializing INFO Cluster operator monitoring Available is False with UpdatingPrometheusOperatorFailed: Rollout of the monitoring stack failed and is degraded. Please investigate the degraded status error. ERROR Cluster operator monitoring Degraded is True with UpdatingPrometheusOperatorFailed: Failed to rollout the stack. Error: updating prometheus operator: reconciling Prometheus Operator Admission Webhook Deployment failed: updating Deployment object failed: waiting for DeploymentRollout of openshift-monitoring/prometheus-operator-admission-webhook: got 1 unavailable replicas INFO Cluster operator monitoring Progressing is True with RollOutInProgress: Rolling out the stack. INFO Cluster operator network ManagementStateDegraded is False with : ERROR Cluster initialization failed because one or more operators are not functioning properly. ERROR The cluster should be accessible for troubleshooting as detailed in the documentation linked below, ERROR https://docs.openshift.com/container-platform/latest/support/troubleshooting/troubleshooting-installations.html
Expected results:
3+1 deployment should be successful.
Additional info:
I found that there is a condition in the machine-api-operator that checks for a minimum worker node count of 2, which is preventing the 3+1 deployment (see the sketch below): https://github.com/openshift/machine-api-operator/blob/master/pkg/operator/sync.go#L322
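A minimal sketch (hypothetical names, not the linked sync.go code) of the kind of gate described above, which keeps a 3+1 deployment from ever satisfying the check:

package main

import "fmt"

const minimumWorkerReplicas = 2 // the threshold described above

// checkWorkerReplicas mimics the kind of gate that blocks a 3+1 deployment:
// with a single worker, the minimum is never met and the operator reports
// itself as degraded instead of progressing.
func checkWorkerReplicas(runningWorkers int) error {
    if runningWorkers < minimumWorkerReplicas {
        return fmt.Errorf("minimum worker replica count (%d) not yet met: current running replicas %d",
            minimumWorkerReplicas, runningWorkers)
    }
    return nil
}

func main() {
    if err := checkWorkerReplicas(1); err != nil {
        fmt.Println("Degraded:", err)
    }
}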
Description of problem:
On the alert details page and alerting rule details page, clicking on a field that has a popover help throws an uncaught JavaScript error.
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Go to Observe > Alerting pages
2. Click on an alert (or go to the rules tab then click on a rule)
3. Click on one of the underlined fields (those that have a popover help)
Actual results:
Expected results:
Additional info:
Description of problem:
On MicroShift, the Route API is served by kube-apiserver as a CRD. Reusing the same defaulting implementation as vanilla OpenShift through a patch to kube-apiserver is expected to resolve OCPBUGS-4189 but have no detectable effect on OCP.
Additional info:
This patch will be inert on OCP, but is implemented in openshift/kubernetes because MicroShift ingests kube-apiserver through its build-time dependency on openshift/kubernetes.
Description of problem:
node_exporter collects network metrics for "virtual" interfaces like br-*. When OVN is used, it also reports metrics for ovs-*, ovn, and genev_sys_* interfaces.
Version-Release number of selected component (if applicable):
4.12 (and before)
How reproducible:
Always
Steps to Reproduce:
1. Launch a 4.12 cluster.
2. Run the following PromQL query: "group by(device) (node_network_info)"
3.
Expected results:
Only real host interfaces should be present.
Additional info:
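For illustration only (the real fix would be a node_exporter device-exclusion setting rather than code like this), a sketch of the kind of name pattern that separates the virtual interfaces listed above from real host interfaces:

package main

import (
    "fmt"
    "regexp"
)

// virtualInterface matches the bridge and OVN/OVS-related device names
// mentioned above that are not real host interfaces.
var virtualInterface = regexp.MustCompile(`^(br-|ovs-|ovn|genev_sys_)`)

func main() {
    devices := []string{"eno1", "br-ex", "ovs-system", "ovn-k8s-mp0", "genev_sys_6081"}
    for _, d := range devices {
        fmt.Printf("%-16s virtual=%v\n", d, virtualInterface.MatchString(d))
    }
}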
Description of problem:
The API Explorer page layout is incorrect; please check the attachment for more details
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-08-15-150248
How reproducible:
Always
Steps to Reproduce:
1. Log in to OCP, go to Home -> API Explorer page
2. Check if there is an extra blank line between the dropdown filter and the list
Actual results:
There is an extra blank line between the dropdown filter and the list
Expected results:
Use the right PatternFly package and remove the extra blank line
Additional info:
104.0.5112.79 (Official Build) (64-bit)
This is a clone of issue OCPBUGS-2841. The following is the description of the original issue:
—
Currently the agent installer supports only the x86_64 arch. The image creation command must fail if some other arch, different from x86_64, is configured.
We want to have an allowed list of architectures.
allowed = ['x86_64', 'amd64']
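A minimal sketch (hypothetical function names, not the installer's actual validation) of the allow-list check described above:

package main

import (
    "fmt"
    "os"
)

// allowedArchitectures mirrors the allow-list above.
var allowedArchitectures = []string{"x86_64", "amd64"}

// validateArchitecture fails image creation for any architecture outside the
// allow-list.
func validateArchitecture(arch string) error {
    for _, a := range allowedArchitectures {
        if arch == a {
            return nil
        }
    }
    return fmt.Errorf("unsupported architecture %q: only %v are supported by the agent installer", arch, allowedArchitectures)
}

func main() {
    if err := validateArchitecture("arm64"); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
}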
This is a clone of issue OCPBUGS-3499. The following is the description of the original issue:
—
Description of problem:
On clusters serving Route via CRD (i.e. MicroShift), the Route API does not perform the same validation as on OCP.
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
$ cat<<EOF | oc apply --server-side -f-
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: hello-microshift
spec:
  to:
    kind: Service
    name: hello-microshift
EOF

route.route.openshift.io/hello-microshift serverside-applied

$ oc get route hello-microshift -o yaml

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  annotations:
    openshift.io/host.generated: "true"
  creationTimestamp: "2022-11-11T23:53:33Z"
  generation: 1
  name: hello-microshift
  namespace: default
  resourceVersion: "2659"
  uid: cd35cd20-b3fd-4d50-9912-f34b3935acfd
spec:
  host: hello-microshift-default.cluster.local
  to:
    kind: Service
    name: hello-microshift
  wildcardPolicy: None

$ cat<<EOF | oc apply --server-side -f-
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: hello-microshift
spec:
  to:
    kind: Service
    name: hello-microshift
  wildcardPolicy: ""
EOF
Actual results:
route.route.openshift.io/hello-microshift serverside-applied
Expected results:
The Route "hello-microshift" is invalid: spec.wildcardPolicy: Invalid value: "": field is immutable
Additional info:
** This change will be inert on OCP, which already has the correct behavior. **
The path used by --rotated-pod-logs to gather the rotated pod logs from the node's /var/log/pods folder via /api/v1/nodes/${NODE}/proxy/logs/${LOG_PATH} is only valid for regular pods, not for static pods.
The main problem is that, while normal pods have their rotated logs at /var/log/pods/${POD_NAME}_${POD_UID_IN_API}/${CONTAINER_NAME}, static pods have them at /var/log/pods/${POD_NAME}_${CONFIG_HASH}/${CONTAINER_NAME}, because the UID cannot be known at the time the static pod is born (static pods are created by kubelet before being registered in the kube-apiserver, and the UID is assigned by the kube-apiserver).
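A minimal sketch (placeholder values only, following the layout described above) of why a lookup keyed on the API UID cannot find a static pod's rotated log directory:

package main

import (
    "fmt"
    "path"
)

// Directory layouts as described above: regular pods are keyed by the pod UID
// known to the API server, static pods by the config hash chosen by kubelet.
func regularPodLogDir(podName, apiUID, container string) string {
    return path.Join("/var/log/pods", fmt.Sprintf("%s_%s", podName, apiUID), container)
}

func staticPodLogDir(podName, configHash, container string) string {
    return path.Join("/var/log/pods", fmt.Sprintf("%s_%s", podName, configHash), container)
}

func main() {
    // A path built from the API UID does not exist on disk for a static pod,
    // so the node log proxy answers "the server could not find the requested resource".
    fmt.Println(regularPodLogDir("etcd-master-0.example.net", "<uid-from-api>", "etcd"))
    fmt.Println(staticPodLogDir("etcd-master-0.example.net", "<config-hash>", "etcd"))
}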
The visible results of that are the errors shown under Actual results below.
Version-Release number of selected component (if applicable):
4.10
How reproducible:
Always if there are static pods.
Steps to Reproduce:
1. oc adm inspect --rotated-pod-logs ns/openshift-etcd (or any other project with static pods).
Actual results:
error: errors occurred while gathering data: one or more errors occurred while gathering pod-specific data for namespace: openshift-etcd [one or more errors occurred while gathering container data for pod etcd-master-0.example.net: the server could not find the requested resource, one or more errors occurred while gathering container data for pod etcd-master-1.example.net: the server could not find the requested resource, one or more errors occurred while gathering container data for pod etcd-master-2.example.net: the server could not find the requested resource]
Expected results:
No errors like the ones above, and rotated pod logs to be gathered, if present.
Additional info:
Despite being marked as experimental, this --rotated-pod-logs option is used in must-gather, so this issue can be easily reproduced by just running a default must-gather. I focused on bare oc adm inspect reproducers for simplicity.
This bug is a backport clone of [Bugzilla Bug 2073220](https://bugzilla.redhat.com/show_bug.cgi?id=2073220). The following is the description of the original bug:
—
Description of problem:
Version-Release number of selected component (if applicable): 4.*
How reproducible: always
Steps to Reproduce:
1. Set audit profile to WriteRequestBodies
2. Wait for api server rollout to complete
3. tail -f /var/log/kube-apiserver/audit.log | grep routes/status
Actual results:
Write events to routes/status are recorded at the RequestResponse level, which often includes keys and certificates.
Expected results:
Events involving routes should always be recorded at the Metadata level, per the documentation at https://docs.openshift.com/container-platform/4.10/security/audit-log-policy-config.html#about-audit-log-profiles_audit-log-policy-config
Additional info:
Description of problem:
Deploy an IPI cluster on a multi-datacenter/cluster vSphere env; the installer failed for some reason, and when trying to destroy the cluster we found that one VM folder under one of the datacenters is not deleted. When the installer exits, the following objects are attached with tag jima15b-cq7z7:

sh-4.4$ govc tags.attached.ls jima15b-cq7z7 | xargs govc ls -L
/IBMCloud/vm/jima15b-cq7z7
/datacenter-2/vm/jima15b-cq7z7
/datacenter-2/vm/jima15b-cq7z7/jima15b-cq7z7-rhcos-us-west-us-west-1a
/IBMCloud/vm/jima15b-cq7z7/jima15b-cq7z7-rhcos-us-east-us-east-2a
/IBMCloud/vm/jima15b-cq7z7/jima15b-cq7z7-rhcos-us-east-us-east-3a
/IBMCloud/vm/jima15b-cq7z7/jima15b-cq7z7-rhcos-us-east-us-east-1a
/IBMCloud/vm/jima15b-cq7z7/jima15b-cq7z7-bootstrap

sh-4.4$ ./openshift-install destroy cluster --dir ipi_missingzones/
INFO Destroyed VirtualMachine=jima15b-cq7z7-rhcos-us-west-us-west-1a
INFO Destroyed VirtualMachine=jima15b-cq7z7-rhcos-us-east-us-east-2a
INFO Destroyed VirtualMachine=jima15b-cq7z7-rhcos-us-east-us-east-3a
INFO Destroyed VirtualMachine=jima15b-cq7z7-rhcos-us-east-us-east-1a
INFO Destroyed VirtualMachine=jima15b-cq7z7-bootstrap
INFO Destroyed Folder=jima15b-cq7z7
INFO Deleted Tag=jima15b-cq7z7
INFO Deleted TagCategory=openshift-jima15b-cq7z7
INFO Time elapsed: 55s

After destroying the cluster, folder jima15b-cq7z7 is still there, not deleted.

sh-4.4$ govc ls /datacenter-2/vm/ | grep jima15b-cq7z7
/datacenter-2/vm/jima15b-cq7z7
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-09-18-141547
How reproducible:
Always when the installer fails to create infrastructure; it works when the installation is successful.
Steps to Reproduce:
1. Deploy an IPI cluster on a vSphere env configured with multiple datacenters/clusters
2. The installer fails to create infrastructure for some reason
3. Destroy the cluster
4. One folder is not deleted
Actual results:
one folder is not deleted
Expected results:
All infrastructures created by installer should be removed
Additional info:
Description of problem:
pkg/devfile/sample_test.go fails after devfile registry was updated (https://github.com/devfile/registry/pull/126)
This issue is about updating our assertion so that the CI job runs successfully again. We might want to backport this as well.
OCPBUGS-1678 is about updating the code that the test should use a mock response instead of the latest registry content OR check some specific attributes instead of comparing the full JSON response.
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Always
Steps to Reproduce:
1. Clone openshift/console
2. Run ./test-backend.sh
Actual results:
Unit tests fail
Expected results:
Unit tests should pass again
Additional info:
Description of problem:
EgressIP healthcheck through gRPC on a dualstack cluster only uses the v6 address when trying to re-connect to the egressIP node
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-10-04-081353
How reproducible:
Steps to Reproduce:
1. on dualstack OVN cluster, label one node to be egressip assignable 2. check leader ovnkube-master pod's log for egressip health check messages 3. set iptable to drop tcp port 9107 on the egress node, check leader ovnkube-master pod's log again $ oc -n openshift-ovn-kubernetes logs ovnkube-master-s8gl4 -c ovnkube-master | grep health I1004 17:10:13.752545 1 egressip_healthcheck.go:168] Connected to master-01.jechen-1004d.qe.devcluster.openshift.com (10.129.0.2:9107) I1004 17:10:13.754308 1 egressip_healthcheck.go:168] Connected to master-00.jechen-1004d.qe.devcluster.openshift.com (10.128.0.2:9107) I1004 17:10:13.757856 1 egressip_healthcheck.go:168] Connected to worker-00.jechen-1004d.qe.devcluster.openshift.com (10.129.2.2:9107) I1004 17:10:13.760742 1 egressip_healthcheck.go:168] Connected to worker-02.jechen-1004d.qe.devcluster.openshift.com (10.131.0.2:9107) I1004 17:10:13.763491 1 egressip_healthcheck.go:168] Connected to master-02.jechen-1004d.qe.devcluster.openshift.com (10.130.0.2:9107) I1004 17:10:13.766653 1 egressip_healthcheck.go:168] Connected to worker-01.jechen-1004d.qe.devcluster.openshift.com (10.128.2.2:9107) I1004 17:10:18.749573 1 egressip_healthcheck.go:177] Closing connection with worker-00.jechen-1004d.qe.devcluster.openshift.com (10.129.2.2:9107) I1004 17:10:18.749624 1 egressip_healthcheck.go:177] Closing connection with worker-01.jechen-1004d.qe.devcluster.openshift.com (10.128.2.2:9107) I1004 17:10:18.749635 1 egressip_healthcheck.go:177] Closing connection with master-01.jechen-1004d.qe.devcluster.openshift.com (10.129.0.2:9107) I1004 17:10:18.749645 1 egressip_healthcheck.go:177] Closing connection with master-00.jechen-1004d.qe.devcluster.openshift.com (10.128.0.2:9107) I1004 17:10:18.749654 1 egressip_healthcheck.go:177] Closing connection with worker-02.jechen-1004d.qe.devcluster.openshift.com (10.131.0.2:9107) I1004 17:10:18.749663 1 egressip_healthcheck.go:177] Closing connection with master-02.jechen-1004d.qe.devcluster.openshift.com (10.130.0.2:9107) I1004 18:21:13.753154 1 egressip_healthcheck.go:168] Connected to worker-00.jechen-1004d.qe.devcluster.openshift.com (10.129.2.2:9107) I1004 18:21:19.749592 1 egressip_healthcheck.go:177] Closing connection with worker-00.jechen-1004d.qe.devcluster.openshift.com (10.129.2.2:9107) W1004 18:21:24.750727 1 egressip_healthcheck.go:164] Could not connect to worker-00.jechen-1004d.qe.devcluster.openshift.com ([fd01:0:0:6::2]:9107): context deadline exceeded W1004 18:21:29.750396 1 egressip_healthcheck.go:164] Could not connect to worker-00.jechen-1004d.qe.devcluster.openshift.com ([fd01:0:0:6::2]:9107): context deadline exceeded W1004 18:21:34.749900 1 egressip_healthcheck.go:164] Could not connect to worker-00.jechen-1004d.qe.devcluster.openshift.com ([fd01:0:0:6::2]:9107): context deadline exceeded W1004 18:21:39.750830 1 egressip_healthcheck.go:164] Could not connect to worker-00.jechen-1004d.qe.devcluster.openshift.com ([fd01:0:0:6::2]:9107): context deadline exceeded W1004 18:21:44.750599 1 egressip_healthcheck.go:164] Could not connect to worker-00.jechen-1004d.qe.devcluster.openshift.com ([fd01:0:0:6::2]:9107): context deadline exceeded W1004 18:21:49.750640 1 egressip_healthcheck.go:164] Could not connect to worker-00.jechen-1004d.qe.devcluster.openshift.com ([fd01:0:0:6::2]:9107): context deadline exceeded W1004 18:21:54.749998 1 egressip_healthcheck.go:164] Could not connect to worker-00.jechen-1004d.qe.devcluster.openshift.com ([fd01:0:0:6::2]:9107): context deadline exceeded W1004 18:21:59.750512 1 
egressip_healthcheck.go:164] Could not connect to worker-00.jechen-1004d.qe.devcluster.openshift.com ([fd01:0:0:6::2]:9107): context deadline exceeded W1004 18:22:04.749911 1 egressip_healthcheck.go:164] Could not connect to worker-00.jechen-1004d.qe.devcluster.openshift.com ([fd01:0:0:6::2]:9107): context deadline exceeded W1004 18:22:09.750500 1 egressip_healthcheck.go:164] Could not connect to worker-00.jechen-1004d.qe.devcluster.openshift.com ([fd01:0:0:6::2]:9107): context deadline exceeded W1004 18:22:14.750400 1 egressip_healthcheck.go:164] Could not connect to worker-00.jechen-1004d.qe.devcluster.openshift.com ([fd01:0:0:6::2]:9107): context deadline exceeded W1004 18:22:19.750448 1 egressip_healthcheck.go:164] Could not connect to worker-00.jechen-1004d.qe.devcluster.openshift.com ([fd01:0:0:6::2]:9107): context deadline exceeded W1004 18:22:24.749497 1 egressip_healthcheck.go:164] Could not connect to worker-00.jechen-1004d.qe.devcluster.openshift.com ([fd01:0:0:6::2]:9107): context deadline exceeded W1004 18:22:29.750366 1 egressip_healthcheck.go:164] Could not connect to worker-00.jechen-1004d.qe.devcluster.openshift.com ([fd01:0:0:6::2]:9107): context deadline exceeded I1004 18:24:03.020413 1 egressip_healthcheck.go:168] Connected to worker-00.jechen-1004d.qe.devcluster.openshift.com (10.129.2.2:9107) I1004 18:24:09.750273 1 egressip_healthcheck.go:177] Closing connection with worker-00.jechen-1004d.qe.devcluster.openshift.com (10.129.2.2:9107) W1004 18:24:14.749580 1 egressip_healthcheck.go:164] Could not connect to worker-00.jechen-1004d.qe.devcluster.openshift.com ([fd01:0:0:6::2]:9107): context deadline exceeded W1004 18:24:19.750138 1 egressip_healthcheck.go:164] Could not connect to worker-00.jechen-1004d.qe.devcluster.openshift.com ([fd01:0:0:6::2]:9107): context deadline exceeded W1004 18:24:24.750291 1 egressip_healthcheck.go:164] Could not connect to worker-00.jechen-1004d.qe.devcluster.openshift.com ([fd01:0:0:6::2]:9107): context deadline exceeded W1004 18:24:29.750526 1 egressip_healthcheck.go:164] Could not connect to worker-00.jechen-1004d.qe.devcluster.openshift.com ([fd01:0:0:6::2]:9107): context deadline exceeded W1004 18:24:34.750725 1 egressip_healthcheck.go:164] Could not connect to worker-00.jechen-1004d.qe.devcluster.openshift.com ([fd01:0:0:6::2]:9107): context deadline exceeded W1004 18:24:39.750496 1 egressip_healthcheck.go:164] Could not connect to worker-00.jechen-1004d.qe.devcluster.openshift.com ([fd01:0:0:6::2]:9107): context deadline exceeded W1004 18:24:44.750182 1 egressip_healthcheck.go:164] Could not connect to worker-00.jechen-1004d.qe.devcluster.openshift.com ([fd01:0:0:6::2]:9107): context deadline exceeded W1004 18:24:49.750172 1 egressip_healthcheck.go:164] Could not connect to worker-00.jechen-1004d.qe.devcluster.openshift.com ([fd01:0:0:6::2]:9107): context deadline exceeded W1004 18:24:54.749791 1 egressip_healthcheck.go:164] Could not connect to worker-00.jechen-1004d.qe.devcluster.openshift.com ([fd01:0:0:6::2]:9107): context deadline exceeded W1004 18:24:59.749548 1 egressip_healthcheck.go:164] Could not connect to worker-00.jechen-1004d.qe.devcluster.openshift.com ([fd01:0:0:6::2]:9107): context deadline exceeded W1004 18:25:04.750806 1 egressip_healthcheck.go:164] Could not connect to worker-00.jechen-1004d.qe.devcluster.openshift.com ([fd01:0:0:6::2]:9107): context deadline exceeded W1004 18:25:09.750666 1 egressip_healthcheck.go:164] Could not connect to worker-00.jechen-1004d.qe.devcluster.openshift.com 
([fd01:0:0:6::2]:9107): context deadline exceeded W1004 18:25:14.750602 1 egressip_healthcheck.go:164] Could not connect to worker-00.jechen-1004d.qe.devcluster.openshift.com ([fd01:0:0:6::2]:9107): context deadline exceeded W1004 18:25:19.750717 1 egressip_healthcheck.go:164] Could not connect to worker-00.jechen-1004d.qe.devcluster.openshift.com ([fd01:0:0:6::2]:9107): context deadline exceeded I1004 18:28:58.561054 1 egressip_healthcheck.go:168] Connected to worker-00.jechen-1004d.qe.devcluster.openshift.com (10.129.2.2:9107) I1004 18:29:04.749940 1 egressip_healthcheck.go:177] Closing connection with worker-00.jechen-1004d.qe.devcluster.openshift.com (10.129.2.2:9107) W1004 18:29:09.749710 1 egressip_healthcheck.go:164] Could not connect to worker-00.jechen-1004d.qe.devcluster.openshift.com ([fd01:0:0:6::2]:9107): context deadline exceeded W1004 18:29:14.749689 1 egressip_healthcheck.go:164] Could not connect to worker-00.jechen-1004d.qe.devcluster.openshift.com ([fd01:0:0:6::2]:9107): context deadline exceeded
Actual results:
Uses only the v6 mgmt IP address to try to reconnect
Expected results:
Should use both the v4 and v6 addresses when trying to reconnect
Additional info:
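A minimal sketch (hypothetical helper using a plain TCP dial instead of the actual gRPC health-check client) of reconnect logic that tries every management address of the node rather than pinning to the IPv6 one:

package main

import (
    "fmt"
    "net"
    "time"
)

// dialHealthcheck tries each management address (v4 and v6) in turn and
// returns the first connection that succeeds, instead of sticking to a
// single address family.
func dialHealthcheck(mgmtIPs []string, port string, timeout time.Duration) (net.Conn, error) {
    var lastErr error
    for _, ip := range mgmtIPs {
        conn, err := net.DialTimeout("tcp", net.JoinHostPort(ip, port), timeout)
        if err == nil {
            return conn, nil
        }
        lastErr = err
    }
    return nil, fmt.Errorf("could not connect on any address %v: %w", mgmtIPs, lastErr)
}

func main() {
    conn, err := dialHealthcheck([]string{"10.129.2.2", "fd01:0:0:6::2"}, "9107", 5*time.Second)
    if err != nil {
        fmt.Println(err)
        return
    }
    defer conn.Close()
    fmt.Println("connected to", conn.RemoteAddr())
}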
Description of problem:
openshift-install does not detect releaseImage mismatches between cluster-image-set.yaml and registries.conf
Version-Release number of selected component (if applicable):
4.12
How reproducible:
100%
Steps to Reproduce:
1. Create ZTP inputs for image generation where registries.conf does not have any source matching the binary releaseImage (the binary image can be obtained by running "openshift-install version"). You can also set this value in the ZTP manifest cluster-image-set.yaml.
2. Run openshift-install agent create image
Actual results:
Image is generated with no warnings
Expected results:
Image is generated with warning message - "The ImageContentSources configuration in install-config.yaml should have at-least one source field matching the releaseImage value %s", releaseImagePath
Additional info:
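A minimal sketch (hypothetical types and function names, not the installer code) of the kind of check that would emit the warning above:

package main

import (
    "log"
    "strings"
)

// imageContentSource mirrors the shape of an ImageContentSources entry:
// a list of mirrors keyed by a source repository.
type imageContentSource struct {
    Source  string
    Mirrors []string
}

// warnOnReleaseImageMismatch logs a warning when no configured source matches
// the repository part of the release image reference.
func warnOnReleaseImageMismatch(releaseImage string, sources []imageContentSource) {
    repo := releaseImage
    if i := strings.LastIndexAny(repo, "@:"); i > 0 {
        repo = repo[:i] // strip the tag or digest, keeping the repository path
    }
    for _, s := range sources {
        if strings.HasPrefix(repo, s.Source) {
            return
        }
    }
    log.Printf("The ImageContentSources configuration in install-config.yaml should have at-least one source field matching the releaseImage value %s", releaseImage)
}

func main() {
    warnOnReleaseImageMismatch(
        "quay.io/openshift-release-dev/ocp-release:4.12.0-x86_64",
        []imageContentSource{{Source: "registry.example.com/ocp4/openshift4"}},
    )
}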
Description of problem:
acquiring node lock for assigning ip address, node: %s, ip: %sci-ln-g470i52-1d09d-slz7m-worker-westus-6wt7k10.0.128.102
This is a clone of issue OCPBUGS-4491. The following is the description of the original issue:
—
Description of problem:
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
Hi,
Bare Metal IPI provisioning is failing to provision the worker nodes. The metal3-machine-os-downloader InitContainer is getting into a CrashLoopBackOff state because it cannot find the virt-* commands in the container image.
> oc -n openshift-machine-api get pods | grep -v Running
NAME                       READY   STATUS
metal3-fc66f5846-gtq9m     0/7     Init:CrashLoopBackOff
metal3-image-cache-d4qcz   0/1     Init:1/2
metal3-image-cache-djzcf   0/1     Init:1/2
metal3-image-cache-p5mwg   0/1     Init:1/2
> oc -n openshift-machine-api logs deployment/metal3 -c metal3-machine-os-downloader
[omitted]
++ LIBGUESTFS_BACKEND=direct
++ virt-filesystems -a rhcos-412.86.202207142104-0-openstack.x86_64.qcow2 -l
/usr/local/bin/get-resource.sh: line 88: virt-filesystems: command not found
++ grep boot
++ cut -f1 '-d '
+ BOOT_DISK=
++ LIBGUESTFS_BACKEND=direct
++ virt-ls -a rhcos-412.86.202207142104-0-openstack.x86_64.qcow2 -m '' /boot/loader/entries
/usr/local/bin/get-resource.sh: line 90: virt-ls: command not found
+ BOOT_ENTRIES=
+ rm -fr /shared/tmp/tmp.CnCd2E3kxN
OpenShift 4.12.0-ec.0+
Since https://github.com/openshift/ocp-build-data/pull/1757, the ironic-machine-os-downloader container image is built using RHEL9 repositories.
However, following the upstream move of the guestfs tools to a dedicated repository [1], the libguestfs packaging differs between RHEL8 and RHEL9:
Since the Dockerfile specifies only the libguestfs-tools package, the virt-* commands are not installed when using RHEL9 repositories.
A trivial fix is to update the Dockerfile to install the guestfs-tools package instead of the libguestfs-tools package.
Regards,
Denis
Copied from an upstream issue: https://github.com/operator-framework/operator-lifecycle-manager/issues/2830
What did you do?
When attempting to reinstall an operator that uses conversion webhooks by
The resulting InstallPlan enters a failed state with message similar to
error validating existing CRs against new CRD's schema for "devworkspaces.workspace.devfile.io": error listing resources in GroupVersionResource schema.GroupVersionResource{Group:"workspace.devfile.io", Version:"v1alpha1", Resource:"devworkspaces"}: conversion webhook for workspace.devfile.io/v1alpha2, Kind=DevWorkspace failed: Post "https://devworkspace-controller-manager-service.test-namespace.svc:443/convert?timeout=30s": service "devworkspace-controller-manager-service" not found
When the original CSVs are deleted, the operator's main deployment and service are removed, but CRDs are left in-cluster. However, since the service/CA bundle/deployment that serve the conversion webhook are removed, conversion webhooks are broken at that point. Eventually this impacts garbage collection on the cluster as well.
This can be reproduced by installing the DevWorkspace Operator from the Red Hat catalog. (I can provide yamls/upstream images that reproduce as well, if that's helpful). It may be necessary to create a DevWorkspace in the cluster before deletion, e.g. by oc apply -f https://raw.githubusercontent.com/devfile/devworkspace-operator/main/samples/plain.yaml
What did you expect to see?
Operator is able to be reinstalled without removing CRDs and all instances.
What did you see instead? Under which circumstances?
It's necessary to completely remove the operator including CRDs. For our operator (DevWorkspace), this also makes uninstall especially complicated as finalizers are used (so CRDs cannot be deleted if the controller is removed, and the controller cannot be restored by reinstalling)
Environment
operator-lifecycle-manager version: 4.10.24
Kubernetes version information: Kubernetes Version: v1.23.5+012e945 (OpenShift 4.10.24)
Kubernetes cluster kind: OpenShift
This is a clone of issue OCPBUGS-3993. The following is the description of the original issue:
—
Description of problem:
On Openshift on Openstack CI, we are deploying an OCP cluster with an additional network on the workers in install-config.yaml for integration with Openstack Manila.
compute:
- name: worker
  platform:
    openstack:
      zones: []
      additionalNetworkIDs: ['0eeae16f-bbc7-4e49-90b2-d96419b7c30d']
  replicas: 3
As a result, the egressIP annotation includes two interfaces definition:
$ oc get node ostest-hp9ld-worker-0-gdp5k -o json | jq -r '.metadata.annotations["cloud.network.openshift.io/egress-ipconfig"]' | jq .
[
  {
    "interface": "207beb76-5476-4a05-b412-d0cc53ab00a7",
    "ifaddr": {
      "ipv4": "10.46.44.64/26"
    },
    "capacity": {
      "ip": 8
    }
  },
  {
    "interface": "2baf2232-87f7-4ad5-bd80-b6586de08435",
    "ifaddr": {
      "ipv4": "172.17.5.0/24"
    },
    "capacity": {
      "ip": 10
    }
  }
]
According to Huiran Wang, egressIP only works for the primary interface on the node.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-11-22-012345 RHOS-16.1-RHEL-8-20220804.n.1
How reproducible:
Always
Steps to Reproduce:
Deploy cluster with additional Network on the workers
Actual results:
It is possible to select an egressIP network for a secondary interface
Expected results:
Only primary subnet can be chosen for egressIP
Additional info:
https://issues.redhat.com/browse/OCPQE-12968
Failures like:
$ oc login --token=...
Logged into "https://api..." as "..." using the token provided.
Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get projects.project.openshift.io)
break login, which tries to gather information before saving the configuration, including a giant project list.
Ideally login would be able to save the successful login credentials, even when the informative gathering had difficulties. And possibly the informative gathering could be made conditional (--quiet or similar?) so expensive gathering could be skipped in use-cases where the context was not needed.
Description of problem:
Using OLM descriptor components deletes the operand.
Steps to Reproduce:
Description of problem:
$ oc adm must-gather -- gather_ingress_node_firewall [must-gather ] OUT Using must-gather plug-in image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3dec5a08681e11eedcd31f075941b74f777b9187f0e711a498a212f9d96adb2f When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information: ClusterID: 0ef60b50-4378-431d-8ca2-faa5af098274 ClusterVersion: Stable at "4.12.0-0.nightly-2022-09-26-111919" ClusterOperators: clusteroperator/insights is not available (Reporting was not allowed: your Red Hat account is not enabled for remote support or your token has expired: UHC services authentication failed ) because Reporting was not allowed: your Red Hat account is not enabled for remote support or your token has expired: UHC services authentication failed[must-gather ] OUT namespace/openshift-must-gather-fr7kc created [must-gather ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-xx2fh created [must-gather ] OUT pod for plug-in image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3dec5a08681e11eedcd31f075941b74f777b9187f0e711a498a212f9d96adb2f created [must-gather-xvfj4] POD 2022-09-28T16:57:00.887445531Z /bin/bash: /usr/bin/gather_ingress_node_firewall: Permission denied [must-gather-xvfj4] OUT waiting for gather to complete [must-gather-xvfj4] OUT downloading gather output [must-gather-xvfj4] OUT receiving incremental file list [must-gather-xvfj4] OUT ./ [must-gather-xvfj4] OUT [must-gather-xvfj4] OUT sent 27 bytes received 40 bytes 26.80 bytes/sec [must-gather-xvfj4] OUT total size is 0 speedup is 0.00 [must-gather ] OUT namespace/openshift-must-gather-fr7kc deleted [must-gather ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-xx2fh deleted Reprinting Cluster State: When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information: ClusterID: 0ef60b50-4378-431d-8ca2-faa5af098274 ClusterVersion: Stable at "4.12.0-0.nightly-2022-09-26-111919" ClusterOperators: clusteroperator/insights is not available (Reporting was not allowed: your Red Hat account is not enabled for remote support or your token has expired: UHC services authentication failed ) because Reporting was not allowed: your Red Hat account is not enabled for remote support or your token has expired: UHC services authentication failed
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Always
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
Our Prometheus alerts are inconsistent with both upstream and sometimes our own vendor folder. Let's do a clean update run before the next release is branched off.
This is a clone of issue OCPBUGS-3123. The following is the description of the original issue:
—
Description of problem:
Support for tech preview API extensions was introduced in https://github.com/openshift/installer/pull/6336 and https://github.com/openshift/api/pull/1274. In the case of https://github.com/openshift/api/pull/1278, config/v1/0000_10_config-operator_01_infrastructure-TechPreviewNoUpgrade.crd.yaml was introduced, which seems to result in both 0000_10_config-operator_01_infrastructure-TechPreviewNoUpgrade.crd.yaml and 0000_10_config-operator_01_infrastructure-Default.crd.yaml being rendered by the bootstrap. As a result, both CRD manifests are applied during bootstrap; however, one of them (in this case the tech preview CRD) fails to be created. We may need to modify the render command to be aware of feature gates when rendering manifests during bootstrap. I'm also open to hearing other views on how this might work.
Version-Release number of selected component (if applicable):
https://github.com/openshift/cluster-config-operator/pull/269 built and running on 4.12-ec5
How reproducible:
consistently
Steps to Reproduce:
1. Bump the version of OpenShift API to one including a tech preview version of the infrastructure CRD
2. Install OpenShift with the infrastructure manifest modified to incorporate tech preview fields
3. Those fields will not be populated upon installation
Also, checking the logs from bootkube will show both being installed, but one of them fails.
Actual results:
Expected results:
Additional info:
Excerpts from the bootkube log:
Nov 02 20:40:01 localhost.localdomain bootkube.sh[4216]: Writing asset: /assets/config-bootstrap/manifests/0000_10_config-operator_01_infrastructure-TechPreviewNoUpgrade.crd.yaml
Nov 02 20:40:01 localhost.localdomain bootkube.sh[4216]: Writing asset: /assets/config-bootstrap/manifests/0000_10_config-operator_01_infrastructure-Default.crd.yaml
Nov 02 20:41:23 localhost.localdomain bootkube.sh[5710]: Created "0000_10_config-operator_01_infrastructure-Default.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/infrastructures.config.openshift.io -n
Nov 02 20:41:23 localhost.localdomain bootkube.sh[5710]: Skipped "0000_10_config-operator_01_infrastructure-TechPreviewNoUpgrade.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/infrastructures.config.openshift.io -n as it already exists
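A minimal sketch (hypothetical naming convention, not the actual render command) of making manifest rendering feature-gate aware by dropping the CRD variants that belong to other feature sets:

package main

import (
    "fmt"
    "strings"
)

// selectManifests keeps manifests whose filename suffix matches the active
// feature set (e.g. "-Default.crd.yaml" vs "-TechPreviewNoUpgrade.crd.yaml")
// and drops the variants for other feature sets, so bootstrap only applies one
// CRD per resource.
func selectManifests(manifests []string, featureSet string, knownSets []string) []string {
    var out []string
    for _, m := range manifests {
        matchesOther := false
        for _, fs := range knownSets {
            if fs != featureSet && strings.HasSuffix(m, fmt.Sprintf("-%s.crd.yaml", fs)) {
                matchesOther = true
                break
            }
        }
        if !matchesOther {
            out = append(out, m)
        }
    }
    return out
}

func main() {
    manifests := []string{
        "0000_10_config-operator_01_infrastructure-Default.crd.yaml",
        "0000_10_config-operator_01_infrastructure-TechPreviewNoUpgrade.crd.yaml",
    }
    knownSets := []string{"Default", "TechPreviewNoUpgrade"}
    fmt.Println(selectManifests(manifests, "TechPreviewNoUpgrade", knownSets))
}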
Description of problem:
According to OCP 4.11 doc (https://docs.openshift.com/container-platform/4.11/installing/installing_gcp/installing-gcp-account.html#installation-gcp-enabling-api-services_installing-gcp-account), the Service Usage API (serviceusage.googleapis.com) is an optional API service to be enabled. But, the installation cannot succeed if this API is disabled.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-09-25-071630
How reproducible:
Always, if the Service Usage API is disabled in the GCP project.
Steps to Reproduce:
1. Make sure the Service Usage API (serviceusage.googleapis.com) is disabled in the GCP project.
2. Try IPI installation in the GCP project.
Actual results:
The installation would fail finally, without any worker machines launched.
Expected results:
Installation should succeed, or the OCP doc should be updated.
Additional info:
Please see the attached must-gather logs (http://virt-openshift-05.lab.eng.nay.redhat.com/jiwei/jiwei-0926-03-cnxn5/) and the sanity check results. FYI: after enabling the API, without changing anything else, the installation could succeed.
Description of problem:
While running scale tests with ACM provisioning 1200+ SNOs via ZTP, converged flow was enabled. With converged flow, the rate at which clusters begin install is much slower than what was witnessed without converged flow.

Example:
Without converged flow - 1250/1269 SNOs completed install in 3hrs and 11m
With converged flow - 487/1250 SNOs completed install in 10 hours

The test actually hit timeouts, so we don't know exactly how long it took, but you can see we only managed to provision 487 SNOs in 10 hours. The concurrency measurement scripts show that converged flow ran at a concurrency of 68 SNOs installing at a time vs non-converged flow peaking at 507. Something within the converged flow is bottlenecking the SNO installs.
Version-Release number of selected component (if applicable):
Hub/SNO OCP 4.11.8 ACM 2.6.1-DOWNSTREAM-2022-09-08-02-53-38
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
converged flow to match previous provisioning speeds/rates
Additional info:
Must gather will be provided.
Description of problem:
The Alertmanager silence create / edit form got a new "Negative matcher" option in 4.12 (see https://issues.redhat.com/browse/OCPBUGSM-47734). However, there is nothing to explain what this option means and it will likely not be obvious from the label alone unless you are already quite familiar with Alertmanager. After discussion with the docs team, it was decided that adding some explanation in context in the UI would be much better than adding an explanation to the documentation.
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Go to Admin perspective
2. Go to Observe > Alerting > Silences page
3. Click on the Create button ("Negative matcher" option is shown with no explanation)
Actual results:
Expected results:
Additional info:
Catastrophic job runs where high numbers of tests fail are common. There are likely many root causes, but let's try to find one. This is a hard task because it's not "this one test failed, figure out why."
Clusters of failures are more common on certain platforms, it may be fruitful to start with the worst.
NURPs that average > 5 openshift-tests or openshift-tests-upgrade failures:
variants                                              |          avg
------------------------------------------------------+------------------------
{azure,amd64,ovn,upgrade,upgrade-micro,single-node}   | 124.5294117647058824
{azure,amd64,ovn,upgrade,upgrade-minor,single-node}   |  92.9090909090909091
{openstack,amd64,ovn,ha}                              |  49.2105263157894737
{azure,amd64,sdn,ha,fips}                             |  25.6666666666666667
{metal-ipi,amd64,ovn,ha}                              |  24.6000000000000000
{openstack,amd64,ovn,ha,fips}                         |  23.5000000000000000
{azure,amd64,ovn,ha,hypershift}                       |  22.6666666666666667
{s390x,sdn,ha}                                        |  22.5454545454545455
{gcp,amd64,ovn,ha}                                    |  21.5714285714285714
{ppc64le,sdn,ha}                                      |  17.9545454545454545
{metal-ipi,amd64,sdn,ha}                              |  17.6000000000000000
{openstack,amd64,ovn,ha,serial}                       |  15.3333333333333333
{azure,amd64,ovn,ha}                                  |  15.1627906976744186
{promote}                                             |  15.0000000000000000
{aws,amd64,ovn,ha}                                    |