Jump to: Complete Features | Incomplete Features | Complete Epics | Incomplete Epics | Other Complete | Other Incomplete |
Note: this page shows the Feature-Based Change Log for a release
These features were completed when this image was assembled
1. Proposed title of this feature request
Add runbook_url to alerts in the OCP UI
2. What is the nature and description of the request?
If an alert includes a runbook_url label, then it should appear in the UI for the alert as a link.
3. Why does the customer need this? (List the business requirements here)
Customer can easily reach the alert runbook and be able to address their issues.
4. List any affected packages or components.
Rebase openshift-controller-manager to k8s 1.24
When this image was assembled, these features were not yet completed. Therefore, only the Jira Cards included here are part of this release
OLM would have to support a mechanism, similar to podAffinity, that allows multiple architecture values to be specified, enabling it to pin operators to worker nodes of the matching architectures.
Ref: https://github.com/openshift/enhancements/pull/1014
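A minimal sketch of the kind of multi-architecture node affinity this implies (standard Kubernetes nodeAffinity; how OLM wires this into operator deployments is whatever the enhancement settles on):
```yaml
# Sketch only: a nodeAffinity term pinning a workload to nodes whose
# architecture matches one of the values the operator supports.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/arch
          operator: In
          values:
          - amd64
          - arm64
```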
Cut a new release of the OLM API and update OLM API dependency version (go.mod) in OLM package; then
Bring the upstream changes from OLM-2674 to the downstream olm repo.
A/C:
- New OLM API version release
- OLM API dependency updated in OLM Project
- OLM Subscription API changes downstreamed
- OLM Controller changes downstreamed
- Changes manually tested on Cluster Bot
We have a set of images
that should become multiarch images. This should be done both in upstream and downstream.
As a reference, we have built internally those images as multiarch and made them available as
They can be consumed by the Assisted Service pod via the following environment variables:
- name: AGENT_DOCKER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:latest
- name: CONTROLLER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-reporter-rhel8:latest
- name: INSTALLER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-rhel8:latest
As a user, I should be able to configure CSI driver to have a storage topology.
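A minimal sketch of what configuring storage topology can look like on the StorageClass side, using the standard allowedTopologies field (the provisioner and zone values are illustrative placeholders):
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware-sc            # illustrative name
provisioner: example.csi.driver.io   # placeholder driver name
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
  - key: topology.kubernetes.io/zone
    values:
    - zone-a
    - zone-b
```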
We drive OpenShift cross-market customer success and new customer adoption with constant improvements and feature additions to the existing capabilities of our OpenShift Core Networking (SDN and Network Edge). This feature captures that natural progression of the product.
There are definitely grey areas, but in general:
Questions to be addressed:
Goal: Provide queryable metrics and telemetry for cluster routes and sharding in an OpenShift cluster.
Problem: Today we test OpenShift performance and scale with best-guess or anecdotal evidence for the number of routes that our customers use. Best practices for a large number of routes in a cluster is to shard, however we have no visibility with regard to if and how customers are using sharding.
Why is this important? These metrics will inform our performance and scale testing, documented cluster limits, and how customers are using sharding for best practice deployments.
Dependencies (internal and external):
Prioritized epics + deliverables (in scope / not in scope):
Not in scope:
Estimate (XS, S, M, L, XL, XXL):
Previous Work:
Open questions:
Acceptance criteria:
Epic Done Checklist:
Description:
As described in the Design Doc, the following information needs to be exported from the Cluster Ingress Operator:
Design 2 will be implemented as part of this story.
Acceptance Criteria:
Description:
As described in the Metrics to be sent via telemetry section of the Design Doc, the following metrics need to be sent from the OpenShift cluster to Red Hat premises:
The metrics should be allowlisted on the cluster side.
The steps described in Sending metrics via telemetry need to be followed, specifically step 5.
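For illustration, telemetry allowlisting is generally expressed as __name__ match selectors in the cluster-monitoring telemetry configuration; a minimal sketch, assuming that format (the metric names below are hypothetical, the real names come from the Design Doc):
```yaml
# Sketch only: telemeter-style allowlist entries; metric names are hypothetical.
matches:
- '{__name__="cluster:route_metrics_controller_routes_per_shard:min"}'
- '{__name__="cluster:route_metrics_controller_routes_per_shard:max"}'
```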
Depends on CFE-478.
Acceptance Criteria:
In the console-operator repo we need to add the `capability.openshift.io/console` annotation to all the manifests that the operator either contains or creates on the fly.
Manifests are currently present in /bindata and /manifest directories.
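A sketch of what an annotated manifest would look like, using the annotation key named in this story (the exact key/value convention follows the enhancement; the ConfigMap shown is illustrative):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: console-config              # illustrative manifest from /bindata or /manifest
  namespace: openshift-console
  annotations:
    capability.openshift.io/console: "true"   # assumption: value form per the enhancement
```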
Here is an example of the insights-operator change.
Here is the overall enhancement doc.
This is an epic bucket for all activities surrounding the creation of a declarative approach to releasing and maintaining OLM catalogs.
When working on this Epic, it's important to keep in mind this other potentially related Epic: https://issues.redhat.com/browse/OLM-2276
Jira Description
As an OPM maintainer, I want to downstream the PR for OCP 4.12 and backport it to OCP 4.11 so that IIB will NOT be impacted by the changes when it upgrades the OPM version to use the next/future opm upstream release (v1.25.0).
Summary / Background
IIB (the downstream service that manages the indexes) uses the upstream version. If they bump the OPM version to the next/future (v1.25.0) release with this change before the downstream images are updated, then the process to manage the indexes downstream will face issues and it will impact the distributions.
Acceptance Criteria
Definition of Ready
Definition of Done
enhance the veneer rendering to be able to read the input veneer data from stdin, via a pipe, in a manner similar to https://dev.to/napicella/linux-pipes-in-golang-2e8j
then the command could be used in a manner similar to many k8s examples like
```shell
opm alpha render-veneer semver -o yaml < infile > outfile
```
Upstream issue link: https://github.com/operator-framework/operator-registry/issues/1011
Feature Overview
Provide CSI drivers to replace all the intree cloud provider drivers we currently have. These drivers will probably be released as tech preview versions first before being promoted to GA.
Goals
Requirements
Requirement | Notes | isMvp? |
---|---|---|
Framework for CSI driver | TBD | Yes |
Drivers should be available to install both in disconnected and connected mode | | Yes |
Drivers should upgrade from release to release without any impact | | Yes |
Drivers should be installable via CVO (when in-tree plugin exists) | | |
Out of Scope
This work will only cover the drivers themselves; it will not include:
Background, and strategic fit
In a future Kubernetes release (currently 1.21) in-tree cloud provider drivers will be deprecated and replaced with CSI equivalents, so we need the drivers created to continue supporting the ecosystems in an appropriate way.
Assumptions
Customer Considerations
Customers will need to be able to use the storage they want.
Documentation Considerations
This Epic is to track the GA of this feature
As an OCP user, I want images for GCP Filestore CSI Driver and Operator, so that I can install them on my cluster and utilize GCP Filestore shares.
We need to continue to maintain specific areas within storage, this is to capture that effort and track it across releases.
Goals
Requirements
Requirement | Notes | isMvp? |
---|---|---|
Telemetry | | No |
Certification | | No |
API metrics | | No |
Out of Scope
n/a
Background, and strategic fit
With the expected scale of our customer base, we want to keep the load of customer tickets / BZs low.
Assumptions
Customer Considerations
Documentation Considerations
Notes
In progress:
High prio:
Unsorted
Traditionally we did these updates as bugfixes, because we did them after the feature freeze (FF). We are trying a no-feature-freeze approach in 4.12. We will try to do as much as we can before FF, but we're quite sure something will slip past FF as usual.
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
This includes ibm-vpc-node-label-updater!
(Using separate cards for each driver because these updates can be more complicated)
Update all OCP and kubernetes libraries in storage operators to the appropriate version for OCP release.
This includes (but is not limited to):
Operators:
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
There is a new driver release 5.0.0 since the last rebase that includes snapshot support:
https://github.com/kubernetes-sigs/ibm-vpc-block-csi-driver/releases/tag/v5.0.0
Rebase the driver on v5.0.0 and update the deployments in ibm-vpc-block-csi-driver-operator.
There are no corresponding changes in ibm-vpc-node-label-updater since the last rebase.
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update all CSI sidecars to the latest upstream release.
This includes update of VolumeSnapshot CRDs in https://github.com/openshift/cluster-csi-snapshot-controller-operator/tree/master/assets
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
The end of general support for vSphere 6.7 is October 15, 2022, so vSphere 6.7 will be deprecated in 4.11.
We want to encourage vSphere customers to upgrade to vSphere 7 in OCP 4.11, since VMware is ending general support for vSphere 6.7 in October 2022.
We want to set the cluster to Upgradeable=false and have a strong alert pointing to our docs / requirements.
related slack: https://coreos.slack.com/archives/CH06KMDRV/p1647541493096729
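For reference, blocking upgrades is done through the operator's ClusterOperator status; a minimal sketch of the condition we would set (reason and message text are illustrative):
```yaml
# Sketch only: an Upgradeable=False condition reported by the storage ClusterOperator.
status:
  conditions:
  - type: Upgradeable
    status: "False"
    reason: VSphereOlderThan7      # illustrative reason
    message: "vSphere 6.7 is deprecated; upgrade vCenter/ESXi to vSphere 7 before upgrading OCP."
```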
On new installations, we should make the StorageClass created by the CSI operator the default one.
However, we shouldn't do that on an upgrade scenario. The main reason is that users might have set a different quota on the CSI driver Storage Class.
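A minimal sketch of what "making it the default" means in practice, via the standard default-class annotation (the class and provisioner names are illustrative):
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-default-sc                 # illustrative name
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: example.csi.driver.io     # placeholder driver name
```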
Exit criteria:
This Epic tracks the GA of this feature
Epic Goal
On new installations, we should make the StorageClass created by the CSI operator the default one.
However, we shouldn't do that on an upgrade scenario. The main reason is that users might have set a different quota on the CSI driver Storage Class.
Exit criteria:
tldr: three basic claims, the rest is explanation and one example
While bugs are an important metric, fixing bugs is different from investing in maintainability and debuggability. Investing in fixing bugs will help alleviate immediate problems, but doesn't improve the ability to address future problems. You (may) get a code base with fewer bugs, but when you add a new feature, it will still be hard to debug problems and interactions. This pushes a code base towards stagnation where it gets harder and harder to add features.
One alternative is to ask teams to produce ideas for how they would improve future maintainability and debuggability instead of focusing on immediate bugs. This would produce designs that make problem determination, bug resolution, and future feature additions faster over time.
I have a concrete example of one such outcome of focusing on bugs vs. quality. We have resolved many bugs about communication failures with ingress by finding problems with point-to-point network communication. We have fixed the individual bugs, but have not improved the code for future debugging. In so doing, we chase many hard-to-diagnose problems across the stack. The alternative is to create a point-to-point network connectivity capability. This would immediately improve bug resolution and stability (detection) for kuryr, ovs, legacy sdn, network-edge, kube-apiserver, openshift-apiserver, authentication, and console. Bug fixing does not produce the same impact.
We need more investment in our future selves. Saying "teams should reserve this" doesn't seem to be universally effective. Perhaps an approach that directly asks for designs and impacts, and then follows up by placing the items directly in planning and prioritizing them against PM feature requests, would give teams the confidence to invest in these areas and give broad exposure to systemic problems.
Relevant links:
Enable the chaos plugin https://coredns.io/plugins/chaos/ in our CoreDNS configuration so that we can use a DNS query to easily identify what DNS pods are responding to our requests.
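A sketch of the chaos stanza in a Corefile, shown embedded in a ConfigMap for illustration only (in OpenShift the DNS operator renders the actual Corefile, so names and the server block are illustrative):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: dns-default            # illustrative; the DNS operator manages the real ConfigMap
  namespace: openshift-dns
data:
  Corefile: |
    .:5353 {
        chaos                  # enables CH-class queries (e.g. hostname.bind TXT CH) that reveal which DNS pod answered
        errors
        cache 900
        forward . /etc/resolv.conf
    }
```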
Requirement | Notes | isMvp? |
---|---|---|
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
This Section:
This Section: What does the person writing code, testing, documenting need to know? What context can be provided to frame this feature.
Questions to be addressed:
As a console user I want to have the option to:
For Deployments we will add the 'Restart rollout' action button. This action will PATCH the Deployment object's 'spec.template.metadata.annotations' block, by adding 'openshift.io/restartedAt: <actual-timestamp>' annotation. This will restart the deployment, by creating a new ReplicaSet.
For DeploymentConfig we will add 'Retry rollout' action button. This action will PATCH the latest revision of ReplicationController object's 'metadata.annotations' block by setting 'openshift.io/deployment/phase: "New"' and removing openshift.io/deployment.cancelled and openshift.io/deployment.status-reason.
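A minimal sketch of the patch body for the Deployment case, using the annotation named in this story (the timestamp value is illustrative):
```yaml
# Strategic-merge patch applied to the Deployment; adding the annotation to the
# pod template triggers a new ReplicaSet and therefore a rollout restart.
spec:
  template:
    metadata:
      annotations:
        openshift.io/restartedAt: "2022-08-24T12:00:00Z"   # illustrative timestamp
```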
Acceptance Criteria:
BACKGROUND:
OpenShift console will be updated to allow rollout restart deployment from the console itself.
Currently, from the OpenShift console, for the resource "deploymentconfigs" we can only start and pause the rollout, and for the resource "deployment" we can only resume the rollout. Neither resource (deployment & deployment config) has an option to restart the rollout. That is why the customer wants this functionality, to perform the same action from the OpenShift console as from the CLI.
The customer wants developers who are not fluent with the oc tool and terminal utilities to be able to use the console instead of the terminal to restart a deployment, just like they would with the CLI command "oc rollout restart deploy/<deployment-name>".
Usually when developers change the config map that a deployment uses, they have to restart its pods. Currently, the developers have to use the oc rollout restart deployment command. The customer wants a button/menu to perform the same action from the console as well.
Design
Doc: https://docs.google.com/document/d/1i-jGtQGaA0OI4CYh8DH5BBIVbocIu_dxNt3vwWmPZdw/edit
When OCP is performing a cluster upgrade, the user should be notified about this fact.
There are two possibilities for how to surface the cluster upgrade to the users:
AC:
Note: We need to decide whether we want to distinguish this particular notification with a different color. cc'ing Ali Mobrem
Created from: https://issues.redhat.com/browse/RFE-3024
As a developer, I want to make status.HostIP for Pods visible in the Pod details page of the OCP Web Console. Currently there is no way to view the node IP for a Pod in the OpenShift Web Console. When viewing a Pod in the console, the field status.HostIP is not visible.
Acceptance criteria:
4.11 MVP Requirements
Out of scope use cases (that are part of the Kubeframe/factory project):
Questions to be addressed:
As an OpenShift infrastructure owner, I want to deploy OpenShift clusters with dual-stack IPv4/IPv6.
As an OpenShift infrastructure owner, I want to deploy OpenShift clusters with single-stack IPv6.
IPv6 and dual-stack clusters are requested often by customers, especially Telco customers. Working with dual-stack clusters is a requirement for many, but also a transition into single-stack IPv6 clusters, which for some of our users is the final destination.
Karim's work proving how agent-based can deploy IPv6: IPv6 deploy with agent based installer
For dual-stack installations, the agent-cluster-install.yaml must have both an IPv4 and an IPv6 subnet in networking.MachineNetwork, or assisted-service will throw an error. This field is in InstallConfig, but it must be added to agent-cluster-install in its Generate().
For single-stack IPv4 and IPv6 installs, setting the MachineNetwork is not needed, but it also does not cause problems if it's set, so it should be fine to set it at all times.
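A sketch of the dual-stack fields as they could appear in agent-cluster-install.yaml (CIDRs and the metadata name are illustrative; the layout assumes the AgentClusterInstall networking section):
```yaml
apiVersion: extensions.hive.openshift.io/v1beta1
kind: AgentClusterInstall
metadata:
  name: example-agent-cluster-install   # illustrative name
spec:
  networking:
    machineNetwork:
    - cidr: 192.168.111.0/24            # IPv4 subnet (illustrative)
    - cidr: fd2e:6f44:5dd8:c956::/64    # IPv6 subnet (illustrative)
```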
As an OpenShift infrastructure owner, I want to deploy a cluster zero with RHACM or MCE and have the required components installed when the installation is completed
BILLI makes it easier to deploy a cluster zero. BILLI users know at installation time what the purpose of their cluster is when they plan the installation. Day-2 steps are necessary to install operators, and users, especially when automating installations, want to finish the installation flow with their required components already installed.
As a customer, I want to be able to:
so that I can achieve
Description of criteria:
We are only allowing the user to provide extra manifests to install MCE at this time. We are not adding an option to "install mce" on the command line (or UI)
This requires/does not require a design proposal.
This requires/does not require a feature gate.
Set the ClusterDeployment CRD to deploy OpenShift in FIPS mode and make sure that after deployment the cluster is set in that mode
In order to install FIPS-compliant clusters, we need to make sure that installconfig + agentconfig based deployments take into account the FIPS config in installconfig.
This task is about passing the config to agentclusterinstall so that it makes it into the ISO. Once there, AGENT-374 will give it to assisted-service.
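For reference, the knob in install-config.yaml is the top-level fips field; a minimal sketch of the input that needs to flow through to agentclusterinstall and the generated ISO (the metadata name is illustrative):
```yaml
# install-config.yaml (fragment)
apiVersion: v1
metadata:
  name: example-cluster      # illustrative name
fips: true                   # must be propagated into the AgentClusterInstall / ISO
```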
As a user I would like to see all the events that the autoscaler creates, even duplicates. Having the CAO set this flag will allow me to continue to see these events.
We have carried a patch for the autoscaler that would enable the duplication of events. This patch can now be dropped because the upstream added a flag for this behavior in https://github.com/kubernetes/autoscaler/pull/4921
Add GA support for deploying OpenShift to IBM Public Cloud
Complete the existing gaps to make OpenShift on IBM Cloud VPC (Next Gen2) Generally Available.
This epic tracks the changes needed to the ingress operator to support IBM DNS Services for private clusters.
Currently in OpenShift we do not support distributing hotfix packages to cluster nodes. In time-sensitive situations, a RHEL hotfix package can be the quickest route to resolving an issue.
Before we ship OCP CoreOS layering in https://issues.redhat.com/browse/MCO-165 we need to switch the format of what is currently `machine-os-content` to be the new base image.
The overall plan is:
As an OCP CoreOS layering developer, having telemetry data about the number of clusters using osImageURL will help understand how broadly this feature is getting used and improve it accordingly.
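For context, "using osImageURL" means a MachineConfig that overrides the base OS image, which is the usage the telemetry should count; a minimal sketch (name and pullspec are illustrative):
```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-custom-os                          # illustrative name
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  osImageURL: quay.io/example/custom-rhcos:latest    # illustrative pullspec
```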
Acceptance Criteria:
After https://github.com/openshift/os/pull/763 is in the release image, teach the MCO how to use it. This is basically:
Assumption
Doc: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
CNCC was moved to the management cluster and it should use proxy settings defined for the management cluster.
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumption
Run cluster-storage-operator (CSO) + AWS EBS CSI driver operator + AWS EBS CSI driver control-plane Pods in the management cluster, run the driver DaemonSet in the hosted cluster.
More information here: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
As an OCP support engineer I want the same guest cluster storage-related objects in the output of "hypershift dump cluster --dump-guest-cluster" as in "oc adm must-gather", so I can debug storage issues easily.
must-gather collects: storageclasses, persistentvolumes, volumeattachments, csidrivers, csinodes, volumesnapshotclasses, volumesnapshotcontents.
hypershift collects none of this, the relevant code is here: https://github.com/openshift/hypershift/blob/bcfade6676f3c344b48144de9e7a36f9b40d3330/cmd/cluster/core/dump.go#L276
Exit criteria:
As HyperShift Cluster Instance Admin, I want to run cluster-storage-operator (CSO) in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
As HyperShift Cluster Instance Admin, I want to run AWS EBS CSI driver operator + control plane of the CSI driver in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumption
cluster-snapshot-controller-operator is running on the CP.
More information here: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
As an OpenShift developer I want cluster-csi-snapshot-controller-operator to use the existing controllers in library-go, so I don't need to maintain yet more code that does the same thing as library-go.
Note: if this refactoring introduces any new conditions, we must make sure that 4.11 snapshot controller clears them to support downgrade! This will need 4.11 BZ + z-stream update!
Similarly, if some conditions become obsolete / not managed by any controller, they must be cleared by 4.12 operator.
Exit criteria:
As HyperShift Cluster Instance Admin, I want to run cluster-csi-snapshot-controller-operator in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
oc-mirror is a GA product as of OpenShift 4.11.
The goal of this feature is to solve any future customer requests for new features or capabilities in oc-mirror.
Pre-Work Objectives
Since some of our requirements from the ACM team will not be available for the 4.12 timeframe, the team should work on anything we can get done in the scope of the console repo so that when the required items are available in 4.13, we can be more nimble in delivering GA content for the Unified Console Epic.
Overall GA Key Objective
Providing our customers with a single, simplified User Experience (Hybrid Cloud Console) that is extensible, can run locally or in the cloud, and is capable of everything from managing the fleet to deep-diving into a single cluster.
Why do customers want this?
Why do we want this?
Phase 2 Goal: Productization of the united Console
As a developer I would like to disable clusters like *KS that we can't support for multi-cluster (for instance because we can't authenticate). The ManagedCluster resource has a vendor label that we can use to know whether the cluster is supported.
cc Ali Mobrem Sho Weimer Jakub Hadvig
UPDATE: 9/20/22 : we want an allow-list with OpenShift, ROSA, ARO, ROKS, and OpenShiftDedicated
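A sketch of the label the allow-list would key on; the ACM/MCE ManagedCluster resource carries a vendor label, and the apiVersion and values shown here are illustrative:
```yaml
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: example-cluster     # illustrative name
  labels:
    vendor: OpenShift       # allow-list check: OpenShift, ROSA, ARO, ROKS, OpenShiftDedicated
spec:
  hubAcceptsClient: true
```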
Acceptance criteria:
RHEL CoreOS should be updated to RHEL 9.2 sources to take advantage of newer features, hardware support, and performance improvements.
Requirement | Notes | isMvp? |
---|---|---|
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
Questions to be addressed:
PROBLEM
We would like to improve our signal for RHEL9 readiness by increasing internal engineering engagement and external partner engagement on our community OpenShift offering, OKD.
PROPOSAL
Adding OKD to run on SCOS (a CentOS stream for CoreOS) brings the community offering closer to what a partner or an internal engineering team might expect on OCP.
ACCEPTANCE CRITERIA
Image has been switched/included:
DEPENDENCIES
The SCOS build payload.
RELATED RESOURCES
OKD+SCOS proposal: https://docs.google.com/presentation/d/1_Xa9Z4tSqB7U2No7WA0KXb3lDIngNaQpS504ZLrCmg8/edit#slide=id.p
OKD+SCOS work draft: https://docs.google.com/document/d/1cuWOXhATexNLWGKLjaOcVF4V95JJjP1E3UmQ2kDVzsA/edit
Acceptance Criteria
A stable OKD on SCOS is built and available to the community every sprint.
This comes up when installing ipi-on-aws on arm64 with the custom payload build at quay.io/aleskandrox/okd-release:4.12.0-0.okd-centos9-full-rebuild-arm64 that is using SCOS as the machine-os-content image.
```
[root@ip-10-0-135-176 core]# crictl logs c483c92e118d8
2022-08-11T12:19:39+00:00 [cnibincopy] FATAL ERROR: Unsupported OS ID=scos
```
The probable fix has to land on https://github.com/openshift/cluster-network-operator/blob/master/bindata/network/multus/multus.yaml#L41-L53
The MCO should properly report its state in a way that's consistent and able to be understood by customers, troubleshooters, and maintainers alike.
Some customer cases have revealed scenarios where the MCO state reporting is misleading and therefore could be unreliable to base decisions and automation on.
In addition to correcting some incorrect states, the MCO will be enhanced for a more granular view of update rollouts across machines.
The MCO should properly report its state in a way that's consistent and able to be understood by customers, troubleshooters, and maintainers alike.
For this epic, "state" means "what is the MCO doing?" – so the goal here is to try to make sure that it's always known what the MCO is doing.
This includes:
While this probably crosses a little bit into the "status" portion of certain MCO objects, as some state is definitely recorded there, this probably shouldn't turn into a "better status reporting" epic. I'm interpreting "status" to mean "how is it going" so status is maybe a "detail attached to a state".
Exploration here: https://docs.google.com/document/d/1j6Qea98aVP12kzmPbR_3Y-3-meJQBf0_K6HxZOkzbNk/edit?usp=sharing
https://docs.google.com/document/d/17qYml7CETIaDmcEO-6OGQGNO0d7HtfyU7W4OMA6kTeM/edit?usp=sharing
The current property description is:
configuration represents the current MachineConfig object for the machine config pool.
But in a 4.12.0-ec.4 cluster, the actual semantics seem to be something closer to "the most recent rendered config that we completely leveled on". We should at least update the godocs to be more specific about the intended semantics. And perhaps consider adjusting the semantics?
This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were completed when this image was assembled
This story only covers API components. We will create a separate story for other utility functions.
Today we are generating documentation for Console's Dynamic Plugin SDK in frontend/packages/dynamic-plugin-sdk. We are missing ts-doc for a set of hooks and components.
We are generating the markdown from the dynamic-plugin-sdk using `yarn generate-doc`.
Here is the list of the API that the dynamic-plugin-sdk is exposing:
https://gist.github.com/spadgett/0ddefd7ab575940334429200f4f7219a
Acceptance Criteria:
Out of Scope:
An epic we can duplicate for each release to ensure we have a place to catch things we ought to be doing regularly but can tend to fall by the wayside.
As a developer, I want to be able to clean up the css markup after making the css / scss changes required for dark mode and remove any old unused css / scss content.
Acceptance criteria:
1. Proposed title of this feature request
Basic authentication for Helm Chart repository in helmchartrepositories.helm.openshift.io CRD.
2. What is the nature and description of the request?
As of v4.6.9, the HelmChartRepository CRD only supports client TLS authentication through spec.connectionConfig.tlsClientConfig.
3. Why do you need this? (List the business requirements here)
Basic authentication is widely used by many chart repository managers (Nexus OSS, Artifactory, etc.).
The Helm CLI also supports it with the helm repo add command.
https://helm.sh/docs/helm/helm_repo_add/
4. How would you like to achieve this? (List the functional requirements here)
Probably by extending the CRD:
spec:
  connectionConfig:
    username: username
    password:
      secretName: secret-name
The secret namespace should be openshift-config to align with the tlsClientConfig behavior.
5. For each functional requirement listed in question 4, specify how Red Hat and the customer can test to confirm the requirement is successfully implemented.
Trying to pull Helm charts from remote private chart repositories that have disabled anonymous access and offer basic authentication.
E.g.: https://github.com/sonatype/docker-nexus
As an OCP user I would like to be able to install Helm charts from repos added to ODC with the basic authentication fields populated.
We need to support helm installs for Repos that have the basic authentication secret name and namespace.
Updating the ProjectHelmChartRepository CRD, already done in diff story
Supporting the HelmChartRepository CR, this feature will be scoped first to project/namespace scope repos.
If the new fields for basic auth are set in the repo CR, then use those credentials when making API calls to Helm to install/upgrade charts. We will error out if the logged-in user does not have access to the secret referenced by the repo CR. If the basic auth fields are not present, we assume it is not an authenticated repo.
None
NA
I can list, install and update charts on authenticated repos from ODC
Needs Documentation both upstream and downstream
Needs new unit test covering repo auth
Dependencies identified
Blockers noted and expected delivery timelines set
Design is implementable
Acceptance criteria agreed upon
Story estimated
Unknown
Verified
Unsatisfied
ACCEPTANCE CRITERIA
NOTES
ACCEPTANCE CRITERIA
NOTES
This is an API change and we will consider this as a feature request.
https://issues.redhat.com/browse/NE-799 Please check this for more details
No
N/A
We need tests for the ovirt-csi-driver and the cluster-api-provider-ovirt. These tests help us to
Also, having dedicated tests on lower levels with a smaller scope (unit, integration, ...) has the following benefits:
Integration tests need to be implemented according to https://cluster-api.sigs.k8s.io/developer/testing.html#integration-tests using envtest.
As a user, I would like to be informed in an intuitive way, when quotas have been reached in a namespace
Refer below for more details
As a user, In the topology view, I would like to be updated intuitively if any of the deployments have reached quota limits
Refer below for more details
Provide a form driven experience to allow cluster admins to manage the perspectives to meet the ACs below.
We have heard the following requests from customers and developer advocates:
As an admin, I should be able to see a code snippet that shows how to add user perspectives
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, the cluster admin can add user perspectives
To support the cluster-admin to configure the perspectives correctly, the developer console should provide a code snippet for the customization of yaml resource (Console CRD).
Customize Perspective Enhancement PR: https://github.com/openshift/enhancements/pull/1205
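A sketch of the kind of snippet the console could surface, based on the field names proposed in the linked enhancement (the exact schema is an assumption until the API change lands):
```yaml
apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  name: cluster
spec:
  customization:
    perspectives:            # field names per the ODC-6730/6732 proposals (assumption)
    - id: dev
      visibility:
        state: Disabled      # hide the Developer perspective for all users
```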
Previous work:
As an admin, I want to hide user perspective(s) based on the customization.
As an admin, I want to be able to use a form driven experience to hide user perspective(s)
As an admin, I want to hide the admin perspective for non-privileged users or hide the developer perspective for all users
Based on the https://issues.redhat.com/browse/ODC-6730 enhancement proposal, it is required to extend the console configuration CRD to enable the cluster admins to configure this data in the console resource
Previous customization work:
Customers don't want their users to have access to some/all of the items which are available in the Developer Catalog. The request is to change access for the cluster, not per user or persona.
Provide a form-driven experience to allow cluster admins to easily disable the Developer Catalog, or one or more of the sub-catalogs in the Developer Catalog.
Multiple customer requests.
We need to consider how this will work with subcatalogs which are installed by operators: VMs, Event Sources, Event Catalogs, Managed Services, Cloud based services
As an admin, I want to hide/disable access to specific sub-catalogs in the developer catalog or the complete dev catalog for all users across all namespaces.
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, it is required to extend the console configuration CRD to enable the cluster admins to configure this data in the console resource
Extend the "customization" spec type definition for the CRD in the openshift/api project
Previous customization work:
As an admin, I want to hide sub-catalogs in the developer catalog or hide the developer catalog completely based on the customization.
As a cluster-admin, I should be able to see a code snippet that shows how to enable sub-catalogs or the entire dev catalog.
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, the cluster admin can add sub-catalog(s) from the Developer Catalog or the Dev catalog as a whole.
To support the cluster-admin to configure the sub-catalog list correctly, the developer console should provide a code snippet for the customization yaml resource (Console CRD).
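Similarly, a sketch of the sub-catalog snippet the console could show (field names and type identifiers follow the enhancement proposal and are assumptions until the API change merges):
```yaml
apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  name: cluster
spec:
  customization:
    developerCatalog:
      types:                 # assumption: per-type enable/disable list
        state: Disabled
        disabled:
        - HelmChart          # illustrative sub-catalog identifiers
        - EventSource
```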
Previous work:
Add a SOCKS proxy to cluster-network-operator so EgressIP can use gRPC to reach worker nodes.
With the introduction of gRPC as the means for determining the state of a given egress node, HyperShift should
be able to leverage the SOCKS proxy to know the state of each egress node.
References relevant to this work:
1281-network-proxy
https://coreos.slack.com/archives/C01C8502FMM/p1658427627751939
https://github.com/openshift/hypershift/pull/1131/commits/28546dc587dc028dc8bded715847346ff99d65ea
This Epic is here to track the rebase we need to do when kube 1.25 is GA: https://www.kubernetes.dev/resources/release/
Keeping this in mind can help us plan our time better. At the time of writing, GA is planned for August 23.
Rebase help doc: https://docs.google.com/document/d/1h1XsEt1Iug-W9JRheQas7YRsUJ_NQ8ghEMVmOZ4X-0s/edit
We need to rebase cloud network config controller to 1.25 when the kube 1.25 rebase lands.
This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were not completed when this image was assembled
This epic tracks "business as usual" requirements / enhancements / bug fixing of the Insights Operator.
Today the links point at a rule-scoped page, but that page lacks information about recommended resolution. You can click through by cluster ID to your specific cluster and get that recommendation advice, but it would be more convenient and less confusing for customers if we linked directly to the cluster-scoped recommendation page.
We can implement by updating the template here to be:
fmt.Sprintf("https://console.redhat.com/openshift/insights/advisor/clusters/%s?first=%s%%7C%s", clusterID, ruleIDStr, rec.ErrorKey)
or something like that.
unknowns
request is clear, solution/implementation to be further clarified
when defining two proxy endpoints,
apiVersion: console.openshift.io/v1alpha1
kind: ConsolePlugin
metadata:
  ...
  name: forklift-console-plugin
spec:
  displayName: Console Plugin Template
  proxy:
    service:
      basePath: /
I get two proxy endpoints
/api/proxy/plugin/forklift-console-plugin/forklift-inventory
and
/api/proxy/plugin/forklift-console-plugin/forklift-must-gather-api
but both proxy to the `forklift-must-gather-api` service
e.g.
curl to:
[server url]/api/proxy/plugin/forklift-console-plugin/forklift-inventory
will point to the `forklift-must-gather-api` service, instead of the `forklift-inventory` service
Currently the ConsolePlugins API version is v1alpha1. Since we are going GA with dynamic plugins we should be creating a v1 version.
This would require updates in following repositories:
AC:
NOTE: This story does not include the conversion webhook change which will be created as a follow on story
Based on API review CONSOLE-3145, we have decided to deprecate the following APIs:
cc Andrew Ballantyne Bryan Florkiewicz
Currently our `api.md` does not generate docs with "tags" (aka `@deprecated`) – we'll need to add that functionality to the `generate-doc.ts` script. See the code that works for `console-extensions.md`
The extension `console.dashboards/overview/detail/item` doesn't constrain the content to fit the card.
The details-card has an expectation that a <dd> item will be the last item (for spacing between items). Our static details-card items use a component called 'OverviewDetailItem'. This isn't enforced in the extension and can cause undesired padding issues if plugins just do whatever they want.
I feel our approach here should be to make the extension take the props of 'OverviewDetailItem', where 'children' is the new 'component'.
We should have a global notification or the `Console plugins` page (e.g., k8s/cluster/operator.openshift.io~v1~Console/cluster/console-plugins) should alert users when console operator `spec.managementState` is `Unmanaged` as changes to `enabled` for plugins will have no effect.
The console has good error boundary components that are useful for dynamic plugins.
Exposing them will enable the plugins to get the same look and feel for handling React errors as the console.
The minimum requirement right now is to expose the ErrorBoundaryFallbackPage component from
https://github.com/openshift/console/blob/master/frontend/packages/console-shared/src/components/error/fallbacks/ErrorBoundaryFallbackPage.tsx
Following https://coreos.slack.com/archives/C011BL0FEKZ/p1650640804532309, it would be useful for us (network observability team) to have access to ResourceIcon in dynamic-plugin-sdk.
Currently ResourceLink is exported but not ResourceIcon
AC:
To align with https://github.com/openshift/dynamic-plugin-sdk, plugin metadata field dependencies as well as the @console/pluginAPI entry contained within should be made optional.
If a plugin doesn't declare the @console/pluginAPI dependency, the Console release version check should be skipped for that plugin.
During the development of https://issues.redhat.com/browse/CONSOLE-3062, it was determined additional information is needed in order to assist a user when troubleshooting a Failed plugin (see https://github.com/openshift/console/pull/11664#issuecomment-1159024959). As it stands today, there is no data available to the console to relay to the user regarding why the plugin Failed. Presumably, a message should be added to NotLoadedDynamicPlugin to address this gap.
AC: Add `message` property to NotLoadedDynamicPluginInfo type.
Acceptance Criteria: Add missing API docs for *Icon and *Status components in the API docs.
`@openshift-console/plugin-shared` (NPM) is a package that will contain shared components that can be upversioned separately by the Plugins so they can keep core compatibility low but upversion and support more shared components as we need them.
This isn't documented today. We need to do that.
Move `frontend/public/components/nav` to `packages/console-app/src/components/nav` and address any issues resulting from the move.
There will be some expected lint errors relating to cyclical imports. These will require some refactoring to address.
We neither use nor support static plugin nav extensions anymore so we should remove the API in the static plugin SDK and get rid of related cruft in our current nav components.
AC: Remove static plugin nav extensions code. Check the navigation code for any references to the old API.
This enhancement introduces support for provisioning and upgrading heterogeneous architecture clusters in phases.
We need to scan through the compute nodes and build a set of supported architectures from those. Each node on the cluster has a label for architecture, e.g. kubernetes.io/arch=arm64, kubernetes.io/arch=amd64, etc. Based on the set of supported architectures, the console will need to surface only those operators in the OperatorHub which are supported on our nodes.
AC:
@jpoulin is good to ask about heterogeneous clusters.
This enhancement introduces support for provisioning and upgrading heterogeneous architecture clusters in phases.
We need to scan through the compute nodes and build a set of supported architectures from those. Each node on the cluster has a label for architecture, e.g. `kubernetes.io/arch=arm64`, `kubernetes.io/arch=amd64`, etc. Based on the set of supported architectures, the console will need to surface only those operators in the OperatorHub which are supported on our nodes. Each operator's PackageManifest contains labels that indicate the operator's supported architectures, e.g. `operatorframework.io/arch.s390x: supported`. An operator can be supported on multiple architectures.
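For illustration, the per-architecture labels on a PackageManifest look like this (the label keys follow the operatorframework.io convention cited above; which ones are set depends on the operator):
```yaml
metadata:
  labels:
    operatorframework.io/arch.amd64: supported
    operatorframework.io/arch.arm64: supported
    operatorframework.io/arch.s390x: supported
    operatorframework.io/os.linux: supported
```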
AC:
OS and arch filtering: https://github.com/openshift/console/blob/2ad4e17d76acbe72171407fc1c66ca4596c8aac4/frontend/packages/operator-lifecycle-manager/src/components/operator-hub/operator-hub-items.tsx#L49-L86
@jpoulin is good to ask about heterogeneous clusters.
As a user, I want to be able to:
so that I can achieve
Description of criteria:
Detail about what is specifically not being delivered in the story
This is a follow up Epic to https://issues.redhat.com/browse/MCO-144, which aimed to get in-place upgrades for Hypershift. This epic aims to capture additional work to focus on using CoreOS/OCP layering into Hypershift, which has benefits such as:
- removing or reducing the need for ignition
- maintaining feature parity between self-driving and managed OCP models
- adding additional functionality such as hotfixes
Right now in https://github.com/openshift/hypershift/pull/1258 you can only perform one upgrade at a time. Multiple upgrades will break due to controller logic
Properly create logic to handle manifest creation/updates and deletion, so the logic is more bulletproof
Currently not implemented, and will require the MCD hypershift mode to be adjusted to handle disruptionless upgrades like regular MCD
Changes made in METAL-1 open up opportunities to improve our handling of images by cleaning up redundant code that generates extra work for the user and extra load for the cluster.
We only need to run the image cache DaemonSet if there is a QCOW URL to be mirrored (effectively this means a cluster installed with 4.9 or earlier). We can stop deploying it for new clusters installed with 4.10 or later.
Currently, the image-customization-controller relies on the image cache running on every master to provide the shared hostpath volume containing the ISO and initramfs. The first step is to replace this with a regular volume and an init container in the i-c-c pod that extracts the images from machine-os-images. We can use the copy-metal -image-build flag (instead of -all used in the shared volume) to provide only the required images.
Once i-c-c has its own volume, we can switch the image extraction in the metal3 Pod's init container to use the -pxe flag instead of -all.
The machine-os-images init container for the image cache (not the metal3 Pod) can be removed. The whole image cache deployment is now optional and need only be started if provisioningOSDownloadURL is set (and in fact should be deleted if it is not).
We plan to build Ironic Container Images using RHEL9 as base image in OCP 4.12
This is required because the ironic components have abandoned support for CentOS Stream 8 and Python 3.6/3.7 upstream during the most recent development cycle that will produce the stable Zed release, in favor of CentOS Stream 9 and Python 3.8/3.9
More info on RHEL8 to RHEL9 transition in OCP can be found at https://docs.google.com/document/d/1N8KyDY7KmgUYA9EOtDDQolebz0qi3nhT20IOn4D-xS4
update ironic software to pick up latest bug fixes
Description of the problem:
Cluster installation fails if the installation disk has LVM on RAID:
Host: test-infra-cluster-3cc862c9-master-0, reached installation stage Failed: failed executing nsenter [--target 1 --cgroup --mount --ipc --pid -- mdadm --stop /dev/md0], Error exit status 1, LastOutput "mdadm: Cannot get exclusive access to /dev/md0:Perhaps a running process, mounted filesystem or active volume group?"
How reproducible:
100%
Steps to reproduce:
1. Install a cluster where the master nodes have a disk with LVM on RAID (reproduced using test: https://gitlab.cee.redhat.com/ocp-edge-qe/kni-assisted-installer-auto/-/blob/master/api_tests/test_disk_cleanup.py#L97)
Actual results:
Installation failed
Expected results:
Installation success
Description of the problem:
When running assisted-installer on a machine where there is more than one volume group per physical volume, only the first volume group will be cleaned up. This leads to problems later and will lead to errors such as:
Failed - failed executing nsenter [--target 1 --cgroup --mount --ipc --pid -- pvremove /dev/sda -y -ff], Error exit status 5, LastOutput "Can't open /dev/sda exclusively. Mounted filesystem?
How reproducible:
Set up a VM with more than one volume group per physical volume. As an example, look at the following sample from a customer cluster.
List block devices:
/usr/bin/lsblk -o NAME,MAJ:MIN,SIZE,TYPE,FSTYPE,KNAME,MODEL,UUID,WWN,HCTL,VENDOR,STATE,TRAN,PKNAME
NAME MAJ:MIN SIZE TYPE FSTYPE KNAME MODEL UUID WWN HCTL VENDOR STATE TRAN PKNAME
loop0 7:0 125.9G loop xfs loop0 c080b47b-2291-495c-8cc0-2009ebc39839
loop1 7:1 885.5M loop squashfs loop1
sda 8:0 894.3G disk sda INTEL SSDSC2KG96 0x55cd2e415235b2db 1:0:0:0 ATA running sas
|-sda1 8:1 250M part sda1 0x55cd2e415235b2db sda
|-sda2 8:2 750M part ext2 sda2 3aa73c72-e342-4a07-908c-a8a49767469d 0x55cd2e415235b2db sda
|-sda3 8:3 49G part xfs sda3 ffc3ccfe-f150-4361-8ae5-f87b17c13ac2 0x55cd2e415235b2db sda
|-sda4 8:4 394.2G part LVM2_member sda4 Ua3HOc-Olm4-1rma-q0Ug-PtzI-ZOWg-RJ63uY 0x55cd2e415235b2db sda
`-sda5 8:5 450G part LVM2_member sda5 W8JqrD-ZvaC-uNK9-Y03D-uarc-Tl4O-wkDdhS 0x55cd2e415235b2db sda
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sda5
sdb 8:16 894.3G disk sdb INTEL SSDSC2KG96 0x55cd2e415235b31b 1:0:1:0 ATA running sas
`-sdb1 8:17 894.3G part LVM2_member sdb1 6ETObl-EzTd-jLGw-zVNc-lJ5O-QxgH-5wLAqD 0x55cd2e415235b31b sdb
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdb1
sdc 8:32 894.3G disk sdc INTEL SSDSC2KG96 0x55cd2e415235b652 1:0:2:0 ATA running sas
`-sdc1 8:33 894.3G part LVM2_member sdc1 pBuktx-XlCg-6Mxs-lddC-qogB-ahXa-Nd9y2p 0x55cd2e415235b652 sdc
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdc1
sdd 8:48 894.3G disk sdd INTEL SSDSC2KG96 0x55cd2e41521679b7 1:0:3:0 ATA running sas
`-sdd1 8:49 894.3G part LVM2_member sdd1 exVSwU-Pe07-XJ6r-Sfxe-CQcK-tu28-Hxdnqo 0x55cd2e41521679b7 sdd
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdd1
sr0 11:0 989M rom iso9660 sr0 Virtual CDROM0 2022-06-17-18-18-33-00 0:0:0:0 AMI running usb
Now run the assisted installer and try to install an SNO node on this machine, you will find that the installation will fail with a message that indicates that it could not exclusively access /dev/sda
Actual results:
The installation will fail with a message that indicates that it could not exclusively access /dev/sda
Expected results:
The installation should proceed and the cluster should start to install.
Suspected Cases
https://issues.redhat.com/browse/AITRIAGE-3809
https://issues.redhat.com/browse/AITRIAGE-3802
https://issues.redhat.com/browse/AITRIAGE-3810
Same thing as we've had in assisted-service: we sometimes fail to install golangci-lint by fetching release artifacts from GitHub directly. That's usually because the same IP address (the CI build cluster) tries to access GitHub at a high rate, leading to 429 (too many requests).
The way we fixed it for assisted-service is changing installation to use quay.io image that is already built with the binary.
Example for such a failure: https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_release/30788/rehearse-30788-periodic-ci-openshift-assisted-installer-agent-release-ocm-2.6-subsystem-test-periodic/1551879759036682240
Filter for all recent failures: https://search.ci.openshift.org/?search=golangci%2Fgolangci-lint+crit+unable+to+find&maxAge=168h&context=1&type=build-log&name=.*assisted.*&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job
Section 5 of PRD: https://docs.google.com/document/d/1fF-Ajdzc9EDDg687FzTrX577hvY9NdK0/edit#heading=h.gjdgxs
Testing and collaboration with NVIDIA: https://docs.google.com/spreadsheets/d/1LHY-Af-2kQHVwtW4aVdHnmwZLTiatiyf-ySffC8O5NM/edit#gid=0
Deploying Nvidia Patches: https://docs.google.com/document/d/1yR4lphjPKd6qZ9sGzZITl0wH1r4ykfMKPjUnlzvWji4/edit#
This is the continuation of https://issues.redhat.com/browse/NHE-273 but now the focus is on the remaining flows.
Description of problem:
check_pkt_length cannot be offloaded without 1) sFlow offload patches in Open vSwitch and 2) hardware driver support. Since 1) will not be done anytime soon, we need a workaround for the check_pkt_length issue.
Version-Release number of selected component (if applicable):
4.11/4.12
How reproducible:
Always
Steps to Reproduce:
1. Any flow that has check_pkt_len():
   5-b: Pod -> NodePort Service traffic (Pod Backend - Different Node)
   6-b: Pod -> NodePort Service traffic (Host Backend - Different Node)
   4-b: Pod -> Cluster IP Service traffic (Host Backend - Different Node)
   10-b: Host Pod -> Cluster IP Service traffic (Host Backend - Different Node)
   11-b: Host Pod -> NodePort Service traffic (Pod Backend - Different Node)
   12-b: Host Pod -> NodePort Service traffic (Host Backend - Different Node)
Actual results:
Poor performance due to upcalls when check_pkt_len() is not supported.
Expected results:
Good performance.
Additional info:
https://docs.google.com/spreadsheets/d/1LHY-Af-2kQHVwtW4aVdHnmwZLTiatiyf-ySffC8O5NM/edit#gid=670206692
As a developer building container images on OpenShift
I want to specify that my build should run without elevated privileges
So that builds do not run as root from the host's perspective with elevated privileges
No QE required for Dev Preview. OpenShift regression testing will verify that existing behavior is not impacted.
We will need to document how to enable this feature, with sufficient warnings regarding Dev Preview.
This likely warrants an OpenShift blog post, potentially?
Make sure that the CSI driver automatically updates oVirt credentials when they are updated in OpenShift.
In the CSI driver operator we should add the withSecretHashAnnotation call from library-go, like this: https://github.com/openshift/aws-ebs-csi-driver-operator/blob/53ed27b2a0eaa655338da180a79897855b366ac7/pkg/operator/starter.go#L138
We have been running into a number of problems with configure-ovs and nodeip-configuration selecting different interfaces in OVNK deployments. This causes connectivity issues, so we need some way to ensure that everything uses the same interface/IP.
Currently configure-ovs runs before nodeip-configuration, but since nodeip-configuration is the source of truth for IP selection regardless of CNI plugin, I think we need to look at swapping that order. That way configure-ovs could look at what nodeip-configuration chose and not have to implement its own interface selection logic.
I'm targeting this at 4.12 because even though there's probably still time to get it in for 4.11, changing the order of boot services is always a little risky and I'd prefer to do it earlier in the cycle so we have time to tease out any issues that arise. We may need to consider backporting the change though since this has been an issue at least back to 4.10.
As an admin, I would like openshift-* namespaces with an operator to be labeled with security.openshift.io/scc.podSecurityLabelSync=true to ensure the continual functioning of operators without manual intervention. The label should only be applied to openshift-* namespaces with an operator (the presence of a ClusterServiceVersion resource) IF the label is not already present. This automation will help smooth functioning of the cluster and avoid frivolous operational events.
Context: As part of the PSA migration period, Openshift will ship with the "label sync'er" - a controller that will automatically adjust PSA security profiles in response to the workloads present in the namespace. We can assume that not all operators (produced by Red Hat, the community or ISVs) will have successfully migrated their deployments in response to upstream PSA changes. The label sync'er will sync, by default, any namespace not prefixed with "openshift-", of which an explicit label (security.openshift.io/scc.podSecurityLabelSync=true) is required for sync.
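The end state on a qualifying namespace would look roughly like this (the namespace name is illustrative; the label key is the one named above):
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-example-operator     # illustrative openshift-* namespace containing a CSV
  labels:
    security.openshift.io/scc.podSecurityLabelSync: "true"
```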
A/C:
- OLM operator has been modified (downstream only) to label any unlabelled "openshift-" namespace in which a CSV has been created
- If a labeled namespace containing at least one non-copied csv becomes unlabelled, it should be relabelled
- The implementation should be done in a way to eliminate or minimize subsequent downstream sync work (it is ok to make slight architectural changes to the OLM operator in the upstream to enable this)
Goal
Provide an indication that advanced features are used
Problem
Today, customers and RH don't have the information on the actual usage of advanced features.
Why is this important?
Prioritized Scenarios
In Scope
1. Add a boolean variable in our telemetry to mark if the customer is using advanced features (PV encryption, encryption with KMS, external mode).
Not in Scope
Integrate with subscription watch - will be done by the subscription watch team with our help.
Customers
All
Customer Facing Story
As a compliance manager, I should be able to easily see whether all my clusters are using the right number of subscriptions.
What does success look like?
A clear indication in subscription watch for ODF usage (either essential or advanced).
Link to main epic: https://issues.redhat.com/browse/RHSTOR-3173
We migrated most components as part of https://issues.redhat.com/browse/RHSTOR-2165.
We now have a few components remaining, roughly 15 to 20%. This epic targets:
1) Add support for in-tree modal launcher
This epic tracks network tooling improvements for 4.12
A new framework and process should be developed to make sharing network tools with devs, support, and customers convenient. We are going to add some tools for OVN troubleshooting before ovn-k goes default, some tools that we got from customer cases, and some more to help analyze and debug collected logs based on the stable must-gather/sosreport format we now get thanks to the 4.11 Epic.
Our estimation for this Epic is 1 engineer * 2 Sprints
WHY:
This epic is important to help improve the time it takes our customers and our team to understand an issue within the cluster.
A focus of this epic is to develop tools to quickly allow debugging of a problematic cluster. This is crucial for the engineering team to help us scale. We want to provide a tool to our customers to help lower the cognitive burden to get at a root cause of an issue.
Alert if any of the OVN controllers have been disconnected from the southbound database for a period of time, using the metric ovn_controller_southbound_database_connected.
The metric updates every 2 minutes so please be mindful of this when creating the alert.
If the controller is disconnected for 10 minutes, fire an alert.
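A minimal sketch of such an alert as a PrometheusRule; the object name, alert name, severity and namespace are illustrative rather than the final CNO implementation, and the expression assumes the metric reports 1 when connected and 0 when disconnected:
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: ovn-controller-alerts              # illustrative name
  namespace: openshift-ovn-kubernetes
spec:
  groups:
  - name: ovn-controller.rules
    rules:
    - alert: OVNControllerSouthboundDatabaseDisconnected   # illustrative alert name
      # the metric only updates every 2 minutes, so a 10m "for" comfortably spans several updates
      expr: ovn_controller_southbound_database_connected == 0
      for: 10m
      labels:
        severity: warning
      annotations:
        summary: ovn-controller has been disconnected from the southbound database for 10 minutes.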
DoD: Merged to CNO and tested by QE
This section includes Jira cards that are not linked to either an Epic or a Feature. These tickets were completed when this image was assembled
When the CSISnapshot capability is disabled, all CSI driver operators are Degraded. For example, the AWS EBS CSI driver operator during installation:
18:12:16.895: Some cluster operators are not ready: storage (Degraded=True AWSEBSCSIDriverOperatorCR_AWSEBSDriverStaticResourcesController_SyncError: AWSEBSCSIDriverOperatorCRDegraded: AWSEBSDriverStaticResourcesControllerDegraded: "volumesnapshotclass.yaml" (string): the server could not find the requested resource AWSEBSCSIDriverOperatorCRDegraded: AWSEBSDriverStaticResourcesControllerDegraded: ) Ginkgo exit error 1: exit with code 1}
Version-Release number of selected component (if applicable):
4.12.nightly
The reason is that cluster-csi-snapshot-controller-operator does not create VolumeSnapshotClass CRD, which AWS EBS CSI driver operator expects to exist.
CSI driver operators must skip VolumeSnapshotClass creation if the CRD does not exist.
Description of problem:
The following log message does not substitute its format arguments; the node name and IP are concatenated after the format verbs instead: acquiring node lock for assigning ip address, node: %s, ip: %sci-ln-g470i52-1d09d-slz7m-worker-westus-6wt7k10.0.128.102
Description of problem:
When running node-density (245 pods/node) on a 120 node cluster, we see that there is a huge spike (~22s) in Avg pod-latency. When the spike occurs we see all the ovnkube-master pods go through a restart.
The restart happens because the ovnkube-master pods panic with:
2022-08-10T04:04:44.494945179Z panic: reflect: call of reflect.Value.Len on ptr Value
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-08-09-114621
How reproducible:
Steps to Reproduce:
1. Run node-density on a 120 node cluster
Actual results:
Spike observed in pod-latency graph ~22s
Expected results:
Steady pod-latency graph ~4s
Additional info:
Description of problem:
According to OCP 4.11 doc (https://docs.openshift.com/container-platform/4.11/installing/installing_gcp/installing-gcp-account.html#installation-gcp-enabling-api-services_installing-gcp-account), the Service Usage API (serviceusage.googleapis.com) is an optional API service to be enabled. But, the installation cannot succeed if this API is disabled.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-09-25-071630
How reproducible:
Always, if the Service Usage API is disabled in the GCP project.
Steps to Reproduce:
1. Make sure the Service Usage API (serviceusage.googleapis.com) is disabled in the GCP project. 2. Try IPI installation in the GCP project.
Actual results:
The installation eventually fails, without any worker machines launched.
Expected results:
Installation should succeed, or the OCP doc should be updated.
Additional info:
Please see the attached must-gather logs (http://virt-openshift-05.lab.eng.nay.redhat.com/jiwei/jiwei-0926-03-cnxn5/) and the sanity check results. FYI if enabling the API, and without changing anything else, the installation could succeed.
Both `[sig-devex][Feature:ImageEcosystem][mysql][Slow] openshift mysql image Creating from a template should instantiate the template [apigroup:apps.openshift.io]` and `[sig-devex][Feature:ImageEcosystem][mariadb][Slow] openshift mariadb image Creating from a template should instantiate the template [apigroup:image.openshift.io][apigroup:operator.openshift.io][apigroup:config.openshift.io][apigroup:apps.openshift.io]` are repeatedly failing over multiple PRs.
More links in https://github.com/openshift/origin/pull/27502#issuecomment-1304613482
Opening this issue to temporarily skip the broken tests to unblock merging PRs in openshift/origin:master
More details in https://issues.redhat.com/browse/OCPBUGS-3339
This is a clone of issue OCPBUGS-1428. The following is the description of the original issue:
—
Description of problem:
When using an OperatorGroup attached to a service account, AND if there is a secret present in the namespace, the operator installation will fail with the message: the service account does not have any API secret sa=testx-ns/testx-sa This issue seems similar to https://bugzilla.redhat.com/show_bug.cgi?id=2094303 - which was resolved in 4.11.0 - however, the new element now, is that the presence of a secret in the namespace is causing the issue. The name of the secret seems significant - suggesting something somewhere is depending on the order that secrets are listed in. For example, If the secret in the namespace is called "asecret", the problem does not occur. If it is called "zsecret", the problem always occurs.
"zsecret" is not a "kubernetes.io/service-account-token". The issue I have raised here relates to Opaque secrets - zsecret is an Opaque secret. The issue may apply to other types of secrets, but specifically my issue is that when there is an opaque secret present in the namespace, the operator install fails as described. I aught to be allowed to have an opaque secret present in the namespace where I am installing the operator.
Version-Release number of selected component (if applicable):
4.11.0 & 4.11.1
How reproducible:
100% reproducible
Steps to Reproduce:
1.Create namespace: oc new-project testx-ns 2. oc apply -f api-secret-issue.yaml
Actual results:
Expected results:
Additional info:
API YAML:
cat api-secret-issue.yaml
apiVersion: v1
kind: Secret
metadata:
name: zsecret
namespace: testx-ns
annotations:
kubernetes.io/service-account.name: testx-sa
type: Opaque
stringData:
mykey: mypass
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: testx-sa
namespace: testx-ns
---
kind: OperatorGroup
apiVersion: operators.coreos.com/v1
metadata:
name: testx-og
namespace: testx-ns
spec:
serviceAccountName: "testx-sa"
targetNamespaces:
- testx-ns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: testx-role
namespace: testx-ns
rules:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: testx-rolebinding
namespace: testx-ns
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: testx-role
subjects:
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: etcd-operator
namespace: testx-ns
spec:
channel: singlenamespace-alpha
installPlanApproval: Automatic
name: etcd
source: community-operators
sourceNamespace: openshift-marketplace
This is a clone of issue OCPBUGS-4305. The following is the description of the original issue:
—
Description of problem:
Please add an option to DISABLE debug in ironic-api. Presently it is enabled by default and there is no way to disable it or reduce log level
https://github.com/metal3-io/ironic-image/blob/main/ironic-config/ironic.conf.j2#L3
Version-Release number of selected component (if applicable): none
How reproducible: Every time
Steps to Reproduce:
Please check source code here: https://github.com/metal3-io/ironic-image/blob/main/ironic-config/ironic.conf.j2#L3
It is enabled by default and there is no way to disable it or reduce log level
Actual results:
Please check Case: 03371411, the log file grew to 409 GB
Expected results: Need a way to disable debug
Additional info: Case 03371411. A cluster must gather and log file can be found in the case.
Description of problem:
For some reason, some of the packets on a DNS conversation to the openshift-dns/dns-default service cluster IP don't get properly denatted, i.e. the reply packet has the pod IP as source IP instead of the service IP.
Version-Release number of selected component (if applicable):
4.10.25
How reproducible:
Sometimes
Steps to Reproduce:
1. Try to resolve DNS with cluster DNS
Actual results:
DNS timeout. Reply packets have the pod IP instead of the service IP the request was sent to.
Expected results:
DNS working.
Additional info:
I'll elaborate about this in the attachments, but I could find nothing wrong in nbdb or any OVN-Kubernetes or OVN logs that rang a bell. The only interesting thing I could see was that `conntrack -L` had no reference to this conversation, so it makes kind of sense that the reply packet address is not translated back to the service IP one, but I have not been able to find the reason of this. The query/response packets can be correlated via DNS transaction ID.
Description of problem:
When the user runs:
openshift-install agent create image --dir cluster-manifests
but the manifests are either missing from cluster-manifests or incomplete, the error generated by the tool leads users to believe that they are missing some tool dependency:
ERROR failed to write asset (Agent Installer ISO) to disk: image reader not available
Version-Release number of selected component (if applicable):4.11.0
How reproducible: 100%
Steps to Reproduce:
1. rm -fr /tmp/cluster-manifests && mkdir /tmp/cluster-manifests
2.openshift-install agent create image --dir cluster-manifests
Actual results:
ERROR failed to write asset (Agent Installer ISO) to disk: image reader not available
Expected results:
Error: Missing manifests in the specified cluster manifest directory: "/tmp/cluster-manifests"
Additional info:
Description of problem:
The default dns-default pod is missing the "target.workload.openshift.io/management:" annotation. As a result, when the workload partitioning feature is enabled on SNO, this pod's resources will not get mutated and pinned to the reserved cpuset. This is a regression from 4.10. Pod spec from 4.10.17: Annotations: ... resources.workload.openshift.io/dns: {"cpushares": 51} resources.workload.openshift.io/kube-rbac-proxy: {"cpushares": 10} target.workload.openshift.io/management: {"effect":"PreferredDuringScheduling"}
Version-Release number of selected component (if applicable):
4.11.0
How reproducible:
100%
Steps to Reproduce:
1. Install a SNO and check the annotation 2. 3.
Actual results:
Expected results:
Additional info:
Description of problem:
The pod count for maxUnavailable of 2 or more is displayed as singular
Version-Release number of selected component (if applicable):
4.12.0-ec.2
How reproducible:
Steps to Reproduce:
1. Create a Deployment 2. Add a PDB to the Deployment and set the maxUnavailable to 2 3. Goto Deployment details page
Actual results:
The Max unavailable 6 of 3 pod
Expected results:
Should be Max unavailable 6 of 3 pods
Additional info:
Description of problem:
The current version of openshift/router vendors Kubernetes 1.24 packages. OpenShift 4.12 is based on Kubernetes 1.25.
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Always
Steps to Reproduce:
1. Check https://github.com/openshift/router/blob/release-4.12/go.mod
Actual results:
Kubernetes packages (k8s.io/api, k8s.io/apimachinery, and k8s.io/client-go) are at version v0.24.0.
Expected results:
Kubernetes packages are at version v0.25.0 or later.
Additional info:
Using old Kubernetes API and client packages brings risk of API compatibility issues.
Description of problem:
Invalid documentation link in knative-plugin README https://github.com/openshift/console/blob/master/frontend/packages/knative-plugin/README.md
Description of problem:
When a user tries to run `oc debug`, they end up getting errors about pod security labels:
Ensure the target namespace has the appropriate security level set or consider creating a dedicated privileged namespace using: "oc create ns <namespace> -o yaml | oc label -f - security.openshift.io/scc.podSecurityLabelSync=false pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/audit=privileged pod-security.kubernetes.io/warn=privileged". Original error: pods "ip-10-0-129-209ec2internal-debug" is forbidden: violates PodSecurity "restricted:latest": host namespaces (hostNetwork=true, hostPID=true, hostIPC=true), privileged (container "container-00" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (container "container-00" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "container-00" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volume "host" uses restricted volume type "hostPath"), runAsNonRoot != true (pod or container "container-00" must set securityContext.runAsNonRoot=true), runAsUser=0 (container "container-00" must not set runAsUser=0), seccompProfile (pod or container "container-00" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") command failed, 3 retries left
This happens since https://docs.openshift.com/container-platform/4.11/authentication/understanding-and-managing-pod-security-admission.html
Fixing it requires the user to run something like
oc create ns fips-check -o yaml | \
oc label -f - \
security.openshift.io/scc.podSecurityLabelSync=false \
pod-security.kubernetes.io/enforce=privileged \
pod-security.kubernetes.io/audit=privileged \
pod-security.kubernetes.io/warn=privileged
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Always
Steps to Reproduce:
1. Try to run `oc debug node/....` in a new namespace
Actual results:
Error message
Expected results:
oc debug works without the user having to perform additional steps. If namespace is omitted, perhaps oc debug could create a temporary one with the correct pod security labels?
Additional info:
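For reference, a declarative equivalent of the workaround command above, i.e. a namespace carrying the same pod-security labels (the namespace name is just the example used in the command):
apiVersion: v1
kind: Namespace
metadata:
  name: fips-check
  labels:
    # disable the label sync'er so the explicit labels below are kept
    security.openshift.io/scc.podSecurityLabelSync: "false"
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/warn: privileged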
The job was in terrible shape even before, but it looks like upgrades started failing more consistently around Oct 2-4.
It looks like we fully lose the API (service unavailable), no artifacts get gathered, and mass disruption is reported.
We do not have a well-defined method to find all of these just yet; identifying one would be a good first step.
Description of problem:
Users on a disconnected cluster with a proxy could not import a Devfile (from GitHub).
The API call /api/devfile/ takes 30 seconds until it fails with 504 Gateway timeout.
Version-Release number of selected component (if applicable):
This might happen since 4.8.
Tested so far only on 4.12.0-0.nightly-2022-09-07-112008
How reproducible:
Always
Steps to Reproduce:
Actual results:
Expected results:
Additional info:
The console Pod log contains this error:
E0909 10:28:18.448680 1 devfile-handler.go:74] Failed to parse devfile: failed to populateAndParseDevfile: Get "https://registry.devfile.io/devfiles/go": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
This bug is a backport clone of [Bugzilla Bug 2092811](https://bugzilla.redhat.com/show_bug.cgi?id=2092811). The following is the description of the original bug:
—
+++ This bug was initially created as a clone of Bug #1926943 +++
The customer is facing this issue:
I0530 05:19:11.481797 1 vsphere_check.go:220] CheckDefaultDatastore failed: defaultDatastore "FI-HML-DC2-CONT-1" in vSphere configuration: datastore FI-HML-DC2-CONT-1: datastore name is too long: escaped volume path "var-lib-kubelet-plugins-kubernetes.io-vsphere\\x2dvolume-mounts\\x5bFI\\x2dHML\\x2dDC2\\x2dCONT\\x2d1\\x5d\\x2000000000\\x2d0000\\x2d0000\\x2d0000\\x2d000000000000-fi\\x2dhmy\\x2dsas\\x2dprod\\x2dnp868\\x2d\\x2dpvc\\x2d00000000\\x2d0000\\x2d0000\\x2d0000\\x2d000000000000.vmdk" must be under 255 characters, got 255
Looks like the bug has resurfaced.
Name: DNS
Description: Please change the "DNS" component to be a subcomponent "DNS" of the "Networking" component.
Component: change to "Networking".
Subcomponent: change to "DNS".
Existing fields (default assignee, default QA contact, default CC email list, etc.) should remain the same as they currently are.
Default Assignee: aos-network-edge-staff@bot.bugzilla.redhat.com
Default QA Contact: hongli@redhat.com
Default CC List: aos-network-edge-staff@bot.bugzilla.redhat.com
Additional Notes:
I filled in "Default CC email list" because the form validation would not permit me to omit it. However, it can be left empty in Bugzilla (it is currently empty).
If possible, we would like this change to be done prior to the Bugzilla-to-Jira migration to avoid the need to make the change after the migration.
A related slack thread: here
The error:
which: no kustomize in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go/bin:/go/bin)
+ curl -L --retry 5 https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize%2Fv4.3.0/kustomize_v4.3.0_linux_amd64.tar.gz
+ tar -zx -C /usr/bin/
[curl progress meter output omitted]
Warning: Problem : HTTP error. Will retry in 300 seconds. 5 retries left.
gzip: stdin: not in gzip format
tar: Child died with signal 13
tar: Error is not recoverable: exiting now
A related job search: https://search.ci.openshift.org/?search=gzip%3A+stdin%3A+not+in+gzip+format&maxAge=336h&context=1&type=junit&name=assisted&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job
And possibly other alerts. Declaring namespace labels on alerts makes it easy to find the source or affected resource, as described here. But because Insights alerts are based on metrics exported by the cluster-version operator, they inherit source information from the CVO, and end up looking like:
ALERTS{alertname="SimpleContentAccessNotAvailable", alertstate="firing", condition="SCAAvailable", endpoint="metrics", instance="10.58.57.116:9099", job="cluster-version-operator", name="insights", namespace="openshift-cluster-version", pod="cluster-version-operator-5d8579fb58-p5hfn", prometheus="openshift-monitoring/k8s", reason="NotFound", receive="true", service="cluster-version-operator", severity="info"}
Adding namespace: openshift-insights to the labels block for InsightsDisabled and SimpleContentAccessNotAvailable would avoid this confusion.
You might also want to clear the job and service labels as irrelevant source information. And you might want to clear the pod label to avoid churning alerts when the CVO rolls out a new pod. You can get the label clearing by wrapping the expr with max without (job, pod, service) (...) or similar.
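A sketch of what the adjusted alerting rule could look like; the expression is reconstructed from the label set shown above and the shipped CVO rule may differ:
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: insights-alerts            # illustrative; the real rules ship with the CVO
  namespace: openshift-insights
spec:
  groups:
  - name: insights
    rules:
    - alert: SimpleContentAccessNotAvailable
      # max without (...) clears the CVO source labels so the alert does not churn on pod rollouts
      expr: max without (job, pod, service) (cluster_operator_conditions{name="insights", condition="SCAAvailable", reason="NotFound"} == 0)
      labels:
        namespace: openshift-insights
        severity: info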
Description of problem:
csi-snapshot-controller-operator logs:
The operator is missing some RBAC:
2022-09-07T12:13:09.457345845Z F0907 12:13:09.457292 1 cmd.go:138] unable to load configmap based request-header-client-ca-file: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:openshift-cluster-storage-operator:csi-snapshot-controller-operator" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
Surprisingly, standalone OCP jobs work well.
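A minimal sketch of RBAC that would satisfy the error above; the names follow the error message, although the actual fix may grant this differently (for example by binding to the built-in extension-apiserver-authentication-reader Role in kube-system):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: csi-snapshot-controller-operator-authentication-reader   # illustrative name
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["extension-apiserver-authentication"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: csi-snapshot-controller-operator-authentication-reader   # illustrative name
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: csi-snapshot-controller-operator-authentication-reader
subjects:
- kind: ServiceAccount
  name: csi-snapshot-controller-operator
  namespace: openshift-cluster-storage-operator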
Description of the problem:
Noticed there were no thread IDs in the assisted-installer logs when debugging a 240 node cluster deployment with MCE (slack thread), making it difficult to debug.
How reproducible: 100%
Steps to reproduce:
1. Create cluster using assisted service and start the install
2. Look at the assisted-installer logs
Actual results:
Logs look like
time="2022-07-14T16:17:31Z" level=info msg="Start complete installation step, with params success: true, error info: "
Expected results: The thread ID would also be printed so we can understand which thread each message came from
Setting setReportCaller to true will also help
Description of problem:
The default catalogSources in the openshift-4.12 payload are using the 4.11 image tag
Version-Release number of selected component (if applicable):
4.12.0
How reproducible:
Always
Steps to Reproduce:
1. Install a 4.12 OpenShift cluster 2. Inspect the default catalogSource image tags.
Actual results:
The default catalogSources reference the 4.11 image tags.
Expected results:
The default catalogSources reference the 4.12 image tags.
Additional info:
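For illustration, the kind of change expected in one of the default CatalogSources; the index image shown is the commonly used redhat-operators index and should be treated as an example rather than the exact payload content:
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: redhat-operators
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  # expected: the index tag matches the release, i.e. v4.12 rather than v4.11
  image: registry.redhat.io/redhat/redhat-operator-index:v4.12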
https://github.com/openshift/origin/pull/27444 was intended to move the scaling test out of serial to its own test suite, but it added it to parallel – meaning it's running in all our normal upgrade jobs, causing them to frequently fail with repeating pathological events as well as greatly increasing their run time.
See https://github.com/openshift/origin/pull/27444#discussion_r991296925 for more info
Description of problem:
Installing a single-node cluster on AWS and then enabling TechPreview causes a cluster error. The CMA and CAPI CMA shouldn't be on the same port.
Version-Release number of selected component (if applicable):
4.11.9
How reproducible:
always
Steps to Reproduce:
1.Launch 4.11.9 single node cluster on AWS liuhuali@Lius-MacBook-Pro huali-test % oc get clusterversion NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.11.9 True False 34m Cluster version is 4.11.9 liuhuali@Lius-MacBook-Pro huali-test % oc get co NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.11.9 True False False 31m baremetal 4.11.9 True False False 49m cloud-controller-manager 4.11.9 True False False 52m cloud-credential 4.11.9 True False False 53m cluster-autoscaler 4.11.9 True False False 48m config-operator 4.11.9 True False False 50m console 4.11.9 True False False 37m csi-snapshot-controller 4.11.9 True False False 49m dns 4.11.9 True False False 48m etcd 4.11.9 True False False 47m image-registry 4.11.9 True False False 43m ingress 4.11.9 True False False 86s insights 4.11.9 True False False 43m kube-apiserver 4.11.9 True False False 43m kube-controller-manager 4.11.9 True False False 47m kube-scheduler 4.11.9 True False False 44m kube-storage-version-migrator 4.11.9 True False False 50m machine-api 4.11.9 True False False 44m machine-approver 4.11.9 True False False 49m machine-config 4.11.9 True False False 49m marketplace 4.11.9 True False False 48m monitoring 4.11.9 True False False 56s network 4.11.9 True False False 52m node-tuning 4.11.9 True False False 49m openshift-apiserver 4.11.9 True False False 72s openshift-controller-manager 4.11.9 True False False 39m openshift-samples 4.11.9 True False False 43m operator-lifecycle-manager 4.11.9 True False False 49m operator-lifecycle-manager-catalog 4.11.9 True False False 49m operator-lifecycle-manager-packageserver 4.11.9 True False False 104s service-ca 4.11.9 True False False 50m storage 4.11.9 True False False 49m liuhuali@Lius-MacBook-Pro huali-test % oc get node NAME STATUS ROLES AGE VERSION ip-10-0-137-222.us-east-2.compute.internal Ready master,worker 53m v1.24.0+dc5a2fd 2.Enable TechPreview spec: featureSet: TechPreviewNoUpgrade liuhuali@Lius-MacBook-Pro huali-test % oc edit featuregate featuregate.config.openshift.io/cluster edited 3.Check the cluster liuhuali@Lius-MacBook-Pro huali-test % oc get pod -n openshift-cloud-controller-manager NAME READY STATUS RESTARTS AGE aws-cloud-controller-manager-5888c85fc6-28tgt 1/1 Running 12 (10m ago) 55m liuhuali@Lius-MacBook-Pro huali-test % oc get clusterversion NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.11.9 True False 111m Error while reconciling 4.11.9: the workload openshift-cluster-machine-approver/machine-approver-capi has not yet successfully rolled out liuhuali@Lius-MacBook-Pro huali-test % oc get co NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.11.9 False False False 9m44s OAuthServerRouteEndpointAccessibleControllerAvailable: Get "https://oauth-openshift.apps.huliu-aws411arn2.qe.devcluster.openshift.com/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)... baremetal 4.11.9 True False False 128m cloud-controller-manager 4.11.9 True False False 131m cloud-credential 4.11.9 True False False 133m cluster-api 4.11.9 True False False 41m cluster-autoscaler 4.11.9 True False False 128m config-operator 4.11.9 True False False 129m console 4.11.9 False True False 10m DeploymentAvailable: 0 replicas available for console deployment... 
csi-snapshot-controller 4.11.9 True False False 4m52s dns 4.11.9 True False False 128m etcd 4.11.9 True False False 127m image-registry 4.11.9 True False False 123m ingress 4.11.9 True False False 3m15s insights 4.11.9 True False False 122m kube-apiserver 4.11.9 True False False 123m kube-controller-manager 4.11.9 True False False 126m kube-scheduler 4.11.9 True False False 124m kube-storage-version-migrator 4.11.9 True False False 129m machine-api 4.11.9 True False False 124m machine-approver 4.11.9 True False False 128m machine-config 4.11.9 True False False 129m marketplace 4.11.9 True False False 128m monitoring 4.11.9 True False False 5m1s network 4.11.9 True False False 131m node-tuning 4.11.9 True False False 128m openshift-apiserver 4.11.9 True False False 23s openshift-controller-manager 4.11.9 True False False 118m openshift-samples 4.11.9 True False False 122m operator-lifecycle-manager 4.11.9 True False False 128m operator-lifecycle-manager-catalog 4.11.9 True False False 128m operator-lifecycle-manager-packageserver 4.11.9 True False False 2m43s service-ca 4.11.9 True False False 129m storage 4.11.9 True False False 69m liuhuali@Lius-MacBook-Pro huali-test %
Actual results:
Cluster is broken; CMA is complaining, message: '0/1 nodes are available: 1 node(s) didn''t have free ports for the requested pod ports. preemption: 0/1 nodes are available: 1 node(s) didn''t have free ports for the requested pod ports.'
Expected results:
Cluster should be healthy
Additional info:
Talked with dev here https://coreos.slack.com/archives/GE2HQ9QP4/p1666178083034159?thread_ts=1666176493.224399&cid=GE2HQ9QP4 Must-Gather https://drive.google.com/file/d/1Q7Ddnhbg3Cq4ptBA2ycJnGKK01As1JcF/view?usp=sharing If TechPreview is enabled during installation on a single-node cluster, the cluster installation fails.
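For reference, the feature gate change from step 2 expressed as a manifest (equivalent to the oc edit shown above):
apiVersion: config.openshift.io/v1
kind: FeatureGate
metadata:
  name: cluster
spec:
  featureSet: TechPreviewNoUpgrade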
Description of problem:
Disconnected IPI OCP 4.10.22 cluster install on baremetal fails when hostname of master nodes does not include "master"
Version-Release number of selected component (if applicable): 4.10.22
How reproducible: Perform disconnected IPI install of OCP 4.10.22 on bare metal with master nodes that do not contain the text "master"
Steps to Reproduce:
Perform disconnected IPI install of OCP 4.10.22 on bare metal with master nodes that do not contain the text "master"
Actual results: master nodes do not come up.
Expected results: master nodes should come up even though the text "master" is not in their hostname.
Additional info:
Disconnected IPI OCP 4.10.22 cluster install on baremetal fails when hostname of master nodes does not include "master"
The code for the cluster-baremetal-operator at the following link:
The following condition is concerning:
if strings.Contains(bmh.Name, "master") && len(bmh.Spec.BootMACAddress) > 0
The packages reveal that bmh.Name references the name inside the metadata of the BMH object.
Should a customer have masters with names that do not include the text "master", the above condition can never be true, and so the following append never happens:
macs = append(macs, bmh.Spec.BootMACAddress)
When installing an OCP cluster with the worker nodes' VM type specified as high performance, some of the configuration settings of those VMs do not match the configuration settings a high performance VM should have.
Specific configurations that do not match are described in subtasks.
Default configuration settings of high performance VMs:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/virtual_machine_management_guide/index?extIdCarryOver=true&sc_cid=701f2000001Css5AAC#Configuring_High_Performance_Virtual_Machines_Templates_and_Pools
When installing an OCP cluster with the worker nodes' VM type specified as high performance, both manual and automatic migration are enabled in those VMs.
However, high performance worker VMs are created with the engine's default values, whereas only manual migration should be enabled.
Default configuration settings of high performance VMs:
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/virtual_machine_management_guide/index?extIdCarryOver=true&sc_cid=701f2000001Css5AAC#Configuring_High_Performance_Virtual_Machines_Templates_and_Pools
How reproducible: 100%
How to reproduce:
1. Create install-config.yaml with a vmType field and set it to high performance, i.e.:
apiVersion: v1
baseDomain: basedomain.com
compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform:
    ovirt:
      affinityGroupsNames: []
      vmType: high_performance
  replicas: 2
...
2. Run installation
./openshift-install create cluster --dir=resources --log-level=debug
3. Check worker VM's configuration in the RHV webconsole.
Expected:
Only manual migration (under Host) should be enabled.
Actual:
Manual and automatic migration is enabled.
We rely on the user providing accurate information about the MAC addresses in the agent-config, because at the point we read it we haven't seen the hosts yet. However, if the user gets this wrong then chaos may ensue.
Once inventory is available, we should validate that the user has not:
and fail the install if they have.
Description of problem:
The icon color of Alerts in the Topology list view should be based on alert type.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. Create a deployment 2. Create a resource quota so that the quota alert will be visible in the topology list page 3. Navigate to the topology list page
Actual results:
Alert icon color is black and white. See the screenshots
Expected results:
Alert icon color should be based on alert type.
Additional info:
Description of problem:
https://github.com/openshift/api/pull/1186 - https://issues.redhat.com/browse/CONSOLE-3069 promoted ConsolePlugin CRD to v1. The PR introduces also a conversion webhook from v1alpha1 to v1. In new CRD version I18n ConsolePluginI18n is marked as optional. The conversion webhook will not set a default valid ("Lazy"/"Preload") value writing the v1 object and a v1 object completely omitting spec.i18n will be accepted we no valid default value as well. On the other side, at garbage collection time the object will be stuck forever due to the lack of a valid value for spec.i18n.loadType Example, create a v1 ConsolePlugin object: cat <<EOF | oc apply -f - apiVersion: console.openshift.io/v1 kind: ConsolePlugin metadata: name: test472 spec: backend: service: basePath: / name: test472-service namespace: kubevirt-hyperconverged port: 9443 type: Service displayName: Test 472 Plugin EOF Delete it in foreground mode: stirabos@t14s:~$ oc delete consoleplugin test472 --timeout=30s --cascade='foreground' -v 7 I1011 18:20:03.255605 31610 loader.go:372] Config loaded from file: /home/stirabos/.kube/config I1011 18:20:03.266567 31610 round_trippers.go:463] DELETE https://api.ci-ln-krdzphb-72292.gcp-2.ci.openshift.org:6443/apis/console.openshift.io/v1/consoleplugins/test472 I1011 18:20:03.266581 31610 round_trippers.go:469] Request Headers: I1011 18:20:03.266588 31610 round_trippers.go:473] Accept: application/json I1011 18:20:03.266594 31610 round_trippers.go:473] Content-Type: application/json I1011 18:20:03.266600 31610 round_trippers.go:473] User-Agent: oc/4.11.0 (linux/amd64) kubernetes/fcf512e I1011 18:20:03.266606 31610 round_trippers.go:473] Authorization: Bearer <masked> I1011 18:20:03.688569 31610 round_trippers.go:574] Response Status: 200 OK in 421 milliseconds consoleplugin.console.openshift.io "test472" deleted I1011 18:20:03.688911 31610 round_trippers.go:463] GET https://api.ci-ln-krdzphb-72292.gcp-2.ci.openshift.org:6443/apis/console.openshift.io/v1/consoleplugins?fieldSelector=metadata.name%3Dtest472 I1011 18:20:03.688919 31610 round_trippers.go:469] Request Headers: I1011 18:20:03.688928 31610 round_trippers.go:473] Authorization: Bearer <masked> I1011 18:20:03.688935 31610 round_trippers.go:473] Accept: application/json I1011 18:20:03.688941 31610 round_trippers.go:473] User-Agent: oc/4.11.0 (linux/amd64) kubernetes/fcf512e I1011 18:20:03.840103 31610 round_trippers.go:574] Response Status: 200 OK in 151 milliseconds I1011 18:20:03.840825 31610 round_trippers.go:463] GET https://api.ci-ln-krdzphb-72292.gcp-2.ci.openshift.org:6443/apis/console.openshift.io/v1/consoleplugins?fieldSelector=metadata.name%3Dtest472&resourceVersion=175205&watch=true I1011 18:20:03.840848 31610 round_trippers.go:469] Request Headers: I1011 18:20:03.840884 31610 round_trippers.go:473] Accept: application/json I1011 18:20:03.840907 31610 round_trippers.go:473] User-Agent: oc/4.11.0 (linux/amd64) kubernetes/fcf512e I1011 18:20:03.840928 31610 round_trippers.go:473] Authorization: Bearer <masked> I1011 18:20:03.972219 31610 round_trippers.go:574] Response Status: 200 OK in 131 milliseconds error: timed out waiting for the condition on consoleplugins/test472 and in kube-controller-manager logs we see: 2022-10-11T16:25:32.192864016Z I1011 16:25:32.192788 1 garbagecollector.go:501] "Processing object" object="test472" objectUID=0cc46a01-113b-4bbe-9c7a-829a97d6867c kind="ConsolePlugin" virtual=false 2022-10-11T16:25:32.282303274Z I1011 16:25:32.282161 1 garbagecollector.go:623] remove DeleteDependents finalizer for item 
[console.openshift.io/v1/ConsolePlugin, namespace: , name: test472, uid: 0cc46a01-113b-4bbe-9c7a-829a97d6867c] 2022-10-11T16:25:32.304835330Z E1011 16:25:32.304730 1 garbagecollector.go:379] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"console.openshift.io/v1", Kind:"ConsolePlugin", Name:"test472", UID:"0cc46a01-113b-4bbe-9c7a-829a97d6867c", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:""}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:true, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:true, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference(nil)}: ConsolePlugin.console.openshift.io "test472" is invalid: spec.i18n.loadType: Unsupported value: "": supported values: "Preload", "Lazy"
Version-Release number of selected component (if applicable):
OCP 4.12.0 ec4
How reproducible:
100%
Steps to Reproduce:
1. cat <<EOF | oc apply -f -
apiVersion: console.openshift.io/v1
kind: ConsolePlugin
metadata:
  name: test472
spec:
  backend:
    service:
      basePath: /
      name: test472-service
      namespace: kubevirt-hyperconverged
      port: 9443
    type: Service
  displayName: Test 472 Plugin
EOF
2. oc delete consoleplugin test472 --timeout=30s --cascade='foreground' -v 7
Actual results:
2022-10-11T16:25:32.192864016Z I1011 16:25:32.192788 1 garbagecollector.go:501] "Processing object" object="test472" objectUID=0cc46a01-113b-4bbe-9c7a-829a97d6867c kind="ConsolePlugin" virtual=false 2022-10-11T16:25:32.282303274Z I1011 16:25:32.282161 1 garbagecollector.go:623] remove DeleteDependents finalizer for item [console.openshift.io/v1/ConsolePlugin, namespace: , name: test472, uid: 0cc46a01-113b-4bbe-9c7a-829a97d6867c] 2022-10-11T16:25:32.304835330Z E1011 16:25:32.304730 1 garbagecollector.go:379] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"console.openshift.io/v1", Kind:"ConsolePlugin", Name:"test472", UID:"0cc46a01-113b-4bbe-9c7a-829a97d6867c", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:""}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:1, readerWait:0}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:true, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, beingDeleted:true, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:0, readerWait:0}, owners:[]v1.OwnerReference(nil)}: ConsolePlugin.console.openshift.io "test472" is invalid: spec.i18n.loadType: Unsupported value: "": supported values: "Preload", "Lazy"
Expected results:
Object correctly deleted
Additional info:
The issue doesn't happen with --cascade='background' which is the default on the CLI client
GitHub rate limit failures for the UPI image when downloading govc.
Description of problem:
When the cluster install finished, the wait-for install-complete command didn't exit as expected.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. Get the latest agent-installer and build the image:
git clone https://github.com/openshift/installer.git
cd installer/
hack/build.sh
Edit the agent-config and install-config yaml files, then create the agent.iso image:
OPENSHIFT_INSTALL_RELEASE_IMAGE_OVERRIDE=quay.io/openshift-release-dev/ocp-release:4.12.0-ec.3-x86_64 bin/openshift-install agent create image --log-level debug
2. Install the SNO cluster:
virt-install --connect qemu:///system -n control-0 -r 33000 --vcpus 8 --cdrom ./agent.iso --disk pool=installer,size=120 --boot uefi,hd,cdrom --os-variant=rhel8.5 --network network=default,mac=52:54:00:aa:aa:aa --wait=-1
3. Run 'bin/openshift-install agent wait-for bootstrap-complete --log-level debug'; the command finished as expected.
4. After bootstrap completion, run 'bin/openshift-install agent wait-for install-complete --log-level debug'; the command didn't finish as expected.
Actual results:
Expected results:
Additional info:
Description of problem:
The OCP v4.9.31 cluster didn't have the search domain in /etc/resolv.conf, which was there in the v4.8.29 OCP cluster. This was observed on all the nodes of the v4.9.31 cluster.
~~~
OpenShift 4.9.31
sh-4.4# cat /etc/resolv.conf
OpenShift 4.8.29
ENV: OpenStack IAD2, IPI installation. Connected cluster.
Version-Release number of selected component (if applicable):
OCP v4.9.31
How reproducible:
Always
Steps to Reproduce:
1. Install IPI cluster on OpenStack IAD2 platform having cluster version 4.9.31
2. Debug to any of the node(master/worker)
3. Check and confirm the missing search domain on all nodes of the cluster.
Actual results:
The search domain was missing when checked in `/etc/resolv.conf` file on all nodes of the cluster causing serious issues in the cluster.
Expected results:
The installer should embed the search domain in /etc/resolv.conf file on all nodes of the cluster.
Additional info:
set -eo pipefail
DISPATCHER_FILE="/etc/NetworkManager/dispatcher.d/30-resolv-prepender"
DOMAINS="$(grep -E '\s*DOMAINS=.*iad2.dc.paas.redhat.com' $DISPATCHER_FILE \
grep -oE '[a-z0-9]*.dev.iad2.dc.paas.redhat.com' \ |
tr '\n' ' ')" |
>&2 echo "IT-PaaS: overwriting search domains in /etc/resolv.conf with: $DOMAINS"
sed -e "/^search/d" \
-e "/Generated by/c# Generated by KNI resolv prepender NM dispatcher script \nsearch $DOMAINS" \
/etc/resolv.conf > /etc/resolv.tmp
mv /etc/resolv.tmp /etc/resolv.conf
~~~
This is a clone of issue OCPBUGS-3123. The following is the description of the original issue:
—
Description of problem:
Support for tech preview API extensions was introduced in https://github.com/openshift/installer/pull/6336 and https://github.com/openshift/api/pull/1274. In the case of https://github.com/openshift/api/pull/1278, config/v1/0000_10_config-operator_01_infrastructure-TechPreviewNoUpgrade.crd.yaml was introduced, which seems to result in both 0000_10_config-operator_01_infrastructure-TechPreviewNoUpgrade.crd.yaml and 0000_10_config-operator_01_infrastructure-Default.crd.yaml being rendered by the bootstrap. As a result, both CRDs are created during bootstrap; however, one of them (in this case the tech preview CRD) fails to be created. We may need to modify the render command to be aware of feature gates when rendering manifests during bootstrap. I'm also open to hearing other views on how this might work.
Version-Release number of selected component (if applicable):
https://github.com/openshift/cluster-config-operator/pull/269 built and running on 4.12-ec5
How reproducible:
consistently
Steps to Reproduce:
1. bump the version of OpenShift API to one including a tech preview version of the infrastructure CRD 2. install openshift with the infrastructure manifest modified to incorporate tech preview fields 3. those fields will not be populated upon installation Also, checking the logs from bootkube will show both being installed, but one of them fails.
Actual results:
Expected results:
Additional info:
Excerpts from bootkube log:
Nov 02 20:40:01 localhost.localdomain bootkube.sh[4216]: Writing asset: /assets/config-bootstrap/manifests/0000_10_config-operator_01_infrastructure-TechPreviewNoUpgrade.crd.yaml
Nov 02 20:40:01 localhost.localdomain bootkube.sh[4216]: Writing asset: /assets/config-bootstrap/manifests/0000_10_config-operator_01_infrastructure-Default.crd.yaml
Nov 02 20:41:23 localhost.localdomain bootkube.sh[5710]: Created "0000_10_config-operator_01_infrastructure-Default.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/infrastructures.config.openshift.io -n
Nov 02 20:41:23 localhost.localdomain bootkube.sh[5710]: Skipped "0000_10_config-operator_01_infrastructure-TechPreviewNoUpgrade.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/infrastructures.config.openshift.io -n as it already exists
For a disconnected installation, we should not be able to provision machines successfully with publicIP:true. This was the behavior up to 4.11 and the 4.12 nightlies from around 17th Aug, but machinesets with publicIP:true set are now allowed to create machines.
Issue reproduced on - Cluster version - 4.12.0-0.nightly-2022-08-23-223922
It is always reproducible.
Steps :
Create machineset using yaml with
{"spec":{"providerSpec":{"value":{"publicIP": true}}}}
The machineset was created successfully and the machine provisioned successfully (see the providerSpec sketch below).
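For reference, a trimmed sketch of where that field sits in an Azure MachineSet; required fields such as replicas and selector are omitted, and only the providerSpec path from the patch above is shown:
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: example-machineset         # illustrative; other required fields omitted
  namespace: openshift-machine-api
spec:
  template:
    spec:
      providerSpec:
        value:
          # field from the patch above; on a disconnected install this should be rejected
          publicIP: true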
This seems to be a regression bug; refer to https://bugzilla.redhat.com/show_bug.cgi?id=1889620
Here is the must gather log - https://drive.google.com/file/d/1UXjiqAx7obISTxkmBsSBuo44ciz9HD1F/view?usp=sharing
Here is the test successfully ran for 4.11 , for exactly same profile and machine creation failed with InvalidConfiguration Error- https://mastern-jenkins-csb-openshift-qe.apps.ocp-c1.prod.psi.redhat.com/job/ocp-common/job/Runner/575822/console
We can confirm this is a disconnected cluster using the ImageContentSourcePolicy below; note the many mirrors in use:
oc get ImageContentSourcePolicy image-policy-aosqe -o yaml
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  creationTimestamp: "2022-08-24T09:08:47Z"
  generation: 1
  name: image-policy-aosqe
  resourceVersion: "34648"
  uid: 20e45d6d-e081-435d-b6bb-16c4ca21c9d6
spec:
  repositoryDigestMirrors:
  - mirrors:
    - miyadav-2408a.mirror-registry.qe.azure.devcluster.openshift.com:6001/olmqe
    source: quay.io/olmqe
  - mirrors:
    - miyadav-2408a.mirror-registry.qe.azure.devcluster.openshift.com:6001/openshifttest
    source: quay.io/openshifttest
  - mirrors:
    - miyadav-2408a.mirror-registry.qe.azure.devcluster.openshift.com:6001/openshift-qe-optional-operators
    source: quay.io/openshift-qe-optional-operators
  - mirrors:
    - miyadav-2408a.mirror-registry.qe.azure.devcluster.openshift.com:6002
    source: registry.redhat.io
  - mirrors:
    - miyadav-2408a.mirror-registry.qe.azure.devcluster.openshift.com:6002
    source: registry.stage.redhat.io
  - mirrors:
    - miyadav-2408a.mirror-registry.qe.azure.devcluster.openshift.com:6002
    source: brew.registry.redhat.io
Description of problem:
According to https://issues.redhat.com/browse/OCPBUGS-705 (thanks to Junyun for sharing the test env/results for the install part), we need the fix in vsphere-problem-detector as well. Currently it reports the following privileges as missing when using a pre-existing folder and/or resource pool with ReadOnly permission:
1. vcenter cluster set ReadOnly permission: I0902 10:07:50.324782 1 vsphere_check.go:244] CheckComputeClusterPermissions:jima-permission-q84s8-worker-86gd4 failed: missing privileges for compute cluster workloads: Resource.AssignVMToPool, VApp.AssignResourcePool, VApp.Import, VirtualMachine.Config.AddNewDisk
2. datacenter set ReadOnly permission: I0902 08:09:19.462001 1 vsphere_check.go:225] CheckAccountPermissions failed: missing privileges for datacenter OCP-DC: Resource.AssignVMToPool, VApp.Import, VirtualMachine.Config.AddExistingDisk, VirtualMachine.Config.AddNewDisk, VirtualMachine.Config.AddRemoveDevice, VirtualMachine.Config.AdvancedConfig, VirtualMachine.Config.Annotation, VirtualMachine.Config.CPUCount, VirtualMachine.Config.DiskExtend, VirtualMachine.Config.DiskLease, VirtualMachine.Config.EditDevice, VirtualMachine.Config.Memory, VirtualMachine.Config.RemoveDisk, VirtualMachine.Config.Rename, VirtualMachine.Config.ResetGuestInfo, VirtualMachine.Config.Resource, VirtualMachine.Config.Settings, VirtualMachine.Config.UpgradeVirtualHardware, VirtualMachine.Interact.GuestControl, VirtualMachine.Interact.PowerOff, VirtualMachine.Interact.PowerOn, VirtualMachine.Interact.Reset, VirtualMachine.Inventory.Create, VirtualMachine.Inventory.CreateFromExisting, VirtualMachine.Inventory.Delete, VirtualMachine.Provisioning.Clone, VirtualMachine.Provisioning.DeployTemplate, VirtualMachine.Provisioning.MarkAsTemplate, Folder.Create, Folder.Delete
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-09-02-194931
How reproducible:
Always
Steps to Reproduce:
See Description of problem
Actual results:
The vsphere-problem-detector operator reports privilege missing when using pre-existing folder and/or resource pool with ReadOnly permission
Expected results:
The vsphere-problem-detector operator should not report missing privileges in that case.
Additional info:
Description of problem:
Bootstrap fails in SNO installation
Version-Release number of selected component (if applicable):
How reproducible:
always
Steps to Reproduce:
1. Test this in libvirt env. Agent-config and install-config in attached. 2. Use attached agent-config and install-config to create image 3. Install SNO: virt-install --connect qemu:///system -n control-0 -r 33000 --vcpus 8 --cdrom ./agent.iso --disk pool=installer,size=120 --boot uefi,hd,cdrom --os-variant=rhel8.5 --network network=default,mac=52:54:00:aa:aa:aa --wait=-1 --check mac_in_use=off 4. There is following error in bootkube.service log: -- Logs begin at Fri 2022-09-30 08:58:21 UTC, end at Fri 2022-09-30 09:19:40 UTC. -- Sep 30 09:00:51 test.metalkube.org systemd[1]: Starting Bootkube - bootstrap in place post reboot... Sep 30 09:00:51 test.metalkube.org bootstrap-in-place-post-reboot.sh[2409]: Running bootkube bootstrap-in-place post reboot Sep 30 09:00:52 test.metalkube.org bootstrap-in-place-post-reboot.sh[2409]: Waiting for api ... Sep 30 09:00:57 test.metalkube.org bootstrap-in-place-post-reboot.sh[2409]: Waiting for api ... Sep 30 09:01:02 test.metalkube.org bootstrap-in-place-post-reboot.sh[2409]: Waiting for api ... Sep 30 09:01:07 test.metalkube.org bootstrap-in-place-post-reboot.sh[2409]: Waiting for api ... Sep 30 09:01:12 test.metalkube.org bootstrap-in-place-post-reboot.sh[2409]: Waiting for api ... Sep 30 09:01:17 test.metalkube.org bootstrap-in-place-post-reboot.sh[2409]: Approving csrs ... Sep 30 09:01:17 test.metalkube.org bootstrap-in-place-post-reboot.sh[3045]: error: error executing jsonpath "{.items[0].status.conditions[?(@.type==\"Ready\")].status}": Error executing template: array index out of bounds: index 0, length 0. Printing more information for debugging the template: Sep 30 09:01:17 test.metalkube.org bootstrap-in-place-post-reboot.sh[3045]: template was: Sep 30 09:01:17 test.metalkube.org bootstrap-in-place-post-reboot.sh[3045]: {.items[0].status.conditions[?(@.type=="Ready")].status} Sep 30 09:01:17 test.metalkube.org bootstrap-in-place-post-reboot.sh[3045]: object given to jsonpath engine was: Sep 30 09:01:17 test.metalkube.org bootstrap-in-place-post-reboot.sh[3045]: map[string]interface {}{"apiVersion":"v1", "items":[]interface {}{}, "kind":"List", "metadata":map[string]interface {}{"resourceVersion":""}} Sep 30 09:01:17 test.metalkube.org bootstrap-in-place-post-reboot.sh[2409]: Approving csrs ... Sep 30 09:01:51 test.metalkube.org bootstrap-in-place-post-reboot.sh[3142]: error: error executing jsonpath "{.items[0].status.conditions[?(@.type==\"Ready\")].status}": Error executing template: array index out of bounds: index 0, length 0. Printing more information for debugging the template: Sep 30 09:01:51 test.metalkube.org bootstrap-in-place-post-reboot.sh[3142]: template was: Sep 30 09:01:51 test.metalkube.org bootstrap-in-place-post-reboot.sh[3142]: {.items[0].status.conditions[?(@.type=="Ready")].status} Sep 30 09:01:51 test.metalkube.org bootstrap-in-place-post-reboot.sh[3142]: object given to jsonpath engine was: Sep 30 09:01:51 test.metalkube.org bootstrap-in-place-post-reboot.sh[3142]: map[string]interface {}{"apiVersion":"v1", "items":[]interface {}{}, "kind":"List", "metadata":map[string]interface {}{"resourceVersion":""}} Sep 30 09:01:51 test.metalkube.org bootstrap-in-place-post-reboot.sh[2409]: Approving csrs ... Sep 30 09:02:21 test.metalkube.org bootstrap-in-place-post-reboot.sh[2409]: Approving csrs ... Sep 30 09:02:52 test.metalkube.org bootstrap-in-place-post-reboot.sh[2409]: Approving csrs ...
Actual results:
Expected results:
Additional info:
Description of problem:
Pod and PDB list pages just report "Not found" when no resources are found
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-10-15-094115
How reproducible:
Always
Steps to Reproduce:
1. normal user has a new empty project 2. normal user visit PDB list page via Workloads -> PodDisruptionBudgets 3.
Actual results:
2. it just reports 'Not found'
Expected results:
2. For other workloads, it will report "No <resource> found", for example: No HorizontalPodAutoscalers found, No StatefulSets found, No Deployments found. So for the Pods and PodDisruptionBudgets list pages, when no resource can be found, it would be better to also report "No Pods found" and "No PodDisruptionBudgets found".
Additional info:
Clone of https://issues.redhat.com/browse/OCPBUGSM-44162.
Cannot use the original as the bot won't accept a security bug:
When the change merges, the Bugzilla associated with the CVE must be set to MODIFIED. Since the DPTP bugzilla bot is not permitted to scan bugs with the SECURITY group in Bugzilla, The REP will not be able to use the bot's public functionality of moving their bug to MODIFIED.
This is a clone of issue OCPBUGS-4367. The following is the description of the original issue:
—
Description of problem:
The calls to log.Debugf() from image/baseiso.go and image/oc.go are not being output when the "image create" command is run.
Version-Release number of selected component (if applicable):
4.12.0
How reproducible:
Every time
Steps to Reproduce:
1. Run ../bin/openshift-install agent create image --dir ./cluster-manifests/ --log-level debug
Actual results:
No debug log messages from log.Debugf() calls in pkg/asset/agent/image/oc.go
Expected results:
Debug log messages are output
Additional info:
Note from Zane: We should probably also use the real global logger instead of [creating a new one](https://github.com/openshift/installer/blob/2698cbb0ec7e96433a958ab6b864786c0c503c0b/pkg/asset/agent/image/baseiso.go#L109) with the default config that ignores the --log-level flag and prints weird `[0001]` stuff in the output for some reason. (The NMStateConfig manifests logging suffers from the same problem.)
Description of problem:
vSphere privilege checking fails when providing a user-defined folder and/or resource pool
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-08-30-054458
How reproducible:
consistently
Steps to Reproduce:
1. Provide a pre-existing folder and/or resource pool in the install-config 2. Perform an installation with an account with read-only privileges on the datacenter and cluster 3. The installer will fail with missing privileges for the cluster and datacenter. When a pre-existing folder and resource pool are defined, the account should be able to hold read-only privileges on the datacenter and cluster (see the install-config sketch under Additional info).
Actual results:
Installer reports missing privileges
Expected results:
Installer should succeed
Additional info:
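For reference, a trimmed install-config sketch showing where a pre-existing folder and resource pool are specified; the inventory paths are illustrative and other required vSphere fields are omitted:
apiVersion: v1
baseDomain: example.com
platform:
  vsphere:
    # illustrative inventory paths for the pre-existing folder and resource pool
    folder: /Example-DC/vm/example-folder
    resourcePool: /Example-DC/host/example-cluster/Resources/example-pool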
Description of problem:
When disable all helm chart repos the helm navigation item is disabled.
To re-enable the helm charts again, the user can search for HelmChartRepositories (HCRs) or ProjectHelmChartRepositories (PHCRs), but the action menu doesn't work if no other helm chart repo is enabled.
Version-Release number of selected component (if applicable):
Only 4.12 (4.11 is fine)
How reproducible:
Always
Steps to Reproduce:
1. Switch to developer perspective
2. Navigate to Helm > Repos > Edit the default repo and disable it
3. Helm Navigation should disappear and the content area may switch to a 404 page; that's fine.
4. Navigate to Search and select HelmChartRepository as resource
5. Click on the action menu (kebab icon) to edit the HCR
Actual results:
The action menu is not shown
Expected results:
The action menu should be shown so that the user can edit or delete the HCR.
Additional info:
Description of problem:
In a 4.11 cluster with only openshift-samples enabled, the 4.12 introduced optional COs console and insights are installed. While upgrading to 4.12, CVO considers them to be disabled explicitly and skips reconciling them. So these COs are not upgraded to 4.12. Installed COs cannot be disabled, so CVO is supposed to implicitly enable them. $ oc get clusterversion -oyaml { "apiVersion": "config.openshift.io/v1", "kind": "ClusterVersion", "metadata": { "creationTimestamp": "2022-09-30T05:02:31Z", "generation": 3, "name": "version", "resourceVersion": "134808", "uid": "bd95473f-ffda-402d-8fe3-74f852a9d6eb" }, "spec": { "capabilities": { "additionalEnabledCapabilities": [ "openshift-samples" ], "baselineCapabilitySet": "None" }, "channel": "stable-4.11", "clusterID": "8eda5167-a730-4b39-be1d-214a80506d34", "desiredUpdate": { "force": true, "image": "registry.ci.openshift.org/ocp/release@sha256:2c8e617830f84ac1ee1bfcc3581010dec4ae5d9cad7a54271574e8d91ef5ecbc", "version": "" } }, "status": { "availableUpdates": null, "capabilities": { "enabledCapabilities": [ "openshift-samples" ], "knownCapabilities": [ "Console", "Insights", "Storage", "baremetal", "marketplace", "openshift-samples" ] }, "conditions": [ { "lastTransitionTime": "2022-09-30T05:02:33Z", "message": "Unable to retrieve available updates: currently reconciling cluster version 4.12.0-0.nightly-2022-09-28-204419 not found in the \"stable-4.11\" channel", "reason": "VersionNotFound", "status": "False", "type": "RetrievedUpdates" }, { "lastTransitionTime": "2022-09-30T05:02:33Z", "message": "Capabilities match configured spec", "reason": "AsExpected", "status": "False", "type": "ImplicitlyEnabledCapabilities" }, { "lastTransitionTime": "2022-09-30T05:02:33Z", "message": "Payload loaded version=\"4.12.0-0.nightly-2022-09-28-204419\" image=\"registry.ci.openshift.org/ocp/release@sha256:2c8e617830f84ac1ee1bfcc3581010dec4ae5d9cad7a54271574e8d91ef5ecbc\" architecture=\"amd64\"", "reason": "PayloadLoaded", "status": "True", "type": "ReleaseAccepted" }, { "lastTransitionTime": "2022-09-30T05:23:18Z", "message": "Done applying 4.12.0-0.nightly-2022-09-28-204419", "status": "True", "type": "Available" }, { "lastTransitionTime": "2022-09-30T07:05:42Z", "status": "False", "type": "Failing" }, { "lastTransitionTime": "2022-09-30T07:41:53Z", "message": "Cluster version is 4.12.0-0.nightly-2022-09-28-204419", "status": "False", "type": "Progressing" } ], "desired": { "image": "registry.ci.openshift.org/ocp/release@sha256:2c8e617830f84ac1ee1bfcc3581010dec4ae5d9cad7a54271574e8d91ef5ecbc", "version": "4.12.0-0.nightly-2022-09-28-204419" }, "history": [ { "completionTime": "2022-09-30T07:41:53Z", "image": "registry.ci.openshift.org/ocp/release@sha256:2c8e617830f84ac1ee1bfcc3581010dec4ae5d9cad7a54271574e8d91ef5ecbc", "startedTime": "2022-09-30T06:42:01Z", "state": "Completed", "verified": false, "version": "4.12.0-0.nightly-2022-09-28-204419" }, { "completionTime": "2022-09-30T05:23:18Z", "image": "registry.ci.openshift.org/ocp/release@sha256:5a6f6d1bf5c752c75d7554aa927c06b5ea0880b51909e83387ee4d3bca424631", "startedTime": "2022-09-30T05:02:33Z", "state": "Completed", "verified": false, "version": "4.11.0-0.nightly-2022-09-29-191451" } ], "observedGeneration": 3, "versionHash": "CSCJ2fxM_2o=" } } $ oc get co NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication 4.12.0-0.nightly-2022-09-28-204419 True False False 93m cloud-controller-manager 4.12.0-0.nightly-2022-09-28-204419 True False False 3h56m cloud-credential 
4.12.0-0.nightly-2022-09-28-204419 True False False 3h59m cluster-autoscaler 4.12.0-0.nightly-2022-09-28-204419 True False False 3h53m config-operator 4.12.0-0.nightly-2022-09-28-204419 True False False 3h54m console 4.11.0-0.nightly-2022-09-29-191451 True False False 3h45m control-plane-machine-set 4.12.0-0.nightly-2022-09-28-204419 True False False 117m csi-snapshot-controller 4.12.0-0.nightly-2022-09-28-204419 True False False 3h54m dns 4.12.0-0.nightly-2022-09-28-204419 True False False 3h53m etcd 4.12.0-0.nightly-2022-09-28-204419 True False False 3h52m image-registry 4.12.0-0.nightly-2022-09-28-204419 True False False 3h46m ingress 4.12.0-0.nightly-2022-09-28-204419 True False False 151m insights 4.11.0-0.nightly-2022-09-29-191451 True False False 3h48m kube-apiserver 4.12.0-0.nightly-2022-09-28-204419 True False False 3h50m kube-controller-manager 4.12.0-0.nightly-2022-09-28-204419 True False False 3h51m kube-scheduler 4.12.0-0.nightly-2022-09-28-204419 True False False 3h51m kube-storage-version-migrator 4.12.0-0.nightly-2022-09-28-204419 True False False 91m machine-api 4.12.0-0.nightly-2022-09-28-204419 True False False 3h50m machine-approver 4.12.0-0.nightly-2022-09-28-204419 True False False 3h54m machine-config 4.12.0-0.nightly-2022-09-28-204419 True False False 3h52m monitoring 4.12.0-0.nightly-2022-09-28-204419 True False False 3h44m network 4.12.0-0.nightly-2022-09-28-204419 True False False 3h55m node-tuning 4.12.0-0.nightly-2022-09-28-204419 True False False 113m openshift-apiserver 4.12.0-0.nightly-2022-09-28-204419 True False False 3h48m openshift-controller-manager 4.12.0-0.nightly-2022-09-28-204419 True False False 113m openshift-samples 4.12.0-0.nightly-2022-09-28-204419 True False False 116m operator-lifecycle-manager 4.12.0-0.nightly-2022-09-28-204419 True False False 3h54m operator-lifecycle-manager-catalog 4.12.0-0.nightly-2022-09-28-204419 True False False 3h54m operator-lifecycle-manager-packageserver 4.12.0-0.nightly-2022-09-28-204419 True False False 3h48m service-ca 4.12.0-0.nightly-2022-09-28-204419 True False False 3h54m storage 4.12.0-0.nightly-2022-09-28-204419 True False False 3h54m
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-09-28-204419
How reproducible:
Always
Steps to Reproduce:
1. Install a 4.11 cluster with only openshift-samples enabled
2. Upgrade to 4.12
Actual results:
The optional COs introduced in 4.12 (console and insights) are not upgraded to 4.12
Expected results:
All the installed COs get upgraded
Additional info:
Description of problem:
TestUnmanagedDNSToManagedDNSInternalIngressController E2E test is failing on the error: { unmanaged_dns_test.go:272: failed to verify connectivity with workload with reqURL http://10.0.128.7 using external client: timed out waiting for the condition
How reproducible:
About 75% of the time.
Version-Release number of selected component (if applicable):
4.12
Steps to Reproduce:
1. Run CI E2E tests on cluster-ingress-operator or make test-e2e TEST=TestUnmanagedDNSToManagedDNSInternalIngressController
Actual results:
E2E test fails about 75% of the time
Expected results:
E2E should always pass
Additional info:
This is a clone of issue OCPBUGS-2260. The following is the description of the original issue:
—
TRT-594 investigates failed CI upgrade runs caused by the alert KubePodNotReady firing. The case was a pod getting skipped over for scheduling across two successive master node updates/restarts. The case was determined to be valid, so the ask is to make the monitoring aware that master nodes are restarting and that scheduling may be delayed. Presuming we don't want to change the existing tolerance for the non-master-node-restart cases, could we suppress the alert during those restarts and fall back to a second alert with increased tolerances that only applies during those restarts, if we have metrics indicating we are restarting? Or something similar if there are better ways to handle it.
The scenario is:
Description of problem:
Each LB created for a Service of type LoadBalancer results in 1 client rule and <# of public subnets> health rules being created. The rules-per-SG quota in AWS is quite small: 60 by default, with a hard max of 200. OCP has about 40 rules out of the box. Assuming an HA cluster in 3 AZs, that is 4 rules per LB. With the default AWS quota, only ~5 LBs can be created, and with the hard max of 200, only ~40 LBs can be created.
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Always
Steps to Reproduce:
1. Create a Service of type LoadBalancer and observe the increase in the master-sg and worker-sg rule sets
Actual results:
4 rules are created
Expected results:
1 rule is created when the client rule is a superset of the per-subnet health rules
Additional info:
This roughly quadruples the number of Services of type LoadBalancer that can be created. This is required for HyperShift.
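One rough way to observe the effect is to count ingress rules on the worker security group before and after creating a LoadBalancer Service. A minimal sketch, assuming the installer's usual <infra-id>-worker-sg naming and that the AWS CLI is configured (names here are placeholders):

$ oc create service loadbalancer lb-demo --tcp=80:8080
$ aws ec2 describe-security-groups \
    --filters "Name=tag:Name,Values=<infra-id>-worker-sg" \
    --query 'length(SecurityGroups[0].IpPermissions)'

With 3 public subnets, the count grows by 4 per LB today (1 client rule + 3 health rules) and would grow by 1 once the client rule subsumes the per-subnet health rules.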
This is a clone of issue OCPBUGS-3440. The following is the description of the original issue:
—
Description of problem:
https://github.com/openshift/cluster-authentication-operator/pull/587 addresses an issue in which the auth operator goes degraded when the console capability is not enabled. The result is that the console publicAssetURL is not configured when the console is disabled. However, if the console capability is later enabled on the cluster, there is no logic in place to ensure the auth operator detects this and performs the configuration. Manually restarting the auth operator will address this, but we should have a solution that handles it automatically.
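For reference, a minimal sketch of the manual workaround mentioned above (the namespace and deployment names are the usual ones for the auth operator, but verify them on the cluster):

$ oc -n openshift-authentication get configmap -o yaml | grep assetPublicURL
$ oc -n openshift-authentication-operator rollout restart deployment/authentication-operator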
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Always
Steps to Reproduce:
1. Install a cluster w/o the console cap
2. Inspect the auth configmap, see that assetPublicURL is empty
3. Enable the console capability, wait for console to start up
4. Inspect the auth configmap and see it is still empty
Actual results:
assetPublicURL does not get populated
Expected results:
assetPublicURL is populated once the console is enabled
Additional info:
This is a clone of issue OCPBUGS-3432. The following is the description of the original issue:
—
Description of problem:
E2E test cases for the knative and pipeline packages have been disabled on CI due to the respective operator installation issues. The tests have to be re-enabled once a new operator version is available or the issue is resolved.
References:
https://coreos.slack.com/archives/C6A3NV5J9/p1664545970777239
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
Description of problem:
The "Installs Red Hat Integration - 3scale operator" E2E test is failing due to a change of the Operator name
This is a clone of issue OCPBUGS-2144. The following is the description of the original issue:
—
Description of problem:
Azure IPI creates boot images using the image gallery API now, it will create two image definition resources for both hyperVGeneration V1 and V2. For arm64 cluster, the architecture in image definition hyperVGeneration V1 is x64, but it should be Arm64
Version-Release number of selected component (if applicable):
$ ./openshift-install version
./openshift-install 4.12.0-0.nightly-arm64-2022-10-07-204251
built from commit 7b739cde1e0239c77fabf7622e15025d32fc272c
release image registry.ci.openshift.org/ocp-arm64/release-arm64@sha256:d2569be4ba276d6474aea016536afbad1ce2e827b3c71ab47010617a537a8b11
release architecture arm64
How reproducible:
always
Steps to Reproduce:
1. Create arm cluster using latest arm64 nightly build
2. Check image definition created for hyperVGeneration V1
Actual results:
The architecture field is x64.

$ az sig image-definition show --gallery-name ${gallery_name} --gallery-image-definition lwanazarm1008-rc8wh --resource-group ${rg} | jq -r ".architecture"
x64

The image version under this image definition is for aarch64.

$ az sig image-version show --gallery-name gallery_lwanazarm1008_rc8wh --gallery-image-definition lwanazarm1008-rc8wh --resource-group lwanazarm1008-rc8wh-rg --gallery-image-version 412.86.20220922 | jq -r ".storageProfile.osDiskImage.source"
{ "uri": "https://clustermuygq.blob.core.windows.net/vhd/rhcosmuygq.vhd"}

$ az storage blob show --container-name vhd --name rhcosmuygq.vhd --account-name clustermuygq --account-key $account_key | jq -r ".metadata"
{ "Source_uri": "https://rhcos.blob.core.windows.net/imagebucket/rhcos-412.86.202209220538-0-azure.aarch64.vhd"}
Expected results:
Although no VMs with hyperVGeneration V1 can be provisioned, the architecture field should be Arm64 even for hyperVGeneration V1 image definitions
Additional info:
1. The architecture in the hyperVGeneration V2 image definition is Arm64 and the installer uses V2 by default for arm64 vm_type, so installation does not fail by default. But we still need to make the architecture consistent in V1.
2. The architecture field needs to be set for both V1 and V2; currently we only set architecture in the V2 image definition resource. https://github.com/openshift/installer/blob/master/data/data/azure/vnet/main.tf#L100-L128
This is a clone of issue OCPBUGS-3621. The following is the description of the original issue:
—
Description of problem:
EUS-to-EUS upgrade (4.10.38 -> 4.11.13 -> 4.12.0-rc.0): after the control-plane nodes are upgraded to 4.12 successfully, unpause the worker pool to get the worker nodes updated. However, the worker nodes fail to be updated and the worker pool goes degraded:
```
# ./oc get node
NAME                                                   STATUS                     ROLES    AGE     VERSION
jliu410-6hmkz-master-0.c.openshift-qe.internal         Ready                      master   4h40m   v1.25.2+f33d98e
jliu410-6hmkz-master-1.c.openshift-qe.internal         Ready                      master   4h40m   v1.25.2+f33d98e
jliu410-6hmkz-master-2.c.openshift-qe.internal         Ready                      master   4h40m   v1.25.2+f33d98e
jliu410-6hmkz-worker-a-xdwvv.c.openshift-qe.internal   Ready,SchedulingDisabled   worker   4h31m   v1.23.12+6b34f32
jliu410-6hmkz-worker-b-9hnb8.c.openshift-qe.internal   Ready                      worker   4h31m   v1.23.12+6b34f32
jliu410-6hmkz-worker-c-bdv4f.c.openshift-qe.internal   Ready                      worker   4h31m   v1.23.12+6b34f32
...
# ./oc get co machine-config
machine-config   4.12.0-rc.0   True   False   True   3h41m   Failed to resync 4.12.0-rc.0 because: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, error pool worker is not ready, retrying. Status: (pool degraded: true total: 3, ready 0, updated: 0, unavailable: 1)]
...
# ./oc get mcp
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-b81233204496767f2fe32fbb6cb088e1   True      False      False      3              3                   3                     0                      4h10m
worker   rendered-worker-a2caae543a144d94c17a27e56038d4c4   False     True       True       3              0                   0                     1                      4h10m
...
# ./oc describe mcp worker
  Message:
  Reason:
  Status:                True
  Type:                  Degraded
  Last Transition Time:  2022-11-14T07:19:42Z
  Message:               Node jliu410-6hmkz-worker-a-xdwvv.c.openshift-qe.internal is reporting: "Error checking type of update image: error running skopeo inspect --no-tags --retry-times 5 --authfile /var/lib/kubelet/config.json docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c01b0ae9870dbee5609c52b4d649334ce6854fff1237f1521929d151f6876daa: exit status 1\ntime=\"2022-11-14T07:42:47Z\" level=fatal msg=\"unknown flag: --no-tags\"\n"
  Reason:                1 nodes are reporting degraded status on sync
  Status:                True
  Type:                  NodeDegraded
...
# ./oc logs machine-config-daemon-mg2zn
E1114 08:11:27.115577  192836 writer.go:200] Marking Degraded due to: Error checking type of update image: error running skopeo inspect --no-tags --retry-times 5 --authfile /var/lib/kubelet/config.json docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c01b0ae9870dbee5609c52b4d649334ce6854fff1237f1521929d151f6876daa: exit status 1
time="2022-11-14T08:11:25Z" level=fatal msg="unknown flag: --no-tags"
```
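The "unknown flag: --no-tags" failure suggests the skopeo binary on the still-4.10 worker host is older than the invocation issued during the 4.12 update expects. A quick way to confirm the host's skopeo version on a degraded worker (a sketch; the node name is taken from the output above):

# oc debug node/jliu410-6hmkz-worker-a-xdwvv.c.openshift-qe.internal -- chroot /host skopeo --version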
Version-Release number of selected component (if applicable):
4.12.0-rc.0
How reproducible:
Steps to Reproduce:
1. EUS upgrade with path 4.10.38-> 4.11.13-> 4.12.0-rc.0 with paused worker pool 2. After master pool upgrade succeed, unpause worker pool 3.
Actual results:
Worker pool upgrade failed
Expected results:
Worker pool upgrade succeed
Additional info:
Description of problem:
etcd and kube-apiserver pods get restarted due to failed liveness probes while deleting/re-creating pods on SNO
Version-Release number of selected component (if applicable):
4.10.32
How reproducible:
Not always, after ~10 attempts
Steps to Reproduce:
1. Deploy SNO with Telco DU profile applied
2. Create multiple pods with local storage volumes attached (attaching yaml manifest)
3. Force delete and re-create pods 10 times
Actual results:
etcd and kube-apiserver pods get restarted, making the cluster unavailable for a period of time
Expected results:
etcd and kube-apiserver do not get restarted
Additional info:
Attaching must-gather. Please let me know if any additional info is required. Thank you!
Description of problem:
Setting up Github App from the console is lacking the required permission
Events and Permissions: https://pipelinesascode.com/docs/install/github_apps/
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Always
Steps to Reproduce:
1. Setup Github App from administrator perspective.
2. Create Repository and configure it to use the Github App method.
Actual results:
Creates Github App with limited permission.
Expected results:
Created Github App should contain all the required permission and should trigger the pipelinerun successfully on git events.
Additional info:
Console needs to update the default_events and default_permissions so that they match the CLI (refer to the Events and Permissions documentation linked above).
We also need to update the "See GitHub permissions" section in the UI.
Description of problem:
The Alertmanager silence create / edit form got a new "Negative matcher" option in 4.12 (see https://issues.redhat.com/browse/OCPBUGSM-47734). However, there is nothing to explain what this option means and it will likely not be obvious from the label alone unless you are already quite familiar with Alertmanager. After discussion with the docs team, it was decided that adding some explanation in context in the UI would be much better than adding an explanation to the documentation.
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Go to the Admin perspective
2. Go to the Observe > Alerting > Silences page
3. Click on the Create button ("Negative matcher" option is shown with no explanation)
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-4101. The following is the description of the original issue:
—
Description of problem:
We experienced two separate upgrade failures relating to the introduction of the SYSTEM_RESERVED_ES node sizing parameter, causing kubelet to stop running.

One cluster (clusterA) upgraded from 4.11.14 to 4.11.17. It experienced an issue whereby /etc/node-sizing.env on its master nodes contained an empty SYSTEM_RESERVED_ES value:
---
cat /etc/node-sizing.env
SYSTEM_RESERVED_MEMORY=5.36Gi
SYSTEM_RESERVED_CPU=0.11
SYSTEM_RESERVED_ES=
---
causing the kubelet to not start up. To restore service, this file was manually updated to set a value (1Gi), and kubelet was restarted. We are uncertain what conditions led to this occurring on the clusterA master nodes as part of the upgrade.

A second cluster (clusterB) upgraded from 4.11.16 to 4.11.17. It experienced an issue whereby worker nodes were impacted by a similar problem; however, this was because a custom node-sizing-enabled.env MachineConfig was in place which did not set SYSTEM_RESERVED_ES. This caused existing worker nodes to go into a NotReady state after the upgrade, and additionally new nodes did not join the cluster as their kubelet would become impacted.

For clusterB the conditions that led to the empty value are better understood. However, for both clusters, if SYSTEM_RESERVED_ES ends up empty on a node it can cause the kubelet to not start. We have some asks as a result:
- Can MCO be made to recover from this situation if it occurs, perhaps through application of a safe default if none exists, such that kubelet would start correctly? (see the sketch below)
- Can there possibly be alerting that could indicate and draw attention to the misconfiguration?
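A minimal sketch of the kind of safe-default guard being asked for, expressed as shell logic; the 1Gi default mirrors the manual workaround above, and where this would actually live inside MCO is an open question, not something defined here:

# guard-node-sizing.sh - apply a fallback if SYSTEM_RESERVED_ES is empty
ENV_FILE=/etc/node-sizing.env
if ! grep -q '^SYSTEM_RESERVED_ES=.\+' "${ENV_FILE}"; then
  echo "SYSTEM_RESERVED_ES is empty, applying 1Gi fallback" >&2
  sed -i 's/^SYSTEM_RESERVED_ES=.*/SYSTEM_RESERVED_ES=1Gi/' "${ENV_FILE}"
  systemctl restart kubelet
fi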
Version-Release number of selected component (if applicable):
4.11.17
How reproducible:
Have not been able to reproduce it on a fresh cluster upgrading from 4.11.16 to 4.11.17
Expected results:
If SYSTEM_RESERVED_ES is empty in /etc/node-sizing*env then a default should be applied and/or kubelet able to continue running.
Additional info:
Description of problem:
In looking at jobs on an accepted payload at https://amd64.ocp.releases.ci.openshift.org/releasestream/4.12.0-0.ci/release/4.12.0-0.ci-2022-08-30-122201 , I observed this job https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-ci-4.12-e2e-aws-sdn-serial/1564589538850902016 with "Undiagnosed panic detected in pod" "pods/openshift-controller-manager-operator_openshift-controller-manager-operator-74bf985788-8v9qb_openshift-controller-manager-operator.log.gz:E0830 12:41:48.029165 1 runtime.go:79] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)"
Version-Release number of selected component (if applicable):
4.12
How reproducible:
probably relatively easy to reproduce (but not consistently) given it's happened several times according to this search: https://search.ci.openshift.org/?search=Observed+a+panic%3A+%22invalid+memory+address+or+nil+pointer+dereference%22&maxAge=48h&context=1&type=junit&name=&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job
Steps to Reproduce:
1. Let nightly payloads run, or run one of the presubmit jobs mentioned in the search above
Actual results:
Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)}
Expected results:
no panics
Additional info:
As mentioned in AITRIAGE-3520, multiple attempts to grab controller logs might fail at some point and overwrite existing logs.
In the case of the ticket I mentioned, we were able to retrieve controller logs from the logs server. However, this might not always be the case for other clusters.
We need to find a way to preserve all logs, or time out log collection differently.
The way we thought it could be handled is by writing the logs to a file inside the container, and in case kube-api is not reachable we will read the logs from that file.
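A minimal sketch of that write-to-file idea, assuming the controller's stdout/stderr can simply be duplicated into a local file (the log path and entrypoint name here are illustrative, not the real assisted-installer ones):

# duplicate controller output into a file so it survives failed uploads
mkdir -p /var/log/assisted-controller
exec /assisted-installer-controller "$@" 2>&1 | tee -a /var/log/assisted-controller/controller.log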
Description of problem:
Tests failure when running dev-console tests locally.
Version-Release number of selected component (if applicable):
At least on 4.11 and 4.12
How reproducible:
Always
Steps to Reproduce:
1. Start cypress: yarn run test-cypress-dev-console
2. Run add-page
Actual results:
Fails
Expected results:
Should pass
Additional info:
Changelog between 3.5.5 and 3.5.4:
https://github.com/etcd-io/etcd/blob/main/CHANGELOG/CHANGELOG-3.5.md#v355-tbd
Changelog between 3.5.3 and 3.5.4:
https://github.com/etcd-io/etcd/blob/main/CHANGELOG/CHANGELOG-3.5.md#v354-2022-04-24
Description of problem:
The OVNKubernetesControllerDisconnectedSouthboundDatabase alert seems to fire in the e2e-aws-ovn-serial CI job. Note that something funny happens in the job itself: a set of ovnkube-node pods get created, then deleted, then recreated again, and the test runs. But the alert gets fired for the first set of pods that got deleted. From the initial screening of artifacts alone it's not clear what happened to the old pods. This needs investigation.
Version-Release number of selected component (if applicable):
4.12 OCP
How reproducible:
Seems like always
Steps to Reproduce:
1. https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/27043/pull-ci-openshift-origin-master-e2e-aws-ovn-serial/1568166237639282688
2. https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/27043/pull-ci-openshift-origin-master-e2e-aws-ovn-serial/1567913444936519680
Actual results:
Alert is fired
Expected results:
The alert shouldn't fire. If this is expected in the serial job, then we need to silence that alert for that job, or make it flaky and not fail hard when the alert fires.
Additional info:
Description of problem:
The platform-operators-aggregated cluster operator wasn't created after enabling "TechPreviewNoUpgrade" featureGate, as follows,
MacBook-Pro:~ jianzhang$ oc patch featuregate cluster -p '{"spec": {"featureSet": "TechPreviewNoUpgrade"}}' --type=merge
featuregate.config.openshift.io/cluster patched
MacBook-Pro:~ jianzhang$ oc wait --for=condition=Available=True clusteroperators.config.openshift.io/platform-operators-aggregated
Error from server (NotFound): clusteroperators.config.openshift.io "platform-operators-aggregated" not found
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-09-20-095559
How reproducible:
always
Steps to Reproduce:
1. Install OCP 4.12 cluster.
2. Enable the "TechPreviewNoUpgrade" feature gate:
   MacBook-Pro:~ jianzhang$ oc patch featuregate cluster -p '{"spec": {"featureSet": "TechPreviewNoUpgrade"}}' --type=merge
   featuregate.config.openshift.io/cluster patched
3. Check the platform-operators-aggregated cluster operator.
Actual results:
MacBook-Pro:~ jianzhang$ oc wait --for=condition=Available=True clusteroperators.config.openshift.io/platform-operators-aggregated
Error from server (NotFound): clusteroperators.config.openshift.io "platform-operators-aggregated" not found
Expected results:
The platform-operators-aggregated cluster operator can be created successfully.
Additional info:
The openshift-platform-operators pods are running well.
MacBook-Pro:~ jianzhang$ oc get deploy -n openshift-platform-operators
NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
platform-operators-controller-manager   1/1     1            1           126m
platform-operators-rukpak-core          1/1     1            1           126m
platform-operators-rukpak-webhooks      2/2     2            2           126m
MacBook-Pro:~ jianzhang$ oc get co platform-operators-aggregated
Error from server (NotFound): clusteroperators.config.openshift.io "platform-operators-aggregated" not found
Description of problem:
i18n translation missing in "Remove component node from application" modal
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. Navigate to the dev console and create a workload under an Application group
2. On the Topology view, remove the workload from the Application group
3. See the i18n error in the console
Actual results:
Missing i18n key "Remove component node from application" in namespace "topology" and language "en." in console
Expected results:
No i18n error should be shown in the console.
Additional info:
This is a clone of issue OCPBUGS-3280. The following is the description of the original issue:
—
I have a script that does continuous installs using AGENT_E2E_TEST_SCENARIO=COMPACT_IPV4, just starting a new install after the previous one completes. What I'm seeing is that eventually I end up getting installation failures due to the container-images-available validation failure. What gets logged in wait-for bootstrap-complete is:
level=debug msg=Host master-0: New image status quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f6ddae72f6d730ca07a265691401571a8d8f7e62546f1bcda26c9a01628f4d6. result: failure.
level=debug msg=Host master-0: validation 'container-images-available' that used to succeed is now failing
level=debug msg=Host master-0: updated status from preparing-for-installation to preparing-failed (Host failed to prepare for installation due to following failing validation(s): Failed to fetch container images needed for installation from quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f6ddae72f6d730ca07a265691401571a8d8f7e62546f1bcda26c9a01628f4d6. This may be due to a network hiccup. Retry to install again. If this problem persists, check your network settings to make sure you’re not blocked. ; Host couldn't synchronize with any NTP server)
Sometimes the image gets loaded onto the other masters OK and sometimes there are failures with more than one host. In either case the install stalls at this point.
When using a disconnected environment (MIRROR_IMAGES=true) I don't see this occurring.
Containers on host0
[core@master-0 ~]$ sudo podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
00a0eebb989c localhost/podman-pause:4.2.0-1661537366 11 hours ago Up 11 hours ago cef65dd7f170-infra
5d0eced94979 quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:caa73897dcb9ff6bc00a4165f4170701f4bd41e36bfaf695c00461ec65a8d589 /bin/bash start_d... 11 hours ago Up 11 hours ago assisted-db
813bef526094 quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:caa73897dcb9ff6bc00a4165f4170701f4bd41e36bfaf695c00461ec65a8d589 /assisted-service 11 hours ago Up 11 hours ago service
edde1028a542 quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e43558e28be8fbf6fe4529cf9f9beadbacbbba8c570ecf6cb81ae732ec01807f next_step_runner ... 11 hours ago Up 11 hours ago next-step-runner
Some relevant logs from assisted-service for this container image:
time="2022-11-03T01:48:44Z" level=info msg="Submitting step <container-image-availability> id <container-image-availability-b72665b1> to infra_env <17c8b837-0130-4b8c-ad06-19bcd2a61dbf> host <df170326-772b-43b5-87ef-3dfff91ba1a9> Arguments: <[{\"images\":[\"registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-10-25-210451\",\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca122ab3a82dfa15d72a05f448c48a7758a2c7b0ecbb39011235bcf0666fbc15\",\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f6ddae72f6d730ca07a265691401571a8d8f7e62546f1bcda26c9a01628f4d6\",\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e52a45b47cd9d70e7378811f4ba763fd43ec2580378822286c7115fbee6ef3a\"],\"timeout\":960}]>" func=github.com/openshift/assisted-service/internal/host/hostcommands.logSteps file="/src/internal/host/hostcommands/instruction_manager.go:285" go-id=841 host_id=df170326-772b-43b5-87ef-3dfff91ba1a9 infra_env_id=17c8b837-0130-4b8c-ad06-19bcd2a61dbf pkg=instructions request_id=47cc221f-4f47-4d0d-8278-c0f5af933567
time="2022-11-03T01:49:35Z" level=error msg="Received step reply <container-image-availability-9788cfa7> from infra-env <17c8b837-0130-4b8c-ad06-19bcd2a61dbf> host <845f1e3c-c286-4d2f-ba92-4c5cab953641> exit-code <2> stderr <> stdout <{\"images\":[
{\"name\":\"registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-10-25-210451\",\"result\":\"success\"},{\"download_rate\":159.65409925994226,\"name\":\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ca122ab3a82dfa15d72a05f448c48a7758a2c7b0ecbb39011235bcf0666fbc15\",\"result\":\"success\",\"size_bytes\":523130669,\"time\":3.276650405},{\"name\":\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f6ddae72f6d730ca07a265691401571a8d8f7e62546f1bcda26c9a01628f4d6\",\"result\":\"failure\"},{\"download_rate\":278.8962416008878,\"name\":\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e52a45b47cd9d70e7378811f4ba763fd43ec2580378822286c7115fbee6ef3a\",\"result\":\"success\",\"size_bytes\":402688178,\"time\":1.443863767}]}>" func=github.com/openshift/assisted-service/internal/bminventory.logReplyReceived file="/src/internal/bminventory/inventory.go:3287" go-id=845 host_id=845f1e3c-c286-4d2f-ba92-4c5cab953641 infra_env_id=17c8b837-0130-4b8c-ad06-19bcd2a61dbf pkg=Inventory request_id=3a571ba6-5175-4bbe-b89a-20cdde30b884
time="2022-11-03T01:49:35Z" level=info msg="Adding new image status for quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0f6ddae72f6d730ca07a265691401571a8d8f7e62546f1bcda26c9a01628f4d6 with status failure to host 845f1e3c-c286-4d2f-ba92-4c5cab953641" func="github.com/openshift/assisted-service/internal/host.(*Manager).UpdateImageStatus" file="/src/internal/host/host.go:805" pkg=host-state
This is a clone of issue OCPBUGS-3499. The following is the description of the original issue:
—
Description of problem:
On clusters serving Route via CRD (i.e. MicroShift), Route validation does not perform the same validation as on OCP.
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
$ cat<<EOF | oc apply --server-side -f-
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: hello-microshift
spec:
  to:
    kind: Service
    name: hello-microshift
EOF

route.route.openshift.io/hello-microshift serverside-applied

$ oc get route hello-microshift -o yaml

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  annotations:
    openshift.io/host.generated: "true"
  creationTimestamp: "2022-11-11T23:53:33Z"
  generation: 1
  name: hello-microshift
  namespace: default
  resourceVersion: "2659"
  uid: cd35cd20-b3fd-4d50-9912-f34b3935acfd
spec:
  host: hello-microshift-default.cluster.local
  to:
    kind: Service
    name: hello-microshift
  wildcardPolicy: None

$ cat<<EOF | oc apply --server-side -f-
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: hello-microshift
spec:
  to:
    kind: Service
    name: hello-microshift
  wildcardPolicy: ""
EOF
Actual results:
route.route.openshift.io/hello-microshift serverside-applied
Expected results:
The Route "hello-microshift" is invalid: spec.wildcardPolicy: Invalid value: "": field is immutable
Additional info:
** This change will be inert on OCP, which already has the correct behavior. **
Description of problem:
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
We should deprecate and eventually remove react-helmet as a shared plugin dependency. This dependency is small, and plugins can bring their own version if needed.
This requires updating our webpack plugin to allow dependency fallbacks when a shared dependency is not present.
AC:
An RW mutex was introduced to the project auth cache with https://github.com/openshift/openshift-apiserver/pull/267, taking exclusive access during cache syncs. On clusters with extremely high object counts for namespaces and RBAC, syncs appear to be extremely slow (on the order of several minutes). The project LIST handler acquires the same mutex in shared mode as part of its critical path.
Console should be using the v1 version of the ConsolePlugin model rather than the old v1alpha1.
CONSOLE-3077 was updating this version, but did not make the cut for the 4.12 release. Based on discussion with Samuel Padgett we should be backporting it to 4.12.
The risk should be minimal since we are only updating the model itself + validation + Readme
This is a clone of issue OCPBUGS-2532. The following is the description of the original issue:
—
Description of problem:
Upgrades from OCP 4.11.9 to the latest OCP 4.12 nightly builds, including 4.12.0-ec.4, will fail. When the upgrade fails, there are typically two operators that never get upgraded (all others do upgrade to the targeted 4.12.x release):

dns              4.11.9   True   True    False   11h   DNS "default" reports Progressing=True: "Have 4 available DNS pods, want 5."...
machine-config   4.11.9   True   False   False   14h

The dns.operator details state it is waiting for 4/5 pods to become available:

# oc describe dns.operator/default
...
Status:
  Cluster Domain:  cluster.local
  Cluster IP:      172.30.0.10
  Conditions:
    Last Transition Time:  2022-10-18T03:21:44Z
    Message:               Enough DNS pods are available, and the DNS service has a cluster IP address.
    Reason:                AsExpected
    Status:                False
    Type:                  Degraded
    Last Transition Time:  2022-10-18T03:21:44Z
    Message:               Have 4 available DNS pods, want 5.
    Reason:                Reconciling
    Status:                True
    Type:                  Progressing

The mcp reports everything is good:

# oc get mcp
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-87fd457ffdaf49d75e62b532c22a9f1d   True      False      False      3              3                   3                     0                      14h
worker   rendered-worker-7fc68009b1facf8724cd952cb08435ff   True      False      False      2              2                   2                     0                      14h

We have performed a large number of the same upgrades, using the same configuration, and while there are times the upgrade succeeds, the large majority of runs do fail. This seems to be a timing issue. As a current workaround, if we recycle the control plane nodes, the upgrade will complete successfully. A must-gather log is attached for review.
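When the upgrade is stuck in this state, one place to look is which node is missing its dns-default pod. A small sketch of the checks (standard oc commands, nothing cluster-specific assumed):

# oc -n openshift-dns get pods -o wide
# oc get nodes -o wide
# oc -n openshift-dns describe daemonset/dns-default | grep -i desired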
Version-Release number of selected component (if applicable):
Tested upgrading to all the following releases: 4.12.0-ec.4 4.12.0-0.nightly-s390x-2022-10-10-005931 4.12.0-0.nightly-s390x-2022-10-15-144437
How reproducible:
Moderate to Consistently
Steps to Reproduce:
1. Start with a working OCP 4.11.9 cluster.
2. Perform an upgrade to the latest OCP 4.12.x nightly build.
3. Monitor the upgrade status:
   # oc get clusterversion —> will state % complete and waiting on dns, which never finishes.
   # oc get co —> the dns and machine-config operators will remain at 4.11.9
4. Upgrade will never complete.
Actual results:
Upgrade will never complete.
Expected results:
Upgrade to the targeted release succeeds.
Additional info:
This upgrade issue occurs for both Connected and Disconnected Clusters.
This is a clone of issue OCPBUGS-3426. The following is the description of the original issue:
—
Description of problem:
We need to update the operator to be in sync with the k8s API version used by OCP 4.13. We also need to sync our samples libraries with the latest available libraries. Any deprecated libraries should be removed as well.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
Description of problem:
This is a bug record to pin down dependency versions in CMO release 4.12 after the release-4.12 branch was detached from the master branch.
Version-Release number of selected component (if applicable):
4.12
How reproducible:
N/A
Steps to Reproduce:
N/A
Actual results:
N/A
Expected results:
N/A
Additional info:
None.
Description of problem:
With every pod update we execute a mutate operation to add the pod port to the port group or add the pod IP to an address set. Functionally this doesn't hurt, since mutate will not add duplicate values to the same set. However, it is bad for performance. For example, with 730 network policies affecting a pod, issuing 7 pod updates would result in over 5k transactions.
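As a rough worked example of that scale, assuming roughly one mutate transaction per affected policy per update: 730 policies × 7 pod updates ≈ 5,110 transactions for a single pod, which matches the "over 5k" figure above.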
Currently the controller will set the host status to Done each time it sees a host that is ready in k8s, without checking whether the status was already set.
time="2022-09-13T19:03:45Z" level=info msg="Found new ready node ocp-2.cluster1.kpsalerno.us.ibm.com with inventory id 2da64d56-5057-78c6-ea6e-bf74a783bd79, kubernetes id 2da64d56-5057-78c6-ea6e-bf74a783bd79, updating its status to Done" func="github.com/openshift/assisted-installer/src/assisted_installer_controller.(*controller).waitAndUpdateNodesStatus" file="/remote-source/app/src/assisted_installer_controller/assisted_installer_controller.go:255" request_id=6258e5a2-4e78-4148-a913-45d704a0fa1d
time="2022-09-13T19:04:05Z" level=info msg="Found new ready node ocp-2.cluster1.kpsalerno.us.ibm.com with inventory id 2da64d56-5057-78c6-ea6e-bf74a783bd79, kubernetes id 2da64d56-5057-78c6-ea6e-bf74a783bd79, updating its status to Done" func="github.com/openshift/assisted-installer/src/assisted_installer_controller.(*controller).waitAndUpdateNodesStatus" file="/remote-source/app/src/assisted_installer_controller/assisted_installer_controller.go:255" request_id=49e4e63f-cf4f-4b9f-b1f3-923c473c09dd
Description of problem: The product name for Azure Red Hat OpenShift was incorrect in Customer Case Management (CCM). As a result, the console included this incorrect product name in order for the support case link to correctly route. https://issues.redhat.com/browse/CPCCM-9926 fixed the incorrect product name, so now the support case link for Azure needs to be updated to reflect the correct product name.
Description of problem:
We have 6 runs of techpreview jobs where the jobs fail due to the MCO:
{Operator degraded (RequiredPoolsFailed): Unable to apply 4.12.0-0.ci.test-2022-09-21-183414-ci-op-qd6plyhc-latest: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, error pool master is not ready, retrying. Status: (pool degraded: true total: 3, ready 0, updated: 0, unavailable: 3)] Operator degraded (RequiredPoolsFailed): Unable to apply 4.12.0-0.ci.test-2022-09-21-183414-ci-op-qd6plyhc-latest: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, error pool master is not ready, retrying. Status: (pool degraded: true total: 3, ready 0, updated: 0, unavailable: 3)]}
looking at the MCD logs the master seems to go degraded in bootstrap due to the rendered config not being found?
I0921 18:49:47.091804    8171 daemon.go:444] Node ci-op-qd6plyhc-6dd9a-bfmjd-master-1 is part of the control plane
I0921 18:49:49.213556    8171 node.go:24] No machineconfiguration.openshift.io/currentConfig annotation on node ci-op-qd6plyhc-6dd9a-bfmjd-master-1: map[csi.volume.kubernetes.io/nodeid: {"pd.csi.storage.gke.io":"projects/openshift-gce-devel-ci-2/zones/us-central1-b/instances/ci-op-qd6plyhc-6dd9a-bfmjd-master-1"} volumes.kubernetes.io/controller-managed-attach-detach:true], in cluster bootstrap, loading initial node annotation from /etc/machine-config-daemon/node-annotations.json
I0921 18:49:49.215186    8171 node.go:45] Setting initial node config: rendered-master-2dde32327e4e5d15092fccbac1dcec49
I0921 18:49:49.253706    8171 daemon.go:1184] In bootstrap mode
E0921 18:49:49.254046    8171 writer.go:200] Marking Degraded due to: machineconfig.machineconfiguration.openshift.io "rendered-master-2dde32327e4e5d15092fccbac1dcec49" not found
I0921 18:49:51.232610    8171 daemon.go:499] Transitioned from state: Done -> Degraded
I0921 18:49:51.249618    8171 daemon.go:1184] In bootstrap mode
E0921 18:49:51.249906    8171 writer.go:200] Marking Degraded due to: machineconfig.machineconfiguration.openshift.io "rendered-master-2dde32327e4e5d15092fccbac1dcec49" not found
However, looking at the controller logs, a rendered config was generated correctly, but it's not the missing config from above:
I0921 18:54:06.736984       1 render_controller.go:506] Generated machineconfig rendered-master-acc8491aafab8ef511a40b76372325ee from 6 configs: [{MachineConfig 00-master machineconfiguration.openshift.io/v1 } {MachineConfig 01-master-container-runtime machineconfiguration.openshift.io/v1 } {MachineConfig 01-master-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 98-master-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 99-master-generated-registries machineconfiguration.openshift.io/v1 } {MachineConfig 99-master-ssh machineconfiguration.openshift.io/v1 }]
I0921 18:54:06.737226       1 event.go:285] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"", Name:"master", UID:"b2084ca6-4b33-46bf-b83b-9e98010ff085", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"5648", FieldPath:""}): type: 'Normal' reason: 'RenderedConfigGenerated' rendered-master-acc8491aafab8ef511a40b76372325ee successfully generated (release version: 4.12.0-0.ci.test-2022-09-21-183220-ci-op-9ksj7d7g-latest, controller version: a627415c240b4c7dd2f9e90f659690d9c0f623f3)
I0921 18:54:06.742053       1 render_controller.go:532] Pool master: now targeting: rendered-master-acc8491aafab8ef511a40b76372325ee
So far I see this in the following techpreview jobs:
GCP techpreview
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/openshift-kubernetes-1360-ci-4.12-e2e-gcp-sdn-techpreview/1572638837954318336
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/openshift-kubernetes-1360-ci-4.12-e2e-gcp-sdn-techpreview-serial/1572638838793179136
Vsphere techpreview
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/openshift-kubernetes-1360-nightly-4.12-e2e-vsphere-ovn-techpreview/1572638854794448896
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/openshift-kubernetes-1360-nightly-4.12-e2e-vsphere-ovn-techpreview-serial/1572638855574589440
AWS Techpreview:
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/openshift-kubernetes-1360-ci-4.12-e2e-aws-sdn-techpreview/1572638828672323584
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/openshift-kubernetes-1360-ci-4.12-e2e-aws-sdn-techpreview-serial/1572638829217583104
The above jobs affect the k8s 1.25 bump and are blocking the job.
There are also other occurrences not in our PR:
https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_release/31965/rehearse-31965-pull-ci-openshift-openshift-controller-manager-master-openshift-e2e-aws-builds-techpreview/1572581504297472000
Also see a quick search:
https://search.ci.openshift.org/?search=timed+out+waiting+for+the+condition%2C+error+pool+master+is+not+ready&maxAge=48h&context=1&type=bug%2Bissue%2Bjunit&name=&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job
Did something change that would affect tech preview jobs?
Also note, this seems like a new failure. I have some of these jobs passing in the last ~ 8 days.
Description of problem:
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
Description of problem:
Network policy code has some problems, most of them races, therefore it can be difficult to reproduce and verify. Here is the list:
1. All kinds of add/delete port to/from default deny port group failures. Possible symptoms:
   - port should’ve been added to the default deny port group, but wasn’t: connections that should’ve been dropped are allowed
   - port should’ve been deleted from the default deny port group, but wasn’t: connections that should be allowed are dropped
   - db ops failures when an attempt to add/delete a port to/from the default deny port group fails, e.g. because this operation was already done
2. Default deny port group was overwritten when 2 network policies are created in a namespace at the same time. Can lead to ports not being added to the default deny port group => denied connections will be allowed.
3. Handle error when getting a local pod from the cache fails. Possible symptoms:
   - "Failed to get LSP after multiple retries for pod %s/%s for networkPolicy" log message
   - pod is not added to netpol port groups, network policy is not applied
4. Creating a deleted namespace via ensureNamespaceLocked. Symptoms:
   - namespace was deleted, but its address set is present in the db
5. Policy ACL loglevel update wasn’t applied. Possible symptoms:
   - netpol ACL log level isn’t set/updated to the namespace loglevel
6. Netpol cleanup failures. Symptoms:
   - network policy failed to be deleted, something is still left in the db, error messages like "failed to destroy network policy" or "Rollback of default port groups and acls for policy: %s/%s failed, Unable to ensure namespace for network policy"
7. Concurrent write to sets.String - this will panic, you won’t miss it.
8. Retry for the network policy handler after the network policy was deleted; you should see failures saying that some network policy related object is nil or doesn’t exist, e.g. "peer AddressSet is nil, cannot add <object>"
9. Host network and completed pods selected by a network policy can produce error logs, no real harm: "Failed to get LSP for pod <namespace>/<name> for networkPolicy %s refetching err"
10. Namespace pod handlers are never stopped, which can affect memory usage and look like a memory leak.
11. Add local pod failure, since the netpol port group is not committed to the db yet; the error looks like "Failed to create *factory.localPodSelector <name>, error: object not found"
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
Example 1
1. Create a network policy with an [in/e]gress selector that applies to a namespace labeled project: myproject

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: test
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              project: myproject

2. Use oc apply to delete the network policy and create a pod in a project: myproject namespace at the same time
3. Check ovnkube-master logs for "peer AddressSet is nil, cannot add peer pod(s)"; this should retry with the same error 15 times
4. This may not work on the first try, since we need to hit a specific order of network policy delete and pod add handling
5. With the new version no error messages should be present

Example 2
1. Create a network policy that applies to namespace test

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: test
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:

2. Create a host network pod in namespace test
3. Check for 15 logs saying "Failed to get LSP for pod %s/%s for networkPolicy %s refetching err: "
4. Check the final log "Failed to get LSP after multiple retries for pod %s/%s for networkPolicy"
5. With the new version no error message should be present

All the other cases are difficult to reproduce; just running some standard network policy tests and making sure everything works will be a good verification.
Actual results:
Expected results:
Additional info:
Description of problem:
If a customer creates a machine with a networks section like this:

networks:
- filter: {}
  noAllowedAddressPairs: false
  subnets:
  - filter: {}
    uuid: primary-subnet-uuid
- filter: {}
  noAllowedAddressPairs: true
  subnets:
  - filter: {}
    uuid: other-subnet-uuid
primarySubnet: primary-subnet-uuid

then all the ports are created without the allowed address pairs. Doing some research in the source code, I have found that:
- For each entry in the networks: section, networks are filtered as per its filter: section [1].
- Then, if the subnets: section of the network entry is not empty, for each of the network IDs found above [2], 2 things are done that are relevant for this situation:
  - The net ID is saved in a netsWithoutAllowedAddressPairs map [3]. That map is later checked while creating any port [4].
  - For each subnet entry that matches the network ID, a port is created [5].

So, the problematic behavior happens due to the following:
- Both entries in the networks array have empty filters. This means that both entries selected all the neutron networks.
- This configuration results in one port per subnet as expected because, in the later traversal of the subnets array of each entry [5], it is filtering by subnet and creating a single port as expected.
- However, the entry with "noAllowedAddressPairs: true" is selecting all the neutron networks, so it adds all of them to the netsWithoutAllowedAddressPairs map [3], regardless of the subnets filtering.
- As all the networks are in the noAllowedAddressPairs: true map, all the ports created for the VM have their allowed address pairs removed [4].

Why do we consider this behavior undesired? I understand that, if we create a port for a network that has no allowed pairs, we create all the other ports in the same networks without the pairs. However, it is surprising that a port in a network has its allowed address pairs removed due to a setting in an entry that yielded no port on that network. In other words, one would expect that the same subnet filtering that happens on each network entry in what regards yielding ports for the VM would also apply to the noAllowedAddressPairs parameter.
Version-Release number of selected component (if applicable):
4.10.30
How reproducible:
Always
Steps to Reproduce:
1. Create a machineset like in the description
Actual results:
All ports have no address pairs
Expected results:
Only the port on the secondary subnet has no address pairs.
Additional info:
A simple workaround would be to just fill the filter so that a single network is selected for each network entry (see the sketch after the references).

References:
[1] - https://github.com/openshift/cluster-api-provider-openstack/blob/f6b51710d4f395ded401347589447f5f41dd5c4c/pkg/cloud/openstack/clients/machineservice.go#L576
[2] - https://github.com/openshift/cluster-api-provider-openstack/blob/f6b51710d4f395ded401347589447f5f41dd5c4c/pkg/cloud/openstack/clients/machineservice.go#L580
[3] - https://github.com/openshift/cluster-api-provider-openstack/blob/f6b51710d4f395ded401347589447f5f41dd5c4c/pkg/cloud/openstack/clients/machineservice.go#L581-L583
[4] - https://github.com/openshift/cluster-api-provider-openstack/blob/f6b51710d4f395ded401347589447f5f41dd5c4c/pkg/cloud/openstack/clients/machineservice.go#L658-L660
[5] - https://github.com/openshift/cluster-api-provider-openstack/blob/f6b51710d4f395ded401347589447f5f41dd5c4c/pkg/cloud/openstack/clients/machineservice.go#L610-L625
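A sketch of what that workaround might look like, assuming the network filter supports selecting by name (the network names below are placeholders; confirm the exact filter fields against the provider's schema before relying on them):

networks:
- filter:
    name: primary-network
  noAllowedAddressPairs: false
  subnets:
  - filter: {}
    uuid: primary-subnet-uuid
- filter:
    name: other-network
  noAllowedAddressPairs: true
  subnets:
  - filter: {}
    uuid: other-subnet-uuid
primarySubnet: primary-subnet-uuid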
This is a clone of issue OCPBUGS-4190. The following is the description of the original issue:
—
Description of problem:
Two tests are perma-failing in metal-ipi upgrade tests:
[sig-imageregistry] Image registry remains available using new connections (39m27s)
[sig-imageregistry] Image registry remains available using reused connections (39m27s)
Version-Release number of selected component (if applicable):
4.12 / 4.13
How reproducible:
all ci runs
Steps to Reproduce:
1. 2. 3.
Actual results:
Nov 24 02:58:26.998: INFO: "[sig-imageregistry] Image registry remains available using reused connections": panic: runtime error: invalid memory address or nil pointer dereference
Expected results:
pass
Additional info:
Description of problem:
See https://github.com/metal3-io/baremetal-operator/issues/1045
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
The pipeline run nodes used to show a focus border when they were in focus but no longer do.
There is no indication of which node has the focus
There should be a focus border indicating the current focus node.
How reproducible: always
Version-Release number of selected component (if applicable): 4.12
Previously:
Currently:
Description of problem:
node_exporter collects network metrics for "virtual" interfaces like br-*. When OVN is used, it also reports metrics for ovs-*, ovn, and genev_sys_* interfaces.
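One possible direction, assuming the node_exporter version shipped with this release supports the netdev/netclass exclusion flags (verify flag availability against the deployed version), is to exclude the virtual interfaces at collection time, for example:

node_exporter \
  --collector.netdev.device-exclude='^(br-.*|ovs-.*|ovn|genev_sys_.*)$' \
  --collector.netclass.ignored-devices='^(br-.*|ovs-.*|ovn|genev_sys_.*)$'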
Version-Release number of selected component (if applicable):
4.12 (and before)
How reproducible:
Always
Steps to Reproduce:
1. Launch a 4.12 cluster.
2. Run the following PromQL query: group by(device) (node_network_info)
Expected results:
Only real host interfaces should be present.
Additional info:
Description of problem:
When providing install-config as
platform:
  baremetal:
    apiVIP: 192.168.122.10
    ingressVIP: 192.168.122.11
agent installer fails with
bin/openshift-install agent create cluster-manifests
FATAL failed to fetch Agent Manifests: failed to load asset "Install Config": invalid install-config configuration: [Platform.Baremetal.ApiVips: Required value: apiVips must be set for baremetal platform, Platform.Baremetal.IngressVips: Required value: ingressVips must be set for baremetal platform]
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Git clone latest installer https://github.com/openshift/installer and build it
2. Provide install-config.yaml for baremetal platform with deprecated apiVip and ingressVip set
3. Create agent image with "bin/openshift-install agent create cluster-manifests"
Actual results:
bin/openshift-install agent create cluster-manifests
FATAL failed to fetch Agent Manifests: failed to load asset "Install Config": invalid install-config configuration: [Platform.Baremetal.ApiVips: Required value: apiVips must be set for baremetal platform, Platform.Baremetal.IngressVips: Required value: ingressVips must be set for baremetal platform]
Expected results:
The agent installer should upconvert the deprecated fields and should not error; apiVip and ingressVip should be upconverted into apiVips and ingressVips respectively.
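For comparison, the non-deprecated form the installer expects would look roughly like the following (treat it as a sketch and confirm the exact field casing against the installer version in use):

platform:
  baremetal:
    apiVIPs:
      - 192.168.122.10
    ingressVIPs:
      - 192.168.122.11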
Additional info:
Description of problem: upon attempting to install OCP 4.10 UPI on baremetal ppc64le, the openshift-install gather command returns `panic: unsupported platform "none"`
Version-Release number of selected component (if applicable):
OCP 4.10.16
openshift-install 4.10.24
How reproducible:
easily
Steps to Reproduce:
1. create install config
2. create manifests
3. create ignition configs
4. openshift-install gather bootstrap --log-level "debug"
Actual results:
DEBUG OpenShift Installer 4.10.24
DEBUG Built from commit d63a12ba0ec33d492093a8fc0e268a01a075f5da
DEBUG Fetching Bootstrap SSH Key Pair...
DEBUG Loading Bootstrap SSH Key Pair...
DEBUG Using Bootstrap SSH Key Pair loaded from state file
DEBUG Reusing previously-fetched Bootstrap SSH Key Pair
DEBUG Fetching Install Config...
DEBUG Loading Install Config...
DEBUG Loading SSH Key...
DEBUG Loading Base Domain...
DEBUG Loading Platform...
DEBUG Loading Cluster Name...
DEBUG Loading Base Domain...
DEBUG Loading Platform...
DEBUG Loading Networking...
DEBUG Loading Platform...
DEBUG Loading Pull Secret...
DEBUG Loading Platform...
DEBUG Loading Install Config from both state file and target directory
DEBUG On-disk Install Config matches asset in state file
DEBUG Using Install Config loaded from state file
DEBUG Reusing previously-fetched Install Config
panic: unsupported platform "none"
goroutine 1 [running]:
github.com/openshift/installer/pkg/terraform/stages/platform.StagesForPlatform({0x146f2d0a, 0x1619aa08})
/go/src/github.com/openshift/installer/pkg/terraform/stages/platform/stages.go:55 +0x2ff
main.runGatherBootstrapCmd({0x14d8e028, 0x1})
/go/src/github.com/openshift/installer/cmd/openshift-install/gather.go:115 +0x2d6
main.newGatherBootstrapCmd.func1(0xc001364500, {0xc0005a0b40, 0x2, 0x2})
/go/src/github.com/openshift/installer/cmd/openshift-install/gather.go:65 +0x59
github.com/spf13/cobra.(*Command).execute(0xc001364500, {0xc0005a0b20, 0x2, 0x2})
/go/src/github.com/openshift/installer/vendor/github.com/spf13/cobra/command.go:860 +0x5f8
github.com/spf13/cobra.(*Command).ExecuteC(0xc001334c80)
/go/src/github.com/openshift/installer/vendor/github.com/spf13/cobra/command.go:974 +0x3bc
github.com/spf13/cobra.(*Command).Execute(...)
/go/src/github.com/openshift/installer/vendor/github.com/spf13/cobra/command.go:902
main.installerMain()
/go/src/github.com/openshift/installer/cmd/openshift-install/main.go:72 +0x29e
main.main()
/go/src/github.com/openshift/installer/cmd/openshift-install/main.go:50 +0x125
Expected results:
I'm not really sure what I expected to happen. I've never used that gather before..
I would assume at least no panicking.
Additional info:
Description of the problem:
https://github.com/openshift/assisted-installer/issues/513
How reproducible:
https://github.com/openshift/assisted-installer/issues/513
Steps to reproduce:
1. Create cluster with proxy and additional certificate bundle
2.Install
Actual results:
Controller failed to reach the service because of the self-signed certificate
Expected results:
Installation succeeds
Package golang.org/x/text/language has a vulnerability which could cause a denial of service (included in version v0.3.7 of golang.org/x/text/language).
Description of problem: After I run the golang script for OCP-53608, I find the created ingress-controller couldn't be deleted.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-08-15-150248
How reproducible: Run the script and try to delete the custom ingress-controller
Steps to Reproduce:
1.
% oc get clusterversion
NAME VERSION AVAILABLE PROGRESSING SINCE STATUS
version 4.12.0-0.nightly-2022-08-15-150248 True False 43m Cluster version is 4.12.0-0.nightly-2022-08-15-150248
shudi@Shudis-MacBook-Pro openshift-tests-private %
2. Run the script
shudi@Shudis-MacBook-Pro openshift-tests-private % ./bin/extended-platform-tests run all --dry-run | grep 53608 | ./bin/extended-platform-tests run -f -
...
---------------------------------------------------------
Received interrupt. Running AfterSuite...
^C again to terminate immediately
Aug 18 10:35:51.087: INFO: Running AfterSuite actions on all nodes
Aug 18 10:35:51.088: INFO: Waiting up to 7m0s for all (but 100) nodes to be ready
STEP: Destroying namespace "e2e-test-router-tunning-77627" for this suite.
Aug 18 10:35:54.654: INFO: Running AfterSuite actions on node 1
failed: (15m4s) 2022-08-18T02:35:54 "[sig-network-edge] Network_Edge should Author:shudili-Low-53608-Negative Test of Expose a Configurable Reload Interval in HAproxy [Suite:openshift/conformance/parallel]"
Failing tests:
[sig-network-edge] Network_Edge should Author:shudili-Low-53608-Negative Test of Expose a Configurable Reload Interval in HAproxy [Suite:openshift/conformance/parallel]
error: 1 fail, 0 pass, 0 skip (15m4s)
shudi@Shudis-MacBook-Pro openshift-tests-private %
3. show the ingress-controllers
shudi@Shudis-MacBook-Pro openshift-tests-private % oc -n openshift-ingress-operator get ingresscontroller
NAME AGE
default 113m
ocp53608 42m
shudi@Shudis-MacBook-Pro openshift-tests-private %
4. Try to delete the ingress-controller ocp53608. After the message "ingresscontroller.operator.openshift.io "ocp53608" deleted" appears, the command hangs for a long time until the error message appears.
shudi@Shudis-MacBook-Pro openshift-tests-private % oc -n openshift-ingress-operator delete ingresscontroller ocp53608
ingresscontroller.operator.openshift.io "ocp53608" deleted
error: An error occurred while waiting for the object to be deleted: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
Unable to connect to the server: dial tcp 35.194.1.60:6443: i/o timeout
shudi@Shudis-MacBook-Pro openshift-tests-private %
5. After "ingresscontroller.operator.openshift.io "ocp53608" deleted" message appears, show the ingress-controller, ocp53608 isn't deleted
shudi@Shudis-MacBook-Pro golang % oc -n openshift-ingress-operator get ingresscontroller
NAME AGE
default 3h
ocp53608 109m
shudi@Shudis-MacBook-Pro golang %
6. After the error message (error: An error occurred while waiting for the object to be deleted) appears, try to show the ingresscontroller
shudi@Shudis-MacBook-Pro openshift-tests-private % oc -n openshift-ingress-operator get ingresscontroller
E0818 12:21:57.272967 4168 request.go:1085] Unexpected error when reading response body: net/http: request canceled (Client.Timeout or context cancellation while reading body)
E0818 12:21:57.273379 4168 request.go:1085] Unexpected error when reading response body: net/http: request canceled (Client.Timeout or context cancellation while reading body)
E0818 12:21:57.274306 4168 request.go:1085] Unexpected error when reading response body: net/http: request canceled (Client.Timeout or context cancellation while reading body)
Unable to connect to the server: dial tcp 35.194.1.60:6443: i/o timeout
shudi@Shudis-MacBook-Pro openshift-tests-private %
Actual results: ingress-controller ocp53608 is still there after executing the oc delete command
Expected results:
ingress-controller ocp53608 will be deleted soon after executing the oc delete command
Additional info:
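As a possible follow-up diagnostic when a delete appears to hang like this (a sketch; the output above also shows the API server itself becoming unreachable, so connectivity is worth ruling out first, then check whether the object is stuck waiting on finalizers):

oc whoami --show-server
oc -n openshift-ingress-operator get ingresscontroller ocp53608 -o jsonpath='{.metadata.deletionTimestamp}{"\n"}{.metadata.finalizers}{"\n"}'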