Note: this page shows the Feature-Based Change Log for a release
These features were completed when this image was assembled
1. Proposed title of this feature request
Add runbook_url to alerts in the OCP UI
2. What is the nature and description of the request?
If an alert includes a runbook_url label, then it should appear in the UI for the alert as a link.
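For illustration, a minimal sketch of an alerting rule carrying a runbook link, written as a PrometheusRule with a runbook_url annotation (the common in-cluster convention; the request above says "label", and all names, the expression, and the URL here are hypothetical):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-alerts              # hypothetical
  namespace: openshift-monitoring
spec:
  groups:
  - name: example.rules
    rules:
    - alert: ExampleAlert
      expr: vector(1)               # placeholder expression
      labels:
        severity: warning
      annotations:
        summary: Example alert that links to its runbook.
        runbook_url: https://example.com/runbooks/ExampleAlert.md   # rendered as a link in the alert UI
```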
3. Why does the customer need this? (List the business requirements here)
Customers can easily reach the alert runbook and address their issues.
4. List any affected packages or components.
As a user, I should be able to configure the CSI driver to have a storage topology.
In the console-operator repo we need to add the `capability.openshift.io/console` annotation to all the manifests that the operator either contains or creates on the fly.
Manifests are currently present in /bindata and /manifest directories.
Here is an example of the insights-operator change.
Here is the overall enhancement doc.
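As a rough sketch, an annotated manifest might look like the following; the resource kind and the annotation value are assumptions, and only the annotation key comes from the description above:

```yaml
# Excerpt of an operator-managed manifest gated on the console capability
apiVersion: apps/v1
kind: Deployment
metadata:
  name: console-operator
  namespace: openshift-console-operator
  annotations:
    capability.openshift.io/console: "true"   # value shown here is an assumption
```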
Feature Overview
Provide CSI drivers to replace all the in-tree cloud provider drivers we currently have. These drivers will probably be released as tech preview versions first before being promoted to GA.
Goals
Requirements
Requirement | Notes | isMvp? |
---|---|---|
Framework for CSI driver | TBD | Yes |
Drivers should be available to install both in disconnected and connected mode | Yes | |
Drivers should upgrade from release to release without any impact | Yes | |
Drivers should be installable via CVO (when in-tree plugin exists) | | |
Out of Scope
This work will only cover the drivers themselves; it will not include
Background, and strategic fit
In a future Kubernetes release (currently 1.21), in-tree cloud provider drivers will be deprecated and replaced with CSI equivalents. We need the drivers created so that we continue to support the ecosystems in an appropriate way.
Assumptions
Customer Considerations
Customers will need to be able to use the storage they want.
Documentation Considerations
This Epic is to track the GA of this feature
As an OCP user, I want images for GCP Filestore CSI Driver and Operator, so that I can install them on my cluster and utilize GCP Filestore shares.
We need to continue to maintain specific areas within storage, this is to capture that effort and track it across releases.
Goals
Requirements
Requirement | Notes | isMvp? |
---|---|---|
Telemetry | No | |
Certification | No | |
API metrics | No | |
Out of Scope
n/a
Background, and strategic fit
With the expected scale of our customer base, we want to keep the load of customer tickets / BZs low
Assumptions
Customer Considerations
Documentation Considerations
Notes
In progress:
High prio:
Unsorted
The end of general support for vSphere 6.7 will be on October 15, 2022, so vSphere 6.7 will be deprecated for 4.11.
We want to encourage vSphere customers to upgrade to vSphere 7 in OCP 4.11, since VMware is ending general support for vSphere 6.7 in October 2022.
We want to set the cluster to Upgradeable=false and have a strong alert pointing to our docs / requirements.
related slack: https://coreos.slack.com/archives/CH06KMDRV/p1647541493096729
Traditionally we did these updates as bugfixes because we did them after the feature freeze (FF). We are trying no-feature-freeze in 4.12. We will try to do as much as we can before FF, but we're quite sure something will slip past FF as usual.
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update all CSI sidecars to the latest upstream release.
This includes update of VolumeSnapshot CRDs in https://github.com/openshift/cluster-csi-snapshot-controller-operator/tree/master/assets
Update all OCP and kubernetes libraries in storage operators to the appropriate version for OCP release.
This includes (but is not limited to):
Operators:
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
There is a new driver release 5.0.0 since the last rebase that includes snapshot support:
https://github.com/kubernetes-sigs/ibm-vpc-block-csi-driver/releases/tag/v5.0.0
Rebase the driver on v5.0.0 and update the deployments in ibm-vpc-block-csi-driver-operator.
There are no corresponding changes in ibm-vpc-node-label-updater since the last rebase.
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
This includes ibm-vpc-node-label-updater!
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
On new installations, we should make the StorageClass created by the CSI operator the default one.
However, we shouldn't do that in an upgrade scenario. The main reason is that users might have set a different quota on the CSI driver StorageClass.
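A minimal sketch of what the operator-created StorageClass could look like on a fresh installation, using the standard Kubernetes default-class annotation (the name and provisioner are hypothetical):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-csi                  # hypothetical
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # set only on new installs, not on upgrades
provisioner: csi.example.com         # hypothetical driver
```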
Exit criteria:
This Epic tracks the GA of this feature
Epic Goal
On new installations, we should make the StorageClass created by the CSI operator the default one.
However, we shouldn't do that in an upgrade scenario. The main reason is that users might have set a different quota on the CSI driver StorageClass.
Exit criteria:
Rebase openshift-controller-manager to k8s 1.24
4.11 MVP Requirements
Out of scope use cases (that are part of the Kubeframe/factory project):
Questions to be addressed:
Set the ClusterDeployment CRD to deploy OpenShift in FIPS mode and make sure that after deployment the cluster is set in that mode
In order to install FIPS-compliant clusters, we need to make sure that installconfig + agentconfig based deployments take into account the FIPS config in installconfig.
This task is about passing the config to agentclusterinstall so it makes it into the ISO. Once there, AGENT-374 will give it to assisted-service.
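For reference, the relevant knob is the top-level fips field of install-config; a minimal excerpt is below (the cluster name is illustrative, and propagating this value into agentclusterinstall is exactly what this task covers):

```yaml
# install-config.yaml (excerpt)
apiVersion: v1
metadata:
  name: example-cluster   # illustrative
fips: true
```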
As an OpenShift infrastructure owner, I want to deploy a cluster zero with RHACM or MCE and have the required components installed when the installation is completed
BILLI makes it easier to deploy a cluster zero. BILLI users know at installation time what the purpose of their cluster is when they plan the installation. Today, day-2 steps are necessary to install operators, and users, especially when automating installations, want to finish the installation flow with their required components already installed.
As a customer, I want to be able to:
so that I can achieve
Description of criteria:
We are only allowing the user to provide extra manifests to install MCE at this time. We are not adding an option to "install mce" on the command line (or UI)
This requires/does not require a design proposal.
This requires/does not require a feature gate.
As a customer, I want to be able to:
so that I can achieve
Description of criteria:
We are only allowing the user to provide extra manifests to install MCE at this time. We are not adding an option to "install mce" on the command line (or UI)
This requires/does not require a design proposal.
This requires/does not require a feature gate.
As an OpenShift infrastructure owner, I want to deploy OpenShift clusters with dual-stack IPv4/IPv6
As an OpenShift infrastructure owner, I want to deploy OpenShift clusters with single-stack IPv6
IPv6 and dual-stack clusters are often requested by customers, especially Telco customers. Working with dual-stack clusters is a requirement for many, but also a transition into single-stack IPv6 clusters, which for some of our users is the final destination.
Karim's work proving how agent-based can deploy IPv6: IPv6 deploy with agent-based installer
For dual-stack installations the agent-cluster-install.yaml must have both an IPv4 and IPv6 subnet in the networking.MachineNetwork or assisted-service will throw an error. This field is in InstallConfig but it must be added to agent-cluster-install in its Generate().
For IPv4 and IPv6 installs, setting up the MachineNetwork is not needed, but it also does not cause problems if it is set, so it should be fine to set it at all times.
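A minimal sketch of a dual-stack machineNetwork as it would need to appear in agent-cluster-install.yaml (the CIDRs are illustrative):

```yaml
# agent-cluster-install.yaml (excerpt)
spec:
  networking:
    machineNetwork:
    - cidr: 192.168.111.0/24           # IPv4 subnet
    - cidr: fd2e:6f44:5dd8:c956::/120  # IPv6 subnet
```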
As a user I would like to see all the events that the autoscaler creates, even duplicates. Having the CAO set this flag will allow me to continue to see these events.
We have carried a patch for the autoscaler that would enable the duplication of events. This patch can now be dropped because the upstream added a flag for this behavior in https://github.com/kubernetes/autoscaler/pull/4921
Add GA support for deploying OpenShift to IBM Public Cloud
Complete the existing gaps to make OpenShift on IBM Cloud VPC (Next Gen2) Generally Available
This epic tracks the changes needed to the ingress operator to support IBM DNS Services for private clusters.
Currently in OpenShift we do not support distributing hotfix packages to cluster nodes. In time-sensitive situations, a RHEL hotfix package can be the quickest route to resolving an issue.
Before we ship OCP CoreOS layering in https://issues.redhat.com/browse/MCO-165 we need to switch the format of what is currently `machine-os-content` to be the new base image.
The overall plan is:
As an OCP CoreOS layering developer, having telemetry data about the number of clusters using osImageURL will help me understand how broadly this feature is getting used and improve it accordingly.
Acceptance Criteria:
After https://github.com/openshift/os/pull/763 is in the release image, teach the MCO how to use it. This is basically:
Assumption
Doc: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
CNCC was moved to the management cluster and it should use proxy settings defined for the management cluster.
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumption
cluster-snapshot-controller-operator is running on the CP.
More information here: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
As an OpenShift developer I want cluster-csi-snapshot-controller-operator to use existing controllers in library-go, so I don’t need to maintain yet another piece of code that does the same thing as library-go.
Note: if this refactoring introduces any new conditions, we must make sure that 4.11 snapshot controller clears them to support downgrade! This will need 4.11 BZ + z-stream update!
Similarly, if some conditions become obsolete / not managed by any controller, they must be cleared by 4.12 operator.
Exit criteria:
As HyperShift Cluster Instance Admin, I want to run cluster-csi-snapshot-controller-operator in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumption
Run cluster-storage-operator (CSO) + AWS EBS CSI driver operator + AWS EBS CSI driver control-plane Pods in the management cluster, run the driver DaemonSet in the hosted cluster.
More information here: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
As HyperShift Cluster Instance Admin, I want to run AWS EBS CSI driver operator + control plane of the CSI driver in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
As an OCP support engineer I want the same guest cluster storage-related objects in the output of "hypershift dump cluster --dump-guest-cluster" as in "oc adm must-gather", so I can debug storage issues easily.
must-gather collects: storageclasses, persistentvolumes, volumeattachments, csidrivers, csinodes, volumesnapshotclasses, volumesnapshotcontents
hypershift collects none of this; the relevant code is here: https://github.com/openshift/hypershift/blob/bcfade6676f3c344b48144de9e7a36f9b40d3330/cmd/cluster/core/dump.go#L276
Exit criteria:
As HyperShift Cluster Instance Admin, I want to run cluster-storage-operator (CSO) in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
oc-mirror is a GA product as of OpenShift 4.11.
The goal of this feature is to solve any future customer requests for new features or capabilities in oc-mirror.
RHEL CoreOS should be updated to RHEL 9.2 sources to take advantage of newer features, hardware support, and performance improvements.
Requirement | Notes | isMvp? |
---|---|---|
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
Questions to be addressed:
PROBLEM
We would like to improve our signal for RHEL9 readiness by increasing internal engineering engagement and external partner engagement on our community OpenShift offering, OKD.
PROPOSAL
Adding OKD to run on SCOS (CentOS Stream CoreOS) brings the community offering closer to what a partner or an internal engineering team might expect on OCP.
ACCEPTANCE CRITERIA
Image has been switched/included:
DEPENDENCIES
The SCOS build payload.
RELATED RESOURCES
OKD+SCOS proposal: https://docs.google.com/presentation/d/1_Xa9Z4tSqB7U2No7WA0KXb3lDIngNaQpS504ZLrCmg8/edit#slide=id.p
OKD+SCOS work draft: https://docs.google.com/document/d/1cuWOXhATexNLWGKLjaOcVF4V95JJjP1E3UmQ2kDVzsA/edit
Acceptance Criteria
A stable OKD on SCOS is built and available to the community every sprint.
This comes up when installing ipi-on-aws on arm64 with the custom payload built at quay.io/aleskandrox/okd-release:4.12.0-0.okd-centos9-full-rebuild-arm64, which uses SCOS as the machine-os-content image.
```
[root@ip-10-0-135-176 core]# crictl logs c483c92e118d8
2022-08-11T12:19:39+00:00 [cnibincopy] FATAL ERROR: Unsupported OS ID=scos
```
The probable fix has to land on https://github.com/openshift/cluster-network-operator/blob/master/bindata/network/multus/multus.yaml#L41-L53
HyperShift came to life to serve multiple goals; some are the main near-term goals, and some are secondary ones that serve us well long-term.
HyperShift opens up doors to penetrate the market. HyperShift enables true hybrid (CP and Workers decoupled, mixed IaaS, mixed Arch,...). An architecture that opens up more options to target new opportunities in the cloud space. For more details on this one check: Hosted Control Planes (aka HyperShift) Strategy [Live Document]
To bring hosted control planes to our customers, we need the means to ship it. Today MCE is how HyperShift is shipped and installed so that customers can use it. There are two main customers for hosted control planes:
If you have noticed, MCE is the delivery mechanism for both management models. The difference between managed and self-managed is the consumer persona. For self-managed, it's the customer SRE; for managed, it's the RH SRE.
For us to ship HyperShift in the product (as hosted control planes) in either management model, there is a necessary readiness checklist that we need to satisfy. Below are the high-level requirements needed before GA:
Please also have a look at our What are we missing in Core HyperShift for GA Readiness? doc.
Multi-cluster is becoming an industry need today not because this is where the trend is going but because it’s the only viable path today to solve many of our customers' use-cases. Below is some reasoning why multi-cluster is a NEED:
As a result, multi-cluster management is a defining category in the market where Red Hat plays a key role. Today Red Hat solves for multi-cluster via RHACM and MCE. The goal is to simplify fleet management complexity by providing a single pane of glass to observe, secure, police, govern, configure a fleet. I.e., the operand is no longer one cluster but a set, a fleet of clusters.
HyperShift's logically centralized architecture, as well as its native separation of concerns and superior cluster lifecycle management experience, makes it a great fit as the foundation of our multi-cluster management story.
Thus the following stories are important for HyperShift:
Refs:
HyperShift is the core engine that will be used to provide hosted control-planes for consumption in managed and self-managed.
Main user story: When life cycling clusters as a cluster service consumer via HyperShift core APIs, I want to use a stable/backward compatible API that is less susceptible to future changes so I can provide availability guarantees.
Ref: What are we missing in Core HyperShift for GA Readiness?
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumptions:
HyperShift - proposed cuts from data plane
When operating OpenShift clusters (for any OpenShift form factor) from MCE/ACM/OCM/CLI as a Cluster Service Consumer (RH managed SRE, or self-manage SRE/admin) I want to be able to migrate CPs from one hosting service cluster to another:
More information:
To understand usage patterns and inform our decision making for the product, we need to be able to measure adoption and assess usage.
See Hosted Control Planes (aka HyperShift) Strategy [Live Document]
Whether it's managed or self-managed, it’s pertinent to report health metrics to be able to create meaningful Service Level Objectives (SLOs) and alert on failures to meet our availability guarantees. This is especially important for our managed services path.
https://issues.redhat.com/browse/OCPPLAN-8901
HyperShift for managed services is a strategic company goal as it improves usability, feature, and cost competitiveness against other managed solutions, and because managed services/consumption-based cloud services is where we see the market growing (customers are looking to delegate platform overhead).
We should make sure our SD milestones are unblocked by the core team.
This feature reflects HyperShift core readiness to be consumed. When all related EPICs and stories in this EPIC are complete, HyperShift can be considered ready to be consumed in GA form. This does not describe a date but rather the readiness of core HyperShift to be consumed in GA form, NOT the GA itself.
- GA date for self-managed will be factoring in other inputs such as adoption, customer interest/commitment, and other factors.
- GA dates for ROSA-HyperShift are on track, tracked in milestones M1-7 (have a look at https://issues.redhat.com/browse/OCPPLAN-5771)
Epic Goal*
The goal is to split client certificate trust chains from the global Hypershift root CA.
Why is this important? (mandatory)
This is important to:
Scenarios (mandatory)
Provide details for user scenarios including actions to be performed, platform specifications, and user personas.
Dependencies (internal and external) (mandatory)
Hypershift team needs to provide us with code reviews and merge the changes we are to deliver
Contributing Teams(and contacts) (mandatory)
Acceptance Criteria (optional)
The serviceaccount CA bundle automatically injected to all pods cannot be used to authenticate any client certificate generated by the control-plane.
Drawbacks or Risk (optional)
Risk: there is significant time pressure, as this should be delivered before the first stable Hypershift release
Done - Checklist (mandatory)
AUTH-311 introduced an enhancement. Implement the signer separation described there.
When this image was assembled, these features were not yet completed. Therefore, only the Jira Cards included here are part of this release
We have a set of images
that should become multiarch images. This should be done both in upstream and downstream.
As a reference, we have built those images internally as multiarch and made them available as
They can be consumed by the Assisted Service pod via the following env vars:
```yaml
- name: AGENT_DOCKER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:latest
- name: CONTROLLER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-reporter-rhel8:latest
- name: INSTALLER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-rhel8:latest
```
OLM would have to support a mechanism like podAffinity which allows multiple architecture values to be specified, which enables it to pin operators to worker nodes of a matching architecture.
Ref: https://github.com/openshift/enhancements/pull/1014
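A sketch of the kind of constraint being described, expressed here with standard Kubernetes node affinity over the kubernetes.io/arch label; the exact mechanism OLM will expose is an open design question:

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/arch
          operator: In
          values:            # multiple architecture values can be listed
          - amd64
          - arm64
```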
Cut a new release of the OLM API and update OLM API dependency version (go.mod) in OLM package; then
Bring the upstream changes from OLM-2674 to the downstream olm repo.
A/C:
- New OLM API version release
- OLM API dependency updated in OLM Project
- OLM Subscription API changes downstreamed
- OLM Controller changes downstreamed
- Changes manually tested on Cluster Bot
We drive OpenShift cross-market customer success and new customer adoption with constant improvements and feature additions to the existing capabilities of our OpenShift Core Networking (SDN and Network Edge). This feature captures that natural progression of the product.
There are definitely grey areas, but in general:
Questions to be addressed:
Goal: Provide queryable metrics and telemetry for cluster routes and sharding in an OpenShift cluster.
Problem: Today we test OpenShift performance and scale with best-guess or anecdotal evidence for the number of routes that our customers use. Best practice for a large number of routes in a cluster is to shard; however, we have no visibility into whether and how customers are using sharding.
Why is this important? These metrics will inform our performance and scale testing, documented cluster limits, and how customers are using sharding for best practice deployments.
Dependencies (internal and external):
Prioritized epics + deliverables (in scope / not in scope):
Not in scope:
Estimate (XS, S, M, L, XL, XXL):
Previous Work:
Open questions:
Acceptance criteria:
Epic Done Checklist:
Description:
As described in the Metrics to be sent via telemetry section of the Design Doc, the following metrics need to be sent from the OpenShift cluster to Red Hat premises:
The metrics should be allowlisted on the cluster side.
The steps described in Sending metrics via telemetry need to be followed, specifically step 5.
Depends on CFE-478.
Acceptance Criteria:
Description:
As described in the Design Doc, the following information needs to be exported from the Cluster Ingress Operator:
Design 2 will be implemented as part of this story.
Acceptance Criteria:
This is an epic bucket for all activities surrounding the creation of a declarative approach to releasing and maintaining OLM catalogs.
When working on this Epic, it's important to keep in mind this other potentially related Epic: https://issues.redhat.com/browse/OLM-2276
Jira Description
As an OPM maintainer, I want to downstream the PR for OCP 4.12 and backport it to OCP 4.11 so that IIB will NOT be impacted by the changes when it upgrades the OPM version to use the next/future opm upstream release (v1.25.0).
Summary / Background
IIB (the downstream service that manages the indexes) uses the upstream version, and if they bump the OPM version to the next/future (v1.25.0) release with this change before the downstream images are updated, then the process to manage the indexes downstream will face issues and it will impact the distributions.
Acceptance Criteria
Definition of Ready
Definition of Done
Enhance the veneer rendering to be able to read the input veneer data from stdin via a pipe, in a manner similar to https://dev.to/napicella/linux-pipes-in-golang-2e8j
The command could then be used in a manner similar to many k8s examples, like:
```shell
opm alpha render-veneer semver -o yaml < infile > outfile
```
Upstream issue link: https://github.com/operator-framework/operator-registry/issues/1011
tldr: three basic claims, the rest is explanation and one example
While bugs are an important metric, fixing bugs is different than investing in maintainability and debugability. Investing in fixing bugs will help alleviate immediate problems, but doesn't improve the ability to address future problems. You (may) get a code base with fewer bugs, but when you add a new feature, it will still be hard to debug problems and interactions. This pushes a code base towards stagnation where it gets harder and harder to add features.
One alternative is to ask teams to produce ideas for how they would improve future maintainability and debugability instead of focusing on immediate bugs. This would produce designs that make problem determination, bug resolution, and future feature additions faster over time.
I have a concrete example of one such outcome of focusing on bugs vs quality. We have resolved many bugs about communication failures with ingress by finding problems with point-to-point network communication. We have fixed the individual bugs, but have not improved the code for future debugging. In so doing, we chase many hard-to-diagnose problems across the stack. The alternative is to create a point-to-point network connectivity capability. This would immediately improve bug resolution and stability (detection) for kuryr, ovs, legacy sdn, network-edge, kube-apiserver, openshift-apiserver, authentication, and console. Bug fixing does not produce the same impact.
We need more investment in our future selves. Saying, "teams should reserve this" doesn't seem to be universally effective. Perhaps an approach that directly asks for designs and impacts and then follows up by placing the items directly in planning and prioritizing against PM feature requests would give teams the confidence to invest in these areas and give broad exposure to systemic problems.
Relevant links:
Epic Template descriptions and documentation.
Enable the chaos plugin https://coredns.io/plugins/chaos/ in our CoreDNS configuration so that we can use a DNS query to easily identify what DNS pods are responding to our requests.
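A minimal sketch of a Corefile server block with the chaos plugin enabled, wrapped in the kind of ConfigMap the DNS operator renders (the ConfigMap name and the surrounding plugins are assumptions); the chaos plugin answers CH-class TXT queries such as hostname.bind, which identifies the responding DNS pod:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: dns-default          # assumption: the operator-managed Corefile ConfigMap
  namespace: openshift-dns
data:
  Corefile: |
    .:5353 {
        chaos                # answers CH TXT queries like hostname.bind / version.bind
        errors
        kubernetes cluster.local in-addr.arpa ip6.arpa
        cache 900
    }
```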
Requirement | Notes | isMvp? |
---|---|---|
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
This Section:
This Section: What does the person writing code, testing, documenting need to know? What context can be provided to frame this feature.
Questions to be addressed:
When OCP is performing a cluster upgrade, the user should be notified about this fact.
There are two possibilities for how to surface the cluster upgrade to users:
AC:
Note: We need to decide if we want to distinguish this particular notification by a different color. cc'ing Ali Mobrem
Created from: https://issues.redhat.com/browse/RFE-3024
As a developer, I want to make status.HostIP for Pods visible in the Pod details page of the OCP Web Console. Currently there is no way to view the node IP for a Pod in the OpenShift Web Console. When viewing a Pod in the console, the field status.HostIP is not visible.
Acceptance criteria:
As a console user I want to have the option to:
For Deployments we will add the 'Restart rollout' action button. This action will PATCH the Deployment object's 'spec.template.metadata.annotations' block, by adding 'openshift.io/restartedAt: <actual-timestamp>' annotation. This will restart the deployment, by creating a new ReplicaSet.
For DeploymentConfig we will add 'Retry rollout' action button. This action will PATCH the latest revision of ReplicationController object's 'metadata.annotations' block by setting 'openshift.io/deployment/phase: "New"' and removing openshift.io/deployment.cancelled and openshift.io/deployment.status-reason.
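A sketch of the Deployment merge patch described above (the timestamp value is illustrative); changing the pod template annotations is what triggers the creation of a new ReplicaSet:

```yaml
spec:
  template:
    metadata:
      annotations:
        openshift.io/restartedAt: "2022-08-01T12:00:00Z"   # illustrative timestamp
```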
Acceptance Criteria:
BACKGROUND:
OpenShift console will be updated to allow rollout restart deployment from the console itself.
Currently, from the OpenShift console, for the resource “deploymentconfigs” we can only start and pause the rollout, and for the resource “deployment” we can only resume the rollout. Neither resource (deployment & deployment config) has an option to restart the rollout. That is why the customer wants this functionality, so the same action can be performed from the OpenShift console as well as the CLI.
The customer wants developers who are not fluent with the oc tool and terminal utilities to be able to use the console instead of the terminal to restart a deployment, just like we do through the CLI using the command “oc rollout restart deploy/<deployment-name>“.
Usually when developers change the config map that a deployment uses, they have to restart the pods. Currently, the developers have to use the oc rollout restart deployment command. The customer wants this button/menu so they can perform the same action from the console as well.
Design
Doc: https://docs.google.com/document/d/1i-jGtQGaA0OI4CYh8DH5BBIVbocIu_dxNt3vwWmPZdw/edit
Pre-Work Objectives
Since some of our requirements from the ACM team will not be available for the 4.12 timeframe, the team should work on anything we can get done in the scope of the console repo so that when the required items are available in 4.13, we can be more nimble in delivering GA content for the Unified Console Epic.
Overall GA Key Objective
Providing our customers with a single, simplified user experience (Hybrid Cloud Console) that is extensible, can run locally or in the cloud, and is capable of everything from managing the fleet to deep diving into a single cluster.
Why do customers want this?
Why do we want this?
Phase 2 Goal: Productization of the united Console
As a developer I would like to disable clusters like *KS that we can't support for multi-cluster (for instance because we can't authenticate). The ManagedCluster resource has a vendor label that we can use to know if the cluster is supported.
cc Ali Mobrem Sho Weimer Jakub Hadvig
UPDATE: 9/20/22 : we want an allow-list with OpenShift, ROSA, ARO, ROKS, and OpenShiftDedicated
Acceptance criteria:
The MCO should properly report its state in a way that's consistent and able to be understood by customers, troubleshooters, and maintainers alike.
Some customer cases have revealed scenarios where the MCO state reporting is misleading and therefore could be unreliable to base decisions and automation on.
In addition to correcting some incorrect states, the MCO will be enhanced for a more granular view of update rollouts across machines.
The MCO should properly report its state in a way that's consistent and able to be understood by customers, troubleshooters, and maintainers alike.
For this epic, "state" means "what is the MCO doing?" – so the goal here is to try to make sure that it's always known what the MCO is doing.
This includes:
While this probably crosses a little bit into the "status" portion of certain MCO objects, as some state is definitely recorded there, this probably shouldn't turn into a "better status reporting" epic. I'm interpreting "status" to mean "how is it going" so status is maybe a "detail attached to a state".
Exploration here: https://docs.google.com/document/d/1j6Qea98aVP12kzmPbR_3Y-3-meJQBf0_K6HxZOkzbNk/edit?usp=sharing
https://docs.google.com/document/d/17qYml7CETIaDmcEO-6OGQGNO0d7HtfyU7W4OMA6kTeM/edit?usp=sharing
The current property description is:
configuration represents the current MachineConfig object for the machine config pool.
But in a 4.12.0-ec.4 cluster, the actual semantics seem to be something closer to "the most recent rendered config that we completely leveled on". We should at least update the godocs to be more specific about the intended semantics. And perhaps consider adjusting the semantics?
This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were completed when this image was assembled
This is epic tracks "business as usual" requirements / enhancements / bug fixing of Insights Operator.
Today the links point at a rule-scoped page, but that page lacks information about recommended resolution. You can click through by cluster ID to your specific cluster and get that recommendation advice, but it would be more convenient and less confusing for customers if we linked directly to the cluster-scoped recommendation page.
We can implement by updating the template here to be:
fmt.Sprintf("https://console.redhat.com/openshift/insights/advisor/clusters/%s?first=%s%%7C%s", clusterID, ruleIDStr, rec.ErrorKey)
or something like that.
Unknowns:
The request is clear; the solution/implementation is to be further clarified.
This story only covers API components. We will create a separate story for other utility functions.
Today we are generating documentation for Console's Dynamic Plugin SDK in
frontend/packages/dynamic-plugin-sdk. We are missing ts-doc for a set of hooks and components.
We are generating the markdown from the dynamic-plugin-sdk using
yarn generate-doc
Here is the list of the API that the dynamic-plugin-sdk is exposing:
https://gist.github.com/spadgett/0ddefd7ab575940334429200f4f7219a
Acceptance Criteria:
Out of Scope:
Currently the ConsolePlugins API version is v1alpha1. Since we are going GA with dynamic plugins we should be creating a v1 version.
This would require updates in following repositories:
AC:
NOTE: This story does not include the conversion webhook change which will be created as a follow on story
Move `frontend/public/components/nav` to `packages/console-app/src/components/nav` and address any issues resulting from the move.
There will be some expected lint errors relating to cyclical imports. These will require some refactoring to address.
To align with https://github.com/openshift/dynamic-plugin-sdk, plugin metadata field dependencies as well as the @console/pluginAPI entry contained within should be made optional.
If a plugin doesn't declare the @console/pluginAPI dependency, the Console release version check should be skipped for that plugin.
The console has good error boundary components that are useful for dynamic plugins.
Exposing them will enable the plugins to get the same look and feel of handling React errors as the console.
The minimum requirement right now is to expose the ErrorBoundaryFallbackPage component from
https://github.com/openshift/console/blob/master/frontend/packages/console-shared/src/components/error/fallbacks/ErrorBoundaryFallbackPage.tsx
when defining two proxy endpoints,
```yaml
apiVersion: console.openshift.io/v1alpha1
kind: ConsolePlugin
metadata:
  ...
  name: forklift-console-plugin
spec:
  displayName: Console Plugin Template
  proxy:
    service:
      basePath: /
```
I get two proxy endpoints
/api/proxy/plugin/forklift-console-plugin/forklift-inventory
and
/api/proxy/plugin/forklift-console-plugin/forklift-must-gather-api
but both proxy to the `forklift-must-gather-api` service
e.g.
curl to:
[server url]/api/proxy/plugin/forklift-console-plugin/forklift-inventory
will point to the `forklift-must-gather-api` service, instead of the `forklift-inventory` service
The extension `console.dashboards/overview/detail/item` doesn't constrain the content to fit the card.
The details-card has an expectation that a <dd> item will be the last item (for spacing between items). Our static details-card items use a component called 'OverviewDetailItem'. This isn't enforced in the extension and can cause undesired padding issues if they just do whatever they want.
I feel our approach here should be making the extension take the props of 'OverviewDetailItem' where 'children' is the new 'component'.
We should have a global notification or the `Console plugins` page (e.g., k8s/cluster/operator.openshift.io~v1~Console/cluster/console-plugins) should alert users when console operator `spec.managementState` is `Unmanaged` as changes to `enabled` for plugins will have no effect.
We neither use nor support static plugin nav extensions anymore so we should remove the API in the static plugin SDK and get rid of related cruft in our current nav components.
AC: Remove static plugin nav extensions code. Check the navigation code for any references to the old API.
Following https://coreos.slack.com/archives/C011BL0FEKZ/p1650640804532309, it would be useful for us (network observability team) to have access to ResourceIcon in dynamic-plugin-sdk.
Currently ResourceLink is exported but not ResourceIcon
AC:
`@openshift-console/plugin-shared` (NPM) is a package that will contain shared components that can be upversioned separately by the Plugins so they can keep core compatibility low but upversion and support more shared components as we need them.
This isn't documented today. We need to do that.
During the development of https://issues.redhat.com/browse/CONSOLE-3062, it was determined additional information is needed in order to assist a user when troubleshooting a Failed plugin (see https://github.com/openshift/console/pull/11664#issuecomment-1159024959). As it stands today, there is no data available to the console to relay to the user regarding why the plugin Failed. Presumably, a message should be added to NotLoadedDynamicPlugin to address this gap.
AC: Add `message` property to NotLoadedDynamicPluginInfo type.
Acceptance Criteria: Add missing API docs for *Icon and *Status components in the API docs
Based on API review CONSOLE-3145, we have decided to deprecate the following APIs:
cc Andrew Ballantyne Bryan Florkiewicz
Currently our `api.md` does not generate docs with "tags" (aka `@deprecated`) – we'll need to add that functionality to the `generate-doc.ts` script. See the code that works for `console-extensions.md`
This enhancement Introduces support for provisioning and upgrading heterogenous architecture clusters in phases.
We need to scan through the compute nodes and build a set of supported architectures from those. Each node on the cluster has a label for architecture, e.g. `kubernetes.io/arch:arm64`, `kubernetes.io/arch:amd64` etc. Based on the set of supported architectures, the console will need to surface only those operators in the Operator Hub which are supported on our nodes. Each operator's PackageManifest contains labels that indicate the operator's supported architectures, e.g. `operatorframework.io/arch.s390x: supported`. An operator can be supported on multiple architectures.
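For illustration, the two label shapes involved look roughly like this (architecture values are examples):

```yaml
# Node labels used to build the cluster's set of supported architectures
metadata:
  labels:
    kubernetes.io/arch: arm64
---
# PackageManifest labels declaring which architectures an operator supports
metadata:
  labels:
    operatorframework.io/arch.arm64: supported
    operatorframework.io/arch.amd64: supported
```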
AC:
OS and arch filtering: https://github.com/openshift/console/blob/2ad4e17d76acbe72171407fc1c66ca4596c8aac4/frontend/packages/operator-lifecycle-manager/src/components/operator-hub/operator-hub-items.tsx#L49-L86
@jpoulin is good to ask about heterogeneous clusters.
This enhancement Introduces support for provisioning and upgrading heterogenous architecture clusters in phases.
We need to scan through the compute nodes and build a set of supported architectures from those. Each node on the cluster has a label for architecture: e.g. kubernetes.io/arch=arm64, kubernetes.io/arch=amd64 etc. Based on the set of supported architectures console will need to surface only those operators in the Operator Hub, which are supported on our Nodes.
AC:
@jpoulin is good to ask about heterogeneous clusters.
An epic we can duplicate for each release to ensure we have a place to catch things we ought to be doing regularly but can tend to fall by the wayside.
As a developer, I want to be able to clean up the css markup after making the css / scss changes required for dark mode and remove any old unused css / scss content.
Acceptance criteria:
As a user, I want to be able to:
so that I can achieve
Description of criteria:
Detail about what is specifically not being delivered in the story
1. Proposed title of this feature request
Basic authentication for Helm Chart repository in helmchartrepositories.helm.openshift.io CRD.
2. What is the nature and description of the request?
As of v4.6.9, the HelmChartRepository CRD only supports client TLS authentication through spec.connectionConfig.tlsClientConfig.
3. Why do you need this? (List the business requirements here)
Basic authentication is widely used by many chart repository managers (Nexus OSS, Artifactory, etc.)
Helm CLI also supports them with the helm repo add command.
https://helm.sh/docs/helm/helm_repo_add/
4. How would you like to achieve this? (List the functional requirements here)
Probably by extending the CRD:
```yaml
spec:
  connectionConfig:
    username: username
    password:
      secretName: secret-name
```
The secret namespace should be openshift-config to align with the tlsClientConfig behavior.
5. For each functional requirement listed in question 4, specify how Red Hat and the customer can test to confirm the requirement is successfully implemented.
Trying to pull helm charts from remote private chart repositories that have disabled anonymous access and offer basic authentication.
E.g.: https://github.com/sonatype/docker-nexus
As an OCP user I would like to be able to install helm charts from repos added to ODC with basic authentication fields populated
We need to support helm installs for Repos that have the basic authentication secret name and namespace.
Updating the ProjectHelmChartRepository CRD, already done in a different story
Supporting the HelmChartRepository CR, this feature will be scoped first to project/namespace-scoped repos.
If the new fields for basic auth are set in the repo CR, then use those credentials when making API calls to helm to install/upgrade charts. We will error out if the logged-in user does not have access to the secret referenced by the repo CR. If the basic auth fields are not present, we assume it is not an authenticated repo.
None
NA
I can list, install and update charts on authenticated repos from ODC
Needs Documentation both upstream and downstream
Needs new unit test covering repo auth
Dependencies identified
Blockers noted and expected delivery timelines set
Design is implementable
Acceptance criteria agreed upon
Story estimated
Unknown
Verified
Unsatisfied
ACCEPTANCE CRITERIA
NOTES
ACCEPTANCE CRITERIA
NOTES
This is a follow-up Epic to https://issues.redhat.com/browse/MCO-144, which aimed to get in-place upgrades for Hypershift. This epic aims to capture additional work focused on bringing CoreOS/OCP layering into Hypershift, which has benefits such as:
- removing or reducing the need for ignition
- maintaining feature parity between self-driving and managed OCP models
- adding additional functionality such as hotfixes
Right now in https://github.com/openshift/hypershift/pull/1258 you can only perform one upgrade at a time. Multiple upgrades will break due to controller logic
Properly create logic to handle manifest creation/updates and deletion, so the logic is more bulletproof
Currently not implemented, and will require the MCD hypershift mode to be adjusted to handle disruptionless upgrades like regular MCD
We plan to build Ironic Container Images using RHEL9 as base image in OCP 4.12
This is required because the ironic components have abandoned support for CentOS Stream 8 and Python 3.6/3.7 upstream during the most recent development cycle that will produce the stable Zed release, in favor of CentOS Stream 9 and Python 3.8/3.9
More info on RHEL8 to RHEL9 transition in OCP can be found at https://docs.google.com/document/d/1N8KyDY7KmgUYA9EOtDDQolebz0qi3nhT20IOn4D-xS4
update ironic software to pick up latest bug fixes
This is an API change and we will consider this as a feature request.
https://issues.redhat.com/browse/NE-799 Please check this for more details
https://issues.redhat.com/browse/NE-799 Please check this for more details
No
N/A
Make sure that the CSI driver automatically updates oVirt credentials when they are updated in OpenShift.
In the CSI driver operator we should add the
withSecretHashAnnotation
call from library-go like this: https://github.com/openshift/aws-ebs-csi-driver-operator/blob/53ed27b2a0eaa655338da180a79897855b366ac7/pkg/operator/starter.go#L138
We need tests for the ovirt-csi-driver and the cluster-api-provider-ovirt. These tests help us to
Also, having dedicated tests on lower levels with a smaller scope (unit, integration, ...) has the following benefits:
Integration tests need to be implemented according to https://cluster-api.sigs.k8s.io/developer/testing.html#integration-tests using envtest.
As a user, I would like to be informed in an intuitive way, when quotas have been reached in a namespace
Refer below for more details
As a user, In the topology view, I would like to be updated intuitively if any of the deployments have reached quota limits
Refer below for more details
Provide a form driven experience to allow cluster admins to manage the perspectives to meet the ACs below.
We have heard the following requests from customers and developer advocates:
As an admin, I want to hide the admin perspective for non-privileged users or hide the developer perspective for all users
Based on the https://issues.redhat.com/browse/ODC-6730 enhancement proposal, it is required to extend the console configuration CRD to enable the cluster admins to configure this data in the console resource
Previous customization work:
As an admin, I want to hide user perspective(s) based on the customization.
As an admin, I should be able to see a code snippet that shows how to add user perspectives
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, the cluster admin can add user perspectives
To support the cluster-admin to configure the perspectives correctly, the developer console should provide a code snippet for the customization of yaml resource (Console CRD).
Customize Perspective Enhancement PR: https://github.com/openshift/enhancements/pull/1205
Previous work:
As an admin, I want to be able to use a form driven experience to hide user perspective(s)
Customers don't want their users to have access to some/all of the items which are available in the Developer Catalog. The request is to change access for the cluster, not per user or persona.
Provide a form driven experience to allow cluster admins easily disable the Developer Catalog, or one or more of the sub catalogs in the Developer Catalog.
Multiple customer requests.
We need to consider how this will work with subcatalogs which are installed by operators: VMs, Event Sources, Event Catalogs, Managed Services, Cloud based services
As an admin, I want to hide/disable access to specific sub-catalogs in the developer catalog or the complete dev catalog for all users across all namespaces.
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, it is required to extend the console configuration CRD to enable the cluster admins to configure this data in the console resource
Extend the "customization" spec type definition for the CRD in the openshift/api project
Previous customization work:
As an admin, I want to hide sub-catalogs in the developer catalog or hide the developer catalog completely based on the customization.
As a cluster-admin, I should be able to see a code snippet that shows how to enable sub-catalogs or the entire dev catalog.
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, the cluster admin can add sub-catalog(s) from the Developer Catalog or the Dev catalog as a whole.
To support the cluster-admin to configure the sub-catalog list correctly, the developer console should provide a code snippet for the customization yaml resource (Console CRD).
Previous work:
As an admin, I would like openshift-* namespaces with an operator to be labeled with security.openshift.io/scc.podSecurityLabelSync=true to ensure the continual functioning of operators without manual intervention. The label should only be applied to openshift-* namespaces with an operator (the presence of a ClusterServiceVersion resource) IF the label is not already present. This automation will help smooth functioning of the cluster and avoid frivolous operational events.
Context: As part of the PSA migration period, Openshift will ship with the "label sync'er" - a controller that will automatically adjust PSA security profiles in response to the workloads present in the namespace. We can assume that not all operators (produced by Red Hat, the community or ISVs) will have successfully migrated their deployments in response to upstream PSA changes. The label sync'er will sync, by default, any namespace not prefixed with "openshift-"; for namespaces prefixed with "openshift-", an explicit label (security.openshift.io/scc.podSecurityLabelSync=true) is required for sync.
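A sketch of the label as OLM would apply it to a qualifying namespace (the namespace name is hypothetical):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-example-operator                          # hypothetical namespace containing a CSV
  labels:
    security.openshift.io/scc.podSecurityLabelSync: "true"
```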
A/C:
- OLM operator has been modified (downstream only) to label any unlabelled "openshift-" namespace in which a CSV has been created
- If a labeled namespace containing at least one non-copied csv becomes unlabelled, it should be relabelled
- The implementation should be done in a way to eliminate or minimize subsequent downstream sync work (it is ok to make slight architectural changes to the OLM operator in the upstream to enable this)
As an SRE, I want the hypershift operator to expose a metric when the hosted control plane is ready.
This should allow SRE to tune (or silence) alerts occurring while the hosted control plane is spinning up.
The Kube APIServer has a sidecar to output audit logs. We need similar sidecars for other APIServers that run on the control plane side. We also need to pass the same audit log policy that we pass to the KAS to these other API servers.
This epic tracks network tooling improvements for 4.12
A new framework and process should be developed to make sharing network tools with devs, support and customers convenient. We are going to add some tools for ovn troubleshooting before ovn-k goes default, also some tools that we got from customer cases, and some more to help analyze and debug collected logs based on the stable must-gather/sosreport format we get now thanks to the 4.11 Epic.
Our estimation for this Epic is 1 engineer * 2 Sprints
WHY:
This epic is important to help improve the time it takes our customers and our team to understand an issue within the cluster.
A focus of this epic is to develop tools to quickly allow debugging of a problematic cluster. This is crucial for the engineering team to help us scale. We want to provide a tool to our customers to help lower the cognitive burden to get at a root cause of an issue.
Alert if any of the ovn controllers is disconnected from the southbound database for a period of time, using the metric ovn_controller_southbound_database_connected.
The metric updates every 2 minutes so please be mindful of this when creating the alert.
If the controller is disconnected for 10 minutes, fire an alert.
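A minimal sketch of such an alerting rule (the rule, alert, and namespace names are assumptions; max_over_time is used so the 10-minute window tolerates the 2-minute update interval of the metric):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: ovn-controller-disconnected     # hypothetical
  namespace: openshift-ovn-kubernetes
spec:
  groups:
  - name: ovn-kubernetes.rules
    rules:
    - alert: OVNControllerSouthboundDisconnected
      expr: max_over_time(ovn_controller_southbound_database_connected[5m]) == 0
      for: 10m
      labels:
        severity: warning
      annotations:
        summary: ovn-controller has been disconnected from the southbound database for more than 10 minutes.
```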
DoD: Merged to CNO and tested by QE
Add a SOCKS proxy to cluster-network-operator so EgressIP can use gRPC to reach worker nodes.
With the introduction of gRPC as the means for determining the state of a given egress node, hypershift should be able to leverage the SOCKS proxy to know the state of each egress node.
References relevant to this work:
1281-network-proxy
https://coreos.slack.com/archives/C01C8502FMM/p1658427627751939
https://github.com/openshift/hypershift/pull/1131/commits/28546dc587dc028dc8bded715847346ff99d65ea
This Epic is here to track the rebase we need to do when kube 1.25 is GA https://www.kubernetes.dev/resources/release/
Keeping this in mind can help us plan our time better. At the time of writing, GA is planned for August 23.
https://docs.google.com/document/d/1h1XsEt1Iug-W9JRheQas7YRsUJ_NQ8ghEMVmOZ4X-0s/edit --> this is the link for rebase help
We need to rebase cloud network config controller to 1.25 when the kube 1.25 rebase lands.
This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were not completed when this image was assembled
Placeholder epic to track spontaneous tasks which do not deserve their own epic.
Once the HostedCluster and NodePool are paused using the PausedUntil statement, the awsprivatelink controller will continue reconciling.
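For reference, pausing is driven by the pausedUntil field on the HostedCluster and NodePool specs; a minimal excerpt (the timestamp is illustrative):

```yaml
# HostedCluster (excerpt)
spec:
  pausedUntil: "2022-12-31T00:00:00Z"   # illustrative; reconciliation should stay paused until then
```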
How to test this:
AC:
We have a connectDirectlyToCloudAPIs flag in the konnectivity socks5 proxy to dial directly to cloud providers without going through konnectivity.
This introduces another path for exceptions: https://github.com/openshift/hypershift/pull/1722
We should consolidate both by continuing to use connectDirectlyToCloudAPIs until there's a reason not to.
AWS has a hard limit of 100 OIDC providers globally.
Currently each HostedCluster created by e2e creates its own OIDC provider, which results in hitting the quota limit frequently and causing the tests to fail as a result.
DOD:
Only a single OIDC provider should be created and shared between all e2e HostedClusters.
DoD:
At the moment, if the input etcd KMS encryption (key and role) is invalid, we fail without surfacing the problem.
We should check that both the key and role are compatible/operational for a given cluster and otherwise report the failure in a condition.
Changes made in METAL-1 open up opportunities to improve our handling of images by cleaning up redundant code that generates extra work for the user and extra load for the cluster.
We only need to run the image cache DaemonSet if there is a QCOW URL to be mirrored (effectively this means a cluster installed with 4.9 or earlier). We can stop deploying it for new clusters installed with 4.10 or later.
Currently, the image-customization-controller relies on the image cache running on every master to provide the shared hostpath volume containing the ISO and initramfs. The first step is to replace this with a regular volume and an init container in the i-c-c pod that extracts the images from machine-os-images. We can use the copy-metal -image-build flag (instead of -all used in the shared volume) to provide only the required images.
Once i-c-c has its own volume, we can switch the image extraction in the metal3 Pod's init container to use the -pxe flag instead of -all.
The machine-os-images init container for the image cache (not the metal3 Pod) can be removed. The whole image cache deployment is now optional and need only be started if provisioningOSDownloadURL is set (and in fact should be deleted if it is not).
Description of the problem:
When running assisted-installer on a machine where there is more than one volume group per physical volume, only the first volume group will be cleaned up. This leads to problems later and will lead to errors such as:
Failed - failed executing nsenter [--target 1 --cgroup --mount --ipc --pid -- pvremove /dev/sda -y -ff], Error exit status 5, LastOutput "Can't open /dev/sda exclusively. Mounted filesystem?
How reproducible:
Set up a VM with more than one volume group per physical volume. As an example, look at the following sample from a customer cluster.
```
List block devices /usr/bin/lsblk -o NAME,MAJ:MIN,SIZE,TYPE,FSTYPE,KNAME,MODEL,UUID,WWN,HCTL,VENDOR,STATE,TRAN,PKNAME
NAME MAJ:MIN SIZE TYPE FSTYPE KNAME MODEL UUID WWN HCTL VENDOR STATE TRAN PKNAME
loop0 7:0 125.9G loop xfs loop0 c080b47b-2291-495c-8cc0-2009ebc39839
loop1 7:1 885.5M loop squashfs loop1
sda 8:0 894.3G disk sda INTEL SSDSC2KG96 0x55cd2e415235b2db 1:0:0:0 ATA running sas
|-sda1 8:1 250M part sda1 0x55cd2e415235b2db sda
|-sda2 8:2 750M part ext2 sda2 3aa73c72-e342-4a07-908c-a8a49767469d 0x55cd2e415235b2db sda
|-sda3 8:3 49G part xfs sda3 ffc3ccfe-f150-4361-8ae5-f87b17c13ac2 0x55cd2e415235b2db sda
|-sda4 8:4 394.2G part LVM2_member sda4 Ua3HOc-Olm4-1rma-q0Ug-PtzI-ZOWg-RJ63uY 0x55cd2e415235b2db sda
`-sda5 8:5 450G part LVM2_member sda5 W8JqrD-ZvaC-uNK9-Y03D-uarc-Tl4O-wkDdhS 0x55cd2e415235b2db sda
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sda5
sdb 8:16 894.3G disk sdb INTEL SSDSC2KG96 0x55cd2e415235b31b 1:0:1:0 ATA running sas
`-sdb1 8:17 894.3G part LVM2_member sdb1 6ETObl-EzTd-jLGw-zVNc-lJ5O-QxgH-5wLAqD 0x55cd2e415235b31b sdb
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdb1
sdc 8:32 894.3G disk sdc INTEL SSDSC2KG96 0x55cd2e415235b652 1:0:2:0 ATA running sas
`-sdc1 8:33 894.3G part LVM2_member sdc1 pBuktx-XlCg-6Mxs-lddC-qogB-ahXa-Nd9y2p 0x55cd2e415235b652 sdc
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdc1
sdd 8:48 894.3G disk sdd INTEL SSDSC2KG96 0x55cd2e41521679b7 1:0:3:0 ATA running sas
`-sdd1 8:49 894.3G part LVM2_member sdd1 exVSwU-Pe07-XJ6r-Sfxe-CQcK-tu28-Hxdnqo 0x55cd2e41521679b7 sdd
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdd1
sr0 11:0 989M rom iso9660 sr0 Virtual CDROM0 2022-06-17-18-18-33-00 0:0:0:0 AMI running usb
```
Now run the assisted installer and try to install an SNO node on this machine; the installation will fail with a message indicating that it could not exclusively access /dev/sda.
Actual results:
The installation will fail with a message that indicates that it could not exclusively access /dev/sda
Expected results:
The installation should proceed and the cluster should start to install.
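For reference, a minimal sketch of the kind of cleanup the fix implies (purely illustrative; /dev/sda is an example disk and this is not the assisted-installer's actual code): remove every volume group that has a physical volume on the disk before wiping the physical volumes themselves.

# Illustrative only: clean up all VGs backed by the target disk, then the PVs.
for vg in $(pvs --noheadings -o vg_name,pv_name | awk '$2 ~ /^\/dev\/sda/ {print $1}' | sort -u); do
  vgremove -y "$vg"
done
for pv in $(pvs --noheadings -o pv_name | awk '$1 ~ /^\/dev\/sda/ {print $1}'); do
  pvremove -y -ff "$pv"
done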
Suspected Cases
https://issues.redhat.com/browse/AITRIAGE-3809
https://issues.redhat.com/browse/AITRIAGE-3802
https://issues.redhat.com/browse/AITRIAGE-3810
Description of the problem:
Cluster installation fails if the installation disk has LVM on RAID:
Host: test-infra-cluster-3cc862c9-master-0, reached installation stage Failed: failed executing nsenter [--target 1 --cgroup --mount --ipc --pid -- mdadm --stop /dev/md0], Error exit status 1, LastOutput "mdadm: Cannot get exclusive access to /dev/md0:Perhaps a running process, mounted filesystem or active volume group?"
How reproducible:
100%
Steps to reproduce:
1. Install a cluster while master nodes has disk with LVM on RAID (reproduces using test: https://gitlab.cee.redhat.com/ocp-edge-qe/kni-assisted-installer-auto/-/blob/master/api_tests/test_disk_cleanup.py#L97)
Actual results:
Installation failed
Expected results:
Installation success
Same thing as we've had in assisted-service: we sometimes fail to install golangci-lint by fetching release artifacts from GitHub directly. That's usually because the same IP address (the CI build cluster) accesses GitHub at a high rate, leading to 429 (Too Many Requests) responses.
The way we fixed it for assisted-service was to change the installation to use a quay.io image that already contains the binary.
Example for such a failure: https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_release/30788/rehearse-30788-periodic-ci-openshift-assisted-installer-agent-release-ocm-2.6-subsystem-test-periodic/1551879759036682240
Filter for all recent failures: https://search.ci.openshift.org/?search=golangci%2Fgolangci-lint+crit+unable+to+find&maxAge=168h&context=1&type=build-log&name=.*assisted.*&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job
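A minimal sketch of the quay.io-based approach (the image path and tag are placeholders, not the actual image used by assisted-service): copy the binary out of a prebuilt container image instead of hitting the GitHub release endpoint.

# Placeholder image reference; the point is avoiding GitHub rate limits.
podman create --name golangci-lint-tmp quay.io/<org>/golangci-lint:<tag>
podman cp golangci-lint-tmp:/usr/bin/golangci-lint ./bin/golangci-lint
podman rm golangci-lint-tmp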
Section 5 of PRD: https://docs.google.com/document/d/1fF-Ajdzc9EDDg687FzTrX577hvY9NdK0/edit#heading=h.gjdgxs
Testing and collaboration with NVIDIA: https://docs.google.com/spreadsheets/d/1LHY-Af-2kQHVwtW4aVdHnmwZLTiatiyf-ySffC8O5NM/edit#gid=0
Deploying Nvidia Patches: https://docs.google.com/document/d/1yR4lphjPKd6qZ9sGzZITl0wH1r4ykfMKPjUnlzvWji4/edit#
This is the continuation of https://issues.redhat.com/browse/NHE-273, but now the focus is on the remaining flows.
Description of problem:
check_pkt_length cannot be offloaded without 1) sFlow offload patches in Open vSwitch and 2) hardware driver support. Since 1) will not be done anytime soon, we need a workaround for the check_pkt_length issue.
Version-Release number of selected component (if applicable):
4.11/4.12
How reproducible:
Always
Steps to Reproduce:
1. Any flow that has check_pkt_len():
5-b: Pod -> NodePort Service traffic (Pod Backend - Different Node)
6-b: Pod -> NodePort Service traffic (Host Backend - Different Node)
4-b: Pod -> Cluster IP Service traffic (Host Backend - Different Node)
10-b: Host Pod -> Cluster IP Service traffic (Host Backend - Different Node)
11-b: Host Pod -> NodePort Service traffic (Pod Backend - Different Node)
12-b: Host Pod -> NodePort Service traffic (Host Backend - Different Node)
Actual results:
Poor performance due to upcalls when check_pkt_len() is not supported.
Expected results:
Good performance.
Additional info:
https://docs.google.com/spreadsheets/d/1LHY-Af-2kQHVwtW4aVdHnmwZLTiatiyf-ySffC8O5NM/edit#gid=670206692
As a developer building container images on OpenShift
I want to specify that my build should run without elevated privileges
So that builds do not run with elevated privileges (as root from the host's perspective)
No QE required for Dev Preview. OpenShift regression testing will verify that existing behavior is not impacted.
We will need to document how to enable this feature, with sufficient warnings regarding Dev Preview.
This likely warrants an OpenShift blog post.
We have been running into a number of problems with configure-ovs and nodeip-configuration selecting different interfaces in OVNK deployments. This causes connectivity issues, so we need some way to ensure that everything uses the same interface/IP.
Currently configure-ovs runs before nodeip-configuration, but since nodeip-configuration is the source of truth for IP selection regardless of CNI plugin, I think we need to look at swapping that order. That way configure-ovs could look at what nodeip-configuration chose and not have to implement its own interface selection logic.
I'm targeting this at 4.12 because even though there's probably still time to get it in for 4.11, changing the order of boot services is always a little risky and I'd prefer to do it earlier in the cycle so we have time to tease out any issues that arise. We may need to consider backporting the change though since this has been an issue at least back to 4.10.
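A hedged sketch of what the swap could look like at the systemd level (the unit names below are assumptions, and the real change would be made in the templates that ship these units rather than as a drop-in on a running node):

# Hypothetical drop-in making the OVS configuration step wait for node IP selection.
mkdir -p /etc/systemd/system/ovs-configuration.service.d
cat > /etc/systemd/system/ovs-configuration.service.d/10-after-nodeip.conf <<'EOF'
[Unit]
After=nodeip-configuration.service
Wants=nodeip-configuration.service
EOF
systemctl daemon-reload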
Goal
Provide an indication that advanced features are used
Problem
Today, customers and RH don't have the information on the actual usage of advanced features.
Why is this important?
Prioritized Scenarios
In Scope
1. Add a boolean variable in our telemetry to mark if the customer is using advanced features (PV encryption, encryption with KMS, external mode).
Not in Scope
Integrate with subscription watch - will be done by the subscription watch team with our help.
Customers
All
Customer Facing Story
As a compliance manager, I should be able to easily see whether all my clusters are using the right number of subscriptions
What does success look like?
A clear indication in subscription watch for ODF usage (either essential or advanced).
Link to main epic: https://issues.redhat.com/browse/RHSTOR-3173
We migrated most component as part of https://issues.redhat.com/browse/RHSTOR-2165
We now have a few components remaining, roughly 15 to 20%. This epic targets:
1) Add support for in-tree modal launcher
This section includes Jira cards that are not linked to either an Epic or a Feature. These tickets were completed when this image was assembled
Description of problem:
We got feedback from the support team that it is confusing to see a switch in the Notifications column for an alerting rule that has no alerts associated with it, since the user cannot silence the alerting rule.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. oc apply -f https://gist.githubusercontent.com/vikram-raj/727629797eb9d9bfcfa2721cae2ade86/raw/7c2305e14115a1a4f4f88ebb74cdad32cbec4132/Alerting%2520rule%2520without%2520alert
2. Navigate to the Developer perspective: Observe -> Alerts
3. Try to silence the VersionAlert alerting rule; nothing will happen
Actual results:
Silencing the alerting rule using the switch does nothing
Expected results:
No switch for silencing the alerting rule should be visible if no alerts are associated with it.
Additional info:
Description of problem:
InstanceMetadataTags are not supported in AWS C2S regions (us-iso-x)
Version-Release number of selected component (if applicable):
How reproducible:
always
Steps to Reproduce:
1. OCP 4.11 IPI installation on AWS C2S regions
Actual results:
Expected results:
Additional info:
Actual Error: "Error launching resource Instance. Unsupported Operation Specifying InstanceMetadataTags is not yet supported" There is a related fix on upstream: resource/aws_instance: Handle regions where instance metadata tags are unsupported https://github.com/hashicorp/terraform-provider-aws/pull/26631
Description of problem:
The log message for acquiring the node lock is malformed; the format placeholders are not expanded and the node name and IP run together: acquiring node lock for assigning ip address, node: %s, ip: %sci-ln-g470i52-1d09d-slz7m-worker-westus-6wt7k10.0.128.102
Description of problem:
The TestReloadInterval E2E test has incorrect validations: the minimum value should be 1s, not 5s. There is also a race condition that allows these tests to sometimes pass due to the last test condition. Therefore, failures in CI are actually correct, and successes are wrong based on the E2E conditions.
Version-Release number of selected component (if applicable):
4.12
How reproducible:
50%
Steps to Reproduce:
1.Run TestReloadInterval E2E test (make test-e2e TEST=TestReloadInterval)
Actual results:
Sometimes fails on 5us test case: reloadinterval_test.go:106: router deployment not updated with RELOAD_INTERVAL=5s: timed out waiting for the condition
Expected results:
Should pass E2E
Additional info:
Description of problem:
release-4.12 of openshift/cloud-provider-openstack is missing some commits that were backported in the upstream project into the release-1.25 branch. We should import them into our downstream fork.
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
Description of problem:
It is a disconnected cluster on AWS. There is an issue configuring Egress IP where the cluster uses STS. Looking into the cloud-network-config-controller pod, it is trying to connect to the global STS service "https://sts.amazonaws.com/" when it should connect to the regional endpoint "https://sts.ap-southeast-1.amazonaws.com".
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Create a disconnected OCP cluster on AWS.
$ oc get netnamespace | grep egress egress-ip-test 2689387 ["172.16.1.24"]
$ oc get hostsubnet
NAME HOST HOST IP SUBNET EGRESS CIDRS EGRESS IPS
ip-172-16-1-151.ap-southeast-1.compute.internal ip-172-16-1-151.ap-southeast-1.compute.internal 172.16.1.151 10.130.0.0/23
ip-172-16-1-53.ap-southeast-1.compute.internal ip-172-16-1-53.ap-southeast-1.compute.internal 172.16.1.53 10.131.0.0/23 ["172.16.1.24"]
ip-172-16-2-15.ap-southeast-1.compute.internal ip-172-16-2-15.ap-southeast-1.compute.internal 172.16.2.15 10.128.0.0/23
ip-172-16-2-77.ap-southeast-1.compute.internal ip-172-16-2-77.ap-southeast-1.compute.internal 172.16.2.77 10.128.2.0/23
ip-172-16-3-111.ap-southeast-1.compute.internal ip-172-16-3-111.ap-southeast-1.compute.internal 172.16.3.111 10.129.0.0/23
ip-172-16-3-79.ap-southeast-1.compute.internal ip-172-16-3-79.ap-southeast-1.compute.internal 172.16.3.79 10.129.2.0/23
$ oc logs sdn-controller-6m5kb -n openshift-sdn I0922 04:09:53.348615 1 vnids.go:105] Allocated netid 2689387 for namespace "egress-ip-test" E0922 04:24:00.682018 1 egressip.go:254] Ignoring invalid HostSubnet ip-172-16-1-53.ap-southeast-1.compute.internal (host: "ip-172-16-1-53.ap-southeast-1.compute.internal", ip: "172.16.1.53", subnet: "10.131.0.0/23"): related node object "ip-172-16-1-53.ap-southeast-1.compute.internal" has an incomplete annotation "cloud.network.openshift.io/egress-ipconfig", CloudEgressIPConfig: <nil>
$ oc logs cloud-network-config-controller-5c7556db9f-x78bs -n openshift-cloud-network-config-controller E0922 04:26:59.468726 1 controller.go:165] error syncing 'ip-172-16-2-77.ap-southeast-1.compute.internal': error retrieving the private IP configuration for node: ip-172-16-2-77.ap-southeast-1.compute.internal, err: error: cannot list ec2 instance for node: ip-172-16-2-77.ap-southeast-1.compute.internal, err: WebIdentityErr: failed to retrieve credentials caused by: RequestError: send request failed caused by: Post "https://sts.amazonaws.com/": dial tcp 54.239.29.25:443: i/o timeout, requeuing in node workqueue
$ oc get Infrastructure -o yaml
apiVersion: v1
items:
- apiVersion: config.openshift.io/v1
  kind: Infrastructure
  metadata:
    creationTimestamp: "2022-09-22T03:28:15Z"
    generation: 1
    name: cluster
    resourceVersion: "598"
    uid: 994da301-2a96-43b7-b43b-4b7c18d4b716
  spec:
    cloudConfig:
      name: ""
    platformSpec:
      aws:
        serviceEndpoints:
        - name: sts
          url: https://sts.ap-southeast-1.amazonaws.com
        - name: ec2
          url: https://ec2.ap-southeast-1.amazonaws.com
        - name: elasticloadbalancing
          url: https://elasticloadbalancing.ap-southeast-1.amazonaws.com
      type: AWS
  status:
    apiServerInternalURI: https://api-int.openshiftyy.ocpaws.sadiqueonline.com:6443
    apiServerURL: https://api.openshiftyy.ocpaws.sadiqueonline.com:6443
    controlPlaneTopology: HighlyAvailable
    etcdDiscoveryDomain: ""
    infrastructureName: openshiftyy-wfrpf
    infrastructureTopology: HighlyAvailable
    platform: AWS
    platformStatus:
      aws:
        region: ap-southeast-1
        serviceEndpoints:
        - name: ec2
          url: https://ec2.ap-southeast-1.amazonaws.com
        - name: elasticloadbalancing
          url: https://elasticloadbalancing.ap-southeast-1.amazonaws.com
        - name: sts
          url: https://sts.ap-southeast-1.amazonaws.com
      type: AWS
kind: List
metadata:
  resourceVersion: ""
$ oc get secret aws-cloud-credentials -n openshift-machine-api -o json |jq -r .data.credentials |base64 -d
[default]
sts_regional_endpoints = regional
role_arn = arn:aws:iam::015719942846:role/sputhenp-sts-yy-openshift-machine-api-aws-cloud-credentials
web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token
[ec2-user@ip-172-17-1-229 ~]$ oc get secret cloud-credential-operator-iam-ro-creds -n openshift-cloud-credential-operator -o json |jq -r .data.credentials |base64 -d
[default]
sts_regional_endpoints = regional
role_arn = arn:aws:iam::015719942846:role/sputhenp-sts-yy-openshift-cloud-credential-operator-cloud-creden
web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token
[ec2-user@ip-172-17-1-229 ~]$ oc get secret installer-cloud-credentials -n openshift-image-registry -o json |jq -r .data.credentials |base64 -d
[default]
sts_regional_endpoints = regional
role_arn = arn:aws:iam::015719942846:role/sputhenp-sts-yy-openshift-image-registry-installer-cloud-credent
web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token
[ec2-user@ip-172-17-1-229 ~]$ oc get secret cloud-credentials -n openshift-ingress-operator -o json |jq -r .data.credentials |base64 -d
[default]
sts_regional_endpoints = regional
role_arn = arn:aws:iam::015719942846:role/sputhenp-sts-yy-openshift-ingress-operator-cloud-credentials
web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token
[ec2-user@ip-172-17-1-229 ~]$ oc get secret cloud-credentials -n openshift-cloud-network-config-controller -o json |jq -r .data.credentials |base64 -d
[default]
sts_regional_endpoints = regional
role_arn = arn:aws:iam::015719942846:role/sputhenp-sts-yy-openshift-cloud-network-config-controller-cloud-
web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token
[ec2-user@ip-172-17-1-229 ~]$ oc get secret ebs-cloud-credentials -n openshift-cluster-csi-drivers -o json |jq -r .data.credentials |base64 -d
[default]
sts_regional_endpoints = regional
role_arn = arn:aws:iam::015719942846:role/sputhenp-sts-yy-openshift-cluster-csi-drivers-ebs-cloud-credenti
web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token
Actual results:
Egress IP not configured properly and cloud-network-config-controller trying to connect to global STS service.
Expected results:
Egress IP should get configured and cloud-network-config-controller should connect to regional STS service instead of global.
Additional info:
Description of problem:
When opening the Devfile sample developer catalog, switching the project in another browser tab, and then opening a Devfile sample link in a new tab, the current project context is lost.
Version-Release number of selected component (if applicable):
4.12, expecting that this happen also in older versions
How reproducible:
Always
Steps to Reproduce:
1. Switch to the developer perspective, navigate to Add > Samples
2. Open a new browser tab and create a new project
3. Ctrl+click a sample in the first tab.
Actual results:
The project has also changed in the "Import sample" page
Expected results:
The original project should also be used for the new "Import sample" page
Additional info:
We had this issue earlier for other catalog entries. Other samples already work fine; only the Devfile sample links don't contain the current namespace.
This is a clone of issue OCPBUGS-1627. The following is the description of the original issue:
—
Description of problem:
Two issues occur when setting a user-defined folder in failureDomains.
1. The installer reports an error when the folder is set to the full path of a user-defined folder in failureDomains.
failureDomains setting in install-config.yaml:
failureDomains:
- name: us-east-1
  region: us-east
  zone: us-east-1a
  server: xxx
  topology:
    datacenter: IBMCloud
    computeCluster: /IBMCloud/host/vcs-mdcnc-workload-1
    networks:
    - multi-zone-qe-dev-1
    datastore: multi-zone-ds-1
    folder: /IBMCloud/vm/qe-jima
- name: us-east-2
  region: us-east
  zone: us-east-2a
  server: xxx
  topology:
    datacenter: IBMCloud
    computeCluster: /IBMCloud/host/vcs-mdcnc-workload-2
    networks:
    - multi-zone-qe-dev-1
    datastore: multi-zone-ds-2
    folder: /IBMCloud/vm/qe-jima
- name: us-east-3
  region: us-east
  zone: us-east-3a
  server: xxx
  topology:
    datacenter: IBMCloud
    computeCluster: /IBMCloud/host/vcs-mdcnc-workload-3
    networks:
    - multi-zone-qe-dev-1
    datastore: workload_share_vcsmdcncworkload3_joYiR
    folder: /IBMCloud/vm/qe-jima
- name: us-west-1
  region: us-west
  zone: us-west-1a
  server: ibmvcenter.vmc-ci.devcluster.openshift.com
  topology:
    datacenter: datacenter-2
    computeCluster: /datacenter-2/host/vcs-mdcnc-workload-4
    networks:
    - multi-zone-qe-dev-1
    datastore: workload_share_vcsmdcncworkload3_joYiR
Error message in terraform after completing ova image import:
DEBUG vsphereprivate_import_ova.import[0]: Still creating... [1m40s elapsed]
DEBUG vsphereprivate_import_ova.import[3]: Creation complete after 1m40s [id=vm-367860]
DEBUG vsphereprivate_import_ova.import[1]: Creation complete after 1m49s [id=vm-367863]
DEBUG vsphereprivate_import_ova.import[0]: Still creating... [1m50s elapsed]
DEBUG vsphereprivate_import_ova.import[2]: Still creating... [1m50s elapsed]
DEBUG vsphereprivate_import_ova.import[2]: Still creating... [2m0s elapsed]
DEBUG vsphereprivate_import_ova.import[0]: Still creating... [2m0s elapsed]
DEBUG vsphereprivate_import_ova.import[2]: Creation complete after 2m2s [id=vm-367862]
DEBUG vsphereprivate_import_ova.import[0]: Still creating... [2m10s elapsed]
DEBUG vsphereprivate_import_ova.import[0]: Creation complete after 2m20s [id=vm-367861]
DEBUG data.vsphere_virtual_machine.template[0]: Reading...
DEBUG data.vsphere_virtual_machine.template[3]: Reading...
DEBUG data.vsphere_virtual_machine.template[1]: Reading...
DEBUG data.vsphere_virtual_machine.template[2]: Reading...
DEBUG data.vsphere_virtual_machine.template[3]: Read complete after 1s [id=42054e33-85d6-e310-7f4f-4c52a73f8338]
DEBUG data.vsphere_virtual_machine.template[1]: Read complete after 2s [id=42053e17-cc74-7c89-f5d1-059c9030ecc7]
DEBUG data.vsphere_virtual_machine.template[2]: Read complete after 2s [id=4205019f-26d8-f9b4-ac0c-2c073fd70b35]
DEBUG data.vsphere_virtual_machine.template[0]: Read complete after 2s [id=4205eaf2-c727-c647-ad44-bd9ad7023c56]
ERROR
ERROR Error: error trying to determine parent targetFolder: folder '/IBMCloud/vm//IBMCloud/vm' not found
ERROR
ERROR with vsphere_folder.folder["IBMCloud-/IBMCloud/vm/qe-jima"],
ERROR on main.tf line 61, in resource "vsphere_folder" "folder":
ERROR 61: resource "vsphere_folder" "folder" {
ERROR
ERROR failed to fetch Cluster: failed to generate asset "Cluster": failure applying terraform for "pre-bootstrap" stage: failed to create cluster: failed to apply Terraform: exit status 1
ERROR
ERROR Error: error trying to determine parent targetFolder: folder '/IBMCloud/vm//IBMCloud/vm' not found
ERROR
ERROR with vsphere_folder.folder["IBMCloud-/IBMCloud/vm/qe-jima"],
ERROR on main.tf line 61, in resource "vsphere_folder" "folder":
ERROR 61: resource "vsphere_folder" "folder" {
ERROR
ERROR
2. The installer panics when the folder is set to a plain user-defined folder name in failure domains.
failure domain in install-config.yaml
failureDomains:
- name: us-east-1
  region: us-east
  zone: us-east-1a
  server: xxx
  topology:
    datacenter: IBMCloud
    computeCluster: /IBMCloud/host/vcs-mdcnc-workload-1
    networks:
    - multi-zone-qe-dev-1
    datastore: multi-zone-ds-1
    folder: qe-jima
- name: us-east-2
  region: us-east
  zone: us-east-2a
  server: xxx
  topology:
    datacenter: IBMCloud
    computeCluster: /IBMCloud/host/vcs-mdcnc-workload-2
    networks:
    - multi-zone-qe-dev-1
    datastore: multi-zone-ds-2
    folder: qe-jima
- name: us-east-3
  region: us-east
  zone: us-east-3a
  server: xxx
  topology:
    datacenter: IBMCloud
    computeCluster: /IBMCloud/host/vcs-mdcnc-workload-3
    networks:
    - multi-zone-qe-dev-1
    datastore: workload_share_vcsmdcncworkload3_joYiR
    folder: qe-jima
- name: us-west-1
  region: us-west
  zone: us-west-1a
  server: xxx
  topology:
    datacenter: datacenter-2
    computeCluster: /datacenter-2/host/vcs-mdcnc-workload-4
    networks:
    - multi-zone-qe-dev-1
    datastore: workload_share_vcsmdcncworkload3_joYiR
panic error message in installer:
INFO Obtaining RHCOS image file from 'https://rhcos.mirror.openshift.com/art/storage/releases/rhcos-4.12/412.86.202208101039-0/x86_64/rhcos-412.86.202208101039-0-vmware.x86_64.ova?sha256='
INFO The file was found in cache: /home/user/.cache/openshift-installer/image_cache/rhcos-412.86.202208101039-0-vmware.x86_64.ova. Reusing...
panic: runtime error: index out of range [1] with length 1
goroutine 1 [running]:
github.com/openshift/installer/pkg/tfvars/vsphere.TFVars({{0xc0013bd068, 0x3, 0x3}, {0xc000b11dd0, 0x12}, {0xc000b11db8, 0x14}, {0xc000b11d28, 0x14}, {0xc000fe8fc0, ...}, ...})
/go/src/github.com/openshift/installer/pkg/tfvars/vsphere/vsphere.go:79 +0x61b
github.com/openshift/installer/pkg/asset/cluster.(*TerraformVariables).Generate(0x1d1ed360, 0x5?)
/go/src/github.com/openshift/installer/pkg/asset/cluster/tfvars.go:847 +0x4798
Based on the explanation of the folder field, it looks like using a folder name should be acceptable. If a plain folder name is not allowed, the installer needs to validate the folder and update the explain text.
sh-4.4$ ./openshift-install explain installconfig.platform.vsphere.failureDomains.topology.folder
KIND: InstallConfig
VERSION: v1
RESOURCE: <string>
  folder is the name or inventory path of the folder in which the virtual machine is created/located.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-09-20-095559
How reproducible:
always
Steps to Reproduce:
see description
Actual results:
Installation fails when a user-defined folder is set.
Expected results:
Installation succeeds when a user-defined folder is set.
Additional info:
Description of problem:
When trying to add a Cisco UCS Rackmount server as a `baremetalhost` CR the following error comes up in the metal3 container log in the openshift-machine-api namespace.
'TransferProtocolType' property which is mandatory to complete the action is missing in the request body
Full log entry:
{"level":"info","ts":1677155695.061805,"logger":"provisioner.ironic","msg":"current provision state","host":"ucs-rackmounts~ocp-test-1","lastError":"Deploy step deploy.deploy failed with BadRequestError: HTTP POST https://10.5.4.78/redfish/v1/Managers/CIMC/VirtualMedia/0/Actions/VirtualMedia.InsertMedia returned code 400. Base.1.4.0.GeneralError: 'TransferProtocolType' property which is mandatory to complete the action is missing in the request body. Extended information: [{'@odata.type': 'Message.v1_0_6.Message', 'MessageId': 'Base.1.4.0.GeneralError', 'Message': "'TransferProtocolType' property which is mandatory to complete the action is missing in the request body.", 'MessageArgs': [], 'Severity': 'Critical'}].","current":"deploy failed","target":"active"}
Version-Release number of selected component (if applicable):
image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30328143480d6598d0b52d41a6b755bb0f4dfe04c4b7aa7aefd02ea793a2c52b imagePullPolicy: IfNotPresent name: metal3-ironic
How reproducible:
Adding a Cisco UCS Rackmount with Redfish enabled as a baremetalhost to metal3
Steps to Reproduce:
1. The address to use: redfish-virtualmedia://10.5.4.78/redfish/v1/Systems/WZP22100SBV
Actual results:
[baelen@baelen-jumphost mce]$ oc get baremetalhosts.metal3.io -n ucs-rackmounts ocp-test-1
NAME STATE CONSUMER ONLINE ERROR AGE
ocp-test-1 provisioning true provisioning error 23h
Expected results:
For the provisioning to be successful.
Additional info:
Description of problem:
We're seeing frequent private DNS zone creation failures in Azure CI jobs over the recent two days; the Azure CI jobs have been greatly affected. https://search.ci.openshift.org/?search=error+creating%2Fupdating+Private+DNS+Zone+Virtual+network&maxAge=48h&context=1&type=build-log&name=&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job
For example, the following error from https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-ci-4.12-upgrade-from-stable-4.11-e2e-azure-sdn-upgrade/1566852244215697408
level=info msg=Consuming Openshift Manifests from target directory
level=info msg=Consuming Common Manifests from target directory
level=info msg=Credentials loaded from file "/var/run/secrets/ci.openshift.io/cluster-profile/osServicePrincipal.json"
level=info msg=Creating infrastructure resources...
level=error
level=error msg=Error: error creating/updating Private DNS Zone Virtual network link "ci-op-1w80vs6f-7f65d-t2zlz-network-link" (Resource Group "ci-op-1w80vs6f-7f65d-t2zlz-rg"): privatedns.VirtualNetworkLinksClient#CreateOrUpdate: Failure sending request: StatusCode=404 -- Original Error: Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource 'ci-op-1w80vs6f-7f65d.ci2.azure.devcluster.openshift.com' not found."
level=error
level=error msg= with module.dns.azureprivatedns_zone_virtual_network_link.network,
level=error msg= on dns/dns.tf line 13, in resource "azureprivatedns_zone_virtual_network_link" "network":
level=error msg= 13: resource "azureprivatedns_zone_virtual_network_link" "network"
Version-Release number of selected component (if applicable):
All OCP versions
How reproducible:
https://search.ci.openshift.org/chart?name=e2e-azure&search=error+creating%2Fupdating+Private+DNS+Zone&maxAge=24h&type=build-log shows 26% of the failed Azure jobs are related to "error creating/updating Private DNS Zone" in the past day. 3/5 of the failed Azure jobs are caused by this in QE’s CI today.
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
No Azure outage was reported from https://status.azure.com/en-us/status. No private zone or DNS records quota exceeded was observed.
This is a clone of issue OCPBUGS-4950. The following is the description of the original issue:
—
Description of problem:
A PR bumping OLM's k8s dependencies to 1.25 wasn't merged into openshift 4.12
Version-Release number of selected component (if applicable):
openshift-4.12
How reproducible:
Always
Steps to Reproduce:
1. Check OLM's repository for k8s dependencies in the 4.12 branch
Actual results:
Has 1.24 k8s dependencies
Expected results:
Has 1.25 k8s dependencies
Additional info:
This is a clone of issue OCPBUGS-3987. The following is the description of the original issue:
—
Description of problem:
When the user supplies nmstateConfig in agent-config.yaml invalid configurations may not be detected
Version-Release number of selected component (if applicable):
4.12
How reproducible:
every time
Steps to Reproduce:
1. Create an invalid NM config. In this case an interface was defined with a route but no IP address.
2. The ISO can be generated with no errors.
3. At run time the invalid config was detected by assisted-service; create-cluster-and-infraenv.service logged the error "failed to validate network yaml for host 0, invalid yaml, error:"
Actual results:
Installation failed
Expected results:
Invalid configuration would be detected when ISO is created
Additional info:
It looks like the ValidateStaticConfigParams check is ONLY done when the nmstateconfig is provided in nmstateconfig.yaml, not when the file is generated (supplied in agent-config.yaml). https://github.com/openshift/installer/blob/master/pkg/asset/agent/manifests/nmstateconfig.go#L188
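For illustration, an invalid configuration of the kind described (a route defined for an interface that has no IP address) might look roughly like this inside agent-config.yaml; the exact field layout here is an assumption and only shows the sort of input that should be rejected at ISO-generation time:

hosts:
- hostname: master-0
  networkConfig:
    interfaces:
    - name: eth0
      type: ethernet
      state: up            # note: no ipv4/ipv6 address defined
    routes:
      config:
      - destination: 0.0.0.0/0
        next-hop-address: 192.168.111.1
        next-hop-interface: eth0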
We cache images by filename, which works when downloading from the Internet as the filename always includes the CoreOS version.
However, when extracting an image from the release payload, it always has the same name. Therefore, we will never update it to a newer image even when running different versions of the installer.
A possible solution:
An alternative might be to set the name of the cache file to something different. It's not clear how we'd guarantee a match between the release payload we've been given and the ISO unless the name was based on the release payload (which eliminates some of the point of the cache, since ordinarily most release payloads will point to a small number of images).
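One possible shape of that alternative (a sketch only; the cache location and variable names are made up, and this is not the installer's actual code path): key the cache entry on the release payload digest so that a different payload can never reuse a stale image.

# Derive the cache key from the payload digest rather than the constant file name.
digest="$(oc adm release info "${RELEASE_IMAGE}" -o json | jq -r .digest)"
cache_dir="${HOME}/.cache/agent/image_cache/${digest}"
mkdir -p "${cache_dir}"
# extract or download the ISO into ${cache_dir}; a new payload digest yields a fresh entry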
Currently, the AWS actuator has a static list of instance types embedded in it. This means that as new instance types are added, we have to continually update this list.
Ideally, we could fetch this information from the AWS API as we do in GCP.
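The actuator itself would use the AWS SDK for this, but as an illustration of the data that could be fetched dynamically instead of kept in a static table (the query shape is just an example):

aws ec2 describe-instance-types \
  --instance-types m6i.xlarge \
  --query 'InstanceTypes[].{vcpus:VCpuInfo.DefaultVCpus,memoryMiB:MemoryInfo.SizeInMiB}' \
  --output table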
DoD:
This is a clone of issue OCPBUGS-4022. The following is the description of the original issue:
—
Description of problem:
Unnecessary react warning:
Warning: Each child in a list should have a unique "key" prop. Check the render method of `NavSection`. See https://reactjs.org/link/warning-keys for more information.
NavItemHref@http://localhost:9012/static/main-785e94355aeacc12c321.js:5141:88
NavSection@http://localhost:9012/static/main-785e94355aeacc12c321.js:5294:20
PluginNavItem@http://localhost:9012/static/main-785e94355aeacc12c321.js:5582:23
div
PerspectiveNav@http://localhost:9012/static/main-785e94355aeacc12c321.js:5398:134
Version-Release number of selected component (if applicable):
4.11 was fine
4.12 and 4.13 (master) shows this warning
How reproducible:
Always
Steps to Reproduce:
1. Open browser log
2. Open web console
Actual results:
React warning
Expected results:
No React warning
Create a script that gathers debug information from a host running the agent ISO and exports it in a standard format so that we can ask customers to provide it for debugging when something has gone wrong (and also use it in CI).
For now, it is fine to require the user to ssh into the host to run the script. The script should be already in place inside the agent ISO.
The output should probably be a compressed tar file. That file could be saved locally, or potentially piped to stdout so that a user only has to run a command like: ssh node0 -c agent-gather >node0.tgz
Things we need to collect:
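The concrete list of artifacts is not captured here; purely as a sketch of the general shape such a script could take (the collected items and paths are assumptions):

#!/bin/bash
# agent-gather sketch: collect a few debug artifacts and stream a tarball to stdout.
set -euo pipefail
workdir="$(mktemp -d)"
trap 'rm -rf "${workdir}"' EXIT
journalctl --no-pager > "${workdir}/journal.log" 2>&1 || true
ip -d addr show > "${workdir}/ip-addr.txt" 2>&1 || true
podman ps -a > "${workdir}/podman-ps.txt" 2>&1 || true
tar -C "${workdir}" -czf - .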
There is a bug where creating OLM subscription manifests early in the installation process results in those OLM operators not being installed.
This is because the OLM installation Jobs fail when they are tried early in the installation process, and OLM does not retry those jobs sufficiently and eventually gives up on them.
This should be solved starting OCP 4.12, but until then, we should solve this using Assisted.
A way to solve this is to delay the installation of OLM operators to only occur after the cluster is up and healthy.
This can be done by creating the subscriptions with "installPlanApproval" set to "Manual" instead of "Automatic". Then once the cluster is up and healthy, the assisted-controller should approve the InstallPlans that OLM will create for the operators. This will then trigger the installation which is more likely to succeed since the cluster is up and healthy at this point
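A hedged sketch of that flow (the operator, namespace, and channel names are placeholders): create the Subscription with manual approval, then have the assisted-controller approve the generated InstallPlan once the cluster is healthy.

cat <<'EOF' | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: example-operator
  namespace: example-operator-ns
spec:
  channel: stable
  name: example-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Manual
EOF
# Later, once the cluster is up and healthy, approve whatever InstallPlan OLM generated:
plan="$(oc -n example-operator-ns get installplan -o jsonpath='{.items[0].metadata.name}')"
oc -n example-operator-ns patch installplan "${plan}" --type merge -p '{"spec":{"approved":true}}'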
This is a clone of issue OCPBUGS-10558. The following is the description of the original issue:
—
Description of problem:
When running a cluster on application credentials, this event appears repeatedly: ns/openshift-machine-api machineset/nhydri0d-f8dcc-kzcwf-worker-0 hmsg/173228e527 - pathological/true reason/ReconcileError could not find information for "ci.m1.xlarge"
Version-Release number of selected component (if applicable):
How reproducible:
Happens in the CI (https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_release/33330/rehearse-33330-periodic-ci-shiftstack-shiftstack-ci-main-periodic-4.13-e2e-openstack-ovn-serial/1633149670878351360).
Steps to Reproduce:
1. On a living cluster, rotate the OpenStack cloud credentials
2. Invalidate the previous credentials
3. Watch the machine-api events (`oc -n openshift-machine-api get event`). A `Warning` type of issue could not find information for "name-of-the-flavour" will appear.
If the cluster was installed using a password that you can't invalidate:
1. Rotate the cloud credentials to application credentials
2. Restart MAPO (`oc -n openshift-machine-api get pods -o NAME | xargs -r oc -n openshift-machine-api delete`)
3. Rotate cloud credentials again
4. Revoke the first application credentials you set
5. Finally watch the events (`oc -n openshift-machine-api get event`)
The event signals that MAPO wasn't able to update flavour information on the MachineSet status.
Actual results:
Expected results:
No issue detecting the flavour details
Additional info:
Offending code likely around this line: https://github.com/openshift/machine-api-provider-openstack/blob/bcb08a7835c08d20606d75757228fd03fbb20dab/pkg/machineset/controller.go#L116
Description of problem:
Egress IP is not being assigned to the primary interface of the node as per the hostsubnet definition. The issue is being observed on an OpenShift cluster hosted in a disconnected AWS environment. The following steps were performed at the AWS end:
- A disconnected VPC was created and OpenShift was installed as per documentation.
- An Elastic IP could not be used as it is a disconnected environment. The customer identified a free IP from the same subnet as the node and modified the interface of the node to add a secondary IP.
It seems the cloud.network.openshift.io/egress-ipconfig annotation is needed on the node to attach the IP to the primary interface, but it is missing. From the SDN pod log on the same node I could see it complaining about 'an incomplete annotation "cloud.network.openshift.io/egress-ipconfig"'. Will share more details over comments.
Version-Release number of selected component (if applicable):
Openshift 4.10.28
How reproducible:
Always
Steps to Reproduce:
1. Create a disconnected environment on AWS.
2. Find a free IP from the subnet where a worker node is hosted and add it as a secondary IP to the NIC of that node.
3. Configure the hostsubnet and netnamespace on the OpenShift cluster (see the example below).
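As an example of step 3 (the node name, project name, and IP are placeholders), the manual egress IP assignment for OpenShift SDN is roughly:

oc patch hostsubnet <node-name> --type=merge -p '{"egressIPs": ["<free-secondary-ip>"]}'
oc patch netnamespace <project-name> --type=merge -p '{"egressIPs": ["<free-secondary-ip>"]}'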
Actual results:
- Egress IP is not being attached to the primary interface of the node for which the hostsubnet has been configured
Expected results:
- Egress IP should get configured without any issue.
Additional info:
Description of problem:
One Multus case always fails in QE e2e testing. Using the same net-attach-def and pod configuration files, testing passed in 4.11 but fails in 4.12 and 4.13.
Version-Release number of selected component (if applicable):
4.12 and 4.13
How reproducible:
All the times
Steps to Reproduce:
[weliang@weliang networking]$ oc create -f https://raw.githubusercontent.com/weliang1/verification-tests/master/testdata/networking/multus-cni/NetworkAttachmentDefinitions/runtimeconfig-def-ipandmac.yaml
networkattachmentdefinition.k8s.cni.cncf.io/runtimeconfig-def created
[weliang@weliang networking]$ oc get net-attach-def -o yaml
apiVersion: v1
items:
- apiVersion: k8s.cni.cncf.io/v1
  kind: NetworkAttachmentDefinition
  metadata:
    creationTimestamp: "2023-01-03T16:33:03Z"
    generation: 1
    name: runtimeconfig-def
    namespace: test
    resourceVersion: "64139"
    uid: bb26c08f-adbf-477e-97ab-2aa7461e50c4
  spec:
    config: '{ "cniVersion": "0.3.1", "name": "runtimeconfig-def", "plugins": [{ "type": "macvlan", "capabilities": { "ips": true }, "mode": "bridge", "ipam": { "type": "static" } }, { "type": "tuning", "capabilities": { "mac": true } }] }'
kind: List
metadata:
  resourceVersion: ""
[weliang@weliang networking]$ oc create -f https://raw.githubusercontent.com/weliang1/verification-tests/master/testdata/networking/multus-cni/Pods/runtimeconfig-pod-ipandmac.yaml
pod/runtimeconfig-pod created
[weliang@weliang networking]$ oc get pod
NAME READY STATUS RESTARTS AGE
runtimeconfig-pod 0/1 ContainerCreating 0 6s
[weliang@weliang networking]$ oc describe pod runtimeconfig-pod
Name: runtimeconfig-pod
Namespace: test
Priority: 0
Node: weliang-01031-bvxtz-worker-a-qlwz7.c.openshift-qe.internal/10.0.128.4
Start Time: Tue, 03 Jan 2023 11:33:45 -0500
Labels: <none>
Annotations: k8s.v1.cni.cncf.io/networks: [ { "name": "runtimeconfig-def", "ips": [ "192.168.22.2/24" ], "mac": "CA:FE:C0:FF:EE:00" } ]
             openshift.io/scc: anyuid
Status: Pending
IP:
IPs: <none>
Containers:
  runtimeconfig-pod:
    Container ID:
    Image: quay.io/openshifttest/hello-sdn@sha256:c89445416459e7adea9a5a416b3365ed3d74f2491beb904d61dc8d1eb89a72a4
    Image ID:
    Port: <none>
    Host Port: <none>
    State: Waiting
      Reason: ContainerCreating
    Ready: False
    Restart Count: 0
    Environment: <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k5zqd (ro)
Conditions:
  Type Status
  Initialized True
  Ready False
  ContainersReady False
  PodScheduled True
Volumes:
  kube-api-access-k5zqd:
    Type: Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds: 3607
    ConfigMapName: kube-root-ca.crt
    ConfigMapOptional: <nil>
    DownwardAPI: true
    ConfigMapName: openshift-service-ca.crt
    ConfigMapOptional: <nil>
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type Reason Age From Message
  ---- ------ ---- ---- -------
  Normal Scheduled 26s default-scheduler Successfully assigned test/runtimeconfig-pod to weliang-01031-bvxtz-worker-a-qlwz7.c.openshift-qe.internal
  Normal AddedInterface 24s multus Add eth0 [10.128.2.115/23] from openshift-sdn
  Warning FailedCreatePodSandBox 23s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_runtimeconfig-pod_test_7d5f3e7a-846d-4cfb-ac78-fd08b27102ae_0(cff792dbd07e8936d04aad31964bd7b626c19a90eb9d92a67736323a1a2303c4): error adding pod test_runtimeconfig-pod to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): [test/runtimeconfig-pod/7d5f3e7a-846d-4cfb-ac78-fd08b27102ae:runtimeconfig-def]: error adding container to network "runtimeconfig-def": Interface name contains an invalid character /
  Normal AddedInterface 7s multus Add eth0 [10.128.2.116/23] from openshift-sdn
  Warning FailedCreatePodSandBox 7s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_runtimeconfig-pod_test_7d5f3e7a-846d-4cfb-ac78-fd08b27102ae_0(d2456338fa65847d5dc744dea64972912c10b2a32d3450910b0b81cdc9159ca4): error adding pod test_runtimeconfig-pod to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): [test/runtimeconfig-pod/7d5f3e7a-846d-4cfb-ac78-fd08b27102ae:runtimeconfig-def]: error adding container to network "runtimeconfig-def": Interface name contains an invalid character /
[weliang@weliang networking]$
Actual results:
Pod is not running
Expected results:
Pod should be in running state
Additional info:
Description of problem:
E2E CI feature files are failing as Mocha version couldn't be determined
Version-Release number of selected component (if applicable):
How reproducible:
CI Search : https://search.ci.openshift.org/?search=Couldn%27t+determine+Mocha+version&maxAge=336h&context=1&type=bug%2Bjunit&name=pull-ci-openshift-console-operator-master-e2e-aws-console&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job
Steps to Reproduce:
1. 2. 3.
Actual results:
E2E tests failing with `Couldn't determine Mocha version` error
Expected results:
E2E tests should pass without any failures
Additional info:
This is a clone of issue OCPBUGS-10888. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-10887. The following is the description of the original issue:
—
Description of problem:
Following https://bugzilla.redhat.com/show_bug.cgi?id=2102765 respectively https://issues.redhat.com/browse/OCPBUGS-2140 problems with OpenID Group sync have been resolved. Yet the problem documented in https://bugzilla.redhat.com/show_bug.cgi?id=2102765 still does exist and we see that Groups that are being removed are still part of the chache in oauth-apiserver, causing a panic of the respective components and failures during login for potentially affected users. So in general, it looks like that oauth-apiserver cache is not properly refreshing or handling the OpenID Groups being synced. E1201 11:03:14.625799 1 runtime.go:76] Observed a panic: interface conversion: interface {} is nil, not *v1.Group goroutine 3706798 [running]: k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1.1() k8s.io/apiserver@v0.22.2/pkg/server/filters/timeout.go:103 +0xb0 panic({0x1aeab00, 0xc001400390}) runtime/panic.go:838 +0x207 k8s.io/apiserver/pkg/endpoints/filters.WithAudit.func1.1.1() k8s.io/apiserver@v0.22.2/pkg/endpoints/filters/audit.go:80 +0x2a k8s.io/apiserver/pkg/endpoints/filters.WithAudit.func1.1() k8s.io/apiserver@v0.22.2/pkg/endpoints/filters/audit.go:89 +0x250 panic({0x1aeab00, 0xc001400390}) runtime/panic.go:838 +0x207 github.com/openshift/library-go/pkg/oauth/usercache.(*GroupCache).GroupsFor(0xc00081bf18?, {0xc000c8ac03?, 0xc001400360?}) github.com/openshift/library-go@v0.0.0-20211013122800-874db8a3dac9/pkg/oauth/usercache/groups.go:47 +0xe7 github.com/openshift/oauth-server/pkg/groupmapper.(*UserGroupsMapper).processGroups(0xc0002c8880, {0xc0005d4e60, 0xd}, {0xc000c8ac03, 0x7}, 0x1?) github.com/openshift/oauth-server/pkg/groupmapper/groupmapper.go:101 +0xb5 github.com/openshift/oauth-server/pkg/groupmapper.(*UserGroupsMapper).UserFor(0xc0002c8880, {0x20f3c40, 0xc000e18bc0}) github.com/openshift/oauth-server/pkg/groupmapper/groupmapper.go:83 +0xf4 github.com/openshift/oauth-server/pkg/oauth/external.(*Handler).login(0xc00022bc20, {0x20eebb0, 0xc00041b058}, 0xc0015d8200, 0xc001438140?, {0xc0000e7ce0, 0x150}) github.com/openshift/oauth-server/pkg/oauth/external/handler.go:209 +0x74f github.com/openshift/oauth-server/pkg/oauth/external.(*Handler).ServeHTTP(0xc00022bc20, {0x20eebb0, 0xc00041b058}, 0x0?) github.com/openshift/oauth-server/pkg/oauth/external/handler.go:180 +0x74a net/http.(*ServeMux).ServeHTTP(0x1c9dda0?, {0x20eebb0, 0xc00041b058}, 0xc0015d8200) net/http/server.go:2462 +0x149 github.com/openshift/oauth-server/pkg/server/headers.WithRestoreAuthorizationHeader.func1({0x20eebb0, 0xc00041b058}, 0xc0015d8200) github.com/openshift/oauth-server/pkg/server/headers/oauthbasic.go:27 +0x10f net/http.HandlerFunc.ServeHTTP(0x0?, {0x20eebb0?, 0xc00041b058?}, 0x0?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompleted.func1({0x20eebb0, 0xc00041b058}, 0xc0015d8200) k8s.io/apiserver@v0.22.2/pkg/endpoints/filterlatency/filterlatency.go:103 +0x1a5 net/http.HandlerFunc.ServeHTTP(0xc0005e0280?, {0x20eebb0?, 0xc00041b058?}, 0x0?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1({0x20eebb0, 0xc00041b058}, 0xc0015d8200) k8s.io/apiserver@v0.22.2/pkg/endpoints/filters/authorization.go:64 +0x498 net/http.HandlerFunc.ServeHTTP(0x0?, {0x20eebb0?, 0xc00041b058?}, 0x0?) 
net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({0x20eebb0, 0xc00041b058}, 0xc0015d8200) k8s.io/apiserver@v0.22.2/pkg/endpoints/filterlatency/filterlatency.go:79 +0x178 net/http.HandlerFunc.ServeHTTP(0x2f6cea0?, {0x20eebb0?, 0xc00041b058?}, 0x3?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/server/filters.WithMaxInFlightLimit.func1({0x20eebb0, 0xc00041b058}, 0xc0015d8200) k8s.io/apiserver@v0.22.2/pkg/server/filters/maxinflight.go:187 +0x2a4 net/http.HandlerFunc.ServeHTTP(0x0?, {0x20eebb0?, 0xc00041b058?}, 0x0?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompleted.func1({0x20eebb0, 0xc00041b058}, 0xc0015d8200) k8s.io/apiserver@v0.22.2/pkg/endpoints/filterlatency/filterlatency.go:103 +0x1a5 net/http.HandlerFunc.ServeHTTP(0x11?, {0x20eebb0?, 0xc00041b058?}, 0x1aae340?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1({0x20eebb0, 0xc00041b058}, 0xc0015d8200) k8s.io/apiserver@v0.22.2/pkg/endpoints/filters/impersonation.go:50 +0x21c net/http.HandlerFunc.ServeHTTP(0xc000d52120?, {0x20eebb0?, 0xc00041b058?}, 0x0?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({0x20eebb0, 0xc00041b058}, 0xc0015d8200) k8s.io/apiserver@v0.22.2/pkg/endpoints/filterlatency/filterlatency.go:79 +0x178 net/http.HandlerFunc.ServeHTTP(0x0?, {0x20eebb0?, 0xc00041b058?}, 0x0?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompleted.func1({0x20eebb0, 0xc00041b058}, 0xc0015d8200) k8s.io/apiserver@v0.22.2/pkg/endpoints/filterlatency/filterlatency.go:103 +0x1a5 net/http.HandlerFunc.ServeHTTP(0xc0015d8100?, {0x20eebb0?, 0xc00041b058?}, 0xc000531930?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filters.WithAudit.func1({0x7fae682a40d8?, 0xc00041b048}, 0x9dbbaa?) k8s.io/apiserver@v0.22.2/pkg/endpoints/filters/audit.go:111 +0x549 net/http.HandlerFunc.ServeHTTP(0xc00003def0?, {0x7fae682a40d8?, 0xc00041b048?}, 0x0?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({0x7fae682a40d8, 0xc00041b048}, 0xc0015d8100) k8s.io/apiserver@v0.22.2/pkg/endpoints/filterlatency/filterlatency.go:79 +0x178 net/http.HandlerFunc.ServeHTTP(0x0?, {0x7fae682a40d8?, 0xc00041b048?}, 0x0?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompleted.func1({0x7fae682a40d8, 0xc00041b048}, 0xc0015d8100) k8s.io/apiserver@v0.22.2/pkg/endpoints/filterlatency/filterlatency.go:103 +0x1a5 net/http.HandlerFunc.ServeHTTP(0x20f0f58?, {0x7fae682a40d8?, 0xc00041b048?}, 0x20cfd00?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filters.withAuthentication.func1({0x7fae682a40d8, 0xc00041b048}, 0xc0015d8100) k8s.io/apiserver@v0.22.2/pkg/endpoints/filters/authentication.go:80 +0x8b9 net/http.HandlerFunc.ServeHTTP(0x20f0f20?, {0x7fae682a40d8?, 0xc00041b048?}, 0x20cfc08?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({0x7fae682a40d8, 0xc00041b048}, 0xc000e69e00) k8s.io/apiserver@v0.22.2/pkg/endpoints/filterlatency/filterlatency.go:88 +0x46b net/http.HandlerFunc.ServeHTTP(0xc0019f5890?, {0x7fae682a40d8?, 0xc00041b048?}, 0xc000848764?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/server/filters.WithCORS.func1({0x7fae682a40d8, 0xc00041b048}, 0xc000e69e00) k8s.io/apiserver@v0.22.2/pkg/server/filters/cors.go:75 +0x10b net/http.HandlerFunc.ServeHTTP(0xc00149a380?, {0x7fae682a40d8?, 0xc00041b048?}, 0xc0008487d0?) 
net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1() k8s.io/apiserver@v0.22.2/pkg/server/filters/timeout.go:108 +0xa2 created by k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP k8s.io/apiserver@v0.22.2/pkg/server/filters/timeout.go:94 +0x2cc goroutine 3706802 [running]: k8s.io/apimachinery/pkg/util/runtime.logPanic({0x19eb780?, 0xc001206e20}) k8s.io/apimachinery@v0.22.2/pkg/util/runtime/runtime.go:74 +0x99 k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0xc0016aec60, 0x1, 0x1560f26?}) k8s.io/apimachinery@v0.22.2/pkg/util/runtime/runtime.go:48 +0x75 panic({0x19eb780, 0xc001206e20}) runtime/panic.go:838 +0x207 k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP(0xc0005047c8, {0x20eecd0?, 0xc0010fae00}, 0xdf8475800?) k8s.io/apiserver@v0.22.2/pkg/server/filters/timeout.go:114 +0x452 k8s.io/apiserver/pkg/endpoints/filters.withRequestDeadline.func1({0x20eecd0, 0xc0010fae00}, 0xc000e69d00) k8s.io/apiserver@v0.22.2/pkg/endpoints/filters/request_deadline.go:101 +0x494 net/http.HandlerFunc.ServeHTTP(0xc0016af048?, {0x20eecd0?, 0xc0010fae00?}, 0xc0000bc138?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/server/filters.WithWaitGroup.func1({0x20eecd0?, 0xc0010fae00}, 0xc000e69d00) k8s.io/apiserver@v0.22.2/pkg/server/filters/waitgroup.go:59 +0x177 net/http.HandlerFunc.ServeHTTP(0x20f0f58?, {0x20eecd0?, 0xc0010fae00?}, 0x7fae705daff0?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filters.WithAuditAnnotations.func1({0x20eecd0, 0xc0010fae00}, 0xc000e69c00) k8s.io/apiserver@v0.22.2/pkg/endpoints/filters/audit_annotations.go:37 +0x230 net/http.HandlerFunc.ServeHTTP(0x20f0f58?, {0x20eecd0?, 0xc0010fae00?}, 0x20cfc08?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filters.WithWarningRecorder.func1({0x20eecd0?, 0xc0010fae00}, 0xc000e69b00) k8s.io/apiserver@v0.22.2/pkg/endpoints/filters/warning.go:35 +0x2bb net/http.HandlerFunc.ServeHTTP(0x1c9dda0?, {0x20eecd0?, 0xc0010fae00?}, 0xd?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filters.WithCacheControl.func1({0x20eecd0, 0xc0010fae00}, 0x0?) k8s.io/apiserver@v0.22.2/pkg/endpoints/filters/cachecontrol.go:31 +0x126 net/http.HandlerFunc.ServeHTTP(0x20f0f58?, {0x20eecd0?, 0xc0010fae00?}, 0x20cfc08?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/server/httplog.WithLogging.func1({0x20ef480?, 0xc001c20620}, 0xc000e69a00) k8s.io/apiserver@v0.22.2/pkg/server/httplog/httplog.go:103 +0x518 net/http.HandlerFunc.ServeHTTP(0x20f0f58?, {0x20ef480?, 0xc001c20620?}, 0x20cfc08?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filters.WithRequestInfo.func1({0x20ef480, 0xc001c20620}, 0xc000e69900) k8s.io/apiserver@v0.22.2/pkg/endpoints/filters/requestinfo.go:39 +0x316 net/http.HandlerFunc.ServeHTTP(0x20f0f58?, {0x20ef480?, 0xc001c20620?}, 0xc0007c3f70?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filters.withRequestReceivedTimestampWithClock.func1({0x20ef480, 0xc001c20620}, 0xc000e69800) k8s.io/apiserver@v0.22.2/pkg/endpoints/filters/request_received_time.go:38 +0x27e net/http.HandlerFunc.ServeHTTP(0x419e2c?, {0x20ef480?, 0xc001c20620?}, 0xc0007c3e40?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/server/filters.withPanicRecovery.func1({0x20ef480?, 0xc001c20620?}, 0xc0004ff600?) k8s.io/apiserver@v0.22.2/pkg/server/filters/wrap.go:74 +0xb1 net/http.HandlerFunc.ServeHTTP(0x1c05260?, {0x20ef480?, 0xc001c20620?}, 0x8?) 
net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/endpoints/filters.withAuditID.func1({0x20ef480, 0xc001c20620}, 0xc000e69600) k8s.io/apiserver@v0.22.2/pkg/endpoints/filters/with_auditid.go:66 +0x40d net/http.HandlerFunc.ServeHTTP(0x1c9dda0?, {0x20ef480?, 0xc001c20620?}, 0xd?) net/http/server.go:2084 +0x2f github.com/openshift/oauth-server/pkg/server/headers.WithPreserveAuthorizationHeader.func1({0x20ef480, 0xc001c20620}, 0xc000e69600) github.com/openshift/oauth-server/pkg/server/headers/oauthbasic.go:16 +0xe8 net/http.HandlerFunc.ServeHTTP(0xc0016af9d0?, {0x20ef480?, 0xc001c20620?}, 0x16?) net/http/server.go:2084 +0x2f github.com/openshift/oauth-server/pkg/server/headers.WithStandardHeaders.func1({0x20ef480, 0xc001c20620}, 0x4d55c0?) github.com/openshift/oauth-server/pkg/server/headers/headers.go:30 +0x18f net/http.HandlerFunc.ServeHTTP(0x0?, {0x20ef480?, 0xc001c20620?}, 0xc0016afac8?) net/http/server.go:2084 +0x2f k8s.io/apiserver/pkg/server.(*APIServerHandler).ServeHTTP(0xc00098d622?, {0x20ef480?, 0xc001c20620?}, 0xc000401000?) k8s.io/apiserver@v0.22.2/pkg/server/handler.go:189 +0x2b net/http.serverHandler.ServeHTTP({0xc0019f5170?}, {0x20ef480, 0xc001c20620}, 0xc000e69600) net/http/server.go:2916 +0x43b net/http.(*conn).serve(0xc0002b1720, {0x20f0f58, 0xc0001e8120}) net/http/server.go:1966 +0x5d7 created by net/http.(*Server).Serve net/http/server.go:3071 +0x4db
Version-Release number of selected component (if applicable):
OpenShift Container Platform 4.11.13
How reproducible:
- Always
Steps to Reproduce:
1. Install OpenShift Container Platform 4.11
2. Configure OpenID Group Sync (as per https://docs.openshift.com/container-platform/4.11/authentication/identity_providers/configuring-oidc-identity-provider.html#identity-provider-oidc-CR_configuring-oidc-identity-provider)
3. Have users with hundreds of groups
4. Log in and, after a while, remove some groups from the user in the IDP and from OpenShift Container Platform
5. Try to log in again and see the panic in oauth-apiserver
Actual results:
User is unable to log in and the oauth pods report a panic as shown above
Expected results:
oauth-apiserver should invalidate the cache quickly to remove potentially invalid references to non-existing groups
Additional info:
Description of problem:
Disconnected IPI OCP 4.11.5 cluster install on baremetal fails when hostname of master nodes does not include "master"
Version-Release number of selected component (if applicable): 4.11.5
How reproducible: Perform disconnected IPI install of OCP 4.11.5 on bare metal with master nodes that do not contain the text "master"
Steps to Reproduce:
Perform disconnected IPI install of OCP 4.11.5 on bare metal with master nodes that do not contain the text "master"
Actual results: master nodes do not come up.
Expected results: master nodes should come up despite that the text "master" is not in their hostname.
Additional info:
My customer reinstalled a new cluster using the fix here, but they have the exact same issue. The metal3 pod has an empty PROVISIONING_MACS value. Can we work together with them to understand why the new code fix https://github.com/openshift/cluster-baremetal-operator/commit/76bd6bc461b30a6a450f85a42e492a0933178aee is not working?
cat metal3-static-ip-set/metal3-static-ip-set/logs/current.log
2022-09-27T14:19:38.140662564Z + '[' -z 10.17.199.3/27 ']'
2022-09-27T14:19:38.140662564Z + '[' -z '' ']'
2022-09-27T14:19:38.140662564Z + '[' -n '' ']'
2022-09-27T14:19:38.140722345Z ERROR: Could not find suitable interface for "10.17.199.3/27"
2022-09-27T14:19:38.140726312Z + '[' -n '' ']'
2022-09-27T14:19:38.140726312Z + echo 'ERROR: Could not find suitable interface for "10.17.199.3/27"'
2022-09-27T14:19:38.140726312Z + exit 1
cat metal3-b9bf8d595-gv94k.yaml
...
  initContainers:
  - command:
    - /set-static-ip
    env:
    - name: PROVISIONING_IP
      value: 10.17.199.3/27
    - name: PROVISIONING_INTERFACE
    - name: PROVISIONING_MACS   <------------------------- missing MACS
    image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4f04793bd109ecba2dfe43be93dc990ac5299272482c150bd5f2eee0f80c983b
    imagePullPolicy: IfNotPresent
    name: metal3-static-ip-set
....
omc logs machine-api-controllers-6b9ffd96cd-grh6l -c nodelink-controller -n openshift-machine-api
2022-09-21T16:13:43.600517485Z I0921 16:13:43.600513 1 nodelink_controller.go:408] Finding machine from node "blocp-1-106-m-0.c106-1.sc.evolhse.hydro.qc.ca"
2022-09-21T16:13:43.600521381Z I0921 16:13:43.600517 1 nodelink_controller.go:425] Finding machine from node "blocp-1-106-m-0.c106-1.sc.evolhse.hydro.qc.ca" by ProviderID
2022-09-21T16:13:43.600525225Z W0921 16:13:43.600521 1 nodelink_controller.go:427] Node "blocp-1-106-m-0.c106-1.sc.evolhse.hydro.qc.ca" has no providerID
2022-09-21T16:13:43.600528917Z I0921 16:13:43.600524 1 nodelink_controller.go:448] Finding machine from node "blocp-1-106-m-0.c106-1.sc.evolhse.hydro.qc.ca" by IP
2022-09-21T16:13:43.600532711Z I0921 16:13:43.600529 1 nodelink_controller.go:453] Found internal IP for node "blocp-1-106-m-0.c106-1.sc.evolhse.hydro.qc.ca": "10.17.192.33"
2022-09-21T16:13:43.600551289Z I0921 16:13:43.600544 1 nodelink_controller.go:477] Matching machine not found for node "blocp-1-106-m-0.c106-1.sc.evolhse.hydro.qc.ca" with internal IP "10.17.192.33"
From @dtantsur WIP PR: https://github.com/openshift/cluster-baremetal-operator/pull/299
The customer is waiting for this fix. The previous code change doesn't fix the customer's situation.
Please refer to this slack thread :https://coreos.slack.com/archives/CFP6ST0A3/p1664215102459219
Description of problem:
For example, "openshift-install explain installconfig.platform.gcp.publicDNSZone" tells "PublicDNSZone contains the zone ID and project where the Public DNS zone will be created", but in fact it's for specifying an existing zone where the Public DNS zone records will be put in.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-10-10-015203
How reproducible:
Always
Steps to Reproduce:
1. openshift-install explain installconfig.platform.gcp.publicDNSZone
2. openshift-install explain installconfig.platform.gcp.privateDNSZone
Actual results:
For example, it tells "PublicDNSZone contains the zone ID and project where the Public DNS zone will be created."
Expected results:
It should be like "PublicDNSZone contains the zone ID and project where the Public DNS zone records will be created."
Additional info:
$ openshift-install version
openshift-install 4.12.0-0.nightly-2022-10-10-015203
built from commit 02102a96b3f7c78337b32dcafe2e28be6fb67a0f
release image registry.ci.openshift.org/ocp/release@sha256:00806cf7faaa86981e73b478a72c1b7a838cd08b215f3a9ab9b278ae94d9a794
release architecture amd64
$
$ openshift-install explain installconfig.platform.gcp.publicDNSZone
KIND:     InstallConfig
VERSION:  v1

RESOURCE: <object>
  PublicDNSZone Technology Preview. PublicDNSZone contains the zone ID and project where the Public DNS zone will be created.

FIELDS:
    id <string>
      ID Technology Preview. ID or name of the zone.
    project <string>
      ProjectID Technology Preview When the ProjectID is provided, the zone will be created in this project. When the ProjectID is empty, the DNS zone with this ID will be created and managed in the Service Project (GCP.ProjectID).
$
$ openshift-install explain installconfig.platform.gcp.privateDNSZone
KIND:     InstallConfig
VERSION:  v1

RESOURCE: <object>
  PrivateDNSZone Technology Preview. PrivateDNSZone contains the zone ID and project where the Private DNS zone will be created.

FIELDS:
    id <string>
      ID Technology Preview. ID or name of the zone.
    project <string>
      ProjectID Technology Preview When the ProjectID is provided, the zone will be created in this project. When the ProjectID is empty, the DNS zone with this ID will be created and managed in the Service Project (GCP.ProjectID).
$
The DVO metrics gatherer in the Insights operator relies on the "deployment-validation-operator" namespace name, but this is fragile, because the DVO can be installed in other namespaces (e.g. it is installed in the "openshift-operators" namespace when installing through OperatorHub).
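For illustration, a minimal client-go sketch of the direction implied above: looking the DVO service up by label across all namespaces instead of hard-coding the namespace. The label selector used here is an assumption, not the operator's actual labels.

```go
// Hypothetical sketch: discover the DVO service in whatever namespace it lives in.
package insights

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func findDVOService(ctx context.Context, cs kubernetes.Interface) (namespace, name string, err error) {
	// Listing with an empty namespace searches all namespaces.
	svcs, err := cs.CoreV1().Services("").List(ctx, metav1.ListOptions{
		LabelSelector: "name=deployment-validation-operator", // assumed label
	})
	if err != nil {
		return "", "", err
	}
	if len(svcs.Items) == 0 {
		return "", "", fmt.Errorf("deployment-validation-operator service not found in any namespace")
	}
	return svcs.Items[0].Namespace, svcs.Items[0].Name, nil
}
```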
This bug is a backport clone of [Bugzilla Bug 2100429](https://bugzilla.redhat.com/show_bug.cgi?id=2100429). The following is the description of the original bug:
—
Description of problem:
[apiserver-auth] default SCC restricted allow volumes don't have "ephemeral" caused deployment with Generic Ephemeral Volumes stuck at Pending
Version-Release number of selected component (if applicable):
Cluster version is 4.11.0-0.nightly-2022-06-22-190830
$ oc version
Client Version: 4.11.0-0.nightly-2022-05-11-054135
Kustomize Version: v4.5.4
Server Version: 4.11.0-0.nightly-2022-06-22-190830
Kubernetes Version: v1.24.0+284d62a
How reproducible:
Always
Steps to Reproduce:
1. Set up a AWS OCP cluster with 4.11 nightly
2. Create a deployment with Generic Ephemeral Volumes
3. Waiting for the deployment ready and check the volume could write and read data
Test data:
wangpenghao@MacBook-Pro ~ cat temp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-dep
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-dep
  template:
    metadata:
      labels:
        app: my-dep
    spec:
      containers:
Actual results:
In step 3: the deployment is stuck at Pending because the pod is unable to validate against any security context constraint.
Expected results:
In step 3: the deployment should become ready with the default restricted SCC; the default restricted SCC should allow
volumes:
Additional info:
Generic ephemeral volumes are the safer option of these two - they just create/delete PVCs on behalf of users, and most users can already create PVCs.
The "ephemeral" volume type is not in the scc.volumes list definition:
https://docs.openshift.com/container-platform/4.10/authentication/managing-security-context-constraints.html#authorization-cont[…]ing-internal-oauth
So currently, if customers want to use the ephemeral volume type, they have to use an SCC with:
volumes:
Discuss record: https://coreos.slack.com/archives/CB48XQ4KZ/p1655465586780419
Generic Ephemeral Volumes docs:
https://kubernetes.io/blog/2020/09/01/ephemeral-volumes-with-storage-capacity-tracking/#generic-ephemeral-volumes
Master Log:
Node Log (of failed PODs):
PV Dump:
PVC Dump:
StorageClass Dump (if StorageClass used by PV/PVC):
Description of problem:
For Hardware Backed Management Ports (e.g. Virtual functions), the Egress IP Health Check Feature will error out with: "unable to start health checking server: no mgmt ip"
Version-Release number of selected component (if applicable):
OVN-Kubernetes 4.12.0
How reproducible:
Always
Steps to Reproduce:
1. Load OVN-Kubernetes 4.12.0 on an MLX BlueField 2.
2. If in NIC mode, the following patches are needed:
   https://github.com/ovn-org/ovn-kubernetes/pull/3160
   https://github.com/ovn-org/ovn-kubernetes/pull/3251
3. If in DPU mode then those patches are optional.
4. Set the OVNKUBE_NODE_MGMT_PORT_NETDEV environment variable to point to the Virtual Function.
Actual results:
Error in ovnkube-node: "unable to start health checking server: no mgmt ip". The ovnkube-node container will crash. Egress IP Health Check should be compatible with VFs as management port.
Expected results:
No Error.
Additional info:
A simple workaround is to not return an error:

go-controller/pkg/node/node.go
@@ -660,7 +660,8 @@ func (n *OvnNode) startEgressIPHealthCheckingServer(wg *sync.WaitGroup, mgmtPort
             return fmt.Errorf("failed start health checking server due to unsettled IPv6: %w", err)
         }
     } else {
-        return fmt.Errorf("unable to start health checking server: no mgmt ip")
+        klog.Infof("Unable to start Egress IP health checking server: no mgmt ip")
+        return nil
     }
Description of problem:
OVN-Kubernetes master is crashing during upgrade from 4.11.5 to 4.11.6
Version-Release number of selected component (if applicable):
4.11.5 to 4.11.6
cannot clean up egress default deny ACL name: cannot update old NetworkPolicy ACLs for namespace ocm-myuser-1urk47c6ti1n94n1spdvo9902as3klar-sd6: error in transact with ops [{Op:update Table:ACL Row:map[action:drop direction:from-lport external_ids:{GoMap:map[default-deny-policy-type:Egress]} log:false match:inport == @a12995145443578534523_egressDefaultDeny meter:{GoSet:[acl-logging]} name:{GoSet:[ocm-myuser-1urk47c6ti1n94n1spdvo9902as3klar-sd6_egressDefaultDeny]} options:{GoMap:map[apply-after-lb:true]} priority:1000 severity:{GoSet:[info]}] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {5277db54-dd96-4c4d-bbed-99142cab91e7}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:}] results [{Count:0 Error:constraint violation Details:"ocm-myuser-1urk47c6ti1n94n1spdvo9902as3klar-sd6_egressDefaultDeny" length 65 is greater than maximum allowed length 63 UUID:{GoUUID:} Rows:[]}] and errors
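For illustration only (this is not the actual ovn-kubernetes fix): the constraint being violated is OVSDB's 63-character limit on ACL names, so "<namespace>_egressDefaultDeny"-style names built from long namespace names must be shortened (truncated or hashed) before being written to the database. A minimal sketch of the limit:

```go
// Hypothetical sketch of the 63-character ACL name constraint hit above.
package main

import "fmt"

const ovnACLNameMaxLen = 63

// egressDefaultDenyACLName builds the ACL name and truncates it to the OVSDB limit.
func egressDefaultDenyACLName(namespace string) string {
	name := namespace + "_egressDefaultDeny"
	if len(name) > ovnACLNameMaxLen {
		name = name[:ovnACLNameMaxLen]
	}
	return name
}

func main() {
	ns := "ocm-myuser-1urk47c6ti1n94n1spdvo9902as3klar-sd6"
	fmt.Println(egressDefaultDenyACLName(ns)) // truncated to at most 63 characters
}
```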
This is a clone of issue OCPBUGS-6018. The following is the description of the original issue:
—
This is a public clone of OCPBUGS-3821
The MCO can sometimes render a rendered-config in the middle of an upgrade with old MCs, e.g.:
This will cause the render controller to create a new rendered MC that uses the OLD kubeletconfig-MC, which at best causes a double reboot for 1 node, and at worst blocks the update and breaks maxUnavailable nodes per pool.
Description of problem:
The E2E test "Installs Red Hat Integration - 3scale operator" is failing due to a change of the Operator name.
This is a clone of issue OCPBUGS-4357. The following is the description of the original issue:
—
Description of problem:
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-4973. The following is the description of the original issue:
—
Description of problem:
Configuring OAuth with htpasswd in the hosted cluster doesn't work as expected.
Version-Release number of selected component (if applicable):
How reproducible:
enable OAuth htpasswd in hostedcluster
Steps to Reproduce:
1. create passwd file for user init by htpasswd
```
htpasswd -cbB .passwd helitest helitest
oc create secret generic testuser --from-file=htpasswd=.passwd -n clusters
```
2. edit hostedcluster.yaml
```
spec:
  configuration:
    oauth:
      identityProviders:
      - htpasswd:
          fileData:
            name: testuser
        mappingMethod: claim
        name: htpasswd
        type: HTPasswd
```
3. oc login hostedcluster apiserver
$ oc login https://ac0be21b169ff4399b6a2044388c38cf-5789e1b174d7424b.elb.us-east-2.amazonaws.com:6443 --username=testuser --password=testuser
The server uses a certificate signed by an unknown authority.
You can bypass the certificate check, but any data you send to the server could be intercepted by others.
Use insecure connections? (y/n): y
Login failed (401 Unauthorized)
Actual results:
oc login fails with the error: "Login failed (401 Unauthorized)"
Expected results:
oc login succeeds.
Additional info:
# check configmap of oauth
$ oc get cm -n clusters-demo-02 oauth-openshift -oyaml
...
oauthConfig:
  alwaysShowProviderSelection: false
  assetPublicURL: ""
  grantConfig:
    method: deny
    serviceAccountMethod: prompt
  identityProviders: []
  loginURL: https://ac0be21b169ff4399b6a2044388c38cf-5789e1b174d7424b.elb.us-east-2.amazonaws.com:6443
---> seems `identityProviders` is not synced correctly?
Description of problem:
pkg/devfile/sample_test.go fails after devfile registry was updated (https://github.com/devfile/registry/pull/126)
OCPBUGS-1677 is about updating our assertion so that the CI job runs successfully again. We might want to backport this as well.
This is about updating the code so that the test uses a mock response instead of the latest registry content, OR checks some specific attributes instead of comparing the full JSON response.
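A minimal sketch of the mock-response approach mentioned above, using Go's httptest package; the handler path and sample payload are assumptions, not the real devfile registry contents.

```go
// Hypothetical sketch: serve a fixed index instead of hitting the live devfile registry.
package devfile

import (
	"io"
	"net/http"
	"net/http/httptest"
	"testing"
)

func TestSampleRegistryMock(t *testing.T) {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		io.WriteString(w, `[{"name":"nodejs-basic","displayName":"Basic Node.js"}]`)
	}))
	defer srv.Close()

	resp, err := http.Get(srv.URL + "/index/sample")
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		t.Fatalf("expected 200, got %d", resp.StatusCode)
	}
	// In the real test, specific attributes would be asserted here instead of
	// comparing the full JSON response against the live registry.
}
```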
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Always
Steps to Reproduce:
1. Clone openshift/console
2. Run ./test-backend.sh
Actual results:
Unit tests fail
Expected results:
Unit tests should pass again
Additional info:
This is a clone of issue OCPBUGS-6053. The following is the description of the original issue:
—
Description of problem:
When a ClusterVersion's `status.availableUpdates` has a value of `null` and `Upgradeable=False`, a run time error occurs on the Cluster Settings page as the UpdatesGraph component expects `status.availableUpdates` to have a non-empty value.
Steps to Reproduce:
1. Add the following overrides to the ClusterVersion config (/k8s/cluster/config.openshift.io~v1~ClusterVersion/version):

spec:
  overrides:
  - group: apps
    kind: Deployment
    name: console-operator
    namespace: openshift-console-operator
    unmanaged: true
  - group: rbac.authorization.k8s.io
    kind: ClusterRole
    name: console-operator
    namespace: ''
    unmanaged: true

2. Visit /settings/cluster and note the run-time error (see attached screenshot)
Actual results:
An error occurs.
Expected results:
The contents of the Cluster Settings page render.
Allow users to turn on PodSecurity admission in enforcement mode in 4.12 as TechPreviewNoUpgrade, so that they can test the feature with their workloads and see if there is anything that needs fixing.
Description of problem:
The IPI installation in some regions fails at bootstrap, with no nodes available/ready.
Version-Release number of selected component (if applicable):
12-22 16:22:27.970 ./openshift-install 4.12.0-0.nightly-2022-12-21-202045
12-22 16:22:27.970 built from commit 3f9c38a5717c638f952df82349c45c7d6964fcd9
12-22 16:22:27.970 release image registry.ci.openshift.org/ocp/release@sha256:2d910488f25e2638b6d61cda2fb2ca5de06eee5882c0b77e6ed08aa7fe680270
12-22 16:22:27.971 release architecture amd64
How reproducible:
Always
Steps to Reproduce:
1. try the IPI installation in the problem regions (so far tried and failed with ap-southeast-2, ap-south-1, eu-west-1, ap-southeast-6, ap-southeast-3, ap-southeast-5, eu-central-1, cn-shanghai, cn-hangzhou and cn-beijing)
Actual results:
Bootstrap failed to complete
Expected results:
Installation in those regions should succeed.
Additional info:
FYI the QE flexy-install job: https://mastern-jenkins-csb-openshift-qe.apps.ocp-c1.prod.psi.redhat.com/job/ocp-common/job/Flexy-install/166672/ No any node available/ready, and no any operator available. $ oc get clusterversion NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version False True 30m Unable to apply 4.12.0-0.nightly-2022-12-21-202045: an unknown error has occurred: MultipleErrors $ oc get nodes No resources found $ oc get machines -n openshift-machine-api -o wide NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE jiwei-1222f-v729x-master-0 30m jiwei-1222f-v729x-master-1 30m jiwei-1222f-v729x-master-2 30m $ oc get co NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE authentication baremetal cloud-controller-manager cloud-credential cluster-autoscaler config-operator console control-plane-machine-set csi-snapshot-controller dns etcd image-registry ingress insights kube-apiserver kube-controller-manager kube-scheduler kube-storage-version-migrator machine-api machine-approver machine-config marketplace monitoring network node-tuning openshift-apiserver openshift-controller-manager openshift-samples operator-lifecycle-manager operator-lifecycle-manager-catalog operator-lifecycle-manager-packageserver service-ca storage $ Mater nodes don't run for example kubelet and crio services. [core@jiwei-1222f-v729x-master-0 ~]$ sudo crictl ps FATA[0000] unable to determine runtime API version: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/crio/crio.sock: connect: no such file or directory" [core@jiwei-1222f-v729x-master-0 ~]$ The machine-config-daemon firstboot tells "failed to update OS". [jiwei@jiwei log-bundle-20221222085846]$ grep -Ei 'error|failed' control-plane/10.0.187.123/journals/journal.log Dec 22 16:24:16 localhost kernel: GPT: Use GNU Parted to correct GPT errors. Dec 22 16:24:16 localhost kernel: GPT: Use GNU Parted to correct GPT errors. Dec 22 16:24:18 localhost ignition[867]: failed to fetch config: resource requires networking Dec 22 16:24:18 localhost ignition[891]: GET error: Get "http://100.100.100.200/latest/user-data": dial tcp 100.100.100.200:80: connect: network is unreachable Dec 22 16:24:18 localhost ignition[891]: GET error: Get "http://100.100.100.200/latest/user-data": dial tcp 100.100.100.200:80: connect: network is unreachable Dec 22 16:24:19 localhost.localdomain NetworkManager[919]: <info> [1671726259.0329] hostname: hostname: hostnamed not used as proxy creation failed with: Could not connect: No such file or directory Dec 22 16:24:19 localhost.localdomain NetworkManager[919]: <warn> [1671726259.0464] sleep-monitor-sd: failed to acquire D-Bus proxy: Could not connect: No such file or directory Dec 22 16:24:19 localhost.localdomain ignition[891]: GET error: Get "https://api-int.jiwei-1222f.alicloud-qe.devcluster.openshift.com:22623/config/master": dial tcp 10.0.187.120:22623: connect: connection refused ...repeated logs omitted... Dec 22 16:27:46 jiwei-1222f-v729x-master-0 ovs-ctl[1888]: 2022-12-22T16:27:46Z|00001|dns_resolve|WARN|Failed to read /etc/resolv.conf: No such file or directory Dec 22 16:27:46 jiwei-1222f-v729x-master-0 ovs-vswitchd[1888]: ovs|00001|dns_resolve|WARN|Failed to read /etc/resolv.conf: No such file or directory Dec 22 16:27:46 jiwei-1222f-v729x-master-0 dbus-daemon[1669]: [system] Activation via systemd failed for unit 'dbus-org.freedesktop.resolve1.service': Unit dbus-org.freedesktop.resolve1.service not found. 
Dec 22 16:27:46 jiwei-1222f-v729x-master-0 nm-dispatcher[1924]: Error: Device '' not found. Dec 22 16:27:46 jiwei-1222f-v729x-master-0 nm-dispatcher[1937]: Error: Device '' not found. Dec 22 16:27:46 jiwei-1222f-v729x-master-0 nm-dispatcher[2037]: Error: Device '' not found. Dec 22 08:35:32 jiwei-1222f-v729x-master-0 machine-config-daemon[2181]: Warning: failed, retrying in 1s ... (1/2)I1222 08:35:32.477770 2181 run.go:19] Running: nice -- ionice -c 3 oc image extract --path /:/run/mco-extensions/os-extensions-content-910221290 --registry-config /var/lib/kubelet/config.json quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:259d8c6b9ec714d53f0275db9f2962769f703d4d395afb9d902e22cfe96021b0 Dec 22 08:56:06 jiwei-1222f-v729x-master-0 rpm-ostree[2288]: Txn Rebase on /org/projectatomic/rpmostree1/rhcos failed: remote error: Get "https://quay.io/v2/openshift-release-dev/ocp-v4.0-art-dev/blobs/sha256:27f262e70d98996165748f4ab50248671d4a4f97eb67465cd46e1de2d6bd24d0": net/http: TLS handshake timeout Dec 22 08:56:06 jiwei-1222f-v729x-master-0 machine-config-daemon[2181]: W1222 08:56:06.785425 2181 firstboot_complete_machineconfig.go:46] error: failed to update OS to quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:411e6e3be017538859cfbd7b5cd57fc87e5fee58f15df19ed3ec11044ebca511 : error running rpm-ostree rebase --experimental ostree-unverified-registry:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:411e6e3be017538859cfbd7b5cd57fc87e5fee58f15df19ed3ec11044ebca511: Warning: The unit file, source configuration file or drop-ins of rpm-ostreed.service changed on disk. Run 'systemctl daemon-reload' to reload units. Dec 22 08:56:06 jiwei-1222f-v729x-master-0 machine-config-daemon[2181]: error: remote error: Get "https://quay.io/v2/openshift-release-dev/ocp-v4.0-art-dev/blobs/sha256:27f262e70d98996165748f4ab50248671d4a4f97eb67465cd46e1de2d6bd24d0": net/http: TLS handshake timeout Dec 22 08:57:31 jiwei-1222f-v729x-master-0 machine-config-daemon[2181]: Warning: failed, retrying in 1s ... (1/2)I1222 08:57:31.244684 2181 run.go:19] Running: nice -- ionice -c 3 oc image extract --path /:/run/mco-extensions/os-extensions-content-4021566291 --registry-config /var/lib/kubelet/config.json quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:259d8c6b9ec714d53f0275db9f2962769f703d4d395afb9d902e22cfe96021b0 Dec 22 08:59:20 jiwei-1222f-v729x-master-0 systemd[2353]: /usr/lib/systemd/user/podman-kube@.service:10: Failed to parse service restart specifier, ignoring: never Dec 22 08:59:21 jiwei-1222f-v729x-master-0 podman[2437]: Error: open default: no such file or directory Dec 22 08:59:21 jiwei-1222f-v729x-master-0 podman[2450]: Error: failed to start API service: accept unixgram @00026: accept4: operation not supported Dec 22 08:59:21 jiwei-1222f-v729x-master-0 systemd[2353]: podman-kube@default.service: Failed with result 'exit-code'. Dec 22 08:59:21 jiwei-1222f-v729x-master-0 systemd[2353]: Failed to start A template for running K8s workloads via podman-play-kube. Dec 22 08:59:21 jiwei-1222f-v729x-master-0 systemd[2353]: podman.service: Failed with result 'exit-code'. [jiwei@jiwei log-bundle-20221222085846]$
This is a clone of issue OCPBUGS-3085. The following is the description of the original issue:
—
Description of problem:
IPI on BareMetal Dual stack deployment failed and Bootstrap timed out before completion
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-10-25-210451
How reproducible:
Always
Steps to Reproduce:
1. Deploy IPI on BM using dual stack
Actual results:
Deployment failed
Expected results:
Should pass
Additional info:
Same deployment works fine on 4.11
This is a clone of issue OCPBUGS-10213. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-8468. The following is the description of the original issue:
—
Description of problem:
RHCOS is being published to new AWS regions (https://github.com/openshift/installer/pull/6861) but aws-sdk-go needs to be bumped to recognize those regions.
Version-Release number of selected component (if applicable):
master/4.14
How reproducible:
always
Steps to Reproduce:
1. openshift-install create install-config
2. Try to select ap-south-2 as a region
Actual results:
New regions are not found. New regions are: ap-south-2, ap-southeast-4, eu-central-2, eu-south-2, me-central-1.
Expected results:
Installer supports and displays the new regions in the Survey
Additional info:
See https://github.com/openshift/installer/blob/master/pkg/asset/installconfig/aws/regions.go#L13-L23
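For reference, a small sketch of how the regions known to the vendored aws-sdk-go can be enumerated; regions missing from the SDK's partition data will not appear here, which is why the bump is needed. This is illustrative, not the installer's actual survey code.

```go
// Enumerate the AWS regions known to the vendored aws-sdk-go partition data.
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws/endpoints"
)

func main() {
	// ap-south-2, ap-southeast-4, eu-central-2, eu-south-2, me-central-1 only
	// show up here once the SDK is new enough to know about them.
	for id, region := range endpoints.AwsPartition().Regions() {
		fmt.Println(id, region.Description())
	}
}
```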
This is a clone of issue OCPBUGS-3668. The following is the description of the original issue:
—
Description of problem:
Installer fails to install 4.12.0-rc.0 on VMware IPI with the script that worked with prior OCP versions. Error happens during Terraform prepare step when gathering information in the "Platform Provisioning Check". It looks like a permission issue, but we're using the VCenter administrator account. I double checked and that account has all the necessary permissions.
Version-Release number of selected component (if applicable):
OCP installer 4.12.0-rc.0, vSphere & vCenter 7.0.3 - no pending updates
How reproducible:
always - we observed this already in the nightlies, but wanted to wait for an RC to confirm
Steps to Reproduce:
1. Try to install using the openshift-install binary
Actual results:
Fails during the preparation step
Expected results:
Installs the cluster ;)
Additional info:
This runs in our CICD pipeline; let me know if you need access to the full run log: https://gitlab.consulting.redhat.com/cblum/storage-ocs-lab/-/jobs/219304 This includes the install-config.yaml, all component versions and the full debug log output.
Description of problem:
The SQL-based index image created by an old opm version fails to run in 4.12, even after adding the `privileged` pod-security labels to the namespace.
MacBook-Pro:~ jianzhang$ oc get pods
NAME                   READY   STATUS             RESTARTS     AGE
jian-operators-4g5ln   0/1     CrashLoopBackOff   1 (2s ago)   11s
MacBook-Pro:~ jianzhang$ oc logs jian-operators-4g5ln
Error: open /etc/nsswitch.conf: permission denied
PS: the SQL-based index created by the new opm version doesn't have this issue.
opm version
Version: version.Version{OpmVersion:"e41024eb3", GitCommit:"e41024eb37c721bc43e8b3df226dd30c0589aee7", BuildDate:"2022-08-16T01:50:17Z", GoOs:"darwin", GoArch:"amd64"}
Version-Release number of selected component (if applicable):
OCP 4.12
MacBook-Pro:~ jianzhang$ oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.12.0-0.nightly-2022-08-15-150248   True        False         3h25m   Cluster version is 4.12.0-0.nightly-2022-08-15-150248
How reproducible:
always
Steps to Reproduce:
1. Deploy OCP 4.12
2. Deploy a CatalogSource in the `openshift-marketplace` namespace.
MacBook-Pro:~ jianzhang$ oc get ns openshift-marketplace -o yaml apiVersion: v1 kind: Namespace metadata: annotations: capability.openshift.io/name: marketplace include.release.openshift.io/ibm-cloud-managed: "true" include.release.openshift.io/self-managed-high-availability: "true" include.release.openshift.io/single-node-developer: "true" openshift.io/node-selector: "" openshift.io/sa.scc.mcs: s0:c16,c10 openshift.io/sa.scc.supplemental-groups: 1000260000/10000 openshift.io/sa.scc.uid-range: 1000260000/10000 workload.openshift.io/allowed: management creationTimestamp: "2022-08-15T23:15:27Z" labels: kubernetes.io/metadata.name: openshift-marketplace olm.operatorgroup.uid/1b776321-2714-4c1f-95ba-2ddff49c4efe: "" openshift.io/cluster-monitoring: "true" pod-security.kubernetes.io/audit: baseline pod-security.kubernetes.io/enforce: baseline pod-security.kubernetes.io/warn: baseline name: openshift-marketplace ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: cd81594b-4f6c-46d6-9369-75deef542ec8 resourceVersion: "8617" uid: 1c35352e-3636-4f2b-a3b1-c84ebc6681e0 spec: finalizers: - kubernetes status: phase: Active
3. Check the CatalogSource pod status; it is crashing.
MacBook-Pro:~ jianzhang$ oc get catalogsource -n openshift-marketplace jian-operators -o yaml apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: creationTimestamp: "2022-08-16T02:24:20Z" generation: 1 name: jian-operators namespace: openshift-marketplace resourceVersion: "106145" uid: 6a75ecc9-7b88-4411-bcf5-e34618f9b3cd spec: displayName: Jian Operators image: quay.io/olmqe/etcd-index:v1 priority: -100 publisher: Jian sourceType: grpc updateStrategy: registryPoll: interval: 10m0s status: connectionState: address: jian-operators.openshift-marketplace.svc:50051 lastConnect: "2022-08-16T03:12:28Z" lastObservedState: TRANSIENT_FAILURE latestImageRegistryPoll: "2022-08-16T02:34:21Z" registryService: createdAt: "2022-08-16T02:24:20Z" port: "50051" protocol: grpc serviceName: jian-operators serviceNamespace: openshift-marketplace MacBook-Pro:~ jianzhang$ oc get pods -n openshift-marketplace NAME READY STATUS RESTARTS AGE 28bb83ea022e9728d25570ab0adbe09a31d6a0a606917488e0ddb00f925mnfw 0/1 Completed 0 3h23m 7049ea48beb27a712fa506b76ad672be201ce5d3a6a93d627a0091e0fesvdlj 0/1 Completed 0 3h23m certified-operators-ftt2n 1/1 Running 0 3h49m community-operators-27dx9 1/1 Running 0 3h49m jian-operators-5zq7d 0/1 CrashLoopBackOff 12 (71s ago) 38m jian-operators-gpg4v 0/1 CrashLoopBackOff 14 (57s ago) 48m marketplace-operator-9c8496b58-2jfmv 1/1 Running 0 3h56m qe-app-registry-rqrrv 1/1 Running 0 141m redhat-marketplace-s6zrj 1/1 Running 0 3h49m redhat-operators-54cqr 1/1 Running 0 3h49m MacBook-Pro:~ jianzhang$ oc -n openshift-marketplace logs jian-operators-gpg4v Error: open /etc/nsswitch.conf: permission denied Usage: opm registry serve [flags] Flags: -d, --database string relative path to sqlite db (default "bundles.db") --debug enable debug logging -h, --help help for serve -p, --port string port number to serve on (default "50051") --skip-migrate do not attempt to migrate to the latest db revision when starting -t, --termination-log string path to a container termination log file (default "/dev/termination-log") --timeout-seconds string Timeout in seconds. This flag will be removed later. (default "infinite") Global Flags: --skip-tls skip TLS certificate verification for container image registries while pulling bundles or index
4. Create a namespace with the `privileged` permission.
MacBook-Pro:~ jianzhang$ oc get ns debug -o yaml apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/sa.scc.mcs: s0:c30,c10 openshift.io/sa.scc.supplemental-groups: 1000890000/10000 openshift.io/sa.scc.uid-range: 1000890000/10000 creationTimestamp: "2022-08-16T02:46:41Z" labels: kubernetes.io/metadata.name: debug pod-security.kubernetes.io/audit: privileged pod-security.kubernetes.io/enforce: privileged pod-security.kubernetes.io/warn: privileged security.openshift.io/scc.podSecurityLabelSync: "false" name: debug resourceVersion: "95718" uid: bdf93839-6c42-4365-a65c-d9c0b9fe0504 spec: finalizers: - kubernetes status: phase: Active
5. Deploy a CatalogSource as in step 2 above. It still crashes.
MacBook-Pro:~ jianzhang$ oc get pods -n debug NAME READY STATUS RESTARTS AGE jian-operators-4g5ln 0/1 CrashLoopBackOff 10 (114s ago) 28m jian-operators-wn766 0/1 CrashLoopBackOff 8 (2m25s ago) 18m MacBook-Pro:~ jianzhang$ oc -n debug logs jian-operators-wn766 Error: open /etc/nsswitch.conf: permission denied Usage: opm registry serve [flags] Flags: -d, --database string relative path to sqlite db (default "bundles.db") --debug enable debug logging -h, --help help for serve -p, --port string port number to serve on (default "50051") --skip-migrate do not attempt to migrate to the latest db revision when starting -t, --termination-log string path to a container termination log file (default "/dev/termination-log") --timeout-seconds string Timeout in seconds. This flag will be removed later. (default "infinite") Global Flags: --skip-tls skip TLS certificate verification for container image registries while pulling bundles or index
Actual results:
The sql-based index image created by the old opm version cannot be run.
MacBook-Pro:~ jianzhang$ oc -n debug logs jian-operators-wn766 Error: open /etc/nsswitch.conf: permission denied
Expected results:
The old SQL-based index image runs well. Or we have a workaround for it.
Additional info:
I tried another old SQL-based index image and got a different permission issue.
MacBook-Pro:~ jianzhang$ oc get catalogsource NAME DISPLAY TYPE PUBLISHER AGE jian-operators Jian Operators grpc Jian 37m xia-operators Xia Operators grpc Xia 101s MacBook-Pro:~ jianzhang$ oc get catalogsource xia-operators -o yaml apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: creationTimestamp: "2022-08-16T03:22:38Z" generation: 1 name: xia-operators namespace: debug resourceVersion: "110629" uid: 8be42e68-43be-4fd4-9b67-c74edc5e6353 spec: displayName: Xia Operators image: quay.io/olmqe/ditto-index:test-xzha-1 priority: -100 publisher: Xia sourceType: grpc updateStrategy: registryPoll: interval: 10m0s status: connectionState: address: xia-operators.debug.svc:50051 lastConnect: "2022-08-16T03:24:18Z" lastObservedState: CONNECTING registryService: createdAt: "2022-08-16T03:22:38Z" port: "50051" protocol: grpc serviceName: xia-operators serviceNamespace: debug MacBook-Pro:~ jianzhang$ oc project Using project "debug" on server "https://api.qe-daily-412-0816.ibmcloud.qe.devcluster.openshift.com:6443". MacBook-Pro:~ jianzhang$ oc get pods NAME READY STATUS RESTARTS AGE jian-operators-4g5ln 0/1 CrashLoopBackOff 11 (3m41s ago) 35m jian-operators-wn766 0/1 CrashLoopBackOff 9 (4m13s ago) 25m xia-operators-6wgjt 0/1 CrashLoopBackOff 1 (8s ago) 13s MacBook-Pro:~ jianzhang$ oc logs xia-operators-6wgjt time="2022-08-16T03:22:43Z" level=warning msg="\x1b[1;33mDEPRECATION NOTICE:\nSqlite-based catalogs and their related subcommands are deprecated. Support for\nthem will be removed in a future release. Please migrate your catalog workflows\nto the new file-based catalog format.\x1b[0m" Error: open ./db-609956243: permission denied Usage: opm registry serve [flags] Flags: -d, --database string relative path to sqlite db (default "bundles.db") --debug enable debug logging
Even though that namespace is `privileged`:
MacBook-Pro:~ jianzhang$ oc get ns debug -o yaml apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/sa.scc.mcs: s0:c30,c10 openshift.io/sa.scc.supplemental-groups: 1000890000/10000 openshift.io/sa.scc.uid-range: 1000890000/10000 creationTimestamp: "2022-08-16T02:46:41Z" labels: kubernetes.io/metadata.name: debug pod-security.kubernetes.io/audit: privileged pod-security.kubernetes.io/enforce: privileged pod-security.kubernetes.io/warn: privileged security.openshift.io/scc.podSecurityLabelSync: "false" name: debug resourceVersion: "95718" uid: bdf93839-6c42-4365-a65c-d9c0b9fe0504 spec: finalizers: - kubernetes status: phase: Active
However, both of them work well on a 4.11 cluster, as follows:
MacBook-Pro:~ jianzhang$ oc get clusterversion NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.11.0-0.nightly-2022-08-15-152346 True False 91m Cluster version is 4.11.0-0.nightly-2022-08-15-152346 MacBook-Pro:~ jianzhang$ oc get catalogsource NAME DISPLAY TYPE PUBLISHER AGE certified-operators Certified Operators grpc Red Hat 106m community-operators Community Operators grpc Red Hat 106m jian-operators Jian Operators grpc Jian 48m redhat-marketplace Red Hat Marketplace grpc Red Hat 106m redhat-operators Red Hat Operators grpc Red Hat 106m xia-operators Xia Operators grpc Xia 6s MacBook-Pro:~ jianzhang$ oc get pods NAME READY STATUS RESTARTS AGE certified-operators-fsjc8 1/1 Running 0 107m community-operators-9qvzt 1/1 Running 0 107m jian-operators-n5s8c 1/1 Running 0 48m marketplace-operator-7b777f747-22rwq 1/1 Running 0 109m redhat-marketplace-2mgrl 1/1 Running 0 107m redhat-operators-72q6z 1/1 Running 0 107m xia-operators-ngq86 1/1 Running 0 23s MacBook-Pro:~ jianzhang$ oc get catalogsource jian-operators -o yaml apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: creationTimestamp: "2022-08-16T02:39:52Z" generation: 1 name: jian-operators namespace: openshift-marketplace resourceVersion: "58565" uid: 481a6fbe-00a5-4af5-86f7-d7413c658db3 spec: displayName: Jian Operators image: quay.io/olmqe/etcd-index:v1 priority: -100 publisher: Jian sourceType: grpc updateStrategy: registryPoll: interval: 10m0s status: connectionState: address: jian-operators.openshift-marketplace.svc:50051 lastConnect: "2022-08-16T02:44:45Z" lastObservedState: READY latestImageRegistryPoll: "2022-08-16T03:24:54Z" registryService: createdAt: "2022-08-16T02:39:52Z" port: "50051" protocol: grpc serviceName: jian-operators serviceNamespace: openshift-marketplace MacBook-Pro:~ jianzhang$ oc get catalogsource xia-operators -o yaml apiVersion: operators.coreos.com/v1alpha1 kind: CatalogSource metadata: creationTimestamp: "2022-08-16T03:28:07Z" generation: 1 name: xia-operators namespace: openshift-marketplace resourceVersion: "59886" uid: a270f665-ee0b-49a5-badb-d3394c7a9344 spec: displayName: Xia Operators image: quay.io/olmqe/ditto-index:test-xzha-1 priority: -100 publisher: Xia sourceType: grpc updateStrategy: registryPoll: interval: 10m0s status: connectionState: address: xia-operators.openshift-marketplace.svc:50051 lastConnect: "2022-08-16T03:28:27Z" lastObservedState: READY registryService: createdAt: "2022-08-16T03:28:07Z" port: "50051" protocol: grpc serviceName: xia-operators serviceNamespace: openshift-marketplace MacBook-Pro:~ jianzhang$ oc get ns openshift-marketplace -o yaml apiVersion: v1 kind: Namespace metadata: annotations: capability.openshift.io/name: marketplace include.release.openshift.io/ibm-cloud-managed: "true" include.release.openshift.io/self-managed-high-availability: "true" include.release.openshift.io/single-node-developer: "true" openshift.io/node-selector: "" openshift.io/sa.scc.mcs: s0:c16,c5 openshift.io/sa.scc.supplemental-groups: 1000250000/10000 openshift.io/sa.scc.uid-range: 1000250000/10000 workload.openshift.io/allowed: management creationTimestamp: "2022-08-16T01:38:10Z" labels: kubernetes.io/metadata.name: openshift-marketplace olm.operatorgroup.uid/24dae571-2843-445b-b09f-5a4631cb25ba: "" openshift.io/cluster-monitoring: "true" pod-security.kubernetes.io/audit: baseline pod-security.kubernetes.io/warn: baseline name: openshift-marketplace ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 
470d072e-37d9-4203-bc5a-c675800d593c resourceVersion: "6981" uid: 554a5ceb-8343-46f4-ae69-af36ee45d7fe spec: finalizers: - kubernetes status: phase: Active
Description of problem:
CI failure when installing a devfile, see also attached screenshot.
You can find the error by looking for Object.verifyTopologyPage
For example:
Description of problem:
Network policy code has some problems, most of them are races, therefore it can be difficult to reproduce and verify, here is the list:

1. all kinds of add/delete port to/from default deny port group failures, possible symptoms:
   - port should’ve been added to default deny port group, but wasn’t: connections that should’ve been dropped are allowed
   - port should’ve been deleted from default deny port group, but wasn’t: connections that should be allowed are dropped
   - db ops failures when an attempt to add/delete port to/from default deny port group fails, e.g. because this operation already was done
2. default deny port group was overwritten when 2 network policies are created in a namespace at the same time. Can lead to ports not being added to the default deny port group => denied connections will be allowed
3. handle error when getting local pod from the cache fails, possible symptoms:
   - "Failed to get LSP after multiple retries for pod %s/%s for networkPolicy" log message
   - pod is not added to netpol port groups, network policy is not applied
4. creating deleted namespace via ensureNamespaceLocked, symptoms:
   - namespace was deleted, but address set is present in the db
5. policy acl loglevel update wasn’t applied, possible symptoms:
   - netpol acl log level isn’t set/updated to namespace loglevel
6. netpol cleanup failures, symptoms:
   - network policy failed to be deleted, something is still left in the db, error messages like
   - "failed to destroy network policy"
   - "Rollback of default port groups and acls for policy: %s/%s failed, Unable to ensure namespace for network policy"
7. concurrent write to sets.String - this will panic, you won’t miss
8. retry for network policy handler after network policy was deleted, you should see failures saying that some network policy related object is nil or doesn’t exist, e.g.
   - "peer AddressSet is nil, cannot add <object>"
9. host network and completed pods selected by network policy can produce error logs, no real harm
   - "Failed to get LSP for pod <namespace>/<name> for networkPolicy %s refetching err"
10. namespace pod handlers are never stopped, can affect memory usage and look like a memory leak
11. add local pod failure, since netpol port group is not committed to db yet, error looks like
   - "Failed to create *factory.localPodSelector <name>, error: object not found"
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
Example 1
1. Create network policy with [in/e]gress selector that applies to a namespace labeled project: myproject

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: test
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: myproject

2. Use oc apply to delete the network policy and create a pod in the project: myproject namespace at the same time
3. check ovnkube-master logs for "peer AddressSet is nil, cannot add peer pod(s)", this should retry with the same error 15 times
4. This may not work from the first try, since we need to hit a specific order of network policy delete and pod add handling
5. With the new version no error messages should be present

Example 2
1. create network policy that applies to a namespace test

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: test
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:

2. Create host network pod in namespace test
3. Check 15 logs saying "Failed to get LSP for pod %s/%s for networkPolicy %s refetching err: "
4. check final log "Failed to get LSP after multiple retries for pod %s/%s for networkPolicy"
5. With the new version no error message should be present

All the other cases are difficult to reproduce, maybe just running some standard network policy tests and making sure everything works will be a good verification.
Actual results:
Expected results:
Additional info:
Description of problem:
openshift-install does not detect releaseImage mismatches between cluster-image-set.yaml and registries.conf
Version-Release number of selected component (if applicable):
4.12
How reproducible:
100%
Steps to Reproduce:
1. Create ZTP inputs for image generation where registries.conf does not have any source matching the binary releaseImage (the binary image can be obtained by running "openshift-install version"). You can also set this value in the ZTP manifest cluster-image-set.yaml.
2. Run openshift-install agent create image
Actual results:
Image is generated with no warnings
Expected results:
Image is generated with warning message - "The ImageContentSources configuration in install-config.yaml should have at-least one source field matching the releaseImage value %s", releaseImagePath
Additional info:
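A minimal sketch of the kind of warning described above, with assumed type and function names (not the installer's actual API): return a warning when no configured source or mirror is a prefix of the release image reference.

```go
// Hypothetical sketch of the releaseImage / ImageContentSources consistency check.
package validation

import (
	"fmt"
	"strings"
)

type ImageContentSource struct {
	Source  string
	Mirrors []string
}

// WarnOnReleaseImageMismatch returns a warning when none of the configured
// sources or mirrors matches the release image reference; an empty string
// means the configuration looks consistent.
func WarnOnReleaseImageMismatch(releaseImage string, sources []ImageContentSource) string {
	for _, ics := range sources {
		if strings.HasPrefix(releaseImage, ics.Source) {
			return ""
		}
		for _, m := range ics.Mirrors {
			if strings.HasPrefix(releaseImage, m) {
				return ""
			}
		}
	}
	return fmt.Sprintf("The ImageContentSources configuration in install-config.yaml should have at-least one source field matching the releaseImage value %s", releaseImage)
}
```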
This is a clone of issue OCPBUGS-3096. The following is the description of the original issue:
—
While the installer binary is statically linked, the terraform binaries shipped with it are dynamically linked.
This can cause issues when running the installer on Linux, depending on the GLIBC version the specific Linux distribution has installed. It becomes a risk when switching the base image of the builders from ubi8 to ubi9 and trying to run the installer on cs8 or rhel8.
For example, building the installer on cs9 and trying to run it in a cs8 distribution leads to:
time="2022-10-31T14:31:47+01:00" level=debug msg="[INFO] running Terraform command: /root/test/terraform/bin/terraform version -json" time="2022-10-31T14:31:47+01:00" level=error msg="/root/test/terraform/bin/terraform: /lib64/libc.so.6: version `GLIBC_2.32' not found (required by /root/test/terraform/bin/terraform)" time="2022-10-31T14:31:47+01:00" level=error msg="/root/test/terraform/bin/terraform: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by /root/test/terraform/bin/terraform)" time="2022-10-31T14:31:47+01:00" level=debug msg="[INFO] running Terraform command: /root/test/terraform/bin/terraform version -json" time="2022-10-31T14:31:47+01:00" level=error msg="/root/test/terraform/bin/terraform: /lib64/libc.so.6: version `GLIBC_2.32' not found (required by /root/test/terraform/bin/terraform)" time="2022-10-31T14:31:47+01:00" level=error msg="/root/test/terraform/bin/terraform: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by /root/test/terraform/bin/terraform)" time="2022-10-31T14:31:47+01:00" level=debug msg="[INFO] running Terraform command: /root/test/terraform/bin/terraform init -no-color -force-copy -input=false -backend=true -get=true -upgrade=false -plugin-dir=/root/test/terraform/plugins" time="2022-10-31T14:31:47+01:00" level=error msg="/root/test/terraform/bin/terraform: /lib64/libc.so.6: version `GLIBC_2.32' not found (required by /root/test/terraform/bin/terraform)" time="2022-10-31T14:31:47+01:00" level=error msg="/root/test/terraform/bin/terraform: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by /root/test/terraform/bin/terraform)" time="2022-10-31T14:31:47+01:00" level=error msg="failed to fetch Cluster: failed to generate asset \"Cluster\": failure applying terraform for \"cluster\" stage: failed to create cluster: failed doing terraform init: exit status 1\n/root/test/terraform/bin/terraform: /lib64/libc.so.6: version `GLIBC_2.32' not found (required by /root/test/terraform/bin/terraform)\n/root/test/terraform/bin/terraform: /lib64/libc.so.6: version `GLIBC_2.34' not found (required by /root/test/terraform/bin/terraform)\n"
How reproducible:
Always
Steps to Reproduce:
1. Build the installer on cs9
2. Run the installer on cs8 until the terraform binaries are started
3. Inspect the terraform binary with ldd or file; you can see it is not a statically linked binary, and the error above may occur depending on the glibc version you are running on
Actual results:
Expected results:
The terraform and provider binaries have to be statically linked, as the installer is.
Additional info:
This comes from a build of OKD/SCOS that is happening outside of Prow on a cs9-based builder image. One can use the Dockerfile at images/installer/Dockerfile.ci and replace the builder image with one like https://github.com/okd-project/images/blob/main/okd-builder.Dockerfile
In the Known Issues section of the OpenStack-specific Installer docs, there is a point about control plane anti-affinity.
The known issue has several problems:
Description of problem:
After scaling up more worker nodes, they are not added to the Load Balancer instances (backend pool); if the router pods move to the new worker nodes, co/ingress becomes degraded.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-10-23-204408
How reproducible:
100%
Steps to Reproduce:
1. ensure the fresh install cluster works well. 2. scale up worker nodes. $ oc -n openshift-machine-api get machineset NAME DESIRED CURRENT READY AVAILABLE AGE hongli-1024-hnkrm-worker-us-east-2a 1 1 1 1 5h21m hongli-1024-hnkrm-worker-us-east-2b 1 1 1 1 5h21m hongli-1024-hnkrm-worker-us-east-2c 1 1 1 1 5h21m $ oc -n openshift-machine-api scale machineset hongli-1024-hnkrm-worker-us-east-2a --replicas=2 machineset.machine.openshift.io/hongli-1024-hnkrm-worker-us-east-2a scaled $ oc -n openshift-machine-api scale machineset hongli-1024-hnkrm-worker-us-east-2b --replicas=2 machineset.machine.openshift.io/hongli-1024-hnkrm-worker-us-east-2b scaled (about 5 minutes later) $ oc -n openshift-machine-api get machineset NAME DESIRED CURRENT READY AVAILABLE AGE hongli-1024-hnkrm-worker-us-east-2a 2 2 2 2 5h29m hongli-1024-hnkrm-worker-us-east-2b 2 2 2 2 5h29m hongli-1024-hnkrm-worker-us-east-2c 1 1 1 1 5h29m 3. delete router pods and to make new ones running on new workers $ oc get node NAME STATUS ROLES AGE VERSION ip-10-0-128-45.us-east-2.compute.internal Ready worker 71m v1.25.2+4bd0702 ip-10-0-131-192.us-east-2.compute.internal Ready control-plane,master 6h35m v1.25.2+4bd0702 ip-10-0-139-51.us-east-2.compute.internal Ready worker 6h29m v1.25.2+4bd0702 ip-10-0-162-228.us-east-2.compute.internal Ready worker 71m v1.25.2+4bd0702 ip-10-0-172-216.us-east-2.compute.internal Ready control-plane,master 6h35m v1.25.2+4bd0702 ip-10-0-190-82.us-east-2.compute.internal Ready worker 6h25m v1.25.2+4bd0702 ip-10-0-196-26.us-east-2.compute.internal Ready control-plane,master 6h35m v1.25.2+4bd0702 ip-10-0-199-158.us-east-2.compute.internal Ready worker 6h28m v1.25.2+4bd0702 $ oc -n openshift-ingress get pod -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES router-default-86444dcd84-cm96l 1/1 Running 0 65m 10.130.2.7 ip-10-0-128-45.us-east-2.compute.internal <none> <none> router-default-86444dcd84-vpnjz 1/1 Running 0 65m 10.131.2.7 ip-10-0-162-228.us-east-2.compute.internal <none> <none>
Actual results:
$ oc get co ingress console authentication NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE ingress 4.12.0-0.nightly-2022-10-23-204408 True False True 66m The "default" ingress controller reports Degraded=True: DegradedConditions: One or more other status conditions indicate a degraded state: CanaryChecksSucceeding=False (CanaryChecksRepetitiveFailures: Canary route checks for the default ingress controller are failing) console 4.12.0-0.nightly-2022-10-23-204408 False False False 66m RouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.hongli-1024.qe.devcluster.openshift.com): Get "https://console-openshift-console.apps.hongli-1024.qe.devcluster.openshift.com": EOF authentication 4.12.0-0.nightly-2022-10-23-204408 False False True 66m OAuthServerRouteEndpointAccessibleControllerAvailable: Get "https://oauth-openshift.apps.hongli-1024.qe.devcluster.openshift.com/healthz": EOF checked the Load Balancer on AWS console and found that new created nodes are not added to load balancer. see the snapshot attached.
Expected results:
The LB should add newly created instances automatically, and ingress should work with the new workers.
Additional info:
1. This is also reproducible with a regular user-created LoadBalancer service.
2. If the LB service is created after adding the new nodes, then it works well; we can see that all nodes are added to the LB in the AWS console.
This is a clone of issue OCPBUGS-3123. The following is the description of the original issue:
—
Description of problem:
Support for tech preview API extensions was introduced in https://github.com/openshift/installer/pull/6336 and https://github.com/openshift/api/pull/1274 . In the case of https://github.com/openshift/api/pull/1278 , config/v1/0000_10_config-operator_01_infrastructure-TechPreviewNoUpgrade.crd.yaml was introduced, which seems to result in both 0000_10_config-operator_01_infrastructure-TechPreviewNoUpgrade.crd.yaml and 0000_10_config-operator_01_infrastructure-Default.crd.yaml being rendered by the bootstrap. As a result, both CRDs are created during bootstrap. However, one of them (in this case the tech preview CRD) fails to be created. We may need to modify the render command to be aware of feature gates when rendering manifests during bootstrap. I'm also open to hearing other views on how this might work.
Version-Release number of selected component (if applicable):
https://github.com/openshift/cluster-config-operator/pull/269 built and running on 4.12-ec5
How reproducible:
consistently
Steps to Reproduce:
1. bump the version of OpenShift API to one including a tech preview version of the infrastructure CRD
2. install openshift with the infrastructure manifest modified to incorporate tech preview fields
3. those fields will not be populated upon installation

Also, checking the logs from bootkube will show both being installed, but one of them fails.
Actual results:
Expected results:
Additional info:
Excerpts from bootkube log:

Nov 02 20:40:01 localhost.localdomain bootkube.sh[4216]: Writing asset: /assets/config-bootstrap/manifests/0000_10_config-operator_01_infrastructure-TechPreviewNoUpgrade.crd.yaml
Nov 02 20:40:01 localhost.localdomain bootkube.sh[4216]: Writing asset: /assets/config-bootstrap/manifests/0000_10_config-operator_01_infrastructure-Default.crd.yaml
Nov 02 20:41:23 localhost.localdomain bootkube.sh[5710]: Created "0000_10_config-operator_01_infrastructure-Default.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/infrastructures.config.openshift.io -n
Nov 02 20:41:23 localhost.localdomain bootkube.sh[5710]: Skipped "0000_10_config-operator_01_infrastructure-TechPreviewNoUpgrade.crd.yaml" customresourcedefinitions.v1.apiextensions.k8s.io/infrastructures.config.openshift.io -n as it already exists
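An illustrative sketch (assumed names, not the cluster-config-operator's actual render code) of the feature-gate-aware filtering suggested above, keying off the -Default / -TechPreviewNoUpgrade suffix in the manifest filename so that only one variant of the CRD gets written:

```go
// Hypothetical filter: keep a bootstrap manifest only if it matches the cluster FeatureSet.
package render

import "strings"

func keepManifest(filename, featureSet string) bool {
	switch {
	case strings.Contains(filename, "-TechPreviewNoUpgrade"):
		return featureSet == "TechPreviewNoUpgrade"
	case strings.Contains(filename, "-Default"):
		return featureSet == "" || featureSet == "Default"
	default:
		// Manifests without a feature-set suffix are always rendered.
		return true
	}
}
```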
This is a clone of issue OCPBUGS-4049. The following is the description of the original issue:
—
Description of problem:
In the case of CRC we provision the cluster first, then create the disk image out of it, and that is what we share with our users. Until now we have always removed the pull secret from the cluster after provisioning it, using https://github.com/crc-org/snc/blob/master/snc.sh#L241-L258, and it worked without any issue up to 4.11.x, but with 4.12.0-rc.1 we are seeing that the MCO is not able to reconcile.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. Create a single node cluster using cluster bot: `launch 4.12.0-rc.1 aws,single-node`
2. Once the cluster is provisioned, update the pull secret from the config
```
$ cat pull-secret.yaml
apiVersion: v1
data:
  .dockerconfigjson: e30K
kind: Secret
metadata:
  name: pull-secret
  namespace: openshift-config
type: kubernetes.io/dockerconfigjson
$ oc replace -f pull-secret.yaml
```
3. Wait for the MCO to reconcile and you will see the MCO failing to reconcile
Actual results:
$ oc get mcp NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-66086aa249a9f92b773403f7c3745ea4 False True True 1 0 0 1 94m worker rendered-worker-0c07becff7d3c982e24257080cc2981b True False False 0 0 0 0 94m $ oc get co machine-config NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE machine-config 4.12.0-rc.1 True False True 93m Failed to resync 4.12.0-rc.1 because: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, error pool master is not ready, retrying. Status: (pool degraded: true total: 1, ready 0, updated: 0, unavailable: 0)] $ oc logs machine-config-daemon-nf9mg -n openshift-machine-config-operator [...] I1123 15:00:37.864581 10194 run.go:19] Running: podman pull -q --authfile /var/lib/kubelet/config.json quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ffa3568233298408421ff7da60e5c594fb63b2551c6ab53843eb51c8cf6838ba Error: initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ffa3568233298408421ff7da60e5c594fb63b2551c6ab53843eb51c8cf6838ba: (Mirrors also failed: [quayio-pull-through-cache-us-west-2-ci.apps.ci.l2s4.p1.openshiftapps.com/openshift-release-dev/ocp-v4.0-art-dev@sha256:ffa3568233298408421ff7da60e5c594fb63b2551c6ab53843eb51c8cf6838ba: reading manifest sha256:ffa3568233298408421ff7da60e5c594fb63b2551c6ab53843eb51c8cf6838ba in quayio-pull-through-cache-us-west-2-ci.apps.ci.l2s4.p1.openshiftapps.com/openshift-release-dev/ocp-v4.0-art-dev: unauthorized: authentication required]): quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ffa3568233298408421ff7da60e5c594fb63b2551c6ab53843eb51c8cf6838ba: reading manifest sha256:ffa3568233298408421ff7da60e5c594fb63b2551c6ab53843eb51c8cf6838ba in quay.io/openshift-release-dev/ocp-v4.0-art-dev: unauthorized: access to the requested resource is not authorized W1123 15:00:39.186103 10194 run.go:45] podman failed: running podman pull -q --authfile /var/lib/kubelet/config.json quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ffa3568233298408421ff7da60e5c594fb63b2551c6ab53843eb51c8cf6838ba failed: Error: initializing source docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ffa3568233298408421ff7da60e5c594fb63b2551c6ab53843eb51c8cf6838ba: (Mirrors also failed: [quayio-pull-through-cache-us-west-2-ci.apps.ci.l2s4.p1.openshiftapps.com/openshift-release-dev/ocp-v4.0-art-dev@sha256:ffa3568233298408421ff7da60e5c594fb63b2551c6ab53843eb51c8cf6838ba: reading manifest sha256:ffa3568233298408421ff7da60e5c594fb63b2551c6ab53843eb51c8cf6838ba in quayio-pull-through-cache-us-west-2-ci.apps.ci.l2s4.p1.openshiftapps.com/openshift-release-dev/ocp-v4.0-art-dev: unauthorized: authentication required]): quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ffa3568233298408421ff7da60e5c594fb63b2551c6ab53843eb51c8cf6838ba: reading manifest sha256:ffa3568233298408421ff7da60e5c594fb63b2551c6ab53843eb51c8cf6838ba in quay.io/openshift-release-dev/ocp-v4.0-art-dev: unauthorized: access to the requested resource is not authorized : exit status 125; retrying...
Expected results:
Additional info:
Description of problem:
When the alert for the vSphere privilege check reported by vsphere-problem-detector is raised, we only get very simple info, as below:
=======================================
Description
The vsphere-problem-detector monitors the health and configuration of OpenShift on VSphere. If problems are found which may prevent machine scaling, storage provisioning, and safe upgrades, the vsphere-problem-detector will raise alerts.
Summary
VSphere cluster health checks are failing
Message
VSphere cluster health checks are failing with CheckAccountPermissions
=======================================
(We can get the namespace/pod info from the metric, but adding it to the alert Description or Message would be clearer.)
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-09-12-152748
How reproducible:
Always
Steps to Reproduce:
See description
Actual results:
Alert info is not so clear
Expected results:
Add more Alert info
Description of problem:
When deleting a BYOH node in Platform:none, as well as in an Azure IPI cluster the node gets reconciled correctly, however when added back to the cluster it stays in Ready,SchedulingDisabled. When checking the WMCO logs, we can observe the following log: {"level":"error","ts":"2022-12-14T16:14:31Z","msg":"Reconciler error","controller":"configmap","controllerGroup":"","controllerKind":"ConfigMap","configMap":{"name":"windows-instances","namespace":"openshift-windows-machine-config-operator"},"namespace":"openshift-windows-machine-config-operator","name":"windows-instances","reconcileID":"d66a3142-d52c-43f5-8a42-214ce9c88417","error":"error configuring host with address 10.0.55.21: configuring node network failed: error waiting for k8s.ovn.org/hybrid-overlay-node-subnet node annotation for byoh-2019: timeout waiting for k8s.ovn.org/hybrid-overlay-node-subnet node annotation: timed out waiting for the condition" And when checking the node's annotation, it is indeed missing: $ oc get nodes byoh-2019 -o=jsonpath="{.metadata.annotations}" {"volumes.kubernetes.io/controller-managed-attach-detach":"true","windowsmachineconfig.openshift.io/desired-version":"7.0.0-16f486a","windowsmachineconfig.openshift.io/pub-key-hash":"1df2c166b1c401180523270e9cf6bc2cd2724b9279ea65668a3b95298525a0f5","windowsmachineconfig.openshift.io/username":"wx4EBwMICL6qT+4RY8tgbx4hiRmQdHlwUsHgVGCTVY7S5gG/G5gb/Wzv0JBLhNP9\u003cwmcoMarker\u003ejlmI5ExHPYFrd2Fw6Lxe/6PKEE5/vYAhZ2n1Z2nBIoa1xN1/HEaXhqR2CuXNe7Ez\u003cwmcoMarker\u003eg2Hg+gA=\u003cwmcoMarker\u003e=ubWA"} Tested in Azure IPI and Platform:None, in both cases the issue got reproduced.
Version-Release number of selected component (if applicable):
$ oc get cm -n openshift-windows-machine-config-operator
NAME                                   DATA   AGE
kube-root-ca.crt                       1      10h
openshift-service-ca.crt               1      10h
windows-instances                      2      9h
windows-machine-config-operator-lock   0      6h24m
windows-services-7.0.0-16f486a         2      6h23m

$ oc get clusterversion
NAME      VERSION       AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.12.0-rc.4   True        False         6h48m   Cluster version is 4.12.0-rc.4
How reproducible:
Steps to Reproduce:
1. Deploy an OCP 4.11 cluster with WMCO 6.0.0
2. Add one or two byoh nodes to the cluster
3. Upgrade the cluster to OCP 4.12, and later WMCO to 7.0.0
4. Remove one of the byoh nodes using: oc delete node <byoh-node-id>
5. Wait for reconciliation to bring the node back
Actual results:
The deleted node gets re-added but stays in Ready,SchedulingDisabled, and the workloads are left in Pending state.
Expected results:
The node gets properly added to the cluster and stays in Ready.
Additional info:
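A minimal sketch, with assumed helper names, of the kind of wait that times out in the log above: polling the Node until the k8s.ovn.org/hybrid-overlay-node-subnet annotation appears. This is illustrative, not WMCO's actual code.

```go
// Hypothetical sketch: wait for the hybrid-overlay subnet annotation on a node.
package wmco

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

const hybridOverlaySubnetAnnotation = "k8s.ovn.org/hybrid-overlay-node-subnet"

func waitForHybridOverlayAnnotation(cs kubernetes.Interface, nodeName string, timeout time.Duration) error {
	return wait.PollImmediate(5*time.Second, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
		if err != nil {
			// Keep retrying on transient errors until the timeout expires.
			return false, nil
		}
		_, ok := node.Annotations[hybridOverlaySubnetAnnotation]
		return ok, nil
	})
}
```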
GitHub rate-limit failures in the UPI image when downloading govc.
This is a clone of issue OCPBUGS-5891. The following is the description of the original issue:
—
Description of problem:
When used in heads-only mode, oc-mirror does not record the operator bundles' minimum version if a target name is set. The recorded values ensure that bundles that still exist in the catalog are included as part of the generated catalog and that the associated images are not pruned. As a result, bundles get pruned when no minimum version is set in the imageset configuration, even though the bundles still exist in the source catalog.
Version-Release number of selected component (if applicable):
Client Version: version.Info{Major:"", Minor:"", GitVersion:"4.13.0-202212011938.p0.g8bf1402.assembly.stream-8bf1402", GitCommit:"8bf14023aa018e12425e29993e6f53f0ab07e6ab", GitTreeState:"clean", BuildDate:"2022-12-01T19:56:31Z", GoVersion:"go1.18.4", Compiler:"gc", Platform:"linux/amd64"}
How reproducible:
100%
Steps to Reproduce:
Using the advanced-cluster-management package as an example:
1. Find the latest bundle for acm in the release-2.6 channel with:
   oc-mirror list operators --catalog registry.redhat.io/redhat/redhat-operator-index:v4.10-1663021232 --package advanced-cluster-management
2. Create an image set configuration to mirror an operator from an older catalog version:
   apiVersion: mirror.openshift.io/v1alpha2
   kind: ImageSetConfiguration
   storageConfig:
     local:
       path: test
   mirror:
     operators:
     - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.10-1663021232
       targetName: test
       targetTag: test
       packages:
       - name: advanced-cluster-management
         channels:
         - name: release-2.6
3. Run oc-mirror --config config-with-operators.yaml file://
4. Check the bundle minimum version in the metadata using oc-mirror describe mirror_seq1_000000.tar; under the operators field, advanced-cluster-management should show the version found in step 1.
5. Create another ImageSetConfiguration for a later version of the catalog:
   apiVersion: mirror.openshift.io/v1alpha2
   kind: ImageSetConfiguration
   storageConfig:
     local:
       path: test
   mirror:
     operators:
     - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.10
       targetName: test
       targetTag: test
       packages:
       - name: advanced-cluster-management
         channels:
         - name: release-2.6
6. Run oc-mirror again with the new configuration, then check the bundle minimum version in the metadata using oc-mirror describe mirror_seq2_000000.tar, under the operators field.
Actual results:
The catalog entry in the metadata shows packages as null.
Expected results:
It should have the advanced-cluster-management package with the original minimum version, or an updated minimum version if the original bundle was pruned.
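A possible mitigation sketch, assuming the imageset configuration's channel entries accept an explicit minVersion (the version value below is illustrative): pinning the minimum bundle version explicitly removes the dependence on the heads-only recorded value when a target name is set.

apiVersion: mirror.openshift.io/v1alpha2
kind: ImageSetConfiguration
storageConfig:
  local:
    path: test
mirror:
  operators:
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.10
    targetName: test
    targetTag: test
    packages:
    - name: advanced-cluster-management
      channels:
      - name: release-2.6
        # Illustrative value: pin to the bundle version found in step 1 above
        minVersion: 2.6.1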
Currently, we have this validation https://github.com/openshift/installer/blob/master/pkg/asset/agent/installconfig_test.go#L103 which checks that, if the platform is none, the number of control planes is 1 and the number of workers is zero.
We need another validation for the reverse direction: if the number of control planes is 1 and the number of workers is zero, then in install-config.yaml the platform can only be set to none, and in agent-cluster-install.yaml the platformType can only be set to none. If we try to do SNO (i.e. one control plane and zero workers) with e.g. platform: baremetal, assisted will reject it, so we should catch it as early as possible (see the manifest sketch below).
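As an illustration of what the new validation should accept (and anything else reject), a sketch of the two manifests for SNO with platform none; the cluster name and API versions are assumptions based on the agent-based installer conventions, not taken from this issue:

install-config.yaml (relevant fields only):
apiVersion: v1
metadata:
  name: sno-cluster          # placeholder name
controlPlane:
  name: master
  replicas: 1                # exactly one control plane node
compute:
- name: worker
  replicas: 0                # zero workers
platform:
  none: {}                   # must be none for this topology

agent-cluster-install.yaml (relevant fields only):
apiVersion: extensions.hive.openshift.io/v1beta1
kind: AgentClusterInstall
metadata:
  name: sno-cluster          # placeholder name
spec:
  platformType: None         # must be None for this topology
  provisionRequirements:
    controlPlaneAgents: 1
    workerAgents: 0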
Description of problem:
Using OLM descriptor components deletes the operand.
Steps to Reproduce:
Description of problem:
A project viewer is able to see a 'Create Pod Disruption Budget' button on the Pods list page, but the creation will ultimately fail due to insufficient permissions. The console should not show a 'Create Pod Disruption Budget' button for a project viewer; other resource list pages don't have this issue.
Version-Release number of selected component (if applicable):
4.10.0-0.nightly-2021-09-16-212009
How reproducible:
Always
Steps to Reproduce:
1. normal user has a project and workloads
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/example 3/3 3 3 79s
NAME DESIRED CURRENT READY AGE
replicaset.apps/example-787f749bb 3 3 3 79s
2. grant another user view access to the user project 'yapei1-project'
Actual results:
3. Project viewer 'uiauto1' can see the Pods list successfully; at the same time the console also shows a 'Create Pod Disruption Budget' button, while the creation will ultimately fail if the project viewer tries to use it.
Expected results:
3. The console should not show the 'Create Pod Disruption Budget' button for a project viewer.
Additional info:
For comparison: we don't show resource creation buttons ('Create xxx') on other workload list pages for a project viewer, such as the Deployments and DeploymentConfigs lists. The permission sketch below shows the access a project viewer lacks.
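For context, the button should only appear for users who can create poddisruptionbudgets in the policy API group within the project. A minimal sketch of a Role granting that permission (the Role name is a placeholder), which a project viewer does not have:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pdb-creator            # placeholder name
  namespace: yapei1-project
rules:
- apiGroups: ["policy"]
  resources: ["poddisruptionbudgets"]
  verbs: ["create"]

The console could gate the button on an access review for this verb/resource, the same way it already hides the 'Create' buttons on the other workload list pages.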
Grafana has been removed in 4.11 and we can safely remove any logic in CMO that deals with Grafana (except dashboards, since they are used by the OCP console).
Another point to clarify with ProdSec and ART is that Grafana isn't part of OCP anymore.
And possibly other alerts. Declaring namespace labels on alerts makes it easy to find the source or affected resource, as described here. But because Insights alerts are based on metrics exported by the cluster-version operator, they inherit source information from the CVO, and end up looking like:
ALERTS{alertname="SimpleContentAccessNotAvailable", alertstate="firing", condition="SCAAvailable", endpoint="metrics", instance="10.58.57.116:9099", job="cluster-version-operator", name="insights", namespace="openshift-cluster-version", pod="cluster-version-operator-5d8579fb58-p5hfn", prometheus="openshift-monitoring/k8s", reason="NotFound", receive="true", service="cluster-version-operator", severity="info"}
Adding namespace: openshift-insights to the labels block for InsightsDisabled and SimpleContentAccessNotAvailable would avoid this confusion.
You might also want to clear the job and service labels as irrelevant source information. And you might want to clear the pod label to avoid churning alerts when the CVO rolls out a new pod. You can get the label clearing by wrapping the expr with max without (job, pod, service) (...) or similar.
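A hedged sketch of what an adjusted rule could look like; the expr below is illustrative and not the exact CVO-shipped expression, it only shows the namespace label and the max without (...) wrapping suggested above:

- alert: SimpleContentAccessNotAvailable
  # Illustrative expression: wrapping the CVO-exported metric in max without (...)
  # drops the churn-prone source labels (job, pod, service).
  expr: max without (job, pod, service) (cluster_operator_conditions{name="insights",condition="SCAAvailable",reason="NotFound"} == 0)
  for: 5m
  labels:
    namespace: openshift-insights
    severity: info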
This is a clone of issue OCPBUGS-5988. The following is the description of the original issue:
—
Description of problem:
The etcd operator is in a degraded state because one of the masters can't connect. The master that fails to connect was previously the bootstrap node and was pivoted to a master as part of the assisted-installer installation.

Etcd log:
2023-01-17T23:09:26.523562312Z 28dcf1b0a44481b0, started, test-infra-cluster-04bf4418-master-1, https://192.168.127.11:2380, https://192.168.127.11:2379, false
2023-01-17T23:09:26.523562312Z 30600b5b86e23c8e, started, etcd-bootstrap, https://192.168.127.12:2380, https://192.168.127.12:2379, false
2023-01-17T23:09:26.523562312Z 73f00626fee34a87, started, test-infra-cluster-04bf4418-master-0, https://192.168.127.10:2380, https://192.168.127.10:2379, false
2023-01-17T23:09:26.541214220Z #### attempt 0
2023-01-17T23:09:26.547811132Z member={name="test-infra-cluster-04bf4418-master-1", peerURLs=[https://192.168.127.11:2380}, clientURLs=[https://192.168.127.11:2379]
2023-01-17T23:09:26.547811132Z member={name="etcd-bootstrap", peerURLs=[https://192.168.127.12:2380}, clientURLs=[https://192.168.127.12:2379]
2023-01-17T23:09:26.547811132Z member={name="test-infra-cluster-04bf4418-master-0", peerURLs=[https://192.168.127.10:2380}, clientURLs=[https://192.168.127.10:2379]
2023-01-17T23:09:26.547811132Z target={name="etcd-bootstrap", peerURLs=[https://192.168.127.12:2380}, clientURLs=[https://192.168.127.12:2379]
2023-01-17T23:09:26.547846508Z member "https://192.168.127.12:2380" dataDir has been destroyed and must be removed from the cluster

There are a couple of problems that we see:
1. For an unknown reason the etcd operator's BootstrapTeardownController fails to start, as it fails to see the "openshift-etcd" namespace even though the logs show it is there.
2023-01-17T21:39:43.323928903Z E0117 21:39:43.323917 1 base_controller.go:272] BootstrapTeardownController reconciliation failed: failed to get bootstrap scaling strategy: failed to get openshift-etcd names
2. The DelayStrategy code was changed by https://github.com/openshift/cluster-etcd-operator/pull/964/files and currently it requires 3 healthy members in order to remove a member. This can create issues because etcd and cluster-bootstrap (bootkube) are not synchronized, and nothing actually blocks bootstrap from stopping etcd or blocks the removal of the bootstrap etcd member (at least as I understand the flow).
Version-Release number of selected component (if applicable):
How reproducible:
It is a race as far as I understand, but it reproduces fairly reliably in our CI when installing 4.12 nightlies.
Steps to Reproduce:
1. 2. 3.
Actual results:
Etcd is degraded because the third joined master's etcd can't start.
Expected results:
Etcd is healthy
Additional info:
Description of problem:
A large OpenShift Container Platform 4.10.24 cluster is failing to update the router-certs secret in the openshift-config-managed namespace because the secret is too big:

2022-09-01T06:24:15.157333294Z 2022-09-01T06:24:15.157Z ERROR operator.init.controller.certificate_publisher_controller controller/controller.go:266 Reconciler error {"name": "foo-bar", "namespace": "openshift-ingress-operator", "error": "failed to ensure global secret: failed to update published router certificates secret: Secret \"router-certs\" is invalid: data: Too long: must have at most 1048576 bytes"}

The cluster has 180 IngressControllers configured with endpointPublishingStrategy set to private. Now the default certificate needs to be replaced, but it is not properly replicated to the openshift-authentication namespace (and potentially other locations) because of the problem mentioned, since the required secret cannot be updated.
Version-Release number of selected component (if applicable):
OpenShift Container Platform 4.10.24
How reproducible:
Always
Steps to Reproduce:
1. Install OpenShift Container Platform 4.10
2. Create 180 IngressControllers with specific certificates
3. Check the openshift-ingress-operator logs to see how it fails to update/create the necessary secret in openshift-config-managed
Actual results:
2022-09-01T06:24:15.157333294Z 2022-09-01T06:24:15.157Z ERROR operator.init.controller.certificate_publisher_controller controller/controller.go:266 Reconciler error {"name": "foo-bar", "namespace": "openshift-ingress-operator", "error": "failed to ensure global secret: failed to update published router certificates secret: Secret \"router-certs\" is invalid: data: Too long: must have at most 1048576 bytes"}
Expected results:
No matter how many IngressControllers are created, secret management handled by the operators needs to keep working, even if the data exceeds the 1 MiB (1048576 bytes) size limitation. In that case an approach needs to exist to split the data into multiple secrets or handle it otherwise.
Additional info:
Description of problem:
According to https://issues.redhat.com/browse/OCPBUGS-705 (thanks to Junyun for sharing the test environment/results for the install part), we also need the fix in vsphere-problem-detector; currently it reports the following missing privileges when using a pre-existing folder and/or resource pool with ReadOnly permission:

1. vCenter cluster set to ReadOnly permission:
I0902 10:07:50.324782 1 vsphere_check.go:244] CheckComputeClusterPermissions:jima-permission-q84s8-worker-86gd4 failed: missing privileges for compute cluster workloads: Resource.AssignVMToPool, VApp.AssignResourcePool, VApp.Import, VirtualMachine.Config.AddNewDisk

2. Datacenter set to ReadOnly permission:
I0902 08:09:19.462001 1 vsphere_check.go:225] CheckAccountPermissions failed: missing privileges for datacenter OCP-DC: Resource.AssignVMToPool, VApp.Import, VirtualMachine.Config.AddExistingDisk, VirtualMachine.Config.AddNewDisk, VirtualMachine.Config.AddRemoveDevice, VirtualMachine.Config.AdvancedConfig, VirtualMachine.Config.Annotation, VirtualMachine.Config.CPUCount, VirtualMachine.Config.DiskExtend, VirtualMachine.Config.DiskLease, VirtualMachine.Config.EditDevice, VirtualMachine.Config.Memory, VirtualMachine.Config.RemoveDisk, VirtualMachine.Config.Rename, VirtualMachine.Config.ResetGuestInfo, VirtualMachine.Config.Resource, VirtualMachine.Config.Settings, VirtualMachine.Config.UpgradeVirtualHardware, VirtualMachine.Interact.GuestControl, VirtualMachine.Interact.PowerOff, VirtualMachine.Interact.PowerOn, VirtualMachine.Interact.Reset, VirtualMachine.Inventory.Create, VirtualMachine.Inventory.CreateFromExisting, VirtualMachine.Inventory.Delete, VirtualMachine.Provisioning.Clone, VirtualMachine.Provisioning.DeployTemplate, VirtualMachine.Provisioning.MarkAsTemplate, Folder.Create, Folder.Delete
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-09-02-194931
How reproducible:
Always
Steps to Reproduce:
See Description of problem
Actual results:
The vsphere-problem-detector operator reports missing privileges when using a pre-existing folder and/or resource pool with ReadOnly permission.
Expected results:
The vsphere-problem-detector operator should not report missing privileges in that case.
Additional info:
Description of problem:
When you migrate a HostedCluster, the AWSEndpointService from the old management cluster conflicts with the one on the new management cluster. The AWSPrivateLink controller does not perform any validation when this happens; that validation is needed to make Disaster Recovery HostedCluster migration work. The issue shows up when the nodes of the HostedCluster cannot join the new management cluster because the AWSEndpointServiceName still points to the old one.
Version-Release number of selected component (if applicable):
4.12, 4.13, 4.14
How reproducible:
Follow the migration procedure from the upstream documentation and the nodes in the destination HostedCluster will stay in NotReady state.
Steps to Reproduce:
1. Set up a management cluster with the 4.12/4.13/4.14 or main version of the HyperShift operator.
2. Run the in-place node DR Migrate E2E test from this PR https://github.com/openshift/hypershift/pull/2138:
bin/test-e2e \
  -test.v \
  -test.timeout=2h10m \
  -test.run=TestInPlaceUpgradeNodePool \
  --e2e.aws-credentials-file=$HOME/.aws/credentials \
  --e2e.aws-region=us-west-1 \
  --e2e.aws-zones=us-west-1a \
  --e2e.pull-secret-file=$HOME/.pull-secret \
  --e2e.base-domain=www.mydomain.com \
  --e2e.latest-release-image="registry.ci.openshift.org/ocp/release:4.13.0-0.nightly-2023-03-17-063546" \
  --e2e.previous-release-image="registry.ci.openshift.org/ocp/release:4.13.0-0.nightly-2023-03-17-063546" \
  --e2e.skip-api-budget \
  --e2e.aws-endpoint-access=PublicAndPrivate
Actual results:
The nodes stay in NotReady state
Expected results:
The nodes should join the migrated HostedCluster
Additional info: