Jump to: Complete Features | Incomplete Features | Complete Epics | Incomplete Epics | Other Complete | Other Incomplete |
Note: this page shows the Feature-Based Change Log for a release
These features were completed when this image was assembled
1. Proposed title of this feature request
Add runbook_url to alerts in the OCP UI
2. What is the nature and description of the request?
If an alert includes a runbook_url label, then it should appear in the UI for the alert as a link (a sketch follows this request).
3. Why does the customer need this? (List the business requirements here)
Customer can easily reach the alert runbook and be able to address their issues.
4. List any affected packages or components.
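As a rough illustration (not taken from the request; names, expression, and URL are placeholders), an alerting rule carrying a runbook_url might look like the following. In OpenShift alerting rules this field commonly appears as an annotation, and the console would render it as a link on the alert details page:

```yaml
# Hypothetical alerting rule; the runbook_url value is an example only.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-runbook-alert
  namespace: openshift-monitoring
spec:
  groups:
    - name: example
      rules:
        - alert: ExampleAlert
          expr: vector(1)
          labels:
            severity: warning
          annotations:
            summary: Example alert that links to its runbook
            runbook_url: https://example.com/runbooks/ExampleAlert.md
```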
As a user, I should be able to configure CSI driver to have a storage topology.
In the console-operator repo we need to add the `capability.openshift.io/console` annotation to all the manifests that the operator either contains or creates on the fly.
Manifests are currently present in /bindata and /manifest directories.
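As a sketch only (the exact annotation key and value must follow the capabilities enhancement linked below; `capability.openshift.io/name: Console` is assumed here, and the manifest name is a placeholder), an annotated manifest might look like:

```yaml
# Sketch: hypothetical console manifest carrying the capability annotation.
apiVersion: v1
kind: ConfigMap
metadata:
  name: console-config            # placeholder manifest name
  namespace: openshift-console
  annotations:
    capability.openshift.io/name: Console   # assumed key/value convention
```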
Here is an example of the insights-operator change.
Here is the overall enhancement doc.
Feature Overview
Provide CSI drivers to replace all the in-tree cloud provider drivers we currently have. These drivers will probably be released as Tech Preview versions first before being promoted to GA.
Goals
Requirements
Requirement | Notes | isMvp? |
---|---|---|
Framework for CSI driver | TBD | Yes |
Drivers should be available to install both in disconnected and connected mode | | Yes |
Drivers should upgrade from release to release without any impact | | Yes |
Drivers should be installable via CVO (when in-tree plugin exists) | | |
Out of Scope
This work will only cover the drivers themselves; it will not include:
Background and strategic fit
In a future Kubernetes release (currently 1.21), in-tree cloud provider drivers will be deprecated and replaced with CSI equivalents. We need the drivers created so that we continue to support the ecosystems in an appropriate way.
Assumptions
Customer Considerations
Customers will need to be able to use the storage they want.
Documentation Considerations
This Epic is to track the GA of this feature
As an OCP user, I want images for GCP Filestore CSI Driver and Operator, so that I can install them on my cluster and utilize GCP Filestore shares.
This Epic tracks the GA of this feature
Epic Goal
On new installations, we should make the StorageClass created by the CSI operator the default one.
However, we shouldn't do that on an upgrade scenario. The main reason is that users might have set a different quota on the CSI driver Storage Class.
Exit criteria:
OCP/Telco Definition of Done
Epic Template descriptions and documentation.
Rebase openshift-controller-manager to k8s 1.24
4.11 MVP Requirements
Out of scope use cases (that are part of the Kubeframe/factory project):
Questions to be addressed:
As an OpenShift infrastructure owner, I want to deploy OpenShift clusters with dual-stack IPv4/IPv6
As an OpenShift infrastructure owner, I want to deploy OpenShift clusters with single-stack IPv6
IPv6 and dual-stack clusters are requested often by customers, especially Telco customers. Working with dual-stack clusters is a requirement for many, but also a transition into single-stack IPv6 clusters, which for some of our users is the final destination.
Karim's work proving how agent-based can deploy IPv6: IPv6 deploy with agent-based installer
For dual-stack installations the agent-cluster-install.yaml must have both an IPv4 and an IPv6 subnet in networking.MachineNetwork, or assisted-service will throw an error. This field is in InstallConfig but it must be added to agent-cluster-install in its Generate().
For IPv4-only and IPv6-only installs, setting the MachineNetwork is not needed, but it also does not cause problems if it is set, so it should be fine to set it at all times.
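For illustration only (the CIDRs are placeholders and not taken from this card), the dual-stack stanza in agent-cluster-install.yaml might look like:

```yaml
# Sketch of a dual-stack AgentClusterInstall networking section.
apiVersion: extensions.hive.openshift.io/v1beta1
kind: AgentClusterInstall
metadata:
  name: example-cluster
spec:
  networking:
    machineNetwork:
      - cidr: 192.168.111.0/24      # IPv4 subnet (example)
      - cidr: fd2e:6f44:5dd8::/64   # IPv6 subnet (example)
```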
Set the ClusterDeployment CRD to deploy OpenShift in FIPS mode and make sure that after deployment the cluster is set in that mode
In order to install FIPS-compliant clusters, we need to make sure that installconfig + agent-config based deployments take into account the FIPS config in installconfig.
This task is about passing the config to agentclusterinstall so it makes it into the ISO. Once there, AGENT-374 will give it to assisted-service.
As an OpenShift infrastructure owner, I want to deploy a cluster zero with RHACM or MCE and have the required components installed when the installation is completed
BILLI makes it easier to deploy a cluster zero. BILLI users know at installation time what the purpose of their cluster is when they plan the installation. Day-2 steps are currently necessary to install operators, and users, especially when automating installations, want to finish the installation flow with their required components already installed.
As a customer, I want to be able to:
so that I can achieve
Description of criteria:
We are only allowing the user to provide extra manifests to install MCE at this time. We are not adding an option to "install mce" on the command line (or UI)
This requires/does not require a design proposal.
This requires/does not require a feature gate.
As a user, I would like to see all the events that the autoscaler creates, even duplicates. Having the CAO set this flag will allow me to continue to see these events.
We have carried a patch for the autoscaler that would enable the duplication of events. This patch can now be dropped because the upstream added a flag for this behavior in https://github.com/kubernetes/autoscaler/pull/4921
Add GA support for deploying OpenShift to IBM Public Cloud
Complete the existing gaps to make OpenShift on IBM Cloud VPC (Next Gen2) Generally Available.
This epic tracks the changes needed to the ingress operator to support IBM DNS Services for private clusters.
Currently in OpenShift we do not support distributing hotfix packages to cluster nodes. In time-sensitive situations, a RHEL hotfix package can be the quickest route to resolving an issue.
Before we ship OCP CoreOS layering in https://issues.redhat.com/browse/MCO-165 we need to switch the format of what is currently `machine-os-content` to be the new base image.
The overall plan is:
As an OCP CoreOS layering developer, having telemetry data about the number of clusters using osImageURL will help understand how broadly this feature is getting used and improve it accordingly.
Acceptance Criteria:
After https://github.com/openshift/os/pull/763 is in the release image, teach the MCO how to use it. This is basically:
Assumption
Doc: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumption
Run cluster-storage-operator (CSO) + AWS EBS CSI driver operator + AWS EBS CSI driver control-plane Pods in the management cluster, run the driver DaemonSet in the hosted cluster.
More information here: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
As HyperShift Cluster Instance Admin, I want to run cluster-storage-operator (CSO) in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
As HyperShift Cluster Instance Admin, I want to run AWS EBS CSI driver operator + control plane of the CSI driver in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
As an OCP support engineer I want the same guest cluster storage-related objects in the output of "hypershift dump cluster --dump-guest-cluster" as in "oc adm must-gather", so I can debug storage issues easily.
must-gather collects: storageclasses, persistentvolumes, volumeattachments, csidrivers, csinodes, volumesnapshotclasses, volumesnapshotcontents.
hypershift collects none of these; the relevant code is here: https://github.com/openshift/hypershift/blob/bcfade6676f3c344b48144de9e7a36f9b40d3330/cmd/cluster/core/dump.go#L276
Exit criteria:
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumption
cluster-snapshot-controller-operator is running on the CP.
More information here: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit
As an OpenShift developer I want cluster-csi-snapshot-controller-operator to use existing controllers in library-go, so I don't need to maintain yet more code that does the same thing as library-go.
Note: if this refactoring introduces any new conditions, we must make sure that 4.11 snapshot controller clears them to support downgrade! This will need 4.11 BZ + z-stream update!
Similarly, if some conditions become obsolete / not managed by any controller, they must be cleared by 4.12 operator.
Exit criteria:
As HyperShift Cluster Instance Admin, I want to run cluster-csi-snapshot-controller-operator in the management cluster, so the guest cluster runs just my applications.
Exit criteria:
CNCC was moved to the management cluster and it should use proxy settings defined for the management cluster.
When this image was assembled, these features were not yet completed. Therefore, only the Jira cards included here are part of this release.
We have a set of images
that should become multiarch images. This should be done both in upstream and downstream.
As a reference, we have built those images internally as multiarch and made them available as
They can be consumed by the Assisted Service pod via the following env:
```yaml
- name: AGENT_DOCKER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:latest
- name: CONTROLLER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-reporter-rhel8:latest
- name: INSTALLER_IMAGE
  value: registry.redhat.io/rhai-tech-preview/assisted-installer-rhel8:latest
```
OLM would have to support a mechanism like podAffinity that allows multiple architecture values to be specified, enabling it to pin operators to worker nodes of the matching architecture.
Ref: https://github.com/openshift/enhancements/pull/1014
Cut a new release of the OLM API and update OLM API dependency version (go.mod) in OLM package; then
Bring the upstream changes from OLM-2674 to the downstream olm repo.
A/C:
- New OLM API version release
- OLM API dependency updated in OLM Project
- OLM Subscription API changes downstreamed
- OLM Controller changes downstreamed
- Changes manually tested on Cluster Bot
We drive OpenShift cross-market customer success and new customer adoption with constant improvements and feature additions to the existing capabilities of our OpenShift Core Networking (SDN and Network Edge). This feature captures that natural progression of the product.
There are definitely grey areas, but in general:
Questions to be addressed:
Goal: Provide queryable metrics and telemetry for cluster routes and sharding in an OpenShift cluster.
Problem: Today we test OpenShift performance and scale with best-guess or anecdotal evidence for the number of routes that our customers use. The best practice for a large number of routes in a cluster is to shard; however, we have no visibility into whether and how customers are using sharding.
Why is this important? These metrics will inform our performance and scale testing, documented cluster limits, and how customers are using sharding for best practice deployments.
Dependencies (internal and external):
Prioritized epics + deliverables (in scope / not in scope):
Not in scope:
Estimate (XS, S, M, L, XL, XXL):
Previous Work:
Open questions:
Acceptance criteria:
Epic Done Checklist:
Description:
As described in the Design Doc, the following information needs to be exported from the Cluster Ingress Operator:
Design 2 will be implemented as part of this story.
Acceptance Criteria:
Description:
As described in the Metrics to be sent via telemetry section of the Design Doc, the following metrics need to be sent from the OpenShift cluster to Red Hat premises:
The metrics should be allowlisted on the cluster side.
The steps described in Sending metrics via telemetry need to be followed. Specifically step 5.
Depends on CFE-478.
Acceptance Criteria:
This is an epic bucket for all activities surrounding the creation of a declarative approach to releasing and maintaining OLM catalogs.
When working on this Epic, it's important to keep in mind this other potentially related Epic: https://issues.redhat.com/browse/OLM-2276
Jira Description
As an OPM maintainer, I want to downstream the PR for OCP 4.12 and backport it to OCP 4.11 so that IIB will NOT be impacted by the changes when it upgrades the OPM version to use the next/future opm upstream release (v1.25.0).
Summary / Background
IIB (the downstream service that manages the indexes) uses the upstream version. If they bump the OPM version to the next/future (v1.25.0) release with this change before having the downstream images updated, then the process to manage the indexes downstream will face issues and it will impact the distributions.
Acceptance Criteria
Definition of Ready
Definition of Done
Enhance the veneer rendering to be able to read the input veneer data from stdin, via a pipe, in a manner similar to https://dev.to/napicella/linux-pipes-in-golang-2e8j
The command could then be used in a manner similar to many k8s examples, like:
```shell
opm alpha render-veneer semver -o yaml < infile > outfile
```
Upstream issue link: https://github.com/operator-framework/operator-registry/issues/1011
We need to continue to maintain specific areas within storage, this is to capture that effort and track it across releases.
Goals
Requirements
Requirement | Notes | isMvp? |
---|---|---|
Telemetry | | No |
Certification | | No |
API metrics | | No |
Out of Scope
n/a
Background, and strategic fit
With the expected scale of our customer base, we want to keep the load of customer tickets / BZs low.
Assumptions
Customer Considerations
Documentation Considerations
Notes
In progress:
High prio:
Unsorted
Traditionally we did these updates as bugfixes, because we did them after the feature freeze (FF). We are trying no-feature-freeze in 4.12. We will try to do as much as we can before FF, but we're quite sure something will slip past FF as usual.
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
There is a new driver release 5.0.0 since the last rebase that includes snapshot support:
https://github.com/kubernetes-sigs/ibm-vpc-block-csi-driver/releases/tag/v5.0.0
Rebase the driver on v5.0.0 and update the deployments in ibm-vpc-block-csi-driver-operator.
There are no corresponding changes in ibm-vpc-node-label-updater since the last rebase.
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
Update all OCP and kubernetes libraries in storage operators to the appropriate version for OCP release.
This includes (but is not limited to):
Operators:
Update all CSI sidecars to the latest upstream release.
This includes update of VolumeSnapshot CRDs in https://github.com/openshift/cluster-csi-snapshot-controller-operator/tree/master/assets
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
This includes ibm-vpc-node-label-updater!
(Using separate cards for each driver because these updates can be more complicated)
Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.
(Using separate cards for each driver because these updates can be more complicated)
The end of general support for vSphere 6.7 will be on October 15, 2022, so vSphere 6.7 will be deprecated for 4.11.
We want to encourage vSphere customers to upgrade to vSphere 7 in OCP 4.11 since VMware is ending general support for vSphere 6.7 in Oct 2022.
We want to set the cluster to Upgradeable=false and have a strong alert pointing to our docs / requirements.
related slack: https://coreos.slack.com/archives/CH06KMDRV/p1647541493096729
tldr: three basic claims, the rest is explanation and one example
While bugs are an important metric, fixing bugs is different from investing in maintainability and debuggability. Investing in fixing bugs will help alleviate immediate problems, but doesn't improve the ability to address future problems. You (may) get a code base with fewer bugs, but when you add a new feature, it will still be hard to debug problems and interactions. This pushes a code base towards stagnation where it gets harder and harder to add features.
One alternative is to ask teams to produce ideas for how they would improve future maintainability and debuggability instead of focusing on immediate bugs. This would produce designs that make problem determination, bug resolution, and future feature additions faster over time.
I have a concrete example of one such outcome of focusing on bugs vs quality. We have resolved many bugs about communication failures with ingress by finding problems with point-to-point network communication. We have fixed the individual bugs, but have not improved the code for future debugging. In so doing, we chase many hard-to-diagnose problems across the stack. The alternative is to create a point-to-point network connectivity capability. This would immediately improve bug resolution and stability (detection) for kuryr, ovs, legacy sdn, network-edge, kube-apiserver, openshift-apiserver, authentication, and console. Bug fixing does not produce the same impact.
We need more investment in our future selves. Saying, "teams should reserve this" doesn't seem to be universally effective. Perhaps an approach that directly asks for designs and impacts and then follows up by placing the items directly in planning and prioritizing against PM feature requests would give teams the confidence to invest in these areas and give broad exposure to systemic problems.
Relevant links:
Epic Template descriptions and documentation.
Enable the chaos plugin https://coredns.io/plugins/chaos/ in our CoreDNS configuration so that we can use a DNS query to easily identify which DNS pods are responding to our requests.
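For illustration only (the real change would be made in the cluster-dns-operator's Corefile template, and the version/author strings are placeholders), an enabled chaos plugin could look like this in the rendered dns-default ConfigMap:

```yaml
# Sketch: Corefile fragment with the chaos plugin enabled.
apiVersion: v1
kind: ConfigMap
metadata:
  name: dns-default
  namespace: openshift-dns
data:
  Corefile: |
    .:5353 {
        chaos CoreDNS-OpenShift openshift-dns   # placeholder version/author strings
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa
        forward . /etc/resolv.conf
        cache 30
    }
```

With the plugin enabled, a CH-class query such as `hostname.bind` should return an identifier from the answering server, which is what makes it possible to tell which DNS pod served a given request.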
Requirement | Notes | isMvp? |
---|---|---|
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
This Section:
This Section: What does the person writing code, testing, documenting need to know? What context can be provided to frame this feature.
Questions to be addressed:
As a developer, I want to make status.HostIP for Pods visible in the Pod details page of the OCP Web Console. Currently there is no way to view the node IP for a Pod in the OpenShift Web Console. When viewing a Pod in the console, the field status.HostIP is not visible.
Acceptance criteria:
When OCP is performing a cluster upgrade, the user should be notified about this fact.
There are two possibilities for how to surface the cluster upgrade to the users:
AC:
Note: We need to decide if we want to distinguish this particular notification by a different color. ccing Ali Mobrem
Created from: https://issues.redhat.com/browse/RFE-3024
As a console user I want to have the option to:
For Deployments we will add the 'Restart rollout' action button. This action will PATCH the Deployment object's 'spec.template.metadata.annotations' block, by adding 'openshift.io/restartedAt: <actual-timestamp>' annotation. This will restart the deployment, by creating a new ReplicaSet.
For DeploymentConfig we will add 'Retry rollout' action button. This action will PATCH the latest revision of ReplicationController object's 'metadata.annotations' block by setting 'openshift.io/deployment/phase: "New"' and removing openshift.io/deployment.cancelled and openshift.io/deployment.status-reason.
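As a minimal sketch (the timestamp is a placeholder), the patch body sent for 'Restart rollout' on a Deployment could look like:

```yaml
# Sketch of the strategic-merge patch adding the restart annotation.
spec:
  template:
    metadata:
      annotations:
        openshift.io/restartedAt: "2022-09-20T10:00:00Z"   # example timestamp
```

Changing this pod-template annotation alters the pod template, which is what causes the Deployment controller to roll out a new ReplicaSet.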
Acceptance Criteria:
BACKGROUND:
OpenShift console will be updated to allow rollout restart deployment from the console itself.
Currently, from the OpenShift console, for the resource "deploymentconfigs" we can only start and pause the rollout, and for the resource "deployment" we can only resume the rollout. Neither resource (deployment & deployment config) has an option to restart the rollout. That is why the customer wants this functionality: to perform the same action from the OpenShift console as from the CLI.
The customer wants developers who are not fluent with the oc tool and terminal utilities to be able to use the console instead of the terminal to restart a deployment, just like they would through the CLI using the command "oc rollout restart deploy/<deployment-name>".
Usually when developers change the config map that a deployment uses, they have to restart its pods. Currently, the developers have to use the oc rollout restart deployment command. The customer wants a button/menu to perform the same action from the console as well.
Design
Doc: https://docs.google.com/document/d/1i-jGtQGaA0OI4CYh8DH5BBIVbocIu_dxNt3vwWmPZdw/edit
oc-mirror is a GA product as of OpenShift 4.11.
The goal of this feature is to address any future customer requests for new features or capabilities in oc-mirror.
Pre-Work Objectives
Since some of our requirements from the ACM team will not be available for the 4.12 timeframe, the team should work on anything we can get done in the scope of the console repo so that when the required items are available in 4.13, we can be more nimble in delivering GA content for the Unified Console Epic.
Overall GA Key Objective
Providing our customers with a single simplified User Experience (Hybrid Cloud Console) that is extensible, can run locally or in the cloud, and is capable of managing the fleet as well as deep diving into a single cluster.
Why do customers want this?
Why do we want this?
Phase 2 Goal: Productization of the united Console
As a developer I would like to disable clusters like *KS that we can't support for multi-cluster (for instance because we can't authenticate). The ManagedCluster resource has a vendor label that we can use to know if the cluster is supported.
cc Ali Mobrem Sho Weimer Jakub Hadvig
UPDATE 9/20/22: we want an allow-list with OpenShift, ROSA, ARO, ROKS, and OpenShiftDedicated.
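As a sketch of what the check would look at (values are illustrative), the vendor label lives on the ManagedCluster resource:

```yaml
# Sketch: ManagedCluster carrying the vendor label compared against the allow-list.
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: example-cluster
  labels:
    vendor: OpenShift   # clusters whose vendor is not in the allow-list would be disabled
```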
Acceptance criteria:
RHEL CoreOS should be updated to RHEL 9.2 sources to take advantage of newer features, hardware support, and performance improvements.
Requirement | Notes | isMvp? |
---|---|---|
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES |
Release Technical Enablement | Provide necessary release enablement details and documents. | YES |
Questions to be addressed:
PROBLEM
We would like to improve our signal for RHEL9 readiness by increasing internal engineering engagement and external partner engagement on our community OpenShift offering, OKD.
PROPOSAL
Having OKD run on SCOS (CentOS Stream CoreOS) brings the community offering closer to what a partner or an internal engineering team might expect on OCP.
ACCEPTANCE CRITERIA
Image has been switched/included:
DEPENDENCIES
The SCOS build payload.
RELATED RESOURCES
OKD+SCOS proposal: https://docs.google.com/presentation/d/1_Xa9Z4tSqB7U2No7WA0KXb3lDIngNaQpS504ZLrCmg8/edit#slide=id.p
OKD+SCOS work draft: https://docs.google.com/document/d/1cuWOXhATexNLWGKLjaOcVF4V95JJjP1E3UmQ2kDVzsA/edit
Acceptance Criteria
A stable OKD on SCOS is built and available to the community every sprint.
This comes up when installing ipi-on-aws on arm64 with the custom payload build at quay.io/aleskandrox/okd-release:4.12.0-0.okd-centos9-full-rebuild-arm64 that is using SCOS as the machine-os-content image.
```
[root@ip-10-0-135-176 core]# crictl logs c483c92e118d8
2022-08-11T12:19:39+00:00 [cnibincopy] FATAL ERROR: Unsupported OS ID=scos
```
The probable fix has to land on https://github.com/openshift/cluster-network-operator/blob/master/bindata/network/multus/multus.yaml#L41-L53
HyperShift came to life to serve multiple goals, some are main near-term, some are secondary that serve well long-term.
HyperShift opens up doors to penetrate the market. HyperShift enables true hybrid (CP and Workers decoupled, mixed IaaS, mixed Arch,...). An architecture that opens up more options to target new opportunities in the cloud space. For more details on this one check: Hosted Control Planes (aka HyperShift) Strategy [Live Document]
To bring hosted control planes to our customers, we need the means to ship it. Today MCE is how HyperShift is shipped and installed so that customers can use it. There are two main customers for hosted control planes:
If you have noticed, MCE is the delivery mechanism for both management models. The difference between managed and self-managed is the consumer persona: for self-managed, it's the customer SRE; for managed, it's the RH SRE.
For us to ship HyperShift in the product (as hosted control planes) in either management model, there is a necessary readiness checklist that we need to satisfy. Below are the high-level requirements needed before GA:
Please also have a look at our What are we missing in Core HyperShift for GA Readiness? doc.
Multi-cluster is becoming an industry need today not because this is where the trend is going but because it's the only viable path today to solve many of our customers' use-cases. Below is some reasoning why multi-cluster is a NEED:
As a result, multi-cluster management is a defining category in the market where Red Hat plays a key role. Today Red Hat solves for multi-cluster via RHACM and MCE. The goal is to simplify fleet management complexity by providing a single pane of glass to observe, secure, police, govern, and configure a fleet. I.e., the operand is no longer one cluster but a set, a fleet of clusters.
HyperShift's logically centralized architecture, as well as its native separation of concerns and superior cluster lifecycle management experience, makes it a great fit as the foundation of our multi-cluster management story.
Thus the following stories are important for HyperShift:
Refs:
HyperShift is the core engine that will be used to provide hosted control-planes for consumption in managed and self-managed.
Main user story: When life cycling clusters as a cluster service consumer via HyperShift core APIs, I want to use a stable/backward compatible API that is less susceptible to future changes so I can provide availability guarantees.
Ref: What are we missing in Core HyperShift for GA Readiness?
Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
Assumptions:
HyperShift - proposed cuts from data plane
When operating OpenShift clusters (for any OpenShift form factor) from MCE/ACM/OCM/CLI as a Cluster Service Consumer (RH managed SRE, or self-managed SRE/admin), I want to be able to migrate CPs from one hosting service cluster to another:
More information:
To understand usage patterns and inform our decision making for the product, we need to be able to measure adoption and assess usage.
See Hosted Control Planes (aka HyperShift) Strategy [Live Document]
Whether it's managed or self-managed, it's pertinent to report health metrics to be able to create meaningful Service Level Objectives (SLOs) and alert on failure to meet our availability guarantees. This is especially important for our managed services path.
https://issues.redhat.com/browse/OCPPLAN-8901
HyperShift for managed services is a strategic company goal as it improves usability, features, and cost competitiveness against other managed solutions, and because managed services/consumption-based cloud services are where we see the market growing (customers are looking to delegate platform overhead).
We should make sure our SD milestones are unblocked by the core team.
This feature reflects HyperShift core readiness to be consumed. When all related epics and stories in this epic are complete, HyperShift can be considered ready to be consumed in GA form. This does not describe a date but rather the readiness of core HyperShift to be consumed in GA form, NOT the GA itself.
- GA date for self-managed will be factoring in other inputs such as adoption, customer interest/commitment, and other factors.
- GA dates for ROSA-HyperShift are on track, tracked in milestones M1-7 (have a look at https://issues.redhat.com/browse/OCPPLAN-5771)
Epic Goal*
The goal is to split client certificate trust chains from the global Hypershift root CA.
Why is this important? (mandatory)
This is important to:
Scenarios (mandatory)
Provide details for user scenarios including actions to be performed, platform specifications, and user personas.
Dependencies (internal and external) (mandatory)
Hypershift team needs to provide us with code reviews and merge the changes we are to deliver
Contributing Teams(and contacts) (mandatory)
Acceptance Criteria (optional)
The serviceaccount CA bundle automatically injected to all pods cannot be used to authenticate any client certificate generated by the control-plane.
Drawbacks or Risk (optional)
Risk: there is significant time pressure, as this should be delivered before the first stable Hypershift release.
Done - Checklist (mandatory)
AUTH-311 introduced an enhancement. Implement the signer separation described there.
The MCO should properly report its state in a way that's consistent and able to be understood by customers, troubleshooters, and maintainers alike.
Some customer cases have revealed scenarios where the MCO state reporting is misleading and therefore could be unreliable to base decisions and automation on.
In addition to correcting some incorrect states, the MCO will be enhanced for a more granular view of update rollouts across machines.
The MCO should properly report its state in a way that's consistent and able to be understood by customers, troubleshooters, and maintainers alike.
For this epic, "state" means "what is the MCO doing?" – so the goal here is to try to make sure that it's always known what the MCO is doing.
This includes:
While this probably crosses a little bit into the "status" portion of certain MCO objects, as some state is definitely recorded there, this probably shouldn't turn into a "better status reporting" epic. I'm interpreting "status" to mean "how is it going" so status is maybe a "detail attached to a state".
Exploration here: https://docs.google.com/document/d/1j6Qea98aVP12kzmPbR_3Y-3-meJQBf0_K6HxZOkzbNk/edit?usp=sharing
https://docs.google.com/document/d/17qYml7CETIaDmcEO-6OGQGNO0d7HtfyU7W4OMA6kTeM/edit?usp=sharing
The current property description is:
configuration represents the current MachineConfig object for the machine config pool.
But in a 4.12.0-ec.4 cluster, the actual semantics seem to be something closer to "the most recent rendered config that we completely leveled on". We should at least update the godocs to be more specific about the intended semantics. And perhaps consider adjusting the semantics?
This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were completed when this image was assembled
This is epic tracks "business as usual" requirements / enhancements / bug fixing of Insights Operator.
Today the links point at a rule-scoped page, but that page lacks information about recommended resolution. You can click through by cluster ID to your specific cluster and get that recommendation advice, but it would be more convenient and less confusing for customers if we linked directly to the cluster-scoped recommendation page.
We can implement by updating the template here to be:
fmt.Sprintf("https://console.redhat.com/openshift/insights/advisor/clusters/%s?first=%s%%7C%s", clusterID, ruleIDStr, rec.ErrorKey)
or something like that.
unknowns
request is clear, solution/implementation to be further clarified
This story only covers API components. We will create a separate story for other utility functions.
Today we are generating documentation for Console's Dynamic Plugin SDK in
frontend/packages/dynamic-plugin-sdk. We are missing ts-doc for a set of hooks and components.
We are generating the markdown from the dynamic-plugin-sdk using
yarn generate-doc
Here is the list of the API that the dynamic-plugin-sdk is exposing:
https://gist.github.com/spadgett/0ddefd7ab575940334429200f4f7219a
Acceptance Criteria:
Out of Scope:
when defining two proxy endpoints,
```yaml
apiVersion: console.openshift.io/v1alpha1
kind: ConsolePlugin
metadata:
  ...
  name: forklift-console-plugin
spec:
  displayName: Console Plugin Template
  proxy:
    service:
      basePath: /
```
I get two proxy endpoints
/api/proxy/plugin/forklift-console-plugin/forklift-inventory
and
/api/proxy/plugin/forklift-console-plugin/forklift-must-gather-api
but both proxy to the `forklift-must-gather-api` service
e.g.
curl to:
[server url]/api/proxy/plugin/forklift-console-plugin/forklift-inventory
will point to the `forklift-must-gather-api` service, instead of the `forklift-inventory` service
Acceptance Criteria: Add missing API docs for *Icon and *Status components in the API docs.
The console has good error boundary components that are useful for dynamic plugin.
Exposing them will enable the plugins to get the same look and feel of handling React errors as the console.
The minimum requirement right now is to expose the ErrorBoundaryFallbackPage component from
https://github.com/openshift/console/blob/master/frontend/packages/console-shared/src/components/error/fallbacks/ErrorBoundaryFallbackPage.tsx
`@openshift-console/plugin-shared` (NPM) is a package that will contain shared components that can be upversioned separately by the Plugins so they can keep core compatibility low but upversion and support more shared components as we need them.
This isn't documented today. We need to do that.
We should have a global notification or the `Console plugins` page (e.g., k8s/cluster/operator.openshift.io~v1~Console/cluster/console-plugins) should alert users when console operator `spec.managementState` is `Unmanaged` as changes to `enabled` for plugins will have no effect.
During the development of https://issues.redhat.com/browse/CONSOLE-3062, it was determined additional information is needed in order to assist a user when troubleshooting a Failed plugin (see https://github.com/openshift/console/pull/11664#issuecomment-1159024959). As it stands today, there is no data available to the console to relay to the user regarding why the plugin Failed. Presumably, a message should be added to NotLoadedDynamicPlugin to address this gap.
AC: Add `message` property to NotLoadedDynamicPluginInfo type.
Currently the ConsolePlugins API version is v1alpha1. Since we are going GA with dynamic plugins we should be creating a v1 version.
This would require updates in following repositories:
AC:
NOTE: This story does not include the conversion webhook change which will be created as a follow on story
Based on API review CONSOLE-3145, we have decided to deprecate the following APIs:
cc Andrew Ballantyne Bryan Florkiewicz
Currently our `api.md` does not generate docs with "tags" (aka `@deprecated`) – we'll need to add that functionality to the `generate-doc.ts` script. See the code that works for `console-extensions.md`
Following https://coreos.slack.com/archives/C011BL0FEKZ/p1650640804532309, it would be useful for us (network observability team) to have access to ResourceIcon in dynamic-plugin-sdk.
Currently ResourceLink is exported but not ResourceIcon
AC:
We neither use nor support static plugin nav extensions anymore so we should remove the API in the static plugin SDK and get rid of related cruft in our current nav components.
AC: Remove static plugin nav extensions code. Check the navigation code for any references to the old API.
The extension `console.dashboards/overview/detail/item` doesn't constrain the content to fit the card.
The details-card has an expectation that a <dd> item will be the last item (for spacing between items). Our static details-card items use a component called 'OverviewDetailItem'. This isn't enforced in the extension and can cause undesired padding issues if plugins just do whatever they want.
I feel our approach here should be to make the extension take the props of 'OverviewDetailItem', where 'children' is the new 'component'.
To align with https://github.com/openshift/dynamic-plugin-sdk, plugin metadata field dependencies as well as the @console/pluginAPI entry contained within should be made optional.
If a plugin doesn't declare the @console/pluginAPI dependency, the Console release version check should be skipped for that plugin.
Move `frontend/public/components/nav` to `packages/console-app/src/components/nav` and address any issues resulting from the move.
There will be some expected lint errors relating to cyclical imports. These will require some refactoring to address.
This enhancement Introduces support for provisioning and upgrading heterogenous architecture clusters in phases.
We need to scan through the compute nodes and build a set of supported architectures from those. Each node on the cluster has a label for architecture: e.g. kubernetes.io/arch=arm64, kubernetes.io/arch=amd64 etc. Based on the set of supported architectures console will need to surface only those operators in the Operator Hub, which are supported on our Nodes.
AC:
@jpoulin is good to ask about heterogeneous clusters.
This enhancement Introduces support for provisioning and upgrading heterogenous architecture clusters in phases.
We need to scan through the compute nodes and build a set of supported architectures from those. Each node on the cluster has a label for architecture, e.g. `kubernetes.io/arch=arm64`, `kubernetes.io/arch=amd64`, etc. Based on the set of supported architectures, console will need to surface only those operators in the Operator Hub which are supported on our nodes. Each operator's PackageManifest contains labels that indicate the operator's supported architectures, e.g. `operatorframework.io/arch.s390x: supported`. An operator can be supported on multiple architectures.
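For illustration (the label values below are examples, not from a specific operator), the PackageManifest labels the console would compare against the node architecture set look like:

```yaml
# Sketch: architecture support labels on an operator's PackageManifest.
metadata:
  labels:
    operatorframework.io/arch.amd64: supported
    operatorframework.io/arch.arm64: supported
    operatorframework.io/os.linux: supported
```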
AC:
OS and arch filtering: https://github.com/openshift/console/blob/2ad4e17d76acbe72171407fc1c66ca4596c8aac4/frontend/packages/operator-lifecycle-manager/src/components/operator-hub/operator-hub-items.tsx#L49-L86
@jpoulin is good to ask about heterogeneous clusters.
An epic we can duplicate for each release to ensure we have a place to catch things we ought to be doing regularly but can tend to fall by the wayside.
As a developer, I want to be able to clean up the css markup after making the css / scss changes required for dark mode and remove any old unused css / scss content.
Acceptance criteria:
As a user, I want to be able to:
so that I can achieve
Description of criteria:
Detail about what is specifically not being delivered in the story
1. Proposed title of this feature request
Basic authentication for Helm Chart repository in helmchartrepositories.helm.openshift.io CRD.
2. What is the nature and description of the request?
As of v4.6.9, the HelmChartRepository CRD only supports client TLS authentication through spec.connectionConfig.tlsClientConfig.
3. Why do you need this? (List the business requirements here)
Basic authentication is widely used by many chart repository managers (Nexus OSS, Artifactory, etc.).
The Helm CLI also supports it with the helm repo add command.
https://helm.sh/docs/helm/helm_repo_add/
4. How would you like to achieve this? (List the functional requirements here)
Probably by extending the CRD:
```yaml
spec:
  connectionConfig:
    username: username
    password:
      secretName: secret-name
```
The secret namespace should be openshift-config to align with the tlsClientConfig behavior.
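As a sketch only (the secret name and password value are placeholders), the referenced secret might look like:

```yaml
# Sketch: secret holding the basic-auth password, referenced by secretName above.
apiVersion: v1
kind: Secret
metadata:
  name: secret-name
  namespace: openshift-config   # matches the tlsClientConfig behavior
stringData:
  password: my-password         # placeholder
```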
5. For each functional requirement listed in question 4, specify how Red Hat and the customer can test to confirm the requirement is successfully implemented.
Trying to pull helm charts from remote private chart repositories that have disabled anonymous access and offer basic authentication.
E.g.: https://github.com/sonatype/docker-nexus
As an OCP user I would like to be able to install helm charts from repos added to ODC with basic authentication fields populated.
We need to support helm installs for repos that have the basic authentication secret name and namespace.
Updating the ProjectHelmChartRepository CRD is already done in a different story.
Supporting the HelmChartRepository CR: this feature will be scoped first to project/namespace-scoped repos.
<Defines what is included in this story>
If the new fields for basic auth are set in the repo CR, then use those credentials when making API calls to helm to install/upgrade charts. We will error out if the logged-in user does not have access to the secret referenced by the repo CR. If the basic auth fields are not present, we assume it is not an authenticated repo.
None
NA
I can list, install and update charts on authenticated repos from ODC
Needs Documentation both upstream and downstream
Needs new unit test covering repo auth
Dependencies identified
Blockers noted and expected delivery timelines set
Design is implementable
Acceptance criteria agreed upon
Story estimated
Unknown
Verified
Unsatisfied
ACCEPTANCE CRITERIA
NOTES
ACCEPTANCE CRITERIA
NOTES
This is a follow up Epic to https://issues.redhat.com/browse/MCO-144, which aimed to get in-place upgrades for Hypershift. This epic aims to capture additional work to focus on using CoreOS/OCP layering into Hypershift, which has benefits such as:
- removing or reducing the need for ignition
- maintaining feature parity between self-driving and managed OCP models
- adding additional functionality such as hotfixes
Right now in https://github.com/openshift/hypershift/pull/1258 you can only perform one upgrade at a time. Multiple upgrades will break due to controller logic
Properly create logic to handle manifest creation/updates and deletion, so the logic is more bulletproof
Currently not implemented, and will require the MCD hypershift mode to be adjusted to handle disruptionless upgrades like regular MCD
We plan to build Ironic Container Images using RHEL9 as base image in OCP 4.12
This is required because the ironic components have abandoned support for CentOS Stream 8 and Python 3.6/3.7 upstream during the most recent development cycle that will produce the stable Zed release, in favor of CentOS Stream 9 and Python 3.8/3.9
More info on RHEL8 to RHEL9 transition in OCP can be found at https://docs.google.com/document/d/1N8KyDY7KmgUYA9EOtDDQolebz0qi3nhT20IOn4D-xS4
update ironic software to pick up latest bug fixes
This is an API change and we will consider this as a feature request.
https://issues.redhat.com/browse/NE-799 Please check this for more details
https://issues.redhat.com/browse/NE-799 Please check this for more details
No
N/A
Make sure that the CSI driver automatically updates oVirt credentials when they are updated in OpenShift.
In the CSI driver operator we should add the withSecretHashAnnotation call from library-go, like this: https://github.com/openshift/aws-ebs-csi-driver-operator/blob/53ed27b2a0eaa655338da180a79897855b366ac7/pkg/operator/starter.go#L138
We need tests for the ovirt-csi-driver and the cluster-api-provider-ovirt. These tests help us to
Also, having dedicated tests on lower levels with a smaller scope (unit, integration, ...) has the following benefits:
Integration tests need to be implemented according to https://cluster-api.sigs.k8s.io/developer/testing.html#integration-tests using envtest.
As a user, I would like to be informed in an intuitive way when quotas have been reached in a namespace.
Refer below for more details
As a user, In the topology view, I would like to be updated intuitively if any of the deployments have reached quota limits
Refer below for more details
Provide a form driven experience to allow cluster admins to manage the perspectives to meet the ACs below.
We have heard the following requests from customers and developer advocates:
As an admin, I want to hide the admin perspective for non-privileged users or hide the developer perspective for all users
Based on the https://issues.redhat.com/browse/ODC-6730 enhancement proposal, it is required to extend the console configuration CRD to enable the cluster admins to configure this data in the console resource
Previous customization work:
As an admin, I want to be able to use a form driven experience to hide user perspective(s)
As an admin, I want to hide user perspective(s) based on the customization.
As an admin, I should be able to see a code snippet that shows how to add user perspectives
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, the cluster admin can add user perspectives
To support the cluster-admin to configure the perspectives correctly, the developer console should provide a code snippet for the customization of yaml resource (Console CRD).
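A sketch of the kind of snippet the console could surface, assuming the customization shape proposed in the enhancement (field names are indicative, not final):

```yaml
# Sketch: Console operator config hiding the Developer perspective.
apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  name: cluster
spec:
  customization:
    perspectives:
      - id: dev            # perspective identifier (example)
        visibility:
          state: Disabled  # hide this perspective for all users
```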
Customize Perspective Enhancement PR: https://github.com/openshift/enhancements/pull/1205
Previous work:
Customers don't want their users to have access to some/all of the items which are available in the Developer Catalog. The request is to change access for the cluster, not per user or persona.
Provide a form driven experience to allow cluster admins easily disable the Developer Catalog, or one or more of the sub catalogs in the Developer Catalog.
Multiple customer requests.
We need to consider how this will work with subcatalogs which are installed by operators: VMs, Event Sources, Event Catalogs, Managed Services, Cloud based services
As an admin, I want to hide sub-catalogs in the developer catalog or hide the developer catalog completely based on the customization.
As an admin, I want to hide/disable access to specific sub-catalogs in the developer catalog or the complete dev catalog for all users across all namespaces.
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, it is required to extend the console configuration CRD to enable the cluster admins to configure this data in the console resource
Extend the "customization" spec type definition for the CRD in the openshift/api project
Previous customization work:
As a cluster-admin, I should be able to see a code snippet that shows how to enable sub-catalogs or the entire dev catalog.
Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, the cluster admin can add sub-catalog(s) from the Developer Catalog or the Dev catalog as a whole.
To support the cluster-admin to configure the sub-catalog list correctly, the developer console should provide a code snippet for the customization yaml resource (Console CRD).
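A sketch of the kind of snippet the console could surface, assuming the customization shape proposed in ODC-6732 (sub-catalog identifiers are illustrative):

```yaml
# Sketch: Console operator config disabling specific Developer Catalog sub-catalogs.
apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  name: cluster
spec:
  customization:
    developerCatalog:
      types:
        state: Disabled
        disabled:
          - HelmChart      # example sub-catalog
          - EventSource    # example sub-catalog
```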
Previous work:
As an admin, I would like openshift-* namespaces with an operator to be labeled with security.openshift.io/scc.podSecurityLabelSync=true to ensure the continued functioning of operators without manual intervention. The label should only be applied to openshift-* namespaces with an operator (the presence of a ClusterServiceVersion resource) IF the label is not already present. This automation will help the smooth functioning of the cluster and avoid frivolous operational events.
Context: As part of the PSA migration period, OpenShift will ship with the "label sync'er" - a controller that will automatically adjust PSA security profiles in response to the workloads present in the namespace. We can assume that not all operators (produced by Red Hat, the community, or ISVs) will have successfully migrated their deployments in response to upstream PSA changes. The label sync'er will sync, by default, any namespace not prefixed with "openshift-"; for "openshift-" namespaces, the explicit label security.openshift.io/scc.podSecurityLabelSync=true is required for sync.
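For illustration (the namespace name is a placeholder), the label the OLM operator would apply looks like:

```yaml
# Sketch: an "openshift-" namespace opted in to PSA label syncing.
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-example-operator   # placeholder namespace containing a CSV
  labels:
    security.openshift.io/scc.podSecurityLabelSync: "true"
```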
A/C:
- OLM operator has been modified (downstream only) to label any unlabelled "openshift-" namespace in which a CSV has been created
- If a labeled namespace containing at least one non-copied csv becomes unlabelled, it should be relabelled
- The implementation should be done in a way to eliminate or minimize subsequent downstream sync work (it is ok to make slight architectural changes to the OLM operator in the upstream to enable this)
This epic tracks network tooling improvements for 4.12
A new framework and process should be developed to make sharing network tools with devs, support, and customers convenient. We are going to add some tools for OVN troubleshooting before OVN-K goes default, some tools that we got from customer cases, and some more to help analyze and debug collected logs, based on the stable must-gather/sosreport format we get now thanks to the 4.11 epic.
Our estimation for this Epic is 1 engineer * 2 Sprints
WHY:
This epic is important to help improve the time it takes our customers and our team to understand an issue within the cluster.
A focus of this epic is to develop tools to quickly allow debugging of a problematic cluster. This is crucial for the engineering team to help us scale. We want to provide a tool to our customers to help lower the cognitive burden to get at a root cause of an issue.
Alert if any of the ovn-controllers has been disconnected from the southbound database for a period of time, using the metric ovn_controller_southbound_database_connected.
The metric updates every 2 minutes so please be mindful of this when creating the alert.
If the controller is disconnected for 10 minutes, fire an alert.
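A hedged sketch of what the rule might look like (threshold, severity, and the assumption that the metric reports 1 when connected would need to be confirmed):

```yaml
# Sketch: alert when ovn-controller is disconnected from the SB DB for 10 minutes.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: ovn-controller-southbound-connectivity   # placeholder name
  namespace: openshift-ovn-kubernetes
spec:
  groups:
    - name: ovn-controller
      rules:
        - alert: OVNControllerDisconnectedSouthboundDatabase
          # max_over_time smooths the 2-minute update interval of the metric
          expr: max_over_time(ovn_controller_southbound_database_connected[5m]) == 0
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: ovn-controller has been disconnected from the southbound database for more than 10 minutes.
```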
DoD: Merged to CNO and tested by QE
OCP/Telco Definition of Done
Epic Template descriptions and documentation.
Add a SOCKS proxy to cluster-network-operator so egress IP can use gRPC to reach worker nodes.
With the introduction of gRPC as the means for determining the state of a given egress node, HyperShift should
be able to leverage the SOCKS proxy to know the state of each egress node.
References relevant to this work:
1281-network-proxy
https://coreos.slack.com/archives/C01C8502FMM/p1658427627751939
https://github.com/openshift/hypershift/pull/1131/commits/28546dc587dc028dc8bded715847346ff99d65ea
This Epic is here to track the rebase we need to do when kube 1.25 is GA https://www.kubernetes.dev/resources/release/
Keeping this in mind can help us plan our time better. At the time of writing, GA is planned for August 23.
https://docs.google.com/document/d/1h1XsEt1Iug-W9JRheQas7YRsUJ_NQ8ghEMVmOZ4X-0s/edit --> this is the link for rebase help
We need to rebase cloud network config controller to 1.25 when the kube 1.25 rebase lands.
This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were not completed when this image was assembled
Changes made in METAL-1 open up opportunities to improve our handling of images by cleaning up redundant code that generates extra work for the user and extra load for the cluster.
We only need to run the image cache DaemonSet if there is a QCOW URL to be mirrored (effectively this means a cluster installed with 4.9 or earlier). We can stop deploying it for new clusters installed with 4.10 or later.
Currently, the image-customization-controller relies on the image cache running on every master to provide the shared hostpath volume containing the ISO and initramfs. The first step is to replace this with a regular volume and an init container in the i-c-c pod that extracts the images from machine-os-images. We can use the copy-metal -image-build flag (instead of -all used in the shared volume) to provide only the required images.
Once i-c-c has its own volume, we can switch the image extraction in the metal3 Pod's init container to use the -pxe flag instead of -all.
The machine-os-images init container for the image cache (not the metal3 Pod) can be removed. The whole image cache deployment is now optional and need only be started if provisioningOSDownloadURL is set (and in fact should be deleted if it is not).
Description of the problem:
When running assisted-installer on a machine where there is more than one volume group per physical volume, only the first volume group will be cleaned up. This leads to problems later and will lead to errors such as:
Failed - failed executing nsenter [--target 1 --cgroup --mount --ipc --pid -- pvremove /dev/sda -y -ff], Error exit status 5, LastOutput "Can't open /dev/sda exclusively. Mounted filesystem?
How reproducible:
Set up a VM with more than one volume group per physical volume. As an example, look at the following sample from a customer cluster.
List block devices
```
/usr/bin/lsblk -o NAME,MAJ:MIN,SIZE,TYPE,FSTYPE,KNAME,MODEL,UUID,WWN,HCTL,VENDOR,STATE,TRAN,PKNAME
NAME MAJ:MIN SIZE TYPE FSTYPE KNAME MODEL UUID WWN HCTL VENDOR STATE TRAN PKNAME
loop0 7:0 125.9G loop xfs loop0 c080b47b-2291-495c-8cc0-2009ebc39839
loop1 7:1 885.5M loop squashfs loop1
sda 8:0 894.3G disk sda INTEL SSDSC2KG96 0x55cd2e415235b2db 1:0:0:0 ATA running sas
|-sda1 8:1 250M part sda1 0x55cd2e415235b2db sda
|-sda2 8:2 750M part ext2 sda2 3aa73c72-e342-4a07-908c-a8a49767469d 0x55cd2e415235b2db sda
|-sda3 8:3 49G part xfs sda3 ffc3ccfe-f150-4361-8ae5-f87b17c13ac2 0x55cd2e415235b2db sda
|-sda4 8:4 394.2G part LVM2_member sda4 Ua3HOc-Olm4-1rma-q0Ug-PtzI-ZOWg-RJ63uY 0x55cd2e415235b2db sda
`-sda5 8:5 450G part LVM2_member sda5 W8JqrD-ZvaC-uNK9-Y03D-uarc-Tl4O-wkDdhS 0x55cd2e415235b2db sda
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sda5
sdb 8:16 894.3G disk sdb INTEL SSDSC2KG96 0x55cd2e415235b31b 1:0:1:0 ATA running sas
`-sdb1 8:17 894.3G part LVM2_member sdb1 6ETObl-EzTd-jLGw-zVNc-lJ5O-QxgH-5wLAqD 0x55cd2e415235b31b sdb
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdb1
sdc 8:32 894.3G disk sdc INTEL SSDSC2KG96 0x55cd2e415235b652 1:0:2:0 ATA running sas
`-sdc1 8:33 894.3G part LVM2_member sdc1 pBuktx-XlCg-6Mxs-lddC-qogB-ahXa-Nd9y2p 0x55cd2e415235b652 sdc
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdc1
sdd 8:48 894.3G disk sdd INTEL SSDSC2KG96 0x55cd2e41521679b7 1:0:3:0 ATA running sas
`-sdd1 8:49 894.3G part LVM2_member sdd1 exVSwU-Pe07-XJ6r-Sfxe-CQcK-tu28-Hxdnqo 0x55cd2e41521679b7 sdd
  `-nova-instance 253:0 3.1T lvm ext4 dm-0 d15e2de6-2b97-4241-9451-639f7b14594e running sdd1
sr0 11:0 989M rom iso9660 sr0 Virtual CDROM0 2022-06-17-18-18-33-00 0:0:0:0 AMI running usb
```
Now run the assisted installer and try to install an SNO node on this machine; you will find that the installation will fail with a message that indicates that it could not exclusively access /dev/sda.
Actual results:
The installation will fail with a message that indicates that it could not exclusively access /dev/sda
Expected results:
The installation should proceed and the cluster should start to install.
Suspected Cases
https://issues.redhat.com/browse/AITRIAGE-3809
https://issues.redhat.com/browse/AITRIAGE-3802
https://issues.redhat.com/browse/AITRIAGE-3810
Description of the problem:
Cluster installation fails if the installation disk has LVM on RAID:
Host: test-infra-cluster-3cc862c9-master-0, reached installation stage Failed: failed executing nsenter [--target 1 --cgroup --mount --ipc --pid -- mdadm --stop /dev/md0], Error exit status 1, LastOutput "mdadm: Cannot get exclusive access to /dev/md0:Perhaps a running process, mounted filesystem or active volume group?"
How reproducible:
100%
Steps to reproduce:
1. Install a cluster while master nodes has disk with LVM on RAID (reproduces using test: https://gitlab.cee.redhat.com/ocp-edge-qe/kni-assisted-installer-auto/-/blob/master/api_tests/test_disk_cleanup.py#L97)
Actual results:
Installation failed
Expected results:
Installation success
Same thing as we've had in assisted-service: we sometimes fail to install golangci-lint when fetching release artifacts directly from GitHub. That is usually because the same IP address (the CI build cluster) accesses GitHub at a high rate, leading to 429 (Too Many Requests) responses.
The way we fixed it for assisted-service was to change the installation to use a quay.io image that already contains the binary.
Example for such a failure: https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_release/30788/rehearse-30788-periodic-ci-openshift-assisted-installer-agent-release-ocm-2.6-subsystem-test-periodic/1551879759036682240
Filter for all recent failures: https://search.ci.openshift.org/?search=golangci%2Fgolangci-lint+crit+unable+to+find&maxAge=168h&context=1&type=build-log&name=.*assisted.*&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job
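A minimal sketch of that approach, assuming a prebuilt image that ships the linter (the image reference below is illustrative, not the actual one used):

~~~
# Run golangci-lint from a container image that already contains the binary, instead of
# downloading the release tarball from GitHub (which is what triggers the 429s).
podman run --rm -v "$(pwd):/src:Z" -w /src quay.io/example/golangci-lint:v1.50.1 \
  golangci-lint run ./...
~~~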
Section 5 of PRD: https://docs.google.com/document/d/1fF-Ajdzc9EDDg687FzTrX577hvY9NdK0/edit#heading=h.gjdgxs
Testing and collaboration with NVIDIA: https://docs.google.com/spreadsheets/d/1LHY-Af-2kQHVwtW4aVdHnmwZLTiatiyf-ySffC8O5NM/edit#gid=0
Deploying Nvidia Patches: https://docs.google.com/document/d/1yR4lphjPKd6qZ9sGzZITl0wH1r4ykfMKPjUnlzvWji4/edit#
This is the continuation of https://issues.redhat.com/browse/NHE-273, but now the focus is on the remaining flows.
Description of problem:
check_pkt_length cannot be offloaded without 1) sFlow offload patches in Open vSwitch and 2) hardware driver support. Since 1) will not be done anytime soon, we need a workaround for the check_pkt_length issue.
Version-Release number of selected component (if applicable):
4.11/4.12
How reproducible:
Always
Steps to Reproduce:
1. Any flow that has check_pkt_len()
5-b: Pod -> NodePort Service traffic (Pod Backend - Different Node)
6-b: Pod -> NodePort Service traffic (Host Backend - Different Node)
4-b: Pod -> Cluster IP Service traffic (Host Backend - Different Node)
10-b: Host Pod -> Cluster IP Service traffic (Host Backend - Different Node)
11-b: Host Pod -> NodePort Service traffic (Pod Backend - Different Node)
12-b: Host Pod -> NodePort Service traffic (Host Backend - Different Node)
Actual results:
Poor performance due to upcalls when check_pkt_len() is not supported.
Expected results:
Good performance.
Additional info:
https://docs.google.com/spreadsheets/d/1LHY-Af-2kQHVwtW4aVdHnmwZLTiatiyf-ySffC8O5NM/edit#gid=670206692
As a developer building container images on OpenShift
I want to specify that my build should run without elevated privileges
So that builds do not run as root or with elevated privileges from the host's perspective
No QE required for Dev Preview. OpenShift regression testing will verify that existing behavior is not impacted.
We will need to document how to enable this feature, with sufficient warnings regarding Dev Preview.
This may warrant an OpenShift blog post.
We have been running into a number of problems with configure-ovs and nodeip-configuration selecting different interfaces in OVNK deployments. This causes connectivity issues, so we need some way to ensure that everything uses the same interface/IP.
Currently configure-ovs runs before nodeip-configuration, but since nodeip-configuration is the source of truth for IP selection regardless of CNI plugin, I think we need to look at swapping that order. That way configure-ovs could look at what nodeip-configuration chose and not have to implement its own interface selection logic.
I'm targeting this at 4.12 because even though there's probably still time to get it in for 4.11, changing the order of boot services is always a little risky and I'd prefer to do it earlier in the cycle so we have time to tease out any issues that arise. We may need to consider backporting the change though since this has been an issue at least back to 4.10.
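As a rough sketch of the ordering change (unit names assumed to be nodeip-configuration.service and ovs-configuration.service; the real change would live in the MCO templates rather than being applied by hand):

~~~
# Hypothetical drop-in making configure-ovs start only after nodeip-configuration,
# so it can reuse the interface/IP that nodeip-configuration already selected.
mkdir -p /etc/systemd/system/ovs-configuration.service.d
cat <<'EOF' > /etc/systemd/system/ovs-configuration.service.d/10-after-nodeip.conf
[Unit]
After=nodeip-configuration.service
Wants=nodeip-configuration.service
EOF
systemctl daemon-reload
~~~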
Goal
Provide an indication that advanced features are used
Problem
Today, customers and RH don't have the information on the actual usage of advanced features.
Why is this important?
Prioritized Scenarios
In Scope
1. Add a boolean variable in our telemetry to mark if the customer is using advanced features (PV encryption, encryption with KMS, external mode).
Not in Scope
Integrate with subscription watch - will be done by the subscription watch team with our help.
Customers
All
Customer Facing Story
As a compliance manager, I should be able to easily see whether all my clusters are using the right number of subscriptions.
What does success look like?
A clear indication in subscription watch for ODF usage (either essential or advanced).
Link to main epic: https://issues.redhat.com/browse/RHSTOR-3173
We migrated most components as part of https://issues.redhat.com/browse/RHSTOR-2165
We now have a few components remaining, roughly 15 to 20%. This epic targets:
1) Add support for in-tree modal launcher
This section includes Jira cards that are not linked to either an Epic or a Feature. These tickets were completed when this image was assembled
Description of problem:
We need to include the `openshift_apps_deploymentconfigs_strategy_total` metric in the IO archive file.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. Create a cluster
2. Download the IO archive
3. Check the file `config/metrics`
4. You must find `openshift_apps_deploymentconfigs_strategy_total` inside of it
Actual results:
Expected results:
You should see `openshift_apps_deploymentconfigs_strategy_total` in the `config/metrics` file.
Additional info:
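One way to verify once the metric is gathered (the archive file name below is illustrative):

~~~
# Extract the Insights Operator archive and confirm the metric is present.
mkdir -p /tmp/io-archive
tar xzf insights-archive.tar.gz -C /tmp/io-archive
grep openshift_apps_deploymentconfigs_strategy_total /tmp/io-archive/config/metrics
~~~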
Description of problem:
The Alertmanager silence create / edit form got a new "Negative matcher" option in 4.12 (see https://issues.redhat.com/browse/OCPBUGSM-47734). However, there is nothing to explain what this option means and it will likely not be obvious from the label alone unless you are already quite familiar with Alertmanager. After discussion with the docs team, it was decided that adding some explanation in context in the UI would be much better than adding an explanation to the documentation.
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
1. Go to the Admin perspective
2. Go to the Observe > Alerting > Silences page
3. Click on the Create button ("Negative matcher" option is shown with no explanation)
Actual results:
Expected results:
Additional info:
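For context, a negative matcher inverts the match: the silence applies to alerts whose label value does not match. A hedged CLI equivalent (values illustrative):

~~~
# Silence everything whose severity is NOT critical for two hours.
amtool silence add 'severity!=critical' --comment "negative matcher example" --duration 2h
~~~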
And possibly other alerts. Declaring namespace labels on alerts makes it easy to find the source or affected resource, as described here. But because Insights alerts are based on metrics exported by the cluster-version operator, they inherit source information from the CVO, and end up looking like:
ALERTS{alertname="SimpleContentAccessNotAvailable", alertstate="firing", condition="SCAAvailable", endpoint="metrics", instance="10.58.57.116:9099", job="cluster-version-operator", name="insights", namespace="openshift-cluster-version", pod="cluster-version-operator-5d8579fb58-p5hfn", prometheus="openshift-monitoring/k8s", reason="NotFound", receive="true", service="cluster-version-operator", severity="info"}
Adding namespace: openshift-insights to the labels block for InsightsDisabled and SimpleContentAccessNotAvailable would avoid this confusion.
You might also want to clear the job and service labels as irrelevant source information. And you might want to clear the pod label to avoid churning alerts when the CVO rolls out a new pod. You can get the label clearing by wrapping the expr with max without (job, pod, service) (...) or similar.
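A sketch of what this could look like in the rule definition (the expression is illustrative, not the exact one shipped by the operator):

~~~
# Hypothetical alerting-rule fragment: declare the namespace label and strip the
# CVO-derived source labels so the alert does not churn when the CVO pod restarts.
- alert: SimpleContentAccessNotAvailable
  expr: max without (job, pod, service) (cluster_operator_conditions{name="insights", condition="SCAAvailable", reason="NotFound"} == 0)
  labels:
    namespace: openshift-insights
    severity: info
~~~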
Description of problem:
A project viewer is able to see a 'Create Pod Disruption Budget' button on the Pods list page, although the creation will ultimately fail due to insufficient permissions. The console should not show a 'Create Pod Disruption Budget' button for a project viewer; other resource list pages don't have this issue.
Version-Release number of selected component (if applicable):
4.10.0-0.nightly-2021-09-16-212009
How reproducible:
Always
Steps to Reproduce:
1. normal user has a project and workloads
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/example 3/3 3 3 79s
NAME DESIRED CURRENT READY AGE
replicaset.apps/example-787f749bb 3 3 3 79s
2. Grant another user view access to the project 'yapei1-project'
Actual results:
3. The project viewer 'uiauto1' can see the pods list successfully; at the same time the console also shows a 'Create Pod Disruption Budget' button, although the creation will ultimately fail if the project viewer attempts it
Expected results:
3. console should not show 'Create Pod Disruption Budget' button for a project viewer
Additional info:
For comparison: we don't show the resource creation button (the 'Create xxx' button) on other workload list pages for a project viewer, such as the Deployments and DeploymentConfigs lists.
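The missing permission is easy to confirm from the CLI (project and user names taken from the steps above):

~~~
# As the project viewer, check whether PDB creation is allowed; the console could
# gate the button on the same access review.
oc auth can-i create poddisruptionbudgets -n yapei1-project --as uiauto1
# expected output: no
~~~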
Description of problem:
If the cluster install failed and no tag is attached to the VM, running ./openshift-install destroy cluster gets stuck; for details, please see openshift-install.log.
...
time="2022-09-28T08:19:14-04:00" level=debug msg="Delete Folder"
time="2022-09-28T08:19:14-04:00" level=debug msg="Find attached Folder on tag"
time="2022-09-28T08:19:15-04:00" level=debug msg="Folder: Expected Folder sgao-rtf6v to be empty"
time="2022-09-28T08:19:25-04:00" level=debug msg="Power Off Virtual Machines"
time="2022-09-28T08:19:25-04:00" level=debug msg="Find attached VirtualMachine on tag"
time="2022-09-28T08:19:25-04:00" level=debug msg="Delete Virtual Machines"
time="2022-09-28T08:19:25-04:00" level=debug msg="Find attached VirtualMachine on tag"
time="2022-09-28T08:19:25-04:00" level=debug msg="Delete Folder"
time="2022-09-28T08:19:25-04:00" level=debug msg="Find attached Folder on tag"
time="2022-09-28T08:19:25-04:00" level=debug msg="Folder: Expected Folder sgao-rtf6v to be empty"
time="2022-09-28T08:19:35-04:00" level=debug msg="Power Off Virtual Machines"
time="2022-09-28T08:19:35-04:00" level=debug msg="Find attached VirtualMachine on tag"
time="2022-09-28T08:19:35-04:00" level=debug msg="Delete Virtual Machines"
time="2022-09-28T08:19:35-04:00" level=debug msg="Find attached VirtualMachine on tag"
time="2022-09-28T08:19:35-04:00" level=debug msg="Delete Folder"
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-09-25-071630
How reproducible:
Always, when the cluster install failed and no tag is attached to the VM
Steps to Reproduce:
1. cluster install failed and no tag attached to vm
2. run ./openshift-install destroy cluster
3.
Actual results:
installer destroy gets stuck
Expected results:
installer destroy should set a timeout and be able to quit in such a situation
Additional info:
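Until the installer enforces its own timeout, one hedged interim workaround is to bound the destroy run externally:

~~~
# Bound the destroy attempt to 30 minutes (value illustrative) so a stuck tag
# lookup cannot hang forever; rerun or clean up manually if it times out.
timeout 30m ./openshift-install destroy cluster --dir cluster-dir --log-level debug
~~~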
This is a clone of issue OCPBUGS-3499. The following is the description of the original issue:
—
Description of problem:
On clusters serving Route via CRD (i.e. MicroShift), Routes do not get the same validation as on OCP.
Version-Release number of selected component (if applicable):
How reproducible:
Always
Steps to Reproduce:
$ cat<<EOF | oc apply --server-side -f-
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: hello-microshift
spec:
  to:
    kind: Service
    name: hello-microshift
EOF
route.route.openshift.io/hello-microshift serverside-applied

$ oc get route hello-microshift -o yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  annotations:
    openshift.io/host.generated: "true"
  creationTimestamp: "2022-11-11T23:53:33Z"
  generation: 1
  name: hello-microshift
  namespace: default
  resourceVersion: "2659"
  uid: cd35cd20-b3fd-4d50-9912-f34b3935acfd
spec:
  host: hello-microshift-default.cluster.local
  to:
    kind: Service
    name: hello-microshift
  wildcardPolicy: None

$ cat<<EOF | oc apply --server-side -f-
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: hello-microshift
spec:
  to:
    kind: Service
    name: hello-microshift
  wildcardPolicy: ""
EOF
Actual results:
route.route.openshift.io/hello-microshift serverside-applied
Expected results:
The Route "hello-microshift" is invalid: spec.wildcardPolicy: Invalid value: "": field is immutable
Additional info:
** This change will be inert on OCP, which already has the correct behavior. **
In 4.12.0-rc.0 some API-server components declare flowcontrol/v1beta1 release manifests:
$ oc adm release extract --to manifests quay.io/openshift-release-dev/ocp-release:4.12.0-rc.0-x86_64
$ grep -r flowcontrol.apiserver.k8s.io manifests
manifests/0000_50_cluster-authentication-operator_09_flowschema.yaml:apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
manifests/0000_50_cluster-authentication-operator_09_flowschema.yaml:apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
manifests/0000_50_cluster-authentication-operator_09_flowschema.yaml:apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
manifests/0000_50_cluster-authentication-operator_09_flowschema.yaml:apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
manifests/0000_20_etcd-operator_10_flowschema.yaml:apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
manifests/0000_20_kube-apiserver-operator_08_flowschema.yaml:apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
manifests/0000_20_kube-apiserver-operator_08_flowschema.yaml:apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
manifests/0000_20_kube-apiserver-operator_08_flowschema.yaml:apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
manifests/0000_50_cluster-openshift-apiserver-operator_09_flowschema.yaml:apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
manifests/0000_50_cluster-openshift-apiserver-operator_09_flowschema.yaml:apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
manifests/0000_50_cluster-openshift-apiserver-operator_09_flowschema.yaml:apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
manifests/0000_50_cluster-openshift-controller-manager-operator_10_flowschema.yaml:apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
The APIs are scheduled for removal in Kube 1.26, which will ship with OpenShift 4.13. We want the 4.12 CVO to move to modern APIs in 4.12, so the APIRemovedInNext.*ReleaseInUse alerts are not firing on 4.12. This ticket tracks removing those manifests, or replacing them with a more modern resource type, or some such. Definition of done is that new 4.13 (and with backports, 4.12) nightlies no longer include flowcontrol.apiserver.k8s.io/v1beta1 manifests.
[It] clients should not use APIs that are removed in upcoming releases [apigroup:config.openshift.io] [Suite:openshift/conformance/parallel]
github.com/openshift/origin/test/extended/apiserver/api_requests.go:27
Nov 18 21:59:06.261: INFO: api flowschemas.v1beta1.flowcontrol.apiserver.k8s.io, removed in release 1.26, was accessed 254 times
Nov 18 21:59:06.261: INFO: api horizontalpodautoscalers.v2beta2.autoscaling, removed in release 1.26, was accessed 10 times
Nov 18 21:59:06.261: INFO: api prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io, removed in release 1.26, was accessed 22 times
Nov 18 21:59:06.261: INFO: user/system:serviceaccount:openshift-cluster-version:default accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 224 times
Nov 18 21:59:06.261: INFO: user/system:serviceaccount:openshift-cluster-version:default accessed prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io 22 times
Nov 18 21:59:06.261: INFO: user/system:serviceaccount:openshift-kube-storage-version-migrator:kube-storage-version-migrator-sa accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 16 times
Nov 18 21:59:06.261: INFO: user/system:admin accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 14 times
Nov 18 21:59:06.261: INFO: user/system:serviceaccount:openshift-monitoring:kube-state-metrics accessed horizontalpodautoscalers.v2beta2.autoscaling 10 times
Nov 18 21:59:06.261: INFO: api flowschemas.v1beta1.flowcontrol.apiserver.k8s.io, removed in release 1.26, was accessed 254 times
api horizontalpodautoscalers.v2beta2.autoscaling, removed in release 1.26, was accessed 10 times
api prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io, removed in release 1.26, was accessed 22 times
user/system:admin accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 14 times
user/system:serviceaccount:openshift-cluster-version:default accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 224 times
user/system:serviceaccount:openshift-cluster-version:default accessed prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io 22 times
user/system:serviceaccount:openshift-kube-storage-version-migrator:kube-storage-version-migrator-sa accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 16 times
user/system:serviceaccount:openshift-monitoring:kube-state-metrics accessed horizontalpodautoscalers.v2beta2.autoscaling 10 times
Nov 18 21:59:06.261: INFO: api flowschemas.v1beta1.flowcontrol.apiserver.k8s.io, removed in release 1.26, was accessed 254 times
api horizontalpodautoscalers.v2beta2.autoscaling, removed in release 1.26, was accessed 10 times
api prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io, removed in release 1.26, was accessed 22 times
user/system:admin accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 14 times
user/system:serviceaccount:openshift-cluster-version:default accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 224 times
user/system:serviceaccount:openshift-cluster-version:default accessed prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io 22 times
user/system:serviceaccount:openshift-kube-storage-version-migrator:kube-storage-version-migrator-sa accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 16 times
user/system:serviceaccount:openshift-monitoring:kube-state-metrics accessed horizontalpodautoscalers.v2beta2.autoscaling 10 times
[AfterEach] [sig-arch][Late]
github.com/openshift/origin/test/extended/util/client.go:158
[AfterEach] [sig-arch][Late]
github.com/openshift/origin/test/extended/util/client.go:159
flake: api flowschemas.v1beta1.flowcontrol.apiserver.k8s.io, removed in release 1.26, was accessed 254 times
api horizontalpodautoscalers.v2beta2.autoscaling, removed in release 1.26, was accessed 10 times
api prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io, removed in release 1.26, was accessed 22 times
user/system:admin accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 14 times
user/system:serviceaccount:openshift-cluster-version:default accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 224 times
user/system:serviceaccount:openshift-cluster-version:default accessed prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io 22 times
user/system:serviceaccount:openshift-kube-storage-version-migrator:kube-storage-version-migrator-sa accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 16 times
user/system:serviceaccount:openshift-monitoring:kube-state-metrics accessed horizontalpodautoscalers.v2beta2.autoscaling 10 times
Ginkgo exit error 4: exit with code 4
This is required to unblock https://github.com/openshift/origin/pull/27561
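For illustration, the fix for each manifest is essentially an apiVersion bump to a version that survives Kube 1.26 (sketch; the exact target is whichever version the 4.12 API server serves, e.g. v1beta2):

~~~
# Before (removed in Kubernetes 1.26):
#   apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
# After (sketch):
apiVersion: flowcontrol.apiserver.k8s.io/v1beta2
kind: FlowSchema
~~~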
This is a clone of issue OCPBUGS-4700. The following is the description of the original issue:
—
Description of problem:
In at least 4.12.0-rc.0, a user with read-only access to ClusterVersion can see an "Update blocked" pop-up talking about "...alert above the visualization...". It is referencing a banner about "This cluster should not be updated to the next minor version...", but that banner is not displayed because hasPermissionsToUpdate is false, so canPerformUpgrade is false.
Version-Release number of selected component (if applicable):
4.12.0-rc.0. Likely more. I haven't traced it out.
How reproducible:
Always.
Steps to Reproduce:
1. Install 4.12.0-rc.0
2. Create a user with cluster-wide read-only permissions. For me, it's via binding to a sudoer ClusterRole. I'm not sure where that ClusterRole comes from, but it's:
$ oc get -o yaml clusterrole sudoer
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: "2020-05-21T19:39:09Z"
  name: sudoer
  resourceVersion: "7715"
  uid: 28eb2ffa-dccd-47e8-a2d5-6a95e0e8b1e9
rules:
- apiGroups:
  - ""
  - user.openshift.io
  resourceNames:
  - system:admin
  resources:
  - systemusers
  - users
  verbs:
  - impersonate
- apiGroups:
  - ""
  - user.openshift.io
  resourceNames:
  - system:masters
  resources:
  - groups
  - systemgroups
  verbs:
  - impersonate
3. View /settings/cluster
Actual results:
See the "Update blocked" pop-up talking about "...alert above the visualization...".
Expected results:
Something more internally consistent. E.g. having the referenced banner "...alert above the visualization..." show up, or not having the "Update blocked" pop-up reference the non-existent banner.
This is a clone of issue OCPBUGS-6270. The following is the description of the original issue:
—
Similar to how the baremetal platform, due to install-config validation, previously required a number of fields that are actually ignored (OCPBUGS-3278), we require values for the following fields in the platform.vsphere section:
None of these values are actually used in the agent-based installer at present, and they should not be required.
Users can work around this by specifying dummy values in the platform config (note that the VIP values are required and must be genuine):
platform:
  vsphere:
    apiVIP: 192.168.111.1
    ingressVIP: 192.168.111.2
    vCenter: a
    username: b
    password: c
    datacenter: d
    defaultDatastore: e
This relates to the recovery of a cluster following an etcd outage.
The ingress path to kube-apiserver is:
───────────> VIP ─────────────────> Local HAProxy ────┬─> kube-apiserver-master-0
      (managed by keepalived)                          │
                                                       ├─> kube-apiserver-master-1
                                                       │
                                                       └─> kube-apiserver-master-2
Each master is running an HAProxy which load balances between the 3 kube-apiservers. Each HAProxy is running health checks against each kube-apiserver, and will add or remove it from the available pool based on its health.
We only use keepalived to ensure that HAProxy is not a single point of failure. It is the job of keepalived to ensure that incoming traffic is being directed to an HAProxy which is functioning correctly.
The current health check we are using for keepalived involves polling /readyz against the local HAProxy. While this seems intuitively correct it is in fact testing the wrong thing. It is testing whether the kube-apiserver it connects to is functioning correctly. However, this is not the purpose of keepalived. HAProxy runs health checks against kube-apiserver backends. keepalived simply selects a correctly functioning HAProxy.
This becomes important during recovery from an outage. When none of the kube-apiservers are healthy this health check will fail continuously, and the API VIP will move uselessly between masters. However the situation is much worse when only one of the kube-apiservers is up. In this case there is a high probability that it is overloaded and at least rate limiting incoming connections. This may lead us to fail the keepalived health check and fail the VIP over to the next HAProxy. This will cause all open kube-apiserver connections to reset, even the established ones. This increases the load on the kube-apiserver and increases the probability that the health check will fail again.
Ideally the keepalived health check would check only the health of HAProxy itself, not the health of the pool of kube-apiservers. In practice it will probably never be necessary to move the VIP while the master is up, regardless of the health of the cluster. A network partition affecting HAProxy would already be handled by VRRP between the masters, so it may be sufficient to check that the local HAProxy pod is healthy.
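A sketch of that narrower check, written as it might appear in the keepalived config (the check itself is illustrative and only verifies that the local HAProxy process is alive; backend health stays HAProxy's responsibility):

~~~
cat <<'EOF' >> /etc/keepalived/keepalived.conf
vrrp_script chk_haproxy {
    # Only confirm the local HAProxy is running; do not probe the kube-apiserver pool.
    script "/usr/bin/pidof haproxy"
    interval 2
    fall 3
    rise 2
}
EOF
~~~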
Description of problem:
Upgrade OCP 4.11 --> 4.12 fails with one 'NotReady,SchedulingDisabled' node and MachineConfigDaemonFailed.
Version-Release number of selected component (if applicable):
Upgrade from OCP 4.11.0-0.nightly-2022-09-19-214532 on top of OSP RHOS-16.2-RHEL-8-20220804.n.1 to 4.12.0-0.nightly-2022-09-20-040107. Network Type: OVNKubernetes
How reproducible:
Twice out of two attempts.
Steps to Reproduce:
1. Install OCP 4.11.0-0.nightly-2022-09-19-214532 (IPI) on top of OSP RHOS-16.2-RHEL-8-20220804.n.1. The cluster is up and running with three workers:

$ oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.11.0-0.nightly-2022-09-19-214532   True        False         51m     Cluster version is 4.11.0-0.nightly-2022-09-19-214532

2. Run the OC command to upgrade to 4.12.0-0.nightly-2022-09-20-040107:

$ oc adm upgrade --to-image=registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-09-20-040107 --allow-explicit-upgrade --force=true
warning: Using by-tag pull specs is dangerous, and while we still allow it in combination with --force for backward compatibility, it would be much safer to pass a by-digest pull spec instead
warning: The requested upgrade image is not one of the available updates. You have used --allow-explicit-upgrade for the update to proceed anyway
warning: --force overrides cluster verification of your supplied release image and waives any update precondition failures.
Requesting update to release image registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-09-20-040107

3. The upgrade does not succeed: [0]

$ oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.11.0-0.nightly-2022-09-19-214532   True        True          17h     Unable to apply 4.12.0-0.nightly-2022-09-20-040107: wait has exceeded 40 minutes for these operators: network

One node degraded to 'NotReady,SchedulingDisabled' status:

$ oc get nodes
NAME                          STATUS                        ROLES    AGE   VERSION
ostest-9vllk-master-0         Ready                         master   19h   v1.24.0+07c9eb7
ostest-9vllk-master-1         Ready                         master   19h   v1.24.0+07c9eb7
ostest-9vllk-master-2         Ready                         master   19h   v1.24.0+07c9eb7
ostest-9vllk-worker-0-4x4pt   NotReady,SchedulingDisabled   worker   18h   v1.24.0+3882f8f
ostest-9vllk-worker-0-h6kcs   Ready                         worker   18h   v1.24.0+3882f8f
ostest-9vllk-worker-0-xhz9b   Ready                         worker   18h   v1.24.0+3882f8f

$ oc get pods -A | grep -v -e Completed -e Running
NAMESPACE                   NAME                                  READY   STATUS     RESTARTS   AGE
openshift-openstack-infra   coredns-ostest-9vllk-worker-0-4x4pt   0/2     Init:0/1   0          18h

$ oc get events
LAST SEEN   TYPE      REASON                                        OBJECT            MESSAGE
7m15s       Warning   OperatorDegraded: MachineConfigDaemonFailed   /machine-config   Unable to apply 4.12.0-0.nightly-2022-09-20-040107: failed to apply machine config daemon manifests: error during waitForDaemonsetRollout: [timed out waiting for the condition, daemonset machine-config-daemon is not ready. status: (desired: 6, updated: 6, ready: 5, unavailable: 1)]
7m15s       Warning   MachineConfigDaemonFailed                     /machine-config   Cluster not available for [{operator 4.11.0-0.nightly-2022-09-19-214532}]: failed to apply machine config daemon manifests: error during waitForDaemonsetRollout: [timed out waiting for the condition, daemonset machine-config-daemon is not ready. status: (desired: 6, updated: 6, ready: 5, unavailable: 1)]

$ oc get co
NAME                                       VERSION                              AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
authentication                             4.12.0-0.nightly-2022-09-20-040107   True        False         False      18h
baremetal                                  4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h
cloud-controller-manager                   4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h
cloud-credential                           4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h
cluster-autoscaler                         4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h
config-operator                            4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h
console                                    4.12.0-0.nightly-2022-09-20-040107   True        False         False      18h
control-plane-machine-set                  4.12.0-0.nightly-2022-09-20-040107   True        False         False      17h
csi-snapshot-controller                    4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h
dns                                        4.12.0-0.nightly-2022-09-20-040107   True        True          False      19h     DNS "default" reports Progressing=True: "Have 5 available node-resolver pods, want 6."
etcd                                       4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h
image-registry                             4.12.0-0.nightly-2022-09-20-040107   True        True          False      18h     Progressing: The registry is ready...
ingress                                    4.12.0-0.nightly-2022-09-20-040107   True        False         False      18h
insights                                   4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h
kube-apiserver                             4.12.0-0.nightly-2022-09-20-040107   True        True          False      18h     NodeInstallerProgressing: 1 nodes are at revision 11; 2 nodes are at revision 13
kube-controller-manager                    4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h
kube-scheduler                             4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h
kube-storage-version-migrator              4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h
machine-api                                4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h
machine-approver                           4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h
machine-config                             4.11.0-0.nightly-2022-09-19-214532   False       True          True       16h     Cluster not available for [{operator 4.11.0-0.nightly-2022-09-19-214532}]: failed to apply machine config daemon manifests: error during waitForDaemonsetRollout: [timed out waiting for the condition, daemonset machine-config-daemon is not ready. status: (desired: 6, updated: 6, ready: 5, unavailable: 1)]
marketplace                                4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h
monitoring                                 4.12.0-0.nightly-2022-09-20-040107   True        False         False      18h
network                                    4.12.0-0.nightly-2022-09-20-040107   True        True          True       19h     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2022-09-20T14:16:13Z...
node-tuning                                4.12.0-0.nightly-2022-09-20-040107   True        False         False      17h
openshift-apiserver                        4.12.0-0.nightly-2022-09-20-040107   True        False         False      18h
openshift-controller-manager               4.12.0-0.nightly-2022-09-20-040107   True        False         False      17h
openshift-samples                          4.12.0-0.nightly-2022-09-20-040107   True        False         False      17h
operator-lifecycle-manager                 4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h
operator-lifecycle-manager-catalog         4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h
operator-lifecycle-manager-packageserver   4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h
service-ca                                 4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h
storage                                    4.12.0-0.nightly-2022-09-20-040107   True        True          False      19h     ManilaCSIDriverOperatorCRProgressing: ManilaDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods...

[0] http://pastebin.test.redhat.com/1074531
Actual results:
OCP 4.11 --> 4.12 upgrade fails.
Expected results:
OCP 4.11 --> 4.12 upgrade success.
Additional info:
Attached logs of the NotReady node - [^journalctl_ostest-9vllk-worker-0-4x4pt.log.tar.gz]
This is a clone of issue OCPBUGS-95. The following is the description of the original issue:
—
In an OpenShift cluster with the OpenShiftSDN network plugin, with egressIP and the NMState operator configured, there are conditions under which the egressIP gets deconfigured from the network interface.
The bug is 100% reproducible.
Steps for reproducing the issue are:
1. Install a cluster with OpenShiftSDN network plugin.
2. Configure egressip for a project.
3. Install NMstate operator.
4. Create a NodeNetworkConfigurationPolicy.
5. Identify on which node the egressIP is present.
6. Restart the nmstate-handler pod running on the identified node.
7. Verify that the egressIP is no more present.
Restarting the sdn pod related to the identified node will reconfigure the egressIP in the node.
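That workaround can be run as follows (the node name is a placeholder):

~~~
# Restart the OpenShiftSDN pod on the node that lost the egress IP so it gets reprogrammed.
oc -n openshift-sdn delete pod -l app=sdn --field-selector spec.nodeName=<node-name>
~~~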
This issue has a high impact, since any change handled by the NMState operator will disrupt application traffic. For example, in the customer environment the issue is triggered any time a new node is added to the cluster.
The expectation is that NMstate operator should not interfere with SDN configuration.
Description of problem:
When scaling down the MachineSet for worker nodes, a PV (vmdk) file got deleted.
Version-Release number of selected component (if applicable):
4.10
How reproducible:
N/A
Steps to Reproduce:
1. Scale down worker nodes
2. Check VMware logs: the VM gets deleted with the vmdk still attached
Actual results:
After scaling down nodes, volumes still attached to the VM get deleted alongside the VM
Expected results:
Worker nodes scaled down without any accidental deletion
Additional info:
Description of problem:
The service project and the host project both have a private DNS zone named "ipi-xpn-private-zone". Although platform.gcp.privateDNSZone.project is set to the host project, the installer checks the zone in the service project and complains that the DNS name does not match.
Version-Release number of selected component (if applicable):
$ openshift-install version
openshift-install 4.12.0-0.nightly-2022-10-25-210451
built from commit 14d496fdaec571fa97604a487f5df6a0433c0c68
release image registry.ci.openshift.org/ocp/release@sha256:d6cc07402fee12197ca1a8592b5b781f9f9a84b55883f126d60a3896a36a9b74
release architecture amd64
How reproducible:
Always, if both the service project and the host project have a private DNS zone with the same name.
Steps to Reproduce:
1. try IPI installation to a shared VPC, using "privateDNSZone" of the host project
Actual results:
$ openshift-install create cluster --dir test7
INFO Credentials loaded from file "/home/fedora/.gcp/osServiceAccount.json"
ERROR failed to fetch Metadata: failed to load asset "Install Config": failed to create install config: platform.gcp.privateManagedZone: Invalid value: "ipi-xpn-private-zone": dns zone jiwei-1026a.qe1.gcp.devcluster.openshift.com. did not match expected jiwei-1027a.qe-shared-vpc.qe.gcp.devcluster.openshift.com
$
Expected results:
The installer should check the private zone in the specified project (i.e. the host project).
Additional info:
$ yq-3.3.0 r test7/install-config.yaml platform gcp: projectID: openshift-qe region: us-central1 computeSubnet: installer-shared-vpc-subnet-2 controlPlaneSubnet: installer-shared-vpc-subnet-1 createFirewallRules: Disabled publicDNSZone: id: qe-shared-vpc project: openshift-qe-shared-vpc privateDNSZone: id: ipi-xpn-private-zone project: openshift-qe-shared-vpc network: installer-shared-vpc networkProjectID: openshift-qe-shared-vpc $ yq-3.3.0 r test7/install-config.yaml baseDomain qe-shared-vpc.qe.gcp.devcluster.openshift.com $ yq-3.3.0 r test7/install-config.yaml metadata creationTimestamp: null name: jiwei-1027a $ $ openshift-install create cluster --dir test7 INFO Credentials loaded from file "/home/fedora/.gcp/osServiceAccount.json" ERROR failed to fetch Metadata: failed to load asset "Install Config": failed to create install config: platform.gcp.privateManagedZone: Invalid value: "ipi-xpn-private-zone": dns zone jiwei-1026a.qe1.gcp.devcluster.openshift.com. did not match expected jiwei-1027a.qe-shared-vpc.qe.gcp.devcluster.openshift.com $ $ gcloud --project openshift-qe-shared-vpc dns managed-zones list --filter='name=qe-shared-vpc' NAME DNS_NAME DESCRIPTION VISIBILITY qe-shared-vpc qe-shared-vpc.qe.gcp.devcluster.openshift.com. public $ gcloud --project openshift-qe-shared-vpc dns managed-zones list --filter='name=ipi-xpn-private-zone' NAME DNS_NAME DESCRIPTION VISIBILITY ipi-xpn-private-zone jiwei-1027a.qe-shared-vpc.qe.gcp.devcluster.openshift.com. Preserved private zone for IPI XPN private $ gcloud dns managed-zones list --filter='name=ipi-xpn-private-zone' NAME DNS_NAME DESCRIPTION VISIBILITY ipi-xpn-private-zone jiwei-1026a.qe1.gcp.devcluster.openshift.com. Preserved private zone for IPI XPN private $ $ gcloud --project openshift-qe-shared-vpc dns managed-zones describe qe-shared-vpc cloudLoggingConfig: kind: dns#managedZoneCloudLoggingConfig creationTime: '2020-04-26T02:50:25.172Z' description: '' dnsName: qe-shared-vpc.qe.gcp.devcluster.openshift.com. id: '7036327024919173373' kind: dns#managedZone name: qe-shared-vpc nameServers: - ns-cloud-b1.googledomains.com. - ns-cloud-b2.googledomains.com. - ns-cloud-b3.googledomains.com. - ns-cloud-b4.googledomains.com. visibility: public $ $ gcloud --project openshift-qe-shared-vpc dns managed-zones describe ipi-xpn-private-zone cloudLoggingConfig: kind: dns#managedZoneCloudLoggingConfig creationTime: '2022-10-27T08:05:18.332Z' description: Preserved private zone for IPI XPN dnsName: jiwei-1027a.qe-shared-vpc.qe.gcp.devcluster.openshift.com. id: '5506116785330943369' kind: dns#managedZone name: ipi-xpn-private-zone nameServers: - ns-gcp-private.googledomains.com. privateVisibilityConfig: kind: dns#managedZonePrivateVisibilityConfig networks: - kind: dns#managedZonePrivateVisibilityConfigNetwork networkUrl: https://www.googleapis.com/compute/v1/projects/openshift-qe-shared-vpc/global/networks/installer-shared-vpc visibility: private $ $ gcloud dns managed-zones describe ipi-xpn-private-zone cloudLoggingConfig: kind: dns#managedZoneCloudLoggingConfig creationTime: '2022-10-26T06:42:52.268Z' description: Preserved private zone for IPI XPN dnsName: jiwei-1026a.qe1.gcp.devcluster.openshift.com. id: '7663537481778983285' kind: dns#managedZone name: ipi-xpn-private-zone nameServers: - ns-gcp-private.googledomains.com. 
privateVisibilityConfig: kind: dns#managedZonePrivateVisibilityConfig networks: - kind: dns#managedZonePrivateVisibilityConfigNetwork networkUrl: https://www.googleapis.com/compute/v1/projects/openshift-qe-shared-vpc/global/networks/installer-shared-vpc visibility: private $
This is a clone of issue OCPBUGS-4089. The following is the description of the original issue:
—
The kube-state-metrics pod inside the openshift-monitoring namespace is not running as expected.
On checking the logs, I can see that there is a panic (nil pointer dereference):
~~~
2022-11-22T09:57:17.901790234Z I1122 09:57:17.901768 1 main.go:199] Starting kube-state-metrics self metrics server: 127.0.0.1:8082
2022-11-22T09:57:17.901975837Z I1122 09:57:17.901951 1 main.go:66] levelinfomsgTLS is disabled.http2false
2022-11-22T09:57:17.902389844Z I1122 09:57:17.902291 1 main.go:210] Starting metrics server: 127.0.0.1:8081
2022-11-22T09:57:17.903191857Z I1122 09:57:17.903133 1 main.go:66] levelinfomsgTLS is disabled.http2false
2022-11-22T09:57:17.906272505Z I1122 09:57:17.906224 1 builder.go:191] Active resources: certificatesigningrequests,configmaps,cronjobs,daemonsets,deployments,endpoints,horizontalpodautoscalers,ingresses,jobs,leases,limitranges,mutatingwebhookconfigurations,namespaces,networkpolicies,nodes,persistentvolumeclaims,persistentvolumes,poddisruptionbudgets,pods,replicasets,replicationcontrollers,resourcequotas,secrets,services,statefulsets,storageclasses,validatingwebhookconfigurations,volumeattachments
2022-11-22T09:57:17.917758187Z E1122 09:57:17.917560 1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
2022-11-22T09:57:17.917758187Z goroutine 24 [running]:
2022-11-22T09:57:17.917758187Z k8s.io/apimachinery/pkg/util/runtime.logPanic(
)
2022-11-22T09:57:17.917758187Z /usr/lib/golang/src/runtime/panic.go:1038 +0x215
2022-11-22T09:57:17.917758187Z k8s.io/kube-state-metrics/v2/internal/store.ingressMetricFamilies.func6(0x40)
2022-11-22T09:57:17.917758187Z /go/src/k8s.io/kube-state-metrics/internal/store/ingress.go:136 +0x189
2022-11-22T09:57:17.917758187Z k8s.io/kube-state-metrics/v2/internal/store.wrapIngressFunc.func1(
)
2022-11-22T09:57:17.917758187Z /go/src/k8s.io/kube-state-metrics/pkg/metric_generator/generator.go:107 +0xd8
~~~
Logs are attached to the support case
This is a clone of issue OCPBUGS-1805. The following is the description of the original issue:
—
The vSphere CSI cloud.conf lists the single datacenter from the platform workspace config, but in a multi-zone setup (https://github.com/openshift/enhancements/pull/918) there may be more than one datacenter.
This issue results in PVs failing to attach because the virtual machines can't be found in any other datacenter. For example:
0s Warning FailedAttachVolume pod/image-registry-85b5d5db54-m78vp AttachVolume.Attach failed for volume "pvc-ab1a0611-cb3b-418d-bb3b-1e7bbe2a69ed" : rpc error: code = Internal desc = failed to find VirtualMachine for node:"rbost-zonal-ghxp2-worker-3-xm7gw". Error: virtual machine wasn't found
The machine above lives in datacenter-2 but the CSI cloud.conf is only aware of the datacenter IBMCloud.
$ oc get cm vsphere-csi-config -o yaml -n openshift-cluster-csi-drivers | grep datacenters
datacenters = "IBMCloud"
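For reference, this config key accepts a comma-separated list, which is roughly what the operator would need to render in a multi-zone setup (the second datacenter name below is illustrative):

~~~
# Sketch of the desired cloud.conf fragment listing every datacenter that hosts nodes:
datacenters = "IBMCloud,datacenter-2"
~~~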
This is a clone of issue OCPBUGS-5458. The following is the description of the original issue:
—
reported in https://coreos.slack.com/archives/C027U68LP/p1673010878672479
Description of problem:
I have an OpenShift cluster that was upgraded from version 4.8 to version 4.9.58. After the upgrade, the etcd pod on master1 isn't coming up and is crashlooping, and it gives the following error:

{"level":"fatal","ts":"2023-01-06T12:12:58.709Z","caller":"etcdmain/etcd.go:204","msg":"discovery failed","error":"wal: max entry size limit exceeded, recBytes: 13279, fileSize(313430016) - offset(313418480) - padBytes(1) = entryLimit(11535)","stacktrace":"go.etcd.io/etcd/server/v3/etcdmain.startEtcdOrProxyV2\n\t/remote-source/cachito-gomod-with-deps/app/server/etcdmain/etcd.go:204\ngo.etcd.io/etcd/server/v3/etcdmain.Main\n\t/remote-source/cachito-gomod-with-deps/app/server/etcdmain/main.go:40\nmain.main\n\t/remote-source/cachito-gomod-with-deps/app/server/main.go:32\nruntime.main\n\t/usr/lib/golang/src/runtime/proc.go:225"}
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
This is a clone of issue OCPBUGS-723. The following is the description of the original issue:
—
Description of problem:
I have a customer who created a clusterquota for one of their namespaces; it got created, but the values were not reflected under the limits and the namespace details were not displayed.
~~~
$ oc describe AppliedClusterResourceQuota
Name: test-clusterquota
Created: 19 minutes ago
Labels: size=custom
Annotations: <none>
Namespace Selector: []
Label Selector:
AnnotationSelector: map[openshift.io/requester:system:serviceaccount:application-service-accounts:test-sa]
Scopes: NotTerminating
Resource Used Hard
-------- ---- ----
~~~
WORKAROUND: They recreated the clusterquota object (saved it off, deleted it, created it anew), after which it displayed values as expected.
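That workaround, expressed as commands (object name taken from the report above):

~~~
oc get clusterresourcequota test-clusterquota -o yaml > test-clusterquota.yaml
# strip server-set metadata (resourceVersion, uid, creationTimestamp) from the saved file
oc delete clusterresourcequota test-clusterquota
oc create -f test-clusterquota.yaml
~~~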
In the past, they saw similar behavior on their test cluster. That cluster was heavily utilized: the etcd DB was much larger in size (>2.5Gi) and had many more objects (at that time, Helm secrets were being cached for all deployments with a history of 10, so etcd was being bombarded).
On this cluster the same "symptom" was noticed, however etcd was nowhere near that size, nor that number of etcd objects and/or cached Helm secrets.
Version-Release number of selected component (if applicable): OCP 4.9
How reproducible: Occurred only twice (once in the test cluster and once in the current cluster)
Steps to Reproduce:
1. Create ClusterQuota
2. Check AppliedClusterResourceQuota
3. The values and namespace are empty
Actual results: ClusterQuota is not displaying values
Expected results: ClusterQuota should display the values
We rely on the user providing accurate information about the MAC addresses in the agent-config, because at the point we read it we haven't seen the hosts yet. However, if the user gets this wrong then chaos may ensue.
Once inventory is available, we should validate that the user has not:
and fail the install if they have.
Description of problem:
The Pod and PDB list pages just report "Not found" when no resources are found
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-10-15-094115
How reproducible:
Always
Steps to Reproduce:
1. A normal user has a new empty project
2. The normal user visits the PDB list page via Workloads -> PodDisruptionBudgets
Actual results:
2. it just reports 'Not found'
Expected results:
2. For other workloads, it reports "No <resource> found", for example "No HorizontalPodAutoscalers found", "No StatefulSets found", "No Deployments found". So for the Pods and PodDisruptionBudgets list pages, when no resource can be found, it would be better to also report "No Pods found" and "No PodDisruptionBudgets found".
Additional info:
This is a clone of issue OCPBUGS-2144. The following is the description of the original issue:
—
Description of problem:
Azure IPI now creates boot images using the image gallery API; it creates two image definition resources, one for hyperVGeneration V1 and one for V2. For an arm64 cluster, the architecture in the hyperVGeneration V1 image definition is x64, but it should be Arm64.
Version-Release number of selected component (if applicable):
./openshift-install version
./openshift-install 4.12.0-0.nightly-arm64-2022-10-07-204251
built from commit 7b739cde1e0239c77fabf7622e15025d32fc272c
release image registry.ci.openshift.org/ocp-arm64/release-arm64@sha256:d2569be4ba276d6474aea016536afbad1ce2e827b3c71ab47010617a537a8b11
release architecture arm64
How reproducible:
always
Steps to Reproduce:
1. Create an arm64 cluster using the latest arm64 nightly build
2. Check the image definition created for hyperVGeneration V1
Actual results:
The architecture field is x64:

$ az sig image-definition show --gallery-name ${gallery_name} --gallery-image-definition lwanazarm1008-rc8wh --resource-group ${rg} | jq -r ".architecture"
x64

The image version under this image definition is for aarch64:

$ az sig image-version show --gallery-name gallery_lwanazarm1008_rc8wh --gallery-image-definition lwanazarm1008-rc8wh --resource-group lwanazarm1008-rc8wh-rg --gallery-image-version 412.86.20220922 | jq -r ".storageProfile.osDiskImage.source"
{
  "uri": "https://clustermuygq.blob.core.windows.net/vhd/rhcosmuygq.vhd"
}

$ az storage blob show --container-name vhd --name rhcosmuygq.vhd --account-name clustermuygq --account-key $account_key | jq -r ".metadata"
{
  "Source_uri": "https://rhcos.blob.core.windows.net/imagebucket/rhcos-412.86.202209220538-0-azure.aarch64.vhd"
}
Expected results:
Although no VMs with hyperVGeneration V1 can be provisioned, the architecture field should be Arm64 even for hyperVGeneration V1 image definitions.
Additional info:
1. The architecture in the hyperVGeneration V2 image definition is Arm64, and the installer uses V2 by default for arm64 vm_type, so the installation does not fail by default. But we still need to make the architecture consistent in V1.
2. We need to set the architecture field for both V1 and V2; currently we only set the architecture in the V2 image definition resource. https://github.com/openshift/installer/blob/master/data/data/azure/vnet/main.tf#L100-L128
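If the installer's Terraform is fixed, the end state should be roughly equivalent to creating the V1 definition with an explicit architecture, e.g. (sketch only; the --architecture flag requires a recent azure-cli, and the publisher/offer/sku values are illustrative):

~~~
az sig image-definition create \
  --resource-group "${rg}" \
  --gallery-name "${gallery_name}" \
  --gallery-image-definition lwanazarm1008-rc8wh \
  --publisher RedHat --offer rhcos --sku gen1 \
  --os-type Linux \
  --hyper-v-generation V1 \
  --architecture Arm64
~~~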
This is a clone of issue OCPBUGS-3621. The following is the description of the original issue:
—
Description of problem:
EUS-to-EUS upgrade (4.10.38 -> 4.11.13 -> 4.12.0-rc.0): after the control-plane nodes are upgraded to 4.12 successfully, unpause the worker pool to get the worker nodes updated. But the worker nodes fail to update and the worker pool is degraded:
```
# ./oc get node
NAME                                                   STATUS                      ROLES    AGE     VERSION
jliu410-6hmkz-master-0.c.openshift-qe.internal         Ready                       master   4h40m   v1.25.2+f33d98e
jliu410-6hmkz-master-1.c.openshift-qe.internal         Ready                       master   4h40m   v1.25.2+f33d98e
jliu410-6hmkz-master-2.c.openshift-qe.internal         Ready                       master   4h40m   v1.25.2+f33d98e
jliu410-6hmkz-worker-a-xdwvv.c.openshift-qe.internal   Ready,SchedulingDisabled    worker   4h31m   v1.23.12+6b34f32
jliu410-6hmkz-worker-b-9hnb8.c.openshift-qe.internal   Ready                       worker   4h31m   v1.23.12+6b34f32
jliu410-6hmkz-worker-c-bdv4f.c.openshift-qe.internal   Ready                       worker   4h31m   v1.23.12+6b34f32
...

# ./oc get co machine-config
machine-config   4.12.0-rc.0   True   False   True   3h41m   Failed to resync 4.12.0-rc.0 because: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, error pool worker is not ready, retrying. Status: (pool degraded: true total: 3, ready 0, updated: 0, unavailable: 1)]
...

# ./oc get mcp
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-b81233204496767f2fe32fbb6cb088e1   True      False      False      3              3                   3                     0                      4h10m
worker   rendered-worker-a2caae543a144d94c17a27e56038d4c4   False     True       True       3              0                   0                     1                      4h10m
...

# ./oc describe mcp worker
Message:
Reason:
Status:                True
Type:                  Degraded
Last Transition Time:  2022-11-14T07:19:42Z
Message:               Node jliu410-6hmkz-worker-a-xdwvv.c.openshift-qe.internal is reporting: "Error checking type of update image: error running skopeo inspect --no-tags --retry-times 5 --authfile /var/lib/kubelet/config.json docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c01b0ae9870dbee5609c52b4d649334ce6854fff1237f1521929d151f6876daa: exit status 1\ntime=\"2022-11-14T07:42:47Z\" level=fatal msg=\"unknown flag: --no-tags\"\n"
Reason:                1 nodes are reporting degraded status on sync
Status:                True
Type:                  NodeDegraded
...

# ./oc logs machine-config-daemon-mg2zn
E1114 08:11:27.115577  192836 writer.go:200] Marking Degraded due to: Error checking type of update image: error running skopeo inspect --no-tags --retry-times 5 --authfile /var/lib/kubelet/config.json docker://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c01b0ae9870dbee5609c52b4d649334ce6854fff1237f1521929d151f6876daa: exit status 1
time="2022-11-14T08:11:25Z" level=fatal msg="unknown flag: --no-tags"
```
Version-Release number of selected component (if applicable):
4.12.0-rc.0
How reproducible:
Steps to Reproduce:
1. EUS upgrade along the path 4.10.38 -> 4.11.13 -> 4.12.0-rc.0 with a paused worker pool
2. After the master pool upgrade succeeds, unpause the worker pool
Actual results:
Worker pool upgrade failed
Expected results:
Worker pool upgrade succeed
Additional info:
Description of problem:
During OCP multinode spoke cluster creation, agent provisioning is stuck on "configuring" because the machine-config service is crashing on the node.
After restarting the service, it still fails with:
Can't read link "/var/lib/containers/storage/overlay/l/V2OP2CCVMKSOHK2XICC546DUCG" because it does not exist. A storage corruption might have occurred, attempting to recreate the missing symlinks. It might be best wipe the storage to avoid further errors due to storage corruption.
Version-Release number of selected component (if applicable):
Podman 4.0.2 +
How reproducible:
sometimes
Steps to Reproduce:
1. Deploy a multinode spoke (iPXE + boot order)
Actual results:
4 agents in done state and 1 is in "configuring"
Expected results:
all agents are in "done" state
Additional info:
issue mentioned in https://github.com/containers/podman/issues/14003
Fix: https://github.com/containers/storage/issues/1136
Description of problem:
The `create a project` link is enabled for users who do not have permission to create a project. This issue surfaces itself in the developer sandbox.
Version-Release number of selected component (if applicable):
4.11.5
How reproducible:
Steps to Reproduce:
1. Log into the dev sandbox, or a cluster where the user does not have permission to create a project
2. Go directly to the URL /topology/all-namespaces
Actual results:
The `create a project` link is enabled. Upon clicking the link and submitting the form, the project fails to create, as expected.
Expected results:
`create a project` link should only be available to users with the correct permissions.
Additional info:
The project list pages are not directly available to the user in the UI through the project selector. The user must go directly to the URL. It's possible to encounter this situation when a user logs in with multiple accounts and returns to a previous url.
This is a clone of issue OCPBUGS-4997. The following is the description of the original issue:
—
The fix for OCPBUGS-3382 ensures that we pass the proxy settings from the install-config through to the final cluster. However, nothing in the agent ISO itself uses proxy settings (at least until bootstrapping starts).
It is probably less likely for the agent-based installer that proxies will be needed than e.g. for assisted (where agents running on-prem need to call back to assisted-service in the cloud), but we should be consistent about using any proxy config provided. There may certainly be cases where the registry is only reachable via a proxy.
This can be easily set system-wide by configuring default environment variables in the systemd config. An example (from the bootstrap ignition) is: https://github.com/openshift/installer/blob/master/data/data/bootstrap/files/etc/systemd/system.conf.d/10-default-env.conf.template
Note that currently the agent service explicitly overrides these environment variables to be empty, so that will have to be cleared.
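A sketch of the system-wide setting this implies, mirroring the bootstrap example linked above (the proxy endpoints are illustrative):

~~~
# Make the proxy settings the default environment for every systemd-managed service on the agent ISO.
cat <<'EOF' > /etc/systemd/system.conf.d/10-default-env.conf
[Manager]
DefaultEnvironment=HTTP_PROXY=http://proxy.example.com:3128 HTTPS_PROXY=http://proxy.example.com:3128 NO_PROXY=localhost,127.0.0.1
EOF
~~~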
Description of problem:
`console.openshift.io/use-i18n: false` in the v1alpha API is converted to "" in the v1 API, which is not a valid value for the enum type declared in the code.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-09-25-071630
How reproducible:
Always
Steps to Reproduce:
1. Load a dynamic plugin with the v1alpha API console.openshift.io/use-i18n set to 'false'
2. In the v1 API the loadType in {"spec":{"i18n":{"loadType":""}}} is set to an empty string, which is not a valid value defined here: https://github.com/jhadvig/api/blob/22d69793277ffeb618d642724515f249262959a5/console/v1/types_console_plugin.go#L46 https://github.com/openshift/api/pull/1186/files#
Actual results:
{"spec":{"i18n":{"loadType":""}}}
Expected results:
{"spec":{"i18n":{"loadType":"Lazy"}}}
Additional info:
Description of problem:
We got feedback from the support team that it is confusing to see a switch in the Notifications column for an alerting rule which has no alerts associated with it, as the user cannot silence the alerting rule.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. oc apply -f https://gist.githubusercontent.com/vikram-raj/727629797eb9d9bfcfa2721cae2ade86/raw/7c2305e14115a1a4f4f88ebb74cdad32cbec4132/Alerting%2520rule%2520without%2520alert
2. Navigate to the Developer perspective Observe -> Alerts
3. Try to silence the VersionAlert alerting rule; nothing will happen
Actual results:
Silencing the alerting rule using the switch does nothing.
Expected results:
No switch for silencing the alerting rule should be visible if no alerts are associated with it.
Additional info:
Description of problem:
seeing test failure due to panic in cvo here:
Undiagnosed panic detected in pod:
pods/openshift-cluster-version_cluster-version-operator-96cf55b5-rffgt_cluster-version-operator_previous.log.gz:E0915 18:38:42.763315 1 runtime.go:79] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
pods/openshift-cluster-version_cluster-version-operator-96cf55b5-rffgt_cluster-version-operator_previous.log.gz:E0915 18:38:42.763418 1 runtime.go:79] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
full error from logs:
E0915 18:38:42.763315       1 runtime.go:79] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 187 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic({0x1934980?, 0x2bc6240})
	/go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:75 +0x99
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0x4d2604?})
	/go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:49 +0x75
panic({0x1934980, 0x2bc6240})
	/usr/lib/golang/src/runtime/panic.go:838 +0x207
github.com/openshift/cluster-version-operator/pkg/cvo.(*SyncWorker).calculateNext(0xc0015c6000, 0xc001df2000)
	/go/src/github.com/openshift/cluster-version-operator/pkg/cvo/sync_worker.go:716 +0x14d
github.com/openshift/cluster-version-operator/pkg/cvo.(*SyncWorker).Start.func1()
	/go/src/github.com/openshift/cluster-version-operator/pkg/cvo/sync_worker.go:575 +0x2a9
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x10000000000?)
	/go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x3e
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001df2000?, {0x1e44e60, 0xc002739f50}, 0x1, 0xc00058e0c0)
	/go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x989680, 0x0, 0x60?, 0x0?)
	/go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
github.com/openshift/cluster-version-operator/pkg/cvo.(*SyncWorker).Start(0xc0015c6000?, {0x1e5eb30?, 0xc0000cacc0?}, 0x10?, {0x0?, 0x0?}, {0x0?, 0x0?})
	/go/src/github.com/openshift/cluster-version-operator/pkg/cvo/sync_worker.go:556 +0x145
github.com/openshift/cluster-version-operator/pkg/cvo.(*Operator).Run.func2()
	/go/src/github.com/openshift/cluster-version-operator/pkg/cvo/cvo.go:387 +0x83
created by github.com/openshift/cluster-version-operator/pkg/cvo.(*Operator).Run
	/go/src/github.com/openshift/cluster-version-operator/pkg/cvo/cvo.go:385 +0x4af
E0915 18:38:42.763418       1 runtime.go:79] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
Version-Release number of selected component (if applicable):
How reproducible:
Currently unsure; this was hit in a single test run, but the CVO should never panic.
Steps to Reproduce:
1. 2. 3.
Actual results:
panic in cvo pod
Expected results:
no panic in cvo pod
Additional info:
Description of problem:
The e2e test "Installs Red Hat Integration - 3scale operator" is failing due to a change in the Operator name.
This is a clone of issue OCPBUGS-2873. The following is the description of the original issue:
—
Description of problem:
Prometheus fails to scrape metrics from the storage operator after some time.
Version-Release number of selected component (if applicable):
4.11
How reproducible:
Always
Steps to Reproduce:
1. Install storage operator. 2. Wait for 24h (time for the certificate to be recycled). 3.
Actual results:
Targets are reported as down because Prometheus didn't reload the CA certificate.
Expected results:
Prometheus reloads its client TLS certificate and scrapes the target successfully.
Additional info:
OpenShift 4.12 is going to be built with Go 1.19, but automatic migration in our repository has failed. The migration should be done manually.
[1]: https://github.com/openshift/cluster-image-registry-operator/pull/802
Description of problem:
If you set a service's cluster IP to an IP with a leading zero (e.g. 192.168.0.011), ovn-k should normalise this and remove the leading zero before sending it to OVN.
I saw this on a CI run executing the k8s test here: test/e2e/network/funny_ips.go +75
You can reproduce it using the above test.
Have a read of the text there:
// What are funny IPs:
// The adjective is because of the curl blog that explains the history and the problem of liberal
// parsing of IP addresses and the consequences and security risks caused the lack of normalization,
// mainly due to the use of different notations to abuse parsers misalignment to bypass filters.
// xref: https://daniel.haxx.se/blog/2021/04/19/curl-those-funny-ipv4-addresses/
//
// Since golang 1.17, IPv4 addresses with leading zeros are rejected by the standard library.
// xref: https://github.com/golang/go/issues/30999
//
// Because this change on the parsers can cause that previous valid data become invalid, Kubernetes
// forked the old parsers allowing leading zeros on IPv4 address to not break the compatibility.
//
// Kubernetes interprets leading zeros on IPv4 addresses as decimal, users must not rely on parser
// alignment to not being impacted by the associated security advisory: CVE-2021-29923 golang
// standard library "net" - Improper Input Validation of octal literals in golang 1.16.2 and below
// standard library "net" results in indeterminate SSRF & RFI vulnerabilities. xref:
// https://nvd.nist.gov/vuln/detail/CVE-2021-29923
northd is logging an error about this also:
|socket_util|ERR|172.30.0.011:7180: bad IP address "172.30.0.011"
...
2022-08-23T14:14:21.968Z|01839|ovn_util|WARN|bad ip address or port for load balancer key 172.30.0.011:7180
Also, I see the error:
E0823 14:14:34.135115 3284 gateway_shared_intf.go:600] Failed to delete conntrack entry for service e2e-funny-ips-8626/funny-ip: failed to delete conntrack entry for service e2e-funny-ips-8626/funny-ip with svcVIP 172.30.0.011, svcPort 7180, protocol TCP: value "<nil>" passed to DeleteConntrack is not an IP address
We should normalise the IPs before sending them to OVN. There is also a conntrack error when trying to set this bad IP.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. See the above k8s test
Actual results:
Leading zero IP sent to OVN
Expected results:
No leading zero IP sent to OVN
Additional info:
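As a rough illustration of the normalisation described above, here is a minimal Go sketch that rewrites each dotted-quad octet as plain decimal (so "172.30.0.011" becomes "172.30.0.11") before the address would be handed to OVN. The helper name and placement are assumptions for illustration, not the actual ovn-kubernetes code.
~~~
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// normalizeIPv4 rewrites every octet as plain decimal, dropping leading zeros,
// e.g. "172.30.0.011" -> "172.30.0.11". Kubernetes treats leading zeros as
// decimal, so the value itself is unchanged. Illustrative helper only.
func normalizeIPv4(ip string) (string, error) {
	parts := strings.Split(ip, ".")
	if len(parts) != 4 {
		return "", fmt.Errorf("not an IPv4 address: %q", ip)
	}
	out := make([]string, 4)
	for i, p := range parts {
		v, err := strconv.ParseUint(p, 10, 8)
		if err != nil {
			return "", fmt.Errorf("bad octet %q in %q: %w", p, ip, err)
		}
		out[i] = strconv.FormatUint(v, 10)
	}
	return strings.Join(out, "."), nil
}

func main() {
	normalized, err := normalizeIPv4("172.30.0.011")
	fmt.Println(normalized, err) // 172.30.0.11 <nil>
}
~~~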
Description of problem:
When the log line number is too large, the number overlaps the cut-off line in the log viewer.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-08-15-150248
How reproducible:
Always
Steps to Reproduce:
1.Go to a pod log page with lots of logs, such as pod in openshift-cluster-version namespace. Check log line numbers.
2.
3.
Actual results:
1. When the line number is too large, it overlaps with the cut-off line.
Expected results:
1. Should have no overlaps in logs
Additional info:
Derrick got an "old and new refs are equal" error on rebase; this is similar to OCPBUGS-1899 but I think it has a different root cause. In this case, when a manual rollback is performed via the bootloader, we've computed that there's an osimageurl diff between the expected and desired state, but actually the desired state is already set.
We just need to skip doing the rebase if we're already in the target state.
(A real root of this problem again is that the whole "current/desired config" thing is trying to track state independently of the bootloader...if we made node state == container image, all of that goes away. The MCO would understand that it got booted into a previous state)
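A minimal sketch of the proposed guard, assuming hypothetical helper names (this is not the actual MCO code): skip the rebase entirely when the booted OS image already matches the desired one.
~~~
package main

import "fmt"

// maybeRebase is a hypothetical guard: if the node is already running the
// desired OS image (e.g. after a manual bootloader rollback), skip the
// rpm-ostree rebase entirely instead of failing with
// "old and new refs are equal".
func maybeRebase(currentOSImageURL, desiredOSImageURL string, rebase func(string) error) error {
	if currentOSImageURL == desiredOSImageURL {
		return nil // already in the target state, nothing to do
	}
	return rebase(desiredOSImageURL)
}

func main() {
	err := maybeRebase(
		"quay.io/example/os@sha256:abc",
		"quay.io/example/os@sha256:abc",
		func(string) error { return fmt.Errorf("rebase should not run") },
	)
	fmt.Println(err) // <nil>
}
~~~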
This is a clone of issue OCPBUGS-6175. The following is the description of the original issue:
—
Description of problem:
When the cluster is configured with a proxy, the Swift client in the image registry operator does not use the proxy to authenticate with OpenStack, so it is unable to reach the OpenStack API. This issue became evident since support was recently added to not fall back to Cinder when Swift is available [1].
[1]https://github.com/openshift/cluster-image-registry-operator/pull/819
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. Deploy a cluster with proxy and restricted installation 2. 3.
Actual results:
Expected results:
Additional info:
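The underlying idea of the fix, sketched with the Go standard library only (the operator's real change goes through its OpenStack client, so this is illustrative, not the actual patch): the HTTP client used to talk to Swift/Keystone should honour the cluster proxy environment variables.
~~~
package main

import (
	"fmt"
	"net/http"
)

// newProxyAwareClient returns an HTTP client whose transport reads
// HTTP_PROXY, HTTPS_PROXY and NO_PROXY from the environment, so requests to
// the OpenStack API go through the configured cluster proxy.
func newProxyAwareClient() *http.Client {
	return &http.Client{
		Transport: &http.Transport{
			Proxy: http.ProxyFromEnvironment,
		},
	}
}

func main() {
	c := newProxyAwareClient()
	fmt.Printf("transport type: %T\n", c.Transport)
}
~~~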
This is a clone of issue OCPBUGS-8702. The following is the description of the original issue:
—
This is a clone of issue OCPBUGS-8523. The following is the description of the original issue:
—
Description of problem:
Due to an rpm-ostree regression (OKD-63), MCO was copying /var/lib/kubelet/config.json into /run/ostree/auth.json on FCOS and SCOS. This breaks the Assisted Installer flow, which starts from a Live ISO and doesn't have /var/lib/kubelet/config.json.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
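A minimal sketch of the guard this implies, with hypothetical names (not the actual MCO change): only copy the kubelet pull secret into /run/ostree/auth.json when the source file actually exists, so a Live ISO without /var/lib/kubelet/config.json is left alone.
~~~
package main

import (
	"fmt"
	"io"
	"os"
)

// copyPullSecretIfPresent copies src to dst, but silently does nothing when
// src does not exist (the Assisted Installer Live ISO case described above).
func copyPullSecretIfPresent(src, dst string) error {
	in, err := os.Open(src)
	if os.IsNotExist(err) {
		return nil // no kubelet config.json yet, nothing to copy
	}
	if err != nil {
		return err
	}
	defer in.Close()

	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	fmt.Println(copyPullSecretIfPresent("/var/lib/kubelet/config.json", "/run/ostree/auth.json"))
}
~~~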
Description of problem:
When a user selects an installed operator (for example, OpenShift Elasticsearch) in OperatorHub and navigates to the installed operator page from the operator information page
via the "view it here" option, a "404 Not found" message is wrongly shown, although it does navigate to the installed operator in the end.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-08-15-150248
How reproducible:
Always
Steps to Reproduce:
Actual results:
A wrong "404: Not found" message appears while the user selects an installed operator and navigates from OperatorHub to the installed operator page.
The browser console log indicates the following:
main-chunk-525818b154a57a9b220a.min.js:1 unhandled error: Uncaught TypeError: Cannot read properties of undefined (reading 'firstElementChild') TypeError: Cannot read properties of undefined (reading 'firstElementChild') at c (https://console-openshift-console.apps.jmekkatt-dob.ibmcloud.qe.devcluster.openshift.com/static/vendors~main-chunk-40fab65853dff2fbc413.min.js:118:125992) at HTMLDivElement.l (https://console-openshift-console.apps.jmekkatt-dob.ibmcloud.qe.devcluster.openshift.com/static/vendors~main-chunk-40fab65853dff2fbc413.min.js:118:126387) TypeError: Cannot read properties of undefined (reading 'firstElementChild') at c (vendors~main-chunk-40fab65853dff2fbc413.min.js:72303:1) at HTMLDivElement.l (vendors~main-chunk-40fab65853dff2fbc413.min.js:72303:1) window.onerror @ main-chunk-525818b154a57a9b220a.min.js:1 vendors~main-chunk-40fab65853dff2fbc413.min.js:72303 Uncaught TypeError: Cannot read properties of undefined (reading 'firstElementChild') at c (vendors~main-chunk-40fab65853dff2fbc413.min.js:72303:1) at HTMLDivElement.l (vendors~main-chunk-40fab65853dff2fbc413.min.js:72303:1) c @ vendors~main-chunk-40fab65853dff2fbc413.min.js:72303 l @ vendors~main-chunk-40fab65853dff2fbc413.min.js:72303 scroll (async) componentWillUnmount @ vendor-patternfly-core-chunk-006bb1499791fa7cfea7.min.js:38397 hs @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 bs @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 hs @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 bs @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 Oc @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 t.unstable_runWithPriority @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171690 Hi @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 Ac @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 pc @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 (anonymous) @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 t.unstable_runWithPriority @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171690 Hi @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 Vi @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 qi @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 De @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 Yt @ vendors~main-chunk-40fab65853dff2fbc413.min.js:171377 main-chunk-525818b154a57a9b220a.min.js:1 GET https://console-openshift-console.apps.jmekkatt-dob.ibmcloud.qe.devcluster.openshift.com/api/kubernetes/apis/operators.coreos.com/v1alpha1/clusterserviceversions/elasticsearch-operator.5.5.0 404 (Not Found)
Expected results:
Installed operator details should show without any error when the user selects an installed operator and navigates from operator hub to installed operator page.
Additional info:
Reproduced in both chrome[103.0.5060.114 (Official Build) (64-bit)] and firefox[91.11.0esr (64-bit)] browsers
Attached screen share for the same issue InstalledOperatorNavigation404.mp4
Description of problem:
i18n translation missing in "Remove component node from application" modal
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. Navigate to dev console and create a workload under an Application group 2. On the Toplogy remove the workload from the Application group 3. See the i18n error in the console
Actual results:
Missing i18n key "Remove component node from application" in namespace "topology" and language "en." in console
Expected results:
No i18n error should be shown in the console.
Additional info:
Hi,
Bare Metal IPI provisioning is failing to provision the worker nodes. The metal3-machine-os-downloader InitContainer is getting into a CrashLoopBackOff state because it cannot find the virt-* commands in the container image.
> oc -n openshift-machine-api get pods | grep -v Running
NAME                       READY   STATUS
metal3-fc66f5846-gtq9m     0/7     Init:CrashLoopBackOff
metal3-image-cache-d4qcz   0/1     Init:1/2
metal3-image-cache-djzcf   0/1     Init:1/2
metal3-image-cache-p5mwg   0/1     Init:1/2
> oc -n openshift-machine-api logs deployment/metal3 -c metal3-machine-os-downloader
[omitted]
++ LIBGUESTFS_BACKEND=direct
++ virt-filesystems -a rhcos-412.86.202207142104-0-openstack.x86_64.qcow2 -l
/usr/local/bin/get-resource.sh: line 88: virt-filesystems: command not found
++ grep boot
++ cut -f1 '-d '
+ BOOT_DISK=
++ LIBGUESTFS_BACKEND=direct
++ virt-ls -a rhcos-412.86.202207142104-0-openstack.x86_64.qcow2 -m '' /boot/loader/entries
/usr/local/bin/get-resource.sh: line 90: virt-ls: command not found
+ BOOT_ENTRIES=
+ rm -fr /shared/tmp/tmp.CnCd2E3kxN
OpenShift 4.12.0-ec.0+
Since https://github.com/openshift/ocp-build-data/pull/1757, the ironic-machine-os-downloader container image is built using RHEL9 repositories.
However, following upstream move of guestfs tools to a dedicated repository [1], the libguestfs packaging differs between RHEL8 and RHEL9:
Since the Dockerfile specifies only the libguestfs-tools package, the virt-* commands are not installed when using RHEL9 repositories.
A trivial fix is to update the Dockerfile to install the guestfs-tools package instead of the libguestfs-tools package.
Regards,
Denis
This is a clone of issue OCPBUGS-2260. The following is the description of the original issue:
—
TRT-594 investigates failed CI upgrade runs due to the alert KubePodNotReady firing. The case was a pod getting skipped over for scheduling across two successive master node updates/restarts. The case was determined valid, so the ask is to make the monitoring aware that master nodes are restarting and that scheduling may be delayed. Presuming we don't want to change the existing tolerance for the non-master-node-restart cases, could we suppress the alert during those restarts and fall back to a second alert with increased tolerances only during those restarts, if we have metrics indicating we are restarting? Or similar, if there are better ways to handle it.
The scenario is:
This is a clone of issue OCPBUGS-4490. The following is the description of the original issue:
—
Description of problem:
When hypershift HostedCluster has endpointAccess: Private, the csi-snapshot-controller is in CrashLoopBackoff because the guest APIServer url in the admin-kubeconfig isn't reachable in Private mode.
Version-Release number of selected component (if applicable):
4.13
How reproducible:
Always
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
The path used by --rotated-pod-logs to gather the rotated pod logs from the /var/log/pods node folder via /api/v1/nodes/${NODE}/proxy/logs/${LOG_PATH} is only valid for regular pods, not for static pods.
The main problem is that, while normal pods have their rotated logs at this /var/log/pods/${POD_NAME}_${POD_UID_IN_API}/${CONTAINER_NAME}, static pods have them at /var/log/pods/${POD_NAME}_${CONFIG_HASH}/${CONTAINER_NAME} because the UID cannot be known at the time that the static pod is born (because static pods are created by kubelet before registering them in the kube-apiserver, and UID is assigned by the kube-apiserver).
The visible results of that are:
4.10
Always if there are static pods.
1. oc adm inspect --rotated-pod-logs ns/openshift-etcd (or any other project with static pods).
error: errors occurred while gathering data: one or more errors occurred while gathering pod-specific data for namespace: openshift-etcd [one or more errors occurred while gathering container data for pod etcd-master-0.example.net: the server could not find the requested resource, one or more errors occurred while gathering container data for pod etcd-master-1.example.net: the server could not find the requested resource, one or more errors occurred while gathering container data for pod etcd-master-2.example.net: the server could not find the requested resource]
No errors like the ones above and rotated pod logs to be gathered, if present.
Despite being marked as experimental, this --rotated-pod-logs is used in must-gather, so this issue can be easily reproduced by just running a default must-gather. I focused on bare oc adm inspect reproducers for simplicity.
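To illustrate the path difference described in this report, here is a hedged Go sketch that tries both candidate /var/log/pods directory names for a pod and uses whichever exists; the function name and fallback strategy are assumptions, not the actual oc adm inspect implementation.
~~~
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// rotatedLogDir tries the API-UID-based directory first (regular pods) and
// falls back to the config-hash-based directory (static pods), returning
// whichever exists.
func rotatedLogDir(podName, podUID, configHash string) (string, error) {
	candidates := []string{
		filepath.Join("/var/log/pods", fmt.Sprintf("%s_%s", podName, podUID)),
		filepath.Join("/var/log/pods", fmt.Sprintf("%s_%s", podName, configHash)),
	}
	for _, dir := range candidates {
		if _, err := os.Stat(dir); err == nil {
			return dir, nil
		}
	}
	return "", fmt.Errorf("no rotated log directory found for pod %s", podName)
}

func main() {
	dir, err := rotatedLogDir("etcd-master-0.example.net", "uid-from-api", "config-hash")
	fmt.Println(dir, err)
}
~~~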
Description of problem:
During a restart, egress firewall ACLs are deleted and re-created from scratch, meaning that egress firewall rules won't be applied for some time during the restart.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
Description of problem:
We're seeing frequent private DNS zone creation failures in Azure CI jobs over the past two days; the Azure CI jobs have been greatly affected. https://search.ci.openshift.org/?search=error+creating%2Fupdating+Private+DNS+Zone+Virtual+network&maxAge=48h&context=1&type=build-log&name=&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job
For example, the following error from https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-ci-4.12-upgrade-from-stable-4.11-e2e-azure-sdn-upgrade/1566852244215697408:
level=info msg=Consuming Openshift Manifests from target directory
level=info msg=Consuming Common Manifests from target directory
level=info msg=Credentials loaded from file "/var/run/secrets/ci.openshift.io/cluster-profile/osServicePrincipal.json"
level=info msg=Creating infrastructure resources...
level=error
level=error msg=Error: error creating/updating Private DNS Zone Virtual network link "ci-op-1w80vs6f-7f65d-t2zlz-network-link" (Resource Group "ci-op-1w80vs6f-7f65d-t2zlz-rg"): privatedns.VirtualNetworkLinksClient#CreateOrUpdate: Failure sending request: StatusCode=404 -- Original Error: Code="ParentResourceNotFound" Message="Can not perform requested operation on nested resource. Parent resource 'ci-op-1w80vs6f-7f65d.ci2.azure.devcluster.openshift.com' not found."
level=error
level=error msg= with module.dns.azureprivatedns_zone_virtual_network_link.network,
level=error msg= on dns/dns.tf line 13, in resource "azureprivatedns_zone_virtual_network_link" "network":
level=error msg= 13: resource "azureprivatedns_zone_virtual_network_link" "network"
Version-Release number of selected component (if applicable):
All OCP versions
How reproducible:
https://search.ci.openshift.org/chart?name=e2e-azure&search=error+creating%2Fupdating+Private+DNS+Zone&maxAge=24h&type=build-log shows 26% of the failed Azure jobs are related to "error creating/updating Private DNS Zone" in the past day. 3/5 of the failed Azure jobs are caused by this in QE’s CI today.
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
No Azure outage was reported from https://status.azure.com/en-us/status. No private zone or DNS records quota exceeded was observed.
Description of problem:
On MicroShift, the Route API is served by kube-apiserver as a CRD. Reusing the same defaulting implementation as vanilla OpenShift through a patch to kube- apiserver is expected to resolve OCPBUGS-4189 but have no detectable effect on OCP.
Additional info:
This patch will be inert on OCP, but is implemented in openshift/kubernetes because MicroShift ingests kube-apiserver through its build-time dependency on openshift/kubernetes.
Description of problem:
When all projects are selected, workloads list page and details page shows inconsistent HorizontalPodAutoscaler actions
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-07-25-010250
How reproducible:
Always
Steps to Reproduce:
Actual results:
Expected results:
Additional info:
Description of problem:
The OCP v4.9.31 cluster didn't have the search domain in /etc/resolv.conf, which was present in the v4.8.29 OCP cluster. This was observed on all the nodes of the v4.9.31 cluster.
~~~
OpenShift 4.9.31
sh-4.4# cat /etc/resolv.conf
OpenShift 4.8.29
ENV: OpenStack IAD2, IPI installation. Connected cluster.
Version-Release number of selected component (if applicable):
OCP v4.9.31
How reproducible:
Always
Steps to Reproduce:
1. Install IPI cluster on OpenStack IAD2 platform having cluster version 4.9.31
2. Debug to any of the node(master/worker)
3. Check and confirm the missing search domain on all nodes of the cluster.
Actual results:
The search domain was missing from the `/etc/resolv.conf` file on all nodes of the cluster, causing serious issues in the cluster.
Expected results:
The installer should embed the search domain in /etc/resolv.conf file on all nodes of the cluster.
Additional info:
set -eo pipefail
DISPATCHER_FILE="/etc/NetworkManager/dispatcher.d/30-resolv-prepender"
DOMAINS="$(grep -E '\s*DOMAINS=.*iad2.dc.paas.redhat.com' $DISPATCHER_FILE \
grep -oE '[a-z0-9]*.dev.iad2.dc.paas.redhat.com' \ |
tr '\n' ' ')" |
>&2 echo "IT-PaaS: overwriting search domains in /etc/resolv.conf with: $DOMAINS"
sed -e "/^search/d" \
-e "/Generated by/c# Generated by KNI resolv prepender NM dispatcher script \nsearch $DOMAINS" \
/etc/resolv.conf > /etc/resolv.tmp
mv /etc/resolv.tmp /etc/resolv.conf
~~~
Description of problem:
Installing 1000+ SNOs via ACM/MCE via ZTP with gitops, a small percentage of clusters end up never completing install because the monitoring operator does not reconcile to available.
# oc --kubeconfig=/root/hv-vm/sno/manifests/sno01219/kubeconfig get clusterversion
NAME VERSION AVAILABLE PROGRESSING SINCE STATUS
version False True 16h Unable to apply 4.11.0: the cluster operator monitoring has not yet successfully rolled out
# oc --kubeconfig=/root/hv-vm/sno/manifests/sno01219/kubeconfig get co monitoring
NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE
monitoring False True True 15h Rollout of the monitoring stack failed and is degraded. Please investigate the degraded status error.
Version-Release number of selected component (if applicable):
How reproducible:
Additional info:
# oc --kubeconfig=/root/hv-vm/sno/manifests/sno01219/kubeconfig get po -n openshift-monitoring NAME READY STATUS RESTARTS AGE alertmanager-main-0 0/6 ContainerCreating 0 15h cluster-monitoring-operator-54dd78cc74-l5w24 2/2 Running 0 15h kube-state-metrics-b6455c4dc-8hcfn 3/3 Running 0 15h node-exporter-k7899 2/2 Running 0 15h openshift-state-metrics-7984888fbd-cl67v 3/3 Running 0 15h prometheus-adapter-785bf4f975-wgmnh 1/1 Running 0 15h prometheus-k8s-0 0/6 Init:0/1 0 15h prometheus-operator-74d8754ff7-9zrgw 2/2 Running 0 15h prometheus-operator-admission-webhook-6665fb687d-c5jgv 1/1 Running 0 15h thanos-querier-575496c665-jcc8l 6/6 Running 0 15h # oc --kubeconfig=/root/hv-vm/sno/manifests/sno01219/kubeconfig describe po -n openshift-monitoring alertmanager-main-0 Name: alertmanager-main-0 Namespace: openshift-monitoring Priority: 2000000000 Priority Class Name: system-cluster-critical Node: sno01219/fc00:1001::8aa Start Time: Mon, 15 Aug 2022 23:53:39 +0000 Labels: alertmanager=main app.kubernetes.io/component=alert-router app.kubernetes.io/instance=main app.kubernetes.io/managed-by=prometheus-operator app.kubernetes.io/name=alertmanager app.kubernetes.io/part-of=openshift-monitoring app.kubernetes.io/version=0.24.0 controller-revision-hash=alertmanager-main-fcf8dd5fb statefulset.kubernetes.io/pod-name=alertmanager-main-0 Annotations: kubectl.kubernetes.io/default-container: alertmanager openshift.io/scc: nonroot Status: Pending IP: IPs: <none> Controlled By: StatefulSet/alertmanager-main Containers: alertmanager: Container ID: Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91308d35c1e56463f55c1aaa519ff4de7335d43b254c21abdb845fc8c72821a1 Image ID: Ports: 9094/TCP, 9094/UDP Host Ports: 0/TCP, 0/UDP Args: --config.file=/etc/alertmanager/config/alertmanager.yaml --storage.path=/alertmanager --data.retention=120h --cluster.listen-address= --web.listen-address=127.0.0.1:9093 --web.external-url=https:/console-openshift-console.apps.sno01219.rdu2.scalelab.redhat.com/monitoring --web.route-prefix=/ --cluster.peer=alertmanager-main-0.alertmanager-operated:9094 --cluster.reconnect-timeout=5m State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Requests: cpu: 4m memory: 40Mi Environment: POD_IP: (v1:status.podIP) Mounts: /alertmanager from alertmanager-main-db (rw) /etc/alertmanager/certs from tls-assets (ro) /etc/alertmanager/config from config-volume (rw) /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy from secret-alertmanager-kube-rbac-proxy (ro) /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy-metric from secret-alertmanager-kube-rbac-proxy-metric (ro) /etc/alertmanager/secrets/alertmanager-main-proxy from secret-alertmanager-main-proxy (ro) /etc/alertmanager/secrets/alertmanager-main-tls from secret-alertmanager-main-tls (ro) /etc/pki/ca-trust/extracted/pem/ from alertmanager-trusted-ca-bundle (ro) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hl77l (ro) config-reloader: Container ID: Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:209e20410ec2d3d7a502f568d2b7fe1cd1beadcb36fff2d1e6f59d77be3200e3 Image ID: Port: <none> Host Port: <none> Command: /bin/prometheus-config-reloader Args: --listen-address=localhost:8080 --reload-url=http://localhost:9093/-/reload --watched-dir=/etc/alertmanager/config --watched-dir=/etc/alertmanager/secrets/alertmanager-main-tls --watched-dir=/etc/alertmanager/secrets/alertmanager-main-proxy --watched-dir=/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy 
--watched-dir=/etc/alertmanager/secrets/alertmanager-kube-rbac-proxy-metric State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Requests: cpu: 1m memory: 10Mi Environment: POD_NAME: alertmanager-main-0 (v1:metadata.name) SHARD: -1 Mounts: /etc/alertmanager/config from config-volume (ro) /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy from secret-alertmanager-kube-rbac-proxy (ro) /etc/alertmanager/secrets/alertmanager-kube-rbac-proxy-metric from secret-alertmanager-kube-rbac-proxy-metric (ro) /etc/alertmanager/secrets/alertmanager-main-proxy from secret-alertmanager-main-proxy (ro) /etc/alertmanager/secrets/alertmanager-main-tls from secret-alertmanager-main-tls (ro) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hl77l (ro) alertmanager-proxy: Container ID: Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:140f8947593d92e1517e50a201e83bdef8eb965b552a21d3caf346a250d0cf6e Image ID: Port: 9095/TCP Host Port: 0/TCP Args: -provider=openshift -https-address=:9095 -http-address= -email-domain=* -upstream=http://localhost:9093 -openshift-sar=[{"resource": "namespaces", "verb": "get"}, {"resource": "alertmanagers", "resourceAPIGroup": "monitoring.coreos.com", "namespace": "openshift-monitoring", "verb": "patch", "resourceName": "non-existant"}] -openshift-delegate-urls={"/": {"resource": "namespaces", "verb": "get"}, "/": {"resource":"alertmanagers", "group": "monitoring.coreos.com", "namespace": "openshift-monitoring", "verb": "patch", "name": "non-existant"}} -tls-cert=/etc/tls/private/tls.crt -tls-key=/etc/tls/private/tls.key -client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token -cookie-secret-file=/etc/proxy/secrets/session_secret -openshift-service-account=alertmanager-main -openshift-ca=/etc/pki/tls/cert.pem -openshift-ca=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Requests: cpu: 1m memory: 20Mi Environment: HTTP_PROXY: HTTPS_PROXY: NO_PROXY: Mounts: /etc/pki/ca-trust/extracted/pem/ from alertmanager-trusted-ca-bundle (ro) /etc/proxy/secrets from secret-alertmanager-main-proxy (rw) /etc/tls/private from secret-alertmanager-main-tls (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hl77l (ro) kube-rbac-proxy: Container ID: Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5e1c69d005727e3245604cfca7a63e4f9bc6e15128c7489e41d5e967305089e Image ID: Port: 9092/TCP Host Port: 0/TCP Args: --secure-listen-address=0.0.0.0:9092 --upstream=http://127.0.0.1:9096 --config-file=/etc/kube-rbac-proxy/config.yaml --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --logtostderr=true --tls-min-version=VersionTLS12 State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Requests: cpu: 1m memory: 15Mi Environment: <none> Mounts: /etc/kube-rbac-proxy from secret-alertmanager-kube-rbac-proxy (rw) /etc/tls/private from secret-alertmanager-main-tls (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hl77l (ro) kube-rbac-proxy-metric: Container ID: Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5e1c69d005727e3245604cfca7a63e4f9bc6e15128c7489e41d5e967305089e Image ID: Port: 9097/TCP Host 
Port: 0/TCP Args: --secure-listen-address=0.0.0.0:9097 --upstream=http://127.0.0.1:9093 --config-file=/etc/kube-rbac-proxy/config.yaml --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --client-ca-file=/etc/tls/client/client-ca.crt --logtostderr=true --allow-paths=/metrics --tls-min-version=VersionTLS12 State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Requests: cpu: 1m memory: 15Mi Environment: <none> Mounts: /etc/kube-rbac-proxy from secret-alertmanager-kube-rbac-proxy-metric (ro) /etc/tls/client from metrics-client-ca (ro) /etc/tls/private from secret-alertmanager-main-tls (ro) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hl77l (ro) prom-label-proxy: Container ID: Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2550b2cbdf864515b1edacf43c25eb6b6f179713c1df34e51f6e9bba48d6430a Image ID: Port: <none> Host Port: <none> Args: --insecure-listen-address=127.0.0.1:9096 --upstream=http://127.0.0.1:9093 --label=namespace --error-on-replace State: Waiting Reason: ContainerCreating Ready: False Restart Count: 0 Requests: cpu: 1m memory: 20Mi Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hl77l (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: config-volume: Type: Secret (a volume populated by a Secret) SecretName: alertmanager-main-generated Optional: false tls-assets: Type: Projected (a volume that contains injected data from multiple sources) SecretName: alertmanager-main-tls-assets-0 SecretOptionalName: <nil> secret-alertmanager-main-tls: Type: Secret (a volume populated by a Secret) SecretName: alertmanager-main-tls Optional: false secret-alertmanager-main-proxy: Type: Secret (a volume populated by a Secret) SecretName: alertmanager-main-proxy Optional: false secret-alertmanager-kube-rbac-proxy: Type: Secret (a volume populated by a Secret) SecretName: alertmanager-kube-rbac-proxy Optional: false secret-alertmanager-kube-rbac-proxy-metric: Type: Secret (a volume populated by a Secret) SecretName: alertmanager-kube-rbac-proxy-metric Optional: false alertmanager-main-db: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: <unset> metrics-client-ca: Type: ConfigMap (a volume populated by a ConfigMap) Name: metrics-client-ca Optional: false alertmanager-trusted-ca-bundle: Type: ConfigMap (a volume populated by a ConfigMap) Name: alertmanager-trusted-ca-bundle-2rsonso43rc5p Optional: true kube-api-access-hl77l: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: <nil> QoS Class: Burstable Node-Selectors: kubernetes.io/os=linux Tolerations: node.kubernetes.io/memory-pressure:NoSchedule op=Exists node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedCreatePodSandBox 2m25s (x409 over 15h) kubelet (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown 
desc = failed to create pod network sandbox k8s_alertmanager-main-0_openshift-monitoring_1c367a83-24e3-4249-861a-a107a6beaee2_0(dff5f302f774d060728261b3c86841ebdbd7ba11537ec9f4d90d57be17bdf44b): error adding pod openshift-monitoring_alertmanager-main-0 to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): [openshift-monitoring/alertmanager-main-0/1c367a83-24e3-4249-861a-a107a6beaee2:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[openshift-monitoring/alertmanager-main-0 dff5f302f774d060728261b3c86841ebdbd7ba11537ec9f4d90d57be17bdf44b] [openshift-monitoring/alertmanager-main-0 dff5f302f774d060728261b3c86841ebdbd7ba11537ec9f4d90d57be17bdf44b] failed to get pod annotation: timed out waiting for annotations: context deadline exceeded oc --kubeconfig=/root/hv-vm/sno/manifests/sno01219/kubeconfig describe po -n openshift-monitoring prometheus-k8s-0 Name: prometheus-k8s-0 Namespace: openshift-monitoring Priority: 2000000000 Priority Class Name: system-cluster-critical Node: sno01219/fc00:1001::8aa Start Time: Mon, 15 Aug 2022 23:53:39 +0000 Labels: app.kubernetes.io/component=prometheus app.kubernetes.io/instance=k8s app.kubernetes.io/managed-by=prometheus-operator app.kubernetes.io/name=prometheus app.kubernetes.io/part-of=openshift-monitoring app.kubernetes.io/version=2.36.2 controller-revision-hash=prometheus-k8s-546b544f8b operator.prometheus.io/name=k8s operator.prometheus.io/shard=0 prometheus=k8s statefulset.kubernetes.io/pod-name=prometheus-k8s-0 Annotations: kubectl.kubernetes.io/default-container: prometheus openshift.io/scc: nonroot Status: Pending IP: IPs: <none> Controlled By: StatefulSet/prometheus-k8s Init Containers: init-config-reloader: Container ID: Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:209e20410ec2d3d7a502f568d2b7fe1cd1beadcb36fff2d1e6f59d77be3200e3 Image ID: Port: 8080/TCP Host Port: 0/TCP Command: /bin/prometheus-config-reloader Args: --watch-interval=0 --listen-address=:8080 --config-file=/etc/prometheus/config/prometheus.yaml.gz --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml --watched-dir=/etc/prometheus/rules/prometheus-k8s-rulefiles-0 State: Waiting Reason: PodInitializing Ready: False Restart Count: 0 Requests: cpu: 1m memory: 10Mi Environment: POD_NAME: prometheus-k8s-0 (v1:metadata.name) SHARD: 0 Mounts: /etc/prometheus/config from config (rw) /etc/prometheus/config_out from config-out (rw) /etc/prometheus/rules/prometheus-k8s-rulefiles-0 from prometheus-k8s-rulefiles-0 (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-85zlc (ro) Containers: prometheus: Container ID: Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c7df53b796e81ba8301ba74d02317226329bd5752fd31c1b44d028e4832f21c3 Image ID: Port: <none> Host Port: <none> Args: --web.console.templates=/etc/prometheus/consoles --web.console.libraries=/etc/prometheus/console_libraries --storage.tsdb.retention.time=15d --config.file=/etc/prometheus/config_out/prometheus.env.yaml --storage.tsdb.path=/prometheus --web.enable-lifecycle --web.external-url=https:/console-openshift-console.apps.sno01219.rdu2.scalelab.redhat.com/monitoring --web.route-prefix=/ --web.listen-address=127.0.0.1:9090 --web.config.file=/etc/prometheus/web_config/web-config.yaml State: Waiting Reason: PodInitializing Ready: False Restart Count: 0 Requests: cpu: 70m memory: 1Gi Liveness: exec [sh -c if [ -x "$(command -v curl)" ]; then exec curl --fail 
http://localhost:9090/-/healthy; elif [ -x "$(command -v wget)" ]; then exec wget -q -O /dev/null http://localhost:9090/-/healthy; else exit 1; fi] delay=0s timeout=3s period=5s #success=1 #failure=6 Readiness: exec [sh -c if [ -x "$(command -v curl)" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x "$(command -v wget)" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi] delay=0s timeout=3s period=5s #success=1 #failure=3 Startup: exec [sh -c if [ -x "$(command -v curl)" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x "$(command -v wget)" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi] delay=0s timeout=3s period=15s #success=1 #failure=60 Environment: <none> Mounts: /etc/pki/ca-trust/extracted/pem/ from prometheus-trusted-ca-bundle (ro) /etc/prometheus/certs from tls-assets (ro) /etc/prometheus/config_out from config-out (ro) /etc/prometheus/configmaps/kubelet-serving-ca-bundle from configmap-kubelet-serving-ca-bundle (ro) /etc/prometheus/configmaps/metrics-client-ca from configmap-metrics-client-ca (ro) /etc/prometheus/configmaps/serving-certs-ca-bundle from configmap-serving-certs-ca-bundle (ro) /etc/prometheus/rules/prometheus-k8s-rulefiles-0 from prometheus-k8s-rulefiles-0 (rw) /etc/prometheus/secrets/kube-etcd-client-certs from secret-kube-etcd-client-certs (ro) /etc/prometheus/secrets/kube-rbac-proxy from secret-kube-rbac-proxy (ro) /etc/prometheus/secrets/metrics-client-certs from secret-metrics-client-certs (ro) /etc/prometheus/secrets/prometheus-k8s-proxy from secret-prometheus-k8s-proxy (ro) /etc/prometheus/secrets/prometheus-k8s-thanos-sidecar-tls from secret-prometheus-k8s-thanos-sidecar-tls (ro) /etc/prometheus/secrets/prometheus-k8s-tls from secret-prometheus-k8s-tls (ro) /etc/prometheus/web_config/web-config.yaml from web-config (ro,path="web-config.yaml") /prometheus from prometheus-k8s-db (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-85zlc (ro) config-reloader: Container ID: Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:209e20410ec2d3d7a502f568d2b7fe1cd1beadcb36fff2d1e6f59d77be3200e3 Image ID: Port: <none> Host Port: <none> Command: /bin/prometheus-config-reloader Args: --listen-address=localhost:8080 --reload-url=http://localhost:9090/-/reload --config-file=/etc/prometheus/config/prometheus.yaml.gz --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml --watched-dir=/etc/prometheus/rules/prometheus-k8s-rulefiles-0 State: Waiting Reason: PodInitializing Ready: False Restart Count: 0 Requests: cpu: 1m memory: 10Mi Environment: POD_NAME: prometheus-k8s-0 (v1:metadata.name) SHARD: 0 Mounts: /etc/prometheus/config from config (rw) /etc/prometheus/config_out from config-out (rw) /etc/prometheus/rules/prometheus-k8s-rulefiles-0 from prometheus-k8s-rulefiles-0 (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-85zlc (ro) thanos-sidecar: Container ID: Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36fc214537c763b3a3f0a9dc7a1bd4378a80428c31b2629df8786a9b09155e6d Image ID: Ports: 10902/TCP, 10901/TCP Host Ports: 0/TCP, 0/TCP Args: sidecar --prometheus.url=http://localhost:9090/ --tsdb.path=/prometheus --http-address=127.0.0.1:10902 --grpc-server-tls-cert=/etc/tls/grpc/server.crt --grpc-server-tls-key=/etc/tls/grpc/server.key --grpc-server-tls-client-ca=/etc/tls/grpc/ca.crt State: Waiting Reason: PodInitializing Ready: False Restart Count: 0 Requests: cpu: 1m memory: 25Mi Environment: <none> 
Mounts: /etc/tls/grpc from secret-grpc-tls (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-85zlc (ro) prometheus-proxy: Container ID: Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:140f8947593d92e1517e50a201e83bdef8eb965b552a21d3caf346a250d0cf6e Image ID: Port: 9091/TCP Host Port: 0/TCP Args: -provider=openshift -https-address=:9091 -http-address= -email-domain=* -upstream=http://localhost:9090 -openshift-service-account=prometheus-k8s -openshift-sar={"resource": "namespaces", "verb": "get"} -openshift-delegate-urls={"/": {"resource": "namespaces", "verb": "get"}} -tls-cert=/etc/tls/private/tls.crt -tls-key=/etc/tls/private/tls.key -client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token -cookie-secret-file=/etc/proxy/secrets/session_secret -openshift-ca=/etc/pki/tls/cert.pem -openshift-ca=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt State: Waiting Reason: PodInitializing Ready: False Restart Count: 0 Requests: cpu: 1m memory: 20Mi Environment: HTTP_PROXY: HTTPS_PROXY: NO_PROXY: Mounts: /etc/pki/ca-trust/extracted/pem/ from prometheus-trusted-ca-bundle (ro) /etc/proxy/secrets from secret-prometheus-k8s-proxy (rw) /etc/tls/private from secret-prometheus-k8s-tls (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-85zlc (ro) kube-rbac-proxy: Container ID: Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5e1c69d005727e3245604cfca7a63e4f9bc6e15128c7489e41d5e967305089e Image ID: Port: 9092/TCP Host Port: 0/TCP Args: --secure-listen-address=0.0.0.0:9092 --upstream=http://127.0.0.1:9090 --allow-paths=/metrics --config-file=/etc/kube-rbac-proxy/config.yaml --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key --client-ca-file=/etc/tls/client/client-ca.crt --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --logtostderr=true --tls-min-version=VersionTLS12 State: Waiting Reason: PodInitializing Ready: False Restart Count: 0 Requests: cpu: 1m memory: 15Mi Environment: <none> Mounts: /etc/kube-rbac-proxy from secret-kube-rbac-proxy (rw) /etc/tls/client from configmap-metrics-client-ca (ro) /etc/tls/private from secret-prometheus-k8s-tls (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-85zlc (ro) kube-rbac-proxy-thanos: Container ID: Image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5e1c69d005727e3245604cfca7a63e4f9bc6e15128c7489e41d5e967305089e Image ID: Port: 10902/TCP Host Port: 0/TCP Args: --secure-listen-address=[$(POD_IP)]:10902 --upstream=http://127.0.0.1:10902 --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key --client-ca-file=/etc/tls/client/client-ca.crt --config-file=/etc/kube-rbac-proxy/config.yaml --tls-cipher-suites=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --allow-paths=/metrics --logtostderr=true --tls-min-version=VersionTLS12 --client-ca-file=/etc/tls/client/client-ca.crt State: Waiting Reason: PodInitializing Ready: False Restart Count: 0 Requests: cpu: 1m memory: 10Mi Environment: POD_IP: (v1:status.podIP) Mounts: /etc/kube-rbac-proxy from secret-kube-rbac-proxy (rw) 
/etc/tls/client from metrics-client-ca (ro) /etc/tls/private from secret-prometheus-k8s-thanos-sidecar-tls (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-85zlc (ro) Conditions: Type Status Initialized False Ready False ContainersReady False PodScheduled True Volumes: config: Type: Secret (a volume populated by a Secret) SecretName: prometheus-k8s Optional: false tls-assets: Type: Projected (a volume that contains injected data from multiple sources) SecretName: prometheus-k8s-tls-assets-0 SecretOptionalName: <nil> config-out: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: <unset> prometheus-k8s-rulefiles-0: Type: ConfigMap (a volume populated by a ConfigMap) Name: prometheus-k8s-rulefiles-0 Optional: false web-config: Type: Secret (a volume populated by a Secret) SecretName: prometheus-k8s-web-config Optional: false secret-kube-etcd-client-certs: Type: Secret (a volume populated by a Secret) SecretName: kube-etcd-client-certs Optional: false secret-prometheus-k8s-tls: Type: Secret (a volume populated by a Secret) SecretName: prometheus-k8s-tls Optional: false secret-prometheus-k8s-proxy: Type: Secret (a volume populated by a Secret) SecretName: prometheus-k8s-proxy Optional: false secret-prometheus-k8s-thanos-sidecar-tls: Type: Secret (a volume populated by a Secret) SecretName: prometheus-k8s-thanos-sidecar-tls Optional: false secret-kube-rbac-proxy: Type: Secret (a volume populated by a Secret) SecretName: kube-rbac-proxy Optional: false secret-metrics-client-certs: Type: Secret (a volume populated by a Secret) SecretName: metrics-client-certs Optional: false configmap-serving-certs-ca-bundle: Type: ConfigMap (a volume populated by a ConfigMap) Name: serving-certs-ca-bundle Optional: false configmap-kubelet-serving-ca-bundle: Type: ConfigMap (a volume populated by a ConfigMap) Name: kubelet-serving-ca-bundle Optional: false configmap-metrics-client-ca: Type: ConfigMap (a volume populated by a ConfigMap) Name: metrics-client-ca Optional: false prometheus-k8s-db: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: <unset> metrics-client-ca: Type: ConfigMap (a volume populated by a ConfigMap) Name: metrics-client-ca Optional: false secret-grpc-tls: Type: Secret (a volume populated by a Secret) SecretName: prometheus-k8s-grpc-tls-crdkohb1gb92n Optional: false prometheus-trusted-ca-bundle: Type: ConfigMap (a volume populated by a ConfigMap) Name: prometheus-trusted-ca-bundle-2rsonso43rc5p Optional: true kube-api-access-85zlc: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true ConfigMapName: openshift-service-ca.crt ConfigMapOptional: <nil> QoS Class: Burstable Node-Selectors: kubernetes.io/os=linux Tolerations: node.kubernetes.io/memory-pressure:NoSchedule op=Exists node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedCreatePodSandBox 4m19s (x409 over 15h) kubelet (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_prometheus-k8s-0_openshift-monitoring_debda4d2-6914-4b36-92e0-78f68d539ab3_0(86af91d4e64ab0fbad95352b029762e9856ff24005445b458bccb22e0ee9b655): error adding pod openshift-monitoring_prometheus-k8s-0 to CNI network 
"multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): [openshift-monitoring/prometheus-k8s-0/debda4d2-6914-4b36-92e0-78f68d539ab3:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[openshift-monitoring/prometheus-k8s-0 86af91d4e64ab0fbad95352b029762e9856ff24005445b458bccb22e0ee9b655] [openshift-monitoring/prometheus-k8s-0 86af91d4e64ab0fbad95352b029762e9856ff24005445b458bccb22e0ee9b655] failed to get pod annotation: timed out waiting for annotations: context deadline exceeded
Both pods in error state seem to be waiting on this issue "failed to get pod annotation: timed out waiting for annotations: context deadline exceeded"
This is a clone of issue OCPBUGS-3186. The following is the description of the original issue:
—
Description of problem:
Fails to produce a clear error message when the zones do not match the subnets in BYON.
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Always
Steps to Reproduce:
1. In install-config.yaml:
   yq '.controlPlane.platform.ibmcloud.zones,.platform.ibmcloud.controlPlaneSubnets' install-config.yaml
   ["ca-tor-1", "ca-tor-2", "ca-tor-3"]
   - ca-tor-existing-network-1-cp-ca-tor-2
   - ca-tor-existing-network-1-cp-ca-tor-3
2. openshift-install create manifests --dir byon-az-test-1
Actual results:
FATAL failed to fetch Master Machines: failed to generate asset "Master Machines": failed to create master machine objects: failed to create provider: no subnet found for ca-tor-1
Expected results:
A clearer error message about the mismatch between zones and subnets in install-config.yaml.
Additional info:
This is a clone of issue OCPBUGS-1453. The following is the description of the original issue:
—
Description of problem:
The TargetDown alert fired while it shouldn't have. Prometheus endpoints are not always properly unregistered, so the alert may think that some Kube service endpoints are down.
Version-Release number of selected component (if applicable):
The problem has always been there.
How reproducible:
Not reproducible. Most of the time Prometheus endpoints are properly unregistered. The aim here is to make the TargetDown Prometheus expression more resilient; this can be tested against past metrics data in which the unregistration issue was encountered.
Steps to Reproduce:
N/A
Actual results:
TargetDown alert triggered while Kube service endpoints are all up & running.
Expected results:
The TargetDown alert should not have been triggered.
Our CMO e2e tests create several containers besides the standard CMO deployment. These pods currently do not set any security context capabilities, which creates a warning like so:
W0705 08:35:38.590283 15206 warnings.go:70] would violate PodSecurity "restricted:v1.24": allowPrivilegeEscalation != false (container "alertmanager-webhook-e2e-testutil" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "alertmanager-webhook-e2e-testutil" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "alertmanager-webhook-e2e-testutil" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "alertmanager-webhook-e2e-testutil" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
We should be proactive and set security capability constraints. From this run, this seems to impact the following pods/containers:
Both are used more than once.
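A minimal sketch of a securityContext that satisfies the "restricted" profile the warning refers to; the field values follow the warning text, but the helper name is an assumption and this is not the actual CMO e2e test code.
~~~
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// restrictedSecurityContext returns a securityContext matching the
// "restricted" Pod Security profile named in the warning above.
func restrictedSecurityContext() *corev1.SecurityContext {
	runAsNonRoot := true
	allowPrivilegeEscalation := false
	return &corev1.SecurityContext{
		RunAsNonRoot:             &runAsNonRoot,
		AllowPrivilegeEscalation: &allowPrivilegeEscalation,
		Capabilities: &corev1.Capabilities{
			Drop: []corev1.Capability{"ALL"},
		},
		SeccompProfile: &corev1.SeccompProfile{
			Type: corev1.SeccompProfileTypeRuntimeDefault,
		},
	}
}

func main() {
	fmt.Printf("%+v\n", restrictedSecurityContext())
}
~~~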
Description of problem:
The icon color of Alerts in the Topology list view should be based on alert type.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. Create a deployment 2. Create a resource quota so that the quota alert will be visible in the topology list page 3. Navigate to the topology list page
Actual results:
Alert icon color is black and white. See the screenshots
Expected results:
Alert icon color should be based on the alert type.
Additional info:
This is a clone of issue OCPBUGS-1427. The following is the description of the original issue:
—
Description of problem:
The jump looks worst on GCP, but looking closer, Azure and AWS both jumped as well, just not as high.
Disruption data indicates that the image registry on GCP was averaging around 30-40 seconds of disruption during an upgrade, until Aug 27th when it jumped to 125-135 seconds and has remained there ever since.
We see similar spikes in ingress-to-console and ingress-to-oauth. NOTE: image registry backend is also behind ingress, so all three are ingress related disruption.
https://datastudio.google.com/s/uBC4zuBFdTE
These charts show the problem on Aug 27 for registry, ingress to console, and ingress to oauth.
sdn network type appears unaffected.
Something merged Aug 26-27 that caused a significant change for anything behind ingress using ovn on gcp.
Description of problem: As discovered in https://issues.redhat.com/browse/OCPBUGS-2795, gophercloud fails to list swift containers when the endpoint speaks HTTP2. This means that CIRO will provision a 100GB cinder volume even though swift is available to the tenant.
We're for example seeing this behavior in our CI on vexxhost.
The gophercloud commit that fixed this issue is https://github.com/gophercloud/gophercloud/commit/b7d5b2cdd7ffc13e79d924f61571b0e5f74ec91c, specifically the `|| ct == ""` part on line 75 of openstack/objectstorage/v1/containers/results.go. This commit made it in gophercloud v0.18.0.
CIRO still depends on gophercloud v0.17.0. We should bump gophercloud to fix the bug.
Version-Release number of selected component (if applicable):
All versions. Fix should go to 4.8 - 4.12.
How reproducible:
Always, when swift speaks HTTP2.
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
Description of problem:
Added a script to collect PodNetworkConnectivityChecks to be able to view the overall status of the pod network connectivity. The current must-gather collects the contents of `openshift-network-diagnostics` but does not collect the PodNetworkConnectivityCheck.
Version-Release number of selected component (if applicable):
4.12, 4.11, 4.10
This is a clone of issue OCPBUGS-2824. The following is the description of the original issue:
—
Description of problem:
When users resize their browser to a small size, the deployment details page on the Topology page overlaps the drop-down list component, which prevents the user from using the drop-down list functionality. All content in the dropdown list is covered.
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-10-24-103753
How reproducible:
Always
Steps to Reproduce:
1. Login OCP, go to developer perspective -> Topology page 2. Click and open one resource (eg: deployment), make sure the resource sidebar has been opened 3. Adjust the browser windows to small size 4. Check if the dropdown list component has been covered
Actual results:
The dropdown list component is covered by the deployment details page (see attachment for more details).
Expected results:
The dropdown list component should be displayed on top, and the functionality should work even if the window is small.
Additional info:
Description of problem:
Alert actions are not triggering modal from where storage cluster can be expanded.
Version-Release number of selected component (if applicable):
4.12
How reproducible:
1/1
Steps to Reproduce:
1. Fill up a storage cluster to 80% 2. Alert is seen in cluster dashboard. 3. Click the Add Capacity button
Actual results:
Modal is not launched.
Expected results:
Modal should be launched.
Additional info:
This is a clone of issue OCPBUGS-3277. The following is the description of the original issue:
—
I saw this occur one time when running installs in a continuous loop. This was with COMPACT_IPV4 in a non-disconnected setup.
WaitForBootstrapComplete shows it can't access the API:
level=info msg=Unable to retrieve cluster metadata from Agent Rest API: no clusterID known for the cluster
level=debug msg=cluster is not registered in rest API
level=debug msg=infraenv is not registered in rest API
This is because create-cluster-and-infraenv.service failed
Failed Units: 2
create-cluster-and-infraenv.service
NetworkManager-wait-online.service
The agentbasedinstaller register command wasn't able to retrieve the image to get the version
Nov 03 23:03:24 master-0 create-cluster-and-infraenv[2702]: time="2022-11-03T23:03:24Z" level=error msg="command 'oc adm release info -o template --template '\{{.metadata.version}}' --insecure=false registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-10-25-210451 --registry-config=/tmp/registry-config3852044519' exited with non-zero exit code 1: \nerror: unable to read image registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-10-25-210451: Get \"https://registry.ci.openshift.org/v2/\": dial tcp: lookup registry.ci.openshift.org on 192.168.111.1:53: read udp 192.168.111.80:51315->192.168.111.1:53: i/o timeout\n" Nov 03 23:03:24 master-0 create-cluster-and-infraenv[2702]: time="2022-11-03T23:03:24Z" level=error msg="failed to get image openshift version from release image registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-10-25-210451" error="command 'oc adm release info -o template --template '\{{.metadata.version}}' --insecure=false registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-10-25-210451 --registry-config=/tmp/registry-config3852044519' exited with non-zero exit code 1: \nerror: unable to read image registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-10-25-210451: Get \"https://registry.ci.openshift.org/v2/\": dial tcp: lookup registry.ci.openshift.org on 192.168.111.1:53: read udp 192.168.111.80:51315->192.168.111.1:53: i/o timeout\n"
This occurs when attempting to get the release here:
https://github.com/openshift/assisted-service/blob/master/cmd/agentbasedinstaller/register.go#L58
We should add a retry mechanism or restart the service to handle spurious network failures like this.
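A minimal sketch of such a retry mechanism, with hypothetical names and an arbitrary retry policy (not the actual assisted-service change):
~~~
package main

import (
	"fmt"
	"time"
)

// getReleaseVersionWithRetry retries a flaky fetch a fixed number of times,
// sleeping between attempts, instead of failing the whole service on a single
// transient DNS or network error.
func getReleaseVersionWithRetry(fetch func() (string, error), attempts int, delay time.Duration) (string, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		version, err := fetch()
		if err == nil {
			return version, nil
		}
		lastErr = err
		time.Sleep(delay)
	}
	return "", fmt.Errorf("failed after %d attempts: %w", attempts, lastErr)
}

func main() {
	calls := 0
	version, err := getReleaseVersionWithRetry(func() (string, error) {
		calls++
		if calls < 3 {
			return "", fmt.Errorf("dial tcp: i/o timeout")
		}
		return "4.12.0", nil
	}, 5, 10*time.Millisecond)
	fmt.Println(version, err) // 4.12.0 <nil>
}
~~~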
This is a clone of issue OCPBUGS-3405. The following is the description of the original issue:
—
In case it should be used for publishing artifacts in CI jobs.
Look into whether the following things are leaked:
Description of problem:
To address: 'Static Pod is managed but errored" err="managed container xxx does not have Resource.Requests'
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Steps to Reproduce:
1. 2. 3.
Actual results:
Expected results:
Additional info:
Already merged in https://github.com/openshift/cluster-kube-apiserver-operator/pull/1398
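For illustration only (this is a sketch, not the merged change), a container spec carrying explicit resource requests, which is what the error message asks for, looks roughly like this; the values and helper name are placeholders.
~~~
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// containerWithRequests builds a container spec with explicit CPU and memory
// requests, which is what the "does not have Resource.Requests" error asks for.
func containerWithRequests(name, image string) corev1.Container {
	return corev1.Container{
		Name:  name,
		Image: image,
		Resources: corev1.ResourceRequirements{
			Requests: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("10m"),
				corev1.ResourceMemory: resource.MustParse("50Mi"),
			},
		},
	}
}

func main() {
	c := containerWithRequests("example", "registry.example.com/example:latest")
	fmt.Println(c.Resources.Requests.Cpu(), c.Resources.Requests.Memory())
}
~~~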
Description of problem:
The latest implementation of the history pruner (pr805 [1]) increased the max upgrade history in CVO to 100 and implemented a weight-based pruning priority strategy in case the history length grows any larger. This pruning, however, is not happening, letting the history grow uncontrollably and potentially reach resource limits of etcd or Kubernetes.
Observed the following while running continuous upgrade-rollback cycles:
$ oc get clusterversion version -o json | jq '.status.history|length'
203
Version-Release number of selected component (if applicable):
4.12.0-0.nightly-2022-08-23-223922
4.12.0-0.nightly-2022-08-23-153511
How reproducible:
1/1
Steps to Reproduce:
Same as described in bz2097067 [2], with the addition of waiting a few minutes after the first rollback to allow it to reach the 'Completed' state.
Actual results:
History grows uncontrollably
Expected results:
History should be pruned to keep max size of 100
Additional info:
[1] https://github.com/openshift/cluster-version-operator/pull/805
[2] https://bugzilla.redhat.com/show_bug.cgi?id=2097067#c4
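Following up on the expected behaviour above, a minimal sketch of the size bound the pruner should enforce; the real CVO code in [1] additionally weighs entries when deciding which ones to drop, and the simplified UpdateHistory type below is a stand-in, not the openshift/api type:

```go
package history

// maxHistory mirrors the intended cap of 100 entries.
const maxHistory = 100

// UpdateHistory is a simplified stand-in for a ClusterVersion history entry.
type UpdateHistory struct {
	Version string
	// other fields elided
}

// prune keeps at most maxHistory entries. ClusterVersion history is ordered
// newest-first, so truncating at the end drops the oldest entries; the real
// pruner also weighs entries (e.g. completed vs. partial updates) before
// choosing what to drop.
func prune(history []UpdateHistory) []UpdateHistory {
	if len(history) <= maxHistory {
		return history
	}
	return history[:maxHistory]
}
```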
Description of problem:
mapi_machinehealthcheck_short_circuit is not properly reconciling its state when a MachineHealthCheck that was failing because of unhealthy Machines is removed. With two MachineSets (called blue and green, where only one has running Machines at a specific point in time), each with a MachineAutoscaler and MachineHealthCheck, mapi_machinehealthcheck_short_circuit will continue to report 1 for a MachineHealthCheck that was actually removed during the switch from blue to green.
$ oc get machineset | egrep 'blue|green'
housiocp4-wvqbx-worker-blue-us-east-2a    0   0           2d17h
housiocp4-wvqbx-worker-green-us-east-2a   1   1   1   1   2d17h
$ oc get machineautoscaler
NAME                      REF KIND     REF NAME                                  MIN   MAX   AGE
worker-green-us-east-1a   MachineSet   housiocp4-wvqbx-worker-green-us-east-2a   1     4     2d17h
$ oc get machinehealthcheck
NAME                              MAXUNHEALTHY   EXPECTEDMACHINES   CURRENTHEALTHY
machine-api-termination-handler   100%           0                  0
worker-green-us-east-1a           40%            1                  1
{ "name": "machine-health-check-unterminated-short-circuit", "file": "/etc/prometheus/rules/prometheus-k8s-rulefiles-0/openshift-machine-api-machine-api-operator-prometheus-rules-ccb650d9-6fc4-422b-90bb-70452f4aff8f.yaml", "rules": [ { "state": "firing", "name": "MachineHealthCheckUnterminatedShortCircuit", "query": "mapi_machinehealthcheck_short_circuit == 1", "duration": 1800, "labels": { "severity": "warning" }, "annotations": { "description": "The number of unhealthy machines has exceeded the `maxUnhealthy` limit for the check, you should check\nthe status of machines in the cluster.\n", "summary": "machine health check {{ $labels.name }} has been disabled by short circuit for more than 30 minutes" }, "alerts": [ { "labels": { "alertname": "MachineHealthCheckUnterminatedShortCircuit", "container": "kube-rbac-proxy-mhc-mtrc", "endpoint": "mhc-mtrc", "exported_namespace": "openshift-machine-api", "instance": "10.128.0.58:8444", "job": "machine-api-controllers", "name": "worker-blue-us-east-1a", "namespace": "openshift-machine-api", "pod": "machine-api-controllers-779dcb8769-8gcn6", "service": "machine-api-controllers", "severity": "warning" }, "annotations": { "description": "The number of unhealthy machines has exceeded the `maxUnhealthy` limit for the check, you should check\nthe status of machines in the cluster.\n", "summary": "machine health check worker-blue-us-east-1a has been disabled by short circuit for more than 30 minutes" }, "state": "firing", "activeAt": "2022-12-09T15:59:25.1287541Z", "value": "1e+00" } ], "health": "ok", "evaluationTime": 0.000648129, "lastEvaluation": "2022-12-12T09:35:55.140174009Z", "type": "alerting" } ], "interval": 30, "limit": 0, "evaluationTime": 0.000661589, "lastEvaluation": "2022-12-12T09:35:55.140165629Z" },
As shown above, worker-blue-us-east-1a is no longer present or active; only worker-green-us-east-1a is. However, worker-blue-us-east-1a existed before the switch to green happened and was actually reporting some unhealthy Machines. Since it is now gone, mapi_machinehealthcheck_short_circuit should properly reconcile, as otherwise this is a false positive alert.
Version-Release number of selected component (if applicable):
OpenShift Container Platform 4.12.0-rc.3 (but is also seen on previous version)
How reproducible:
- Always
Steps to Reproduce:
1. Set up OpenShift Container Platform 4, on AWS for example
2. Create blue and green MachineSets, each with a MachineAutoscaler and MachineHealthCheck
3. Have active Machines for blue only
4. Trigger unhealthy Machines in the blue MachineSet
5. Switch to the green MachineSet by removing the blue MachineHealthCheck and MachineAutoscaler and setting the replicas of the blue MachineSet to 0
6. Create the green MachineHealthCheck and MachineAutoscaler and scale the green MachineSet to 1
7. Observe how mapi_machinehealthcheck_short_circuit continues to report an unhealthy state for the blue MachineHealthCheck, which no longer exists
Actual results:
mapi_machinehealthcheck_short_circuit reports a problematic MachineHealthCheck even though the faulty MachineHealthCheck no longer exists.
Expected results:
mapi_machinehealthcheck_short_circuit should properly reconcile its state and drop MachineHealthChecks that have been removed at the OpenShift Container Platform level.
Additional info:
It looks similar to the issues reported in https://bugzilla.redhat.com/show_bug.cgi?id=2013528 and https://bugzilla.redhat.com/show_bug.cgi?id=2047702 (although the latter may not be directly relevant).
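A hedged sketch of the missing cleanup: when a MachineHealthCheck is deleted, its series should be removed from the gauge so the alert cannot keep firing for an object that no longer exists. The gauge variable and its label set below are assumptions for illustration, not the exact machine-api-operator code:

```go
package metrics

import "github.com/prometheus/client_golang/prometheus"

// shortCircuitGauge approximates the mapi_machinehealthcheck_short_circuit
// metric; the exact label set in machine-api-operator may differ.
var shortCircuitGauge = prometheus.NewGaugeVec(
	prometheus.GaugeOpts{
		Name: "mapi_machinehealthcheck_short_circuit",
		Help: "Short-circuit state per MachineHealthCheck.",
	},
	[]string{"name", "namespace"},
)

// cleanupShortCircuit is what a MachineHealthCheck delete handler could call:
// DeleteLabelValues drops the stale series for the removed check, so the
// metric no longer reports 1 for e.g. worker-blue-us-east-1a after the switch
// to green.
func cleanupShortCircuit(name, namespace string) {
	shortCircuitGauge.DeleteLabelValues(name, namespace)
}
```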
Description of problem:
This is a bug record to pin down dependency versions in CMO release 4.12 after the release-4.12 branch was detached from the master branch.
Version-Release number of selected component (if applicable):
4.12
How reproducible:
N/A
Steps to Reproduce:
N/A
Actual results:
N/A
Expected results:
N/A
Additional info:
None.
This is a clone of issue OCPBUGS-3195. The following is the description of the original issue:
—
Description of problem:
- The service CA controller start func seems to return that error as soon as its context is cancelled (which seems to happen the moment the first signal is received): https://github.com/openshift/service-ca-operator/blob/42088528ef8a6a4b8c99b0f558246b8025584056/pkg/controller/starter.go#L24
- That apparently triggers os.Exit(1) immediately: https://github.com/openshift/service-ca-operator/blob/42088528ef8a6a4b8c99b0f55824[…]om/openshift/library-go/pkg/controller/controllercmd/builder.go
- The lock release doesn't happen until the periodic renew tick breaks out: https://github.com/openshift/service-ca-operator/blob/42088528ef8a6a4b8c99b0f55824[…]/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go
- It seems unlikely that you'd reach the call to le.release() before the call to os.Exit(1) in the other goroutine.
Version-Release number of selected component (if applicable):
4.13.0
How reproducible:
~always
Steps to Reproduce:
1. oc delete -n openshift-service-ca pod <service-ca pod>
Actual results:
the old pod logs show:
W1103 09:59:14.370594 1 builder.go:106] graceful termination failed, controllers failed with error: stopped
and when a new pod comes up to replace it, it has to wait for a while before acquiring the leader lock
I1103 16:46:00.166173       1 leaderelection.go:248] attempting to acquire leader lease openshift-service-ca/service-ca-controller-lock...
.... waiting ....
I1103 16:48:30.004187       1 leaderelection.go:258] successfully acquired lease openshift-service-ca/service-ca-controller-lock
Expected results:
new pod can acquire the leader lease without waiting for the old pod's lease to expire
Additional info:
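A sketch (under assumptions, not the actual service-ca-operator fix) of how client-go leader election can hand off the lease on shutdown: with ReleaseOnCancel set, the elector releases the lock when its context is cancelled, provided the process does not exit before the release completes:

```go
package leader

import (
	"context"
	"time"

	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

// runWithLeaderElection wraps a controller run function with leader election
// that releases its lease on context cancellation, so a replacement pod does
// not have to wait out the old lease. Construction of the resourcelock (e.g. a
// Lease in openshift-service-ca) is left to the caller; the durations here are
// illustrative defaults, not the operator's actual settings.
func runWithLeaderElection(ctx context.Context, lock resourcelock.Interface, run func(context.Context)) {
	leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
		Lock:            lock,
		LeaseDuration:   137 * time.Second,
		RenewDeadline:   107 * time.Second,
		RetryPeriod:     26 * time.Second,
		ReleaseOnCancel: true, // release the lock instead of letting it expire
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: run,
			OnStoppedLeading: func() {
				// Only exit the process after this point; exiting earlier is
				// the failure mode described above, where the release is
				// never sent and the new pod must wait for lease expiry.
			},
		},
	})
}
```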
Tracker issue for bootimage bump in 4.12. This issue should block issues which need a bootimage bump to fix.
The previous bump was OCPBUGS-2997.
This is a clone of issue OCPBUGS-3668. The following is the description of the original issue:
—
Description of problem:
Installer fails to install 4.12.0-rc.0 on VMware IPI with the script that worked with prior OCP versions. The error happens during the Terraform prepare step, when gathering information in the "Platform Provisioning Check". It looks like a permission issue, but we're using the vCenter administrator account. I double-checked, and that account has all the necessary permissions.
Version-Release number of selected component (if applicable):
OCP installer 4.12.0-rc.0; vSphere & vCenter 7.0.3 - no pending updates
How reproducible:
always - we observed this already in the nightlies, but wanted to wait for a RC to confirm
Steps to Reproduce:
1. Try to install using the openshift-install binary
Actual results:
Fails during the preparation step
Expected results:
Installs the cluster ;)
Additional info:
This runs in our CI/CD pipeline; let me know if you need access to the full run log: https://gitlab.consulting.redhat.com/cblum/storage-ocs-lab/-/jobs/219304 This includes the install-config.yaml, all component versions, and the full debug log output.
Description of problem:
Some upgrade CI jobs from 4.11.z to a 4.12 nightly build fail because the systemd unit machine-config-daemon-update-rpmostree-via-container fails.
omg get mcp
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
worker   rendered-worker-6e18de1272fad7a5ca1529941e3ceaed   False     True       True       3              0                   0                     1                      3h53m
master   rendered-master-60f4ff5893c94f53acd9ebb7a6bf53d4   False     True       True       3              0                   0                     1                      3h53m
Check the affected node:
omg get node/ip-10-0-57-74.us-east-2.compute.internal -o yaml | yq -y '.metadata.annotations'
cloud.network.openshift.io/egress-ipconfig: '[{"interface":"eni-0f6de21569b5b65c8","ifaddr":{"ipv4":"10.0.48.0/20"},"capacity":{"ipv4":14,"ipv6":15}}]'
csi.volume.kubernetes.io/nodeid: '{"ebs.csi.aws.com":"i-01a34f6b5f2cd1e41"}'
machine.openshift.io/machine: openshift-machine-api/ci-op-kb95kxx9-2a438-r6z94-master-2
machineconfiguration.openshift.io/controlPlaneTopology: HighlyAvailable
machineconfiguration.openshift.io/currentConfig: rendered-master-065664319cfbaee64277097d49a8a5a6
machineconfiguration.openshift.io/desiredConfig: rendered-master-60f4ff5893c94f53acd9ebb7a6bf53d4
machineconfiguration.openshift.io/desiredDrain: drain-rendered-master-60f4ff5893c94f53acd9ebb7a6bf53d4
machineconfiguration.openshift.io/lastAppliedDrain: drain-rendered-master-60f4ff5893c94f53acd9ebb7a6bf53d4
machineconfiguration.openshift.io/reason: 'error running systemd-run --unit machine-config-daemon-update-rpmostree-via-container --collect --wait -- podman run --authfile /var/lib/kubelet/config.json --privileged --pid=host --net=host --rm -v /:/run/host quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0daf5c4a35424410e88dde102022fc3581302bc8a98e09e2e4748502c59b3661 rpm-ostree ex deploy-from-self /run/host: Running as unit: machine-config-daemon-update-rpmostree-via-container.service Finished with result: exit-code Main processes terminated with: code=exited/status=125 Service runtime: 2min 52ms CPU time consumed: 144ms : exit status 125'
machineconfiguration.openshift.io/state: Degraded
volumes.kubernetes.io/controller-managed-attach-detach: 'true'
Check the MCD log on the affected node:
omg get pod -n openshift-machine-config-operator -o json | jq -r '.items[]|select(.spec.nodeName=="ip-10-0-57-74.us-east-2.compute.internal")|.metadata.name' | grep daemon
machine-config-daemon-znbvf
2022-10-09T22:12:58.797891917Z I1009 22:12:58.797821  179598 update.go:1917] Updating OS to layered image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0daf5c4a35424410e88dde102022fc3581302bc8a98e09e2e4748502c59b3661
2022-10-09T22:12:58.797891917Z I1009 22:12:58.797846  179598 rpm-ostree.go:447] Running captured: rpm-ostree --version
2022-10-09T22:12:58.815829171Z I1009 22:12:58.815800  179598 update.go:2068] rpm-ostree is not new enough for layering; forcing an update via container
2022-10-09T22:12:58.817577513Z I1009 22:12:58.817555  179598 update.go:2053] Running: systemd-run --unit machine-config-daemon-update-rpmostree-via-container --collect --wait -- podman run --authfile /var/lib/kubelet/config.json --privileged --pid=host --net=host --rm -v /:/run/host quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0daf5c4a35424410e88dde102022fc3581302bc8a98e09e2e4748502c59b3661 rpm-ostree ex deploy-from-self /run/host
...
2022-10-09T22:15:00.831959313Z E1009 22:15:00.831949  179598 writer.go:200] Marking Degraded due to: error running systemd-run --unit machine-config-daemon-update-rpmostree-via-container --collect --wait -- podman run --authfile /var/lib/kubelet/config.json --privileged --pid=host --net=host --rm -v /:/run/host quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0daf5c4a35424410e88dde102022fc3581302bc8a98e09e2e4748502c59b3661 rpm-ostree ex deploy-from-self /run/host: Running as unit: machine-config-daemon-update-rpmostree-via-container.service
2022-10-09T22:15:00.831959313Z Finished with result: exit-code
2022-10-09T22:15:00.831959313Z Main processes terminated with: code=exited/status=125
2022-10-09T22:15:00.831959313Z Service runtime: 2min 52ms
2022-10-09T22:15:00.831959313Z CPU time consumed: 144ms
2022-10-09T22:15:00.831959313Z : exit status 125
Version-Release number of selected component (if applicable):
4.12
Steps to Reproduce:
upgrade cluster from 4.11.8 to 4.12.0-0.nightly-2022-10-05-053337
Actual results:
The upgrade fails because a node is degraded; the rpm-ostree update via container failed.
Expected results:
upgrade can be completed successfully
Additional info:
must-gather: https://gcsweb-qe-private-deck-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/qe-private-deck/logs/periodic-ci-openshift-openshift-tests-private-release-4.12-nightly-4.12-upgrade-from-stable-4.11-aws-ipi-proxy-p1/1579169944476585984/artifacts/aws-ipi-proxy-p1/gather-must-gather/artifacts/must-gather.tar
Other build logs of failed jobs
Tracker issue for bootimage bump in 4.12. This issue should block issues which need a bootimage bump to fix.
The previous bump was OCPBUGS-1941.
This is a clone of issue OCPBUGS-3287. The following is the description of the original issue:
—
Description of problem:
Configure both IPv4 and IPv6 addresses for api/ingress in install-config.yaml, then install the cluster using the agent-based installer. The provisioned cluster has only an IPv4 stack for API/Ingress.
Version-Release number of selected component (if applicable):
4.12
How reproducible:
Always
Steps to Reproduce:
1. As description 2. 3.
Actual results:
The cluster provisioned has only IPv4 stack for API/Ingress
Expected results:
The cluster provisioned has both IPv4 and IPv6 for API/Ingress
Additional info:
4.2 AWS boot images such as ami-01e7fdcb66157b224 include the old ignition.platform.id=ec2 kernel command line parameter. When launched against 4.12.0-rc.3, new machines fail with:
coreos-assemblers used ignition.platform.id=ec2, but pivoted to =aws here. It's not clear when that made its way into new AWS boot images. Some time after 4.2 and before 4.6.
Afterburn dropped support for legacy command-line options like the ec2 slug in 5.0.0. But it's not clear when that shipped into RHCOS. The release controller points at this RHCOS diff, but that has afterburn-0-5.3.0-1 builds on both sides.
100%, given a sufficiently old AMI and a sufficiently new OpenShift release target.
The new Machine will get to Provisioned but fail to progress to Running. systemd journal logs will include unknown provider 'ec2' for Afterburn units.
Old boot-image AMIs can successfully update to 4.12.
Alternatively, we pin down the set of exposed boot images sufficiently that users with older clusters can audit for exposure and avoid the issue by updating to more modern boot images (although updating boot images is not trivial; see RFE-3001 and the Ignition spec 2 to 3 transition discussed in kcs#5514051).
This is a clone of issue OCPBUGS-7102. The following is the description of the original issue:
—
Description of problem:
https://github.com/openshift/operator-framework-olm/blob/7ec6b948a148171bd336750fed98818890136429/staging/operator-lifecycle-manager/pkg/controller/operators/olm/plugins/downstream_csv_namespace_labeler_plugin_test.go#L309
has a dependency on creation of a next-version release branch.
Version-Release number of selected component (if applicable):
4.13
How reproducible:
Steps to Reproduce:
1. clone operator-framework/operator-framework-olm
2. make unit/olm
3. deal with a really bumpy first-time kubebuilder/envtest install experience
4. profit
Actual results:
error
Expected results:
pass
Additional info: