
4.12.0-rc.4

Jump to: Complete Features | Incomplete Features | Complete Epics | Incomplete Epics | Other Complete | Other Incomplete |

Changes from 4.11.59

Note: this page shows the Feature-Based Change Log for a release

Complete Features

These features were completed when this image was assembled

1. Proposed title of this feature request
Add runbook_url to alerts in the OCP UI

2. What is the nature and description of the request?
If an alert includes a runbook_url label, then it should appear in the UI for the alert as a link.

3. Why does the customer need this? (List the business requirements here)
Customers can easily reach the alert's runbook and address their issues (an example alert is sketched below, after item 4).

4. List any affected packages or components.
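
As a rough illustration of the requested behavior (not taken from the feature itself; the alert name, namespace, expression, and URL below are placeholders), this is roughly what an alerting rule carrying a runbook_url looks like; the console would render that value as a link on the alert's page.

```shell
# Illustrative only: a PrometheusRule whose alert carries a runbook_url.
# Name, namespace, expression, and URL are placeholders.
cat <<'EOF' | oc apply -f -
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-runbook-alert
  namespace: openshift-monitoring
spec:
  groups:
  - name: example.rules
    rules:
    - alert: ExampleAlertWithRunbook
      expr: vector(1)
      labels:
        severity: info
        runbook_url: https://example.com/runbooks/example-alert
      annotations:
        summary: Example alert that links to its runbook.
EOF
```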

OCP/Telco Definition of Done
Epic Template descriptions and documentation.


Epic Goal

  • Rebase OpenShift components to k8s v1.24

Why is this important?

  • Rebasing ensures components work with the upcoming release of Kubernetes
  • Address tech debt related to upstream deprecations and removals.

Scenarios

  1. ...

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • ...

Dependencies (internal and external)

  1. k8s 1.24 release

Previous Work (Optional):

Open questions::

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

Incomplete Features

When this image was assembled, these features were not yet completed. Therefore, only the Jira Cards included here are part of this release.

OLM would have to support a mechanism, similar to podAffinity, that allows multiple architecture values to be specified, enabling it to pin operators to worker nodes of the matching architecture (a generic sketch of such a constraint follows the reference below).

Ref: https://github.com/openshift/enhancements/pull/1014
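
For context only, the sketch below shows the kind of scheduling constraint the enhancement is about, expressed with standard Kubernetes node affinity on kubernetes.io/arch; the listed architectures are examples, and how OLM would expose an equivalent control for operators is exactly what the enhancement proposes.

```shell
# Illustrative only: a plain nodeAffinity term that pins a workload to worker
# nodes of the listed architectures. The enhancement above is about giving OLM
# an equivalent, multi-value mechanism for operators.
cat <<'EOF' > multiarch-affinity-example.yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/arch
          operator: In
          values:
          - amd64
          - arm64
EOF
```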

 

Cut a new release of the OLM API and update the OLM API dependency version (go.mod) in the OLM package; then
bring the upstream changes from OLM-2674 to the downstream olm repo.

A/C:

 - New OLM API version release
 - OLM API dependency updated in OLM Project
 - OLM Subscription API changes downstreamed
 - OLM Controller changes downstreamed
 - Changes manually tested on Cluster Bot

Epic Goal

  • Enabling integration of a single hub cluster to install both ARM and x86 spoke clusters
  • Enabling support for heterogeneous OCP clusters
  • Document requirements and deployment flows
  • Support installation in disconnected environments

Why is this important?

  • Requested by clients

Scenarios

  1. Users manage both ARM and x86 machines; we should not require two different hub clusters
  2. Users manage mixed-architecture clusters without requiring all nodes to be of the same architecture

Acceptance Criteria

  • Process is well documented
  • We are able to install in a disconnected environment

We have a set of images

  • quay.io/edge-infrastructure/assisted-installer-agent:latest
  • quay.io/edge-infrastructure/assisted-installer-controller:latest
  • quay.io/edge-infrastructure/assisted-installer:latest

that should become multiarch images. This should be done both in upstream and downstream.

As a reference, we have internally built those images as multiarch and made them available as:

  • registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:latest
  • registry.redhat.io/rhai-tech-preview/assisted-installer-reporter-rhel8:latest
  • registry.redhat.io/rhai-tech-preview/assisted-installer-rhel8:latest

They can be consumed by the Assisted Service pod via the following environment variables:

    - name: AGENT_DOCKER_IMAGE
      value: registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:latest
    - name: CONTROLLER_IMAGE
      value: registry.redhat.io/rhai-tech-preview/assisted-installer-reporter-rhel8:latest
    - name: INSTALLER_IMAGE
      value: registry.redhat.io/rhai-tech-preview/assisted-installer-rhel8:latest

Feature Overview

We drive OpenShift cross-market customer success and new customer adoption with constant improvements and feature additions to the existing capabilities of our OpenShift Core Networking (SDN and Network Edge). This feature captures that natural progression of the product.

Goals

  • Feature enhancements (performance, scale, configuration, UX, ...)
  • Modernization (incorporation and productization of new technologies)

Requirements

  • Core Networking Stability
  • Core Networking Performance and Scale
  • Core Networking Extensibility (Multus CNIs)
  • Core Networking UX (Observability)
  • Core Networking Security and Compliance

In Scope

  • Network Edge (ingress, DNS, LB)
  • SDN (CNI plugins, openshift-sdn, OVN, network policy, egressIP, egress Router, ...)
  • Networking Observability

Out of Scope

There are definitely grey areas, but in general:

  • CNV
  • Service Mesh
  • CNF

Documentation Considerations

Questions to be addressed:

  • What educational or reference material (docs) is required to support this product feature? For users/admins? Other functions (security officers, etc)?
  • Does this feature have doc impact?
  • New Content, Updates to existing content, Release Note, or No Doc Impact
  • If unsure and no Technical Writer is available, please contact Content Strategy.
  • What concepts do customers need to understand to be successful in [action]?
  • How do we expect customers will use the feature? For what purpose(s)?
  • What reference material might a customer want/need to complete [action]?
  • Is there source material that can be used as reference for the Technical Writer in writing the content? If yes, please link if available.
  • What is the doc impact (New Content, Updates to existing content, or Release Note)?

Goal: Provide queryable metrics and telemetry for cluster routes and sharding in an OpenShift cluster.

Problem: Today we test OpenShift performance and scale with best-guess or anecdotal evidence for the number of routes that our customers use. Best practices for a large number of routes in a cluster is to shard, however we have no visibility with regard to if and how customers are using sharding.

Why is this important? These metrics will inform our performance and scale testing, documented cluster limits, and how customers are using sharding for best practice deployments.

Dependencies (internal and external):

Prioritized epics + deliverables (in scope / not in scope):

Not in scope:

Estimate (XS, S, M, L, XL, XXL):

Previous Work:

Open questions:

Acceptance criteria:

Epic Done Checklist:

  • CI - CI Job & Automated tests: <link to CI Job & automated tests>
  • Release Enablement: <link to Feature Enablement Presentation> 
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>
  • Notes for Done Checklist
    • Adding links to the above checklist with multiple teams contributing; select a meaningful reference for this Epic.
    • Checklist added to each Epic in the description, to be filled out as phases are completed - tracking progress towards “Done” for the Epic.

Description:

As described in the "Metrics to be sent via telemetry" section of the Design Doc, the following metrics need to be sent from the OpenShift cluster to Red Hat premises:

  • Minimum Routes per Shard
    • Recording Rule – cluster:route_metrics_controller_routes_per_shard:min  : min(route_metrics_controller_routes_per_shard)
    • Gives the minimum value of Routes per Shard.
  • Maximum Routes per Shard
    • Recording Rule – cluster:route_metrics_controller_routes_per_shard:max  : max(route_metrics_controller_routes_per_shard)
    • Gives the maximum value of Routes per Shard.
  • Average Routes per Shard
    • Recording Rule – cluster:route_metrics_controller_routes_per_shard:avg  : avg(route_metrics_controller_routes_per_shard)
    • Gives the average value of Routes per Shard.
  • Median Routes per Shard
    • Recording Rule – cluster:route_metrics_controller_routes_per_shard:median  : quantile(0.5, route_metrics_controller_routes_per_shard)
    • Gives the median value of Routes per Shard.
  • Number of Routes summed by TLS Termination type
    • Recording Rule – cluster:openshift_route_info:tls_termination:sum : sum (openshift_route_info) by (tls_termination)
    • Gives the number of Routes for each tls_termination value. The possible values for tls_termination are edge, passthrough and reencrypt. 

The metrics should be allowlisted on the cluster side.
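
For reference, the first two recording rules above can be written as a rule-group fragment like the sketch below; how the rules are shipped and how the resulting series are allowlisted for Telemetry follows the design doc and the "Sending metrics via telemetry" steps, not this snippet.

```shell
# Sketch of two of the recording rules listed above as a Prometheus rule
# group. File name and group name are placeholders.
cat <<'EOF' > routes-per-shard-recording-rules.yaml
groups:
- name: route-metrics.rules
  rules:
  - record: cluster:route_metrics_controller_routes_per_shard:min
    expr: min(route_metrics_controller_routes_per_shard)
  - record: cluster:route_metrics_controller_routes_per_shard:max
    expr: max(route_metrics_controller_routes_per_shard)
EOF
```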

The steps described in Sending metrics via telemetry need to be followed, specifically step 5.

Depends on CFE-478.

Acceptance Criteria:

  • Support for sending the above mentioned metrics from OpenShift clusters to the Red Hat premises by allowlisting metrics on the cluster side

Description:

As described in the Design Doc, the following information needs to be exported from the Cluster Ingress Operator:

  • Number of routes/shard

Design 2 will be implemented as part of this story.

 

Acceptance Criteria:

  • Support for exporting the above mentioned metrics by Cluster Ingress Operator

Epic Goal

  • Make it possible to disable the console operator at install time, while still having a supported+upgradeable cluster.

Why is this important?

  • It's possible to disable console itself using spec.managementState in the console operator config. There is no way to remove the console operator, though. For clusters where an admin wants to completely remove console, we should give the option to disable the console operator as well.

Scenarios

  1. I'm an administrator who wants to minimize my OpenShift cluster footprint and who does not want the console installed on my cluster

Acceptance Criteria

  • It is possible at install time to opt-out of having the console operator installed. Once the cluster comes up, the console operator is not running.

Dependencies (internal and external)

  1. Composable cluster installation

Previous Work (Optional):

  1. https://docs.google.com/document/d/1srswUYYHIbKT5PAC5ZuVos9T2rBnf7k0F1WV2zKUTrA/edit#heading=h.mduog8qznwz
  2. https://docs.google.com/presentation/d/1U2zYAyrNGBooGBuyQME8Xn905RvOPbVv3XFw3stddZw/edit#slide=id.g10555cc0639_0_7

Open questions::

  1. The console operator manages the downloads deployment as well. Do we disable the downloads deployment? Long term we want to move to CLI manager: https://github.com/openshift/enhancements/blob/6ae78842d4a87593c63274e02ac7a33cc7f296c3/enhancements/oc/cli-manager.md

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

In the console-operator repo we need to add the `capability.openshift.io/console` annotation to all the manifests that the operator either contains or creates on the fly.

 

Manifests are currently present in /bindata and /manifest directories.
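
As a sketch of the mechanical change (the resource kind and the annotation value shown are placeholders; the exact key/value convention should be confirmed against the enhancement doc referenced below), adding the annotation to one of those manifests would look roughly like this:

```shell
# Sketch only: attaching the capability annotation named above to one of the
# operator's manifests in /bindata or /manifest. The resource and the
# annotation value are placeholders.
cat <<'EOF' > example-annotated-manifest.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-console-manifest
  namespace: openshift-console
  annotations:
    capability.openshift.io/console: "true"
EOF
```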

 

Here is an example of the insights-operator change.

Here is the overall enhancement doc.

 

This is an epic bucket for all activities surrounding the creation of a declarative approach to releasing and maintaining OLM catalogs.

Epic Goal

  • Allow Operator Authors to easily change the layout of the update graph in a single location so they can version/maintain/release it via git and have more approachable controls over graph vertices than today's replaces, skips and/or skipRange taxonomy
  • Allow Operator authors to have control over channel and bundle channel membership

Why is this important?

  • The imperative catalog maintenance approach used so far with opm is being moved to a declarative format (OLM-2127 and OLM-1780), moving away from bundle-level controls, but the update graph properties are still attached to a bundle
  • We've received feedback from the RHT internal developer community that maintaining and reasoning about the graph in the context of a single channel is still too hard, even with visualization tools
  • Making the update graph easily changeable is important to deliver on some of the promises of declarative index configuration
  • The current interface for declarative index configuration still relies on skips, skipRange and replaces to shape the graph on a per-bundle level; this becomes too complex at a certain point with a lot of bundles in channels, so we need something at the package level

Scenarios

  1. An Operator author wants to release a new version replacing the latest version published previously
  2. After additional post-GA testing an Operator author wants to establish a new update path to an existing released version from an older, released version
  3. After finding a bug post-GA an Operator author wants to temporarily remove a known to be problematic update path
  4. An automated system wants to push a bundle in between an existing update path as a result of an Operator (base) image rebuild (Freshmaker use case)
  5. A user wants to take a declarative graph definition and turn it into a graphical image for visually ensuring the graph looks like they want
  6. An Operator author wants to promote a certain bundle to an additional / different channel to indicate progress in maturity of the operator.

Acceptance Criteria

  • The declarative format has to be user readable and terse enough to make quick modifications
  • The declarative format should be machine writeable (Freshmaker)
  • The update graph is declared and modified in a text based format aligned with the declarative config
  • it has to be possible to add/remove edges at the leaves of the graph (releasing/unpublishing a new version)
  • it has to be possible to add/remove new vertices between existing edges (releasing/retracting a new update path)
  • it has to be possible to add/remove new edges in between existing vertices (releasing/unpublishing a version in between, Freshmaker use case)
  • it has to be possible to change the channel membership of a bundle after it's published (channel promotion)
  • CI - MUST be running successfully with tests automated
  • it has to be possible to add additional metadata later to implement OLM-2087 and OLM-259 if required

Dependencies (internal and external)

  1. Declarative Index Config (OLM-2127)

Previous Work:

  1. Declarative Index Config (OLM-1780)

Related work

Open questions:

  1. What other manipulation scenarios are required?
    1. Answer: deprecation of content in the spirit of OLM-2087
    2. Answer: cross-channel update hints as described in OLM-2059 if that implementation requires it

 

When working on this Epic, it's important to keep in mind this other potentially related Epic: https://issues.redhat.com/browse/OLM-2276

 

Enhance the veneer rendering to be able to read the input veneer data from stdin, via a pipe, in a manner similar to https://dev.to/napicella/linux-pipes-in-golang-2e8j

The command could then be used in a manner similar to many k8s examples, like:

```shell
opm alpha render-veneer semver -o yaml < infile > outfile
```

Upstream issue link: https://github.com/operator-framework/operator-registry/issues/1011

Jira Description

As an OPM maintainer, I want to downstream the PR for OCP 4.12 and backport it to OCP 4.11 so that IIB will NOT be impacted by the changes when it upgrades the OPM version to use the next/future opm upstream release (v1.25.0).

Summary / Background

IIB (the downstream service that manages the indexes) uses the upstream version. If they bump the OPM version to the next/future (v1.25.0) release with this change before the downstream images are updated, then the process to manage the indexes downstream will face issues and it will impact the distributions.

Acceptance Criteria

  • The changes in the PR are available for the releases which use FBC -> OCP 4.11, 4.12

Definition of Ready

  • PRs merged into downstream OCP repos branches 4.11/4.12

Definition of Done

  • We have checked that the downstream images include the applied changes (i.e., we can verify in the same way that we checked whether the changes were in the downstream for the OLM-2639 fix)

Feature Overview
Provide CSI drivers to replace all the in-tree cloud provider drivers we currently have. These drivers will probably be released as tech preview versions first before being promoted to GA.

Goals

  • Framework for rapid creation of CSI drivers for our cloud providers
  • CSI driver for AWS EBS
  • CSI driver for AWS EFS
  • CSI driver for GCP
  • CSI driver for Azure
  • CSI driver for VMware vSphere
  • CSI Driver for Azure Stack
  • CSI Driver for Alicloud
  • CSI Driver for IBM Cloud

Requirements

Requirement | Notes | isMvp?
Framework for CSI driver | TBD | Yes
Drivers should be available to install both in disconnected and connected mode | | Yes
Drivers should upgrade from release to release without any impact | | Yes
Drivers should be installable via CVO (when in-tree plugin exists) | |

Out of Scope

This work will only cover the drivers themselves, it will not include

  • enhancements to the CSI API framework
  • the migration to said drivers from the in-tree drivers
  • work for non-cloud provider storage drivers (FC-SAN, iSCSI) being converted to CSI drivers

Background, and strategic fit
In a future Kubernetes release (currently 1.21), in-tree cloud provider drivers will be deprecated and replaced with CSI equivalents; we need the drivers created so that we continue to support the ecosystems in an appropriate way.

Assumptions

  • Storage SIG won't move out the changeover to a later Kubernetes release

Customer Considerations
Customers will need to be able to use the storage they want.

Documentation Considerations

  • Target audience: cluster admins
  • Updated content: update storage docs to show how to use these drivers (also better expose the capabilities)

This Epic is to track the GA of this feature

Goal

  • Make the Google Cloud File Service available via a CSI driver; it is desirable that this implementation has dynamic provisioning
  • Without GCP filestore support, we are limited to block / RWO only (GCP PD 4.8 GA)
  • Align with what we support on other major public cloud providers.

Why is this important?

  • There is a known storage gap with Google Cloud where only block storage is supported
  • More customers are deploying on GCE and asking for file / RWX storage.

Scenarios

  1. Install the CSI driver
  2. Remove the CSI Driver
  3. Dynamically provision a CSI Google File PV*
  4. Utilise a Google File PV
  5. Assess optional features such as resize & snapshot

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • ...

Dependencies (internal and external)

  1. ...

Previous Work (Optional):

Open questions::

Customers::

  • Telefonica Spain
  • Deutsche Bank

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

As an OCP user, I want images for GCP Filestore CSI Driver and Operator, so that I can install them on my cluster and utilize GCP Filestore shares.

We need to continue to maintain specific areas within storage, this is to capture that effort and track it across releases.

Goals

  • To allow OCP users and cluster admins to detect problems early and with as little interaction with Red Hat as possible.
  • When Red Hat is involved, make sure we have all the information we need from the customer, i.e. in metrics / telemetry / must-gather.
  • Reduce storage test flakiness so we can spot real bugs in our CI.

Requirements

Requirement | Notes | isMvp?
Telemetry | | No
Certification | | No
API metrics | | No

Out of Scope

n/a

Background, and strategic fit
With the expected scale of our customer base, we want to keep the load of customer tickets / BZs low.

Assumptions

Customer Considerations

Documentation Considerations

  • Target audience: internal
  • Updated content: none at this time.

Notes

In progress:

  • CI flakes:
    • Configurable timeouts for e2e tests
      • Azure is slow and times out often
      • Cinder times out formatting volumes
      • AWS resize test times out

 

High prio:

  • Env. check tool for VMware - users often mis-configure permissions there and blame OpenShift. If we had a tool they could run, it might report better errors.
    • Should it be part of the installer?
    • Spike exists
  • Add / use cloud API call metrics
    • Helps customers to understand why things are slow
    • Helps build cop to understand a flake
      • With a post-install step that filters data from Prometheus that’s still running in the CI job.
    • Ideas:
      • Cloud is throttling X% of API calls longer than Y seconds
      • Attach / detach / provisioning / deletion / mount / unmount / resize takes longer than X seconds?
    • Capture metrics of operations that are stuck and won’t finish.
      • Sweep operation map from executioner???
      • Report operation metric into the highest bucket after the bucket threshold (i.e. if 10minutes is the last bucket, report an operation into this bucket after 10 minutes and don’t wait for its completion)?
      • Ask the monitoring team?
    • Include in CSI drivers too.
      • With alerts too

Unsorted

  • As the number of storage operators grows, it would be useful to have a Grafana board for storage operators
    • CSI driver metrics (from CSI sidecars + the driver itself  + its operator?)
    • CSI migration?
  • Get aggregated logs in cluster
    • They're rotated too soon
    • No logs from dead / restarted pods
    • No tools to combine logs from multiple pods (e.g. 3 controller managers)
  • What storage issues do customers have? It was 22% of all issues.
    • Insufficient docs?
    • Probably garbage
  • Document basic storage troubleshooting for our support teams
    • What logs are useful when, what log level to use
    • This has been discussed during the GSS weekly team meeting; however, it would be beneficial to have this documented.
  • Common vSphere errors, their debugging and fixing. 
  • Document sig-storage flake handling - not all failed [sig-storage] tests are ours

The end of general support for vSphere 6.7 will be on October 15, 2022, so vSphere 6.7 will be deprecated in 4.11.

We want to encourage vSphere customers to upgrade to vSphere 7 in OCP 4.11 since VMware is EOLing (general support) for vSphere 6.7 in Oct 2022.

We want to set the cluster to Upgradeable=false and have a strong alert pointing to our docs / requirements.

related slack: https://coreos.slack.com/archives/CH06KMDRV/p1647541493096729

Epic Goal

  • Update all images that we ship with OpenShift to the latest upstream releases and libraries.
  • Exact content of what needs to be updated will be determined as new images are released upstream, which is not known at the beginning of OCP development work. We don't know what new features will be included and should be tested and documented. Especially new CSI drivers releases may bring new, currently unknown features. We expect that the amount of work will be roughly the same as in the previous releases. Of course, QE or docs can reject an update if it's too close to deadline and/or looks too big.

Traditionally we did these updates as bugfixes, because we did them after the feature freeze (FF). Trying no-feature-freeze in 4.12. We will try to do as much as we can before FF, but we're quite sure something will slip past FF as usual.

Why is this important?

  • We want to ship the latest software that contains new features and bugfixes.

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.

Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.

(Using separate cards for each driver because these updates can be more complicated)

Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.

(Using separate cards for each driver because these updates can be more complicated)

Update all OCP and kubernetes libraries in storage operators to the appropriate version for OCP release.

This includes (but is not limited to):

  • Kubernetes:
    • client-go
    • controller-runtime
  • OCP:
    • library-go
    • openshift/api
    • openshift/client-go
    • operator-sdk

Operators:

  • aws-ebs-csi-driver-operator 
  • aws-efs-csi-driver-operator
  • azure-disk-csi-driver-operator
  • azure-file-csi-driver-operator
  • openstack-cinder-csi-driver-operator
  • gcp-pd-csi-driver-operator
  • gcp-filestore-csi-driver-operator
  • manila-csi-driver-operator
  • ovirt-csi-driver-operator
  • vmware-vsphere-csi-driver-operator
  • alibaba-disk-csi-driver-operator
  • ibm-vpc-block-csi-driver-operator
  • csi-driver-shared-resource-operator

 

  • cluster-storage-operator
  • csi-snapshot-controller-operator
  • local-storage-operator
  • vsphere-problem-detector

Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.

(Using separate cards for each driver because these updates can be more complicated)

Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.

(Using separate cards for each driver because these updates can be more complicated)

There is a new driver release 5.0.0 since the last rebase that includes snapshot support:

https://github.com/kubernetes-sigs/ibm-vpc-block-csi-driver/releases/tag/v5.0.0

Rebase the driver on v5.0.0 and update the deployments in ibm-vpc-block-csi-driver-operator.
There are no corresponding changes in ibm-vpc-node-label-updater since the last rebase.

Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.

(Using separate cards for each driver because these updates can be more complicated)

Update the driver to the latest upstream release. Notify QE and docs with any new features and important bugfixes that need testing or documentation.

This includes ibm-vpc-node-label-updater!

(Using separate cards for each driver because these updates can be more complicated)

Epic Goal

  • Enable the migration from a storage in-tree driver to a CSI-based driver with minimal impact to the end user, applications and cluster
  • These migrations would include, but are not limited to:
    • CSI driver for AWS EBS
    • CSI driver for GCP
    • CSI driver for Azure (file and disk)
    • CSI driver for VMware vSphere

Why is this important?

  • OpenShift needs to maintain its ability to enable PVCs and PVs of the main storage types
  • CSI Migration is getting close to GA; we need to have the feature fully tested and enabled in OpenShift
  • Upstream in-tree drivers are being deprecated to make way for the CSI drivers prior to in-tree driver removal

Scenarios

  1. User-initiated move from in-tree to CSI driver
  2. Upgrade-initiated move from in-tree to CSI driver
  3. Upgrade from EUS to EUS

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • ...

Dependencies (internal and external)

  1. ...

Previous Work (Optional):

Open questions::

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

This Epic tracks the GA of this feature

Epic Goal

Why is this important?

  • OpenShift needs to maintain its ability to enable PVCs and PVs of the main storage types
  • CSI Migration is getting close to GA; we need to have the feature fully tested and enabled in OpenShift
  • Upstream in-tree drivers are being deprecated to make way for the CSI drivers prior to in-tree driver removal

Scenarios

  1. User-initiated move from in-tree to CSI driver
  2. Upgrade-initiated move from in-tree to CSI driver
  3. Upgrade from EUS to EUS

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • ...

Dependencies (internal and external)

  1. ...

Previous Work (Optional):

Open questions::

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

On new installations, we should make the StorageClass created by the CSI operator the default one. 

However, we shouldn't do that in an upgrade scenario. The main reason is that users might have set a different quota on the CSI driver Storage Class.

Exit criteria:

  • New clusters get the CSI Storage Class as the default one.
  • Existing clusters don't get their default Storage Classes changed.

On new installations, we should make the StorageClass created by the CSI operator the default one. 

However, we shouldn't do that in an upgrade scenario. The main reason is that users might have set a different quota on the CSI driver Storage Class.

Exit criteria:

  • New clusters get the CSI Storage Class as the default one.
  • Existing clusters don't get their default Storage Classes changed.

tldr: three basic claims, the rest is explanation and one example

  1. We cannot improve long term maintainability solely by fixing bugs.
  2. Teams should be asked to produce designs for improving maintainability/debugability.
  3. Specific maintenance items (or investigation of maintenance items), should be placed into planning as peer to PM requests and explicitly prioritized against them.

While bugs are an important metric, fixing bugs is different than investing in maintainability and debugability. Investing in fixing bugs will help alleviate immediate problems, but doesn't improve the ability to address future problems. You (may) get a code base with fewer bugs, but when you add a new feature, it will still be hard to debug problems and interactions. This pushes a code base towards stagnation where it gets harder and harder to add features.

One alternative is to ask teams to produce ideas for how they would improve future maintainability and debugability instead of focusing on immediate bugs. This would produce designs that make problem determination, bug resolution, and future feature additions faster over time.

I have a concrete example of one such outcome of focusing on bugs vs quality. We have resolved many bugs about communication failures with ingress by finding problems with point-to-point network communication. We have fixed the individual bugs, but have not improved the code for future debugging. In so doing, we chase many hard-to-diagnose problems across the stack. The alternative is to create a point-to-point network connectivity capability. This would immediately improve bug resolution and stability (detection) for kuryr, ovs, legacy sdn, network-edge, kube-apiserver, openshift-apiserver, authentication, and console. Bug fixing does not produce the same impact.

We need more investment in our future selves. Saying, "teams should reserve this" doesn't seem to be universally effective. Perhaps an approach that directly asks for designs and impacts and then follows up by placing the items directly in planning and prioritizing against PM feature requests would give teams the confidence to invest in these areas and give broad exposure to systemic problems.


Relevant links:

Epic Goal

  • Change the default value for the spec.tuningOptions.maxConnections field in the IngressController API, which configures the HAProxy maxconn setting, to 50000 (fifty thousand).
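
For illustration only, the field in question can already be set explicitly on the default IngressController as sketched below; this shows the API surface whose default the epic changes, and is not part of the epic's work.

```shell
# Sketch: explicitly setting spec.tuningOptions.maxConnections on the default
# IngressController to the proposed new default value.
oc -n openshift-ingress-operator patch ingresscontroller/default \
  --type=merge \
  -p '{"spec":{"tuningOptions":{"maxConnections":50000}}}'
```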

Why is this important?

  • The maxconn setting constrains the number of simultaneous connections that HAProxy accepts. Beyond this limit, the kernel queues incoming connections. 
  • Increasing maxconn enables HAProxy to queue incoming connections intelligently.  In particular, this enables HAProxy to respond to health probes promptly while queueing other connections as needed.
  • The default setting of 20000 has been in place since OpenShift 3.5 was released in April 2017 (see BZ#1405440, commit, RHBA-2017:0884). 
  • Hardware capabilities have increased over time, and the current default is too low for typical modern machine sizes. 
  • Increasing the default setting improves HAProxy's performance at an acceptable cost in the common case. 

Scenarios

  1. As a cluster administrator who is installing OpenShift on typical hardware, I want OpenShift router to be tuned appropriately to take advantage of my hardware's capabilities.

Acceptance Criteria

  • CI is passing. 
  • The new default setting is clearly documented. 
  • A release note informs cluster administrators of the change to the default setting. 

Dependencies (internal and external)

  1. None.

Previous Work (Optional):

  1. The  haproxy-max-connections-tuning enhancement made maxconn configurable without changing the default.  The enhancement document details the tradeoffs in terms of memory for various settings of nbthreads and maxconn with various numbers of routes. 

Open questions::

  1. ...

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

 

OCP/Telco Definition of Done

Epic Template descriptions and documentation.

Epic Goal

Why is this important?

  • This regression is a major performance and stability issue and it has happened once before.

Drawbacks

  • The E2E test may be complex due to trying to determine what DNS pods are responding to DNS requests. This is straightforward using the chaos plugin.

Scenarios

  • CI Testing

Acceptance Criteria

  • CI - MUST be running successfully with tests automated

Dependencies (internal and external)

  1. SDN Team

Previous Work (Optional):

  1. N/A

Open questions::

  1. Where do these E2E test go? SDN Repo? DNS Repo?

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub
    Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub
    Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

Enable the chaos plugin https://coredns.io/plugins/chaos/ in our CoreDNS configuration so that we can use a DNS query to easily identify what DNS pods are responding to our requests.
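
A minimal sketch of what that could look like, assuming the chaos plugin's hostname.bind behavior is used to reveal the answering pod; the Corefile fragment, server block, and DNS service IP below are placeholders (the real Corefile is managed by the DNS operator).

```shell
# Sketch only: enable the chaos plugin in the CoreDNS server block and query
# it. hostname.bind / id.server answers with the responding pod's hostname.
cat <<'EOF' > Corefile.fragment
.:5353 {
    chaos
    # ...existing plugins (kubernetes, forward, cache, ...) stay as they are
}
EOF

# CH-class TXT query against the cluster DNS service (IP is a placeholder):
dig @172.30.0.10 -c CH -t TXT hostname.bind +short
```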

Feature Overview

  • This Section: High-level description of the feature, i.e.: Executive Summary
  • Note: A Feature is a capability or a well defined set of functionality that delivers business value. Features can include additions or changes to existing functionality. Features can easily span multiple teams, and multiple releases.

 

Goals

  • This Section: Provide high-level goal statement, providing user context and expected user outcome(s) for this feature

 

Requirements

  • This Section: A list of specific needs or objectives that a Feature must deliver to satisfy the Feature. Some requirements will be flagged as MVP. If an MVP gets shifted, the feature shifts. If a non-MVP requirement slips, it does not shift the feature.

 

Requirement | Notes | isMvp?
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES
Release Technical Enablement | Provide necessary release enablement details and documents. | YES

 

(Optional) Use Cases

This Section: 

  • Main success scenarios - high-level user stories
  • Alternate flow/scenarios - high-level user stories
  • ...

 

Questions to answer…

  • ...

 

Out of Scope

 

Background, and strategic fit

This Section: What does the person writing code, testing, documenting need to know? What context can be provided to frame this feature.

 

Assumptions

  • ...

 

Customer Considerations

  • ...

 

Documentation Considerations

Questions to be addressed:

  • What educational or reference material (docs) is required to support this product feature? For users/admins? Other functions (security officers, etc)?
  • Does this feature have doc impact?  
  • New Content, Updates to existing content,  Release Note, or No Doc Impact
  • If unsure and no Technical Writer is available, please contact Content Strategy.
  • What concepts do customers need to understand to be successful in [action]?
  • How do we expect customers will use the feature? For what purpose(s)?
  • What reference material might a customer want/need to complete [action]?
  • Is there source material that can be used as reference for the Technical Writer in writing the content? If yes, please link if available.
  • What is the doc impact (New Content, Updates to existing content, or Release Note)?

When OCP is performing a cluster upgrade, the user should be notified about this fact.

There are a few possibilities for how to surface the cluster upgrade to the users:

  • Display a console notification throughout OCP web UI saying that the cluster is currently under upgrade.
  • Global notification throughout OCP web UI saying that the cluster is currently under upgrade.
  • Have an alert firing for all the users of OCP stating the cluster is undergoing an upgrade. 

 

AC:

  • Console-operator will create a ConsoleNotification CR when the cluster is being upgraded. Once the upgrade is done, console-operator will remove that CR. These are the three statuses based on which we are determining if the cluster is being upgraded.
  • Add unit tests

 

Note: We need to decide if we want to distinguish this particular notification by a different color. cc'ing Ali Mobrem.

 

Created from: https://issues.redhat.com/browse/RFE-3024

As a console user I want to have the option to:

  • Restart Deployment
  • Retry latest DeploymentConfig if it failed

 

For Deployments we will add the 'Restart rollout' action button. This action will PATCH the Deployment object's 'spec.template.metadata.annotations' block, by adding 'openshift.io/restartedAt: <actual-timestamp>' annotation. This will restart the deployment, by creating a new ReplicaSet.

  • action is disabled if:
    • Deployment is paused

 

For DeploymentConfig we will add 'Retry rollout' action button.  This action will PATCH the latest revision of ReplicationController object's 'metadata.annotations' block by setting 'openshift.io/deployment/phase: "New"' and removing openshift.io/deployment.cancelled and openshift.io/deployment.status-reason.

  • action is enabled if:
    • latest revision of the ReplicationController resource is in Failed phase
  • action is disabled if:
    • latest revision of the ReplicationController resource is in Complete phase
    • DeploymentConfig does not have any rollouts
    • DeploymentConfig is paused

 

Acceptance Criteria:

  • Add the 'Restart rollout' action button for the Deployment resource to both action menu and kebab menu
  • Add the 'Retry rollout' action button for the DeploymentConfig resource to both action menu and kebab menu

 

BACKGROUND:

OpenShift console will be updated to allow rollout restart deployment from the console itself.

Currently, from the OpenShift console, for the resource “deploymentconfigs” we can only start and pause the rollout, and for the resource “deployment” we can only resume the rollout. Neither resource (deployment & deployment config) has an option to restart the rollout. That is why the customer wants this functionality, so the same action can be performed from the OpenShift console as well as the CLI.

The customer wants developers who are not fluent with the oc tool and terminal utilities to be able to use the console instead of the terminal to restart a deployment, just like we do through the CLI using the command “oc rollout restart deploy/<deployment-name>“.
Usually, when developers change the config map that a deployment uses, they have to restart the pods. Currently, the developers have to use the oc rollout restart deployment command. The customer wants a button/menu that performs the same action from the console as well.

Design
Doc: https://docs.google.com/document/d/1i-jGtQGaA0OI4CYh8DH5BBIVbocIu_dxNt3vwWmPZdw/edit

As a developer, I want to make status.HostIP for Pods visible in the Pod details page of the OCP Web Console. Currently there is no way to view the node IP for a Pod in the OpenShift Web Console.  When viewing a Pod in the console, the field status.HostIP is not visible.

 

Acceptance criteria:

  • Make the pod's HostIP field visible in the pod details page, similar to the PodIP field
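
For context (not part of the acceptance criteria), the field is already exposed by the API and can be read from the CLI as sketched below; pod name and namespace are placeholders.

```shell
# The value the console would surface, read via the CLI:
oc -n <namespace> get pod <pod-name> -o jsonpath='{.status.hostIP}{"\n"}'
```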

Feature Overview

  • As an infrastructure owner, I want a repeatable method to quickly deploy the initial OpenShift cluster.
  • As an infrastructure owner, I want to install the first (management, hub, “cluster 0”) cluster to manage other (standalone, hub, spoke, hub of hubs) clusters.

Goals

  • Enable customers and partners to successfully deploy a single “first” cluster in disconnected, on-premises settings

Requirements

4.11 MVP Requirements

  • Customers and partners need to be able to download the installer
  • Enable customers and partners to deploy a single “first” cluster (cluster 0) using single node, compact, or highly available topologies in disconnected, on-premises settings
  • Installer must support advanced network settings such as static IP assignments, VLANs and NIC bonding for on-premises metal use cases, as well as DHCP and PXE provisioning environments.
  • Installer needs to support automation, including integration with third-party deployment tools, as well as user-driven deployments.
  • In the MVP automation has higher priority than interactive, user-driven deployments.
  • For bare metal deployments, we cannot assume that users will provide us the credentials to manage hosts via their BMCs.
  • Installer should prioritize support for platforms None, baremetal, and VMware.
  • The installer will focus on a single version of OpenShift, and a different build artifact will be produced for each different version.
  • The installer must not depend on a connected registry; however, the installer can optionally use a previously mirrored registry within the disconnected environment.

Use Cases

  • As a Telco partner engineer (Site Engineer, Specialist, Field Engineer), I want to deploy an OpenShift cluster in production with limited or no additional hardware and don’t intend to deploy more OpenShift clusters [Isolated edge experience].
  • As a Enterprise infrastructure owner, I want to manage the lifecycle of multiple clusters in 1 or more sites by first installing the first  (management, hub, “cluster 0”) cluster to manage other (standalone, hub, spoke, hub of hubs) clusters [Cluster before your cluster].
  • As a Partner, I want to package OpenShift for large scale and/or distributed topology with my own software and/or hardware solution.
  • As a large enterprise customer or Service Provider, I want to install a “HyperShift Tugboat” OpenShift cluster in order to offer a hosted OpenShift control plane at scale to my consumers (DevOps Engineers, tenants) that allows for fleet-level provisioning for low CAPEX and OPEX, much like AKS or GKE [Hypershift].
  • As a new, novice to intermediate user (Enterprise Admin/Consumer, Telco Partner integrator, RH Solution Architect), I want to quickly deploy a small OpenShift cluster for Poc/Demo/Research purposes.

Questions to answer…

  •  

Out of Scope

Out of scope use cases (that are part of the Kubeframe/factory project):

  • As a Partner (OEMs, ISVs), I want to install and pre-configure OpenShift with my hardware/software in my disconnected factory, while allowing further (minimal) reconfiguration of a subset of capabilities later at a different site by different set of users (end customer) [Embedded OpenShift].
  • As an Infrastructure Admin at an Enterprise customer with multiple remote sites, I want to pre-provision OpenShift centrally prior to shipping and activating the clusters in remote sites.

Background, and strategic fit

  • This Section: What does the person writing code, testing, documenting need to know? What context can be provided to frame this feature.

Assumptions

  1. The user has only access to the target nodes that will form the cluster and will boot them with the image presented locally via a USB stick. This scenario is common in sites with restricted access such as government infra where only users with security clearance can interact with the installation, where software is allowed to enter in the premises (in a USB, DVD, SD card, etc.) but never allowed to come back out. Users can't enter supporting devices such as laptops or phones.
  2. The user has access to the target nodes remotely to their BMCs (e.g. iDrac, iLo) and can map an image as virtual media from their computer. This scenario is common in data centers where the customer provides network access to the BMCs of the target nodes.
  3. We cannot assume that we will have access to a computer to run an installer or installer helper software.

Customer Considerations

  • ...

Documentation Considerations

Questions to be addressed:

  • What educational or reference material (docs) is required to support this product feature? For users/admins? Other functions (security officers, etc)?
  • Does this feature have doc impact?
  • New Content, Updates to existing content, Release Note, or No Doc Impact
  • If unsure and no Technical Writer is available, please contact Content Strategy.
  • What concepts do customers need to understand to be successful in [action]?
  • How do we expect customers will use the feature? For what purpose(s)?
  • What reference material might a customer want/need to complete [action]?
  • Is there source material that can be used as reference for the Technical Writer in writing the content? If yes, please link if available.
  • What is the doc impact (New Content, Updates to existing content, or Release Note)?

 

References

 

 

Set the ClusterDeployment CRD to deploy OpenShift in FIPS mode and make sure that after deployment the cluster is set in that mode

In order to install FIPS-compliant clusters, we need to make sure that installconfig + agentconfig based deployments take into account the FIPS config in installconfig.

This task is about passing the config to agentclusterinstall so it makes it into the ISO. Once there, AGENT-374 will give it to assisted-service.
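
For reference, the install-config input in question is the top-level fips flag, as sketched below with all other fields omitted; this is the value that needs to be carried through to agentclusterinstall.

```shell
# Sketch of the relevant install-config fragment: the top-level FIPS switch
# that must flow into agentclusterinstall and, via AGENT-374, on to
# assisted-service. All other install-config fields are omitted here.
cat <<'EOF' >> install-config.yaml
fips: true
EOF
```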

Epic Goal

As an OpenShift infrastructure owner, I want to deploy OpenShift clusters with dual-stack IPv4/IPv6

As an OpenShift infrastructure owner, I want to deploy OpenShift clusters with single-stack IPv6

Why is this important?

IPv6 and dual-stack clusters are requested often by customers, especially from Telco customers. Working with dual-stack clusters is a requirement for many but also a transition into a single-stack IPv6 clusters, which for some of our users is the final destination.

Acceptance Criteria

  • Agent-based installer can deploy IPv6 clusters
  • Agent-based installer can deploy dual-stack clusters
  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.

Previous Work

Karim's work proving how agent-based can deploy IPv6: IPv6 deploy with agent-based installer

Done Checklist

  • CI - CI is running, tests are automated and merged.

  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

For dual-stack installations the agent-cluster-install.yaml must have both an IPv4 and IPv6 subnet in the networking.MachineNetwork or assisted-service will throw an error. This field is in InstallConfig but it must be added to agent-cluster-install in its Generate().

For IPv4 and IPv6 installs, setting up the MachineNetwork is not needed, but it also does not cause problems if it's set, so it should be fine to set it at all times.
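
An illustrative install-config networking fragment for the dual-stack case (the CIDRs are placeholders): both address families appear under machineNetwork, which Generate() must carry into agent-cluster-install.yaml.

```shell
# Illustrative only: dual-stack machineNetwork entries in the install-config
# networking section. CIDR values are placeholders.
cat <<'EOF' > networking-fragment.yaml
networking:
  machineNetwork:
  - cidr: 192.168.111.0/24
  - cidr: fd2e:6f44:5dd8:c956::/120
EOF
```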

Epic Goal

As an OpenShift infrastructure owner, I want to deploy a cluster zero with RHACM or MCE and have the required components installed when the installation is completed

Why is this important?

BILLI makes it easier to deploy a cluster zero. BILLI users know at installation time what the purpose of their cluster is when they plan the installation. Day-2 steps are necessary to install operators, and users, especially when automating installations, want to finish the installation flow with their required components installed.

Acceptance Criteria

  • A user can provide MCE manifests and have it installed without additional manual steps after the installation is completed
  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

User Story:

As a customer, I want to be able to:

  • Install MCE with the agent-installer

so that I can achieve

  • create an MCE hub with my openshift install

Acceptance Criteria:

Description of criteria:

  • Upstream documentation including examples of the extra manifests needed
  • Unit tests that include MCE extra manifests
  • Ability to install MCE using agent-installer is tested
  • Point 3

(optional) Out of Scope:

We are only allowing the user to provide extra manifests to install MCE at this time. We are not adding an option to "install mce" on the command line (or UI)

Engineering Details:

This requires/does not require a design proposal.
This requires/does not require a feature gate.

User Story:

As a customer, I want to be able to:

  • Install MCE with the agent-installer

so that I can achieve

  • create an MCE hub with my openshift install

Acceptance Criteria:

Description of criteria:

  • Upstream documentation including examples of the extra manifests needed
  • Unit tests that include MCE extra manifests
  • Ability to install MCE using agent-installer is tested
  • Point 3

(optional) Out of Scope:

We are only allowing the user to provide extra manifests to install MCE at this time. We are not adding an option to "install mce" on the command line (or UI)

Engineering Details:

This requires/does not require a design proposal.
This requires/does not require a feature gate.

Epic Goal

  • Rebase cluster autoscaler on top of Kubernetes 1.25

Why is this important?

  • Need to pick up latest upstream changes

Scenarios

  1. ...

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • ...

Dependencies (internal and external)

  1. ...

Previous Work (Optional):

Open questions::

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

User Story

As a user I would like to see all the events that the autoscaler creates, even duplicates. Having the CAO set this flag will allow me to continue to see these events.

Background

We have carried a patch for the autoscaler that would enable the duplication of events. This patch can now be dropped because the upstream added a flag for this behavior in https://github.com/kubernetes/autoscaler/pull/4921

Steps

  • add the --record-duplicated-events flag to all autoscaler deployments from the CAO
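
A sketch of the end state in the rendered cluster-autoscaler Deployment (the container name and surrounding layout are assumptions; the flag is the upstream one referenced above):

```shell
# Sketch: the upstream flag appears among the cluster-autoscaler container
# args rendered by the CAO. Other args are elided.
cat <<'EOF' > cluster-autoscaler-args-fragment.yaml
containers:
- name: cluster-autoscaler
  args:
  - --record-duplicated-events=true
  # ...existing args unchanged
EOF
```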

Stakeholders

  • openshift eng

Definition of Done

  • autoscaler continues to work as expected and produces events for everything
  • Docs
  • this does not require documentation as it preserves existing behavior and provides no interface for user interaction
  • Testing
  • current tests should continue to pass

Feature Overview

Add GA support for deploying OpenShift to IBM Public Cloud

Goals

Complete the existing gaps to make OpenShift on IBM Cloud VPC (Next Gen2) Generally Available

Requirements

Optional requirements

  • OpenShift can be deployed using Mint mode and STS for cloud provider credentials (future release, tbd)
  • OpenShift can be deployed in disconnected mode (https://issues.redhat.com/browse/SPLAT-737)
  • OpenShift on IBM Cloud supports User Provisioned Infrastructure (UPI) deployment method (future release, 4.14?)

Epic Goal

  • Enable installation of private clusters on IBM Cloud. This epic will track associated work.

Why is this important?

  • This is required MVP functionality to achieve GA.

Scenarios

  1. Install a private cluster on IBM Cloud.

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.

Dependencies (internal and external)

  1. ...

Previous Work (Optional):

Open questions::

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

Background and Goal

Currently in OpenShift we do not support distributing hotfix packages to cluster nodes. In time-sensitive situations, a RHEL hotfix package can be the quickest route to resolving an issue. 

Acceptance Criteria

  1. Under guidance from Red Hat CEE, customers can deploy RHEL hotfix packages to MachineConfigPools.
  2. Customers can easily remove the hotfix when the underlying RHCOS image incorporates the fix.

Before we ship OCP CoreOS layering in https://issues.redhat.com/browse/MCO-165 we need to switch the format of what is currently `machine-os-content` to be the new base image.

The overall plan is:

  • Publish the new base image as `rhel-coreos-8` in the release image
  • Also publish the new extensions container (https://github.com/openshift/os/pull/763) as `rhel-coreos-8-extensions`
  • Teach the MCO to use this without also involving layering/build controller
  • Delete old `machine-os-content`

As an OCP CoreOS layering developer, having telemetry data about the number of clusters using a custom osImageURL will help us understand how broadly this feature is being used and improve it accordingly.

Acceptance Criteria:

  • Cluster using Custom osImageURL is available via telemetry
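For context, "custom osImageURL" refers to a MachineConfig that overrides the base OS image for a pool. A minimal, hedged example follows; the name and image reference are placeholders.

```yaml
# Minimal sketch of a MachineConfig overriding the OS image for the worker pool;
# the image reference and name are placeholders.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-custom-os-image                        # placeholder name
  labels:
    machineconfiguration.openshift.io/role: worker
spec:
  osImageURL: quay.io/example/custom-rhel-coreos:latest  # custom layered image (placeholder)
```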

After https://github.com/openshift/os/pull/763 is in the release image, teach the MCO how to use it. This is basically:

  • Schedule the extensions container as a kubernetes service (just serves a yum repo via http)
  • Change the MCD to write a file into `/etc/yum.repos.d/machine-config-extensions.repo` that consumes it instead of what it does now in pulling RPMs from the mounted container filesystem

 

Why?

  • Decouple control and data plane. 
    • Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.
  • Improve security
    • Shift credentials out of cluster that support the operation of core platform vs workload
  • Improve cost
    • Allow a user to toggle what they don’t need.
    • Ensure a smooth path to scale to 0 workers and upgrade with 0 workers.

 

Assumption

  • A customer will be able to associate a cluster as “Infrastructure only”
  • E.g. one option: management cluster has role=master, and role=infra nodes only, control planes are packed on role=infra nodes
  • OR the entire cluster is labeled infrastructure , and node roles are ignored.
  • Anything that runs on a master node by default in Standalone that is present in HyperShift MUST be hosted and not run on a customer worker node.

 

 

Doc: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit 

Overview 

Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.

Assumption

  • A customer will be able to associate a cluster as “Infrastructure only”
  • E.g. one option: management cluster has role=master, and role=infra nodes only, control planes are packed on role=infra nodes
  • OR the entire cluster is labeled infrastructure, and node roles are ignored.
  • Anything that runs on a master node by default in Standalone that is present in HyperShift MUST be hosted and not run on a customer worker node.

DoD 

cluster-snapshot-controller-operator is running on the CP. 

More information here: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit 

As HyperShift Cluster Instance Admin, I want to run cluster-csi-snapshot-controller-operator in the management cluster, so the guest cluster runs just my applications.

  • Add a new cmdline option for the guest cluster kubeconfig file location
  • Parse both kubeconfigs:
    • One from projected service account, which leads to the management cluster.
    • Second from the new cmdline option introduced above. This one leads to the guest cluster.
  • Move creation of manifests/08_webhook_service.yaml from CVO to the operator - it needs to be created in the management cluster.
  • Tag manifests of objects that should not be deployed by CVO in HyperShift
  • Only on HyperShift:
    • When interacting with Kubernetes API, carefully choose the right kubeconfig to watch / create / update objects in the right cluster.
    • Replace namespaces in all Deployments and other objects that are created in the management cluster. They must be created in the same namespace as the operator.
    • Don’t create operand’s PodDisruptionBudget?
    • Update ValidationWebhookConfiguration to point directly to URL exposed by manifests/08_webhook_service.yaml instead of a Service. The Service is not available in the guest cluster.
    • Pass only the guest kubeconfig to the operands (both the webhook and csi-snapshot-controller).
    • Update unit tests to handle two kube clients.

Exit criteria:

  • cluster-csi-snapshot-controller-operator runs in the management cluster in HyperShift
  • csi-snapshot-controller runs in the management cluster in HyperShift
  • It is possible to take & restore volume snapshot in the guest cluster.
  • No regressions in standalone OCP.
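To make the kubeconfig plumbing above more concrete, here is a hedged sketch of how the operator Deployment in the management cluster might mount and reference the guest kubeconfig. The flag name, secret name, and paths are hypothetical, not the agreed interface.

```yaml
# Hypothetical excerpt of the operator Deployment in the management cluster.
# The flag name, secret name, and mount paths are illustrative only; the
# management-cluster kubeconfig still comes from the projected service account.
spec:
  template:
    spec:
      containers:
        - name: csi-snapshot-controller-operator
          args:
            - --guest-kubeconfig=/etc/guest-kubeconfig/kubeconfig   # hypothetical new cmdline option
          volumeMounts:
            - name: guest-kubeconfig
              mountPath: /etc/guest-kubeconfig
      volumes:
        - name: guest-kubeconfig
          secret:
            secretName: guest-cluster-kubeconfig   # hypothetical secret holding the guest kubeconfig
```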

As OpenShift developer I want cluster-csi-snapshot-controller-operator to use existing controllers in library-go, so I don’t need to maintain yet another code that does the same thing as library-go.

  • Check and remove manifests/03_configmap.yaml, it does not seem to be useful.
  • Check and remove manifests/03_service.yaml, it does not seem to be useful (at least now).
  • Use DeploymentController from library-go to sync Deployments.
  • Get rid of common/ package? It does not seem to be useful.
  • Use StaticResourceController for static content, including the snapshot CRDs.

Note: if this refactoring introduces any new conditions, we must make sure that 4.11 snapshot controller clears them to support downgrade! This will need 4.11 BZ + z-stream update!

Similarly, if some conditions become obsolete / not managed by any controller, they must be cleared by 4.12 operator.

Exit criteria:

  • The operator code is smaller.
  • No regressions in standalone OCP.
  • Upgrade/downgrade from/to standalone OCP 4.11 works.

Epic Goal

  • To improve debug-ability of ovn-k in hypershift
  • To verify the stability of ovn-k in hypershift
  • To introduce an EgressIP reachability check that will work in hypershift

Why is this important?

  • ovn-k is supposed to be GA in 4.12. We need to make sure it is stable, that we know its limitations, and that we are able to debug it similarly to a self-hosted cluster.

Acceptance Criteria

  • CI - MUST be running successfully with tests automated

Dependencies (internal and external)

  1. This will need consultation with the people working on HyperShift

Previous Work (Optional):

  1. https://issues.redhat.com/browse/SDN-2589

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

Overview 

Customers do not pay Red Hat more to run HyperShift control planes and supporting infrastructure than Standalone control planes and supporting infrastructure.

Assumption

  • A customer will be able to associate a cluster as “Infrastructure only”
  • E.g. one option: management cluster has role=master, and role=infra nodes only, control planes are packed on role=infra nodes
  • OR the entire cluster is labeled infrastructure, and node roles are ignored.
  • Anything that runs on a master node by default in Standalone that is present in HyperShift MUST be hosted and not run on a customer worker node.

DoD 

Run cluster-storage-operator (CSO) + AWS EBS CSI driver operator + AWS EBS CSI driver control-plane Pods in the management cluster, run the driver DaemonSet in the hosted cluster.

More information here: https://docs.google.com/document/d/1sXCaRt3PE0iFmq7ei0Yb1svqzY9bygR5IprjgioRkjc/edit 

 

As OCP support engineer I want the same guest cluster storage-related objects in output of "hypershift dump cluster --dump-guest-cluster" as in "oc adm must-gather ", so I can debug storage issues easily.

 

must-gather collects: storageclasses, persistentvolumes, volumeattachments, csidrivers, csinodes, volumesnapshotclasses, volumesnapshotcontents

hypershift collects none of this; the relevant code is here: https://github.com/openshift/hypershift/blob/bcfade6676f3c344b48144de9e7a36f9b40d3330/cmd/cluster/core/dump.go#L276

 

Exit criteria:

  • verify that hypershift dump cluster --dump-guest-cluster has storage objects from the guest cluster.

As HyperShift Cluster Instance Admin, I want to run AWS EBS CSI driver operator + control plane of the CSI driver in the management cluster, so the guest cluster runs just my applications.

  • Add a new cmdline option for the guest cluster kubeconfig file location
  • Parse both kubeconfigs:
    • One from projected service account, which leads to the management cluster.
    • Second from the new cmdline option introduced above. This one leads to the guest cluster.
  • Only on HyperShift:
    • When interacting with Kubernetes API, carefully choose the right kubeconfig to watch / create / update objects in the right cluster.
    • Replace namespaces in all Deployments and other objects that are created in the management cluster. They must be created in the same namespace as the operator.
    • Pass only the guest kubeconfig to the operand (control-plane Deployment of the CSI driver).

Exit criteria:

  • Control plane Deployment of AWS EBS CSI driver runs in the management cluster in HyperShift.
  • Storage works in the guest cluster.
  • No regressions in standalone OCP.

As HyperShift Cluster Instance Admin, I want to run cluster-storage-operator (CSO) in the management cluster, so the guest cluster runs just my applications.

  • Add a new cmdline option for the guest cluster kubeconfig file location
  • Parse both kubeconfigs:
    • One from projected service account, which leads to the management cluster.
    • Second from the new cmdline option introduced above. This one leads to the guest cluster.
  • Tag manifests of objects that should not be deployed by CVO in HyperShift
  • Only on HyperShift:
    • When interacting with Kubernetes API, carefully choose the right kubeconfig to watch / create / update objects in the right cluster.
    • Replace namespaces in all Deployments and other objects that are created in the management cluster. They must be created in the same namespace as the operator.
    • Pass only the guest kubeconfig to the operands (AWS EBS CSI driver operator).

Exit criteria:

  • CSO and AWS EBS CSI driver operator runs in the management cluster in HyperShift
  • Storage works in the guest cluster.
  • No regressions in standalone OCP.

oc-mirror is a GA product as of OpenShift 4.11.

The goal of this feature is to address future customer requests for new features or capabilities in oc-mirror.

Epic Goal

  • Mirror to mirror operations and custom mirroring flows required by IBM CloudPak catalog management

Why is this important?

  • IBM needs additional customization around the actual mirroring of images to enable CloudPaks to fully adopt OLM-style operator packaging and catalog management
  • IBM CloudPaks introduce additional compute architectures, increasing the download volume by roughly two-thirds today; we need the ability to effectively filter out non-required image versions of OLM operator catalogs for other customers that only require a single architecture or a subset of the available image architectures
  • IBM CloudPaks regularly run on older OCP versions like 4.8 which require additional work to be able to read the mirrored catalog produced by oc mirror

Scenarios

  1. Customers can use the oc utility and delegate the actual image mirror step to another tool
  2. Customers can mirror between disconnected registries using the oc utility
  3. The oc utility supports filtering manifest lists in the context of multi-arch images according to the sparse manifest list proposal in the distribution spec

Acceptance Criteria

  • Customers can use the oc utility to mirror between two different air-gapped environments
  • Customers can specify the desired compute architectures and oc mirror will create sparse manifest lists in the target registry as a result
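As an illustration of the architecture filtering, here is a hedged ImageSetConfiguration sketch; the field placement follows the oc-mirror ImageSetConfiguration schema as we understand it and should be verified against the shipped version.

```yaml
# Hedged sketch: restrict mirroring to a subset of architectures so oc-mirror
# can produce sparse manifest lists in the target registry.
kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v1alpha2
mirror:
  platform:
    architectures:        # only mirror these architectures (assumed field placement)
      - amd64
    channels:
      - name: stable-4.12
  operators:
    - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.12   # example catalog
```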

Dependencies (internal and external)

Previous Work:

  1. WRKLDS-369
  2. Disconnected Mirroring Improvement Proposal

Related Work:

  1. https://github.com/opencontainers/distribution-spec/pull/310
  2. https://github.com/distribution/distribution/pull/3536
  3. https://docs.google.com/document/d/10ozLoV7sVPLB8msLx4LYamooQDSW-CAnLiNiJ9SER2k/edit?usp=sharing

Pre-Work Objectives

Since some of our requirements from the ACM team will not be available for the 4.12 timeframe, the team should work on anything we can get done in the scope of the console repo so that when the required items are available in 4.13, we can be more nimble in delivering GA content for the Unified Console Epic.

Overall GA Key Objective
Providing our customers with a single simplified User Experience (Hybrid Cloud Console) that is extensible, can run locally or in the cloud, and is capable of everything from managing the fleet to deep diving into a single cluster.
Why do customers want this?

  1. Single interface to accomplish their tasks
  2. Consistent UX and patterns
  3. Easily accessible: One URL, one set of credentials

Why do we want this?

  • Shared code -  improve the velocity of both teams and most importantly ensure consistency of the experience at the code level
  • Pre-built PF4 components
  • Accessibility & i18n
  • Remove barriers for enabling ACM

Phase 2 Goal: Productization of the united Console 

  1. Enable user to quickly change context from fleet view to single cluster view
    1. Add Cluster selector with “All Cluster” Option. “All Cluster” = ACM
    2. Shared SSO across the fleet
    3. Hub OCP Console can connect to remote clusters API
    4. When ACM Installed the user starts from the fleet overview aka “All Clusters”
  2. Share UX between views
    1. ACM Search —> resource list across fleet -> resource details that are consistent with single cluster details view
    2. Add Cluster List to OCP —> Create Cluster

As a developer I would like to disable clusters like *KS that we can't support for multi-cluster (for instance because we can't authenticate). The ManagedCluster resource has a vendor label that we can use to know if the cluster is supported.

cc Ali Mobrem Sho Weimer Jakub Hadvig 

UPDATE 9/20/22: we want an allow-list with OpenShift, ROSA, ARO, ROKS, and OpenShift Dedicated

Acceptance criteria:

  • Investigate if console-operator should pass info about which cluster are supported and unsupported to the frontend
  • Unsupported clusters should not appear in the cluster dropdown
  • Unsupported clusters based off
    • defined vendor label
    • non 4.x ocp clusters
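For reference, ACM-managed clusters typically carry a vendor label on the ManagedCluster resource; below is a hedged example of the labels the console could base the allow-list on. The names and label values shown are illustrative.

```yaml
# Illustrative ManagedCluster with the vendor label the console could check;
# clusters whose vendor is not in the allow-list above would be hidden.
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: prod-cluster-1          # placeholder name
  labels:
    vendor: OpenShift           # value checked against the allow-list
    openshiftVersion: "4.11.5"  # illustrative; non 4.x clusters would also be excluded
spec:
  hubAcceptsClient: true
```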

Feature Overview

RHEL CoreOS should be updated to RHEL 9.2 sources to take advantage of newer features, hardware support, and performance improvements.

 

Requirements

  • RHEL 9.x sources for RHCOS builds starting with OCP 4.13 and RHEL 9.2.

 

Requirement | Notes | isMvp?
CI - MUST be running successfully with test automation | This is a requirement for ALL features. | YES
Release Technical Enablement | Provide necessary release enablement details and documents. | YES

(Optional) Use Cases

  • 9.2 Preview via Layering: no longer necessary assuming we stay the course of going all in on 9.2

Assumptions

  • ...

Customer Considerations

  • ...

Documentation Considerations

Questions to be addressed:

  • What educational or reference material (docs) is required to support this product feature? For users/admins? Other functions (security officers, etc)?
  • Does this feature have doc impact?
  • New Content, Updates to existing content, Release Note, or No Doc Impact
  • If unsure and no Technical Writer is available, please contact Content Strategy.
  • What concepts do customers need to understand to be successful in [action]?
  • How do we expect customers will use the feature? For what purpose(s)?
  • What reference material might a customer want/need to complete [action]?
  • Is there source material that can be used as reference for the Technical Writer in writing the content? If yes, please link if available.
  • What is the doc impact (New Content, Updates to existing content, or Release Note)?

PROBLEM

We would like to improve our signal for RHEL9 readiness by increasing internal engineering engagement and external partner engagement on our community OpenShift offering, OKD.

PROPOSAL

Adding OKD to run on SCOS (CentOS Stream CoreOS) brings the community offering closer to what a partner or an internal engineering team might expect on OCP.

ACCEPTANCE CRITERIA

Image has been switched/included: 

DEPENDENCIES

The SCOS build payload.

RELATED RESOURCES

OKD+SCOS proposal: https://docs.google.com/presentation/d/1_Xa9Z4tSqB7U2No7WA0KXb3lDIngNaQpS504ZLrCmg8/edit#slide=id.p

OKD+SCOS work draft: https://docs.google.com/document/d/1cuWOXhATexNLWGKLjaOcVF4V95JJjP1E3UmQ2kDVzsA/edit

 

Acceptance Criteria

A stable OKD on SCOS is built and available to the community every sprint.

 

This comes up when installing ipi-on-aws on arm64 with the custom payload build at quay.io/aleskandrox/okd-release:4.12.0-0.okd-centos9-full-rebuild-arm64, which uses SCOS as the machine-os-content image.

 

```

[root@ip-10-0-135-176 core]# crictl logs c483c92e118d8
2022-08-11T12:19:39+00:00 [cnibincopy] FATAL ERROR: Unsupported OS ID=scos
```

 

The probable fix has to land on https://github.com/openshift/cluster-network-operator/blob/master/bindata/network/multus/multus.yaml#L41-L53

The details of this Jira Card are restricted (Red Hat Employee and Contractors only)
The details of this Jira Card are restricted (Red Hat Employee and Contractors only)

Feature Overview (aka. Goal Summary)  

The MCO should properly report its state in a way that's consistent and able to be understood by customers, troubleshooters, and maintainers alike. 

Some customer cases have revealed scenarios where the MCO state reporting is misleading and therefore could be unreliable to base decisions and automation on.

In addition to correcting some incorrect states, the MCO will be enhanced for a more granular view of update rollouts across machines.

The MCO should properly report its state in a way that's consistent and able to be understood by customers, troubleshooters, and maintainers alike. 

For this epic, "state" means "what is the MCO doing?" – so the goal here is to try to make sure that it's always known what the MCO is doing. 

This includes: 

  • Conditions
  • Some Logging 
  • Possibly Some Events 

While this probably crosses a little bit into the "status" portion of certain MCO objects, as some state is definitely recorded there, this probably shouldn't turn into a "better status reporting" epic.  I'm interpreting "status" to mean "how is it going" so status is maybe a "detail attached to a state". 

 

Exploration here: https://docs.google.com/document/d/1j6Qea98aVP12kzmPbR_3Y-3-meJQBf0_K6HxZOkzbNk/edit?usp=sharing

 

https://docs.google.com/document/d/17qYml7CETIaDmcEO-6OGQGNO0d7HtfyU7W4OMA6kTeM/edit?usp=sharing

 

The current property description is:

configuration represents the current MachineConfig object for the machine config pool.

But in a 4.12.0-ec.4 cluster, the actual semantics seem to be something closer to "the most recent rendered config that we completely leveled on". We should at least update the godocs to be more specific about the intended semantics. And perhaps consider adjusting the semantics?

Complete Epics

This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were completed when this image was assembled

This epic contains all the Dynamic Plugins related stories for OCP release-4.11 

Epic Goal

  • Track all the stories under a single epic

Acceptance Criteria

  •  

This story only covers API components. We will create a separate story for other utility functions.

Today we are generating documentation for Console's Dynamic Plugin SDK in
frontend/packages/dynamic-plugin-sdk. We are missing ts-doc for a set of hooks and components.

We are generating the markdown from the dynamic-plugin-sdk using

yarn generate-doc

Here is the list of the API that the dynamic-plugin-sdk is exposing:

https://gist.github.com/spadgett/0ddefd7ab575940334429200f4f7219a

Acceptance Criteria:

  • Add missing jsdocs for the API that dynamic-plugin-sdk exposes

Out of Scope:

  • This does not include work for integrating the API docs into the OpenShift docs
  • This does not cover other public utilities, only components.

An epic we can duplicate for each release to ensure we have a place to catch things we ought to be doing regularly but can tend to fall by the wayside.

As a developer, I want to be able to clean up the css markup after making the css / scss changes required for dark mode and remove any old unused css / scss content. 

 

Acceptance criteria:

  • Remove any unused scss / css content after revamping for dark mode

1. Proposed title of this feature request
Basic authentication for Helm Chart repository in helmchartrepositories.helm.openshift.io CRD.

2. What is the nature and description of the request?
As of v4.6.9, the HelmChartRepository CRD only supports client TLS authentication through spec.connectionConfig.tlsClientConfig.

3. Why do you need this? (List the business requirements here)
Basic authentication is widely used by many chart repositories managers (Nexus OSS, Artifactory, etc.)
Helm CLI also supports them with the helm repo add command.
https://helm.sh/docs/helm/helm_repo_add/

4. How would you like to achieve this? (List the functional requirements here)
Probably by extending the CRD:

spec:
  connectionConfig:
    username: username
    password:
      secretName: secret-name

The secret namespace should be openshift-config to align with the tlsClientConfig behavior.
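Putting the proposal together, here is a hedged sketch of a repository CR with basic auth plus the referenced Secret in openshift-config. It follows the field shape proposed above; the delivered API may differ.

```yaml
# Sketch of the proposed shape only; the delivered API may differ.
apiVersion: helm.openshift.io/v1beta1
kind: HelmChartRepository
metadata:
  name: private-charts                        # placeholder name
spec:
  connectionConfig:
    url: https://charts.example.com           # placeholder repository URL
    username: chart-user                      # proposed field
    password:
      secretName: private-charts-credentials  # proposed field; Secret lives in openshift-config
---
apiVersion: v1
kind: Secret
metadata:
  name: private-charts-credentials
  namespace: openshift-config
stringData:
  password: changeme                          # placeholder
```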

5. For each functional requirement listed in question 4, specify how Red Hat and the customer can test to confirm the requirement is successfully implemented.
Trying to pull helm charts from remote private chart repositories that have disabled anonymous access and offer basic authentication.
E.g.: https://github.com/sonatype/docker-nexus

Owner: Architect:

Story (Required)

As an OCP user I would like to be able to install helm charts from repos added to ODC with the basic authentication fields populated

Background (Required)

We need to support helm installs for Repos that have the basic authentication secret name and namespace.

Glossary

Out of scope

Updating the ProjectHelmChartRepository CRD, already done in a different story.
Supporting the HelmChartRepository CR; this feature will be scoped first to project/namespace-scoped repos.

In Scope

<Defines what is included in this story>

Approach(Required)

If the new fields for basic auth are set in the repo CR then use those credentials when making API calls to helm to install/upgrade charts. We will error out if the logged-in user does not have access to the secret referenced by the repo CR. If basic auth fields are not present, we assume it is not an authenticated repo.

Dependencies

None

Edge Case

NA

Acceptance Criteria

I can list, install and update charts on authenticated repos from ODC
Needs Documentation both upstream and downstream
Needs new unit test covering repo auth

INVEST Checklist

Dependencies identified
Blockers noted and expected delivery timelines set
Design is implementable
Acceptance criteria agreed upon
Story estimated

Legend

Unknown
Verified
Unsatisfied

Epic Goal

  • Support manifest lists in image streams and the integrated registry. Clients should be able to pull/push manifest lists from/into the integrated registry. They also should be able to import images via `oc import-image` and then pull them from the internal registry.

Why is this important?

  • Manifest lists are becoming more and more popular. Customers want to mirror manifest lists into the registry and be able to pull them by digest.

Scenarios

  1. Manifest lists can be pushed into the integrated registry
  2. Imported manifests list can be pulled from the integrated registry
  3. Image triggers work with manifest lists

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • Existing functionality shouldn't change its behavior

Dependencies (internal and external)

  1. ...

Previous Work (Optional)

  1. https://github.com/openshift/enhancements/blob/master/enhancements/manifestlist/manifestlist-support.md

Open questions

  1. Can we merge creation of images without having the pruner?

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

ACCEPTANCE CRITERIA

  • The ImageStream object should contain a new flag indicating that it refers to a manifest list
  • openshift-controller-manager uses new openshift/api code to import image streams
  • changing `importMode` of an image stream tag triggers a new import (i.e. updates generation in the tag spec)
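For context, below is an illustrative ImageStream tag using the importMode field referenced above; the stream name and image reference are placeholders.

```yaml
# Illustrative ImageStream tag that preserves the manifest list on import;
# names and the image reference are placeholders.
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: multiarch-example
spec:
  tags:
    - name: latest
      from:
        kind: DockerImage
        name: quay.io/example/multiarch-image:latest
      importPolicy:
        importMode: PreserveOriginal   # keep the manifest list instead of flattening to a single manifest
```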

NOTES

Epic Goal

  • We need the installer to accept an LB type from the user, and then we can set the type of LB in the following object:
    oc get ingress.config.openshift.io/cluster -o yaml
    Then we can fetch info from this object and reconcile the operator to have the NLB changes reflected.

 

This is an API change and we will consider this as a feature request.
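For illustration, a hedged sketch of how the LB type could be expressed at install time; the field name follows the AWS NLB work and should be verified against the shipped install-config schema.

```yaml
# install-config.yaml excerpt (assumed field placement); the chosen type is
# expected to be reflected afterwards in ingress.config.openshift.io/cluster.
platform:
  aws:
    region: us-east-1   # placeholder
    lbType: NLB         # requested load balancer type for the default ingress
```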

Why is this important?

https://issues.redhat.com/browse/NE-799 Please check this for more details

 

Scenarios

https://issues.redhat.com/browse/NE-799 Please check this for more details

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • ...

Dependencies (internal and external)

  1. installer
  2. ingress operator

Previous Work (Optional):

 No

Open questions::

N/A

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

We need tests for the ovirt-csi-driver and the cluster-api-provider-ovirt. These tests help us to

  • minimize bugs,
  • reproduce and fix them faster and
  • pin down current behavior of the driver

Also, having dedicated tests on lower levels with a smaller scope (unit, integration, ...) has the following benefits:

  • fast feedback cycle (local test execution)
  • developer in-code documentation
  • easier onboarding for new contributors
  • lower resource consumption
The details of this Jira Card are restricted (Red Hat Employee and Contractors only)

Description

As a user, I would like to be informed in an intuitive way when quotas have been reached in a namespace

Acceptance Criteria

  1. Show an alert banner on the Topology and add page for this project/namespace when there is a RQ (Resource Quota) / ACRQ (Applied Cluster Resource Quota) issue
    PF guideline: https://www.patternfly.org/v4/components/alert/design-guidelines#using-alerts 
  2. The above alert should have a CTA link to the search page with all RQ, ACRQ and if there is just one show the details page for the same
  3. For the RQ and ACRQ list views, show one more column called Status with details as shown in the project view.

Additional Details:

 

Refer below for more details 

Description

As a user, In the topology view, I would like to be updated intuitively if any of the deployments have reached quota limits

Acceptance Criteria

  1. Show a yellow border around deployments if any of the deployments have reached the quota limit
  2. For deployments, if there are any errors associated with resource limits or quotas, include a warning alert in the side panel.
    1. If we know resource limits are the cause, include link to Edit resource limits
    2. If we know pod count is the cause, include a link to Edit pod count

Additional Details:

 

Refer below for more details 

Goal

Provide a form driven experience to allow cluster admins to manage the perspectives to meet the ACs below.

Problem:

We have heard the following requests from customers and developer advocates:

  • Some admins do not want to provide access to the Developer Perspective from the console
  • Some admins do not want to provide non-priv users access to the Admin Perspective from the console

Acceptance criteria:

  1. Cluster administrator is able to "hide" the admin perspective for non-priv users
  2. Cluster administrator is able to "hide" the developer perspective for all users
  3. Be sure that User Preferences for individual users behave appropriately. If only one perspective is available, the perspective switcher is not needed.

Dependencies (External/Internal):

Design Artifacts:

Exploration:

Note:

Description

As an admin, I want to hide user perspective(s) based on the customization.

Acceptance Criteria

  1. Hide perspective(s) based on the customization
    1. When the admin perspective is disabled -> we hide the admin perspective for all unprivileged users
    2. When the dev perspective is disabled -> we hide the dev perspective for all users
  2. When all the perspectives are hidden from a user or for all users, show the Admin perspective by default

Additional Details:

Description

As an admin, I want to hide the admin perspective for non-privileged users or hide the developer perspective for all users

Based on the https://issues.redhat.com/browse/ODC-6730 enhancement proposal, it is required to extend the console configuration CRD to enable the cluster admins to configure this data in the console resource

Acceptance Criteria

  1. Extend the "customization" spec type definition for the CRD in the openshift/api project

Additional Details:

Previous customization work:

  1. https://issues.redhat.com/browse/ODC-5416
  2. https://issues.redhat.com/browse/ODC-5020
  3. https://issues.redhat.com/browse/ODC-5447

Description

As an admin, I should be able to see a code snippet that shows how to add user perspectives

Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, the cluster admin can add user perspectives

To support the cluster-admin to configure the perspectives correctly, the developer console should provide a code snippet for the customization of yaml resource (Console CRD).

Customize Perspective Enhancement PR: https://github.com/openshift/enhancements/pull/1205

Acceptance Criteria

  1. When the admin opens the Console CRD there is a snippet in the sidebar which provides a default YAML which supports the admin to add user perspectives
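A hedged sketch of what such a default YAML snippet could look like, assuming the perspectives customization lands under spec.customization of the Console operator config; the exact field names follow the enhancement proposal and may differ in the shipped API.

```yaml
# Hedged sketch of the Console operator config with perspective customization;
# field names are assumptions based on the enhancement proposal.
apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  name: cluster
spec:
  customization:
    perspectives:
      - id: dev
        visibility:
          state: Disabled   # hide the Developer perspective for all users
      - id: admin
        visibility:
          state: Enabled
```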

Additional Details:

Previous work:

  1. https://issues.redhat.com/browse/ODC-5080
  2. https://issues.redhat.com/browse/ODC-5449

Description

As an admin, I want to be able to use a form driven experience  to hide user perspective(s)

Acceptance Criteria

  1. Add checkboxes with the options
    1. Hide "Administrator" perspective for non-privileged users
    2.  Hide "Developer" perspective for all users
  2. The console configuration CR should be updated as per the selected option

Additional Details:

Problem:

Customers don't want their users to have access to some/all of the items which are available in the Developer Catalog.  The request is to change access for the cluster, not per user or persona.

Goal:

Provide a form driven experience to allow cluster admins easily disable the Developer Catalog, or one or more of the sub catalogs in the Developer Catalog.

Why is it important?

Multiple customer requests.

Acceptance criteria:

  1. As a cluster admin, I can hide/disable access to the developer catalog for all users across all namespaces.
  2. As a cluster admin, I can hide/disable access to a specific sub-catalog in the developer catalog for all users across all namespaces.
    1. Builder Images
    2. Templates
    3. Helm Charts
    4. Devfiles
    5. Operator Backed

Notes

We need to consider how this will work with subcatalogs which are installed by operators: VMs, Event Sources, Event Catalogs, Managed Services, Cloud based services

Dependencies (External/Internal):

Design Artifacts:

Exploration:

Note:

Description

As an admin, I want to hide/disable access to specific sub-catalogs in the developer catalog or the complete dev catalog for all users across all namespaces.

Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, it is required to extend the console configuration CRD to enable the cluster admins to configure this data in the console resource

Acceptance Criteria

Extend the "customization" spec type definition for the CRD in the openshift/api project

Additional Details:

Previous customization work:

  1. https://issues.redhat.com/browse/ODC-5416
  2. https://issues.redhat.com/browse/ODC-5020
  3. https://issues.redhat.com/browse/ODC-5447

Description

As an admin, I want to hide sub-catalogs in the developer catalog or hide the developer catalog completely based on the customization.

Acceptance Criteria

  1. Hide all links to the sub-catalog(s) from the add page, topology actions, empty states, quick search, and the catalog itself
  2. The sub-catalog should show Not found if the user opens the sub-catalog directly
  3. The feature should not be hidden if a sub-catalog option is disabled

Additional Details:

Description

As a cluster-admin, I should be able to see a code snippet that shows how to enable sub-catalogs or the entire dev catalog.

Based on the https://issues.redhat.com/browse/ODC-6732 enhancement proposal, the cluster admin can add sub-catalog(s)  from the Developer Catalog or the Dev catalog as a whole.

To support the cluster-admin to configure the sub-catalog list correctly, the developer console should provide a code snippet for the customization yaml resource (Console CRD).

Acceptance Criteria

  1. When the admin opens the Console CRD there is a snippet in the sidebar which provides a default YAML, which supports the admin to add sub-catalogs/the whole dev catalog
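A hedged sketch of what the sidebar snippet could look like for the developer catalog, again assuming the customization lands under spec.customization of the Console operator config; the field names under developerCatalog are illustrative and depend on the API extension delivered in the linked story.

```yaml
# Hedged sketch; the developerCatalog fields are assumptions based on the
# enhancement proposal and may differ in the delivered API.
apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  name: cluster
spec:
  customization:
    developerCatalog:
      types:
        state: Disabled    # hide only the listed sub-catalogs
        disabled:
          - HelmChart
          - Devfile
```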

Additional Details:

Previous work:

  1. https://issues.redhat.com/browse/ODC-5080
  2. https://issues.redhat.com/browse/ODC-5449

OCP/Telco Definition of Done
Epic Template descriptions and documentation.

<--- Cut-n-Paste the entire contents of this description into your new Epic --->

Epic Goal

  • Come up with a consistent way to detect node down on OCP and HyperShift. The current mechanism for OCP (probing port 9) does not work for HyperShift, meaning HyperShift node-down detection will take longer (~40 secs). We should aim to have a common mechanism for both. We should also consider alternatives to probing port 9, perhaps BFD or other detection.
  • Get clarification on node-down detection times. Some customers have (apparently) asked for detection on the order of 100ms; the recommendation is to use multiple Egress IPs, so this may not be a hard requirement. Need clarification from PM/Customers.

Why is this important?

Scenarios

  1. ...

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • ...

Dependencies (internal and external)

  1. ...

Previous Work (Optional):

Open questions::

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

Add a SOCKS proxy to cluster-network-operator so EgressIP can use gRPC to reach worker nodes.
 
With the introduction of gRPC as the means for determining the state of a given egress node, HyperShift should
be able to leverage the SOCKS proxy and become able to know the state of each egress node.
 
References relevant to this work:
1281-network-proxy
https://coreos.slack.com/archives/C01C8502FMM/p1658427627751939
https://github.com/openshift/hypershift/pull/1131/commits/28546dc587dc028dc8bded715847346ff99d65ea

Incomplete Epics

This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were not completed when this image was assembled

Epic Goal

  • Update OpenShift components that are owned by the Builds + Jenkins Team to use Kubernetes 1.25

Why is this important?

  • Our components need to be updated to ensure that they are using the latest bug/CVE fixes, features, and that they are API compatible with other OpenShift components.

Acceptance Criteria

  • Existing CI/CD tests must be passing

This is epic tracks "business as usual" requirements / enhancements / bug fixing of Insights Operator.

Today the links point at a rule-scoped page, but that page lacks information about recommended resolution.  You can click through by cluster ID to your specific cluster and get that recommendation advice, but it would be more convenient and less confusing for customers if we linked directly to the cluster-scoped recommendation page.

We can implement by updating the template here to be:

fmt.Sprintf("https://console.redhat.com/openshift/insights/advisor/clusters/%s?first=%s%%7C%s", clusterID, ruleIDStr, rec.ErrorKey)

or something like that.

 

unknowns

request is clear, solution/implementation to be further clarified

This epic contains all the Dynamic Plugins related stories for OCP release-4.12

Epic Goal

  • Track all the stories under a single epic

Acceptance Criteria

Based on API review CONSOLE-3145, we have decided to deprecate the following APIs:

  • useAccessReviewAllowed (use useAccessReview instead)
  • useSafetyFirst

cc Andrew Ballantyne Bryan Florkiewicz 

Currently our `api.md` does not generate docs with "tags" (aka `@deprecated`) – we'll need to add that functionality to the `generate-doc.ts` script. See the code that works for `console-extensions.md`

We should have a global notification, or the `Console plugins` page (e.g., k8s/cluster/operator.openshift.io~v1~Console/cluster/console-plugins) should alert users, when the console operator `spec.managementState` is `Unmanaged`, since changes to `enabled` for plugins will have no effect.

Currently the ConsolePlugins API version is v1alpha1. Since we are going GA with dynamic plugins we should be creating a v1 version.

This would require updates in following repositories:

  1. openshift/api (add the v1 version and generate a new CRD)
  2. openshift/client-go (pick up the changes in the openshift/api repo and generate clients & informers for the new v1 version)
  3. openshift/console-operator repository will use both the new v1 version and v1alpha1 in code and the manifests folder.

AC:

  • both v1 and v1alpha1 ConsolePlugins should be passed to the console-config.yaml when the plugins are enabled and present on the cluster.

 

NOTE: This story does not include the conversion webhook change which will be created as a follow on story

During the development of https://issues.redhat.com/browse/CONSOLE-3062, it was determined additional information is needed in order to assist a user when troubleshooting a Failed plugin (see https://github.com/openshift/console/pull/11664#issuecomment-1159024959). As it stands today, there is no data available to the console to relay to the user regarding why the plugin Failed. Presumably, a message should be added to NotLoadedDynamicPlugin to address this gap.

 

AC: Add `message` property to NotLoadedDynamicPluginInfo type.

Move `frontend/public/components/nav` to `packages/console-app/src/components/nav` and address any issues resulting from the move.

There will be some expected lint errors relating to cyclical imports. These will require some refactoring to address.

`@openshift-console/plugin-shared` (NPM) is a package that will contain shared components that can be upversioned separately by the Plugins so they can keep core compatibility low but upversion and support more shared components as we need them.

This isn't documented today. We need to do that.

Acceptance Criteria

  • Add a note in the "SDK packages" section of the README about the existence of this package and its purpose
    • The purpose: a static utility delivery library that is intentionally not tied to OpenShift Console versions and is compatible with multiple versions of OpenShift Console

Following https://coreos.slack.com/archives/C011BL0FEKZ/p1650640804532309, it would be useful for us (network observability team) to have access to ResourceIcon in dynamic-plugin-sdk.

Currently ResourceLink is exported but not ResourceIcon

 

AC:

  • Export the ResourceIcon component from public to dynamic-plugin-sdk
  • Add the component to the dynamic-demo-plugin
  • Add a CI test to check for the ResourceIcon component

 

To align with https://github.com/openshift/dynamic-plugin-sdk, plugin metadata field dependencies as well as the @console/pluginAPI entry contained within should be made optional.

If a plugin doesn't declare the @console/pluginAPI dependency, the Console release version check should be skipped for that plugin.

when defining two proxy endpoints,

apiVersion: console.openshift.io/v1alpha1
kind: ConsolePlugin
metadata:
  ...
  name: forklift-console-plugin
spec:
  displayName: Console Plugin Template
  proxy:
    - alias: forklift-inventory
      authorize: true
      service:
        name: forklift-inventory
        namespace: konveyor-forklift
        port: 8443
      type: Service
    - alias: forklift-must-gather-api
      authorize: true
      service:
        name: forklift-must-gather-api
        namespace: konveyor-forklift
        port: 8443
      type: Service
  service:
    basePath: /

I get two proxy endpoints:
/api/proxy/plugin/forklift-console-plugin/forklift-inventory
and
/api/proxy/plugin/forklift-console-plugin/forklift-must-gather-api

but both proxy to the `forklift-must-gather-api` service

e.g.
curl to:
[server url]/api/proxy/plugin/forklift-console-plugin/forklift-inventory
will point to the `forklift-must-gather-api` service, instead of the `forklift-inventory` service

The extension `console.dashboards/overview/detail/item` doesn't constrain the content to fit the card.

The details-card has an expectation that a <dd> item will be the last item (for spacing between items). Our static details-card items use a component called 'OverviewDetailItem'. This isn't enforced in the extension and can cause undesired padding issues if they just do whatever they want.

I feel our approach here should be making the extension take the props of 'OverviewDetailItem' where 'children' is the new 'component'.

Acceptance Criteria:

  • Deprecate the old extension (in docs, with date/stamp)
  • Make a new extension that applies a stricter type
  • Include this new extension next to the old one (with the error boundary around it)

We neither use nor support static plugin nav extensions anymore so we should remove the API in the static plugin SDK and get rid of related cruft in our current nav components.

 

AC: Remove static plugin nav extensions code. Check the navigation code for any references to the old API.

The console has good error boundary components that are useful for dynamic plugin.
Exposing them will enable the plugins to get the same look and feel of handling react errors as console
The minimum requirement right now is to expose the ErrorBoundaryFallbackPage component from
https://github.com/openshift/console/blob/master/frontend/packages/console-shared/src/components/error/fallbacks/ErrorBoundaryFallbackPage.tsx

This epic contains all the OLM related stories for OCP release-4.12

Epic Goal

  • Track all the stories under a single epic

This enhancement introduces support for provisioning and upgrading heterogeneous architecture clusters in phases.

 

We need to scan through the compute nodes and build a set of supported architectures from those. Each node on the cluster has a label for architecture, e.g. kubernetes.io/arch=arm64, kubernetes.io/arch=amd64, etc. Based on the set of supported architectures, the console will need to surface only those operators in the Operator Hub which are supported on our nodes.

 

AC: 

  1. Implement logic in the console-operator that will scan through all the nodes and build a set of all the architecture types that the cluster nodes run on and pass it to the console-config.yaml
  2. Add unit and e2e test cases in the console-operator repository.

 

@jpoulin is good to ask about heterogeneous clusters.

This enhancement introduces support for provisioning and upgrading heterogeneous architecture clusters in phases.

 

We need to scan through the compute nodes and build a set of supported architectures from those. Each node on the cluster has a label for architecture, e.g. `kubernetes.io/arch=arm64`, `kubernetes.io/arch=amd64`, etc. Based on the set of supported architectures, the console will need to surface only those operators in the Operator Hub which are supported on our nodes. Each operator's PackageManifest contains labels that indicate the operator's supported architectures, e.g. `operatorframework.io/arch.s390x: supported`. An operator can be supported on multiple architectures.

AC:

  1. Implement logic in the console's backend to read the set of architecture types from console-config.yaml and set it as a SERVER_FLAG.nodeArchitectures (Change similar to https://github.com/openshift/console/commit/39aabe171a2e89ed3757ac2146d252d087fdfd33)
  2. In Operator hub render only operators that are support on any given node, based on the SERVER_FLAG.nodeArchitectures field implemented in CONSOLE-3242.

 

OS and arch filtering: https://github.com/openshift/console/blob/2ad4e17d76acbe72171407fc1c66ca4596c8aac4/frontend/packages/operator-lifecycle-manager/src/components/operator-hub/operator-hub-items.tsx#L49-L86

 

@jpoulin is good to ask about heterogeneous clusters.

Epic Goal

  • Enable OpenShift IPI Installer to deploy OCP to a shared VPC in GCP.
  • The host project is where the VPC and subnets are defined. Those networks are shared to one or more service projects.
  • Objects created by the installer are created in the service project where possible. Firewall rules may be the only exception.
  • Documentation outlines the needed minimal IAM for both the host and service project.

Why is this important?

  • Shared VPCs are a feature of GCP to enable granular separation of duties for organizations that centrally manage networking but delegate other functions and separation of billing. This is used more often in larger organizations where separate teams manage subsets of the cloud infrastructure. Enterprises that use this model would also like to create IPI clusters so that they can leverage the features of IPI. Currently organizations that use Shared VPCs must use UPI and implement the features of IPI themselves. This is repetitive engineering of little value to the customer and an increased risk of drift from upstream IPI over time. As new features are built into IPI, organizations must become aware of those changes and implement them themselves instead of getting them "for free" during upgrades.

Scenarios

  1. Deploy cluster(s) into service project(s) on network(s) shared from a host project.

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • ...

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

User Story:

As a user, I want to be able to:

  • skip creating service accounts in Terraform when using passthrough credentialsMode.
  • pass the installer service account to Terraform to be used as the service account for instances when using passthrough credentialsMode.

so that I can achieve

  • creating an IPI cluster using Shared VPC networks using a pre-created service account with the necessary permissions in the Host Project.

Acceptance Criteria:

Description of criteria:

  • Upstream documentation
  • Point 1
  • Point 2
  • Point 3

(optional) Out of Scope:

Detail about what is specifically not being delivered in the story

Engineering Details:

This is a follow up Epic to https://issues.redhat.com/browse/MCO-144, which aimed to get in-place upgrades for Hypershift. This epic aims to capture additional work to focus on using CoreOS/OCP layering into Hypershift, which has benefits such as:

 

 - removing or reducing the need for ignition

 - maintaining feature parity between self-driving and managed OCP models

 - adding additional functionality such as hotfixes

Right now in https://github.com/openshift/hypershift/pull/1258 you can only perform one upgrade at a time. Multiple upgrades will break due to controller logic.

 

Properly create logic to handle manifest creation/updates and deletion, so the logic is more bulletproof

Currently not implemented, and will require the MCD hypershift mode to be adjusted to handle disruptionless upgrades like regular MCD

Changes made in METAL-1 open up opportunities to improve our handling of images by cleaning up redundant code that generates extra work for the user and extra load for the cluster.

We only need to run the image cache DaemonSet if there is a QCOW URL to be mirrored (effectively this means a cluster installed with 4.9 or earlier). We can stop deploying it for new clusters installed with 4.10 or later.

Currently, the image-customization-controller relies on the image cache running on every master to provide the shared hostpath volume containing the ISO and initramfs. The first step is to replace this with a regular volume and an init container in the i-c-c pod that extracts the images from machine-os-images. We can use the copy-metal -image-build flag (instead of -all used in the shared volume) to provide only the required images.

Once i-c-c has its own volume, we can switch the image extraction in the metal3 Pod's init container to use the -pxe flag instead of -all.

The machine-os-images init container for the image cache (not the metal3 Pod) can be removed. The whole image cache deployment is now optional and need only be started if provisioningOSDownloadURL is set (and in fact should be deleted if it is not).

We plan to build Ironic Container Images using RHEL9 as base image in OCP 4.12

This is required because the ironic components have abandoned support for CentOS Stream 8 and Python 3.6/3.7 upstream during the most recent development cycle that will produce the stable Zed release, in favor of CentOS Stream 9 and Python 3.8/3.9

More info on RHEL8 to RHEL9 transition in OCP can be found at https://docs.google.com/document/d/1N8KyDY7KmgUYA9EOtDDQolebz0qi3nhT20IOn4D-xS4

Epic Goal

  • To improve the reliability of disk cleaning before installation and to provide the user with sufficient warning regarding the consequences of the cleaning

Why is this important?

  • Insufficient cleaning can lead to installation failure
  • Insufficient warning can lead to complaints of unexpected data loss

Scenarios

  1.  

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • ...

Dependencies (internal and external)

  1. ...

Previous Work (Optional):

Open questions::

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

Description of the problem:
When running assisted-installer on a machine where there is more than one volume group per physical volume, only the first volume group will be cleaned up. This leads to problems later and will lead to errors such as

Failed - failed executing nsenter [--target 1 --cgroup --mount --ipc --pid -- pvremove /dev/sda -y -ff], Error exit status 5, LastOutput "Can't open /dev/sda exclusively. Mounted filesystem? 

How reproducible:

Set up a VM with more than one volume group per physical volume. As an example, look at the following sample from a customer cluster.

List block devices
/usr/bin/lsblk -o NAME,MAJ:MIN,SIZE,TYPE,FSTYPE,KNAME,MODEL,UUID,WWN,HCTL,VENDOR,STATE,TRAN,PKNAME
NAME              MAJ:MIN   SIZE TYPE FSTYPE      KNAME MODEL            UUID                                   WWN                HCTL       VENDOR   STATE   TRAN PKNAME
loop0               7:0   125.9G loop xfs         loop0                  c080b47b-2291-495c-8cc0-2009ebc39839                                                       
loop1               7:1   885.5M loop squashfs    loop1                                                                                                             
sda                 8:0   894.3G disk             sda   INTEL SSDSC2KG96                                        0x55cd2e415235b2db 1:0:0:0    ATA      running sas  
|-sda1              8:1     250M part             sda1                                                          0x55cd2e415235b2db                                  sda
|-sda2              8:2     750M part ext2        sda2                   3aa73c72-e342-4a07-908c-a8a49767469d   0x55cd2e415235b2db                                  sda
|-sda3              8:3      49G part xfs         sda3                   ffc3ccfe-f150-4361-8ae5-f87b17c13ac2   0x55cd2e415235b2db                                  sda
|-sda4              8:4   394.2G part LVM2_member sda4                   Ua3HOc-Olm4-1rma-q0Ug-PtzI-ZOWg-RJ63uY 0x55cd2e415235b2db                                  sda
`-sda5              8:5     450G part LVM2_member sda5                   W8JqrD-ZvaC-uNK9-Y03D-uarc-Tl4O-wkDdhS 0x55cd2e415235b2db                                  sda
  `-nova-instance 253:0     3.1T lvm  ext4        dm-0                   d15e2de6-2b97-4241-9451-639f7b14594e                                          running      sda5
sdb                 8:16  894.3G disk             sdb   INTEL SSDSC2KG96                                        0x55cd2e415235b31b 1:0:1:0    ATA      running sas  
`-sdb1              8:17  894.3G part LVM2_member sdb1                   6ETObl-EzTd-jLGw-zVNc-lJ5O-QxgH-5wLAqD 0x55cd2e415235b31b                                  sdb
  `-nova-instance 253:0     3.1T lvm  ext4        dm-0                   d15e2de6-2b97-4241-9451-639f7b14594e                                          running      sdb1
sdc                 8:32  894.3G disk             sdc   INTEL SSDSC2KG96                                        0x55cd2e415235b652 1:0:2:0    ATA      running sas  
`-sdc1              8:33  894.3G part LVM2_member sdc1                   pBuktx-XlCg-6Mxs-lddC-qogB-ahXa-Nd9y2p 0x55cd2e415235b652                                  sdc
  `-nova-instance 253:0     3.1T lvm  ext4        dm-0                   d15e2de6-2b97-4241-9451-639f7b14594e                                          running      sdc1
sdd                 8:48  894.3G disk             sdd   INTEL SSDSC2KG96                                        0x55cd2e41521679b7 1:0:3:0    ATA      running sas  
`-sdd1              8:49  894.3G part LVM2_member sdd1                   exVSwU-Pe07-XJ6r-Sfxe-CQcK-tu28-Hxdnqo 0x55cd2e41521679b7                                  sdd
  `-nova-instance 253:0     3.1T lvm  ext4        dm-0                   d15e2de6-2b97-4241-9451-639f7b14594e                                          running      sdd1
sr0                11:0     989M rom  iso9660     sr0   Virtual CDROM0   2022-06-17-18-18-33-00                                    0:0:0:0    AMI      running usb  

Now run the assisted installer and try to install an SNO node on this machine; the installation will fail with a message indicating that it could not exclusively access /dev/sda.

Actual results:

 The installation will fail with a message that indicates that it could not exclusively access /dev/sda

Expected results:

The installation should proceed and the cluster should start to install.

Suspected Cases
https://issues.redhat.com/browse/AITRIAGE-3809
https://issues.redhat.com/browse/AITRIAGE-3802
https://issues.redhat.com/browse/AITRIAGE-3810

Description of the problem:

Cluster installation fails if the installation disk has LVM on RAID:

Host: test-infra-cluster-3cc862c9-master-0, reached installation stage Failed: failed executing nsenter [--target 1 --cgroup --mount --ipc --pid -- mdadm --stop /dev/md0], Error exit status 1, LastOutput "mdadm: Cannot get exclusive access to /dev/md0:Perhaps a running process, mounted filesystem or active volume group?" 

How reproducible:

100%

Steps to reproduce:

1. Install a cluster while master nodes has disk with LVM on RAID (reproduces using test: https://gitlab.cee.redhat.com/ocp-edge-qe/kni-assisted-installer-auto/-/blob/master/api_tests/test_disk_cleanup.py#L97)

Actual results:

Installation failed

Expected results:

Installation success

Epic Goal

  • Increase the success rate of our CI jobs
  • Improve debuggability / visibility of tests

Why is this important?

  • Failed presubmit jobs (required or optional) can prevent an already tested and approved PR from getting in
  • Failed periodic jobs interfere with our visibility into the stability of features

Description of problem:

check_pkt_length cannot be offloaded without
1) sFlow offload patches in Open vSwitch
2) Hardware driver support.

Since 1) will not be done anytime soon, we need a workaround for the check_pkt_length issue.

Version-Release number of selected component (if applicable):

4.11/4.12

How reproducible:

Always

Steps to Reproduce:

1. Any flow that has check_pkt_len()
  5-b: Pod -> NodePort Service traffic (Pod Backend - Different Node)
  6-b: Pod -> NodePort Service traffic (Host Backend - Different Node)
  4-b: Pod -> Cluster IP Service traffic (Host Backend - Different Node)
  10-b: Host Pod -> Cluster IP Service traffic (Host Backend - Different Node)
  11-b: Host Pod -> NodePort Service traffic (Pod Backend - Different Node)
  12-b: Host Pod -> NodePort Service traffic (Host Backend - Different Node)   

Actual results:

Poor performance due to upcalls when check_pkt_len() is not supported.

Expected results:

Good performance.

Additional info:

https://docs.google.com/spreadsheets/d/1LHY-Af-2kQHVwtW4aVdHnmwZLTiatiyf-ySffC8O5NM/edit#gid=670206692

OCP/Telco Definition of Done
Epic Template descriptions and documentation.

<--- Cut-n-Paste the entire contents of this description into your new Epic --->

Epic Goal

  • Run OpenShift builds that do not execute as the "root" user on the host node.

Why is this important?

  • OpenShift builds require an elevated set of capabilities to build a container image
  • Builds currently run as root to maintain adequate performance
  • Container workloads should run as non-root from the host's perspective. Containers running as root are a known security risk.
  • Builds currently run as root and require a privileged container. See BUILD-225 for removing the privileged container requirement.

Scenarios

  1. Run BuildConfigs in a multi-tenant environment
  2. Run BuildConfigs in a heightened security environment/deployment

Acceptance Criteria

  • Developers can opt into running builds in a cri-o user namespace by providing an environment variable with a specific value.
  • When the correct environment variable is provided, builds run in a cri-o user namespace, and the build pod does not require the "privileged: true" security context.
  • User namespace builds can pass basic test scenarios for the Docker and Source strategy build.
  • Steps to run unprivileged builds are documented.

Dependencies (internal and external)

  1. Buildah supports running inside a non-privileged container
  2. CRI-O allows workloads to opt into running containers in user namespaces.

Previous Work (Optional):

  1. BUILD-225 - remove privileged requirement for builds.

Open questions::

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

User Story

As a developer building container images on OpenShift
I want to specify that my build should run without elevated privileges
So that builds do not run as root from the host's perspective with elevated privileges

Acceptance Criteria

  • Developers can provide an environment variable to indicate the build should not use privileged containers
  • When the correct env var + value is specified, builds run in a user namespace (non-root on the host)

QE Impact

No QE required for Dev Preview. OpenShift regression testing will verify that existing behavior is not impacted.

Docs Impact

We will need to document how to enable this feature, with sufficient warnings regarding Dev Preview.

PX Impact

This likely warrants an OpenShift blog post.

Notes

OCP/Telco Definition of Done
Epic Template descriptions and documentation.

<--- Cut-n-Paste the entire contents of this description into your new Epic --->

Epic Goal

  • ...

Why is this important?

Scenarios

  1. ...

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • ...

Dependencies (internal and external)

  1. ...

Previous Work (Optional):

Open questions::

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

We have been running into a number of problems with configure-ovs and nodeip-configuration selecting different interfaces in OVNK deployments. This causes connectivity issues, so we need some way to ensure that everything uses the same interface/IP.

Currently configure-ovs runs before nodeip-configuration, but since nodeip-configuration is the source of truth for IP selection regardless of CNI plugin, I think we need to look at swapping that order. That way configure-ovs could look at what nodeip-configuration chose and not have to implement its own interface selection logic.

I'm targeting this at 4.12 because even though there's probably still time to get it in for 4.11, changing the order of boot services is always a little risky and I'd prefer to do it earlier in the cycle so we have time to tease out any issues that arise. We may need to consider backporting the change though since this has been an issue at least back to 4.10.
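For reference, the swap described above amounts to a systemd ordering change. A minimal sketch of a drop-in that expresses the intended ordering (the unit names ovs-configuration.service and nodeip-configuration.service are assumed here; the real change would be delivered through the MCO templates rather than a hand-written drop-in):

mkdir -p /etc/systemd/system/ovs-configuration.service.d
cat > /etc/systemd/system/ovs-configuration.service.d/10-after-nodeip.conf <<'EOF'
[Unit]
# Run configure-ovs only after nodeip-configuration has chosen the node IP/interface.
After=nodeip-configuration.service
Wants=nodeip-configuration.service
EOF
systemctl daemon-reload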

Epic Goal

  • Facilitate the transition of OLM and its content to PSA enforcement of the `restricted` security profile
  • Use the label sync'er to enforce the required security profile
  • Current content should work out-of-the-box as is
  • Upgrades should not be blocked

Why is this important?

  • PSA helps secure the cluster by enforcing certain security restrictions that the pod must meet to be scheduled
  • 4.12 will enforce the `restricted` profile, which will affect the deployment of operators in `openshift-*` namespaces 

Scenarios

  1. Admin installs an operator in an `openshift-*` namespace that is not managed by the label sync'er -> the label should be applied
  2. Admin installs an operator in an `openshift-*` namespace that has a label asking the label sync'er not to reconcile it -> nothing changes

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • Done only downstream
  • Transition documentation written and reviewed

Dependencies (internal and external)

  1. label syncher (still searching for the link)

Open questions::

  1. Is this only for openshift-* namespaces?

Resources

Stakeholders

  • Daniel S...?

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

As an admin, I would like openshift-* namespaces with an operator to be labeled with security.openshift.io/scc.podSecurityLabelSync=true to ensure the continual functioning of operators without manual intervention. The label should only be applied to openshift-* namespaces with an operator (the presence of a ClusterServiceVersion resource) IF the label is not already present. This automation will help smooth functioning of the cluster and avoid frivolous operational events.

Context: As part of the PSA migration period, OpenShift will ship with the "label sync'er" - a controller that will automatically adjust PSA security profiles in response to the workloads present in the namespace. We can assume that not all operators (produced by Red Hat, the community or ISVs) will have successfully migrated their deployments in response to upstream PSA changes. The label sync'er will sync, by default, any namespace not prefixed with "openshift-"; for "openshift-" namespaces, an explicit label (security.openshift.io/scc.podSecurityLabelSync=true) is required to opt in to syncing.

A/C:
 - The OLM operator has been modified (downstream only) to label any unlabelled "openshift-" namespace in which a CSV has been created
 - If a labeled namespace containing at least one non-copied CSV becomes unlabelled, it should be relabelled
 - The implementation should be done in a way that eliminates or minimizes subsequent downstream sync work (it is ok to make slight architectural changes to the OLM operator upstream to enable this)
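For illustration, the manual equivalent of the labeling behavior described above: label any "openshift-" namespace containing a non-copied CSV if the label is not already present (a sketch; it assumes copied CSVs carry the olm.copiedFrom label, and the OLM operator is expected to do this automatically):

for ns in $(oc get ns -o jsonpath='{.items[*].metadata.name}' | tr ' ' '\n' | grep '^openshift-'); do
  # Count CSVs in the namespace that are not copies from another namespace.
  csvs=$(oc get csv -n "$ns" -l '!olm.copiedFrom' -o name 2>/dev/null | wc -l)
  if [ "$csvs" -gt 0 ]; then
    # Plain 'oc label' (without --overwrite) leaves an existing label untouched.
    oc label ns "$ns" security.openshift.io/scc.podSecurityLabelSync=true 2>/dev/null || true
  fi
done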

Goal
Provide an indication that advanced features are used

Problem

Today, customers and RH don't have the information on the actual usage of advanced features.

Why is this important?

  1. Better focus upsell efforts
  2. Compliance information for customers who are not aware that they are not using the right subscription

 

Prioritized Scenarios

In Scope
1. Add a boolean variable in our telemetry to mark if the customer is using advanced features (PV encryption, encryption with KMS, external mode). 

Not in Scope

Integrate with subscription watch - will be done by the subscription watch team with our help.

Customers

All

Customer Facing Story
As a compliance manager, I should be able to easily see if all my clusters are using the right amount of subscriptions

What does success look like?

A clear indication in subscription watch for ODF usage (either essential or advanced). 

1. Proposed title of this feature request

  • Request to add a bool variable into telemetry which indicates the usage of any of the advanced features, such as PV encryption, KMS encryption or external mode.

2. What is the nature and description of the request?

  • Today, customers and RH don't have information on the actual usage of advanced features. This feature will give RH a better indication of how many customers use the advanced features and help focus upsell efforts.

3. Why does the customer need this? (List the business requirements here)

  • As a compliance manager, I should be able to easily see if all my clusters are using the right amount of subscriptions.

4. List any affected packages or components.

  • Telemetry

_____________________

Link to main epic: https://issues.redhat.com/browse/RHSTOR-3173

 

This epic tracks network tooling improvements for 4.12

A new framework and process should be developed to make sharing network tools with devs, support and customers convenient. We are going to add some tools for OVN troubleshooting before OVN-Kubernetes becomes the default, some tools that we got from customer cases, and some more to help analyze and debug collected logs based on the stable must-gather/sosreport format we now get thanks to the 4.11 Epic.

Our estimation for this Epic is 1 engineer * 2 Sprints

WHY:
This epic is important to help reduce the time it takes our customers and our team to understand an issue within the cluster.
A focus of this epic is to develop tools that quickly allow debugging of a problematic cluster. This is crucial for helping the engineering team scale. We want to provide a tool to our customers that lowers the cognitive burden of getting to the root cause of an issue.

 

Alert if any of the OVN controllers has been disconnected from the southbound database for a period of time, using the metric ovn_controller_southbound_database_connected.

The metric updates every 2 minutes so please be mindful of this when creating the alert.

If the controller is disconnected for 10 minutes, fire an alert.
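A minimal sketch of what such a rule could look like (names, namespace and severity are illustrative, and it assumes the metric reports 1 when connected; the real rule ships with CNO):

cat <<'EOF' | oc apply -f -
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: ovn-controller-sbdb-connectivity   # illustrative name
  namespace: openshift-ovn-kubernetes
spec:
  groups:
  - name: ovn-controller
    rules:
    - alert: OVNControllerSouthboundDatabaseDisconnected   # illustrative name
      # The metric only updates every ~2 minutes, so 10m covers several updates.
      expr: ovn_controller_southbound_database_connected == 0
      for: 10m
      labels:
        severity: warning
      annotations:
        summary: ovn-controller has been disconnected from the southbound database for 10 minutes.
EOF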

DoD: Merged to CNO and tested by QE

This Epic is here to track the rebase we need to do when kube 1.25 is GA https://www.kubernetes.dev/resources/release/

Keeping this in mind can help us plan our time better. At the time of writing (ATTOW), GA is planned for August 23.

https://docs.google.com/document/d/1h1XsEt1Iug-W9JRheQas7YRsUJ_NQ8ghEMVmOZ4X-0s/edit --> this is the link for rebase help

Other Complete

This section includes Jira cards that are not linked to either an Epic or a Feature. These tickets were completed when this image was assembled

With the CSISnapshot capability disabled, all CSI driver operators are Degraded. For example, the AWS EBS CSI driver operator during installation:

18:12:16.895: Some cluster operators are not ready: storage (Degraded=True AWSEBSCSIDriverOperatorCR_AWSEBSDriverStaticResourcesController_SyncError: AWSEBSCSIDriverOperatorCRDegraded: AWSEBSDriverStaticResourcesControllerDegraded: "volumesnapshotclass.yaml" (string): the server could not find the requested resource
AWSEBSCSIDriverOperatorCRDegraded: AWSEBSDriverStaticResourcesControllerDegraded: )
Ginkgo exit error 1: exit with code 1}

Version-Release number of selected component (if applicable):
4.12.nightly

The reason is that cluster-csi-snapshot-controller-operator does not create VolumeSnapshotClass CRD, which AWS EBS CSI driver operator expects to exist.

CSI driver operators must skip VolumeSnapshotClass creation if the CRD does not exist.
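A quick way to confirm the situation on an affected cluster (a sketch: check whether the CSISnapshot capability is enabled and whether the CRD exists):

# List the enabled capabilities; CSISnapshot will be absent on affected clusters.
oc get clusterversion version -o jsonpath='{.status.capabilities.enabledCapabilities}{"\n"}'
# The CRD the CSI driver operators currently assume to exist:
oc get crd volumesnapshotclasses.snapshot.storage.k8s.io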

Description of problem:

openshift-apiserver, openshift-oauth-apiserver and kube-apiserver pods cannot validate the certificate when trying to reach etcd, reporting certificate validation errors:

}. Err: connection error: desc = "transport: authentication handshake failed: x509: certificate is valid for ::1, 127.0.0.1, ::1, fd69::2, not 2620:52:0:198::10"
W1018 11:36:43.523673      15 logging.go:59] [core] [Channel #186 SubChannel #187] grpc: addrConn.createTransport failed to connect to {
  "Addr": "[2620:52:0:198::10]:2379",
  "ServerName": "2620:52:0:198::10",
  "Attributes": null,
  "BalancerAttributes": null,
  "Type": 0,
  "Metadata": null
}. Err: connection error: desc = "transport: authentication handshake failed: x509: certificate is valid for ::1, 127.0.0.1, ::1, fd69::2, not 2620:52:0:198::10"
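A sketch of how the mismatch can be confirmed from a host that can reach etcd, by inspecting the SANs of the certificate actually presented on port 2379 (the address is the one from the error above; openssl prints the server certificate even though the mutual-TLS handshake itself will not complete without a client certificate):

echo | openssl s_client -connect '[2620:52:0:198::10]:2379' 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'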

Version-Release number of selected component (if applicable):

4.12.0-0.nightly-2022-10-18-041406

How reproducible:

100%

Steps to Reproduce:

1. Deploy SNO with single stack IPv6 via ZTP procedure

Actual results:

Deployment times out and some of the operators aren't deployed successfully.

NAME                                       VERSION                              AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
authentication                             4.12.0-0.nightly-2022-10-18-041406   False       False         True       124m    APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node....
baremetal                                  4.12.0-0.nightly-2022-10-18-041406   True        False         False      112m    
cloud-controller-manager                   4.12.0-0.nightly-2022-10-18-041406   True        False         False      111m    
cloud-credential                           4.12.0-0.nightly-2022-10-18-041406   True        False         False      115m    
cluster-autoscaler                         4.12.0-0.nightly-2022-10-18-041406   True        False         False      111m    
config-operator                            4.12.0-0.nightly-2022-10-18-041406   True        False         False      124m    
console                                                                                                                      
control-plane-machine-set                  4.12.0-0.nightly-2022-10-18-041406   True        False         False      111m    
csi-snapshot-controller                    4.12.0-0.nightly-2022-10-18-041406   True        False         False      111m    
dns                                        4.12.0-0.nightly-2022-10-18-041406   True        False         False      111m    
etcd                                       4.12.0-0.nightly-2022-10-18-041406   True        False         True       121m    ClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries
image-registry                             4.12.0-0.nightly-2022-10-18-041406   False       True          True       104m    Available: The registry is removed...
ingress                                    4.12.0-0.nightly-2022-10-18-041406   True        True          True       111m    The "default" ingress controller reports Degraded=True: DegradedConditions: One or more other status conditions indicate a degraded state: DeploymentReplicasAllAvailable=False (DeploymentReplicasNotAvailable: 0/1 of replicas are available)
insights                                   4.12.0-0.nightly-2022-10-18-041406   True        False         False      118s    
kube-apiserver                             4.12.0-0.nightly-2022-10-18-041406   True        False         False      102m    
kube-controller-manager                    4.12.0-0.nightly-2022-10-18-041406   True        False         True       107m    GarbageCollectorDegraded: error fetching rules: Get "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules": dial tcp [fd02::3c5f]:9091: connect: connection refused
kube-scheduler                             4.12.0-0.nightly-2022-10-18-041406   True        False         False      107m    
kube-storage-version-migrator              4.12.0-0.nightly-2022-10-18-041406   True        False         False      117m    
machine-api                                4.12.0-0.nightly-2022-10-18-041406   True        False         False      111m    
machine-approver                           4.12.0-0.nightly-2022-10-18-041406   True        False         False      111m    
machine-config                             4.12.0-0.nightly-2022-10-18-041406   True        False         False      115m    
marketplace                                4.12.0-0.nightly-2022-10-18-041406   True        False         False      116m    
monitoring                                                                      False       True          True       98m     deleting Thanos Ruler Route failed: Timeout: request did not complete within requested timeout - context deadline exceeded, deleting UserWorkload federate Route failed: Timeout: request did not complete within requested timeout - context deadline exceeded, reconciling Alertmanager Route failed: retrieving Route object failed: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io alertmanager-main), reconciling Thanos Querier Route failed: retrieving Route object failed: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io thanos-querier), reconciling Prometheus API Route failed: retrieving Route object failed: the server was unable to return a response in the time allotted, but may still be processing the request (get routes.route.openshift.io prometheus-k8s), prometheuses.monitoring.coreos.com "k8s" not found
network                                    4.12.0-0.nightly-2022-10-18-041406   True        False         False      124m    
node-tuning                                4.12.0-0.nightly-2022-10-18-041406   True        False         False      111m    
openshift-apiserver                        4.12.0-0.nightly-2022-10-18-041406   True        False         False      104m    
openshift-controller-manager               4.12.0-0.nightly-2022-10-18-041406   True        False         False      107m    
openshift-samples                                                               False       True          False      103m    The error the server was unable to return a response in the time allotted, but may still be processing the request (get imagestreams.image.openshift.io) during openshift namespace cleanup has left the samples in an unknown state
operator-lifecycle-manager                 4.12.0-0.nightly-2022-10-18-041406   True        False         False      111m    
operator-lifecycle-manager-catalog         4.12.0-0.nightly-2022-10-18-041406   True        False         False      111m    
operator-lifecycle-manager-packageserver   4.12.0-0.nightly-2022-10-18-041406   True        False         False      106m    
service-ca                                 4.12.0-0.nightly-2022-10-18-041406   True        False         False      124m    
storage                                    4.12.0-0.nightly-2022-10-18-041406   True        False         False      111m  

Expected results:

Deployment succeeds without issues.

Additional info:

I was unable to run must-gather, so I am attaching the pod logs copied from the host file system.

Not all of the errors reported by the assisted API (and shown in the wait-for bootstrap complete output) actually require user action.

Some appear when the agents first register but resolve themselves relatively quickly in the natural course of events.

Some, like the availability of NTP, don't block the installation from proceeding at all.

We need to think about the best ways of exposing this information to the user.

Description of problem:

i18n translation missing in "Remove component node from application" modal

Version-Release number of selected component (if applicable):

 

How reproducible:

 

Steps to Reproduce:

1. Navigate to dev console and create a workload under an Application group
2. On the Topology view remove the workload from the Application group
3. See the i18n error in the console

Actual results:

Missing i18n key "Remove component node from application" in namespace "topology" and language "en." in console

Expected results:

No i18n error should be shown in the console.

Additional info:

 

This is a clone of issue OCPBUGS-3069. The following is the description of the original issue:

Description of problem:

On the cluster settings page, an available upgrade is shown. After the user chooses one target version, clicks "Upgrade" and waits for a long time, there is no info about the upgrade status.

Version-Release number of selected component (if applicable):

4.12.0-0.nightly-2022-10-25-210451

How reproducible:

Always

Steps to Reproduce:

1. Log in to the console with an available upgrade for the cluster and select a target version in the available version list. Then click "Update". Check the upgrade progress on the cluster settings page.
2. Check the upgrade info from the client with "oc adm upgrade".
3.

Actual results:

1. No information or upgrade progress is shown on the page.
2. The client shows that retrieving the target version failed.
$ oc adm upgrade 
Cluster version is 4.12.0-0.nightly-2022-10-25-210451
  ReleaseAccepted=False  
  Reason: RetrievePayload
  Message: Retrieving payload failed version="4.12.0-0.nightly-2022-10-27-053332" image="registry.ci.openshift.org/ocp/release@sha256:fd4e9bec095b845c6f726f9ce17ee70449971b8286bb9b7478c06c5f697f05f1" failure=The update cannot be verified: unable to verify sha256:fd4e9bec095b845c6f726f9ce17ee70449971b8286bb9b7478c06c5f697f05f1 against keyrings: verifier-public-key-redhat
Upstream: https://openshift-release.apps.ci.l2s4.p1.openshiftapps.com/graph
Channel: stable-4.12
Recommended updates:  
  VERSION                            IMAGE
  4.12.0-0.nightly-2022-11-01-135441 registry.ci.openshift.org/ocp/release@sha256:f79d25c821a73496f4664a81a123925236d0c7818fd6122feb953bc64e91f5d0
  4.12.0-0.nightly-2022-10-31-232349 registry.ci.openshift.org/ocp/release@sha256:cb2d157805abc413394fc579776d3f4406b0a2c2ed03047b6f7958e6f3d92622
  4.12.0-0.nightly-2022-10-28-001703 registry.ci.openshift.org/ocp/release@sha256:c914c11492cf78fb819f4b617544cd299c3a12f400e106355be653c0013c2530
  4.12.0-0.nightly-2022-10-27-053332 registry.ci.openshift.org/ocp/release@sha256:fd4e9bec095b845c6f726f9ce17ee70449971b8286bb9b7478c06c5f697f05f1

Expected results:

1. The console page should also show this kind of message if retrieving the target payload failed, so that the user knows the actual result after trying to upgrade.

Additional info:

 

Description of problem:

If a customer creates a machine with a networks section like this

networks:
- filter: {}
  noAllowedAddressPairs: false
  subnets:
  - filter: {}
    uuid: primary-subnet-uuid
- filter: {}
  noAllowedAddressPairs: true
  subnets:
  - filter: {}
    uuid: other-subnet-uuid
primarySubnet: primary-subnet-uuid

Then all the ports are created without the allowed address pairs.

Doing some research in the source code, I have found that:
- For each entry in the networks: section, networks are filtered as per its filter: section[1]
- Then, if the subnets: section of the network entry is not empty, for each of the network IDs found above[2], 2 things are done that are relevant for this situation:
  - The net ID is saved in netsWithoutAllowedAddressPairs[3]. That map is later checked while creating any port[4].
  - For each subnet entry that matches the network ID, a port is created[5].

So, the problematic behavior happens due to the following:

- Both entries in the networks array have empty filters. This means that both entries selected all the neutron networks.
- This configuration results in one port per subnet because, in the later traversal of the subnets array of each entry[5], it is filtering by subnet and creating a single port, as expected.
- However, the entry with "noAllowedAddressPairs: true" is selecting all the neutron networks, so it adds all of them to the netsWithoutAllowedAddressPairs map[3], regardless of the subnets filtering.
- As all the networks are in the netsWithoutAllowedAddressPairs map, all the ports created for the VM have their allowed address pairs removed[4].

Why do we consider this behavior undesired?

I understand that, if we create a port for a network that has no allowed pairs, we create all the other ports in the same network without the pairs. However, it is surprising that a port in a network has its allowed address pairs removed due to a setting in an entry that yielded no port on that network. In other words, one would expect that the same subnet filtering that determines which ports each networks entry yields for the VM would also apply to the noAllowedAddressPairs parameter.

Version-Release number of selected component (if applicable):

4.10.30

How reproducible:

Always

Steps to Reproduce:

1. Create a machineset like in the description
2.
3.

Actual results:

All ports have no address pairs

Expected results:

Only the port on the secondary subnet has no address pairs.

Additional info:

A simple workaround would be to just fill the filter so that a single network is selected for each network entry.
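A minimal sketch of that workaround; the network names are hypothetical and only illustrate narrowing each networks entry to a single Neutron network, so that noAllowedAddressPairs applies only where intended:

cat > networks-with-filters.yaml <<'EOF'
networks:
- filter:
    name: primary-network        # hypothetical name; selects exactly one network
  subnets:
  - filter: {}
    uuid: primary-subnet-uuid
- filter:
    name: secondary-network      # hypothetical name; selects exactly one network
  noAllowedAddressPairs: true
  subnets:
  - filter: {}
    uuid: other-subnet-uuid
primarySubnet: primary-subnet-uuid
EOF
# Merge this into the MachineSet's providerSpec, for example via:
oc -n openshift-machine-api edit machineset <machineset-name>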

References:
[1] - https://github.com/openshift/cluster-api-provider-openstack/blob/f6b51710d4f395ded401347589447f5f41dd5c4c/pkg/cloud/openstack/clients/machineservice.go#L576
[2] - https://github.com/openshift/cluster-api-provider-openstack/blob/f6b51710d4f395ded401347589447f5f41dd5c4c/pkg/cloud/openstack/clients/machineservice.go#L580
[3] - https://github.com/openshift/cluster-api-provider-openstack/blob/f6b51710d4f395ded401347589447f5f41dd5c4c/pkg/cloud/openstack/clients/machineservice.go#L581-L583
[4] - https://github.com/openshift/cluster-api-provider-openstack/blob/f6b51710d4f395ded401347589447f5f41dd5c4c/pkg/cloud/openstack/clients/machineservice.go#L658-L660
[5] - https://github.com/openshift/cluster-api-provider-openstack/blob/f6b51710d4f395ded401347589447f5f41dd5c4c/pkg/cloud/openstack/clients/machineservice.go#L610-L625

Description of problem:

When the log line number is too big, the number overlaps with the cut-off line in the log viewer.

Version-Release number of selected component (if applicable):

4.12.0-0.nightly-2022-08-15-150248

How reproducible:

Always

Steps to Reproduce:
1. Go to a pod log page with lots of logs, such as a pod in the openshift-cluster-version namespace. Check the log line numbers.
2.
3.

Actual results:

1. When the line number is too big, it overlaps with the cut-off line.

Expected results:

1. Should have no overlaps in logs

Additional info:

Description of problem:

NPE on topology for a namespace which just got deleted; see the attached screenshot.

Version-Release number of selected component (if applicable):

 

How reproducible:

 

Steps to Reproduce:

1. Login as regular user
2. Create a ns and delete the ns
3. visit the deleted ns in topology

Actual results:

The console breaks due to an NPE.

Expected results:

console shouldn't break

Additional info:

 

With the CSISnapshot capability disabled, the Azure Disk CSI Driver Operator gets Degraded.

The reason is that cluster-csi-snapshot-controller-operator does not create the VolumeSnapshotClass CRD, which the operator expects to exist.

This is a clone of issue OCPBUGS-3214. The following is the description of the original issue:

Description of problem:

The installer has logic that avoids adding the router CAs to the kubeconfig if the console is not available.  It's not clear why it does this, but it means that the router CAs don't get added when the console is deliberately disabled (it is now an optional capability in 4.12).

Version-Release number of selected component (if applicable):

Seen in 4.12+4.13

How reproducible:

Always, when starting a cluster w/o the Console capability

Steps to Reproduce:

1. Edit the install-config to set:
capabilities:
  baselineCapabilitySet: None
2. install the cluster
3. check the CAs in the kubeconfig, the wildcard route CA will be missing (compare it w/ a normal cluster)
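A sketch of the comparison in step 3; it counts and lists the CA certificates embedded in the generated kubeconfig (assumes the installer's default auth/kubeconfig path and that base64/openssl are available):

# Count the CA certificates bundled in the kubeconfig; on an affected cluster the
# wildcard router CA is missing compared to a normal cluster.
oc --kubeconfig auth/kubeconfig config view --raw \
  -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' \
  | base64 -d | grep -c 'BEGIN CERTIFICATE'
# List their subjects:
oc --kubeconfig auth/kubeconfig config view --raw \
  -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' \
  | base64 -d \
  | openssl crl2pkcs7 -nocrl -certfile /dev/stdin \
  | openssl pkcs7 -print_certs -noout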

Actual results:

router CAs missing

Expected results:

router CAs should be present

Additional info:

This needs to be backported to 4.12.

This is a clone of issue OCPBUGS-2513. The following is the description of the original issue:

Description of problem:

Agent-based installation is failing for a disconnected environment because a pull secret is required for registry.ci.openshift.org. As we are installing the cluster in a disconnected environment, the mirror registry secrets alone should be enough for pulling the image.

Version-Release number of selected component (if applicable):

registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-10-18-041406

How reproducible:

Always

Steps to Reproduce:

1. Set up a mirror registry with the registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-10-18-041406 release.
2. Add the ICSP information in the install-config file (see the sketch after these steps).
3. Create agent.iso using install-config.yaml and agent-config.yaml.
4. ssh to node zero to see the error in create-cluster-and-infraenv.service.
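For reference, a sketch of the mirror configuration from step 2; the mirror registry hostname is a placeholder (the point of this bug is that, with such a mirror configured, the pull secret for the mirror alone should be sufficient):

cat >> install-config.yaml <<'EOF'
imageContentSources:
- mirrors:
  - mirror-registry.example.com:8443/ocp/release    # placeholder mirror
  source: registry.ci.openshift.org/ocp/release
EOF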

Actual results:

create-cluster-and-infraenv.service is failing with the error below:
 
time="2022-10-18T09:36:13Z" level=fatal msg="Failed to register cluster with assisted-service: AssistedServiceError Code: 400 Href:  ID: 400 Kind: Error Reason: pull secret for new cluster is invalid: pull secret must contain auth for \"registry.ci.openshift.org\""

Expected results:

create-cluster-and-infraenv.service should be successfully started.

Additional info:

Refer to this similar bug: https://bugzilla.redhat.com/show_bug.cgi?id=1990659

Description of problem:

To address: 'Static Pod is managed but errored' err='managed container xxx does not have Resource.Requests'

Version-Release number of selected component (if applicable):

4.12

How reproducible:

 

Steps to Reproduce:

1.
2.
3.

Actual results:

 

Expected results:

 

Additional info:

 

Description of problem:

Setting up a GitHub App from the console is lacking the required events and permissions.
Events and Permissions: https://pipelinesascode.com/docs/install/github_apps/

Version-Release number of selected component (if applicable):
4.12

How reproducible:
Always

Steps to Reproduce:

1. Set up a GitHub App from the administrator perspective.
2. Create a Repository and configure it to use the GitHub App method.

Actual results:
The GitHub App is created with limited permissions.

Expected results:
The created GitHub App should contain all the required permissions and should trigger the PipelineRun successfully on git events.

Additional info:

The console needs to update the default_events and default_permissions here; they have to match the CLI - refer this.

We need to update the "See GitHub permissions" section in the UI as well.

Description of problem:
A project viewer is able to see a 'Create Pod Disruption Budget' button on the Pods list page, while the creation will ultimately fail due to insufficient permissions. The console should not show a 'Create Pod Disruption Budget' button for a project viewer; other resource list pages don't have this issue.

Version-Release number of selected component (if applicable):
4.10.0-0.nightly-2021-09-16-212009

How reproducible:
Always

Steps to Reproduce:
1. A normal user has a project and workloads:

   $ oc get all -n yapei1-project
   NAME READY STATUS RESTARTS AGE
   pod/example-787f749bb-czkms 1/1 Running 0 79s
   pod/example-787f749bb-m7wxt 1/1 Running 0 79s
   pod/example-787f749bb-mw8jv 1/1 Running 0 79s

   NAME READY UP-TO-DATE AVAILABLE AGE
   deployment.apps/example 3/3 3 3 79s

   NAME DESIRED CURRENT READY AGE
   replicaset.apps/example-787f749bb 3 3 3 79s

2. Grant another user view access to the project 'yapei1-project':

   $ oc adm policy add-role-to-user view uiauto1 -n yapei1-project
   clusterrole.rbac.authorization.k8s.io/view added: "uiauto1"

3. Log in with user 'uiauto1' and check the permissions on the Pods list page.

Actual results:
3. The project viewer 'uiauto1' can see the pods list successfully; at the same time the console also shows a 'Create Pod Disruption Budget' button, while the creation will ultimately fail if the project viewer tries to create one.

Expected results:
3. The console should not show the 'Create Pod Disruption Budget' button for a project viewer.

Additional info:
For comparison: we don't show a resource creation button (a 'Create xxx' button) on other workload list pages for a project viewer, such as the Deployments and DeploymentConfigs lists.
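For reference, the kind of access check the console could key the button on (a sketch using the user and project from the steps above):

# Returns "no" for a project viewer, so the button should be hidden.
oc auth can-i create poddisruptionbudgets.policy -n yapei1-project --as uiauto1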

This is a clone of issue OCPBUGS-3278. The following is the description of the original issue:

Description of problem:

When doing openshift-install agent create image, one should not need to provide platform specific data like boot MAC addresses.

Version-Release number of selected component (if applicable):

4.12

How reproducible:

100%

Steps to Reproduce:

1. Create an install-config with only VIPs in the Baremetal platform section

apiVersion: v1
metadata:
  name: foo
baseDomain: test.metalkube.org
networking:
  clusterNetwork:
    - cidr: 10.128.0.0/14
      hostPrefix: 23
  machineNetwork:
    - cidr: 192.168.122.0/23
  networkType: OpenShiftSDN
  serviceNetwork:
    - 172.30.0.0/16
compute:
  - architecture: amd64
    hyperthreading: Enabled
    name: worker
    platform: {}
    replicas: 0
controlPlane:
  name: master
  replicas: 3
  hyperthreading: Enabled
  architecture: amd64
platform:
  baremetal:
    apiVIPs:
      - 192.168.122.10
    ingressVIPs:
      - 192.168.122.11
---
apiVersion: v1beta1
metadata:
  name: foo
rendezvousIP: 192.168.122.14

2. openshift-install agent create image

Actual results:

ERROR failed to write asset (Agent Installer ISO) to disk: cannot generate ISO image due to configuration errors 
ERROR failed to fetch Agent Installer ISO: failed to load asset "Install Config": failed to create install config: invalid "install-config.yaml" file: [platform.baremetal.hosts: Invalid value: []*baremetal.Host(nil): bare metal hosts are missing, platform.baremetal.Hosts: Required value: not enough hosts found (0) to support all the configured ControlPlane replicas (3)]

Expected results:

Image gets generated

Additional info:

We should go into the install-config validation code, detect if we are doing an agent-based installation, and skip the hosts checks.

Description of problem:

 

Version-Release number of selected component (if applicable):

 

How reproducible:

 

Steps to Reproduce:

1.
2.
3.

Actual results:

 

Expected results:

 

Additional info:

 

Description of problem:

The Console Operator has a suite of tests responsible for assuring that Console can successfully interact with Operators managed by OLM. The operator-hub.spec test references an operator no longer present in the 4.12 certified operators catalog source: https://github.com/openshift/console/blob/master/frontend/packages/operator-lifecycle-manager/integration-tests-cypress/tests/operator-hub.spec.ts#L64

OLM is unable to set the default catalog sources to the 4.12 image tag until the test is updated to reference an operator present in both the 4.11 and 4.12 images of the certified operators catalog source.


Version-Release number of selected component (if applicable):
4.12


How reproducible: always


Steps to Reproduce:

1. Update the certified operators catalogSource images to the 4.12 tag
2. Attempt to run the operatorhub.spec test suite.

Actual results:

The test fails

Expected results:

The test passes

Additional info:


Description of problem:

There were 4 ingress-controllers and 15 routes in total. On the web console, query "route_metrics_controller_routes_per_shard" in the Observe >> Metrics page. The stats for 3 of the ingress-controllers are 15, and it is 1 for the last ingress-controller.

Version-Release number of selected component (if applicable):

4.12.0-0.nightly-2022-10-23-154914

How reproducible:

Create pods, services, ingress-controllers, routes, then check  "route_metrics_controller_routes_per_shard" on web console

Steps to Reproduce:

1. get cluster's base domain
% oc get dnses.config/cluster -oyaml | grep -i domain
  baseDomain: shudi-412gcpop36.qe.gcp.devcluster.openshift.com

2. create 3 additional ingress-controllers
% oc -n openshift-ingress-operator get ingresscontroller
NAME         AGE
default      7h5m
extertest3   120m
internal1    120m
internal2    120m
% 

3. check the spec of the 4 ingress-controllers
a, default

b, extertest3
spec:
  domain: extertest3.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
  endpointPublishingStrategy:
    loadBalancer:
      dnsManagementPolicy: Managed
      scope: External
    type: LoadBalancerService
c, internal1
spec:
  domain: internal1.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
  endpointPublishingStrategy:
    loadBalancer:
      dnsManagementPolicy: Managed
      scope: Internal
    type: LoadBalancerService
d, internal2
spec:
  domain: internal2.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
  endpointPublishingStrategy:
    loadBalancer:
      dnsManagementPolicy: Managed
      scope: Internal
    type: LoadBalancerService
  routeSelector:
    matchLabels:
      shard: alpha

4. check the routes; there are 15 routes
% oc get route -A | awk '{print $3}'
HOST/PORT
oauth-openshift.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
console-openshift-console.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
downloads-openshift-console.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
canary-openshift-ingress-canary.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
alertmanager-main-openshift-monitoring.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
prometheus-k8s-openshift-monitoring.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
prometheus-k8s-federate-openshift-monitoring.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
thanos-querier-openshift-monitoring.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
edge1-test.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
int1reen2-test.internal1.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
pass1-test.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
reen1-test.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
service-unsecure-test.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
int1edge2-test.internal1.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
test.shudi.com
%

% oc get route -A | awk '{print $3}' | grep apps.shudi
oauth-openshift.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
console-openshift-console.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
downloads-openshift-console.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
canary-openshift-ingress-canary.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
alertmanager-main-openshift-monitoring.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
prometheus-k8s-openshift-monitoring.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
prometheus-k8s-federate-openshift-monitoring.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
thanos-querier-openshift-monitoring.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
edge1-test.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
pass1-test.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
reen1-test.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
service-unsecure-test.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com
%

% oc get route -A | awk '{print $3}' | grep apps.shudi | wc -l
      12
% oc get route -A | awk '{print $3}' | grep internal1 | wc -l 
       2
% oc get route -A | awk '{print $3}' | grep shudi.com | wc -l
       1
%

5. only the route unsvc5 had the shard=alpha label
 % oc get route unsvc5  -oyaml | grep labels: -A2
  labels:
    name: unsvc5
    shard: alpha
 % oc get route unsvc5 -oyaml | grep spec: -A1
  spec:
    host: test.shudi.com

6. log in to the web console (https://console-openshift-console.apps.shudi-412gcpop36.qe.gcp.devcluster.openshift.com/monitoring/query-browser), then navigate to Observe >> Metrics

7. input"route_metrics_controller_routes_per_shard ", then click the "Run queries" button. As the attached picture showed:
​​name                           value
default                        15
extertest3                     15
internal1                      15      
internal2                      1

8. Also there was a minor issue: as the attached picture shows, there were two "name" columns in the header line

Name                                           name      value                              
route_metrics_controller_routes_per_shard     default    15
route_metrics_controller_routes_per_shard     extertest3 15
route_metrics_controller_routes_per_shard     internal1  15
route_metrics_controller_routes_per_shard     internal2  1

Actual results:

name                         value
default                      15
extertest3                   15
internal1                    15
internal2                    1

Expected results:

name                         value
default                      12
extertest3                   0
internal1                    2
internal2                    1

Additional info:

 

Console should be using v1 version of the ConsolePlugin model rather then the old v1alpha1.

CONSOLE-3077 was updating this version, but it did not make the cut for the 4.12 release. Based on discussion with Samuel Padgett we should be backporting this to 4.12.

 

The risk should be minimal since we are only updating the model itself + validation + Readme
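A quick check of which API versions the ConsolePlugin CRD serves on a given cluster (a sketch, useful when verifying the backport):

oc get crd consoleplugins.console.openshift.io \
  -o jsonpath='{range .spec.versions[*]}{.name}{" served="}{.served}{"\n"}{end}'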

Description of problem:

A cluster-version-operator pod crashloop during the bootstrap process might be leading to a longer bootstrap process, causing the installer to time out and fail.

The cluster-version-operator pod is continuously restarting due to a Go panic. The bootstrap process fails due to the timeout, although it completes correctly after more time, once the cluster-version-operator pod runs correctly.

$ oc -n openshift-cluster-version logs -p cluster-version-operator-754498df8b-5gll8
I0919 10:25:05.790124       1 start.go:23] ClusterVersionOperator 4.12.0-202209161347.p0.gc4fd1f4.assembly.stream-c4fd1f4                                                                                                                    
F0919 10:25:05.791580       1 start.go:29] error: Get "https://127.0.0.1:6443/apis/config.openshift.io/v1/featuregates/cluster": dial tcp 127.0.0.1:6443: connect: connection refused                                                        
goroutine 1 [running]:
k8s.io/klog/v2.stacks(0x1)
        /go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/klog/v2/klog.go:860 +0x8a
k8s.io/klog/v2.(*loggingT).output(0x2bee180, 0x3, 0x0, 0xc00017d5e0, 0x1, {0x22e9abc?, 0x1?}, 0x2beed80?, 0x0)
        /go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/klog/v2/klog.go:825 +0x686
k8s.io/klog/v2.(*loggingT).printfDepth(0x2bee180, 0x0?, 0x0, {0x0, 0x0}, 0x1?, {0x1b9cff0, 0x9}, {0xc000089140, 0x1, ...})                                                                                                                   
        /go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/klog/v2/klog.go:630 +0x1f2
k8s.io/klog/v2.(*loggingT).printf(...)
        /go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/klog/v2/klog.go:612
k8s.io/klog/v2.Fatalf(...)
        /go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/klog/v2/klog.go:1516
main.init.3.func1(0xc00012ac80?, {0x1b96f60?, 0x6?, 0x6?})
        /go/src/github.com/openshift/cluster-version-operator/cmd/start.go:29 +0x1e6
github.com/spf13/cobra.(*Command).execute(0xc00012ac80, {0xc0002fea20, 0x6, 0x6})
        /go/src/github.com/openshift/cluster-version-operator/vendor/github.com/spf13/cobra/command.go:860 +0x663
github.com/spf13/cobra.(*Command).ExecuteC(0x2bd52a0)
        /go/src/github.com/openshift/cluster-version-operator/vendor/github.com/spf13/cobra/command.go:974 +0x3b4
github.com/spf13/cobra.(*Command).Execute(...)
        /go/src/github.com/openshift/cluster-version-operator/vendor/github.com/spf13/cobra/command.go:902
main.main()
        /go/src/github.com/openshift/cluster-version-operator/cmd/main.go:29 +0x46

Version-Release number of selected component (if applicable):

4.12.0-0.nightly-2022-09-18-234318

How reproducible:

Most of the time, with any network type and installation type (IPI, UPI and proxy).

Steps to Reproduce:

1. Install OCP 4.12 IPI
   $ openshift-install create cluster
2. Wait until bootstrap is completed

Actual results:

[...]
level=error msg="Bootstrap failed to complete: timed out waiting for the condition"
level=error msg="Failed to wait for bootstrapping to complete. This error usually happens when there is a problem with control plane hosts that prevents the control plane operators from creating the control plane."
NAMESPACE                                          NAME                                                         READY   STATUS             RESTARTS        AGE 
openshift-cluster-version                          cluster-version-operator-754498df8b-5gll8                    0/1     CrashLoopBackOff   7 (3m21s ago)   24m 
openshift-image-registry                           image-registry-94fd8b75c-djbxb                               0/1     Pending            0               6m44s 
openshift-image-registry                           image-registry-94fd8b75c-ft66c                               0/1     Pending            0               6m44s 
openshift-ingress                                  router-default-64fbb749b4-cmqgw                              0/1     Pending            0               13m   
openshift-ingress                                  router-default-64fbb749b4-mhtqx                              0/1     Pending            0               13m   
openshift-monitoring                               prometheus-operator-admission-webhook-6d8cb95cf7-6jn5q       0/1     Pending            0               14m 
openshift-monitoring                               prometheus-operator-admission-webhook-6d8cb95cf7-r6nnk       0/1     Pending            0               14m 
openshift-network-diagnostics                      network-check-source-8758bd6fc-vzf5k                         0/1     Pending            0               18m 
openshift-operator-lifecycle-manager               collect-profiles-27726375-hlq89                              0/1     Pending            0               21m 
$ oc -n openshift-cluster-version describe pod cluster-version-operator-754498df8b-5gll8
Name:                 cluster-version-operator-754498df8b-5gll8
Namespace:            openshift-cluster-version                                                            
Priority:             2000000000              
Priority Class Name:  system-cluster-critical                                                       
Node:                 ostest-4gtwr-master-1/10.196.0.68
Start Time:           Mon, 19 Sep 2022 10:17:41 +0000                       
Labels:               k8s-app=cluster-version-operator
                      pod-template-hash=754498df8b
Annotations:          openshift.io/scc: hostaccess 
Status:               Running                      
IP:                   10.196.0.68
IPs:                 
  IP:           10.196.0.68
Controlled By:  ReplicaSet/cluster-version-operator-754498df8b
Containers:        
  cluster-version-operator:
    Container ID:  cri-o://1e2879600c89baabaca68c1d4d0a563d4b664c507f0617988cbf9ea7437f0b27
    Image:         registry.ci.openshift.org/ocp/release@sha256:2e38cd73b402a990286829aebdf00aa67a5b99124c61ec2f4fccd1135a1f0c69                                                                                                             
    Image ID:      registry.ci.openshift.org/ocp/release@sha256:2e38cd73b402a990286829aebdf00aa67a5b99124c61ec2f4fccd1135a1f0c69
    Port:          <none>                                                                                                                                                                                                                    
    Host Port:     <none>                                                                                                                                                                                                                    
    Args:                                                     
      start                                                                                                                                                                                                                                  
      --release-image=registry.ci.openshift.org/ocp/release@sha256:2e38cd73b402a990286829aebdf00aa67a5b99124c61ec2f4fccd1135a1f0c69                                                                                                          
      --enable-auto-update=false                                                                                                                                                                                                             
      --listen=0.0.0.0:9099                                                  
      --serving-cert-file=/etc/tls/serving-cert/tls.crt
      --serving-key-file=/etc/tls/serving-cert/tls.key                                                                                                                                                                                       
      --v=2             
    State:       Waiting 
      Reason:    CrashLoopBackOff
    Last State:  Terminated
      Reason:    Error
      Message:   I0919 10:33:07.798614       1 start.go:23] ClusterVersionOperator 4.12.0-202209161347.p0.gc4fd1f4.assembly.stream-c4fd1f4
F0919 10:33:07.800115       1 start.go:29] error: Get "https://127.0.0.1:6443/apis/config.openshift.io/v1/featuregates/cluster": dial tcp 127.0.0.1:6443: connect: connection refused
goroutine 1 [running]:
k8s.io/klog/v2.stacks(0x1)
  /go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/klog/v2/klog.go:860 +0x8a
k8s.io/klog/v2.(*loggingT).output(0x2bee180, 0x3, 0x0, 0xc000433ea0, 0x1, {0x22e9abc?, 0x1?}, 0x2beed80?, 0x0)
  /go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/klog/v2/klog.go:825 +0x686
k8s.io/klog/v2.(*loggingT).printfDepth(0x2bee180, 0x0?, 0x0, {0x0, 0x0}, 0x1?, {0x1b9cff0, 0x9}, {0xc0002d6630, 0x1, ...})
  /go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/klog/v2/klog.go:630 +0x1f2
k8s.io/klog/v2.(*loggingT).printf(...)
  /go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/klog/v2/klog.go:612
k8s.io/klog/v2.Fatalf(...)
  /go/src/github.com/openshift/cluster-version-operator/vendor/k8s.io/klog/v2/klog.go:1516
main.init.3.func1(0xc0003b4f00?, {0x1b96f60?, 0x6?, 0x6?})
  /go/src/github.com/openshift/cluster-version-operator/cmd/start.go:29 +0x1e6
github.com/spf13/cobra.(*Command).execute(0xc0003b4f00, {0xc000311980, 0x6, 0x6})
  /go/src/github.com/openshift/cluster-version-operator/vendor/github.com/spf13/cobra/command.go:860 +0x663
github.com/spf13/cobra.(*Command).ExecuteC(0x2bd52a0)
  /go/src/github.com/openshift/cluster-version-operator/vendor/github.com/spf13/cobra/command.go:974 +0x3b4
github.com/spf13/cobra.(*Command).Execute(...)
  /go/src/github.com/openshift/cluster-version-operator/vendor/github.com/spf13/cobra/command.go:902
main.main()
  /go/src/github.com/openshift/cluster-version-operator/cmd/main.go:29 +0x46
      Exit Code:    255
      Started:      Mon, 19 Sep 2022 10:33:07 +0000
      Finished:     Mon, 19 Sep 2022 10:33:07 +0000
    Ready:          False
    Restart Count:  7
    Requests:
      cpu:     20m
      memory:  50Mi
    Environment:
      KUBERNETES_SERVICE_PORT:  6443
      KUBERNETES_SERVICE_HOST:  127.0.0.1
      NODE_NAME:                 (v1:spec.nodeName)
      CLUSTER_PROFILE:          self-managed-high-availability
    Mounts:
      /etc/cvo/updatepayloads from etc-cvo-updatepayloads (ro)
      /etc/ssl/certs from etc-ssl-certs (ro)
      /etc/tls/service-ca from service-ca (ro)
      /etc/tls/serving-cert from serving-cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  etc-ssl-certs:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/ssl/certs
    HostPathType:
  etc-cvo-updatepayloads:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cvo/updatepayloads
    HostPathType:
  serving-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  cluster-version-operator-serving-cert
    Optional:    false
  service-ca:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      openshift-service-ca.crt
    Optional:  false
  kube-api-access:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3600
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              node-role.kubernetes.io/master=
Tolerations:                 node-role.kubernetes.io/master:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 120s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 120s
Events:
  Type     Reason            Age                   From               Message
  ----     ------            ----                  ----               -------
  Warning  FailedScheduling  25m                   default-scheduler  no nodes available to schedule pods
  Warning  FailedScheduling  21m                   default-scheduler  0/2 nodes are available: 2 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.
  Normal   Scheduled         19m                   default-scheduler  Successfully assigned openshift-cluster-version/cluster-version-operator-754498df8b-5gll8 to ostest-4gtwr-master-1 by ostest-4gtwr-bootstrap
  Warning  FailedMount       17m                   kubelet            Unable to attach or mount volumes: unmounted volumes=[serving-cert], unattached volumes=[service-ca kube-api-access etc-ssl-certs etc-cvo-updatepayloads serving-cert]: timed out waiting for the condition
  Warning  FailedMount       17m (x9 over 19m)     kubelet            MountVolume.SetUp failed for volume "serving-cert" : secret "cluster-version-operator-serving-cert" not found
  Normal   Pulling           15m                   kubelet            Pulling image "registry.ci.openshift.org/ocp/release@sha256:2e38cd73b402a990286829aebdf00aa67a5b99124c61ec2f4fccd1135a1f0c69"
  Normal   Pulled            15m                   kubelet            Successfully pulled image "registry.ci.openshift.org/ocp/release@sha256:2e38cd73b402a990286829aebdf00aa67a5b99124c61ec2f4fccd1135a1f0c69" in 7.481824271s
  Normal   Started           14m (x3 over 15m)     kubelet            Started container cluster-version-operator
  Normal   Created           14m (x4 over 15m)     kubelet            Created container cluster-version-operator
  Normal   Pulled            14m (x3 over 15m)     kubelet            Container image "registry.ci.openshift.org/ocp/release@sha256:2e38cd73b402a990286829aebdf00aa67a5b99124c61ec2f4fccd1135a1f0c69" already present on machine
  Warning  BackOff           4m22s (x52 over 15m)  kubelet            Back-off restarting failed container
  
  

Expected results:

No panic?

Additional info:

Seen in most of OCP on OSP QE CI jobs.

Attached [^must-gather-install.tar.gz]

The AWS CPMS changes made here cause single-node clusters to fail installation:
https://github.com/openshift/installer/pull/6172

 

The fix is to check the installation type and skip creating the CPMS manifest when it is a single-node installation.
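
A quick way to verify the fixed behaviour (a sketch, not from the original report; it assumes the generated CPMS manifest carries kind: ControlPlaneMachineSet):

# With a single-node install-config (controlPlane replicas: 1), no CPMS manifest should be produced
openshift-install create manifests --dir ./sno
grep -rl "kind: ControlPlaneMachineSet" ./sno && echo "CPMS manifest still generated" || echo "no CPMS manifest"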

This relates to the recovery of a cluster following an etcd outage.

The ingress path to kube-apiserver is:

───────────> VIP ─────────────────> Local HAProxy ────┬─> kube-apiserver-master-0
    (managed by keepalived)                           │
                                                      ├─> kube-apiserver-master-1
                                                      │
                                                      └─> kube-apiserver-master-2

Each master is running an HAProxy which load balances between the 3 kube-apiservers. Each HAProxy is running health checks against each kube-apiserver, and will add or remove it from the available pool based on its health.

We only use keepalived to ensure that HAProxy is not a single point of failure. It is the job of keepalived to ensure that incoming traffic is being directed to an HAProxy which is functioning correctly.

The current health check we are using for keepalived involves polling /readyz against the local HAProxy. While this seems intuitively correct, it is in fact testing the wrong thing: it tests whether the kube-apiserver that HAProxy connects to is functioning correctly. However, that is not the purpose of keepalived. HAProxy runs health checks against the kube-apiserver backends; keepalived simply selects a correctly functioning HAProxy.

This becomes important during recovery from an outage. When none of the kube-apiservers are healthy this health check will fail continuously, and the API VIP will move uselessly between masters. However the situation is much worse when only one of the kube-apiservers is up. In this case there is a high probability that it is overloaded and at least rate limiting incoming connections. This may lead us to fail the keepalived health check and fail the VIP over to the next HAProxy. This will cause all open kube-apiserver connections to reset, even the established ones. This increases the load on the kube-apiserver and increases the probability that the health check will fail again.

Ideally the keepalived health check would check only the health of HAProxy itself, not the health of the pool of kube-apiservers. In practice it will probably never be necessary to move the VIP while the master is up, regardless of the health of the cluster. A network partition affecting HAProxy would already be handled by VRRP between the masters, so it may be sufficient to check that the local HAProxy pod is healthy.
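
A minimal sketch of what such a check could look like (an illustration only, not the current implementation; the script name and the decision to key off the HAProxy process are assumptions):

#!/bin/bash
# chk_haproxy.sh - hypothetical keepalived check script: succeed as long as the
# local HAProxy process is alive, regardless of the health of the kube-apiserver
# backends it load balances. keepalived would call this from a vrrp_script block
# in place of the current /readyz poll.
pgrep -x haproxy >/dev/null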

https://github.com/openshift/origin/pull/27444 was intended to move the scaling test out of serial to its own test suite, but it added it to parallel – meaning it's running in all our normal upgrade jobs, causing them to frequently fail with repeating pathological events as well as greatly increasing their run time.

See https://github.com/openshift/origin/pull/27444#discussion_r991296925 for more info

Description of problem:

The Alertmanager silence create / edit form got a new "Negative matcher" option in 4.12 (see https://issues.redhat.com/browse/OCPBUGSM-47734). However, there is nothing to explain what this option means and it will likely not be obvious from the label alone unless you are already quite familiar with Alertmanager.

After discussion with the docs team, it was decided that adding some explanation in context in the UI would be much better than adding an explanation to the documentation. 

Version-Release number of selected component (if applicable):

 

How reproducible:

Always

Steps to Reproduce:

1. Go to Admin perspective
2. Go to Observe > Alerting > Silences page
3. Click on the Create button ("Negative matcher" option is shown with no explanation)

Actual results:

 

Expected results:

 

Additional info:

 

job=pull-ci-openshift-origin-master-e2e-gcp-builds=all

This test has started permafailing on e2e-gcp-builds:

[sig-builds][Feature:Builds][Slow] s2i build with environment file in sources Building from a template should create a image from "test-env-build.json" template and run it in a pod [apigroup:build.openshift.io][apigroup:image.openshift.io]

The error in the test says

Sep 13 07:03:30.345: INFO: At 2022-09-13 07:00:21 +0000 UTC - event for build-test-pod: {kubelet ci-op-kg1t2x13-4e3c6-7hrm8-worker-a-66nwd} Pulling: Pulling image "image-registry.openshift-image-registry.svc:5000/e2e-test-build-sti-env-nglnt/test@sha256:262820fd1a94d68442874346f4c4024fdf556631da51cbf37ce69de094f56fe8"
Sep 13 07:03:30.345: INFO: At 2022-09-13 07:00:23 +0000 UTC - event for build-test-pod: {kubelet ci-op-kg1t2x13-4e3c6-7hrm8-worker-a-66nwd} Pulled: Successfully pulled image "image-registry.openshift-image-registry.svc:5000/e2e-test-build-sti-env-nglnt/test@sha256:262820fd1a94d68442874346f4c4024fdf556631da51cbf37ce69de094f56fe8" in 1.763914719s
Sep 13 07:03:30.345: INFO: At 2022-09-13 07:00:23 +0000 UTC - event for build-test-pod: {kubelet ci-op-kg1t2x13-4e3c6-7hrm8-worker-a-66nwd} Created: Created container test
Sep 13 07:03:30.345: INFO: At 2022-09-13 07:00:23 +0000 UTC - event for build-test-pod: {kubelet ci-op-kg1t2x13-4e3c6-7hrm8-worker-a-66nwd} Started: Started container test
Sep 13 07:03:30.345: INFO: At 2022-09-13 07:00:24 +0000 UTC - event for build-test-pod: {kubelet ci-op-kg1t2x13-4e3c6-7hrm8-worker-a-66nwd} Pulled: Container image "image-registry.openshift-image-registry.svc:5000/e2e-test-build-sti-env-nglnt/test@sha256:262820fd1a94d68442874346f4c4024fdf556631da51cbf37ce69de094f56fe8" already present on machine
Sep 13 07:03:30.345: INFO: At 2022-09-13 07:00:25 +0000 UTC - event for build-test-pod: {kubelet ci-op-kg1t2x13-4e3c6-7hrm8-worker-a-66nwd} Unhealthy: Readiness probe failed: Get "http://10.129.2.63:8080/": dial tcp 10.129.2.63:8080: connect: connection refused
Sep 13 07:03:30.345: INFO: At 2022-09-13 07:00:26 +0000 UTC - event for build-test-pod: {kubelet ci-op-kg1t2x13-4e3c6-7hrm8-worker-a-66nwd} BackOff: Back-off restarting failed container

Description of problem:

cloud-network-config-controller pod crashloops in proxy deployments as it tries to reach Openstack keystone API directly (not through the proxy) and there is no connectivity.

NAMESPACE                                          NAME                                                         READY   STATUS             RESTARTS          AGE
openshift-cloud-network-config-controller          cloud-network-config-controller-c4867b748-vlq9h              0/1     CrashLoopBackOff   158 (2m10s ago)   13h

$ oc -n openshift-cloud-network-config-controller logs -p cloud-network-config-controller-c4867b748-vlq9h
W0927 05:48:18.678947       1 client_config.go:617] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0927 05:48:18.680269       1 leaderelection.go:248] attempting to acquire leader lease openshift-cloud-network-config-controller/cloud-network-config-controller-lock...
I0927 05:48:26.754377       1 leaderelection.go:258] successfully acquired lease openshift-cloud-network-config-controller/cloud-network-config-controller-lock
I0927 05:48:26.755413       1 openstack.go:121] Custom CA bundle found at location '/kube-cloud-config/ca-bundle.pem' - reading certificate information
F0927 05:48:28.233519       1 main.go:101] Error building cloud provider client, err: Get "https://10.46.44.10:13000/": dial tcp 10.46.44.10:13000: connect: no route to host
goroutine 51 [running]:
k8s.io/klog/v2.stacks(0x1)
        /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/klog/v2/klog.go:860 +0x8a
k8s.io/klog/v2.(*loggingT).output(0x37696c0, 0x3, 0x0, 0xc000636000, 0x1, {0x2cbcbd8?, 0x1?}, 0xc000438400?, 0x0)
        /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/klog/v2/klog.go:825 +0x686
k8s.io/klog/v2.(*loggingT).printfDepth(0x37696c0, 0x237798a?, 0x0, {0x0, 0x0}, 0x7fff81041af7?, {0x23a20d0, 0x2d}, {0xc00052c050, 0x1, ...})
        /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/klog/v2/klog.go:630 +0x1f2
k8s.io/klog/v2.(*loggingT).printf(...)
        /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/klog/v2/klog.go:612
k8s.io/klog/v2.Fatalf(...)
        /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/klog/v2/klog.go:1516
main.main.func1({0x26e5638, 0xc00016c040})
        /go/src/github.com/openshift/cloud-network-config-controller/cmd/cloud-network-config-controller/main.go:101 +0x26d
created by k8s.io/client-go/tools/leaderelection.(*LeaderElector).Run
        /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:211 +0x11b
goroutine 1 [select]:
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00052bb60?, {0x26cee20, 0xc000581740}, 0x1, 0xc00052bb60)
        /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x135
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00016c080?, 0x60db88400, 0x0, 0x20?, 0x7fea470ec108?)
        /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(...)
        /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
k8s.io/client-go/tools/leaderelection.(*LeaderElector).renew(0xc0000a8120, {0x26e5638?, 0xc00016c040?})
        /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:268 +0xd0
k8s.io/client-go/tools/leaderelection.(*LeaderElector).Run(0xc0000a8120, {0x26e5638, 0xc00025fcc0})
        /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:212 +0x12f
k8s.io/client-go/tools/leaderelection.RunOrDie({0x26e5638, 0xc00025fcc0}, {{0x26e7430, 0xc00062afa0}, 0x1fe5d61a00, 0x18e9b26e00, 0x60db88400, {0xc00065e630, 0xc000634810, 0x0}, ...})
        /go/src/github.com/openshift/cloud-network-config-controller/vendor/k8s.io/client-go/tools/leaderelection/leaderelection.go:226 +0x94
main.main()
        /go/src/github.com/openshift/cloud-network-config-controller/cmd/cloud-network-config-controller/main.go:86 +0x450

Version-Release number of selected component (if applicable):

4.12.0-0.nightly-2022-09-26-050728

How reproducible:

Always

Steps to Reproduce:

1. Install OCP with proxy

Actual results:

Bootstrap failure and pod crashloop

Expected results:

Successful installation

Additional info:

Please find the must-gather here.

Description of problem:

The current version of OpenShift's CoreDNS is based on Kubernetes 1.24 packages, while OpenShift 4.12 is based on Kubernetes 1.25.

Version-Release number of selected component (if applicable):

4.12

How reproducible:

Always

Steps to Reproduce:

1. Check https://github.com/openshift/coredns/blob/release-4.12/go.mod 

Actual results:

Kubernetes packages (k8s.io/api, k8s.io/apimachinery, and k8s.io/client-go) are at version v0.24.0.

Expected results:

Kubernetes packages are at version v0.25.0 or later.

Additional info:

Using old Kubernetes API and client packages brings risk of API compatibility issues.
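
One way to bump the packages in the coredns fork (a sketch, assuming the newer client libraries introduce no incompatible API changes for the plugins in use):

go get k8s.io/api@v0.25.0 k8s.io/apimachinery@v0.25.0 k8s.io/client-go@v0.25.0
go mod tidy
go build ./... && go test ./...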

This is a clone of issue OCPBUGS-3973. The following is the description of the original issue:

Description of problem:

After upgrading an SNO cluster from 4.12 to 4.13, the csi-snapshot-controller is degraded with the following message (the same appears in the csi-snapshot-controller-operator log): 
E1122 09:02:51.867727       1 base_controller.go:272] StaticResourceController reconciliation failed: ["csi_controller_deployment_pdb.yaml" (string): poddisruptionbudgets.policy "csi-snapshot-controller-pdb" is forbidden: User "system:serviceaccount:openshift-cluster-storage-operator:csi-snapshot-controller-operator" cannot delete resource "poddisruptionbudgets" in API group "policy" in the namespace "openshift-cluster-storage-operator", "webhook_deployment_pdb.yaml" (string): poddisruptionbudgets.policy "csi-snapshot-webhook-pdb" is forbidden: User "system:serviceaccount:openshift-cluster-storage-operator:csi-snapshot-controller-operator" cannot delete resource "poddisruptionbudgets" in API group "policy" in the namespace "openshift-cluster-storage-operator"]

Version-Release number of selected component (if applicable):

4.12.0-0.nightly-2022-11-19-191518 to 4.13.0-0.nightly-2022-11-19-182111

How reproducible:

1/1

Steps to Reproduce:

Upgrade SNO cluster from 4.12 to 4.13 

Actual results:

csi-snapshot-controller is degraded

Expected results:

csi-snapshot-controller should be healthy

Additional info:

It also happened on from scratch cluster on 4.13: https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-multiarch-master-nightly-4.13-ocp-e2e-aws-ovn-arm64-single-node/1594946128904720384

Description of problem:

In looking at jobs on an accepted payload at https://amd64.ocp.releases.ci.openshift.org/releasestream/4.12.0-0.ci/release/4.12.0-0.ci-2022-08-30-122201 , I observed this job https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-ci-4.12-e2e-aws-sdn-serial/1564589538850902016 with "Undiagnosed panic detected in pod" "pods/openshift-controller-manager-operator_openshift-controller-manager-operator-74bf985788-8v9qb_openshift-controller-manager-operator.log.gz:E0830 12:41:48.029165       1 runtime.go:79] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)" 

Version-Release number of selected component (if applicable):

4.12

How reproducible:

probably relatively easy to reproduce (but not consistently) given it's happened several times according to this search: https://search.ci.openshift.org/?search=Observed+a+panic%3A+%22invalid+memory+address+or+nil+pointer+dereference%22&maxAge=48h&context=1&type=junit&name=&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job

Steps to Reproduce:

1. let nightly payloads run or run one of the presubmit jobs mentioned in the search above
2.
3.

Actual results:

Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)}

Expected results:

no panics

Additional info:

 

We added server groups for control plane and computes as part of OSASINFRA-2570, except for UPI that only creates server group for the control plane.

We need to update the UPI scripts to create a server group for computes, to be consistent with IPI, so that the instructions at https://docs.openshift.com/container-platform/4.11/machine_management/creating_machinesets/creating-machineset-osp.html work out of the box in case customers want to create MachineSets on their UPI clusters.

Related to OCPCLOUD-1135.
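
For reference, the UPI addition would boil down to something like the following before creating the compute servers (a sketch; the group name and the soft-anti-affinity policy are assumptions and must match what the MachineSet provider spec references):

# Create a server group for the compute machines, mirroring what IPI already does
openstack server group create --policy soft-anti-affinity "${INFRA_ID}-worker"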

Description of problem:

The health_statuses_insights metric includes disabled rules in its "total" count; the other fields show the correct amounts.
In the code linked below, we can see that disabled rules are only skipped when assigning the TotalRisk value.

https://github.com/openshift/insights-operator/blob/master/pkg/insights/insightsreport/insightsreport.go#L268

How reproducible:

Always

Steps to Reproduce:

1. Upload a fake archive to trigger health checks (for example with rule CVE_2020_8555_kubernetes)
2. Disable one of the rules through https://console.redhat.com/api/insights-results-aggregator/v1/clusters/{cluster.id}/rules/{rule}/error_key/{error_key}/disable
3. Create support secret and set endpoint="https://httpstat.us/200"
4. restart insights operator
5. wait for alerts to trigger
6. Check health_statuses_insights metrics. 

rule:

ccx_rules_ocp.external.rules.ocp_version_end_of_life.report

error_key:

OCP4X_BEYOND_EOL

 

Actual results:

"moderate" health_statuses_insights shows 2 triggers
"total" shows 3. Therefore, it is accounting for the deactivated rule.

Expected results:

"moderate" health_statuses_insights shows 2 triggers
"total" health_statuses_insights shows 2 triggers (doesn't account for deactivated rule)

Additional info:

If there is any issue in triggering this events, you may contact me and I can help with the steps.

 

Description of problem:

The cluster-dns-operator does not reconcile the openshift-dns namespace, which has been exposed as an issue in 4.12 due to the requirement for the namespace to have pod-security labels.

If a cluster has been incrementally updated from a version less than or equal to 4.9, the openshift-dns namespace will most likely not contain the required pod-security labels since the namespace was statically created when the cluster was installed with old namespace configuration.

Version-Release number of selected component (if applicable):

4.12

How reproducible:

Always if cluster originally installed with v4.9 or less

Steps to Reproduce:

1. Install v4.9
2. Upgrade to v4.12 (incrementally if required for upgrade path)
3. openshift-dns namespace will be missing pod-security labels

Actual results:

"oc get ns openshift-dns -o yaml" will show missing pod-security labels: 

apiVersion: v1
kind: Namespace
metadata:
  annotations:
    openshift.io/node-selector: ""
    openshift.io/sa.scc.mcs: s0:c15,c0
    openshift.io/sa.scc.supplemental-groups: 1000210000/10000
    openshift.io/sa.scc.uid-range: 1000210000/10000
  creationTimestamp: "2020-05-21T19:36:15Z"
  labels:
    kubernetes.io/metadata.name: openshift-dns
    olm.operatorgroup.uid/3d42c0c1-01cd-4c55-bf88-864f041c7e7a: ""
    openshift.io/cluster-monitoring: "true"
    openshift.io/run-level: "0"
  name: openshift-dns
  resourceVersion: "3127555382"
  uid: 0fb4571e-952f-4bea-bc45-461beec54369
spec:
  finalizers:
  - kubernetes

Expected results:

pod-security labels should exist:
 
 labels:
    kubernetes.io/metadata.name: openshift-dns
    olm.operatorgroup.uid/3d42c0c1-01cd-4c55-bf88-864f041c7e7a: ""
    openshift.io/cluster-monitoring: "true"
    openshift.io/run-level: "0"
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/warn: privileged
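
Until the operator reconciles the namespace, the labels can be added manually as a workaround (an illustration, not the actual operator fix):

oc label namespace openshift-dns \
  pod-security.kubernetes.io/enforce=privileged \
  pod-security.kubernetes.io/audit=privileged \
  pod-security.kubernetes.io/warn=privileged --overwrite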

Additional info:

Issue found in CI during upgrade

https://coreos.slack.com/archives/C03G7REB4JV/p1663676443155839 

Description of problem:
The kebab menu for Helm repositories shows inconsistent behavior

Version-Release number of selected component (if applicable): 4.12

How reproducible: Always

Steps to Reproduce:
1. Create some helm chart repository
2. Go to the Helm page and switch to the repositories tab
3. Open kebab menu for different repos

Actual results:
Menus are overlapping

Expected results:
The menu should work properly; one menu should close before opening a new one

Additional info:
A video has been added for reference

ovnkube-trace: ofproto/trace fails for IPv6

[akaris@linux go-controller (fix-ovnkube-trace-ipv6)]$ oc exec -ti ovn-trace-two -n ovn-tests-two -- ovnkube-trace -src-namespace ovn-tests-two -src ovn-trace-two -dst-ip 2404:6800:4003:c06::69 -tcp
I1021 12:16:56.478752    3356 ovs.go:90] Maximum command line arguments set to: 191102
ovn-trace from pod to IP indicates success from ovn-trace-two to 2404:6800:4003:c06::69
F1021 12:16:57.075803    3356 ovnkube-trace.go:601] ovs-appctl ofproto/trace pod to IP error command terminated with exit code 2 stdOut: 
 stdErr: Bad openflow flow syntax: in_port=73af56a18042ab9, tcp, dl_src=0a:58:17:2b:b6:42, dl_dst=0a:58:69:bd:ba:d8, nw_src=fd01:0:0:5::13, nw_dst=2404:6800:4003:c06::69, nw_ttl=64, tcp_dst=80, tcp_src=12345: bad value for nw_src (fd01:0:0:5::13: invalid IP address)
ovs-appctl: ovs-vswitchd: server returned an error
command terminated with exit code 1
[akaris@linux go-controller (fix-ovnkube-trace-ipv6)]$ oc exec -ti ovn-trace-two -n ovn-tests-two -- ovnkube-trace -src-namespace ovn-tests-two -src ovn-trace-two -dst-namespace ovn-tests -dst ovn-trace -udp
I1021 12:17:26.695325    3386 ovs.go:90] Maximum command line arguments set to: 191102
ovn-trace source pod to destination pod indicates success from ovn-trace-two to ovn-trace
ovn-trace destination pod to source pod indicates success from ovn-trace to ovn-trace-two
F1021 12:17:27.708822    3386 ovnkube-trace.go:601] ovs-appctl ofproto/trace source pod to destination pod error command terminated with exit code 2 stdOut: 
 stdErr: Bad openflow flow syntax: in_port=73af56a18042ab9, udp, dl_src=0a:58:17:2b:b6:42, dl_dst=0a:58:69:bd:ba:d8, nw_src=fd01:0:0:5::13, nw_dst=fd01:0:0:5::14, nw_ttl=64, udp_dst=80, udp_src=12345: bad value for nw_src (fd01:0:0:5::13: invalid IP address)
ovs-appctl: ovs-vswitchd: server returned an error
command terminated with exit code 1
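
For comparison, the same match expressed with the IPv6-specific flow fields that ofproto/trace expects would look roughly like this (hand-written to illustrate the syntax; br-int is assumed as the OVN integration bridge, and the in_port/MAC/address values are copied from the failing command):

ovs-appctl ofproto/trace br-int "in_port=73af56a18042ab9, tcp6, dl_src=0a:58:17:2b:b6:42, dl_dst=0a:58:69:bd:ba:d8, ipv6_src=fd01:0:0:5::13, ipv6_dst=2404:6800:4003:c06::69, tcp_dst=80, tcp_src=12345"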

Description of problem:

Alert actions do not trigger the modal from which the storage cluster can be expanded.

Version-Release number of selected component (if applicable):

4.12

How reproducible:

1/1

Steps to Reproduce:

1. Fill up a storage cluster to 80%
2. Alert is seen in cluster dashboard.
3. Click the Add Capacity button

Actual results:

Modal is not launched.

Expected results:

Modal should be launched.

Additional info:

 

Description of problem:

prometheus-k8s-0 ends in CrashLoopBackOff with level=error err="opening storage failed: /prometheus/chunks_head/000002: invalid magic number 0" on SNO after hard reboot tests

Version-Release number of selected component (if applicable):

4.11.6

How reproducible:

Not always, after ~10 attempts

Steps to Reproduce:

1. Deploy SNO with Telco DU profile applied
2. Hard reboot node via out of band interface
3. oc -n openshift-monitoring get pods prometheus-k8s-0 

Actual results:

NAME               READY   STATUS             RESTARTS          AGE
prometheus-k8s-0   5/6     CrashLoopBackOff   125 (4m57s ago)   5h28m

Expected results:

Running

Additional info:

Attaching must-gather.

The pod recovers successfully after deleting/re-creating.


[kni@registry.kni-qe-0 ~]$ oc -n openshift-monitoring logs prometheus-k8s-0
ts=2022-09-26T14:54:01.919Z caller=main.go:552 level=info msg="Starting Prometheus Server" mode=server version="(version=2.36.2, branch=rhaos-4.11-rhel-8, revision=0d81ba04ce410df37ca2c0b1ec619e1bc02e19ef)"
ts=2022-09-26T14:54:01.919Z caller=main.go:557 level=info build_context="(go=go1.18.4, user=root@371541f17026, date=20220916-14:15:37)"
ts=2022-09-26T14:54:01.919Z caller=main.go:558 level=info host_details="(Linux 4.18.0-372.26.1.rt7.183.el8_6.x86_64 #1 SMP PREEMPT_RT Sat Aug 27 22:04:33 EDT 2022 x86_64 prometheus-k8s-0 (none))"
ts=2022-09-26T14:54:01.919Z caller=main.go:559 level=info fd_limits="(soft=1048576, hard=1048576)"
ts=2022-09-26T14:54:01.919Z caller=main.go:560 level=info vm_limits="(soft=unlimited, hard=unlimited)"
ts=2022-09-26T14:54:01.921Z caller=web.go:553 level=info component=web msg="Start listening for connections" address=127.0.0.1:9090
ts=2022-09-26T14:54:01.922Z caller=main.go:989 level=info msg="Starting TSDB ..."
ts=2022-09-26T14:54:01.924Z caller=tls_config.go:231 level=info component=web msg="TLS is disabled." http2=false
ts=2022-09-26T14:54:01.926Z caller=main.go:848 level=info msg="Stopping scrape discovery manager..."
ts=2022-09-26T14:54:01.926Z caller=main.go:862 level=info msg="Stopping notify discovery manager..."
ts=2022-09-26T14:54:01.926Z caller=manager.go:951 level=info component="rule manager" msg="Stopping rule manager..."
ts=2022-09-26T14:54:01.926Z caller=manager.go:961 level=info component="rule manager" msg="Rule manager stopped"
ts=2022-09-26T14:54:01.926Z caller=main.go:899 level=info msg="Stopping scrape manager..."
ts=2022-09-26T14:54:01.926Z caller=main.go:858 level=info msg="Notify discovery manager stopped"
ts=2022-09-26T14:54:01.926Z caller=main.go:891 level=info msg="Scrape manager stopped"
ts=2022-09-26T14:54:01.926Z caller=notifier.go:599 level=info component=notifier msg="Stopping notification manager..."
ts=2022-09-26T14:54:01.926Z caller=main.go:844 level=info msg="Scrape discovery manager stopped"
ts=2022-09-26T14:54:01.926Z caller=manager.go:937 level=info component="rule manager" msg="Starting rule manager..."
ts=2022-09-26T14:54:01.926Z caller=main.go:1120 level=info msg="Notifier manager stopped"
ts=2022-09-26T14:54:01.926Z caller=main.go:1129 level=error err="opening storage failed: /prometheus/chunks_head/000002: invalid magic number 0"

Description of problem: The product name for Azure Red Hat OpenShift was incorrect in Customer Case Management (CCM). As a result, the console included this incorrect product name so that the support case link would route correctly. https://issues.redhat.com/browse/CPCCM-9926 fixed the incorrect product name, so the support case link for Azure now needs to be updated to reflect the correct product name.

Description of problem:

The platform-operators-aggregated cluster operator wasn't created after enabling "TechPreviewNoUpgrade" featureGate, as follows,

MacBook-Pro:~ jianzhang$ oc patch featuregate cluster -p '{"spec": {"featureSet": "TechPreviewNoUpgrade"}}' --type=merge
featuregate.config.openshift.io/cluster patched

MacBook-Pro:~ jianzhang$ oc wait --for=condition=Available=True clusteroperators.config.openshift.io/platform-operators-aggregated
Error from server (NotFound): clusteroperators.config.openshift.io "platform-operators-aggregated" not found

Version-Release number of selected component (if applicable):

4.12.0-0.nightly-2022-09-20-095559

How reproducible:

always

Steps to Reproduce:

1. Install OCP 4.12 cluster.

2. Enable "TechPreviewNoUpgrade" feature gate.
MacBook-Pro:~ jianzhang$ oc patch featuregate cluster -p '{"spec": {"featureSet": "TechPreviewNoUpgrade"}}' --type=merge
featuregate.config.openshift.io/cluster patched 

3. Check platform-operators-aggregated cluster operator.
 

Actual results:

MacBook-Pro:~ jianzhang$ oc wait --for=condition=Available=True clusteroperators.config.openshift.io/platform-operators-aggregated
Error from server (NotFound): clusteroperators.config.openshift.io "platform-operators-aggregated" not found

Expected results:

The platform-operators-aggregated cluster operator can be created successfully.

Additional info:

The openshift-platform-operators pods running well.

MacBook-Pro:~ jianzhang$ oc get deploy -n openshift-platform-operators
NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
platform-operators-controller-manager   1/1     1            1           126m
platform-operators-rukpak-core          1/1     1            1           126m
platform-operators-rukpak-webhooks      2/2     2            2           126m
MacBook-Pro:~ jianzhang$ oc get co platform-operators-aggregated
Error from server (NotFound): clusteroperators.config.openshift.io "platform-operators-aggregated" not found

Description of problem:

The API Explorer page layout is incorrect; please check the attachment for more details

Version-Release number of selected component (if applicable):

4.12.0-0.nightly-2022-08-15-150248

How reproducible:

Always

Steps to Reproduce:
1. Login OCP, Go to Home -> API Explorer page

2. Check if there is an extra blank line between the dropdown filter and the list 

Actual results:

There is an extra blank line between the dropdown filter and the list 

Expected results:

Use right patternfly package, remove the extra blank line

Additional info:

104.0.5112.79 (Official Build) (64-bit)

This is a clone of issue OCPBUGS-3924. The following is the description of the original issue:

The APIs are scheduled for removal in Kube 1.26, which will ship with OpenShift 4.13. We want the 4.12 CVO to move to modern APIs in 4.12, so the APIRemovedInNext.*ReleaseInUse alerts are not firing on 4.12. We'll need the components setting manifests for these deprecated APIs to move to modern APIs. And then we should drop our ability to reconcile the deprecated APIs, to avoid having other components leak back in to using them.

Specifically cluster-monitoring-operator touches:

Nov 18 21:59:06.261: INFO: user/system:serviceaccount:openshift-monitoring:kube-state-metrics accessed horizontalpodautoscalers.v2beta2.autoscaling 10 times

Full output of the test at https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/pr-logs/pull/27560/pull-ci-openshift-origin-master-e2e-gcp-ovn/1593697975584952320/artifacts/e2e-gcp-ovn/openshift-e2e-test/build-log.txt:

[It] clients should not use APIs that are removed in upcoming releases [apigroup:config.openshift.io] [Suite:openshift/conformance/parallel]
  github.com/openshift/origin/test/extended/apiserver/api_requests.go:27
Nov 18 21:59:06.261: INFO: api flowschemas.v1beta1.flowcontrol.apiserver.k8s.io, removed in release 1.26, was accessed 254 times
Nov 18 21:59:06.261: INFO: api horizontalpodautoscalers.v2beta2.autoscaling, removed in release 1.26, was accessed 10 times
Nov 18 21:59:06.261: INFO: api prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io, removed in release 1.26, was accessed 22 times
Nov 18 21:59:06.261: INFO: user/system:serviceaccount:openshift-cluster-version:default accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 224 times
Nov 18 21:59:06.261: INFO: user/system:serviceaccount:openshift-cluster-version:default accessed prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io 22 times
Nov 18 21:59:06.261: INFO: user/system:serviceaccount:openshift-kube-storage-version-migrator:kube-storage-version-migrator-sa accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 16 times
Nov 18 21:59:06.261: INFO: user/system:admin accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 14 times
Nov 18 21:59:06.261: INFO: user/system:serviceaccount:openshift-monitoring:kube-state-metrics accessed horizontalpodautoscalers.v2beta2.autoscaling 10 times
Nov 18 21:59:06.261: INFO: api flowschemas.v1beta1.flowcontrol.apiserver.k8s.io, removed in release 1.26, was accessed 254 times
api horizontalpodautoscalers.v2beta2.autoscaling, removed in release 1.26, was accessed 10 times
api prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io, removed in release 1.26, was accessed 22 times
user/system:admin accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 14 times
user/system:serviceaccount:openshift-cluster-version:default accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 224 times
user/system:serviceaccount:openshift-cluster-version:default accessed prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io 22 times
user/system:serviceaccount:openshift-kube-storage-version-migrator:kube-storage-version-migrator-sa accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 16 times
user/system:serviceaccount:openshift-monitoring:kube-state-metrics accessed horizontalpodautoscalers.v2beta2.autoscaling 10 times
Nov 18 21:59:06.261: INFO: api flowschemas.v1beta1.flowcontrol.apiserver.k8s.io, removed in release 1.26, was accessed 254 times
api horizontalpodautoscalers.v2beta2.autoscaling, removed in release 1.26, was accessed 10 times
api prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io, removed in release 1.26, was accessed 22 times
user/system:admin accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 14 times
user/system:serviceaccount:openshift-cluster-version:default accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 224 times
user/system:serviceaccount:openshift-cluster-version:default accessed prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io 22 times
user/system:serviceaccount:openshift-kube-storage-version-migrator:kube-storage-version-migrator-sa accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 16 times
user/system:serviceaccount:openshift-monitoring:kube-state-metrics accessed horizontalpodautoscalers.v2beta2.autoscaling 10 times
[AfterEach] [sig-arch][Late]
  github.com/openshift/origin/test/extended/util/client.go:158
[AfterEach] [sig-arch][Late]
  github.com/openshift/origin/test/extended/util/client.go:159
flake: api flowschemas.v1beta1.flowcontrol.apiserver.k8s.io, removed in release 1.26, was accessed 254 times
api horizontalpodautoscalers.v2beta2.autoscaling, removed in release 1.26, was accessed 10 times
api prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io, removed in release 1.26, was accessed 22 times
user/system:admin accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 14 times
user/system:serviceaccount:openshift-cluster-version:default accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 224 times
user/system:serviceaccount:openshift-cluster-version:default accessed prioritylevelconfigurations.v1beta1.flowcontrol.apiserver.k8s.io 22 times
user/system:serviceaccount:openshift-kube-storage-version-migrator:kube-storage-version-migrator-sa accessed flowschemas.v1beta1.flowcontrol.apiserver.k8s.io 16 times
user/system:serviceaccount:openshift-monitoring:kube-state-metrics accessed horizontalpodautoscalers.v2beta2.autoscaling 10 times
Ginkgo exit error 4: exit with code 4

This is required to unblock https://github.com/openshift/origin/pull/27561
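
On a live cluster, the remaining consumers of a deprecated API can be inspected through the APIRequestCount resources (shown as a general illustration, not output from this CI job):

oc get apirequestcounts horizontalpodautoscalers.v2beta2.autoscaling -o yaml
oc get apirequestcounts flowschemas.v1beta1.flowcontrol.apiserver.k8s.io -o yaml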

Description of problem:

acquiring node lock for assigning ip address, node: %s, ip: %sci-ln-g470i52-1d09d-slz7m-worker-westus-6wt7k10.0.128.102

The %s format verbs in this log message are not being substituted; the node name (ci-ln-g470i52-1d09d-slz7m-worker-westus-6wt7k) and the IP (10.0.128.102) are concatenated onto the end of the raw format string instead.

Description of problem:

A cluster hit a panic in etcd operator in bootstrap:
I0829 14:46:02.736582 1 controller_manager.go:54] StaticPodStateController controller terminated
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1e940ab]

goroutine 2701 [running]:
github.com/openshift/cluster-etcd-operator/pkg/etcdcli.checkSingleMemberHealth({0x29374c0, 0xc00217d920}, 0xc0021fb110)
github.com/openshift/cluster-etcd-operator/pkg/etcdcli/health.go:135 +0x34b
github.com/openshift/cluster-etcd-operator/pkg/etcdcli.getMemberHealth.func1()
github.com/openshift/cluster-etcd-operator/pkg/etcdcli/health.go:58 +0x7f
created by github.com/openshift/cluster-etcd-operator/pkg/etcdcli.getMemberHealth
github.com/openshift/cluster-etcd-operator/pkg/etcdcli/health.go:54 +0x2ac
Version-Release number of selected component (if applicable):

 

How reproducible:

Pulled up a 4.12 cluster and hit panic during bootstrap

Steps to Reproduce:

1.
2.
3.

Actual results:

panic as above

Expected results:

no panic

Additional info:

 

This is a clone of issue OCPBUGS-4168. The following is the description of the original issue:

Description of problem:

Prometheus continuously restarts due to slow WAL replay

Version-Release number of selected component (if applicable):

openshift - 4.11.13

How reproducible:

 

Steps to Reproduce:

1.
2.
3.

Actual results:

 

Expected results:

 

Additional info:

 

Description of problem:

 

During OCP multinode spoke cluster creation, agent provisioning is stuck on "configuring" because the machineConfig service is crashing on the node.
After a restart, the service still fails with:

Can't read link "/var/lib/containers/storage/overlay/l/V2OP2CCVMKSOHK2XICC546DUCG" because it does not exist. A storage corruption might have occurred, attempting to recreate the missing symlinks. It might be best wipe the storage to avoid further errors due to storage corruption. 

Version-Release number of selected component (if applicable):

Podman 4.0.2 + 

How reproducible:

sometimes

Steps to Reproduce:

1. deploy multinode spoke (ipxe + boot order )
2.
3.

Actual results:

4 agents in done state and 1 is in "configuring"

 

Expected results:

all agents are in "done" state

Additional info:

issue mentioned in https://github.com/containers/podman/issues/14003

 

Fix: https://github.com/containers/storage/issues/1136

 

 

 

This bug is a backport clone of [Bugzilla Bug 2050230](https://bugzilla.redhat.com/show_bug.cgi?id=2050230). The following is the description of the original bug:

Description of problem:
In a large cluster, sdn daemonset can DoS the kube-apiserver with un-paginated LIST calls on high count resources.
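
One way to see the offending requests on a live cluster is to scan the kube-apiserver audit logs for list calls from the SDN service account that carry no limit parameter (a sketch; the exact grep patterns are assumptions about the audit log content):

oc adm node-logs --role=master --path=kube-apiserver/audit.log \
  | grep '"verb":"list"' | grep 'openshift-sdn' | grep -v 'limit='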

Version-Release number of selected component (if applicable):

How reproducible:
NA

Steps to Reproduce:
NA

Actual results:
The Kube API Server and OpenShift API Server in one of the clusters keep restarting without a clear error. The cluster is not accessible.

Expected results:
Kube API Server and Openshift API Server should be stable.

Additional info:

Description of problem:

When providing the openshift-install agent create command with installconfig + agentconfig manifests that contain the InstallConfig Proxy section, the Proxy configuration does not get applied.

Version-Release number of selected component (if applicable):

4.12

How reproducible:

100%

Steps to Reproduce:

1.Define InstallConfig with Proxy section
2.openshift-install agent create image
3.Boot ISO
4.Check /etc/assisted/manifests for InfraEnv to contain its Proxy section

Actual results:

Missing proxy

Expected results:

Proxy present and matching InstallConfig's

Additional info:

 

Description of problem:

A nil-pointer dereference occurred in the TestRouterCompressionOperation test in the e2e-gcp-operator CI job for the openshift/cluster-ingress-operator repository.

Version-Release number of selected component (if applicable):

4.12.

How reproducible:

Observed once. However, we run e2e-gcp-operator infrequently.

Steps to Reproduce:

1. Run the e2e-gcp-operator CI job on a cluster-ingress-operator PR.

Actual results:

 panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x38 pc=0x14cabef]
goroutine 8048 [running]:
testing.tRunner.func1.2({0x1624920, 0x265b870})
	/usr/lib/golang/src/testing/testing.go:1389 +0x24e
testing.tRunner.func1()
	/usr/lib/golang/src/testing/testing.go:1392 +0x39f
panic({0x1624920, 0x265b870})
	/usr/lib/golang/src/runtime/panic.go:838 +0x207
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0x40e43e5698?})
	/go/src/github.com/openshift/cluster-ingress-operator/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:56 +0xd8
panic({0x1624920, 0x265b870})
	/usr/lib/golang/src/runtime/panic.go:838 +0x207
github.com/openshift/cluster-ingress-operator/test/e2e.getHttpHeaders(0xc0002b9380?, 0xc0000e4540, 0x1)
	/go/src/github.com/openshift/cluster-ingress-operator/test/e2e/router_compression_test.go:257 +0x2ef
github.com/openshift/cluster-ingress-operator/test/e2e.testContentEncoding.func1()
	/go/src/github.com/openshift/cluster-ingress-operator/test/e2e/router_compression_test.go:220 +0x57
k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x18, 0xc00003f000})
	/go/src/github.com/openshift/cluster-ingress-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:222 +0x1b
k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x1b25d40?, 0xc000138000?}, 0xc000befe08?)
	/go/src/github.com/openshift/cluster-ingress-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:235 +0x57
k8s.io/apimachinery/pkg/util/wait.poll({0x1b25d40, 0xc000138000}, 0x48?, 0xc4fa25?, 0x30?)
	/go/src/github.com/openshift/cluster-ingress-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:582 +0x38
k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x1b25d40, 0xc000138000}, 0xc000b1da00?, 0xc000befe98?, 0x414207?)
	/go/src/github.com/openshift/cluster-ingress-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:528 +0x4a
k8s.io/apimachinery/pkg/util/wait.PollImmediate(0xc00088cea0?, 0x3b9aca00?, 0xc000138000?)
	/go/src/github.com/openshift/cluster-ingress-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:514 +0x50
github.com/openshift/cluster-ingress-operator/test/e2e.testContentEncoding(0xc00088cea0, 0xc000a8a270, 0xc0000e4540, 0x1, {0x17fe569, 0x4})
	/go/src/github.com/openshift/cluster-ingress-operator/test/e2e/router_compression_test.go:219 +0xfc
github.com/openshift/cluster-ingress-operator/test/e2e.TestRouterCompressionOperation(0xc00088cea0)
	/go/src/github.com/openshift/cluster-ingress-operator/test/e2e/router_compression_test.go:208 +0x454
testing.tRunner(0xc00088cea0, 0x191cdd0)
	/usr/lib/golang/src/testing/testing.go:1439 +0x102
created by testing.(*T).Run
	/usr/lib/golang/src/testing/testing.go:1486 +0x35f 

Expected results:

The test should pass.

Additional info:

The faulty logic was introduced in https://github.com/openshift/cluster-ingress-operator/pull/679/commits/211b9c15b1fd6217dee863790c20f34c26c138aa.
The test was subsequently marked as a parallel test in https://github.com/openshift/cluster-ingress-operator/pull/756/commits/a22322b25569059c61e1973f37f0a4b49e9407bc.
The job history shows that the e2e-gcp-operator job has only run once since June: https://prow.ci.openshift.org/job-history/gs/origin-ci-test/pr-logs/directory/pull-ci-openshift-cluster-ingress-operator-master-e2e-gcp-operator. I see failures in May, but none of those failures shows the panic.

 

 

Description of problem:

This is a bug record to pin down dependency versions in CMO release 4.12 after the release-4.12 branch was detached from the master branch.

Version-Release number of selected component (if applicable):

4.12

How reproducible:

N/A

Steps to Reproduce:

N/A

Actual results:

N/A

Expected results:

N/A

Additional info:

None.

Create a script that gathers debug information from a host running the agent ISO and exports it in a standard format so that we can ask customers to provide it for debugging when something has gone wrong (and also use it in CI).

For now, it is fine to require the user to ssh into the host to run the script. The script should already be in place inside the agent ISO.

The output should probably be a compressed tar file. That file could be saved locally, or potentially piped to stdout so that a user only has to run a command like: ssh node0 -c agent-gather >node0.tgz

Things we need to collect (a minimal sketch of such a script follows this list):

  • systemctl status and journal for each of the systemd services created by the agent installer (ideally this should be determined programmatically so we can't forget to add any)
  • network information: ifconfig; ip -j -p addr
  • Data supplied by the agent installer in /etc/assisted/*
  • /etc/containers/registries.conf
  • /etc/assisted-service/node0 (if it exists)
  • /usr/local/share/assisted-service/*.env
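
A minimal sketch of such a script, covering the items above (the systemd unit names are assumptions for illustration; the real list of agent units should be discovered programmatically):

#!/bin/bash
# agent-gather: collect agent installer debug data and emit a compressed tar on stdout
set -euo pipefail
workdir=$(mktemp -d)
trap 'rm -rf "${workdir}"' EXIT

# systemd status and journal for the agent installer services (hypothetical unit names)
for unit in agent.service assisted-service.service; do
    systemctl status "${unit}" > "${workdir}/${unit}.status" 2>&1 || true
    journalctl -u "${unit}" > "${workdir}/${unit}.journal" 2>&1 || true
done

# network information
ifconfig -a > "${workdir}/ifconfig.txt" 2>&1 || true
ip -j -p addr > "${workdir}/ip-addr.json" 2>&1 || true

# data supplied by the agent installer and related configuration
cp -r /etc/assisted "${workdir}/etc-assisted" 2>/dev/null || true
cp /etc/containers/registries.conf "${workdir}/" 2>/dev/null || true
cp -r /etc/assisted-service/node0 "${workdir}/node0" 2>/dev/null || true
cp /usr/local/share/assisted-service/*.env "${workdir}/" 2>/dev/null || true

# write the compressed tar to stdout so it can be redirected locally or over ssh
tar -czf - -C "${workdir}" .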

This is a clone of issue OCPBUGS-2083. The following is the description of the original issue:

Description of problem:
Currently we are running the VMware CSI Operator in OpenShift 4.10.33. After running vulnerability scans, the operator was discovered to be accepting a known weak cipher, 3DES. We are attempting to upgrade or modify the operator to customize the available ciphers. We looked at performing a manual upgrade via Quay.io but cannot seem to pull the image, and we are trying to steer away from performing a custom install from scratch. We are looking for any suggestions for mitigating the weak cipher in the kube-rbac-proxy under the VMware CSI Operator.
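
As a starting point, whether the endpoint still accepts a 3DES suite can be probed with openssl (an illustration only; the host, port, and OpenSSL cipher name are assumptions):

# A successful handshake means the 3DES suite is still accepted; an error means it is rejected
openssl s_client -connect <metrics-endpoint-host>:<port> -cipher 'DES-CBC3-SHA' </dev/null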

Version-Release number of selected component (if applicable):

 

How reproducible:

 

Steps to Reproduce:

1.
2.
3.

Actual results:

 

Expected results:

 

Additional info:

 

There is a bug where creating OLM subscription manifests early in the installation process results in those OLM operators not being installed.

This is because the OLM installation Jobs fail when they are tried early in the installation process, and OLM does not retry those jobs sufficiently and eventually gives up on them.

This should be solved starting with OCP 4.12, but until then, we should solve this using Assisted.

A way to solve this is to delay the installation of OLM operators to only occur after the cluster is up and healthy. 

This can be done by creating the subscriptions with "installPlanApproval" set to "Manual" instead of "Automatic". Then once the cluster is up and healthy, the assisted-controller should approve the InstallPlans that OLM will create for the operators. This will then trigger the installation which is more likely to succeed since the cluster is up and healthy at this point
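
A rough sketch of that flow with oc (names in angle brackets are placeholders; the jsonpath assumes the Subscription exposes the plan through status.installPlanRef):

# 1. Patch (or create) the Subscription so OLM waits for manual approval
oc patch subscription <operator-sub> -n <operator-ns> --type merge \
  -p '{"spec":{"installPlanApproval":"Manual"}}'

# 2. Once the cluster is up and healthy, approve the InstallPlan that OLM created
plan=$(oc get subscription <operator-sub> -n <operator-ns> -o jsonpath='{.status.installPlanRef.name}')
oc patch installplan "${plan}" -n <operator-ns> --type merge -p '{"spec":{"approved":true}}'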

Description of problem:

The Machine does not go into the Failed phase when an invalid vmSize is provided; it is stuck in Provisioning, and the error message is not accurate.

The case works well in 4.11 and previous versions; it is a regression in 4.12 and seems to have been introduced here: 
https://github.com/openshift/machine-api-provider-azure/pull/32/files#diff-af805e1e45f03df0b5b56ff4413e5ad52cd31904a94d37e8e916751953e4687dR565

Version-Release number of selected component (if applicable):

4.12.0-0.nightly-2022-09-28-204419

How reproducible:

always

Steps to Reproduce:

1. Create a machineset with invalid vmSize

vmSize: invalid

liuhuali@Lius-MacBook-Pro huali-test % oc create -f ms1.yaml               
machineset.machine.openshift.io/huliu-azure02pr-jmvl2-1 created

liuhuali@Lius-MacBook-Pro huali-test % oc get machine
NAME                                                 PHASE          TYPE              REGION           ZONE   AGE
huliu-azure02pr-jmvl2-1-6gbdw                        Provisioning                                             4m58s
huliu-azure02pr-jmvl2-master-0                       Running        Standard_D8s_v3   southcentralus   1      5h11m
huliu-azure02pr-jmvl2-master-1                       Running        Standard_D8s_v3   southcentralus   2      5h11m
huliu-azure02pr-jmvl2-master-2                       Running        Standard_D8s_v3   southcentralus   3      5h11m
huliu-azure02pr-jmvl2-worker-southcentralus1-9hgmk   Running        Standard_D4s_v3   southcentralus   1      4h56m
huliu-azure02pr-jmvl2-worker-southcentralus2-44mf6   Running        Standard_D4s_v3   southcentralus   2      4h56m
huliu-azure02pr-jmvl2-worker-southcentralus3-4m9b7   Running        Standard_D4s_v3   southcentralus   3      4h56m
liuhuali@Lius-MacBook-Pro huali-test % oc get machine huliu-azure02pr-jmvl2-1-6gbdw  -o yaml
apiVersion: machine.openshift.io/v1beta1
kind: Machine
metadata:
  creationTimestamp: "2022-09-29T06:36:03Z"
  finalizers:
  - machine.machine.openshift.io
  generateName: huliu-azure02pr-jmvl2-1-
  generation: 2
  labels:
    machine.openshift.io/cluster-api-cluster: huliu-azure02pr-jmvl2
    machine.openshift.io/cluster-api-machine-role: worker
    machine.openshift.io/cluster-api-machine-type: worker
    machine.openshift.io/cluster-api-machineset: huliu-azure02pr-jmvl2-1
  name: huliu-azure02pr-jmvl2-1-6gbdw
  namespace: openshift-machine-api
  ownerReferences:
  - apiVersion: machine.openshift.io/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: MachineSet
    name: huliu-azure02pr-jmvl2-1
    uid: f729cb01-274a-4c6e-8f69-808cff412fe3
  resourceVersion: "174604"
  uid: 2c4b9dd4-5666-47cd-8fc5-38bac0b9cad1
spec:
  lifecycleHooks: {}
  metadata: {}
  providerSpec:
    value:
      acceleratedNetworking: true
      apiVersion: machine.openshift.io/v1beta1
      credentialsSecret:
        name: azure-cloud-credentials
        namespace: openshift-machine-api
      diagnostics: {}
      image:
        offer: ""
        publisher: ""
        resourceID: /resourceGroups/huliu-azure02pr-jmvl2-rg/providers/Microsoft.Compute/images/huliu-azure02pr-jmvl2-gen2
        sku: ""
        version: ""
      kind: AzureMachineProviderSpec
      location: southcentralus
      managedIdentity: huliu-azure02pr-jmvl2-identity
      metadata:
        creationTimestamp: null
        name: huliu-azure02pr-jmvl2
      networkResourceGroup: huliu-azure02pr-jmvl2-rg
      osDisk:
        diskSettings: {}
        diskSizeGB: 128
        managedDisk:
          storageAccountType: Premium_LRS
        osType: Linux
      publicIP: false
      publicLoadBalancer: huliu-azure02pr-jmvl2
      resourceGroup: huliu-azure02pr-jmvl2-rg
      subnet: huliu-azure02pr-jmvl2-worker-subnet
      userDataSecret:
        name: worker-user-data
      vmSize: invalid
      vnet: huliu-azure02pr-jmvl2-vnet
      zone: "1"
status:
  conditions:
  - lastTransitionTime: "2022-09-29T06:36:03Z"
    status: "True"
    type: Drainable
  - lastTransitionTime: "2022-09-29T06:36:03Z"
    message: Instance has not been created
    reason: InstanceNotCreated
    severity: Warning
    status: "False"
    type: InstanceExists
  - lastTransitionTime: "2022-09-29T06:36:03Z"
    status: "True"
    type: Terminable
  lastUpdated: "2022-09-29T06:36:03Z"
  phase: Provisioning
  providerStatus:
    conditions:
    - lastTransitionTime: "2022-09-29T06:36:03Z"
      message: 'failed to create nic huliu-azure02pr-jmvl2-1-6gbdw-nic for machine
        huliu-azure02pr-jmvl2-1-6gbdw: failed to find sku invalid'
      reason: MachineCreationFailed
      status: "True"
      type: MachineCreated
    metadata: {}

machine-controller log:
...
W0929 11:38:25.817887       1 controller.go:382] huliu-azure02pr-jmvl2-invalid-lzzb2: failed to create machine: requeue in: 20s
I0929 11:38:25.817905       1 controller.go:412] Actuator returned requeue-after error: requeue in: 20s
I0929 11:38:25.817984       1 logr.go:252] events "msg"="Warning"  "message"="CreateError: failed to reconcile machine \"huliu-azure02pr-jmvl2-invalid-lzzb2\"s: failed to create nic huliu-azure02pr-jmvl2-invalid-lzzb2-nic for machine huliu-azure02pr-jmvl2-invalid-lzzb2: failed to find sku invalid" "object"={"kind":"Machine","namespace":"openshift-machine-api","name":"huliu-azure02pr-jmvl2-invalid-lzzb2","uid":"bab43f44-7da9-4b62-bbdc-01a180cc1de7","apiVersion":"machine.openshift.io/v1beta1","resourceVersion":"316506"} "reason"="FailedCreate"
I0929 11:38:25.817989       1 controller.go:187] huliu-azure02pr-jmvl2-invalid-lzzb2: reconciling Machine
I0929 11:38:25.818015       1 actuator.go:213] huliu-azure02pr-jmvl2-invalid-lzzb2: actuator checking if machine exists
W0929 11:38:25.916417       1 virtualmachines.go:99] vm huliu-azure02pr-jmvl2-invalid-lzzb2 not found: %!w(string=compute.VirtualMachinesClient#Get: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ResourceNotFound" Message="The Resource 'Microsoft.Compute/virtualMachines/huliu-azure02pr-jmvl2-invalid-lzzb2' under resource group 'huliu-azure02pr-jmvl2-rg' was not found. For more details please go to https://aka.ms/ARMResourceNotFoundFix")
I0929 11:38:25.916463       1 controller.go:380] huliu-azure02pr-jmvl2-invalid-lzzb2: reconciling machine triggers idempotent create
I0929 11:38:25.916476       1 actuator.go:85] Creating machine huliu-azure02pr-jmvl2-invalid-lzzb2
I0929 11:38:25.917540       1 machine_scope.go:176] huliu-azure02pr-jmvl2-invalid-lzzb2: status unchanged
I0929 11:38:25.917596       1 machine_scope.go:192] huliu-azure02pr-jmvl2-invalid-lzzb2: patching machine
E0929 11:38:25.941083       1 actuator.go:79] Machine error: failed to reconcile machine "huliu-azure02pr-jmvl2-invalid-lzzb2"s: failed to create nic huliu-azure02pr-jmvl2-invalid-lzzb2-nic for machine huliu-azure02pr-jmvl2-invalid-lzzb2: failed to find sku invalid

Actual results:

The Machine is stuck in Provisioning and the error message is not accurate

Expected results:

The Machine should go into the Failed phase and give an InvalidConfiguration error, as in previous versions. 

Additional info:

test result on previous version:

liuhuali@Lius-MacBook-Pro huali-test % oc get machine
NAME                               PHASE     TYPE              REGION   ZONE   AGE
jfan49-jn66b-master-0              Running   Standard_D8s_v3   westus          6h27m
jfan49-jn66b-master-1              Running   Standard_D8s_v3   westus          6h27m
jfan49-jn66b-master-2              Running   Standard_D8s_v3   westus          6h27m
jfan49-jn66b-worker-1-tdpdt        Failed                                      61s
jfan49-jn66b-worker-westus-2fz6b   Running   Standard_D4s_v3   westus          6h21m
jfan49-jn66b-worker-westus-6fkgb   Running   Standard_D4s_v3   westus          6h21m
jfan49-jn66b-worker-westus-k74gf   Running   Standard_D4s_v3   westus          6h21m
liuhuali@Lius-MacBook-Pro huali-test % oc get machine jfan49-jn66b-worker-1-tdpdt  -o yaml
apiVersion: machine.openshift.io/v1beta1
kind: Machine
metadata:
  annotations:
    machine.openshift.io/instance-state: Unknown
  creationTimestamp: "2022-09-29T08:50:13Z"
  finalizers:
  - machine.machine.openshift.io
  generateName: jfan49-jn66b-worker-1-
  generation: 2
  labels:
    machine.openshift.io/cluster-api-cluster: jfan49-jn66b
    machine.openshift.io/cluster-api-machine-role: worker
    machine.openshift.io/cluster-api-machine-type: worker
    machine.openshift.io/cluster-api-machineset: jfan49-jn66b-worker-1
  name: jfan49-jn66b-worker-1-tdpdt
  namespace: openshift-machine-api
  ownerReferences:
  - apiVersion: machine.openshift.io/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: MachineSet
    name: jfan49-jn66b-worker-1
    uid: 4319d2e2-3ee2-4cb2-a7b4-5a0d4e1ea3d7
  resourceVersion: "128119"
  uid: 7d9e4bbe-7c37-416e-a133-577476937b7a
spec:
  metadata: {}
  providerSpec:
    value:
      apiVersion: azureproviderconfig.openshift.io/v1beta1
      credentialsSecret:
        name: azure-cloud-credentials
        namespace: openshift-machine-api
      image:
        offer: ""
        publisher: ""
        resourceID: /resourceGroups/jfan49-jn66b-rg/providers/Microsoft.Compute/images/jfan49-jn66b
        sku: ""
        version: ""
      kind: AzureMachineProviderSpec
      location: westus
      managedIdentity: jfan49-jn66b-identity
      metadata:
        creationTimestamp: null
        name: jfan49-jn66b
      networkResourceGroup: jfan49-jn66b-rg
      osDisk:
        diskSizeGB: 128
        managedDisk:
          storageAccountType: Premium_LRS
        osType: Linux
      publicIP: false
      publicLoadBalancer: jfan49-jn66b
      resourceGroup: jfan49-jn66b-rg
      subnet: jfan49-jn66b-worker-subnet
      userDataSecret:
        name: worker-user-data
      vmSize: invalid
      vnet: jfan49-jn66b-vnet
      zone: ""
status:
  conditions:
  - lastTransitionTime: "2022-09-29T08:50:13Z"
    message: Instance has not been created
    reason: InstanceNotCreated
    severity: Warning
    status: "False"
    type: InstanceExists
  errorMessage: 'failed to reconcile machine "jfan49-jn66b-worker-1-tdpdt": failed
    to create vm jfan49-jn66b-worker-1-tdpdt: failure sending request for machine
    jfan49-jn66b-worker-1-tdpdt: cannot create vm: compute.VirtualMachinesClient#CreateOrUpdate:
    Failure sending request: StatusCode=400 -- Original Error: Code="InvalidParameter"
    Message="The value invalid provided for the VM size is not valid. The valid sizes
    in the current region are: Standard_B1ls,Standard_B1ms,Standard_B1s,Standard_B2ms,Standard_B2s,Standard_B4ms,Standard_B8ms,Standard_B12ms,Standard_B16ms,Standard_B20ms,Standard_E2_v4,Standard_E4_v4,Standard_E8_v4,Standard_E16_v4,Standard_E20_v4,Standard_E32_v4,Standard_E2d_v4,Standard_E4d_v4,Standard_E8d_v4,Standard_E16d_v4,Standard_E20d_v4,Standard_E32d_v4,Standard_E2s_v4,Standard_E4-2s_v4,Standard_E4s_v4,Standard_E8-2s_v4,Standard_E8-4s_v4,Standard_E8s_v4,Standard_E16-4s_v4,Standard_E16-8s_v4,Standard_E16s_v4,Standard_E20s_v4,Standard_E32-8s_v4,Standard_E32-16s_v4,Standard_E32s_v4,Standard_E2ds_v4,Standard_E4-2ds_v4,Standard_E4ds_v4,Standard_E8-2ds_v4,Standard_E8-4ds_v4,Standard_E8ds_v4,Standard_E16-4ds_v4,Standard_E16-8ds_v4,Standard_E16ds_v4,Standard_E20ds_v4,Standard_E32-8ds_v4,Standard_E32-16ds_v4,Standard_E32ds_v4,Standard_D2d_v4,Standard_D4d_v4,Standard_D8d_v4,Standard_D16d_v4,Standard_D32d_v4,Standard_D48d_v4,Standard_D64d_v4,Standard_D2_v4,Standard_D4_v4,Standard_D8_v4,Standard_D16_v4,Standard_D32_v4,Standard_D48_v4,Standard_D64_v4,Standard_D2ds_v4,Standard_D4ds_v4,Standard_D8ds_v4,Standard_D16ds_v4,Standard_D32ds_v4,Standard_D48ds_v4,Standard_D64ds_v4,Standard_D2s_v4,Standard_D4s_v4,Standard_D8s_v4,Standard_D16s_v4,Standard_D32s_v4,Standard_D48s_v4,Standard_D64s_v4,Standard_D1_v2,Standard_D2_v2,Standard_D3_v2,Standard_D4_v2,Standard_D5_v2,Standard_D11_v2,Standard_D12_v2,Standard_D13_v2,Standard_D14_v2,Standard_D15_v2,Standard_D2_v2_Promo,Standard_D3_v2_Promo,Standard_D4_v2_Promo,Standard_D5_v2_Promo,Standard_D11_v2_Promo,Standard_D12_v2_Promo,Standard_D13_v2_Promo,Standard_D14_v2_Promo,Standard_F1,Standard_F2,Standard_F4,Standard_F8,Standard_F16,Standard_DS1_v2,Standard_DS2_v2,Standard_DS3_v2,Standard_DS4_v2,Standard_DS5_v2,Standard_DS11-1_v2,Standard_DS11_v2,Standard_DS12-1_v2,Standard_DS12-2_v2,Standard_DS12_v2,Standard_DS13-2_v2,Standard_DS13-4_v2,Standard_DS13_v2,Standard_DS14-4_v2,Standard_DS14-8_v2,Standard_DS14_v2,Standard_DS15_v2,Standard_DS2_v2_Promo,Standard_DS3_v2_Promo,Standard_DS4_v2_Promo,Standard_DS5_v2_Promo,Standard_DS11_v2_Promo,Standard_DS12_v2_Promo,Standard_DS13_v2_Promo,Standard_DS14_v2_Promo,Standard_F1s,Standard_F2s,Standard_F4s,Standard_F8s,Standard_F16s,Standard_A1_v2,Standard_A2m_v2,Standard_A2_v2,Standard_A4m_v2,Standard_A4_v2,Standard_A8m_v2,Standard_A8_v2,Standard_D2_v3,Standard_D4_v3,Standard_D8_v3,Standard_D16_v3,Standard_D32_v3,Standard_D48_v3,Standard_D64_v3,Standard_D2s_v3,Standard_D4s_v3,Standard_D8s_v3,Standard_D16s_v3,Standard_D32s_v3,Standard_D48s_v3,Standard_D64s_v3,Standard_E2_v3,Standard_E4_v3,Standard_E8_v3,Standard_E16_v3,Standard_E20_v3,Standard_E32_v3,Standard_E2s_v3,Standard_E4-2s_v3,Standard_E4s_v3,Standard_E8-2s_v3,Standard_E8-4s_v3,Standard_E8s_v3,Standard_E16-4s_v3,Standard_E16-8s_v3,Standard_E16s_v3,Standard_E20s_v3,Standard_E32-8s_v3,Standard_E32-16s_v3,Standard_E32s_v3,Standard_F2s_v2,Standard_F4s_v2,Standard_F8s_v2,Standard_F16s_v2,Standard_F32s_v2,Standard_F48s_v2,Standard_F64s_v2,Standard_F72s_v2,Standard_E48_v4,Standard_E64_v4,Standard_E48d_v4,Standard_E64d_v4,Standard_E48s_v4,Standard_E64-16s_v4,Standard_E64-32s_v4,Standard_E64s_v4,Standard_E80is_v4,Standard_E48ds_v4,Standard_E64-16ds_v4,Standard_E64-32ds_v4,Standard_E64ds_v4,Standard_E80ids_v4,Standard_E48_v3,Standard_E64_v3,Standard_E48s_v3,Standard_E64-16s_v3,Standard_E64-32s_v3,Standard_E64s_v3,Standard_A0,Standard_A1,Standard_A2,Standard_A3,Standard_A5,Standard_A4,Standard_A6,Standard_A7,Basic_A0,Basic_A1,Basic_A2,Basic_A3,Basic_A4,Standard_NC4as_T4_v3,Standard_N
C8as_T4_v3,Standard_NC16as_T4_v3,Standard_NC64as_T4_v3,Standard_M64,Standard_M64m,Standard_M128,Standard_M128m,Standard_M8-2ms,Standard_M8-4ms,Standard_M8ms,Standard_M16-4ms,Standard_M16-8ms,Standard_M16ms,Standard_M32-8ms,Standard_M32-16ms,Standard_M32ls,Standard_M32ms,Standard_M32ts,Standard_M64-16ms,Standard_M64-32ms,Standard_M64ls,Standard_M64ms,Standard_M64s,Standard_M128-32ms,Standard_M128-64ms,Standard_M128ms,Standard_M128s,Standard_M32ms_v2,Standard_M64ms_v2,Standard_M64s_v2,Standard_M128ms_v2,Standard_M128s_v2,Standard_M192ims_v2,Standard_M192is_v2,Standard_M32dms_v2,Standard_M64dms_v2,Standard_M64ds_v2,Standard_M128dms_v2,Standard_M128ds_v2,Standard_M192idms_v2,Standard_M192ids_v2,Standard_E64i_v3,Standard_E64is_v3,Standard_D1,Standard_D2,Standard_D3,Standard_D4,Standard_D11,Standard_D12,Standard_D13,Standard_D14,Standard_DS1,Standard_DS2,Standard_DS3,Standard_DS4,Standard_DS11,Standard_DS12,Standard_DS13,Standard_DS14,Standard_DC8_v2,Standard_DC1s_v2,Standard_DC2s_v2,Standard_DC4s_v2,Standard_L8s_v2,Standard_L16s_v2,Standard_L32s_v2,Standard_L48s_v2,Standard_L64s_v2,Standard_L80s_v2,Standard_NV4as_v4,Standard_NV8as_v4,Standard_NV16as_v4,Standard_NV32as_v4,Standard_G1,Standard_G2,Standard_G3,Standard_G4,Standard_G5,Standard_GS1,Standard_GS2,Standard_GS3,Standard_GS4,Standard_GS4-4,Standard_GS4-8,Standard_GS5,Standard_GS5-8,Standard_GS5-16,Standard_L4s,Standard_L8s,Standard_L16s,Standard_L32s,Standard_DC2as_v5,Standard_DC4as_v5,Standard_DC8as_v5,Standard_DC16as_v5,Standard_DC32as_v5,Standard_DC48as_v5,Standard_DC64as_v5,Standard_DC96as_v5,Standard_DC2ads_v5,Standard_DC4ads_v5,Standard_DC8ads_v5,Standard_DC16ads_v5,Standard_DC32ads_v5,Standard_DC48ads_v5,Standard_DC64ads_v5,Standard_DC96ads_v5,Standard_EC2as_v5,Standard_EC4as_v5,Standard_EC8as_v5,Standard_EC16as_v5,Standard_EC20as_v5,Standard_EC32as_v5,Standard_EC48as_v5,Standard_EC64as_v5,Standard_EC96as_v5,Standard_EC96ias_v5,Standard_EC2ads_v5,Standard_EC4ads_v5,Standard_EC8ads_v5,Standard_EC16ads_v5,Standard_EC20ads_v5,Standard_EC32ads_v5,Standard_EC48ads_v5,Standard_EC64ads_v5,Standard_EC96ads_v5,Standard_EC96iads_v5,Standard_D2ds_v5,Standard_D4ds_v5,Standard_D8ds_v5,Standard_D16ds_v5,Standard_D32ds_v5,Standard_D48ds_v5,Standard_D64ds_v5,Standard_D96ds_v5,Standard_D2d_v5,Standard_D4d_v5,Standard_D8d_v5,Standard_D16d_v5,Standard_D32d_v5,Standard_D48d_v5,Standard_D64d_v5,Standard_D96d_v5,Standard_D2s_v5,Standard_D4s_v5,Standard_D8s_v5,Standard_D16s_v5,Standard_D32s_v5,Standard_D48s_v5,Standard_D64s_v5,Standard_D96s_v5,Standard_D2_v5,Standard_D4_v5,Standard_D8_v5,Standard_D16_v5,Standard_D32_v5,Standard_D48_v5,Standard_D64_v5,Standard_D96_v5,Standard_E2ds_v5,Standard_E4-2ds_v5,Standard_E4ds_v5,Standard_E8-2ds_v5,Standard_E8-4ds_v5,Standard_E8ds_v5,Standard_E16-4ds_v5,Standard_E16-8ds_v5,Standard_E16ds_v5,Standard_E20ds_v5,Standard_E32-8ds_v5,Standard_E32-16ds_v5,Standard_E32ds_v5,Standard_E48ds_v5,Standard_E64-16ds_v5,Standard_E64-32ds_v5,Standard_E64ds_v5,Standard_E96-24ds_v5,Standard_E96-48ds_v5,Standard_E96ds_v5,Standard_E104ids_v5,Standard_E2d_v5,Standard_E4d_v5,Standard_E8d_v5,Standard_E16d_v5,Standard_E20d_v5,Standard_E32d_v5,Standard_E48d_v5,Standard_E64d_v5,Standard_E96d_v5,Standard_E104id_v5,Standard_E2s_v5,Standard_E4-2s_v5,Standard_E4s_v5,Standard_E8-2s_v5,Standard_E8-4s_v5,Standard_E8s_v5,Standard_E16-4s_v5,Standard_E16-8s_v5,Standard_E16s_v5,Standard_E20s_v5,Standard_E32-8s_v5,Standard_E32-16s_v5,Standard_E32s_v5,Standard_E48s_v5,Standard_E64-16s_v5,Standard_E64-32s_v5,Standard_E64s_v5,Standard_E96-24s_v5,Standard_E96
-48s_v5,Standard_E96s_v5,Standard_E104is_v5,Standard_E2_v5,Standard_E4_v5,Standard_E8_v5,Standard_E16_v5,Standard_E20_v5,Standard_E32_v5,Standard_E48_v5,Standard_E64_v5,Standard_E96_v5,Standard_E104i_v5,Standard_E2bs_v5,Standard_E4bs_v5,Standard_E8bs_v5,Standard_E16bs_v5,Standard_E32bs_v5,Standard_E48bs_v5,Standard_E64bs_v5,Standard_E2bds_v5,Standard_E4bds_v5,Standard_E8bds_v5,Standard_E16bds_v5,Standard_E32bds_v5,Standard_E48bds_v5,Standard_E64bds_v5,Standard_D2a_v4,Standard_D4a_v4,Standard_D8a_v4,Standard_D16a_v4,Standard_D32a_v4,Standard_D48a_v4,Standard_D64a_v4,Standard_D96a_v4,Standard_D2as_v4,Standard_D4as_v4,Standard_D8as_v4,Standard_D16as_v4,Standard_D32as_v4,Standard_D48as_v4,Standard_D64as_v4,Standard_D96as_v4,Standard_E2a_v4,Standard_E4a_v4,Standard_E8a_v4,Standard_E16a_v4,Standard_E20a_v4,Standard_E32a_v4,Standard_E48a_v4,Standard_E64a_v4,Standard_E96a_v4,Standard_E2as_v4,Standard_E4-2as_v4,Standard_E4as_v4,Standard_E8-2as_v4,Standard_E8-4as_v4,Standard_E8as_v4,Standard_E16-4as_v4,Standard_E16-8as_v4,Standard_E16as_v4,Standard_E20as_v4,Standard_E32-8as_v4,Standard_E32-16as_v4,Standard_E32as_v4,Standard_E48as_v4,Standard_E64-16as_v4,Standard_E64-32as_v4,Standard_E64as_v4,Standard_E96-24as_v4,Standard_E96-48as_v4,Standard_E96as_v4,Standard_E96ias_v4,Standard_NC6s_v3,Standard_NC12s_v3,Standard_NC24rs_v3,Standard_NC24s_v3,Standard_NV6s_v2,Standard_NV12s_v2,Standard_NV24s_v2,Standard_NV12s_v3,Standard_NV24s_v3,Standard_NV48s_v3,Standard_H8,Standard_H8_Promo,Standard_H16,Standard_H16_Promo,Standard_H8m,Standard_H8m_Promo,Standard_H16m,Standard_H16m_Promo,Standard_H16r,Standard_H16r_Promo,Standard_H16mr,Standard_H16mr_Promo,Standard_M208ms_v2,Standard_M208s_v2,Standard_M416-208s_v2,Standard_M416s_v2,Standard_M416-208ms_v2,Standard_M416ms_v2,Standard_DC1s_v3,Standard_DC2s_v3,Standard_DC4s_v3,Standard_DC8s_v3,Standard_DC16s_v3,Standard_DC24s_v3,Standard_DC32s_v3,Standard_DC48s_v3,Standard_DC1ds_v3,Standard_DC2ds_v3,Standard_DC4ds_v3,Standard_DC8ds_v3,Standard_DC16ds_v3,Standard_DC24ds_v3,Standard_DC32ds_v3,Standard_DC48ds_v3,Standard_NC24ads_A100_v4,Standard_NC48ads_A100_v4,Standard_NC96ads_A100_v4,Standard_D2as_v5,Standard_D4as_v5,Standard_D8as_v5,Standard_D16as_v5,Standard_D32as_v5,Standard_D48as_v5,Standard_D64as_v5,Standard_D96as_v5,Standard_E2as_v5,Standard_E4-2as_v5,Standard_E4as_v5,Standard_E8-2as_v5,Standard_E8-4as_v5,Standard_E8as_v5,Standard_E16-4as_v5,Standard_E16-8as_v5,Standard_E16as_v5,Standard_E20as_v5,Standard_E32-8as_v5,Standard_E32-16as_v5,Standard_E32as_v5,Standard_E48as_v5,Standard_E64-16as_v5,Standard_E64-32as_v5,Standard_E64as_v5,Standard_E96-24as_v5,Standard_E96-48as_v5,Standard_E96as_v5,Standard_E112ias_v5,Standard_D2ads_v5,Standard_D4ads_v5,Standard_D8ads_v5,Standard_D16ads_v5,Standard_D32ads_v5,Standard_D48ads_v5,Standard_D64ads_v5,Standard_D96ads_v5,Standard_E2ads_v5,Standard_E4-2ads_v5,Standard_E4ads_v5,Standard_E8-2ads_v5,Standard_E8-4ads_v5,Standard_E8ads_v5,Standard_E16-4ads_v5,Standard_E16-8ads_v5,Standard_E16ads_v5,Standard_E20ads_v5,Standard_E32-8ads_v5,Standard_E32-16ads_v5,Standard_E32ads_v5,Standard_E48ads_v5,Standard_E64-16ads_v5,Standard_E64-32ads_v5,Standard_E64ads_v5,Standard_E96-24ads_v5,Standard_E96-48ads_v5,Standard_E96ads_v5,Standard_E112iads_v5,Standard_L8s_v3,Standard_L16s_v3,Standard_L32s_v3,Standard_L48s_v3,Standard_L64s_v3,Standard_L80s_v3.
    Find out more on the valid VM sizes in each region at https://aka.ms/azure-regionservices."
    Target="vmSize"'
  errorReason: InvalidConfiguration
  lastUpdated: "2022-09-29T08:50:19Z"
  phase: Failed
  providerStatus:
    conditions:
    - lastProbeTime: "2022-09-29T08:50:19Z"
      lastTransitionTime: "2022-09-29T08:50:19Z"
      message: 'failed to create vm jfan49-jn66b-worker-1-tdpdt: failure sending request
        for machine jfan49-jn66b-worker-1-tdpdt: cannot create vm: compute.VirtualMachinesClient#CreateOrUpdate:
        Failure sending request: StatusCode=400 -- Original Error: Code="InvalidParameter"
        Message="The value invalid provided for the VM size is not valid. The valid
        sizes in the current region are: Standard_B1ls,Standard_B1ms,Standard_B1s,Standard_B2ms,Standard_B2s,Standard_B4ms,Standard_B8ms,Standard_B12ms,Standard_B16ms,Standard_B20ms,Standard_E2_v4,Standard_E4_v4,Standard_E8_v4,Standard_E16_v4,Standard_E20_v4,Standard_E32_v4,Standard_E2d_v4,Standard_E4d_v4,Standard_E8d_v4,Standard_E16d_v4,Standard_E20d_v4,Standard_E32d_v4,Standard_E2s_v4,Standard_E4-2s_v4,Standard_E4s_v4,Standard_E8-2s_v4,Standard_E8-4s_v4,Standard_E8s_v4,Standard_E16-4s_v4,Standard_E16-8s_v4,Standard_E16s_v4,Standard_E20s_v4,Standard_E32-8s_v4,Standard_E32-16s_v4,Standard_E32s_v4,Standard_E2ds_v4,Standard_E4-2ds_v4,Standard_E4ds_v4,Standard_E8-2ds_v4,Standard_E8-4ds_v4,Standard_E8ds_v4,Standard_E16-4ds_v4,Standard_E16-8ds_v4,Standard_E16ds_v4,Standard_E20ds_v4,Standard_E32-8ds_v4,Standard_E32-16ds_v4,Standard_E32ds_v4,Standard_D2d_v4,Standard_D4d_v4,Standard_D8d_v4,Standard_D16d_v4,Standard_D32d_v4,Standard_D48d_v4,Standard_D64d_v4,Standard_D2_v4,Standard_D4_v4,Standard_D8_v4,Standard_D16_v4,Standard_D32_v4,Standard_D48_v4,Standard_D64_v4,Standard_D2ds_v4,Standard_D4ds_v4,Standard_D8ds_v4,Standard_D16ds_v4,Standard_D32ds_v4,Standard_D48ds_v4,Standard_D64ds_v4,Standard_D2s_v4,Standard_D4s_v4,Standard_D8s_v4,Standard_D16s_v4,Standard_D32s_v4,Standard_D48s_v4,Standard_D64s_v4,Standard_D1_v2,Standard_D2_v2,Standard_D3_v2,Standard_D4_v2,Standard_D5_v2,Standard_D11_v2,Standard_D12_v2,Standard_D13_v2,Standard_D14_v2,Standard_D15_v2,Standard_D2_v2_Promo,Standard_D3_v2_Promo,Standard_D4_v2_Promo,Standard_D5_v2_Promo,Standard_D11_v2_Promo,Standard_D12_v2_Promo,Standard_D13_v2_Promo,Standard_D14_v2_Promo,Standard_F1,Standard_F2,Standard_F4,Standard_F8,Standard_F16,Standard_DS1_v2,Standard_DS2_v2,Standard_DS3_v2,Standard_DS4_v2,Standard_DS5_v2,Standard_DS11-1_v2,Standard_DS11_v2,Standard_DS12-1_v2,Standard_DS12-2_v2,Standard_DS12_v2,Standard_DS13-2_v2,Standard_DS13-4_v2,Standard_DS13_v2,Standard_DS14-4_v2,Standard_DS14-8_v2,Standard_DS14_v2,Standard_DS15_v2,Standard_DS2_v2_Promo,Standard_DS3_v2_Promo,Standard_DS4_v2_Promo,Standard_DS5_v2_Promo,Standard_DS11_v2_Promo,Standard_DS12_v2_Promo,Standard_DS13_v2_Promo,Standard_DS14_v2_Promo,Standard_F1s,Standard_F2s,Standard_F4s,Standard_F8s,Standard_F16s,Standard_A1_v2,Standard_A2m_v2,Standard_A2_v2,Standard_A4m_v2,Standard_A4_v2,Standard_A8m_v2,Standard_A8_v2,Standard_D2_v3,Standard_D4_v3,Standard_D8_v3,Standard_D16_v3,Standard_D32_v3,Standard_D48_v3,Standard_D64_v3,Standard_D2s_v3,Standard_D4s_v3,Standard_D8s_v3,Standard_D16s_v3,Standard_D32s_v3,Standard_D48s_v3,Standard_D64s_v3,Standard_E2_v3,Standard_E4_v3,Standard_E8_v3,Standard_E16_v3,Standard_E20_v3,Standard_E32_v3,Standard_E2s_v3,Standard_E4-2s_v3,Standard_E4s_v3,Standard_E8-2s_v3,Standard_E8-4s_v3,Standard_E8s_v3,Standard_E16-4s_v3,Standard_E16-8s_v3,Standard_E16s_v3,Standard_E20s_v3,Standard_E32-8s_v3,Standard_E32-16s_v3,Standard_E32s_v3,Standard_F2s_v2,Standard_F4s_v2,Standard_F8s_v2,Standard_F16s_v2,Standard_F32s_v2,Standard_F48s_v2,Standard_F64s_v2,Standard_F72s_v2,Standard_E48_v4,Standard_E64_v4,Standard_E48d_v4,Standard_E64d_v4,Standard_E48s_v4,Standard_E64-16s_v4,Standard_E64-32s_v4,Standard_E64s_v4,Standard_E80is_v4,Standard_E48ds_v4,Standard_E64-16ds_v4,Standard_E64-32ds_v4,Standard_E64ds_v4,Standard_E80ids_v4,Standard_E48_v3,Standard_E64_v3,Standard_E48s_v3,Standard_E64-16s_v3,Standard_E64-32s_v3,Standard_E64s_v3,Standard_A0,Standard_A1,Standard_A2,Standard_A3,Standard_A5,Standard_A4,Standard_A6,Standard_A7,Basic_A0,Basic_A1,Basic_A2,Basic_A3,Basic_A4,Standard_NC4as_T4_v3,
Standard_NC8as_T4_v3,Standard_NC16as_T4_v3,Standard_NC64as_T4_v3,Standard_M64,Standard_M64m,Standard_M128,Standard_M128m,Standard_M8-2ms,Standard_M8-4ms,Standard_M8ms,Standard_M16-4ms,Standard_M16-8ms,Standard_M16ms,Standard_M32-8ms,Standard_M32-16ms,Standard_M32ls,Standard_M32ms,Standard_M32ts,Standard_M64-16ms,Standard_M64-32ms,Standard_M64ls,Standard_M64ms,Standard_M64s,Standard_M128-32ms,Standard_M128-64ms,Standard_M128ms,Standard_M128s,Standard_M32ms_v2,Standard_M64ms_v2,Standard_M64s_v2,Standard_M128ms_v2,Standard_M128s_v2,Standard_M192ims_v2,Standard_M192is_v2,Standard_M32dms_v2,Standard_M64dms_v2,Standard_M64ds_v2,Standard_M128dms_v2,Standard_M128ds_v2,Standard_M192idms_v2,Standard_M192ids_v2,Standard_E64i_v3,Standard_E64is_v3,Standard_D1,Standard_D2,Standard_D3,Standard_D4,Standard_D11,Standard_D12,Standard_D13,Standard_D14,Standard_DS1,Standard_DS2,Standard_DS3,Standard_DS4,Standard_DS11,Standard_DS12,Standard_DS13,Standard_DS14,Standard_DC8_v2,Standard_DC1s_v2,Standard_DC2s_v2,Standard_DC4s_v2,Standard_L8s_v2,Standard_L16s_v2,Standard_L32s_v2,Standard_L48s_v2,Standard_L64s_v2,Standard_L80s_v2,Standard_NV4as_v4,Standard_NV8as_v4,Standard_NV16as_v4,Standard_NV32as_v4,Standard_G1,Standard_G2,Standard_G3,Standard_G4,Standard_G5,Standard_GS1,Standard_GS2,Standard_GS3,Standard_GS4,Standard_GS4-4,Standard_GS4-8,Standard_GS5,Standard_GS5-8,Standard_GS5-16,Standard_L4s,Standard_L8s,Standard_L16s,Standard_L32s,Standard_DC2as_v5,Standard_DC4as_v5,Standard_DC8as_v5,Standard_DC16as_v5,Standard_DC32as_v5,Standard_DC48as_v5,Standard_DC64as_v5,Standard_DC96as_v5,Standard_DC2ads_v5,Standard_DC4ads_v5,Standard_DC8ads_v5,Standard_DC16ads_v5,Standard_DC32ads_v5,Standard_DC48ads_v5,Standard_DC64ads_v5,Standard_DC96ads_v5,Standard_EC2as_v5,Standard_EC4as_v5,Standard_EC8as_v5,Standard_EC16as_v5,Standard_EC20as_v5,Standard_EC32as_v5,Standard_EC48as_v5,Standard_EC64as_v5,Standard_EC96as_v5,Standard_EC96ias_v5,Standard_EC2ads_v5,Standard_EC4ads_v5,Standard_EC8ads_v5,Standard_EC16ads_v5,Standard_EC20ads_v5,Standard_EC32ads_v5,Standard_EC48ads_v5,Standard_EC64ads_v5,Standard_EC96ads_v5,Standard_EC96iads_v5,Standard_D2ds_v5,Standard_D4ds_v5,Standard_D8ds_v5,Standard_D16ds_v5,Standard_D32ds_v5,Standard_D48ds_v5,Standard_D64ds_v5,Standard_D96ds_v5,Standard_D2d_v5,Standard_D4d_v5,Standard_D8d_v5,Standard_D16d_v5,Standard_D32d_v5,Standard_D48d_v5,Standard_D64d_v5,Standard_D96d_v5,Standard_D2s_v5,Standard_D4s_v5,Standard_D8s_v5,Standard_D16s_v5,Standard_D32s_v5,Standard_D48s_v5,Standard_D64s_v5,Standard_D96s_v5,Standard_D2_v5,Standard_D4_v5,Standard_D8_v5,Standard_D16_v5,Standard_D32_v5,Standard_D48_v5,Standard_D64_v5,Standard_D96_v5,Standard_E2ds_v5,Standard_E4-2ds_v5,Standard_E4ds_v5,Standard_E8-2ds_v5,Standard_E8-4ds_v5,Standard_E8ds_v5,Standard_E16-4ds_v5,Standard_E16-8ds_v5,Standard_E16ds_v5,Standard_E20ds_v5,Standard_E32-8ds_v5,Standard_E32-16ds_v5,Standard_E32ds_v5,Standard_E48ds_v5,Standard_E64-16ds_v5,Standard_E64-32ds_v5,Standard_E64ds_v5,Standard_E96-24ds_v5,Standard_E96-48ds_v5,Standard_E96ds_v5,Standard_E104ids_v5,Standard_E2d_v5,Standard_E4d_v5,Standard_E8d_v5,Standard_E16d_v5,Standard_E20d_v5,Standard_E32d_v5,Standard_E48d_v5,Standard_E64d_v5,Standard_E96d_v5,Standard_E104id_v5,Standard_E2s_v5,Standard_E4-2s_v5,Standard_E4s_v5,Standard_E8-2s_v5,Standard_E8-4s_v5,Standard_E8s_v5,Standard_E16-4s_v5,Standard_E16-8s_v5,Standard_E16s_v5,Standard_E20s_v5,Standard_E32-8s_v5,Standard_E32-16s_v5,Standard_E32s_v5,Standard_E48s_v5,Standard_E64-16s_v5,Standard_E64-32s_v5,Standard_E64s_v5,Standard_E96-24s_v5,St
andard_E96-48s_v5,Standard_E96s_v5,Standard_E104is_v5,Standard_E2_v5,Standard_E4_v5,Standard_E8_v5,Standard_E16_v5,Standard_E20_v5,Standard_E32_v5,Standard_E48_v5,Standard_E64_v5,Standard_E96_v5,Standard_E104i_v5,Standard_E2bs_v5,Standard_E4bs_v5,Standard_E8bs_v5,Standard_E16bs_v5,Standard_E32bs_v5,Standard_E48bs_v5,Standard_E64bs_v5,Standard_E2bds_v5,Standard_E4bds_v5,Standard_E8bds_v5,Standard_E16bds_v5,Standard_E32bds_v5,Standard_E48bds_v5,Standard_E64bds_v5,Standard_D2a_v4,Standard_D4a_v4,Standard_D8a_v4,Standard_D16a_v4,Standard_D32a_v4,Standard_D48a_v4,Standard_D64a_v4,Standard_D96a_v4,Standard_D2as_v4,Standard_D4as_v4,Standard_D8as_v4,Standard_D16as_v4,Standard_D32as_v4,Standard_D48as_v4,Standard_D64as_v4,Standard_D96as_v4,Standard_E2a_v4,Standard_E4a_v4,Standard_E8a_v4,Standard_E16a_v4,Standard_E20a_v4,Standard_E32a_v4,Standard_E48a_v4,Standard_E64a_v4,Standard_E96a_v4,Standard_E2as_v4,Standard_E4-2as_v4,Standard_E4as_v4,Standard_E8-2as_v4,Standard_E8-4as_v4,Standard_E8as_v4,Standard_E16-4as_v4,Standard_E16-8as_v4,Standard_E16as_v4,Standard_E20as_v4,Standard_E32-8as_v4,Standard_E32-16as_v4,Standard_E32as_v4,Standard_E48as_v4,Standard_E64-16as_v4,Standard_E64-32as_v4,Standard_E64as_v4,Standard_E96-24as_v4,Standard_E96-48as_v4,Standard_E96as_v4,Standard_E96ias_v4,Standard_NC6s_v3,Standard_NC12s_v3,Standard_NC24rs_v3,Standard_NC24s_v3,Standard_NV6s_v2,Standard_NV12s_v2,Standard_NV24s_v2,Standard_NV12s_v3,Standard_NV24s_v3,Standard_NV48s_v3,Standard_H8,Standard_H8_Promo,Standard_H16,Standard_H16_Promo,Standard_H8m,Standard_H8m_Promo,Standard_H16m,Standard_H16m_Promo,Standard_H16r,Standard_H16r_Promo,Standard_H16mr,Standard_H16mr_Promo,Standard_M208ms_v2,Standard_M208s_v2,Standard_M416-208s_v2,Standard_M416s_v2,Standard_M416-208ms_v2,Standard_M416ms_v2,Standard_DC1s_v3,Standard_DC2s_v3,Standard_DC4s_v3,Standard_DC8s_v3,Standard_DC16s_v3,Standard_DC24s_v3,Standard_DC32s_v3,Standard_DC48s_v3,Standard_DC1ds_v3,Standard_DC2ds_v3,Standard_DC4ds_v3,Standard_DC8ds_v3,Standard_DC16ds_v3,Standard_DC24ds_v3,Standard_DC32ds_v3,Standard_DC48ds_v3,Standard_NC24ads_A100_v4,Standard_NC48ads_A100_v4,Standard_NC96ads_A100_v4,Standard_D2as_v5,Standard_D4as_v5,Standard_D8as_v5,Standard_D16as_v5,Standard_D32as_v5,Standard_D48as_v5,Standard_D64as_v5,Standard_D96as_v5,Standard_E2as_v5,Standard_E4-2as_v5,Standard_E4as_v5,Standard_E8-2as_v5,Standard_E8-4as_v5,Standard_E8as_v5,Standard_E16-4as_v5,Standard_E16-8as_v5,Standard_E16as_v5,Standard_E20as_v5,Standard_E32-8as_v5,Standard_E32-16as_v5,Standard_E32as_v5,Standard_E48as_v5,Standard_E64-16as_v5,Standard_E64-32as_v5,Standard_E64as_v5,Standard_E96-24as_v5,Standard_E96-48as_v5,Standard_E96as_v5,Standard_E112ias_v5,Standard_D2ads_v5,Standard_D4ads_v5,Standard_D8ads_v5,Standard_D16ads_v5,Standard_D32ads_v5,Standard_D48ads_v5,Standard_D64ads_v5,Standard_D96ads_v5,Standard_E2ads_v5,Standard_E4-2ads_v5,Standard_E4ads_v5,Standard_E8-2ads_v5,Standard_E8-4ads_v5,Standard_E8ads_v5,Standard_E16-4ads_v5,Standard_E16-8ads_v5,Standard_E16ads_v5,Standard_E20ads_v5,Standard_E32-8ads_v5,Standard_E32-16ads_v5,Standard_E32ads_v5,Standard_E48ads_v5,Standard_E64-16ads_v5,Standard_E64-32ads_v5,Standard_E64ads_v5,Standard_E96-24ads_v5,Standard_E96-48ads_v5,Standard_E96ads_v5,Standard_E112iads_v5,Standard_L8s_v3,Standard_L16s_v3,Standard_L32s_v3,Standard_L48s_v3,Standard_L64s_v3,Standard_L80s_v3.
        Find out more on the valid VM sizes in each region at https://aka.ms/azure-regionservices."
        Target="vmSize"'
      reason: MachineCreationFailed
      status: "True"
      type: MachineCreated
    metadata: {}

Description of problem:

The systemReserved ephemeral-storage setting in a KubeletConfig is not applied as expected.

Version-Release number of selected component (if applicable):

4.10.z; it may exist on other OCP versions as well.

How reproducible:

always

Steps to Reproduce:

1. Create a KubeletConfig targeting a machine config pool (the master pool in this example):

apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: system-reserved-config
spec:
  machineConfigPoolSelector:
    matchLabels:
      pools.operator.machineconfiguration.openshift.io/master: ""
  kubeletConfig:
    systemReserved:
      cpu: 500m
      memory: 500Mi
      ephemeral-storage: 10Gi


2. Check the node's allocatable storage with the command: oc describe node | grep -C 5 ephemeral-storage

Actual results:

The node's allocatable ephemeral-storage is not capacity.ephemeral-storage - systemReserved.ephemeral-storage - eviction thresholds (10% of capacity.ephemeral-storage by default); the configured reservation is not reflected.

Expected results:

The node's allocatable ephemeral-storage should be capacity.ephemeral-storage - systemReserved.ephemeral-storage - eviction thresholds (10% of capacity.ephemeral-storage by default).
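For illustration only (the capacity figure is hypothetical, not taken from this report): with the KubeletConfig above on a node that has 100Gi of ephemeral-storage capacity and the default 10% eviction threshold, the expected value would be roughly

allocatable ephemeral-storage = 100Gi (capacity) - 10Gi (systemReserved) - 10Gi (10% eviction threshold) = 80Gi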

Additional info:

The root cause might be that the kubelet process argument '--system-reserved=cpu=500m,memory=500Mi' overrides the setting in /etc/kubernetes/kubelet.conf. One example:

root        6824       1 27 Sep30 ?        1-09:00:24 kubelet --config=/etc/kubernetes/kubelet.conf --bootstrap-kubeconfig=/etc/kubernetes/kubeconfig --kubeconfig=/var/lib/kubelet/kubeconfig --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --runtime-cgroups=/system.slice/crio.service --node-labels=node-role.kubernetes.io/master,node.openshift.io/os_id=rhcos --node-ip=192.168.58.47 --minimum-container-ttl-duration=6m0s --cloud-provider= --volume-plugin-dir=/etc/kubernetes/kubelet-plugins/volume/exec --hostname-override= --register-with-taints=node-role.kubernetes.io/master=:NoSchedule --pod-infra-container-image=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a7b6408460148cb73c59677dbc2c261076bc07226c43b0c9192cc70aef5ba62 --system-reserved=cpu=500m,memory=500Mi --v=2 --housekeeping-interval=30s
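A hedged illustration of the mismatch (the flag below is not taken from the cluster; the key=value syntax follows the standard kubelet --system-reserved format): the flag above reserves only cpu and memory, whereas honoring the KubeletConfig would also require the ephemeral-storage key to be carried through, e.g.

--system-reserved=cpu=500m,memory=500Mi,ephemeral-storage=10Gi

or the flag would have to be omitted so that the value in /etc/kubernetes/kubelet.conf takes effect.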


 

Description of problem:

The Pod and PodDisruptionBudget list pages just report "Not found" when no resources are found.

Version-Release number of selected component (if applicable):

4.12.0-0.nightly-2022-10-15-094115

How reproducible:

Always

Steps to Reproduce:

1. A normal user has a new, empty project.
2. The normal user visits the PDB list page via Workloads -> PodDisruptionBudgets.
3.

Actual results:

2. It just reports 'Not found'.

Expected results:

2. For other workloads, the page reports "No <resource> found", for example:
No HorizontalPodAutoscalers found
No StatefulSets found
No Deployments found

So for the Pods and PodDisruptionBudgets list pages, when no resources can be found, it would be better to also report "No Pods found" and "No PodDisruptionBudgets found".

Additional info:

 

This is a clone of issue OCPBUGS-4367. The following is the description of the original issue:

Description of problem:

The calls to log.Debugf() in image/baseiso.go and image/oc.go produce no output when the "agent create image" command is run.

Version-Release number of selected component (if applicable):

4.12.0

How reproducible:

Every time

Steps to Reproduce:

1. Run ../bin/openshift-install agent create image --dir ./cluster-manifests/ --log-level debug

Actual results:

No debug log messages from log.Debugf() calls in pkg/asset/agent/image/oc.go

Expected results:

Debug log messages are output

Additional info:

Note from Zane: We should probably also use the real global logger instead of creating a new one (https://github.com/openshift/installer/blob/2698cbb0ec7e96433a958ab6b864786c0c503c0b/pkg/asset/agent/image/baseiso.go#L109) with the default config, which ignores the --log-level flag and prints weird "[0001]" stuff in the output for some reason. (The NMStateConfig manifests logging suffers from the same problem.)


Description of problem:

When using the agent-based installer to zero-touch provision the cluster, if the network bandwidth is low and assisted-service-pod.service or assisted-service.service fails to pull its container image within the timeout, the create-cluster-and-infraenv, apply-host-config, and start-cluster-installation services are deactivated because a dependency failed. The process is blocked and requires manually enabling and starting those services.

Version-Release number of selected component (if applicable):

openshift-install 4.11.0
built from commit 863cd1ea823559116e26de327705ed72ccdede8f
release image quay.io/openshift-release-dev/ocp-release@sha256:300bce8246cf880e792e106607925de0a404484637627edf5f517375517d54a4
release architecture amd64

How reproducible:

Install OpenShift with the agent-based installer using a local mirror.

Steps to Reproduce:

1. Stop the local registry or limit the network bandwidth so that assisted-service-pod.service or assisted-service.service fails to start within the 90s timeout.
2. Start the local registry or manually pull the image on node0.
3.

Actual results:

When using the agent-based installer to zero-touch provision the cluster, if the network bandwidth is low and assisted-service-pod.service or assisted-service.service fails to pull its container image within the timeout, the create-cluster-and-infraenv, apply-host-config, and start-cluster-installation services are deactivated because a dependency failed. The process is blocked and requires manually enabling and starting those services.

Expected results:

Provisioning should start after assisted-service has started.

Additional info:

Given:
assisted-service-pod.service requires assisted-service-db.service assisted-service.service
assisted-service.service BindsTo=assisted-service-pod.service
create-cluster-and-infraenv.service Requires=assisted-service.service and PartOf=assisted-service-pod.service
apply-host-config.service Requires=create-cluster-and-infraenv.service
start-cluster-installation.service Requires=apply-host-config.service
Requires= "Configures requirement dependencies on other units. If this unit gets activated, the units listed here will be activated as well. If one of the other units gets deactivated or its activation fails, this unit will be deactivated."When assisted-service-pod.service starts, assisted-service-db.service and assisted-service.service also be started,
Once assisted-service-pod.service fails to be started, assisted-service.service also fail to be started due to "BindsTo=assisted-service-pod.service".
Then dependency failed for create-cluster-and-infraenv.service due to Requires=assisted-service.service which activation fails, Therefore it will be deactived.
Then dependency failed for apply-host-config.service, due to Requires=create-cluster-and-infraenv.service which activation fails, Therefore it will be deactived.
Then dependency failed for start-cluster-installation.service, due to Requires=apply-host-config.service which activation fails, Therefore it will be deactived.Then assisted-service-pod.service restarts, assisted-service.service and assisted-service-db.service restarts as well, since they are binded to assisted-service-pod.service.
However, create-cluster-and-infraenv.service apply-host-config.service and start-cluster-installation.service was be deactivated, they requires to be activate mannully.Eventually, assisted-service started and hang with waitting for create infraenv. The provisioning is blocked.

Description of problem:

Even "create install-config" fails in the epic's scenario.

Version-Release number of selected component (if applicable):

$ ./openshift-install version
./openshift-install 4.12.0-0.nightly-2022-09-28-204419
built from commit 9eb0224926982cdd6cae53b872326292133e532d
release image registry.ci.openshift.org/ocp/release@sha256:2c8e617830f84ac1ee1bfcc3581010dec4ae5d9cad7a54271574e8d91ef5ecbc
release architecture amd64

How reproducible:

Always

Steps to Reproduce:

1. create vpc network, subnets, and a firewall-rule to allow ssh access to the bastion host
2. create the bastion host, with setting a valid service-account and scopes of "https://www.googleapis.com/auth/cloud-platform"
3. scp pull secret to the bastion host
4. ssh to the bastion host (subsequent steps would be on the bastion host, except told explicitly)
5. get "oc", e.g. curl https://mirror2.openshift.com/pub/openshift-v4/clients/ocp/4.9.9/openshift-client-linux-4.9.9.tar.gz -o openshift-client-linux-4.9.9.tar.gz; tar zxvf openshift-client-linux-4.9.9.tar.gz
6. obtain the installation program
7. try "create install-config" of platform "gcp" 

Actual results:

[cloud-user@jiwei-0930-02-rhel8-mirror ~]$ ./openshift-install create install-config --dir work                                         
? SSH Public Key /home/cloud-user/.ssh/id_rsa.pub                                                                                       
? Platform gcp                                                                                                                          
INFO Credentials loaded from gcloud CLI defaults                                                                                        
? Project ID OpenShift QE Shared VPC (openshift-qe-shared-vpc)                                                                          
? Region us-west1                                                                                                                       
? Base Domain qe-shared-vpc.qe.gcp.devcluster.openshift.com                                                                             
? Cluster Name jiwei-0930-03                                                                                                            
? Pull Secret [? for help] ******
FATAL failed to fetch Install Config: failed to generate asset "Install Config": credentialsMode: Forbidden: environmental authentication is only supported with Manual credentials mode 
[cloud-user@jiwei-0930-02-rhel8-mirror ~]$ 
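For context only, the credentialsMode setting that the error message refers to lives at the top level of install-config.yaml. A minimal sketch with Manual mode (the values mirror the survey answers above, the pull secret is a placeholder; this is illustrative, not a confirmed fix for the reported failure):

apiVersion: v1
baseDomain: qe-shared-vpc.qe.gcp.devcluster.openshift.com
metadata:
  name: jiwei-0930-03
credentialsMode: Manual
platform:
  gcp:
    projectID: openshift-qe-shared-vpc
    region: us-west1
pullSecret: '<pull secret>'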

Expected results:

"create install-config" should succeed.

Additional info:


Description of problem:

After scaling up more worker nodes, the new nodes are not added to the Load Balancer instances (backend pool); if the router pods move to the new worker nodes, co/ingress becomes degraded.

Version-Release number of selected component (if applicable):

4.12.0-0.nightly-2022-10-23-204408

How reproducible:

100%

Steps to Reproduce:

1. ensure the fresh install cluster works well.
2. scale up worker nodes.
$ oc -n openshift-machine-api get machineset
NAME                                  DESIRED   CURRENT   READY   AVAILABLE   AGE
hongli-1024-hnkrm-worker-us-east-2a   1         1         1       1           5h21m
hongli-1024-hnkrm-worker-us-east-2b   1         1         1       1           5h21m
hongli-1024-hnkrm-worker-us-east-2c   1         1         1       1           5h21m

$ oc -n openshift-machine-api scale machineset hongli-1024-hnkrm-worker-us-east-2a --replicas=2
machineset.machine.openshift.io/hongli-1024-hnkrm-worker-us-east-2a scaled

$ oc -n openshift-machine-api scale machineset hongli-1024-hnkrm-worker-us-east-2b --replicas=2
machineset.machine.openshift.io/hongli-1024-hnkrm-worker-us-east-2b scaled

(about 5 minutes later)
$ oc -n openshift-machine-api get machineset
NAME                                  DESIRED   CURRENT   READY   AVAILABLE   AGE
hongli-1024-hnkrm-worker-us-east-2a   2         2         2       2           5h29m
hongli-1024-hnkrm-worker-us-east-2b   2         2         2       2           5h29m
hongli-1024-hnkrm-worker-us-east-2c   1         1         1       1           5h29m


3. Delete the router pods so that new ones run on the new workers.

$ oc get node
NAME                                         STATUS   ROLES                  AGE     VERSION
ip-10-0-128-45.us-east-2.compute.internal    Ready    worker                 71m     v1.25.2+4bd0702
ip-10-0-131-192.us-east-2.compute.internal   Ready    control-plane,master   6h35m   v1.25.2+4bd0702
ip-10-0-139-51.us-east-2.compute.internal    Ready    worker                 6h29m   v1.25.2+4bd0702
ip-10-0-162-228.us-east-2.compute.internal   Ready    worker                 71m     v1.25.2+4bd0702
ip-10-0-172-216.us-east-2.compute.internal   Ready    control-plane,master   6h35m   v1.25.2+4bd0702
ip-10-0-190-82.us-east-2.compute.internal    Ready    worker                 6h25m   v1.25.2+4bd0702
ip-10-0-196-26.us-east-2.compute.internal    Ready    control-plane,master   6h35m   v1.25.2+4bd0702
ip-10-0-199-158.us-east-2.compute.internal   Ready    worker                 6h28m   v1.25.2+4bd0702

$ oc -n openshift-ingress get pod -owide
NAME                              READY   STATUS    RESTARTS   AGE   IP           NODE                                         NOMINATED NODE   READINESS GATES
router-default-86444dcd84-cm96l   1/1     Running   0          65m   10.130.2.7   ip-10-0-128-45.us-east-2.compute.internal    <none>           <none>
router-default-86444dcd84-vpnjz   1/1     Running   0          65m   10.131.2.7   ip-10-0-162-228.us-east-2.compute.internal   <none>           <none>


Actual results:

$ oc get co ingress console authentication
NAME             VERSION                              AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
ingress          4.12.0-0.nightly-2022-10-23-204408   True        False         True       66m     The "default" ingress controller reports Degraded=True: DegradedConditions: One or more other status conditions indicate a degraded state: CanaryChecksSucceeding=False (CanaryChecksRepetitiveFailures: Canary route checks for the default ingress controller are failing)
console          4.12.0-0.nightly-2022-10-23-204408   False       False         False      66m     RouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.hongli-1024.qe.devcluster.openshift.com): Get "https://console-openshift-console.apps.hongli-1024.qe.devcluster.openshift.com": EOF
authentication   4.12.0-0.nightly-2022-10-23-204408   False       False         True       66m     OAuthServerRouteEndpointAccessibleControllerAvailable: Get "https://oauth-openshift.apps.hongli-1024.qe.devcluster.openshift.com/healthz": EOF


Checked the Load Balancer on the AWS console and found that the newly created nodes are not added to the load balancer; see the attached snapshot.

Expected results:

The LB should add the newly created instances automatically, and ingress should work with the new workers.

Additional info:

1. This is also reproducible with a common user-created LoadBalancer Service (see the sketch below).
2. If the LB Service is created after adding the new nodes, it works well; we can see that all nodes are added to the LB on the AWS console.
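A minimal sketch of the user-created LoadBalancer Service mentioned in item 1 (name, namespace, selector, and ports are hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: test-lb
  namespace: default
spec:
  type: LoadBalancer
  selector:
    app: test
  ports:
  - port: 80
    targetPort: 8080

Per item 1, the AWS load balancer created for such a Service is likewise missing the newly scaled-up nodes from its backend instances.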

 

This is a clone of issue OCPBUGS-3508. The following is the description of the original issue:

Exposed via the fact that the periodic-ci-openshift-release-master-nightly-4.12-e2e-metal-ipi-sdn-serial-ipv4 job is at 0% for at least the past two weeks over approximately 65 runs.

Testgrid shows that this job started failing in a very consistent way on Oct 25th at about 8am UTC: https://testgrid.k8s.io/redhat-openshift-ocp-release-4.12-informing#periodic-ci-openshift-release-master-nightly-4.12-e2e-metal-ipi-sdn-serial-ipv4

6 disruption tests fail, all with alarming consistency virtually always claiming exactly 8s of disruption, max allowed 1s.

And then the "openshift-tests.[sig-arch] events should not repeat pathologically" test fails with an odd signature:

{  6 events happened too frequently

event happened 35 times, something is wrong: node/master-2 - reason/NodeHasNoDiskPressure roles/control-plane,master Node master-2 status is now: NodeHasNoDiskPressure
event happened 35 times, something is wrong: node/master-2 - reason/NodeHasSufficientMemory roles/control-plane,master Node master-2 status is now: NodeHasSufficientMemory
event happened 35 times, something is wrong: node/master-2 - reason/NodeHasSufficientPID roles/control-plane,master Node master-2 status is now: NodeHasSufficientPID
event happened 35 times, something is wrong: node/master-1 - reason/NodeHasNoDiskPressure roles/control-plane,master Node master-1 status is now: NodeHasNoDiskPressure
event happened 35 times, something is wrong: node/master-1 - reason/NodeHasSufficientMemory roles/control-plane,master Node master-1 status is now: NodeHasSufficientMemory
event happened 35 times, something is wrong: node/master-1 - reason/NodeHasSufficientPID roles/control-plane,master Node master-1 status is now: NodeHasSufficientPID}

The two types of tests started failing together exactly, and the disruption measurements are bizarrely consistent: every single time we see precisely 8s for kube-api, cache-kube-api, openshift-api, cache-openshift-api, oauth-api, cache-oauth-api. It's always these 6, and it seems to be always exactly 8 seconds. I cannot state enough how strange this is. It almost implies that something is happening on a very consistent schedule.

Occasionally these are accompanied by 1-2s of disruption for those backends with new connections, but sometimes not as well.

It looks like all of the disruption consistently happens within two very long tests:

4s within: [sig-network] services when running openshift ipv4 cluster ensures external ip policy is configured correctly on the cluster [Serial] [Suite:openshift/conformance/serial]

4s within: [sig-network] services when running openshift ipv4 cluster on bare metal [apigroup:config.openshift.io] ensures external auto assign cidr is configured correctly on the cluster [Serial] [Suite:openshift/conformance/serial]

Both tests appear to have run prior to oct 25, so I don't think it's a matter of new tests breaking something or getting unskipped. Both tests also always pass, but appear to be impacting the cluster?

The masters going NotReady also appears to fall within the above two tests, though it does not seem to directly match when we measure disruption; bear in mind there's a 40s delay before a node goes NotReady.

Focusing on https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-nightly-4.12-e2e-metal-ipi-sdn-serial-ipv4/1590640492373086208 where the above are from:

Two of the three master nodes appear to be going NodeNotReady a couple of times throughout the run, as visible in the spyglass chart under the node state row on the left. master-0 does not appear here, but it does exist. (I suspect it holds the leader role and is thus the node reporting the others going NotReady.)

From the master-0 kubelet log in must-gather we can see one of these examples where it reports that master-2 has not checked in:

2022-11-10T10:38:35.874090961Z I1110 10:38:35.873975       1 node_lifecycle_controller.go:1137] node master-2 hasn't been updated for 40.00700561s. Last Ready is: &NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-10 1
0:36:10 +0000 UTC,LastTransitionTime:2022-11-10 10:29:11 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,}
2022-11-10T10:38:35.874090961Z I1110 10:38:35.874056       1 node_lifecycle_controller.go:1137] node master-2 hasn't been updated for 40.007097549s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:False,LastHeartb
eatTime:2022-11-10 10:36:10 +0000 UTC,LastTransitionTime:2022-11-10 10:29:11 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,}
2022-11-10T10:38:35.874090961Z I1110 10:38:35.874067       1 node_lifecycle_controller.go:1137] node master-2 hasn't been updated for 40.007110285s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatT
ime:2022-11-10 10:36:10 +0000 UTC,LastTransitionTime:2022-11-10 10:29:11 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,}
2022-11-10T10:38:35.874090961Z I1110 10:38:35.874076       1 node_lifecycle_controller.go:1137] node master-2 hasn't been updated for 40.007119541s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTim
e:2022-11-10 10:36:10 +0000 UTC,LastTransitionTime:2022-11-10 10:29:11 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,}
2022-11-10T10:38:35.881749410Z I1110 10:38:35.881705       1 controller_utils.go:181] "Recording status change event message for node" status="NodeNotReady" node="master-2"
2022-11-10T10:38:35.881749410Z I1110 10:38:35.881733       1 controller_utils.go:120] "Update ready status of pods on node" node="master-2"
2022-11-10T10:38:35.881820988Z I1110 10:38:35.881799       1 controller_utils.go:138] "Updating ready status of pod to false" pod="metal3-b7b69fdbb-rfbdj"
2022-11-10T10:38:35.881893234Z I1110 10:38:35.881858       1 topologycache.go:179] Ignoring node master-2 because it has an excluded label
2022-11-10T10:38:35.881893234Z W1110 10:38:35.881886       1 topologycache.go:199] Can't get CPU or zone information for worker-0 node
2022-11-10T10:38:35.881903023Z I1110 10:38:35.881892       1 topologycache.go:215] Insufficient node info for topology hints (0 zones, %!s(int64=0) CPU, false)
2022-11-10T10:38:35.881932172Z I1110 10:38:35.881917       1 controller.go:271] Node changes detected, triggering a full node sync on all loadbalancer services
2022-11-10T10:38:35.882290428Z I1110 10:38:35.882270       1 event.go:294] "Event occurred" object="master-2" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node master-2 status is now: NodeNotReady"

Now from master-2's kubelet log around that time, 40 seconds earlier puts us at 10:37:55, so we'd be looking for something odd around there.

A few potential lines:

Nov 10 10:37:55.232537 master-2 kubenswrapper[1930]: I1110 10:37:55.232495    1930 patch_prober.go:29] interesting pod/kube-controller-manager-guard-master-2 container/guard namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.111.22:10257/healthz\": dial tcp 192.168.111.22:10257: connect: connection refused" start-of-body=

Nov 10 10:37:55.232537 master-2 kubenswrapper[1930]: I1110 10:37:55.232549    1930 prober.go:114] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-guard-master-2" podUID=8be2c6c1-f8f6-4bf0-b26d-53ce487354bd containerName="guard" probeResult=failure output="Get \"https://192.168.111.22:10257/healthz\": dial tcp 192.168.111.22:10257: connect: connection refused"

Nov 10 10:38:12.238273 master-2 kubenswrapper[1930]: E1110 10:38:12.238229    1930 controller.go:187] failed to update lease, error: Put "https://api-int.ostest.test.metalkube.org:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-2?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)

Nov 10 10:38:13.034109 master-2 kubenswrapper[1930]: E1110 10:38:13.034077    1930 kubelet_node_status.go:487] "Error updating node status, will retry" err="error getting node \"master-2\": Get \"https://api-int.ostest.test.metalkube.org:6443/api/v1/nodes/master-2?resourceVersion=0&timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"

At 10:38:40 all kinds of master-2 watches time out with messages like:

Nov 10 10:38:40.244399 master-2 kubenswrapper[1930]: W1110 10:38:40.244272    1930 reflector.go:347] object-"openshift-oauth-apiserver"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding

And then suddenly we're back online:

Nov 10 10:38:40.252149 master-2 kubenswrapper[1930]: I1110 10:38:40.252131    1930 kubelet_node_status.go:590] "Recording event message for node" node="master-2" event="NodeHasSufficientMemory"
Nov 10 10:38:40.252149 master-2 kubenswrapper[1930]: I1110 10:38:40.252156    1930 kubelet_node_status.go:590] "Recording event message for node" node="master-2" event="NodeHasNoDiskPressure"
Nov 10 10:38:40.252268 master-2 kubenswrapper[1930]: I1110 10:38:40.252165    1930 kubelet_node_status.go:590] "Recording event message for node" node="master-2" event="NodeHasSufficientPID"
Nov 10 10:38:40.252268 master-2 kubenswrapper[1930]: I1110 10:38:40.252177    1930 kubelet_node_status.go:590] "Recording event message for node" node="master-2" event="NodeReady"
Nov 10 10:38:47.904430 master-2 kubenswrapper[1930]: I1110 10:38:47.904373    1930 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-2"
Nov 10 10:38:47.904842 master-2 kubenswrapper[1930]: I1110 10:38:47.904662    1930 kubelet.go:2229] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-2"
Nov 10 10:38:47.907900 master-2 kubenswrapper[1930]: I1110 10:38:47.907872    1930 kubelet.go:2229] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-2"
Nov 10 10:38:48.431448 master-2 kubenswrapper[1930]: I1110 10:38:48.431414    1930 kubelet.go:2229] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-2"
Nov 10 10:38:54.764069 master-2 kubenswrapper[1930]: I1110 10:38:54.764029    1930 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-2" status=Running
Nov 10 10:38:54.764069 master-2 kubenswrapper[1930]: I1110 10:38:54.764059    1930 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/keepalived-master-2" status=Running
Nov 10 10:38:54.764069 master-2 kubenswrapper[1930]: I1110 10:38:54.764077    1930 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/coredns-master-2" status=Running
Nov 10 10:38:54.764069 master-2 kubenswrapper[1930]: I1110 10:38:54.764086    1930 kubelet_getters.go:182] "Pod status updated" pod="openshift-kni-infra/haproxy-master-2" status=Running
Nov 10 10:38:54.764492 master-2 kubenswrapper[1930]: I1110 10:38:54.764106    1930 kubelet_getters.go:182] "Pod status updated" pod="openshift-etcd/etcd-master-2" status=Running
Nov 10 10:38:54.764492 master-2 kubenswrapper[1930]: I1110 10:38:54.764113    1930 kubelet_getters.go:182] "Pod status updated" pod="openshift-kube-controller-manager/kube-controller-manager-master-2" status=Running

Also curious:

Nov 10 10:37:50.318237 master-2 ovs-vswitchd[1324]: ovs|00251|connmgr|INFO|br0<->unix#468: 2 flow_mods in the last 0 s (2 deletes)
Nov 10 10:37:50.342965 master-2 ovs-vswitchd[1324]: ovs|00252|connmgr|INFO|br0<->unix#471: 4 flow_mods in the last 0 s (4 deletes)
Nov 10 10:37:50.364271 master-2 ovs-vswitchd[1324]: ovs|00253|bridge|INFO|bridge br0: deleted interface vethcb8d36e6 on port 41

Nov 10 10:37:53.579562 master-2 NetworkManager[1336]: <info>  [1668076673.5795] dhcp4 (enp2s0): state changed new lease, address=192.168.111.22

These look like they could be related to the tests these problems appear to coincide with?

Grafana has been removed in 4.11 and we can safely remove any logic in CMO that deals with Grafana (except dashboards since they are used by OCP console).

Another point to clarify is to communicate to ProdSec and ART that Grafana isn't part of OCP anymore.

This is a clone of issue OCPBUGS-2895. The following is the description of the original issue:

Description of problem:

Current validation will not accept Resource Groups or DiskEncryptionSets which have upper-case letters.

Version-Release number of selected component (if applicable):

4.11

How reproducible:

Attempt to create a cluster/machineset using a DiskEncryptionSet with an RG or Name with upper-case letters

Steps to Reproduce:

1. Create a cluster with a DiskEncryptionSet that has upper-case letters in the DES name or in the Resource Group name (see the fragment below).
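A minimal install-config fragment for this step (a sketch; the resourceGroup value is taken from the error message below, while the name and subscriptionId are placeholders):

platform:
  azure:
    defaultMachinePlatform:
      osDisk:
        diskEncryptionSet:
          resourceGroup: v4-e2e-V62447568-eastus
          name: <DES name, may contain upper-case letters>
          subscriptionId: <subscription id>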

Actual results:

See error message:

encountered error: [controlPlane.platform.azure.defaultMachinePlatform.osDisk.diskEncryptionSet.resourceGroup: Invalid value: \"v4-e2e-V62447568-eastus\": invalid resource group format, compute[0].platform.azure.defaultMachinePlatform.osDisk.diskEncryptionSet.resourceGroup: Invalid value: \"v4-e2e-V62447568-eastus\": invalid resource group format]

Expected results:

The cluster/machineset is created using the existing, valid DiskEncryptionSet.

Additional info:

I have submitted a PR for this already, but it needs to be reviewed and backported to 4.11: https://github.com/openshift/installer/pull/6513

Description of problem:

For example, "openshift-install explain installconfig.platform.gcp.publicDNSZone" tells "PublicDNSZone contains the zone ID and project where the Public DNS zone will be created", but in fact it's for specifying an existing zone where the Public DNS zone records will be put in.

Version-Release number of selected component (if applicable):

4.12.0-0.nightly-2022-10-10-015203

How reproducible:

Always

Steps to Reproduce:

1. openshift-install explain installconfig.platform.gcp.publicDNSZone
2. openshift-install explain installconfig.platform.gcp.privateDNSZone
3.

Actual results:

For example, it says "PublicDNSZone contains the zone ID and project where the Public DNS zone will be created."

Expected results:

It should be like "PublicDNSZone contains the zone ID and project where the Public DNS zone records will be created."

Additional info:

$ openshift-install version
openshift-install 4.12.0-0.nightly-2022-10-10-015203
built from commit 02102a96b3f7c78337b32dcafe2e28be6fb67a0f
release image registry.ci.openshift.org/ocp/release@sha256:00806cf7faaa86981e73b478a72c1b7a838cd08b215f3a9ab9b278ae94d9a794
release architecture amd64
$ 
$ openshift-install explain installconfig.platform.gcp.publicDNSZone
KIND:     InstallConfig
VERSION:  v1

RESOURCE: <object>
  PublicDNSZone Technology Preview. PublicDNSZone contains the zone ID and project where the Public DNS zone will be created.

FIELDS:
    id <string>
      ID Technology Preview. ID or name of the zone.
    project <string>   
      ProjectID Technology Preview When the ProjectID is provided, the zone will be created in this project. When the ProjectID is empty, the DNS zone with this ID will be created and managed in the Service Project (GCP.ProjectID).
$ 
$ openshift-install explain installconfig.platform.gcp.privateDNSZone
KIND:     InstallConfig
VERSION:  v1

RESOURCE: <object>
  PrivateDNSZone Technology Preview. PrivateDNSZone contains the zone ID and project where the Private DNS zone will be created.

FIELDS:
    id <string>
      ID Technology Preview. ID or name of the zone.
    project <string>
      ProjectID Technology Preview When the ProjectID is provided, the zone will be created in this project. When the ProjectID is empty, the DNS zone with this ID will be created and managed in the Service Project (GCP.ProjectID).
$ 


Description of problem:

An intra-namespace allow NetworkPolicy doesn't work after applying an ingress & egress deny-all NetworkPolicy.

Version-Release number of selected component (if applicable):

  OpenShift 4.10.12

How reproducible:

Always

Steps to Reproduce:
1. Define a deny-all NetworkPolicy for egress and ingress in a namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress

2. Define the following network policy to allow the traffic between the pods in the namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-intra-namespace-001
spec:
  egress:
  - to:
    - podSelector: {}
  ingress:
  - from:
    - podSelector: {}
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress 

3. Test the connectivity between two pods from the namespace.
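One way to exercise step 3 is a throwaway client pod in the same namespace (names, namespace, image, and the peer pod's IP/port are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: connectivity-check            # placeholder
  namespace: test-ns                  # placeholder: the namespace under test
spec:
  restartPolicy: Never
  containers:
  - name: curl
    image: curlimages/curl            # placeholder image providing curl
    command: ["curl", "-m", "5", "http://<peer-pod-ip>:8080"]   # replace with a server pod's IP and port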

Actual results:

   Traffic between the pods is not allowed

Expected results:

  The connectivity should be allowed between pods from the same namespace.

Additional info:

  After performing a test and analyzing SDN flows for the namespace: 

sh-4.4# ovs-ofctl dump-flows -O OpenFlow13 br0 | grep --color 0x964376 
 cookie=0x0, duration=99375.342s, table=20, n_packets=14, n_bytes=588, priority=100,arp,in_port=21,arp_spa=10.128.2.20,arp_sha=00:00:0a:80:02:14/00:00:ff:ff:ff:ff actions=load:0x964376->NXM_NX_REG0[],goto_table:30
 cookie=0x0, duration=1681.845s, table=20, n_packets=11, n_bytes=462, priority=100,arp,in_port=24,arp_spa=10.128.2.23,arp_sha=00:00:0a:80:02:17/00:00:ff:ff:ff:ff actions=load:0x964376->NXM_NX_REG0[],goto_table:30
 cookie=0x0, duration=99375.342s, table=20, n_packets=135610, n_bytes=759239814, priority=100,ip,in_port=21,nw_src=10.128.2.20 actions=load:0x964376->NXM_NX_REG0[],goto_table:27
 cookie=0x0, duration=1681.845s, table=20, n_packets=2006, n_bytes=12684967, priority=100,ip,in_port=24,nw_src=10.128.2.23 actions=load:0x964376->NXM_NX_REG0[],goto_table:27
 cookie=0x0, duration=99375.342s, table=25, n_packets=0, n_bytes=0, priority=100,ip,nw_src=10.128.2.20 actions=load:0x964376->NXM_NX_REG0[],goto_table:27
 cookie=0x0, duration=1681.845s, table=25, n_packets=0, n_bytes=0, priority=100,ip,nw_src=10.128.2.23 actions=load:0x964376->NXM_NX_REG0[],goto_table:27
 cookie=0x0, duration=975.129s, table=27, n_packets=0, n_bytes=0, priority=150,reg0=0x964376,reg1=0x964376 actions=goto_table:30
 cookie=0x0, duration=99375.342s, table=70, n_packets=145260, n_bytes=11722173, priority=100,ip,nw_dst=10.128.2.20 actions=load:0x964376->NXM_NX_REG1[],load:0x15->NXM_NX_REG2[],goto_table:80
 cookie=0x0, duration=1681.845s, table=70, n_packets=2336, n_bytes=191079, priority=100,ip,nw_dst=10.128.2.23 actions=load:0x964376->NXM_NX_REG1[],load:0x18->NXM_NX_REG2[],goto_table:80
 cookie=0x0, duration=975.129s, table=80, n_packets=0, n_bytes=0, priority=150,reg0=0x964376,reg1=0x964376 actions=output:NXM_NX_REG2[]

We see that the following rule doesn't match because `reg1` has not been set at that point in the pipeline:

 cookie=0x0, duration=975.129s, table=27, n_packets=0, n_bytes=0, priority=150,reg0=0x964376,reg1=0x964376 actions=goto_table:30 

 

As an OpenShift user, I want ClusterCSIDriver.Spec.LogLevel to affect the vSphere CSI driver logs, so I can capture the logs with all details and send them to Red Hat for investigation.

As an OpenShift developer, I want ClusterCSIDriver.Spec.LogLevel to affect the vSphere CSI driver logs, so I can debug the driver with full logging.

Exit criteria:

  • When ClusterCSIDriver.Spec.LogLevel is set to Debug or higher, vSphere CSI driver logs include DEBUG messages like:

2022-08-05T11:54:10.808Z DEBUG commonco/utils.go:102 Container Orchestrator init params: {InternalFeatureStatesConfigInfo:{...} ServiceMode:controller}
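For reference, a minimal sketch of raising the log level on the vSphere ClusterCSIDriver object (treat as illustrative; logLevel controls the operand's verbosity):

apiVersion: operator.openshift.io/v1
kind: ClusterCSIDriver
metadata:
  name: csi.vsphere.vmware.com    # the vSphere CSI driver object
spec:
  managementState: Managed
  logLevel: Debug                 # Trace/TraceAll increase verbosity further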

Because the agent ISO is ephemeral, it is probably safe to allow a user to log in to it with a password. If the network configuration is broken, a user may have no way to debug it other than logging in through the console, which is currently not possible.

The best password to set would be the kubeadmin password used for the OpenShift GUI, since we'll have generated that already.

We must take care to test that this does not result in the installed nodes on disk allowing login with a password.

OVS 2.17+ introduced an optimization using "weak references" to substantially speed up database snapshots. In some cases weak references may leak memory; the aforementioned commit fixes that and has been pulled into ovs2.17-62 and later.

Description of problem:

The name of "Role" on Compute -> Nodes page should update to "Roles" to match the name in the CLI

As with other resources, the column title should match the name used in the CLI

Version-Release number of selected component (if applicable):

4.12.0-0.nightly-2022-08-15-150248

How reproducible:

Always

Steps to Reproduce:
1. Log in to OCP with the CLI and use the command below to get node information

     $ oc get nodes
2. Go to the Compute -> Nodes page and check the column named "Role"
3.

Actual results:

The CLI returns the information shown below, and the column title is "ROLES"

NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-0-145-18.us-east-2.compute.internal    Ready    worker   9h    v1.24.0+4f0dd4d
ip-10-0-145-203.us-east-2.compute.internal   Ready    master   9h    v1.24.0+4f0dd4d
ip-10-0-163-205.us-east-2.compute.internal   Ready    master   9h    v1.24.0+4f0dd4d
ip-10-0-169-118.us-east-2.compute.internal   Ready    worker   9h    v1.24.0+4f0dd4d
ip-10-0-198-234.us-east-2.compute.internal   Ready    master   9h    v1.24.0+4f0dd4d
ip-10-0-212-34.us-east-2.compute.internal    Ready    worker   9h    v1.24.0+4f0dd4d

But in the UI, the column is titled "Role", which is incorrect. (Attached)

Expected results:

The title of "Role" should update to "Roles"

Additional info:

The DVO metrics gatherer in the Insights operator relies on the namespace being named "deployment-validation-operator", but this is fragile because DVO can be installed in other namespaces (e.g., it is installed in the "openshift-operators" namespace when installed through OperatorHub).
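To illustrate, a sketch of an OperatorHub installation of DVO; the channel and catalog source are assumptions, but the operator lands in openshift-operators rather than in a namespace named after it:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: deployment-validation-operator
  namespace: openshift-operators           # not "deployment-validation-operator"
spec:
  name: deployment-validation-operator
  channel: alpha                           # assumed channel
  source: community-operators              # assumed catalog source
  sourceNamespace: openshift-marketplace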

Description of problem:

Using a daemonset causes failures during draining: because the daemonset pods are not terminated, leases are not gracefully released and instead age out as pods are killed after potentially losing network access.

As pointed out in https://github.com/openshift/origin/pull/27394#discussion_r964002900 


This should be fixed when moving to a deployment and is also tracked here https://issues.redhat.com/browse/BUILD-495 

Version-Release number of selected component (if applicable):

 

How reproducible:

100%

Steps to Reproduce:

1. 
2. 
3.

Actual results:

 

Expected results:

 

Additional info:

 

Our CMO e2e tests create several containers besides the standard CMO deployment. These pods currently do not set any security context capabilities, which produces warnings like the following:

W0705 08:35:38.590283 15206 warnings.go:70] would violate PodSecurity "restricted:v1.24": allowPrivilegeEscalation != false (container "alertmanager-webhook-e2e-testutil" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "alertmanager-webhook-e2e-testutil" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "alertmanager-webhook-e2e-testutil" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "alertmanager-webhook-e2e-testutil" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")

We should be proactive and set security context constraints. From this run, this appears to impact the following pods/containers:

  • alertmanager-webhook-e2e-testutil
  • prometheus-example-app

Both are used more than once.
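A minimal sketch of a securityContext that satisfies the "restricted" PodSecurity profile for such a test pod (the pod/container name comes from the warning above; the image is a placeholder):

apiVersion: v1
kind: Pod
metadata:
  name: alertmanager-webhook-e2e-testutil
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: alertmanager-webhook-e2e-testutil
    image: example.test/webhook:latest      # placeholder image
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL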

Relevant docs: https://docs.openshift.com/container-platform/4.10/authentication/managing-security-context-constraints.html#security-context-constraints-about_configuring-internal-oauth