Back to index

4.11.45

Jump to: Complete Features | Incomplete Features | Complete Epics | Incomplete Epics | Other Complete | Other Incomplete |

Changes from 4.10.67

Note: this page shows the Feature-Based Change Log for a release

Complete Features

These features were completed when this image was assembled

Problem:

Certain Insights Advisor features currently differ between the RHEL and OCP Advisor UIs.

Goal:

Address top-priority UI misalignments between the RHEL and OCP Advisor. Also address UI features dropped from Insights Advisor for OCP GA.

 

Scope:

Specific tasks and their priorities are tracked in https://issues.redhat.com/browse/CCXDEV-7432.


This contains all the Insights Advisor widget deliverables for the OCP release 4.11.

Scope
It covers only minor bug fixes and improvements:

  • better error handling during internal outages in data processing
  • add "last refresh" timestamp in the Advisor widget

Show the error message (mocked in CCXDEV-5868) if the Prometheus metric `cluster_operator_conditions{name="insights"}` reports two conditions as true at the same time: UploadDegraded and Degraded. This state occurs when there was an Insights Operator archive upload error, i.e. a problem with the pipeline.

Expected for 4.11 OCP release.

Scenario: Check if the Insights Advisor widget in the OCP WebConsole UI shows the time of the last data analysis
Given: OCP WebConsole UI and the cluster dashboard is accessible
And: CCX external data pipeline is in a working state
And: administrator A1 has access to his cluster's dashboard
And: Insights Operator for this cluster is sending archives
When: administrator A1 clicks on the Insights Advisor widget
Then: the results of the last analysis are shown in the Insights Advisor widget
And: the time of the last analysis is shown in the Insights Advisor widget 

Acceptance criteria:

  1. The time of the last analysis is shown in the Insights Advisor widget for the scenario above
  2. The way it is presented is defined within the scope of https://issues.redhat.com/browse/CCXDEV-5869 (mockup task)
  3. The source of this timestamp must be a result of running the Prometheus metric (last archive upload time):
    max_over_time(timestamp(changes(insightsclient_request_send_total{status_code="202"}[1m]) > 0)[24h:1m])

The details of this Jira Card are restricted (Red Hat Employee and Contractors only)

Epic Goal

  • Allow admin user to create new alerting rules, targeting metrics in any namespace
  • Allow cloning of existing rules to simplify rule creation
  • Allow creation of silences for existing alert rules

Why is this important?

  • Currently, platform-related metrics (exposed in the openshift-*, kube-* and default namespaces) cannot be used to form a new alerting rule. That makes it very difficult for administrators to enrich our out-of-the-box experience for the OpenShift Container Platform with new rules that may be specific to their environments.
  • Additionally, we had requests from customers to allow modifications of our existing, out-of-the-box alerting rules (for instance tweaking the alert expression or changing the severity label). Unfortunately, that is not easy since most rules come from several open source projects or other OpenShift components, and any modifications would make a seamless upgrade not really seamless anymore. Imagine K8s changes metrics again (see 1.14) and we have to update our rules: we would not know what modifications have been done (even just the threshold might be difficult if upstream changes that as well) and we would not be able to upgrade these rules.

Scenarios

  • I'd like to modify the query expression of an existing rule (because the threshold value doesn't match with my environment).

Cloning the existing rule should end up with a new rule in the same namespace.
Modifications can now be done to the new rule.
(Optional) You can silence the existing rule.

  • I'd like to create a new rule based on a metric only available in an openshift-* namespace.

Create a new PrometheusRule object inside the namespace that includes the metrics you need to form the alerting rule (see the sketch after this list).

  • I'd like to update the label of an existing rule.
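
For the scenario of creating a new rule in your own namespace, a minimal sketch of such a PrometheusRule object; the metric name, threshold and namespace ns1 are placeholders, not taken from this epic:

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: example-latency-alert
  namespace: ns1
spec:
  groups:
  - name: example
    rules:
    - alert: HighRequestLatency
      # placeholder expression; with this epic the expression could also
      # reference metrics that are not recorded in the user's own namespace
      expr: 'http_request_duration_seconds:p99 > 0.5'
      for: 10m
      labels:
        severity: warning
      annotations:
        summary: "99th percentile request latency is above 500ms"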

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • Ability to distinguish between rules deployed by us (CMO) and user created rules

Dependencies (internal and external)

Previous Work (Optional):

Open questions:

  1. Distinguish between operator-created rules and user-created rules
    Currently no such mechanism exists. This will need to be added to prometheus-operator or cluster-monitoring-operator.

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

CMO should reconcile the platform Prometheus configuration with the alert-relabel-config resources.
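
A minimal sketch of what such an alert-relabel-config resource might look like, assuming an API shaped after the upstream Prometheus relabel configuration; the exact group/version, kind and field names were still being settled and may differ from what ships:

apiVersion: monitoring.openshift.io/v1
kind: AlertRelabelConfig
metadata:
  name: watchdog-severity
  namespace: openshift-monitoring
spec:
  configs:
  # example: rewrite the severity label of a platform alert
  - sourceLabels: [alertname, severity]
    regex: "Watchdog;critical"
    targetLabel: severity
    replacement: info
    action: Replace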

 

DoD

  • Alerts changed via alert-relabel-configs are evaluated by the Platform monitoring stack.
  • Product alerts which are overridden aren't sent to Alertmanager

CMO should reconcile the platform Prometheus configuration with the AlertingRule resources.
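
Similarly, a minimal sketch of an AlertingRule resource that CMO could reconcile; again, the group/version and field names are assumptions based on the design direction rather than a finalized API:

apiVersion: monitoring.openshift.io/v1
kind: AlertingRule
metadata:
  name: example
  namespace: openshift-monitoring
spec:
  groups:
  - name: example-rules
    rules:
    - alert: ExampleAlert
      # placeholder expression that always fires, for illustration only
      expr: "vector(1)"
      for: 1m
      labels:
        severity: info
      annotations:
        message: This is an example additional platform alerting rule.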

 

DoD

  • Alerts added via AlertingRule resources are evaluated by the Platform monitoring stack.

Managing PVs at scale for a fleet creates difficulties where "one size does not fit all". The ability for SRE to deploy Prometheus with PVs and have retention based on a desired size would enable easier management of these volumes across the fleet.

 

The prometheus-operator exposes retentionSize.

Field: retentionSize
Description: Maximum amount of disk space used by blocks. Supported units: B, KB, MB, GB, TB, PB, EB. Ex: 512MB.

This is a feature request to enable this configuration option via CMO cluster-monitoring-config ConfigMap.
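
A hedged sketch of how this could look in the cluster-monitoring-config ConfigMap, assuming CMO mirrors the prometheus-operator retentionSize field name and unit handling (both to be confirmed by the implementation):

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      volumeClaimTemplate:
        spec:
          resources:
            requests:
              storage: 100Gi
      # keep on-disk blocks below the PV size to avoid filling the volume
      retentionSize: 90GiB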

 

cc Simon Pasquier  

Epic Goal

  • Cluster admins want to configure the retention size for their metrics.

Why is this important?

  • While it is possible to define how long metrics should be retained on disk, it's not possible to tell the cluster monitoring operator how much data it should keep. For OSD/ROSA in particular, being able to configure the retention size based on the persistent volume size would facilitate the management of the fleet, because it would avoid the storage getting full and monitoring being down when too many metrics are produced.

Scenarios

  • As a cluster admin, I want to define the maximum amount of data to be retained on the persistent volume.

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • The cluster-monitoring-config and user-workload-monitoring-config ConfigMaps allow configuring the retention size for
    • Prometheus (Platform and UWM)
    • Thanos Ruler (to be confirmed)
  • Proper validation is in place preventing bad user inputs from breaking the stack.

Dependencies (internal and external)

  1. Thanos ruler doesn't support retention size (only retention time).

Previous Work (Optional):

  1. None

Open questions:

  1. None

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

Problem Alignment

The Problem

Today, all individual configuration, for example routing configuration, is done via a single configuration file that only admins have access to. If an environment uses multiple tenants and each tenant, for example, has different systems that they use to notify teams in case of an issue, then someone needs to file a request with an admin to add the required settings.

That can be bothersome for individual teams, since requests like that usually disappear in an administrator's backlog. At the same time, administrators might get tons of requests that they have to look at and prioritize, which takes them away from more crucial work.

We would like to introduce a more self-service approach whereby individual teams can create their own configuration for their needs without the administrator's involvement.

Last but not least, since Monitoring is deployed as a core service of OpenShift, there are multiple restrictions that the SRE team has to apply to all OSD and ROSA clusters. One restriction concerns customers' use of the central Alertmanager that is owned and managed by the SRE team: SRE can't give access to the centrally managed secret due to security concerns, so users cannot add their own routing information.

High-Level Approach

Provide a new API (based on the Operator CRD approach) as part of the Prometheus Operator that allows creating a subset of the Alertmanager configuration without touching the central Alertmanager configuration file.

Please note that we do not plan to support additional individual webhooks with this work. Customers will need to deploy their own version of the third party webhooks.
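
As an illustration of the intended self-service shape, a minimal sketch of an AlertmanagerConfig object that a team could own in its own namespace; the names, namespace and Slack secret below are hypothetical:

apiVersion: monitoring.coreos.com/v1beta1
kind: AlertmanagerConfig
metadata:
  name: team-a-routing
  namespace: team-a
spec:
  route:
    receiver: team-a-slack
    matchers:
    - name: severity
      value: critical
      matchType: "="
  receivers:
  - name: team-a-slack
    slackConfigs:
    - channel: "#team-a-alerts"
      # reference to a Secret in the team-a namespace holding the webhook URL
      apiURL:
        name: slack-webhook
        key: url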

Goal & Success

  • Allow users to deploy individual configurations that allow setting up Alertmanager for their needs without an administrator.

Solution Alignment

Key Capabilities

  • As an OpenShift administrator, I want to control who can CRUD individual configuration so that I can make sure that no unknown third party can touch the central Alertmanager instance shipped within OpenShift Monitoring.
  • As a team owner, I want to deploy a routing configuration to push notifications for alerts to my system of choice.

Key Flows

Team A wants to send all their important notifications to a specific Slack channel.

  • Administrator gives permission to Team A to allow creating a new configuration CR in their individual namespace.
  • Team A creates a new configuration CR.
  • Team A configures what alerts should go into their Slack channel.

Open Questions & Key Decisions (optional)

  • Do we want to improve anything inside the developer console to allow configuration?

Epic Goal

  • Allow users to manage Alertmanager for user-defined alerts and have the feature being fully supported.

Why is this important?

  • Users want to configure alert notifications without admin intervention.
  • The feature is currently Tech Preview, it should be generally available to benefit a bigger audience.

Scenarios

  1. As a cluster admin, I can deploy an Alertmanager service dedicated to user-defined alerts (i.e. separate from the existing Alertmanager already used for platform alerts); see the sketch after this list.
  2. As an application developer, I can silence alerts from the OCP console.
  3. As an application developer, I'm not allowed to configure invalid AlertmanagerConfig objects.
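
For scenario 1 above, a hedged sketch of enabling the dedicated Alertmanager through the user-workload-monitoring-config ConfigMap; the field names are assumed to carry over from the Tech Preview and may change for GA:

apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    alertmanager:
      # deploy an Alertmanager dedicated to user-defined alerts
      enabled: true
      # let users configure it via AlertmanagerConfig objects
      enableAlertmanagerConfig: true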

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • The AlertmanagerConfig CRD is v1beta1
  • The validating webhook service checking AlertmanagerConfig resources is highly-available.

Dependencies (internal and external)

  1. Prometheus operator upstream should migrate the AlertmanagerConfig CRD from v1alpha1 to v1beta1
  2. Console enhancements likely to be involved (see below).

Previous Work (Optional):

  1. Part of the feature is available as Tech Preview (MON-880).

Open questions:

  1. Coordination with the console team to support the Alertmanager service dedicated for user-defined alerts.
  2. Migration steps for users that are already using the v1alpha1 CRD.

Done Checklist

 * CI - CI is running, tests are automated and merged.
 * Release Enablement <link to Feature Enablement Presentation>
 * DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
 * DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
 * DEV - Downstream build attached to advisory: <link to errata>
 * QE - Test plans in Polarion: <link or reference to Polarion>
 * QE - Automated tests merged: <link or reference to automated tests>
 * DOC - Downstream documentation merged: <link to meaningful PR> 

 

Now that upstream supports AlertmanagerConfig v1beta1 (see MON-2290 and https://github.com/prometheus-operator/prometheus-operator/pull/4709), it should be deployed by CMO.

DoD:

  • Kubernetes API exposes and supports the v1beta1 version for AlertmanagerConfig CRD (in addition to v1alpha1).
  • Users can manage AlertmanagerConfig v1beta1 objects seamlessly.
  • AlertmanagerConfig v1beta1 objects are reconciled in the generated Alertmanager configuration.

The details of this Jira Card are restricted (Red Hat Employee and Contractors only)

Epic Goal

  • The goal is to support metrics federation for user-defined monitoring via the /federate Prometheus endpoint (both from within and outside of the cluster).

Why is this important?

  • It is already possible to configure remote write for user-defined monitoring to push metrics outside of the cluster but in some cases, the network flow can only go from the outside to the cluster and not the opposite. This makes it impossible to leverage remote write.
  • It is already possible to use the /federate endpoint for the platform Prometheus (via the internal service or via the OpenShift route), so not supporting it for UWM doesn't provide a consistent experience.
  • If we don't expose the /federate endpoint for the UWM Prometheus, users would have no supported way to store and query application metrics from a central location.

Scenarios

  1. As a cluster admin, I want to federate user-defined metrics using the Prometheus /federate endpoint.
  2. As a cluster admin, I want the /federate endpoint of UWM to be accessible via an OpenShift route.
  3. As a cluster admin, I want access to the /federate endpoint of UWM to require authentication (with bearer token only) and authorization (the required permissions should match the permissions on the /federate endpoint of the platform Prometheus).

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • Documentation - information about the recommendations and limitations/caveats of the federation approach.
  • User can federate user-defined metrics from within the cluster
  • User can federate user-defined metrics from the outside via the OpenShift route.

Dependencies (internal and external)

  1. None

Previous Work (Optional):

  1. None

Open questions:

  1. None

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

DoD

  • User can federate UWM metrics from outside of the cluster via the OpenShift route.
  • E2E test added to the CMO test suite.

DoD

  • User can federate UWM metrics within the cluster from the prometheus-user-workload.openshift-user-workload-monitoring.svc:9092 service
  • The service requires authentication via bearer token and authorization (same permissions as for federating platform metrics)

Copy/paste from https://github.com/openshift-cs/managed-openshift/issues/60

Which service is this feature request for?
OpenShift Dedicated and Red Hat OpenShift Service on AWS

What are you trying to do?
Allow ROSA/OSD to integrate with AWS Managed Prometheus.

Describe the solution you'd like
Remote-write of metrics is supported in OpenShift but it does not work with AWS Managed Prometheus since AWS Managed Prometheus requires AWS SigV4 auth.

  • Note that Prometheus supports AWS SigV4 since v2.26 and OpenShift 4.9 uses v2.29.

Describe alternatives you've considered
There is the workaround to use the "AWS SigV4 Proxy" but I'd think this is not properly supported by RH.
https://mobb.ninja/docs/rosa/cluster-metrics-to-aws-prometheus/

Additional context
The customer wants to use an open and portable solution to centralize metrics storage and analysis. If they also deploy to other clouds, they don't want to have to re-configure. Since most clouds offer a Prometheus service (or it's easy to self-manage Prometheus), app migration should be simplified.

Epic Goal

The cluster monitoring operator should allow OpenShift customers to configure remote write with all authentication methods supported by upstream Prometheus.

We will extend CMO's configuration API to support the following authentication methods for remote write (Sigv4 and Authorization examples appear in the cards below; an OAuth2 sketch follows this list):

  • Sigv4
  • Authorization
  • OAuth2
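
For OAuth2, a hedged sketch of what the CMO configuration could look like, assuming CMO passes the upstream prometheus-operator OAuth2 fields through unchanged; the secret names and token URL are placeholders:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      remoteWrite:
      - url: "https://remote-write.endpoint"
        oauth2:
          clientId:
            secret:
              name: oauth2-credentials
              key: id
          clientSecret:
            name: oauth2-credentials
            key: secret
          tokenUrl: "https://example.com/oauth2/token"
          scopes:
          - openid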

Why is this important?

Customers want to send metrics to AWS Managed Prometheus, which requires sigv4 authentication (see https://docs.aws.amazon.com/prometheus/latest/userguide/AMP-secure-metric-ingestion.html#AMP-secure-auth).

Scenarios

  1. As a cluster admin, I want to forward platform/user metrics to remote write systems requiring Sigv4 authentication.
  2. As a cluster admin, I want to forward platform/user metrics to remote write systems requiring OAuth2 authentication.
  3. As a cluster admin, I want to forward platform/user metrics to remote write systems requiring custom Authorization header for authentication (e.g. API key).

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • It is possible for a cluster admin to configure any authentication method that is supported by Prometheus upstream for remote write (both platform and user-defined metrics):
    • Sigv4
    • Authorization
    • OAuth2

Dependencies (internal and external)

  • In theory none because everything is already supported by the Prometheus operator upstream. We may discover bugs in the upstream implementation though that may require upstream involvement.

Previous Work

  • After CMO started exposing the RemoteWrite specification in MON-1069, additional authentication options were added to Prometheus and prometheus-operator, but CMO didn't catch up on these.

Open Questions

  • None

Prometheus and the Prometheus operator already support sigv4 authentication for remote write. It should be possible to configure the same in the CMO configuration:

 

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      remoteWrite:
      - url: "https://remote-write.endpoint"
        sigv4:
          accessKey:
            name: aws-credentials
            key: access
          secretKey:
            name: aws-credentials
            key: secret
          profile: "SomeProfile"
          roleArn: "SomeRoleArn"

DoD:

  • Ability to configure sigv4 authentication for remote write in the openshift-monitoring/cluster-monitoring-config configmap
  • Ability to configure sigv4 authentication for remote write in the openshift-user-workload-monitoring/user-workload-monitoring-config configmap

Prometheus and the Prometheus operator already support custom authorization for remote write. It should be possible to configure the same in the CMO configuration:

 

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    prometheusK8s:
      remoteWrite:
      - url: "https://remote-write.endpoint"
        authorization:
          type: Bearer
          credentials:
            name: credentials
            key: token

DoD:

  • Ability to configure custom Authorization for remote write in the openshift-monitoring/cluster-monitoring-config configmap
  • Ability to configure custom Authorization for remote write in the openshift-user-workload-monitoring/user-workload-monitoring-config configmap

The details of this Jira Card are restricted (Red Hat Employee and Contractors only)

Description

As a WMCO user, I want to make sure containerd logging information has been updated in documents and scripts.

Acceptance Criteria

  • update must-gather to collect containerd logs
  • Internal/Customer Documents and log collecting scripts must have containerd specific information (ex: location of logs). 

Summary (PM+lead)

Configure audit logging to capture login, logout and login failure details

Motivation (PM+lead)

TODO(PM): update this

A customer needs login, logout and login failure details inside the OpenShift Container Platform.
They have checked for this on a test cluster, but the audit logs do not contain any user name specifying login or logout details. For successful logins or logouts, on the CLI and the OpenShift console we can only see 'Login successful' or 'Invalid credentials'.

Expected results: Login, logout and login failures should be captured in audit logging.

Goals (lead)

  1. Login, logout and login failures should be captured in audit logs

Non-Goals (lead)

  1. Don't attempt to log login failures in the IdP login flow that goes beyond the timeout, if the information is not available in explicit oauth-server requests (e.g. GitHub password login error).
  2. Logout does not involve oauth-server (but is a simple API object deletion in oauth-apiserver). Hence, the audit log discussed here won't include logout.

Deliverables

  1. Changes to oauth-server to log into /var/log/oauth-server/audit.log on the master node.
  2. Documentation

Proposal (lead)

The apiserver pods today have `/var/log/<kube|oauth|openshift>-apiserver` mounted from the host and create audit files there using the upstream audit event format (JSON lines following https://github.com/kubernetes/apiserver/blob/92392ef22153d75b3645b0ae339f89c12767fb52/pkg/apis/audit/v1/types.go#L72). These events are apiserver-specific, but as oauth authentication flow events are also requests, we can use the apiserver event format to log logins, login failures and logouts. Hence, we propose to make oauth-server create /var/log/oauth-server/audit.log files on the master nodes using that format.

When the login flow does not finish within a certain time (e.g. 10min), we can artificially create an event to show a login failure in the audit logs.
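
For illustration, a hedged sketch of what a single oauth-server login entry could look like in that upstream audit event format; it is shown as YAML for readability, the actual file would be JSON lines, and the annotation keys and values here are hypothetical:

kind: Event
apiVersion: audit.k8s.io/v1
level: Metadata
auditID: 7e0e7dcc-3bd2-4fd5-8e0a-example
stage: ResponseComplete
requestURI: /login
verb: post
user:
  username: user1
sourceIPs:
- 10.0.0.15
responseStatus:
  code: 302
annotations:
  # hypothetical annotations carrying the authentication outcome
  authentication.openshift.io/decision: allow
  authentication.openshift.io/username: user1
requestReceivedTimestamp: "2022-01-01T12:00:00.000000Z"
stageTimestamp: "2022-01-01T12:00:00.100000Z"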

User Stories (PM)

Dependencies (internal and external, lead)

Previous Work (lead)

Open questions (lead)

  1. ...

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

 

🏆 What

Let the Cluster Authentication Operator deliver the policy to OAuthServer.

💖 Why

In order to know if authn events should be logged, OAuthServer needs to be aware of it.

🗒 Notes

Create an observer to deliver the audit policy to the oauth server

Make the authentication-operator react to the new audit field in the oauth.config/cluster object. Write an observer watching this field; the observer will translate the top-level configuration into oauth-server config and add it to the rest of the observed config.

* Stanislav Láznička


Feature Overview

Early customer feedback is that they see SNO as a great solution covering smaller footprint deployment, but are wondering what is the evolution story OpenShift is going to provide where more capacity or high availability are needed in the future.

While migration tooling (moving workload/config to a new cluster) could be a mid-term solution, the customer desire is not to involve extra hardware in this process.

For Telecommunications Providers at the Far Edge, the intent is to start small and then grow. Many of these operators will start with an SNO-based DU deployment as an initial investment, but as DUs evolve, different segments of the radio spectrum are added, various radio hardware is provisioned and features are delivered to the Far Edge. The Telecommunications Providers therefore desire the ability for their Far Edge deployments to scale up from 1 node to 2 nodes to n nodes. On the opposite side of the spectrum from SNO is MMIMO, where there is a robust cluster and workloads use HPA.

Goals

  • Provide the capability to expand a single replica control plane topology to host more workloads capacity - add worker
  • Provide the capability to expand a single replica control plane to be a highly available control plane
  • To satisfy MMIMO, Telecommunications providers will want the ability to scale an SNO to a multi-node cluster that can support HPA.
  • Telecommunications providers do not want workload (DU specifically) downtime when migrating from SNO to a multi-node cluster.
  • Telecommunications providers wish to be able to scale from one to two or more nodes to support a variety of radio hardware.
  • Support CP scaling (CP HA) for 2 node cluster, 3 node cluster and n node cluster. As the number of nodes in the cluster increases so does the failure domain of the cluster. The cluster is now supporting more cell sectors and therefore has more of a need for HA and resiliency including the cluster CP.

Requirements

  • TBD
Requirement: CI - MUST be running successfully with test automation
Notes: This is a requirement for ALL features.
isMvp?: YES

Requirement: Release Technical Enablement
Notes: Provide necessary release enablement details and documents.
isMvp?: YES

(Optional) Use Cases

This Section:

  • Main success scenarios - high-level user stories
  • Alternate flow/scenarios - high-level user stories
  • ...

Questions to answer…

  • ...

Out of Scope

Background, and strategic fit

This Section: What does the person writing code, testing, documenting need to know? What context can be provided to frame this feature.

Assumptions

  • ...

Customer Considerations

  • ...

Documentation Considerations

Questions to be addressed:

  • What educational or reference material (docs) is required to support this product feature? For users/admins? Other functions (security officers, etc)?
  • Does this feature have doc impact?
  • New Content, Updates to existing content, Release Note, or No Doc Impact
  • If unsure and no Technical Writer is available, please contact Content Strategy.
  • What concepts do customers need to understand to be successful in [action]?
  • How do we expect customers will use the feature? For what purpose(s)?
  • What reference material might a customer want/need to complete [action]?
  • Is there source material that can be used as reference for the Technical Writer in writing the content? If yes, please link if available.
  • What is the doc impact (New Content, Updates to existing content, or Release Note)?

Epic Goal

  • Documented and supported flow for adding 1, 2, 3 or more workers to a Single Node OpenShift (SNO) deployment without requiring cluster downtime and the understanding that this action will not make the cluster itself highly available.

Why is this important?

  • Telecommunications and Edge scenarios where HA is handled via failover to another site but single site capacity may vary or need to be expanded over time.
  • Similar scenarios exist for some ISV vendors where OpenShift is an implementation detail of how they deliver their solution on top of another platform (e.g. VMware).

Scenarios

  1. Adding a worker to a single node openshift cluster.
  2. Adding a second worker to a single node openshift cluster.
  3. Adding a third worker to a single node openshift cluster.
  4. Removing a worker node from a single node openshift cluster that has had 1 or more workers added.

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • Customer facing documentation of the add worker flow for SNO.

Dependencies (internal and external)

  1. ...

Previous Work (Optional):

Open questions:

  1. Presumably there is a scale limit on how many workers could be added to an SNO control plane, and it is lower than the limit for a "normal" 3 node control plane. It is not anticipated that this limit will be established in this epic. Intent is to focus on small scale sites where adding 1-3 worker nodes would be beneficial.

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>


Epic Goal

  • Rebase OpenShift components to k8s v1.24

Why is this important?

  • Rebasing ensures components work with the upcoming release of Kubernetes
  • Address tech debt related to upstream deprecations and removals.

Scenarios

  1. ...

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • ...

Dependencies (internal and external)

  1. k8s 1.24 release

Previous Work (Optional):

Open questions:

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

Feature Overview

  • As an infrastructure owner, I want a repeatable method to quickly deploy the initial OpenShift cluster.
  • As an infrastructure owner, I want to install the first (management, hub, “cluster 0”) cluster to manage other (standalone, hub, spoke, hub of hubs) clusters.

Goals

  • Enable customers and partners to successfully deploy a single “first” cluster in disconnected, on-premises settings

Requirements

4.11 MVP Requirements

  • Customers and partners need to be able to download the installer
  • Enable customers and partners to deploy a single “first” cluster (cluster 0) using single node, compact, or highly available topologies in disconnected, on-premises settings
  • Installer must support advanced network settings such as static IP assignments, VLANs and NIC bonding for on-premises metal use cases, as well as DHCP and PXE provisioning environments.
  • Installer needs to support automation, including integration with third-party deployment tools, as well as user-driven deployments.
  • In the MVP automation has higher priority than interactive, user-driven deployments.
  • For bare metal deployments, we cannot assume that users will provide us the credentials to manage hosts via their BMCs.
  • Installer should prioritize support for platforms None, baremetal, and VMware.
  • The installer will focus on a single version of OpenShift, and a different build artifact will be produced for each different version.
  • The installer must not depend on a connected registry; however, the installer can optionally use a previously mirrored registry within the disconnected environment.

Use Cases

  • As a Telco partner engineer (Site Engineer, Specialist, Field Engineer), I want to deploy an OpenShift cluster in production with limited or no additional hardware and don’t intend to deploy more OpenShift clusters [Isolated edge experience].
  • As an Enterprise infrastructure owner, I want to manage the lifecycle of multiple clusters in 1 or more sites by first installing the first (management, hub, “cluster 0”) cluster to manage other (standalone, hub, spoke, hub of hubs) clusters [Cluster before your cluster].
  • As a Partner, I want to package OpenShift for large scale and/or distributed topology with my own software and/or hardware solution.
  • As a large enterprise customer or Service Provider, I want to install a “HyperShift Tugboat” OpenShift cluster in order to offer a hosted OpenShift control plane at scale to my consumers (DevOps Engineers, tenants) that allows for fleet-level provisioning for low CAPEX and OPEX, much like AKS or GKE [Hypershift].
  • As a new, novice to intermediate user (Enterprise Admin/Consumer, Telco Partner integrator, RH Solution Architect), I want to quickly deploy a small OpenShift cluster for PoC/Demo/Research purposes.

Questions to answer…

  •  

Out of Scope

Out of scope use cases (that are part of the Kubeframe/factory project):

  • As a Partner (OEMs, ISVs), I want to install and pre-configure OpenShift with my hardware/software in my disconnected factory, while allowing further (minimal) reconfiguration of a subset of capabilities later at a different site by different set of users (end customer) [Embedded OpenShift].
  • As an Infrastructure Admin at an Enterprise customer with multiple remote sites, I want to pre-provision OpenShift centrally prior to shipping and activating the clusters in remote sites.

Background, and strategic fit

  • This Section: What does the person writing code, testing, documenting need to know? What context can be provided to frame this feature.

Assumptions

  1. The user only has access to the target nodes that will form the cluster and will boot them with the image presented locally via a USB stick. This scenario is common in sites with restricted access, such as government infrastructure, where only users with security clearance can interact with the installation, where software is allowed to enter the premises (on a USB stick, DVD, SD card, etc.) but never allowed to come back out, and where users can't bring in supporting devices such as laptops or phones.
  2. The user has access to the target nodes remotely to their BMCs (e.g. iDrac, iLo) and can map an image as virtual media from their computer. This scenario is common in data centers where the customer provides network access to the BMCs of the target nodes.
  3. We cannot assume that we will have access to a computer to run an installer or installer helper software.

Customer Considerations

  • ...

Documentation Considerations

Questions to be addressed:

  • What educational or reference material (docs) is required to support this product feature? For users/admins? Other functions (security officers, etc)?
  • Does this feature have doc impact?
  • New Content, Updates to existing content, Release Note, or No Doc Impact
  • If unsure and no Technical Writer is available, please contact Content Strategy.
  • What concepts do customers need to understand to be successful in [action]?
  • How do we expect customers will use the feature? For what purpose(s)?
  • What reference material might a customer want/need to complete [action]?
  • Is there source material that can be used as reference for the Technical Writer in writing the content? If yes, please link if available.
  • What is the doc impact (New Content, Updates to existing content, or Release Note)?

 

References

 

 

Epic Goal

  • As an OpenShift infrastructure owner, I need to be able to integrate the installation of my first on-premises OpenShift cluster with my automation flows and tools.
  • As an OpenShift infrastructure owner, I must be able to provide the CLI tool with manifests that contain the definition of the cluster I want to deploy
  • As an OpenShift Infrastructure owner, I must be able to get the validation errors in a programmatic way
  • As an OpenShift Infrastructure owner, I must be able to get the events and progress of the installation in a programmatic way
  • As an OpenShift Infrastructure owner, I must be able to retrieve the kubeconfig and OpenShift Console URL in a programmatic way

Why is this important?

  • When deploying clusters with a large number of hosts, and when deploying many clusters, it is common to require automating the installations.
  • Customers and partners usually use third party tools of their own to orchestrate the installation.
  • For Telco RAN deployments, Telco partners need to repeatably deploy multiple OpenShift clusters in parallel to multiple sites at-scale, with no human intervention.

Scenarios

  1. Monitoring flow:
    1. I generate all the manifests for the cluster,
    2. call the CLI tool pointing to the manifests path,
    3. Obtain the installation image from the nodes
    4. Use my infrastructure capabilities to boot the image on the target nodes
    5. Use the tool to connect to assisted service to get validation status and events
    6. Use the tool to retrieve credentials and URL for the deployed cluster

Acceptance Criteria

  • Backward compatibility between OCP releases with automation manifests (they can be applied to a newer version of OCP).
  • Installation progress and events can be tracked programmatically
  • Validation errors can be obtained programmatically
  • Kubeconfig and console URL can be obtained programmatically
  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

References

User Story:

As a deployer, I want to be able to:

  • Get the credentials for the cluster that is going to be deployed

so that I can achieve

  • Checking the installed cluster for installation completion
  • Connect and administer the cluster that gets installed

 

Currently the Assisted Service generates the credentials by running the ignition generation step of the openshift-installer. This is why the credentials are only retrievable from the REST API towards the end of the installation.

In the BILLI usage, which takes down the assisted service before the installation is complete, there is no obvious point at which to alert the user that they should retrieve the credentials. This means that we either need to:

  • Allow the user to pass the admin key that will then get signed by the generated CA and replace the key that is made by openshift-installer (would mean new functionality in AI)
  • Allow the key to be retrieved over SSH with the fleeting command from node0 (after it has been generated). The command should be able to wait until retrieval is possible
  • Have the possibility to POST it somewhere

Acceptance Criteria:

  • The admin key is generated and usable to check for installation completeness

This requires/does not require a design proposal.
This requires/does not require a feature gate.

Feature Overview

The AWS-specific code added in OCPPLAN-6006 needs to become GA, and with this we want to introduce a couple of Day-2 improvements.
Currently the AWS tags are defined and applied at installation time only and saved in the infrastructure CRD's status field for further operator use, which in turn just adds the tags during creation.

Saving them in the status field means they are not included in Velero backups, which is a crucial feature for customers and Day 2.
Thus the status.resourceTags field should be deprecated in favour of a newly created spec.resourceTags with the same content. The installer should only populate the spec; consumers of the infrastructure CRD must favour the spec over the status definition if both are supplied, otherwise the status should be honored and a warning shall be issued.

Being part of the spec, the behaviour should also tag existing resources that do not have the tags yet, and once the tags in the infrastructure CRD are changed all the AWS resources should be updated accordingly.
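
A hedged sketch of the proposed spec-side field, mirroring the key/value layout of the existing status-side resourceTags; exactly where the field lands under spec is part of what this feature has to settle:

apiVersion: config.openshift.io/v1
kind: Infrastructure
metadata:
  name: cluster
spec:
  platformSpec:
    type: AWS
    aws:
      # proposed spec-side equivalent of status resourceTags; keys/values are examples
      resourceTags:
      - key: cost-center
        value: "1234"
      - key: owner
        value: team-a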

On AWS this can be done without re-creating any resources (the behaviour is basically an upsert by tag key) and is possible without service interruption as it is a metadata operation.

Tag deletes continue to be out of scope, as the customer can still have custom tags applied to the resources that we do not want to delete.

Due to the ongoing in-tree/out-of-tree split on the cloud and CSI providers, this should not apply to clusters with in-tree providers (!= "external").

Once confident we have all components updated, we should introduce an end2end test that makes sure we never create resources that are untagged.

After that, we can remove the experimental flag and make this a GA feature.

Goals

  • Inclusion in the cluster backups
  • Flexibility of changing tags during cluster lifetime, without recreating the whole cluster

Requirements

  • This Section: A list of specific needs or objectives that a Feature must deliver to satisfy the Feature. Some requirements will be flagged as MVP. If an MVP gets shifted, the feature shifts. If a non-MVP requirement slips, it does not shift the feature.

Requirement: CI - MUST be running successfully with test automation
Notes: This is a requirement for ALL features.
isMvp?: YES

Requirement: Release Technical Enablement
Notes: Provide necessary release enablement details and documents.
isMvp?: YES

List any affected packages or components.

  • Installer
  • Cluster Infrastructure
  • Storage
  • Node
  • NetworkEdge
  • Internal Registry
  • CCO

RFE-1101 described user-defined tags for AWS resources provisioned by an OCP cluster. Currently the user can define tags which are added to the resources during creation. These tags cannot be updated subsequently. The propagation of the tags is controlled using an experimental flag. Before this feature goes GA, we should define and implement a mechanism to exclude any experimental flags. Day-2 operations and deletion of tags are not in scope.

RFE-2012 aims to make the user-defined resource tags feature GA. This means that user defined tags should be updatable.

Currently the user-defined tags are passed during install directly as parameters of the Machine and MachineSet resources for the masters and workers. As a result these tags cannot be updated by consulting the Infrastructure resource of the cluster where the user-defined tags are written.

The MCO should be changed such that during provisioning the MCO looks up the values of the tags in the Infrastructure resource and adds the tags during creation of the EC2 resources. The MCO should also watch the infrastructure resource for changes and when the resource tags are updated it should update the tags on the EC2 instances without restarts.

Acceptance Criteria:

  • e2e test where the ResourceTags are updated and the test verifies that the tags on the EC2 instances are updated without restarts; now moved to CFE-179

Feature Overview  

Much like for core OpenShift operators, a standardized flow exists for OLM-managed operators to interact with the cluster in a specific way to leverage AWS STS authorization when using AWS APIs, as opposed to insecure static, long-lived credentials. OLM-managed operators can implement integration with the CloudCredentialOperator in a well-defined way to support this flow.

Goals:

Enable customers to easily leverage OpenShift's capabilities around AWS STS with layered products, for an increased security posture. Enable OLM-managed operators to implement support for this in a well-defined pattern.

Requirements:

  • CCO gets a new mode in which it can reconcile STS credential request for OLM-managed operators
  • A standardized flow is leveraged to guide users in discovering and preparing their AWS IAM policies and roles with permissions that are required for OLM-managed operators 
  • A standardized flow is defined in which users can configure OLM-managed operators to leverage AWS STS
  • An example operator is used to demonstrate the end2end functionality
  • Clear instructions and documentation for operator development teams to implement the required interaction with the CloudCredentialOperator to support this flow

Use Cases:

See Operators & STS slide deck.

 

Out of Scope:

  • handling OLM-managed operator updates in which AWS IAM permission requirements might change from one version to another (which requires user awareness and intervention)

 

Background:

The CloudCredentialOperator already provides a powerful API for OpenShift's cluster core operators to request credentials and acquire them via short-lived tokens. This capability should be expanded to OLM-managed operators, specifically to Red Hat layered products that interact with AWS APIs. The process today is cumbersome to non-existent, depending on the operator in question, and is seen as an adoption blocker of OpenShift on AWS.

 

Customer Considerations

This is particularly important for ROSA customers. Customers are expected to be asked to pre-create the required IAM roles outside of OpenShift, which is deemed acceptable.

Documentation Considerations

  • Internal documentation needs to exist to guide Red Hat operator developer teams on the requirements and proposed implementation of integration with CCO and the proposed flow
  • External documentation needs to exist to guide users on:
    • how to become aware that the cluster is in STS mode
    • how to become aware of operators that support STS and the proposed CCO flow
    • how to become aware of the IAM permissions requirements of these operators
    • how to configure an operator in the proposed flow to interact with CCO

Interoperability Considerations

  • this needs to work with ROSA
  • this needs to work with self-managed OCP on AWS

Market Problem

This Section: High-Level description of the Market Problem ie: Executive Summary

  • As a customer of OpenShift layered products, I need to be able to fluidly, reliably and consistently install and use OpenShift layered-product Kubernetes Operators in my ROSA STS clusters, while keeping an STS workflow throughout.
  • As a customer of OpenShift on the big cloud providers, overall I expect OpenShift as a platform to function equally well with tokenized cloud auth as it does with "mint-mode" IAM credentials. I expect the same from the Kubernetes Operators under the Red Hat brand (that need to reach cloud APIs), in that tokenized workflows are as integrated and workable as "mint-mode" IAM credentials.
  • As the managed services teams, including the Hypershift team, offering a downstream opinionated, supported and managed lifecycle of OpenShift (in the forms of ROSA, ARO, OSD on GCP, Hypershift, etc.), the OpenShift platform should have as close to native integration as possible with core platform operators when clusters use tokenized cloud auth, driving the use of layered products.
  • As the Hypershift team, where the only credential mode for clusters/customers is STS (on AWS), the Red Hat branded Operators that must reach the AWS API should be enabled to work with STS credentials in a consistent and automated fashion that allows customers to use those operators as easily as possible, driving the use of layered products.

Why it Matters

  • Adding consistent, automated layered product integrations to OpenShift would provide great added value to OpenShift as a platform, and its downstream offerings in Managed Cloud Services and related offerings.
  • Enabling Kubernetes Operators (at first, Red Hat ones) on OpenShift for the "big 3" cloud providers is a key differentiation and security requirement that our customers have been and continue to demand.
  • HyperShift is an STS-only architecture, which means that if our layered offerings via Operators cannot easily work with STS, then it would be blocking us from our broad product adoption goals.

Illustrative User Stories or Scenarios

  1. Main success scenario - high-level user story
    1. customer creates a ROSA STS or Hypershift cluster (AWS)
    2. customer wants basic (table-stakes) features such as AWS EFS or RHODS or Logging
    3. customer sees necessary tasks for preparing for the operator in OperatorHub from their cluster
    4. customer prepares AWS IAM/STS roles/policies in anticipation of the Operator they want, using what they get from OperatorHub
    5. customer provides a very minimal set of parameters (AWS ARN of role(s) with policy) to the Operator's OperatorHub page
    6. The cluster can automatically setup the Operator, using the provided tokenized credentials and the Operator functions as expected
    7. Cluster and Operator upgrades are taken into account and automated
    8. The above steps 1-7 should apply similarly for Google Cloud and Microsoft Azure Cloud, with their respective token-based workload identity systems.
  2. Alternate flow/scenarios - high-level user stories
    1. The same as above, but the ROSA CLI would assist with AWS role/policy management
    2. The same as above, but the oc CLI would assist with cloud role/policy management (per respective cloud provider for the cluster)
  3. ...

Expected Outcomes

This Section: Articulates and defines the value proposition from a users point of view

  • See SDE-1868 as an example of what is needed, including design proposed, for current-day ROSA STS and by extension Hypershift.
  • Further research is required to accommodate the AWS STS equivalent systems of GCP and Azure
  • Order of priority at this time is
    • 1. AWS STS for ROSA and ROSA via HyperShift
    • 2. Microsoft Azure for ARO
    • 3. Google Cloud for OpenShift Dedicated on GCP

Effect

This Section: Effect is the expected outcome within the market. There are two dimensions of outcomes; growth or retention. This represents part of the “why” statement for a feature.

  • Growth is the acquisition of net new usage of the platform. This can be new workloads not previously able to be supported, new markets not previously considered, or new end users not previously served.
  • Retention is maintaining and expanding existing use of the platform. This can be more effective use of tools, competitive pressures, and ease of use improvements.
  • Both growth and retention are the effect of this effort.
    • Customers have strict requirements around using only token-based cloud credential systems for workloads in their cloud accounts, which include OpenShift clusters in all forms.
      • We gain new customers from both those that have waited for token-based auth/auth from OpenShift and from those that are new to OpenShift, with strict requirements around cloud account access
      • We retain customers that are going thru both cloud-native and hybrid-cloud journeys that all inevitably see security requirements driving them towards token-based auth/auth.

References

As an engineer I want the capability to implement CI test cases that run at different intervals (daily, weekly) to ensure that downstream operators which depend on certain capabilities are not negatively impacted if the systems CCO interacts with change behavior.

Acceptance Criteria:

Create a stubbed-out e2e test path in CCO and matching e2e calling code in release, such that there exists a path to tests that verify a working AWS STS workflow.


Feature Overview

  • As RH OpenShift Product Owners, we want to enable new providers/platforms/service with varying levels of capabilities and integration with minimal reliance on OpenShift Engineering.
  • As a new provider/platform partner, I want to enable my solution (hardware and/or software) with OpenShift with minimal effort.

 

Problem

  • It is currently challenging for us to enable new platforms/providers without taking on the heavy burden of doing the platform-specific development ourselves.

Goals

  • We want to enable the long-tail new platforms/providers to expand our reach into new markets and/or support new use cases.
  • We want to remove strict dependencies we have on Engineering teams to review, support and test new providers.
  • We want to lower the effort required for onboarding new platforms/providers.
  • We want to enable new platform/providers to self-certify.
  • We want to define tiered model for provider/platform integration that delineates ownership and responsibilities throughout new provider/platform development lifecycle and support model.
  • We want to reduce time to onboard new provider/platform – ideally to a single release.
  • We want to maintain consistent customer experience across all providers/platforms.

Requirements

  • Step-by-step guide on how to add a new platform/provider for each tier
  • Certification tool for partner to self-certify
  • Certification tool results for (at least) each Y/minor release submitted by partner to Red Hat for acknowledgement
  • DCI program to enable partners to run CI with OpenShift on their platform
  • Well documented, accessible, and up-to-date test suites for providing the test coverage of the partner
  • CI includes upgrade testing of OpenShift with partner's components
  • Partner component upgrade failure should not block OpenShift upgrade
  • Partner code is available in repositories in the openshift org on github with an open source license compatible with OpenShift

 

Requirement: CI - MUST be running successfully with test automation
Notes: This is a requirement for ALL features.
isMvp?: YES

Requirement: Release Technical Enablement
Notes: Provide necessary release enablement details and documents.
isMvp?: YES

(Optional) Use Cases

This Section:

  • Main success scenarios - high-level user stories
  • Alternate flow/scenarios - high-level user stories
  • ...

Questions to answer…

  • ...

Out of Scope

Background, and strategic fit

This Section: What does the person writing code, testing, documenting need to know? What context can be provided to frame this feature.

Assumptions

  • ...

Customer Considerations

  • ...

Documentation Considerations

Questions to be addressed:

  • What educational or reference material (docs) is required to support this product feature? For users/admins? Other functions (security officers, etc)?
  • Does this feature have doc impact?
  • New Content, Updates to existing content, Release Note, or No Doc Impact
  • If unsure and no Technical Writer is available, please contact Content Strategy.
  • What concepts do customers need to understand to be successful in [action]?
  • How do we expect customers will use the feature? For what purpose(s)?
  • What reference material might a customer want/need to complete [action]?
  • Is there source material that can be used as reference for the Technical Writer in writing the content? If yes, please link if available.
  • What is the doc impact (New Content, Updates to existing content, or Release Note)?

 

References

The details of this Jira Card are restricted (Red Hat Employee and Contractors only)

Running the OPCT with the latest version (v0.1.0) on OCP 4.11.0, openshift-tests reports an incorrect counter for the "total" field.

In the example below, after the 1127th test the total follows the same counter as executed. I would also assume that the total is incorrect before that point, since both counters increase as the test execution continues.

 

openshift-tests output format: [failed/executed/total]

started: (0/1126/1127) "[sig-storage] PersistentVolumes-expansion  loopback local block volume should support online expansion on node [Suite:openshift/conformance/parallel] [Suite:k8s]"

passed: (38s) 2022-08-09T17:12:21 "[sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options [Suite:openshift/conformance/parallel] [Suite:k8s]"

started: (0/1127/1127) "[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition [Suite:openshift/conformance/parallel] [Suite:k8s]"

passed: (6.6s) 2022-08-09T17:12:21 "[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"

started: (0/1128/1128) "[sig-storage] In-tree Volumes [Driver: cinder] [Testpattern: Dynamic PV (immediate binding)] topology should fail to schedule a pod which has topologies that conflict with AllowedTopologies [Suite:openshift/conformance/parallel] [Suite:k8s]"

skip [k8s.io/kubernetes@v1.24.0/test/e2e/storage/framework/testsuite.go:116]: Driver local doesn't support GenericEphemeralVolume -- skipping
Ginkgo exit error 3: exit with code 3

skipped: (400ms) 2022-08-09T17:12:21 "[sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Generic Ephemeral-volume (block volmode) (late-binding)] ephemeral should support two pods which have the same volume definition [Suite:openshift/conformance/parallel] [Suite:k8s]"

started: (0/1129/1129) "[sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Dynamic PV (default fs)] capacity provides storage capacity information [Suite:openshift/conformance/parallel] [Suite:k8s]" 

 

OPCT output format [executed/total (failed failures)]

Tue, 09 Aug 2022 14:12:13 -03> Global Status: running
JOB_NAME                         | STATUS     | RESULTS    | PROGRESS                  | MESSAGE                                           
openshift-conformance-validated  | running    |            | 1112/1127 (0 failures)    | status=running                                    
openshift-kube-conformance       | complete   |            | 352/352 (0 failures)      | waiting for post-processor...                     
Tue, 09 Aug 2022 14:12:23 -03> Global Status: running
JOB_NAME                         | STATUS     | RESULTS    | PROGRESS                  | MESSAGE                                           
openshift-conformance-validated  | running    |            | 1120/1127 (0 failures)    | status=running                                    
openshift-kube-conformance       | complete   |            | 352/352 (0 failures)      | waiting for post-processor...                     
Tue, 09 Aug 2022 14:12:33 -03> Global Status: running
JOB_NAME                         | STATUS     | RESULTS    | PROGRESS                  | MESSAGE                                           
openshift-conformance-validated  | running    |            | 1139/1139 (0 failures)    | status=running                                    
openshift-kube-conformance       | complete   |            | 352/352 (0 failures)      | waiting for post-processor...                     
Tue, 09 Aug 2022 14:12:43 -03> Global Status: running
JOB_NAME                         | STATUS     | RESULTS    | PROGRESS                  | MESSAGE                                           
openshift-conformance-validated  | running    |            | 1185/1185 (0 failures)    | status=running                                    
openshift-kube-conformance       | complete   |            | 352/352 (0 failures)      | waiting for post-processor...                     
Tue, 09 Aug 2022 14:12:53 -03> Global Status: running
JOB_NAME                         | STATUS     | RESULTS    | PROGRESS                  | MESSAGE                                           
openshift-conformance-validated  | running    |            | 1188/1188 (0 failures)    | status=running                                    
openshift-kube-conformance       | complete   |            | 352/352 (0 failures)      | waiting for post-processor...      

 

 

 

 

Goal

Increase integration of Shipwright, Tekton, Argo CD in OpenShift GitOps with OpenShift platform and related products such as ACM.

Incomplete Features

When this image was assembled, these features were not yet completed. Therefore, only the Jira Cards included here are part of this release

Feature Overview

We drive OpenShift cross-market customer success and new customer adoption with constant improvements and feature additions to the existing capabilities of our OpenShift Core Networking (SDN and Network Edge). This feature captures that natural progression of the product.

Goals

  • Feature enhancements (performance, scale, configuration, UX, ...)
  • Modernization (incorporation and productization of new technologies)

Requirements

  • Core Networking Stability
  • Core Networking Performance and Scale
  • Core Neworking Extensibility (Multus CNIs)
  • Core Networking UX (Observability)
  • Core Networking Security and Compliance

In Scope

  • Network Edge (ingress, DNS, LB)
  • SDN (CNI plugins, openshift-sdn, OVN, network policy, egressIP, egress Router, ...)
  • Networking Observability

Out of Scope

There are definitely grey areas, but in general:

  • CNV
  • Service Mesh
  • CNF

Documentation Considerations

Questions to be addressed:

  • What educational or reference material (docs) is required to support this product feature? For users/admins? Other functions (security officers, etc)?
  • Does this feature have doc impact?
  • New Content, Updates to existing content, Release Note, or No Doc Impact
  • If unsure and no Technical Writer is available, please contact Content Strategy.
  • What concepts do customers need to understand to be successful in [action]?
  • How do we expect customers will use the feature? For what purpose(s)?
  • What reference material might a customer want/need to complete [action]?
  • Is there source material that can be used as reference for the Technical Writer in writing the content? If yes, please link if available.
  • What is the doc impact (New Content, Updates to existing content, or Release Note)?
The details of this Jira Card are restricted (Red Hat Employee and Contractors only)

Create a PR in openshift/cluster-ingress-operator to implement configurable router probe timeouts.

The PR should include the following:

  • Changes to the ingress operator's ingress controller to allow the user to configure the readiness and liveness probes' timeoutSeconds values (see the illustrative probe sketch after this list).
  • Changes to existing unit tests to verify that the new functionality works properly.
  • An E2E test to verify that the new functionality works properly.
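For illustration only (the user-facing API surface will be decided in the PR itself): what the operator would ultimately adjust is the timeoutSeconds of the router container's probes, which in a standard Kubernetes probe definition look roughly like the sketch below. The paths, port, and values are assumptions of this sketch, not the final cluster-ingress-operator configuration.

# Illustrative only: router container probes with a tunable timeoutSeconds.
readinessProbe:
  httpGet:
    path: /healthz/ready
    port: 1936
  timeoutSeconds: 5     # value the user would be able to raise
livenessProbe:
  httpGet:
    path: /healthz
    port: 1936
  timeoutSeconds: 5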
The details of this Jira Card are restricted (Red Hat Employee and Contractors only)

User Story: As a customer in a highly regulated environment, I need the ability to secure DNS traffic when forwarding requests to upstream resolvers so that I can ensure additional DNS traffic security and data privacy.
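A minimal sketch of what such a configuration could look like on the default DNS operator CR, assuming the transportConfig/TLS fields proposed for DNS-over-TLS forwarding; the field names and values here are assumptions for illustration, not a confirmed API.

# Sketch only: forwarding a zone to an upstream resolver over TLS.
apiVersion: operator.openshift.io/v1
kind: DNS
metadata:
  name: default
spec:
  servers:
  - name: example-corp
    zones:
    - example.corp
    forwardPlugin:
      transportConfig:
        transport: TLS
        tls:
          serverName: dns.example.corp   # name validated against the resolver's certificate
      upstreams:
      - "10.0.0.10:853"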

tldr: three basic claims, the rest is explanation and one example

  1. We cannot improve long term maintainability solely by fixing bugs.
  2. Teams should be asked to produce designs for improving maintainability/debugability.
  3. Specific maintenance items (or investigation of maintenance items), should be placed into planning as peer to PM requests and explicitly prioritized against them.

While bugs are an important metric, fixing bugs is different than investing in maintainability and debugability. Investing in fixing bugs will help alleviate immediate problems, but doesn't improve the ability to address future problems. You (may) get a code base with fewer bugs, but when you add a new feature, it will still be hard to debug problems and interactions. This pushes a code base towards stagnation where it gets harder and harder to add features.

One alternative is to ask teams to produce ideas for how they would improve future maintainability and debugability instead of focusing on immediate bugs. This would produce designs that make problem determination, bug resolution, and future feature additions faster over time.

I have a concrete example of one such outcome of focusing on bugs vs quality. We have resolved many bugs about communication failures with ingress by finding problems with point-to-point network communication. We have fixed the individual bugs, but have not improved the code for future debugging. In doing so, we chase many hard-to-diagnose problems across the stack. The alternative is to create a point-to-point network connectivity capability. This would immediately improve bug resolution and stability (detection) for kuryr, ovs, legacy sdn, network-edge, kube-apiserver, openshift-apiserver, authentication, and console. Bug fixing does not produce the same impact.

We need more investment in our future selves. Saying, "teams should reserve this" doesn't seem to be universally effective. Perhaps an approach that directly asks for designs and impacts and then follows up by placing the items directly in planning and prioritizing against PM feature requests would give teams the confidence to invest in these areas and give broad exposure to systemic problems.


Relevant links:

Per the 4.6.30 Monitoring DNS Post Mortem, we should add E2E tests to openshift/cluster-dns-operator to reduce the risk that changes to our CoreDNS configuration break DNS resolution for clients.  

To begin with, we add E2E DNS testing for 2 or 3 client libraries to establish a framework for testing DNS resolvers; the work of adding additional client libraries to this framework can be left for follow-up stories.  Two common libraries are Go's resolver and glibc's resolver.  A somewhat common library that is known to have quirks is musl libc's resolver, which uses a shorter timeout value than glibc's resolver and reportedly has issues with the EDNS0 protocol extension.  It would also make sense to test Java or other popular languages or runtimes that have their own resolvers. 

Additionally, as talked about in our DNS Issue Retro & Testing Coverage meeting on Feb 28th 2024, we also decided to add a test for testing a non-EDNS0 query for a larger than 512 byte record, as once was an issue in bug OCPBUGS-27397.   

The ultimate goal is that the test will inform us when a change to OpenShift's DNS or networking has an effect that may impact end-user applications. 

In OCP 4.8 the router was changed to use the "random" balancing algorithm for non-passthrough routes by default. It was previously "leastconn".

Bug https://bugzilla.redhat.com/show_bug.cgi?id=2007581 shows that using "random" by default incurs significant memory overhead for each backend that uses it.

PR https://github.com/openshift/cluster-ingress-operator/pull/663
reverted the change and made "leastconn" the default again (OCP 4.8 onwards).

The analysis in https://bugzilla.redhat.com/show_bug.cgi?id=2007581#c40 shows that the default haproxy behaviour is to multiply the weight (specified in the route CR) by 16 as it builds its data structures for each backend. If no weight is specified then openshift-router sets the weight to 256. If you have many, many thousands of routes then this balloons quickly and leads to a significant increase in memory usage, as highlighted by customer cases attached to BZ#2007581.
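For reference, the weight in question is the per-backend weight on the route CR; a minimal example follows, with illustrative values.

# Minimal Route with an explicit per-service weight.
# If weight is omitted, openshift-router currently defaults it to 256,
# which haproxy then multiplies by 16 as it builds its per-backend data structures.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: example
spec:
  to:
    kind: Service
    name: example
    weight: 1        # illustrative: a much smaller weight than the 256 default
  port:
    targetPort: 8080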

The purpose of this issue is twofold: explore changing the openshift-router default weight (i.e., 256) to something smaller, or leaving it unset (assuming no explicit weight has been requested), and measure the memory usage within the context of the existing perf&scale tests that we use for vetting new haproxy releases.

It may be that the low-hanging change is to not default to weight=256 for backends that only have one pod replica (i.e., if no value specified, and there is only 1 pod replica, then don't default to 256 for that single server entry).

Outcome: determine whether changing the [default] weight value makes it feasible to switch back to "random" as the default balancing algorithm for a future OCP release.

Revert router to using "random" once again in 4.11 once analysis is done on impact of weight and static memory allocation.

Feature Overview

  • This Section: High-level description of the feature, i.e. an executive summary.
  • Note: A Feature is a capability or a well-defined set of functionality that delivers business value. Features can include additions or changes to existing functionality. Features can easily span multiple teams, and multiple releases.

 

Goals

  • This Section: Provide a high-level goal statement, providing user context and expected user outcome(s) for this feature.

 

Requirements

  • This Section: A list of specific needs or objectives that a Feature must deliver to satisfy the Feature. Some requirements will be flagged as MVP. If an MVP requirement gets shifted, the feature shifts. If a non-MVP requirement slips, it does not shift the feature.

 

Requirement                                             | Notes                                                        | isMvp?
CI - MUST be running successfully with test automation | This is a requirement for ALL features.                      | YES
Release Technical Enablement                            | Provide necessary release enablement details and documents. | YES

 

(Optional) Use Cases

This Section: 

  • Main success scenarios - high-level user stories
  • Alternate flow/scenarios - high-level user stories
  • ...

 

Questions to answer…

  • ...

 

Out of Scope

 

Background, and strategic fit

This Section: What does the person writing code, testing, documenting need to know? What context can be provided to frame this feature.

 

Assumptions

  • ...

 

Customer Considerations

  • ...

 

Documentation Considerations

Questions to be addressed:

  • What educational or reference material (docs) is required to support this product feature? For users/admins? Other functions (security officers, etc)?
  • Does this feature have doc impact?  
  • New Content, Updates to existing content,  Release Note, or No Doc Impact
  • If unsure and no Technical Writer is available, please contact Content Strategy.
  • What concepts do customers need to understand to be successful in [action]?
  • How do we expect customers will use the feature? For what purpose(s)?
  • What reference material might a customer want/need to complete [action]?
  • Is there source material that can be used as reference for the Technical Writer in writing the content? If yes, please link if available.
  • What is the doc impact (New Content, Updates to existing content, or Release Note)?
The details of this Jira Card are restricted (Red Hat Employee and Contractors only)

Goal
Add support for PDB (Pod Disruption Budget) to the console.

Requirements:

  • Add a list, detail, and YAML view (with samples) for PDBs. In addition, update the workloads page to support PDBs as well.
  • For the PDB list page, include a table with name, namespace, selector, availability, allowed disruptions, and created. In addition to the table, provide the main call to action to create a PDB.
  • For the PDB details page, provide a Details, YAML, and Pods tab. The Pods tab will include a list of pods associated with the PDB - make sure to surface the owner column.
  • When users create a PDB from the list page, take them to the YAML view and provide samples to enhance the creation experience. Sample 1: set max unavailable to 0; Sample 2: set min available to 25% (confirming samples with stakeholders; a sketch of both follows this list). If a PDB has already been applied, warn users that it is not recommended to add another. Also cover use cases that keep users from creating poor policies - for example, setting the minimum available to zero.
  • Add the ability to add/edit/view PDBs on a workload. If we edit a PDB applied to multiple workloads, warn users that this change will affect all workloads and not only the one they are currently editing. When a PDB has been applied, add a new field to the details page with a link to the PDB and policy.
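A minimal sketch of the two YAML samples described above; names, selectors, and values are placeholders to be confirmed with stakeholders.

# Sample 1 (sketch): allow no voluntary disruptions.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-pdb-max-unavailable
spec:
  maxUnavailable: 0
  selector:
    matchLabels:
      app: example
---
# Sample 2 (sketch): keep at least 25% of the selected pods available.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-pdb-min-available
spec:
  minAvailable: "25%"
  selector:
    matchLabels:
      app: example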

Designs:

Samuel Padgett Colleen Hart

During master nodes upgrade when nodes are getting drained there's currently no protection from two or more operands going down. If your component is required to be available during upgrade or other voluntary disruptions, please consider deploying PDB to protect your operands.

The effort is tracked in https://issues.redhat.com/browse/WRKLDS-293.

Example:

 

Acceptance Criteria:
1. Create PDB controller in console-operator for both console and downloads pods
2. Add e2e tests for PDB in single node and multi node cluster

 

Note: We should consider to backport this to 4.10

When viewing the Installed Operators list set to 'All projects' and then selecting an operator that is available in 'All namespaces' (globally installed), clicking the operator to view its details takes the user into the details of that operator in the install namespace (the project selector will switch to the install namespace).

This can be disorienting then to look at the lists of custom resource instances and see them all blank, since the lists are showing instances only in the currently selected project (the install namespace) and not across all namespaces the operator is available in.

It is likely that making use of the new Operator resource will improve this experience (CONSOLE-2240), though that may still be some releases away. It should be considered whether it is worth a "short term" fix in the meantime.

Note: The informational alert was not implemented. It was decided that since "All namespaces" is displayed in the radio button, the alert was not needed.

Feature Overview

Customers are asking for improvements to the upgrade experience (both over-the-air and disconnected). This is a feature tracking epics required to get that work done.  

Goals

  1. Have an option to do upgrades in more discrete steps under admin control. Specifically, these steps are:
    • Control plane upgrade
    • Worker nodes upgrade
    • Workload enabling upgrade (i.e. router, other components) or infra nodes
  2. Better visibility into any errors during the upgrades and documentation of what the errors mean and how to recover.
  3. A user experience around an end-to-end backup and restore after a failed upgrade.
  4. OTA-810 - Better documentation:
    • Backup procedures before upgrades.
    • More control over worker upgrades (with tagged pools, user vs admin)
    • The kinds of pre-upgrade tests that are run, the errors that are flagged, what they mean, and how to address them.
    • Better explanation of each discrete step in upgrades, what each CVO operator is doing, and potential errors, troubleshooting, and mitigating actions.

References

OCP/Telco Definition of Done
Epic Template descriptions and documentation.


Epic Goal

  • Provide a one-click option to perform an upgrade which pauses all non-master pools

Why is this important?

  • Customers are increasingly asking that the overall upgrade is broken up into more digestible pieces
  • This is the limit of what's possible today
    • R&D work will be done in the future to allow for further bucketing of upgrades into Control Plane, Worker Nodes, and Workload Enabling components (i.e. router). That will, however, take much more consideration and rearchitecting.

Scenarios

  1. An admin selecting their upgrade is offered two options: "Upgrade Cluster" and "Upgrade Control Plane"
    1. If the admin selects Upgrade Cluster, they get the pre-4.10 behavior
    2. If the admin selects Upgrade Control Plane, all non-master pools are paused and an upgrade is initiated
  2. A tooltip should clarify the difference between the two options
  3. The pool progress bars should indicate paused/unpaused status; non-master pools should allow unpausing (a minimal example of pausing a pool via its MachineConfigPool spec follows this list)
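A minimal sketch of what "pausing" a non-master pool means at the API level, assuming the existing MachineConfigPool spec.paused field; the pool name is illustrative.

# Sketch: a paused worker pool. Setting spec.paused back to false resumes updates.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: worker
spec:
  paused: true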

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • ...

Dependencies (internal and external)

  1. ...

Previous Work (Optional):

  1. While this epic doesn't specifically target upgrading from 4.N to 4.N+1 to 4.N+2 with non-master pools paused, it would fundamentally enable that, and it would simplify the UX described in Paused Worker Pool Upgrades

Open questions::

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

Goal
Add the ability to choose between a full cluster upgrade (which exists today) or control plane upgrade (which will pause all worker pools) in the console.

Background
Currently in the console, users only have the ability to complete a full cluster upgrade. For many customers, upgrades take longer than what their maintenance window allows. Users need the ability to upgrade the control plane independently of the other worker nodes. 

Ex. Upgrades of huge clusters may take too long so admins may do the control plane this weekend, worker-pool-A next weekend, worker-pool-B the weekend after, etc.  It is all at a pool level, they will not be able to choose specific hosts.

Requirements

  1. Changes to the Update modal:
    1. Add the ability to choose between a cluster upgrade and a control plane upgrade (the design does not default to a selection but rather disables the update button to force the user to make a conscious decision)
    2. link out to documentation to learn more about update strategies
  2. Changes to the in progress check list:
    1. Add a status above the worker pool section to let users know that all worker pools are paused and an action to resume all updates
    2. Add a "resume update" button for each worker pool entry
  3. Changes to the update status:
    1. When all master pools are updated successfully, change the status from what we have today "Up to date" to something like "Control plane up to date - all worker pools paused"
  4. Add an inline alert that lets users know there is a 60 day window to update all worker pools. In the alert, include the sentiment that worker pools can remain paused as long as is normally safe, which means until certificate rotation becomes critical, at about 60 days. The admin would be advised to unpause them in order to complete the full upgrade. If the MCPs are paused, certificate rotation does not happen, which causes the cluster to become degraded and causes failures in multiple 'oc' commands, including but not limited to 'oc debug', 'oc logs', 'oc exec' and 'oc attach'. (Are we missing anything else here?) Inline alert logic:
    1. From day 60 to day 10 use the default alert.
    2. From day 10 to day 3 use the warning alert.
    3. From day 3 to 0 use the critical alert and continue to persist until resolved.

Design deliverables: 

Goal
Improve the UX on the machine config pool page to reflect the new enhancements on the cluster settings that allows users to select the ability to update the control plane only.

Background
Currently in the console, users only have the ability to complete a full cluster upgrade. For many customers, upgrades take longer than what their maintenance window allows. Users need the ability to upgrade the control plane independently of the other worker nodes. 

Ex. Upgrades of huge clusters may take too long so admins may do the control plane this weekend, worker-pool-A next weekend, worker-pool-B the weekend after, etc.  It is all at a pool level, they will not be able to choose specific hosts.

Requirements

  1. Changes to the table:
    1. Remove "Updated, updating and paused" columns. We could also consider adding column management to this table and hide those columns by default.
    2. Add "Update status" as a column, and surface the same status on cluster settings. Not true or false values but instead updating, paused, and up to date.
    3. Surface the update action in the table row.
  2. Add an inline alert that lets users know there is a 60 day window to update all worker pools. In the alert, include the sentiment that worker pools can remain paused as long as is normally safe, which means until certificate rotation becomes critical, at about 60 days. The admin would be advised to unpause them in order to complete the full upgrade. If the MCPs are paused, certificate rotation does not happen, which causes the cluster to become degraded and causes failures in multiple 'oc' commands, including but not limited to 'oc debug', 'oc logs', 'oc exec' and 'oc attach'. (Are we missing anything else here?) Add the same alert logic to this page as the cluster settings:
    1. From day 60 to day 10 use the default inline alert.
    2. From day 10 to day 3 use the warning inline alert.
    3. From day 3 to 0 use the critical alert and continue to persist until resolved.

Design deliverables: 

Feature Overview

Enable sharing ConfigMap and Secret across namespaces

Requirements

Requirement                                              | Notes | isMvp?
Secrets and ConfigMaps can get shared across namespaces  |       | YES

Questions to answer…

NA

Out of Scope

NA

Background, and strategic fit

Consumption of RHEL entitlements has been a challenge on OCP 4 since it moved to a cluster-based entitlement model compared to the node-based (RHEL subscription manager) entitlement model. In order to provide a sufficiently similar experience to OCP 3, the entitlement certificates that are made available on the cluster (OCPBU-93) should be shared across namespaces in order to prevent the need for the cluster admin to copy these entitlements into each namespace, which leads to additional operational challenges for updating and refreshing them.
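A minimal sketch of what sharing such a secret could look like with the Shared Resources API; the group/version and field names follow the tech-preview driver, and the secret name and namespace are placeholders, not a confirmed layout.

# Sketch: expose a secret from one namespace for consumption in others
# via the Shared Resource CSI driver (tech preview). Names are placeholders.
apiVersion: sharedresource.openshift.io/v1alpha1
kind: SharedSecret
metadata:
  name: shared-entitlement-cert
spec:
  secretRef:
    name: etc-pki-entitlement
    namespace: openshift-config-managed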

Documentation Considerations

Questions to be addressed:
 * What educational or reference material (docs) is required to support this product feature? For users/admins? Other functions (security officers, etc)?
 * Does this feature have doc impact?
 * New Content, Updates to existing content, Release Note, or No Doc Impact
 * If unsure and no Technical Writer is available, please contact Content Strategy.
 * What concepts do customers need to understand to be successful in [action]?
 * How do we expect customers will use the feature? For what purpose(s)?
 * What reference material might a customer want/need to complete [action]?
 * Is there source material that can be used as reference for the Technical Writer in writing the content? If yes, please link if available.
 * What is the doc impact (New Content, Updates to existing content, or Release Note)?

OCP/Telco Definition of Done
Epic Template descriptions and documentation.


Epic Goal

  • Require volumes that use the Shared Resources CSI driver to specify readOnly: true in order to create the pod
  • Reserve the "openshift-" prefix for SharedSecrets and SharedConfigMaps, such that these resources can only be created by OpenShift operators. We must do this while the driver is tech preview.

Why is this important?

  • readOnly: true must be specified in order for the driver to mount the volume correctly. If this is not set, the volume mount is rejected and the pod will be stuck in a Pending/Initializing state.
  • A validating admission webhook will ensure that the pods won't be created in such a state, improving user experience.
  • OpenShift operators may want/need to create SharedSecrets and SharedConfigMaps so they can be used as system-level resources. For example, the Insights Operator can automatically create a SharedSecret for the Simple Content Access cert.

Scenarios

  1. As a developer, I want to consume shared Secrets and ConfigMaps in my workloads so that I can have access to shared credentials and configuration.
  2. As a cluster admin, I want the Insights operator to automatically create a SharedSecret for my cluster's simple content access certificate.
  3. As a cluster admin/SRE, I want OpenShift to use SharedConfigMaps to distribute cluster certificate authorities so that data is not duplicated in ConfigMaps across my cluster.

Acceptance Criteria

  • Pods must have readOnly: true set to use the shared resource CSI Driver - admission should be rejected if this is not set.
  • Documentation updated to reflect this requirement.
  • Users (admins?) are not allowed to create SharedSecrets or SharedConfigMaps with the "openshift-" prefix.

Dependencies (internal and external)

  1. ART - to create payload image for the webhook
  2. Arch review for the enhancement proposal (Apiserver/control plane team)

Previous Work (Optional):

  1. BUILD-293 - Shared Resources tech preview

Open questions::

  1. From email exchange with David Eads:  "Thinking ahead to how we'd like to use this in builds once we're GA, are we likely to choose openshift-etc-pki-entitlement as one of our well-known names?  If we do, what sort of validation (if any) would we like to provide on the backing secret and does that require any new infrastructure?"

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

User Story

As a developer using SharedSecrets and ConfigMaps
I want to ensure all pods set readOnly: true on admission
So that I don't have pods stuck in the "Pending" state because of a bad volume mount

Acceptance Criteria

  • Pods which reference the Shared Resource CSI driver must set readOnly: true on admission.
  • If readOnly: true is not set, or is set to false, the pod should not be created.
  • Appropriate testing in place to verify behavior

QE Impact

QE will need to verify the new Pod Admission behavior

Docs Impact

Docs will need to state that readOnly is required and must be set to true.
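For context, a pod volume using the Shared Resource CSI driver looks roughly like the sketch below; the admission webhook would reject the pod unless readOnly: true is present. The driver name and volume attribute key follow the tech-preview driver and should be treated as illustrative.

# Sketch: a pod volume backed by the Shared Resource CSI driver.
# Admission should reject the pod if readOnly is missing or false.
volumes:
- name: shared-entitlement
  csi:
    driver: csi.sharedresource.openshift.io
    readOnly: true                      # required for the mount to succeed
    volumeAttributes:
      sharedSecret: shared-entitlement-cert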

PX Impact

None.

QE testing/verification of the feature - require readOnly to be true

Actions:

1. Create smoke test and submit to GitHub
2. Run script to integrate smoke test with Polarion

User Story

As an OpenShift engineer,
I want to initialize a validating admission webhook for the shared resource CSI driver
So that I can eventually require readOnly: true to be set on all pods that use the Shared Resource CSI Driver

Acceptance Criteria

  • Container image created in CI which builds a "hello world" binary for the future validating webhook.
  • ART sets up downstream build process for the image.

QE Impact

None.

Docs Impact

None.

PX Impact

None.

Notes

This is a prerequisite for implementing the validating admission webhook.
We need to have ART build the container image downstream so that we can add the correct image references for the CVO.
If we reference images in the CVO manifests which do not have downstream counterparts, we break the downstream build for the payload.

CI is capable of producing multiple images for a GitHub repository. For example, github.com/openshift/oc produces 4-5 images with various capabilities.

We did similar work in BUILD-234 - some of these steps are not required.

See also:

User Story

As an OpenShift engineer
I want the shared resource CSI Driver webhook to be installed with the cluster storage operator
So that the webhook is deployed when the CSI driver is deployed

Acceptance Criteria

  • Shared Resource CSI Driver operator deploys the webhook alongside the CSI driver
  • Cluster storage operator is updated if needed to deploy the shared resource CSI driver webhook.

Docs Impact

None - no new functional capabilities will be added

QE Impact

None - we can verify in CI that we are deploying the webhook correctly.

PX Impact

None - no new functional capabilities will be added

Notes

The scope of this story is to just deploy the "hello world" webhook with the Cluster Storage Operator.
Adding the live ValidatingWebhook configuration and service will be done in a separate story.
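As a rough sketch of that follow-up work (names, namespace, path, and scoping here are hypothetical, not the final manifest), the live configuration would be a standard ValidatingWebhookConfiguration pointing pod CREATE requests at the webhook service:

# Hypothetical sketch of the eventual webhook registration; all names are placeholders.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: sharedresource-pod-validator
webhooks:
- name: pods.sharedresource.openshift.io
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
  clientConfig:
    service:
      namespace: openshift-cluster-csi-drivers
      name: shared-resource-csi-driver-webhook
      path: /validate-pods
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]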

Complete Epics

This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were completed when this image was assembled

Summary (PM+lead)

https://issues.redhat.com/browse/AUTH-2 revealed that, in principle, Pod Security Admission is possible to integrate into OpenShift while retaining SCC functionality.

 

This epic is about the concrete steps to enable Pod Security Admission by default in OpenShift
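For context, Pod Security Admission levels are driven by standard namespace labels; enabling "restricted" by default amounts to namespaces (or a cluster-wide default) carrying labels like the sketch below. The namespace name is illustrative.

# Sketch: Pod Security Admission labels on a namespace.
# "enforce" rejects violating pods; "audit"/"warn" only annotate or warn.
apiVersion: v1
kind: Namespace
metadata:
  name: example
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted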

Motivation (PM+lead)

Goals (lead)

  • Enable Pod Security Admission in "restricted" policy level by default
  • Migrate existing core workloads to comply with the "restricted" pod security policy level

Non-Goals (lead)

  • Other OpenShift workloads must be migrated by the individual responsible teams.

Deliverables

Proposal (lead)

Enhancement - https://github.com/openshift/enhancements/pull/1010

User Stories (PM)

Dependencies (internal and external, lead)

Previous Work (lead)

Open questions (lead)

  1. ...

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

ingress-operator must comply with the restricted pod security level. The current audit warning is:

 

{   "objectRef": "openshift-ingress-operator/deployments/ingress-operator",   "pod-security.kubernetes.io/audit-violations": "would violate PodSecurity \"restricted:latest\": allowPrivilegeEscalation != false (containers \"ingress-operator\", \"kube-rbac-proxy\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers \"ingress-operator\", \"kube-rbac-proxy\" must set securityContext.capabilities.drop=[\"ALL\"]), runAsNonRoot != true (pod or containers \"ingress-operator\", \"kube-rbac-proxy\" must set securityContext.run AsNonRoot=true), seccompProfile (pod or containers \"ingress-operator\", \"kube-rbac-proxy\" must set securityContext.seccompProfile.type to \"RuntimeDefault\" or \"Localhost\")" }

dns-operator must comply with the restricted pod security level. The current audit warning is:

{   "objectRef": "openshift-dns-operator/deployments/dns-operator",   "pod-security.kubernetes.io/audit-violations": "would violate PodSecurity \"restricted:latest\": allowPrivilegeEscalation != false (containers \"dns-operator\", \"kube-rbac-proxy\" must set securityContext.allowPrivilegeEscalation=false), unre stricted capabilities (containers \"dns-operator\", \"kube-rbac-proxy\" must set securityContext.capabilities.drop=[\"ALL\"]), runAsNonRoot != true (pod or containers \"dns-operator\", \"kube-rbac-proxy\" must set securityContext.runAsNonRoot=tr ue), seccompProfile (pod or containers \"dns-operator\", \"kube-rbac-proxy\" must set securityContext.seccompProfile.type to \"RuntimeDefault\" or \"Localhost\")" }

Epic Goal

HyperShift provisions OpenShift clusters with externally managed control-planes. It follows a slightly different process for provisioning clusters. For example, HyperShift uses cluster API as a backend and moves all the machine management bits to the management cluster.  

Why is this important?

Showing machine management/cluster auto-scaling tabs in the console is likely to confuse users and cause unnecessary side effects.

Definition of Done

  • MachineConfig and MachineConfigPool should not be present, they should be either removed or hidden when the cluster is spawned using HyperShift. 
  • Cluster Settings should say the control plane is externally managed and be read-only.
  • Cluster Settings -> Configuration resources should be read-only, maybe hide the tab
  • Some resources should go in an allowlist. Most will be hidden
  • Review getting started steps

See Design Doc: https://docs.google.com/document/d/1k76JtRRHBdCCEjHPqKcYvbNVsuaGmRhWDLESWIm0mbo/edit#

 

Setup / Testing

The SERVER_FLAG controlPlaneTopology being set to External is really the driving factor here; this can be done in one of two ways:

  • Locally via a Bridge Variable, export BRIDGE_CONTROL_PLANE_TOPOLOGY_MODE="External"
  • Locally / OnCluster via modifying the window.SERVER_FLAGS.controlPlaneTopology to External in the dev tools

To test work related to cluster upgrade process, use a 4.10.3 cluster set on the candidate-4.10 upgrade channel using 4.11 frontend code.

If the Infrastructure.Status.ControlPlaneTopology is set to 'External', the console-operator will pass this information via the console-config.yaml to the console. The console pod will get re-deployed and will store the topology mode information as a SERVER_FLAG. Based on that value we need to surface a message that the control plane is externally managed and add the following changes:

  • Remove update button
  • Make channel read only
  • Link out to read only CV details page
  • Remove the ability to edit upstream configuration
  • Remove the cluster autoscaler field
  • Add an alert to the page so that users know the control plane is externally managed

In general, anything that changes a cluster version should be read only.
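For reference, the value the console-operator reads comes from the cluster Infrastructure resource; an abridged example of an externally managed control plane is sketched below (status values are illustrative).

# Abridged Infrastructure resource for a cluster with an externally managed control plane.
apiVersion: config.openshift.io/v1
kind: Infrastructure
metadata:
  name: cluster
status:
  controlPlaneTopology: External
  infrastructureTopology: HighlyAvailable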

Check section 02 for more info: https://docs.google.com/document/d/1k76JtRRHBdCCEjHPqKcYvbNVsuaGmRhWDLESWIm0mbo/edit#

 

Based on Cesar's comment we should be removing the `Control Plane` section if the infrastructure.status.controlPlaneTopology is "External".

If the Infrastructure.Status.ControlPlaneTopology is set to 'External', the console-operator will pass this information via the console-config.yaml to the console. The console pod will get re-deployed and will store the topology mode information as a SERVER_FLAG. Based on that value we need to suspend the kubeadmin notifier in the global notifications, since it contains a link for updating the cluster OAuth configuration (see attachment).

 

 

If the Infrastructure.Status.ControlPlaneTopology is set to 'External', the console-operator will pass this information via the console-config.yaml to the console. Console pod will get re-deployed and will store the topology mode information as a SERVER_FLAG. Based on that value we need to suspend these notifications:

  • cluster upgrade notifications
  • new channel available notifications

For these we will need to check `ControlPlaneTopology`, if it's set to 'External' and also check if the user can edit cluster version(either by creating a hook or an RBAC call, eg. `canEditClusterVersion`)

 

Check section 05 for more info: https://docs.google.com/document/d/1k76JtRRHBdCCEjHPqKcYvbNVsuaGmRhWDLESWIm0mbo/edit#

If the Infrastructure.Status.ControlPlaneTopology is set to 'External', the console-operator will pass this information via the console-config.yaml to the console. The console pod will get re-deployed and will store the topology mode information as a SERVER_FLAG. Based on that value we need to remove the ability to “Add identity providers” under “Set up your Cluster”. In addition to the getting started card, we should remove the ability to update a cluster on the details card when applicable (anything that changes a cluster version should be read only).

Summary of changes to the overview page:

  • Remove the ability to “Add identity providers” under “Set up your Cluster”
  • Remove cluster update CTA from the details card
  • Remove update alerts from the status card

Check section 03 for more info: https://docs.google.com/document/d/1k76JtRRHBdCCEjHPqKcYvbNVsuaGmRhWDLESWIm0mbo/edit#

Epic Goal

Why is this important?

  • So the UX satisfies current trends, where dark mode is becoming a standard for modern services.

Acceptance Criteria

  • OCP admin console must be rendered in a preferred mode based on `prefers-color-scheme` media query
  • OCP admin console must be rendered in a preferred mode selected in the User Setting page
  • Create a follow-up epic/story for listing and tracking changes needed in OCP console's dynamic plugins

Dependencies (internal and external)

  1. PatternFly - Dark mode PF variables

Previous Work (Optional):

  1. Mike Coker has worked on a POC from the PF point of view on both the admin and dev console, and the screenshot results are listed below along with the repo branch. Also listed is a document covering some of the common issues found when putting together the admin console POC. https://github.com/mcoker/console/tree/dark-theme
    Background POC work completed for reference:

PatternFly Dark Theme Handbook: https://docs.google.com/document/d/1mRYEfUoOjTsSt7hiqjbeplqhfo3_rVDO0QqMj2p67pw/edit

Admin Console -> Workloads & Pods

Dev Console -> Gotcha pages: Observe Dashboard and Metrics, Add, Pipelines: builder, list, log, and run

Open questions::

  1. Who should be responsible for updating DynamicPlugins to be able to render in dark mode?

As a developer, I want to be able to scope the changes needed to enable dark mode for the admin console. As such, I need to investigate how much of the console will display dark mode using PF variables and also define a list of gotcha pages/components which will need special casing above and beyond PF variable settings.

 

Acceptance criteria:

As a developer, I want to be able to fix remaining issues from the spreadsheet of issues generated after the initial pass and spike of adding dark theme to the console. As such, I need to make sure to either complete all remaining issues from the spreadsheet, or create a bug or future story for any remaining issues in these two documents.

 

Acceptance criteria:

An epic we can duplicate for each release to ensure we have a place to catch things we ought to be doing regularly but can tend to fall by the wayside.

The Cluster Dashboard Details Card Protractor integration test was failing at a high rate, and despite multiple attempts to fix it, was never fully resolved, so it was disabled as a way to fix https://bugzilla.redhat.com/show_bug.cgi?id=2068594. Migrating this entire file to Cypress should give us better debugging capability, which is what was done to fix a similarly problematic project dashboard Protractor test.

This epic contains all the Dynamic Plugins related stories for OCP release-4.11 

Epic Goal

  • Track all the stories under a single epic

Acceptance Criteria

  •  

We have a Timestamp component for consistent display of dates and times that we should expose through the SDK. We might also consider a hook that formats dates and times for places where you don't want or can't use the component, e.g. times on a chart.

This will become important when we add a user preference for dates so that plugins show consistent dates and times as console. If I set my user preference to UTC dates, console should show UTC dates everywhere.

 

AC:

  • Expose the Timestamp component inside the SDK. 
  • Replace the connect with useSelector hook
  • Keep the original component and proxy it to the new one in the SDK

 

 

 

cc Jakub Hadvig Sho Weimer 

In the 4.11 release, a console.openshift.io/default-i18next-namespace annotation is being introduced. The annotation indicates whether the ConsolePlugin contains localization resources. If the annotation is set to "true", the localization resources from the i18n namespace named after the dynamic plugin (e.g. plugin__kubevirt), are loaded. If the annotation is set to any other value or is missing on the ConsolePlugin resource, localization resources are not loaded. 

 

In case these resources are not present in the dynamic plugin, the initial console load will be slowed down. For more info check BZ#2015654
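A minimal sketch of a ConsolePlugin carrying the annotation described above; the plugin name and service details are illustrative, and the annotation name follows the first paragraph of this card.

# Sketch: a ConsolePlugin declaring that it ships i18n resources
# in the plugin__kubevirt namespace.
apiVersion: console.openshift.io/v1alpha1
kind: ConsolePlugin
metadata:
  name: kubevirt
  annotations:
    console.openshift.io/default-i18next-namespace: "true"
spec:
  displayName: KubeVirt Plugin
  service:
    name: kubevirt-plugin
    namespace: kubevirt
    port: 9443
    basePath: /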

 

AC:

  • console-operator should be checking for the new console.openshift.io/use-i18n annotation, update the console-config.yaml accordingly and redeploy the console server
  • console server should pick up the changes in the console-config.yaml and only load the i18n namespaces that are available

 

Follow up of https://issues.redhat.com/browse/CONSOLE-3159

 

 

Currently, you need to navigate to

Cluster Settings ->
Global configuration ->
Console (operator) config ->
Console plugins

to see and manage plugins. This takes a lot of clicks and is not discoverable. We should look at surfacing plugin details where they're easier to find, perhaps on the Cluster Settings page, or at least provide a more convenient link somewhere in the UI.

AC: Add the Dynamic Plugins section to the Status Card in the overview that will contain:

  • count of active and non-active plugins
  • link to the ConsolePlugins instances page
  • status of the loaded plugins and a breakout of errors

cc Ali Mobrem Robb Hamilton

Currently, enabled plugins can fail to load for a variety of reasons. For instance, plugins don't load if the plugin name in the manifest doesn't match the ConsolePlugin name or the plugin has an invalid codeRef. There is no indication in the UI that something has gone wrong. We should explore ways to report this problem in the UI to cluster admins. Depending on the nature of the issue, an admin might be able to resolve the issue or at least report a bug against the plugin.

The message about failing could appear in the notification drawer and/or console plugins tab on the operator config. We could also explore creating an alert if a plugin is failing.

 

AC:

  • Add notification into the Notification Drawer in case a Dynamic Plugin will error out during load.
  • Render these errors in the status card, notification section, as well.
  • For each failed plugin we should create a separate notification.

We need to provide a base for running integration tests using the dynamic plugins. The tests should initially:

  • Create a deployment and service to run the dynamic demo plugin
  • Update the console operator config to enable the plugin
  • Wait for the plugin to be available
  • Test at least one extension point used by the plugin (such as adding items to the nav)
  • Disable the plugin when done

Once the basic framework is in place, we can update the demo plugin and add new integration tests when we add new extension points.
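The "update the console operator config to enable the plugin" step above amounts to adding the plugin name to the operator's plugin list, roughly as sketched below; the plugin name is illustrative.

# Sketch: enabling a dynamic plugin on the console operator config.
apiVersion: operator.openshift.io/v1
kind: Console
metadata:
  name: cluster
spec:
  plugins:
  - console-demo-plugin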

https://github.com/openshift/console/tree/master/frontend/dynamic-demo-plugin

 

https://github.com/openshift/enhancements/blob/master/enhancements/console/dynamic-plugins.md

 

https://github.com/openshift/console/tree/master/frontend/packages/console-plugin-sdk

Goal

  • Add the ability for users to select supported but not recommended updates.
  • Refine workflow when both "upgradeable=false" and "supported-but-not-recommended" updates occur

Background
RFE: for 4.10, Cincinnati and the cluster-version operator are adding conditional updates (a.k.a. targeted edge blocking): https://issues.redhat.com/browse/OTA-267

High-level plans in https://github.com/openshift/enhancements/blob/master/enhancements/update/targeted-update-edge-blocking.md#update-client-support-for-the-enhanced-schema

Example of what the oc adm upgrade UX will be in https://github.com/openshift/enhancements/blob/master/enhancements/update/targeted-update-edge-blocking.md#cluster-administrator.

The oc implementation landed via https://github.com/openshift/oc/pull/961.
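An abridged sketch of what the cluster-version operator exposes for a conditional (supported-but-not-recommended) update, following the shape described in the enhancement linked above; field values and the risk entry are illustrative, and the exact schema should be taken from the enhancement rather than this sketch.

# Abridged ClusterVersion status fragment with a conditional update.
status:
  conditionalUpdates:
  - release:
      version: 4.11.1
      image: quay.io/openshift-release-dev/ocp-release@sha256:...
    risks:
    - name: ExampleRisk
      url: https://access.redhat.com/solutions/example   # "known risks" link surfaced in the console
      message: Clusters with configuration X may hit issue Y during this update.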

Design

  • Use case 01: "supported but not recommended" occurs to the latest version:
    • Add an info icon next to the version on update path with a pop-over to explain about why updating to this version is supported, but not recommended and a link to known risks
    • Identify the difference in "recommended" versions, "supported but not recommended" versions, and "blocked" versions (upgradeable=false) in the + more modal.
    • The latest version is pre-selected in the dropdown in the update modal with an inline alert to inform users about the supported-but-not-recommended version, with a link to known risks. Users can choose to update to another recommended version, update to a supported-but-not-recommended one, or wait.
    • The "recommended" and "supported but not recommended" updates are separated in the dropdown.
    • If a user selects a "recommended" update, the inline alert disappears.
  • Use case 02: When both "upgradeable=false" and "supported but not recommended" occur:
    • Add an alert banner to explain why users shouldn’t update to the latest version and link to how to resolve on the cluster settings details page. Users have the options to resolve the issue, update to a patch version, or wait.
    • If users open the update modal without resolving the "upgradeable=false" issue, the next recommended version is pre-selected. An expandable link "View blocked versions (#)" is included under the dropdown to show "upgradeable=false" versions with resolve link.
    • If users resolve the "upgradeable=false" issue, the cluster settings page will change to use case 01
    • Question: Priority on changing the upgradeable=false alert banner in update modal and blocked versions in dropdown

See design doc: https://docs.google.com/document/d/1Nja4whdsI5dKmQNS_rXyN8IGtRXDJ8gXuU_eSxBLMIY/edit#

See marvel: https://marvelapp.com/prototype/h3ehaa4/screen/86077932

Update the cluster settings page to inform the user when the latest available update is supported but not recommended. Add an informational popover to the latest version in the update path visualization.

The "Update Version" modal on the cluster settings page should be updated to give users information about recommended, not recommended, and blocked update versions.

  • When the modal is opened, the latest recommended update version should be pre-selected in the version dropdown.
  • Blocked versions should no longer be displayed in the version dropdown, and should instead be displayed in a collapsible field below the dropdown.
  • When blocked versions are present, a link should be provided to the cluster operator tab. The version dropdown itself should have two labeled sections: "Recommended" and "Supported but not recommended".
  • When the user selects a "Supported but not recommended" item from the version dropdown, an inline info alert should appear below the version selection field and should provide a link to known risks associated with the selected version. This is an external link provided through the ClusterVersion API.

Epic Goal

  • Add telemetry so that we know how image stream features are used.

Why is this important?

  • We have a long standing epic to create image streams v2. We need to better understand how image streams are used today.

Scenarios

  1. ...

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • ...

Dependencies (internal and external)

  1. ...

Previous Work (Optional):

Open questions::

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

Epic Goal

  • Make the image registry distributed across availability zones.

Why is this important?

  • The registry should be highly available and zone failsafe.

Scenarios

  1. As an administrator I want to rely on a default configuration that spreads image registry pods across topology zones so that I don't suffer from a long recovery time (>6 mins) in case of a complete zone failure if all pods are impacted.

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.

Dependencies (internal and external)

  1. Pod's topologySpreadConstraints

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: https://github.com/openshift/cluster-image-registry-operator/pull/730
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

Story: As an administrator I want to rely on a default configuration that spreads image registry pods across topology zones so that I don't suffer from a long recovery time (>6 mins) in case of a complete zone failure if all pods are impacted.

Background: The image registry currently uses affinity/anti-affinity rules to spread registry pods across different hosts. However this might cause situations in which all pods end up on hosts of a single zone, leading to a long recovery time of the registry if that zone is lost entirely. Due to problems in the past with adherence to the preferred setting of the anti-affinity rule, the configuration was forced with required instead, and the rules became hard constraints. With zones as constraints the internal registry would no longer deploy in environments with a single zone, e.g. the internal CI environment. Pod topology spread constraints are a newer API supported in OCP which can relax constraints in case they cannot be satisfied. Details here: https://docs.openshift.com/container-platform/4.7/nodes/scheduling/nodes-scheduler-pod-topology-spread-constraints.html
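A minimal sketch of the kind of constraint the operator would set on the registry deployment; the selector label is illustrative.

# Sketch: spread registry pods across zones, but still schedule
# if the constraint cannot be satisfied (e.g. single-zone clusters).
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: topology.kubernetes.io/zone
  whenUnsatisfiable: ScheduleAnyway
  labelSelector:
    matchLabels:
      docker-registry: default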

Acceptance criteria:

  • by default the internal registry is deployed with at least two replicas
  • by default the topology constraints should be on a zone basis, so that by default one registry pod is scheduled in each zone
  • when constraints can't be satisfied the registry should deploy anyway
  • we should not do this in SNO environments
  • the registry should still work on SNO environments

Open Questions:

  • what happens in environments where the storage is zone dependent?
The details of this Jira Card are restricted (Red Hat Employee and Contractors only)

As an OpenShift administrator
I want to provide the registry operator with a custom certificate authority for S3 storage
so that I can use a third-party S3 storage provider.
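A minimal sketch of what this could look like, assuming a trustedCA-style reference under spec.storage.s3; the trustedCA field name is hypothetical here, and the acceptance criteria below define the actual contract.

# Hypothetical sketch only: referencing a CA bundle ConfigMap from openshift-config
# for a third-party S3 endpoint. The trustedCA field name is an assumption.
apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
  name: cluster
spec:
  storage:
    s3:
      bucket: my-registry-bucket
      region: us-east-1
      regionEndpoint: https://minio.example.com
      trustedCA:
        name: my-s3-ca        # ConfigMap in the openshift-config namespace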

Acceptance criteria

  1. Users can specify a configmap name (from openshift-config) in config.imageregistry/cluster's spec.storage.s3.
  2. The operator uses the CA from this configmap to check the S3 bucket.
  3. The image registry pod uses the CA from this configmap to access the S3 bucket.
  4. When a custom CA is defined, the operator/image-registry should still trust certificate authorities that are used by Amazon S3 and other well-known CAs.
  5. An end-to-end test that runs minio and checks the image registry becomes healthy with it.

Goal

Remove Jenkins from the OCP Payload.

Problem

  • Jenkins images are non-trivial in size and impact the experience around OCP payloads
  • Security advisories cannot be handled once; they must be handled against all actively supported OCP releases, adding to the response time for handling said advisories
  • Some customers may not want to upgrade Jenkins as OCP upgrades (making this configurable is more ideal)

Why is this important

  • This is an engineering-motivated item to reduce costs so we have more cycles for strategic work
  • Aside from the team itself, top-level OCP architects want this to reduce the image size and improve the general OCP upgrade experience
  • It sends a mixed message with respect to what is strategic CI/CD when Jenkins is baked into OCP, but Tekton/Pipelines is an add-on, day-2 install sort of thing

Dependencies (internal and external)

See epic linking - an alternative non-payload image needs to be available to provide a relatively seamless migration

 

Also, the EP for this is approved and merged at https://github.com/openshift/enhancements/blob/master/enhancements/builds/remove-jenkins-payload.md

Estimate (xs, s, m, l, xl, xxl):

Questions:

       PARTIAL ANSWER ^^:  confirmed with Ben Parees in https://coreos.slack.com/archives/C014MHHKUSF/p1646683621293839 that EP merging is currently sufficient OCP "technical leadership" approval.

 

Previous work

 

Customers

assuming none

User Stories

 

As maintainers of the OpenShift Jenkins component, we need to run Jenkins CI for PR testing against openshift/jenkins, openshift/jenkins-sync-plugin, openshift/jenkins-client-plugin, openshift/jenkins-openshift-login-plugin, using images built in the CI pipeline but not injected into CI test clusters via the samples operator overriding the jenkins sample imagestream with the jenkins payload image.

 

As maintainers of the OpenShift Jenkins component, we need Jenkins periodics for the client and sync plugins to run against the latest non payload, CPaas image, promoted to CI's image locations on quay.io, for the current release in development.

 

As maintainers of the OpenShift Jenkins component, we need Jenkins-related tests outside of very basic Jenkins Pipeline Strategy Build Config verification removed from openshift-tests in OpenShift Origin, using a non-payload, CPaas image pertinent to the branch in question.

Acceptance criteria

  • all PR CI tests do not utilize samples operator manipulation of the jenkins imagestream with the in-payload image, but rather use images that include the PR's changes
  • all periodic CI tests do not utilize samples operator manipulation of the jenkins imagestream with the in-payload image, but rather use CI-promoted images for the current release pushed to quay.io

At a high level, we ideally want to vet the new CPaas image via CI and periodics BEFORE we start changing the samples operator so that it no longer manipulates the jenkins imagestream (our tests will override the samples operator override)

QE Impact

NONE ... QE should wait until JNKS-254

Docs Impact

NONE

PX Impact

 

NONE

Launch Checklist

Dependencies identified
Blockers noted and expected delivery timelines set
Design is implementable
Acceptance criteria agreed upon
Story estimated

Notes

  • Our CSI shared resource experience will help us here
  • but the old IMAGE_FORMAT stuff is deprecated, and does not work well with step registry stuff
  • instead, we need to use https://docs.ci.openshift.org/docs/architecture/ci-operator/#dependency-overrides
  • Makefile-level logic will use `oc tag` to update the jenkins imagestream created as part of samples, overriding the use of the in-payload image with the image built by the PR, or for periodics, with what has been promoted to quay.io
  • Ultimately, a CI step registry entry capturing the `oc tag` imagestream update logic is probably the end goal
  • JNKS-268 might change how we do periodics, but the current thought is to get existing periodics working with the CPaas image first

Possible staging

1) before CPaas is available, we can validate images generated by PRs to openshift/jenkins, openshift/jenkins-sync-plugin, openshift/jenkins-client-plugin by taking the image built by the PR (where the info needed to get the right image from the CI registry is in the IMAGE_FORMAT env var) and then doing an `oc tag --source=docker <PR image ref> openshift/jenkins:2` to replace the use of the payload image in the jenkins imagestream in the openshift namespace with the PR's image

2) insert 1) in https://github.com/openshift/release/blob/master/ci-operator/step-registry/jenkins/sync-plugin/e2e/jenkins-sync-plugin-e2e-commands.sh and https://github.com/openshift/release/blob/master/ci-operator/step-registry/jenkins/client-plugin/tests/jenkins-client-plugin-tests-commands.sh where you test for IMAGE_FORMAT being set

3) or, instead of 2), you update the Makefiles for the plugins to call a script that does the same sort of thing: see what is in IMAGE_FORMAT, and if it has something, do the `oc tag`

 

https://github.com/openshift/release/pull/26979 is a prototype of how to take the image built from a PR (and conceivably the image built for periodics) and tag it into the jenkins imagestream in the openshift namespace in the test cluster

 

Epic Goal

  • Remove this UI, which we cannot support, from our stack.

Why is this important?

  • Reduce support burden.
  • Remove Bugzilla burden of addressing continuous CVEs found in this project.

Acceptance Criteria

  • All Prometheus upstream UI links are removed
  • Related documentation is updated
  • Ports/routes etc configured to expose access to this UI are removed such that no configuration we provide enables access to this UI or its codepaths.
  • There is no reason any CVEs found in this UI would ever require intervention by the Monitoring Team.

Dependencies (internal and external)

  1. Make the Prometheus Targets information available in Console UI (https://issues.redhat.com/browse/MON-1079)

Previous Work (Optional):

Open questions::

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

After installing or upgrading to the latest OCP version, the existing OpenShift route to the prometheus-k8s service is updated to be a path-based route to '/api/v1'.
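
A minimal sketch of what the path-based route could look like. The resource names, service port name and TLS termination are assumptions based on the standard openshift-monitoring setup, not the exact manifest shipped by CMO:

# Sketch: expose only /api/v1 on the prometheus-k8s service via the route.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: prometheus-k8s
  namespace: openshift-monitoring
spec:
  path: /api/v1          # only the API paths are reachable via the route
  to:
    kind: Service
    name: prometheus-k8s
  port:
    targetPort: web      # assumed service port name
  tls:
    termination: reencrypt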

DoD:

  • It is not possible to access the Prometheus UI via the OpenShift route
  • Using a bearer token with sufficient permissions, it is possible to access the /api/v1/* endpoints via the OpenShift route.
The details of this Jira Card are restricted (Red Hat Employee and Contractors only)

Following up on https://issues.redhat.com/browse/MON-1320, we added three new CLI flags to Prometheus to apply different limits on the samples' labels. These new flags are available starting from Prometheus v2.27.0, which will most likely be shipped in OpenShift 4.9.

The limits that we want to look into for OCP are the following ones:

# Per-scrape limit on number of labels that will be accepted for a sample. If
# more than this number of labels are present post metric-relabeling, the
# entire scrape will be treated as failed. 0 means no limit.
[ label_limit: <int> | default = 0 ]

# Per-scrape limit on length of labels name that will be accepted for a sample.
# If a label name is longer than this number post metric-relabeling, the entire
# scrape will be treated as failed. 0 means no limit.
[ label_name_length_limit: <int> | default = 0 ]

# Per-scrape limit on length of labels value that will be accepted for a sample.
# If a label value is longer than this number post metric-relabeling, the
# entire scrape will be treated as failed. 0 means no limit.
[ label_value_length_limit: <int> | default = 0 ]

We could benefit from these by setting relatively high values that would only be breached in cases of unbounded label cardinality, in which case the offending targets are rejected completely; one possible shape for this configuration is sketched after the DoD below.

DoD:

  • Being able to configure label scrape limits for UWM
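
One possible shape, sketched at the level of the Prometheus Operator resource that CMO manages for UWM. Whether CMO exposes these exact fields, and the limit values shown, are assumptions, not the final API:

# Sketch: enforced label limits on the user-workload Prometheus.
# The enforced* fields exist in the Prometheus Operator CRD; how CMO would
# surface them to users is an assumption here.
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: user-workload
  namespace: openshift-user-workload-monitoring
spec:
  enforcedLabelLimit: 25               # per-scrape max number of labels
  enforcedLabelNameLengthLimit: 100    # per-scrape max label name length
  enforcedLabelValueLengthLimit: 500   # per-scrape max label value length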

Epic Goal

When users configure CMO to interact with systems outside of an OpenShift cluster, we want to provide an easy way to add the cluster ID to the data sent.

Why is this important?

Technically this can be achieved today by adding an identifying label to the remote_write configuration for a given cluster. The operator adding the remote_write integration needs to take care that the label is unique across the managed fleet of clusters. This, however, adds management complexity. Any given cluster already has a pseudo-unique datum that can be used for this purpose.

  • Starting in 4.9 we support the Prometheus remote_write feature to send metric data to a storage integration outside of the cluster similar to our own Telemetry service.
  • In Telemetry we already use the cluster ID to distinguish the various clusters.
  • For users of remote_write this could add an easy way to add such distinguishing information.

Scenarios

  1. An organisation with multiple OpenShift clusters wants to store their metric data centrally in a dedicated system and uses remote_write in all their clusters to send this data. When querying the centralized storage, metadata (here a label) is needed to separate the data of the various clusters.
  2. Service providers who manage multiple clusters for multiple customers via a centralized storage system need distinguishing metadata too. See https://issues.redhat.com/browse/OSD-6573 for example

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • Document how to use this feature

Dependencies (internal and external)

  1. none

Previous Work (Optional):

  1. none

Open questions::

  1.  

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

Implementation proposal:

 

Expose a flag in the CMO configuration that is false by default (keeps backward compatibility) and, when set to true, will add the _id label to a remote_write configuration. More specifically, it will be added to the top of a remote_write relabel_config list via the replace action. This will add the label as expected, but additionally a user could alter this label in a later relabel config to suit any specific requirements (say, rename the label or add additional information to the value).
The location of this flag is the remote_write Spec, so this can be set for individual remote_write configurations.

We currently use a sample app to e2e test remote write in CMO.
In order to test the addition of the cluster_id relabel config, we need to confirm that the metrics sent actually have the expected label.
For this test we should use Prometheus as the remote_write target. This allows us to query the metrics sent via remote write and confirm they have the expected label.

Add an optional boolean flag to CMO's definition of RemoteWriteSpec that, if true, adds an entry in the spec's WriteRelabelConfigs list.

I went with adding the relabel config to all user-supplied remote_write configurations. This path has no risk for backwards compatibility (unless users use the `tmp_openshift_cluster_id` label, which seems unlikely) and reduces overall complexity, as well as documentation complexity.

The entry should look like what is already added to the telemetry remote write config, and it should be added as the first entry in the list, before any user-supplied relabel configs (a sketch follows below).
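
A hedged sketch of what the effective remote_write entry could look like after CMO prepends the cluster ID relabel config. The temporary source label name and the URL are illustrative assumptions; the card only specifies that the entry mirrors the telemetry relabel config and comes before user-supplied entries:

# Sketch of the effective remote_write section after CMO prepends the entry.
remoteWrite:
  - url: https://remote-storage.example.com/api/v1/write   # user-supplied endpoint (illustrative)
    writeRelabelConfigs:
      - sourceLabels: [__tmp_openshift_cluster_id__]        # assumed temporary label carrying the cluster ID
        targetLabel: _id
        action: replace
      # ...user-supplied relabel configs, if any, follow here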

Epic Goal

  • Offer the option to double the scrape intervals for CMO controlled ServiceMonitors in single node deployments
  • Alternatively automatically double the same scrape intervals if CMO detects an SNO setup

The potential target ServiceMonitors are listed below (a sketch of the interval change follows this list):

  • kubelet
  • kube-state-metrics
  • node-exporter
  • etcd
  • openshift-state-metrics
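
For illustration only, doubling the interval on one of these ServiceMonitors would look roughly like the following. The selector and port name are assumptions, and the card leaves open how CMO would apply the change in SNO setups:

# Sketch: a CMO-managed ServiceMonitor with the scrape interval doubled
# from the 30s default to 60s. Selector and port name are assumptions.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: node-exporter
  namespace: openshift-monitoring
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: node-exporter
  endpoints:
    - port: https
      interval: 60s   # doubled from the 30s default for single-node deployments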

Why is this important?

  • Reduce CPU usage in SNO setups
  • Specifically doubling the scrape interval is important because:
  1. we are confident that this has the least chance of interfering with existing rules. We typically have rate queries over the last 2 minutes (no shorter time window). With 30-second scrape intervals (the current default) this gives us 4 samples in any 2-minute window. rate needs at least 2 samples to work, and we want another 2 for failure tolerance. Doubling the scrape interval will still give us 2 samples in most 2-minute windows. If a scrape fails, a few rule evaluations might fail intermittently.
  2. We expect a measurable reduction of CPU resources (see previous work)

Scenarios

  1. RAN deployments (Telco Edge) are SNO deployments. In these setups a full CMO deployment is often not needed and the default setup consumes too many resources. OpenShift as a whole has only very limited CPU cycles available and too many cycles are spent on Monitoring

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.
  • ...

Previous Work (Optional):

  1. https://issues.redhat.com/browse/MON-1569

Open questions:

  1. Whether doubling some scrape intervals reduces CPU usage to fit into the assigned budget

Non goals

  • Allow arbitrarily long scrape intervals. This will interfere with alerting and recording rules
  • Implement a global override to scrape intervals.
The details of this Jira Card are restricted (Red Hat Employee and Contractors only)

Description

As a user, I want to understand which service bindings connected a service to a component successfully or not. Currently it is really difficult to understand and requires inspecting each ServiceBinding resource (YAML).

Acceptance Criteria

  1. Show a status badge on the SB details page
  2. Show a Status field in the right column of the SB details page
  3. Show the Status field in the right column of the Topology side panel when a SB is selected
  4. Show an indicator in the Topology view which will help to differentiate when the service binding is in error state
  5. Define the available statuses & associated icons 🥴
    1. Connected
    2. Error
  6. Error states defined by the SB conditions … if any of these 3 are not True, the status will be displayed as Error

Additional Details:

See also https://docs.google.com/document/d/1OzE74z2RGO5LPjtDoJeUgYBQXBSVmD5tCC7xfJotE00/edit

Description

As a user, I want the topology view to be less cluttered as I zoom out, showing only information that I can discern, while still being able to get a feel for the status of my project.

Acceptance Criteria

  1. When zoomed to 50% scale, all labels & decorators will be hidden. Labels are shown when hovering over a node
  2. When zoomed to 30% scale, all labels, decorators, pod rings & icons will be hidden. The node shape remains the same, and the background is either white, yellow or red. The background color is determined based on the aggregate status of pods, alerts, builds and pipelines. A tooltip is available showing the node name as well as the "things" that are contributing to the warning/error status.

Additional Details:

Problem:

This epic is mainly focused on the 4.10 Release QE activities

Goal:

1. Identify the scenarios for automation
2. Segregate the test scenarios into smoke, regression and other user stories
a. Update the https://docs.jboss.org/display/ODC/Automation+Status+Report
3. Align with layered operator teams for updating scripts
4. Work closely with the dev team for epic automation
5. Create the automation scripts using Cypress
6. Implement CI for nightly builds
7. Execute scripts on a sprint basis

Why is it important?

To track the QE progress in one place on the 4.10 Release Confluence page

Use cases:

  1. <case>

Acceptance criteria:

  1. <criteria>

Dependencies (External/Internal):

Design Artifacts:

Exploration:

Note:

Acceptance criteria:

  1. Execute the automation scripts on ODC nightly builds in OpenShift CI (prow) periodically
  2. provide a separate job for each "plugin" (like pipelines, knative, etc.)

Goal:

This epic covers a number of customer requests (RFEs) as well as usability improvements.

Why is it important?

Customer satisfaction as well as improved usability.

Acceptance Criteria

  1. Allow users to re-arrange the resources which have been added to the nav by the user
  2. Improved user experience (form-based experience)
    1. Form-based editing of Routes
    2. Form-based creation and editing of Config Maps
    3. Form-based creation of Deployments
  3. Improved discovery
    1. Include Share my project on the Add page to increase discoverability
    2. NS Helm Chart Repo
      1. Add tile to Add page for discoverability
      2. Provide a form driven creation experience
      3. User should be able to switch back and forth from Form/YAML
      4. change the intro text to the below & have the link in the intro text bring up the full page form
        1. Browse for charts that help manage complex installations and upgrades. Cluster administrators can customize the content made available in the catalog. Alternatively, developers can try to configure their own custom Helm Chart repository.

Dependencies (External/Internal):

None

Exploration:

Miro board from Epic Exploration

Description

As a user, I want to use a form to create Deployments

Acceptance Criteria

  1. Use existing edit Deployment form component for creating Deployments
  2. Display the form when `Create Deployment` is clicked on the Deployments Search page in the Dev perspective
  3. The `Create Deployment` button in the Deployments list page & the search page in the Admin perspective should have a similar experience.

Additional Details:

Edit deployment form ODC-5007

Description

As a user, I should be able to switch between the form and yaml editor while creating the ProjectHelmChartRepository CR.

Acceptance Criteria

  1. Convert the create form into a form-yaml switcher
  2. Display this form-yaml view in Search -> ProjectHelmChartRepositories in both perspectives

Additional Details:

Form component https://github.com/openshift/console/pull/11227

Problem:

Currently we are only able to get limited telemetry from the Dev Sandbox, and none from any of our managed clusters or on-prem clusters.

Goals:

  1. Enable gathering segment telemetry whenever cluster telemetry is enabled on OSD clusters
  2. Have our OSD clusters opt into telemetry by default
  3. Work with PM & UX to identify additional metrics to capture in addition to what we have enabled currently on Sandbox.
  4. Ability to get a single report from woopra across all of our Sandbox and OSD clusters.
  5. Be able to generate a report including metrics of a single cluster or all clusters of a certain type ( sandbox, or OSD)

Why is it important?

In order to properly analyze usage and the user experience, we need to be able to gather as much data as possible.

Acceptance Criteria

  1. Extend console backend (bridge) to provide configuration as SERVER_FLAGS
    // JS type
    telemetry?: Record<string, string>
    
    1. Read the annotation of the cluster ConfigMap for telemetry data and pass them into the internal serverconfig.
    2. Pass through this internal serverconfig and export it as SERVER_FLAGS.
    3. Add a new --telemetry CLI option so that the telemetry options could be tested in a dev environment:
      ./bin/bridge --telemetry SEGMENT_API_KEY=a-key-123-xzy
      ./bin/bridge --telemetry CONSOLE_LOG=debug
      
  2. TBD: In the best case the new annotation could be read from the cluster ConfigMap (a sketch follows this list)...
    1. Otherwise update the console-operator to pass the annotation from the console cluster configuration to the console ConfigMap.
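
A purely illustrative sketch of how telemetry options could be carried as annotations on the console ConfigMap and surfaced to the bridge as SERVER_FLAGS. The annotation key format and the ConfigMap chosen here are hypothetical and not defined by this card; the key value reuses the example from the CLI flags above:

# Sketch only: hypothetical annotation keys carrying telemetry options.
apiVersion: v1
kind: ConfigMap
metadata:
  name: console-config
  namespace: openshift-console
  annotations:
    telemetry/SEGMENT_API_KEY: a-key-123-xzy   # hypothetical key format
    telemetry/CONSOLE_LOG: debug               # hypothetical key format
data: {}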

Additional Details:

  1. More information about the integration with the backend could be found in the Telemetry on OSD clusters Google Doc

Goal:
Enhance oc adm release new (and related verbs info, extract, mirror) with heterogeneous architecture support

tl;dr

oc adm release new (and related verbs info, extract, mirror) would be enhanced to optionally allow the creation of manifest list release payloads. The manifest list flow would be triggered whenever the CVO image in an imagestream was a manifest list. If the CVO image is a standard manifest, the generated release payload will also be a manifest. If the CVO image is a manifest list, the generated release payload would be a manifest list (containing a manifest for each arch possessed by the CVO manifest list).

In either case, oc adm release new would permit non-CVO component images to be manifest or manifest lists and pass them through directly to the resultant release manifest(s).

If a manifest list release payload is generated, each architecture specific release payload manifest will reference the same pullspecs provided in the input imagestream.

 

More details in Option 1 of https://docs.google.com/document/d/1BOlPrmPhuGboZbLZWApXszxuJ1eish92NlOeb03XEdE/edit#heading=h.eldc1ppinjjh

Incomplete Epics

This section includes Jira cards that are linked to an Epic, but the Epic itself is not linked to any Feature. These epics were not completed when this image was assembled

Epic Goal

  • Update image registry dependencies (Kubernetes and OpenShift) to the latest versions.

Why is this important?

  • New versions usually bring improvements that are needed by the registry and help with getting updates for z-stream.

Scenarios

  1. As an OpenShift engineer, I want my components to use the latest versions of dependencies, so that they get fixes for known issues and can be easily updated in z-stream.

Acceptance Criteria

  • CI - MUST be running successfully with tests automated

Dependencies (internal and external)

  1. Kubernetes 1.24

Previous Work (Optional):

  1. IR-210

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • QE - Test plans in Polarion: <link or reference to Polarion>

As an OpenShift engineer
I want image-registry to use the latest k8s libraries
so that image-registry can benefit from new upstream features.

Acceptance criteria

  • image-registry uses k8s.io/api v1.24.z
  • image-registry uses latest openshift/api, openshift/library-go, openshift/client-go

Epic Goal

  • Provide a dedicated dashboard for NVIDIA GPU usage visualization in the OpenShift Console.

Why is this important?

  • Customers that use GPUs in their clusters usually have GPU workloads as the main purpose of their cluster. As such, it makes much more sense to show details about their usage of GPGPU resources AND CPU/RAM rather than just CPU/RAM

Scenarios

  1. As an admin of a cluster dedicated to data science, I want to quickly find out how much of my very costly resources are currently in use and if things are getting queued due to lack of resources

Acceptance Criteria

  • CI - MUST be running successfully with tests automated
  • Release Technical Enablement - Provide necessary release enablement details and documents.

Dependencies (internal and external)

  1. The NVIDIA GPU Operator must export to prometheus the relevant data

Open questions::

  1. Will NVIDIA agree to these extra data exports in their GPU Operator?

I asked Zvonko Kaiser and he seemed open to it. I need to confirm with Shiva Merla

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>


Epic Goal

  • Run OpenShift builds that do not execute as the "root" user on the host node.

Why is this important?

  • OpenShift builds require an elevated set of capabilities to build a container image
  • Builds currently run as root to maintain adequate performance
  • Container workloads should run as non-root from the host's perspective. Containers running as root are a known security risk.
  • Builds currently run as root and require a privileged container. See BUILD-225 for removing the privileged container requirement.

Scenarios

  1. Run BuildConfigs in a multi-tenant environment
  2. Run BuildConfigs in a heightened security environment/deployment

Acceptance Criteria

  • Developers can opt into running builds in a cri-o user namespace by providing an environment variable with a specific value.
  • When the correct environment variable is provided, builds run in a cri-o user namespace, and the build pod does not require the "privileged: true" security context.
  • User namespace builds can pass basic test scenarios for the Docker and Source strategy build.
  • Steps to run unprivileged builds are documented.

Dependencies (internal and external)

  1. Buildah supports running inside a non-privileged container
  2. CRI-O allows workloads to opt into running containers in user namespaces.

Previous Work (Optional):

  1. BUILD-225 - remove privileged requirement for builds.

Open questions::

Done Checklist

  • CI - CI is running, tests are automated and merged.
  • Release Enablement <link to Feature Enablement Presentation>
  • DEV - Upstream code and tests merged: <link to meaningful PR or GitHub Issue>
  • DEV - Upstream documentation merged: <link to meaningful PR or GitHub Issue>
  • DEV - Downstream build attached to advisory: <link to errata>
  • QE - Test plans in Polarion: <link or reference to Polarion>
  • QE - Automated tests merged: <link or reference to automated tests>
  • DOC - Downstream documentation merged: <link to meaningful PR>

User Story

As a developer building container images on OpenShift
I want to specify that my build should run without elevated privileges
So that builds do not run as root from the host's perspective with elevated privileges

Acceptance Criteria

  • Developers can provide an environment variable to indicate the build should not use privileged containers
  • When the correct env var + value is specified, builds run in a user namespace (non-root on the host)

QE Impact

No QE required for Dev Preview. OpenShift regression testing will verify that existing behavior is not impacted.

Docs Impact

We will need to document how to enable this feature, with sufficient warnings regarding Dev Preview.

PX Impact

This likely warrants an OpenShift blog post, potentially?

Notes

Other Complete

This section includes Jira cards that are not linked to either an Epic or a Feature. These tickets were completed when this image was assembled

Description of problem:
Switching the spec.endpointPublishingStrategy.loadBalancer.scope of the default ingresscontroller results in a degraded ingress operator. The routes using that endpoint, such as the console URL, become inaccessible.
Degraded operators after scope change:

$ oc get co | grep -v ' True        False         False'
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
authentication                             4.11.4    False       False         True       72m     OAuthServerRouteEndpointAccessibleControllerAvailable: Get "https://oauth-openshift.apps.kartrosa.ukld.s1.devshift.org/healthz": EOF
console                                    4.11.4    False       False         False      72m     RouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.kartrosa.ukld.s1.devshift.org): Get "https://console-openshift-console.apps.kartrosa.ukld.s1.devshift.org": EOF
ingress                                    4.11.4    True        False         True       65m     The "default" ingress controller reports Degraded=True: DegradedConditions: One or more other status conditions indicate a degraded state: CanaryChecksSucceeding=False (CanaryChecksRepetitiveFailures: Canary route checks for the default ingress controller are failing)

We have noticed that each time this happens the underlying AWS loadbalancer gets recreated, which is expected; however, the router pods probably do not get notified about the new loadbalancer. The instances in the new loadbalancer become 'OutOfService'.

Restarting one of the router pods fixes the issue and brings a couple of instances under the loadbalancer back to 'InService', which leads to the operators becoming happy again.

Version-Release number of selected component (if applicable):

ingress in 4.11.z; however, we suspect this issue also applies to older versions

How reproducible:

Consistently reproducible

Steps to Reproduce:

1. Create a test OCP 4.11 cluster in AWS
2. Switch the spec.endpointPublishingStrategy.loadBalancer.scope of the default ingresscontroller in openshift-ingress-operator to Internal from External (or vice versa)
3. New Loadbalancer is created in AWS for the default router service, however the instances behind are not in service

Actual results:

ingress, authentication and console operators go into a degraded state. Console URL of the cluster is inaccessible

Expected results:

The ingresscontroller scope transition from Internal->External (or vice versa) is smooth, without any downtime or operators going into a degraded state. The console is accessible.

 

Description of problem:

When a pod runs to a completed state, we typically rely on the update event that will indicate to us that this pod is completed. At that point the pod IP is released and the port configuration is removed in OVN. The subsequent delete event for this pod will be ignored because it should have been cleaned up in the previous update.

However, there can be cases where the update event is missed with pod completed. In this case we will only receive a delete with pod completed event, and ignore tearing down the pod. The end result is the pod is not cleaned up in OVN and the IP address remains allocated, reducing the amount of address range available to launch another pod. This can lead to exhausting all IP addresses available for pod allocation on a node.

Version-Release number of selected component (if applicable):

4.10.24

How reproducible:

Not sure how to reproduce this. I'm guessing some lag in kapi updates can cause the completed update event and the final delete event to be combined into a single event.

Steps to Reproduce:

1.
2.
3.

Actual results:

Port still exists in OVN, IP remains allocated for a deleted pod.

Expected results:

IP should be freed, port should be removed from OVN.

Additional info:

 

This is a clone of issue OCPBUGS-5100. The following is the description of the original issue:

This is a clone of issue OCPBUGS-5068. The following is the description of the original issue:

Description of problem:

virtual media provisioning fails when iLO Ironic driver is used

Version-Release number of selected component (if applicable):

4.13

How reproducible:

Always

Steps to Reproduce:

1. attempt virtual media provisioning on a node configured with ilo-virtualmedia:// drivers
2.
3.

Actual results:

Provisioning fails with "An auth plugin is required to determine endpoint URL" error

Expected results:

Provisioning succeeds

Additional info:

Relevant log snippet:

3742 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector [None req-e58ac1f2-fac6-4d28-be9e-983fa900a19b - - - - - -] Unable to start managed inspection for node e4445d43-3458-4cee-9cbe-6da1de75      78cd: An auth plugin is required to determine endpoint URL: keystoneauth1.exceptions.auth_plugins.MissingAuthPlugin: An auth plugin is required to determine endpoint URL
 3743 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector Traceback (most recent call last):
 3744 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector   File "/usr/lib/python3.9/site-packages/ironic/drivers/modules/inspector.py", line 210, in _start_managed_inspection
 3745 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector     task.driver.boot.prepare_ramdisk(task, ramdisk_params=params)
 3746 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector   File "/usr/lib/python3.9/site-packages/ironic_lib/metrics.py", line 59, in wrapped
 3747 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector     result = f(*args, **kwargs)
 3748 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector   File "/usr/lib/python3.9/site-packages/ironic/drivers/modules/ilo/boot.py", line 408, in prepare_ramdisk
 3749 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector     iso = image_utils.prepare_deploy_iso(task, ramdisk_params,
 3750 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector   File "/usr/lib/python3.9/site-packages/ironic/drivers/modules/image_utils.py", line 624, in prepare_deploy_iso
 3751 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector     return prepare_iso_image(inject_files=inject_files)
 3752 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector   File "/usr/lib/python3.9/site-packages/ironic/drivers/modules/image_utils.py", line 537, in _prepare_iso_image
 3753 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector     image_url = img_handler.publish_image(
 3754 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector   File "/usr/lib/python3.9/site-packages/ironic/drivers/modules/image_utils.py", line 193, in publish_image
 3755 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector     swift_api = swift.SwiftAPI()
 3756 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector   File "/usr/lib/python3.9/site-packages/ironic/common/swift.py", line 66, in __init__
 3757 2022-12-19T19:02:05.997747170Z 2022-12-19 19:02:05.995 1 ERROR ironic.drivers.modules.inspector     endpoint = keystone.get_endpoint('swift', session=session)

Description of problem:

Each LB created for a Service of type LoadBalancer results in 1 client rule and <# of public subnets> health rules being created. The rules-per-SG quota in AWS is quite small: 60 by default, and 200 hard max. OCP has about 40 rules OOTB. Assuming an HA cluster in 3 AZs, that is 4 rules per LB. With the default AWS quota, only ~5 LBs can be created ((60 - 40) / 4 = 5), and with the hard max of 200, only ~40 LBs can be created ((200 - 40) / 4 = 40).

Version-Release number of selected component (if applicable):

4.12

How reproducible:

Always

Steps to Reproduce:

1.  Create Service type LoadBalancer and observe increase in master-sg and worker-sg rules sets
2.
3.

Actual results:

4 rules are created

Expected results:

1 rule is created when the client rule is a superset of the per-subnet health rules

Additional info:

This would allow ~4x the number of Services of type LoadBalancer to be created. This is required for Hypershift.

Description of problem:

When running node-density (245 pods/node) on a 120 node cluster, we see that there is a huge spike (~22s) in Avg pod-latency. When the spike occurs we see all the ovnkube-master pods go through a restart. 

The restart happens because of (ovnkube-master pods)

2022-08-10T04:04:44.494945179Z panic: reflect: call of reflect.Value.Len on ptr Value

Version-Release number of selected component (if applicable):

4.12.0-0.nightly-2022-08-09-114621

How reproducible:

Steps to Reproduce:
1. Run node-density on a 120 node cluster

Actual results:

Spike observed in pod-latency graph ~22s

Expected results:

Steady pod-latency graph ~4s

Additional info:

Description of problem:

Upgrade OCP 4.11 --> 4.12 fails with one 'NotReady,SchedulingDisabled' node and MachineConfigDaemonFailed.

Version-Release number of selected component (if applicable):

Upgrade from OCP 4.11.0-0.nightly-2022-09-19-214532 on top of OSP RHOS-16.2-RHEL-8-20220804.n.1 to 4.12.0-0.nightly-2022-09-20-040107.

Network Type: OVNKubernetes

How reproducible:

Twice out of two attempts.

Steps to Reproduce:

1. Install OCP 4.11.0-0.nightly-2022-09-19-214532 (IPI) on top of OSP RHOS-16.2-RHEL-8-20220804.n.1.
   The cluster is up and running with three workers:
   $ oc get clusterversion
   NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
   version   4.11.0-0.nightly-2022-09-19-214532   True        False         51m     Cluster version is 4.11.0-0.nightly-2022-09-19-214532

2. Run the OC command to upgrade to 4.12.0-0.nightly-2022-09-20-040107:
$ oc adm upgrade --to-image=registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-09-20-040107 --allow-explicit-upgrade --force=true
warning: Using by-tag pull specs is dangerous, and while we still allow it in combination with --force for backward compatibility, it would be much safer to pass a by-digest pull spec instead
warning: The requested upgrade image is not one of the available updates.You have used --allow-explicit-upgrade for the update to proceed anyway
warning: --force overrides cluster verification of your supplied release image and waives any update precondition failures.
Requesting update to release image registry.ci.openshift.org/ocp/release:4.12.0-0.nightly-2022-09-20-040107 

3. The upgrade does not succeed: [0]
$ oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.11.0-0.nightly-2022-09-19-214532   True        True          17h     Unable to apply 4.12.0-0.nightly-2022-09-20-040107: wait has exceeded 40 minutes for these operators: network

One node degraded to 'NotReady,SchedulingDisabled' status:
$ oc get nodes
NAME                          STATUS                        ROLES    AGE   VERSION
ostest-9vllk-master-0         Ready                         master   19h   v1.24.0+07c9eb7
ostest-9vllk-master-1         Ready                         master   19h   v1.24.0+07c9eb7
ostest-9vllk-master-2         Ready                         master   19h   v1.24.0+07c9eb7
ostest-9vllk-worker-0-4x4pt   NotReady,SchedulingDisabled   worker   18h   v1.24.0+3882f8f
ostest-9vllk-worker-0-h6kcs   Ready                         worker   18h   v1.24.0+3882f8f
ostest-9vllk-worker-0-xhz9b   Ready                         worker   18h   v1.24.0+3882f8f

$ oc get pods -A | grep -v -e Completed -e Running
NAMESPACE                                          NAME                                                         READY   STATUS      RESTARTS       AGE
openshift-openstack-infra                          coredns-ostest-9vllk-worker-0-4x4pt                          0/2     Init:0/1    0              18h
 
$ oc get events
LAST SEEN   TYPE      REASON                                        OBJECT            MESSAGE
7m15s       Warning   OperatorDegraded: MachineConfigDaemonFailed   /machine-config   Unable to apply 4.12.0-0.nightly-2022-09-20-040107: failed to apply machine config daemon manifests: error during waitForDaemonsetRollout: [timed out waiting for the condition, daemonset machine-config-daemon is not ready. status: (desired: 6, updated: 6, ready: 5, unavailable: 1)]
7m15s       Warning   MachineConfigDaemonFailed                     /machine-config   Cluster not available for [{operator 4.11.0-0.nightly-2022-09-19-214532}]: failed to apply machine config daemon manifests: error during waitForDaemonsetRollout: [timed out waiting for the condition, daemonset machine-config-daemon is not ready. status: (desired: 6, updated: 6, ready: 5, unavailable: 1)]

$ oc get co
NAME                                       VERSION                              AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
authentication                             4.12.0-0.nightly-2022-09-20-040107   True        False         False      18h    
baremetal                                  4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h    
cloud-controller-manager                   4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h    
cloud-credential                           4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h    
cluster-autoscaler                         4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h    
config-operator                            4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h    
console                                    4.12.0-0.nightly-2022-09-20-040107   True        False         False      18h    
control-plane-machine-set                  4.12.0-0.nightly-2022-09-20-040107   True        False         False      17h    
csi-snapshot-controller                    4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h    
dns                                        4.12.0-0.nightly-2022-09-20-040107   True        True          False      19h     DNS "default" reports Progressing=True: "Have 5 available node-resolver pods, want 6."
etcd                                       4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h    
image-registry                             4.12.0-0.nightly-2022-09-20-040107   True        True          False      18h     Progressing: The registry is ready...
ingress                                    4.12.0-0.nightly-2022-09-20-040107   True        False         False      18h    
insights                                   4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h    
kube-apiserver                             4.12.0-0.nightly-2022-09-20-040107   True        True          False      18h     NodeInstallerProgressing: 1 nodes are at revision 11; 2 nodes are at revision 13
kube-controller-manager                    4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h    
kube-scheduler                             4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h    
kube-storage-version-migrator              4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h    
machine-api                                4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h    
machine-approver                           4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h    
machine-config                             4.11.0-0.nightly-2022-09-19-214532   False       True          True       16h     Cluster not available for [{operator 4.11.0-0.nightly-2022-09-19-214532}]: failed to apply machine config daemon manifests: error during waitForDaemonsetRollout: [timed out waiting for the condition, daemonset machine-config-daemon is not ready. status: (desired: 6, updated: 6, ready: 5, unavailable: 1)]
marketplace                                4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h    
monitoring                                 4.12.0-0.nightly-2022-09-20-040107   True        False         False      18h    
network                                    4.12.0-0.nightly-2022-09-20-040107   True        True          True       19h     DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making progress - last change 2022-09-20T14:16:13Z...
node-tuning                                4.12.0-0.nightly-2022-09-20-040107   True        False         False      17h    
openshift-apiserver                        4.12.0-0.nightly-2022-09-20-040107   True        False         False      18h    
openshift-controller-manager               4.12.0-0.nightly-2022-09-20-040107   True        False         False      17h    
openshift-samples                          4.12.0-0.nightly-2022-09-20-040107   True        False         False      17h    
operator-lifecycle-manager                 4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h    
operator-lifecycle-manager-catalog         4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h    
operator-lifecycle-manager-packageserver   4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h    
service-ca                                 4.12.0-0.nightly-2022-09-20-040107   True        False         False      19h    
storage                                    4.12.0-0.nightly-2022-09-20-040107   True        True          False      19h     ManilaCSIDriverOperatorCRProgressing: ManilaDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods...

[0] http://pastebin.test.redhat.com/1074531

Actual results:

OCP 4.11 --> 4.12 upgrade fails.

Expected results:

OCP 4.11 --> 4.12 upgrade success.

Additional info:

Attached logs of the NotReady node - [^journalctl_ostest-9vllk-worker-0-4x4pt.log.tar.gz]

This is a clone of issue OCPBUGS-4851. The following is the description of the original issue:

This is a clone of issue OCPBUGS-4850. The following is the description of the original issue:

Description of problem:

Kuryr might take a while to create Pods because it has to create Neutron ports for the pods. If a pod gets deleted while this is being processed, a warning Event will be generated, causing the "[sig-network] pods should successfully create sandboxes by adding pod to network" test to fail.

Version-Release number of selected component (if applicable):

 

How reproducible:

 

Steps to Reproduce:

1.
2.
3.

Actual results:

 

Expected results:

 

Additional info:

 

This is a clone of issue OCPBUGS-12956. The following is the description of the original issue:

This is a clone of issue OCPBUGS-12910. The following is the description of the original issue:

This is a clone of issue OCPBUGS-12904. The following is the description of the original issue:

Description of problem:

In order to test proxy installations, the CI base image for OpenShift on OpenStack needs netcat.

This is a clone of issue OCPBUGS-15643. The following is the description of the original issue:

This is a clone of issue OCPBUGS-15606. The following is the description of the original issue:

This is a clone of issue OCPBUGS-15497. The following is the description of the original issue:

I am using a BuildConfig with git source and the Docker strategy. The git repo contains a large zip file via LFS and that zip file is not getting downloaded. Instead, just the ASCII pointer metadata is getting downloaded. I've created a simple reproducer (https://github.com/selrahal/buildconfig-git-lfs) on my personal GitHub. If you clone the repo

git clone git@github.com:selrahal/buildconfig-git-lfs.git

and apply the bc.yaml file with

oc apply -f bc.yaml

Then start the build with

oc start-build test-git-lfs

You will see the build fails at the unzip step in the docker file

STEP 3/7: RUN unzip migrationtoolkit-mta-cli-5.3.0-offline.zip
End-of-central-directory signature not found. Either this file is not
a zipfile, or it constitutes one disk of a multi-part archive. In the
latter case the central directory and zipfile comment will be found on
the last disk(s) of this archive.

I've attached the full build logs to this issue.

Description of problem:
Cannot scale up worker nodes after deploying OCP 4.11.1 cluster via UPI on Azure

5h2m Warning FailedCreate machine/pokus-2knkh-worker-northeurope1-f6kc4 InvalidConfiguration: failed to reconcile machine "pokus-2knkh-worker-northeurope1-f6kc4": failed to create vm pokus-2knkh-worker-northeurope1-f6kc4: failure sending request for machine pokus-2knkh-worker-northeurope1-f6kc4: cannot create vm: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=404 - Original Error: Code="NotFound" Message="The Image '/subscriptions/e639e479-2737-4b3d-b338-f1928f6429a1/resourceGroups/mlpipe-2163-azpln-rg/providers/Microsoft.Compute/images/pokus-2knkh-gen2' cannot be found in 'northeurope' region."

The customer would like to have the installer create machinesets from the initial installation, therefore the Kubernetes manifest files that define the worker machines were not removed during the installation.

Highlights:
Can you please help verify whether these are the correct steps to have the initial installation create and manage the worker machines? Is there an explanation of how changing the image to '-gen2' in [concat(parameters('baseName'),'-gen2')] from the 02_storage.json template can resolve the problem?
Version-Release number of selected component (if applicable):

Environment:
OCP 4.11.1 UPI install on Azure using ARM
VM size:
bootstrap: Standard_D4s_v3
master: Standard_D4s_v3

How reproducible:
Always

Steps to Reproduce:
Following the step described in the document: Installing a cluster on Azure using ARM templates .

In the install-config.yaml, worker replicas was set to 0

compute:
- architecture: amd64
  hyperthreading: Enabled
  name: worker
  platform: {}
  replicas: 3   
controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  platform: {}
  replicas: 3

After creating the manifests described in this step, Creating the Kubernetes manifest and Ignition config files, only the control plane machine manifests were removed; the worker machine manifests remained untouched. After three masters and three worker nodes were created by the ARM templates, additional workers were added using machine sets via the command

oc scale --replicas=1 machineset cluster-g7rzv-worker-francecentral1 -n openshift-machine-api

Actual results:
No additional node is visible from `oc get nodes` and the following error occurs:

5h2m Warning FailedCreate machine/pokus-2knkh-worker-northeurope1-f6kc4 InvalidConfiguration: failed to reconcile machine "pokus-2knkh-worker-northeurope1-f6kc4": failed to create vm pokus-2knkh-worker-northeurope1-f6kc4: failure sending request for machine pokus-2knkh-worker-northeurope1-f6kc4: cannot create vm: compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=404 - Original Error: Code="NotFound" Message="The Image '/subscriptions/e639e479-2737-4b3d-b338-f1928f6429a1/resourceGroups/mlpipe-2163-azpln-rg/providers/Microsoft.Compute/images/pokus-2knkh-gen2' cannot be found in 'northeurope' region."

The customer found that this can be resolved by changing the image to '-gen2' in [concat(parameters('baseName'),'-gen2')] from the 02_storage.json template

Expected results:
The installer should be able to create and manage machineset

Additional info:
SFDC case #03304526

Slack discussion; this might be due to MAO not being able to support UPI in Azure: Thread1, Thread2
 

This is a clone of issue OCPBUGS-8286. The following is the description of the original issue:

Description of problem:

mapi_machinehealthcheck_short_circuit is not properly reconciling the state when a MachineHealthCheck is failing because of unhealthy Machines but is then removed.

When using two MachineSets (called blue and green, only one of which has running Machines at a specific point in time) with a MachineAutoscaler and MachineHealthCheck, mapi_machinehealthcheck_short_circuit will continue to report 1 for a MachineHealthCheck that was actually removed because of a switch from blue to green.

$ oc get machineset | egrep 'blue|green'
housiocp4-wvqbx-worker-blue-us-east-2a    0         0                             2d17h
housiocp4-wvqbx-worker-green-us-east-2a   1         1         1       1           2d17h

$ oc get machineautoscaler
NAME                      REF KIND     REF NAME                                   MIN   MAX   AGE
worker-green-us-east-1a   MachineSet   housiocp4-wvqbx-worker-green-us-east-2a   1     4     2d17h

$ oc get machinehealthcheck
NAME                              MAXUNHEALTHY   EXPECTEDMACHINES   CURRENTHEALTHY
machine-api-termination-handler   100%           0                  0
worker-green-us-east-1a           40%            1                  1

      {
        "name": "machine-health-check-unterminated-short-circuit",
        "file": "/etc/prometheus/rules/prometheus-k8s-rulefiles-0/openshift-machine-api-machine-api-operator-prometheus-rules-ccb650d9-6fc4-422b-90bb-70452f4aff8f.yaml",
        "rules": [
          { 
            "state": "firing",
            "name": "MachineHealthCheckUnterminatedShortCircuit",
            "query": "mapi_machinehealthcheck_short_circuit == 1",
            "duration": 1800,
            "labels": {
              "severity": "warning"
            },
            "annotations": {
              "description": "The number of unhealthy machines has exceeded the `maxUnhealthy` limit for the check, you should check\nthe status of machines in the cluster.\n",
              "summary": "machine health check {{ $labels.name }} has been disabled by short circuit for more than 30 minutes"
            },
            "alerts": [
              { 
                "labels": {
                  "alertname": "MachineHealthCheckUnterminatedShortCircuit",
                  "container": "kube-rbac-proxy-mhc-mtrc",
                  "endpoint": "mhc-mtrc",
                  "exported_namespace": "openshift-machine-api",
                  "instance": "10.128.0.58:8444",
                  "job": "machine-api-controllers",
                  "name": "worker-blue-us-east-1a",
                  "namespace": "openshift-machine-api",
                  "pod": "machine-api-controllers-779dcb8769-8gcn6",
                  "service": "machine-api-controllers",
                  "severity": "warning"
                },
                "annotations": {
                  "description": "The number of unhealthy machines has exceeded the `maxUnhealthy` limit for the check, you should check\nthe status of machines in the cluster.\n",
                  "summary": "machine health check worker-blue-us-east-1a has been disabled by short circuit for more than 30 minutes"
                },
                "state": "firing",
                "activeAt": "2022-12-09T15:59:25.1287541Z",
                "value": "1e+00"
              }
            ],
            "health": "ok",
            "evaluationTime": 0.000648129,
            "lastEvaluation": "2022-12-12T09:35:55.140174009Z",
            "type": "alerting"
          }
        ],
        "interval": 30,
        "limit": 0,
        "evaluationTime": 0.000661589,
        "lastEvaluation": "2022-12-12T09:35:55.140165629Z"
      },

As we can see above, worker-blue-us-east-1a is no longer available and active; worker-green-us-east-1a is. But worker-blue-us-east-1a was there before the switch to green happened and was actually reporting some unhealthy Machines. Since it is now gone, mapi_machinehealthcheck_short_circuit should properly reconcile, as otherwise this is a false-positive alert.

Version-Release number of selected component (if applicable):

OpenShift Container Platform 4.12.0-rc.3 (but is also seen on previous version)

How reproducible:

- Always

Steps to Reproduce:

1. Setup OpenShift Container Platform 4 on AWS for example
2. Create blue and green MachineSet with MachineAutoScaler and MachineHealthCheck
3. Have active Machines for blue only
4. Trigger unhealthy Machines in blue MachineSet
5. Switch to green MachineSet, by removing MachineHealthCheck, MachineAutoscaler and setting replicate of blue MachineSet to 0
6. Create green MachineHealthCheck, MachineAutoscaler and scale geen MachineSet to 1
7. Observe how mapi_machinehealthcheck_short_circuit continues to report unhealthy state for blue MachineHealthCheck which no longer exists.

Actual results:

mapi_machinehealthcheck_short_circuit reporting problematic MachineHealthCheck even though the faulty MachineHealthCheck does no longer exist.

Expected results:

mapi_machinehealthcheck_short_circuit to properly reconcile it's state and remove MachineHealthChecks that have been removed on OpenShift Container Platform level

Additional info:

It looks similar to the issues reported in https://bugzilla.redhat.com/show_bug.cgi?id=2013528 and https://bugzilla.redhat.com/show_bug.cgi?id=2047702 (although https://bugzilla.redhat.com/show_bug.cgi?id=2047702 may not be very relevant)

Description of problem:

During a restart, egress firewall ACLs are deleted and re-created from scratch, meaning that egress firewall rules are not applied for some time during the restart
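
For context, an EgressFirewall in OVN-Kubernetes is a namespaced object roughly like the sketch below (namespace and CIDRs are illustrative); the ACLs OVN generates from it are what get deleted and re-created during the restart:

apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default          # the object must be named "default" in its namespace
  namespace: example-ns  # illustrative namespace
spec:
  egress:
  - type: Allow
    to:
      cidrSelector: 10.0.0.0/8   # illustrative allowed range
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0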

Version-Release number of selected component (if applicable):

 

How reproducible:

 

Steps to Reproduce:

1.
2.
3.

Actual results:

 

Expected results:

 

Additional info:

 

This is a clone of issue OCPBUGS-7437. The following is the description of the original issue:

This is a clone of issue OCPBUGS-5547. The following is the description of the original issue:

Description of problem:
This is a follow-up on https://bugzilla.redhat.com/show_bug.cgi?id=2083087 and https://github.com/openshift/console/pull/12390

When creating a Knative Service and then deleting it again with the option "Delete other resources created by console" enabled (only available on 4.13+ with the PR above), the secret "$name-github-webhook-secret" is not deleted.

When the user tries to create the same Knative Service again this fails with an error:

An error occurred
secrets "nodeinfo-github-webhook-secret" already exists

Version-Release number of selected component (if applicable):
4.13

(we might want to backport this together with https://github.com/openshift/console/pull/12390 and OCPBUGS-5548)

How reproducible:
Always

Steps to Reproduce:

  1. Install OpenShift Serverless operator (tested with 1.26.0)
  2. Create a new project
  3. Navigate to Add > Import from git and create an application
  4. In the topology select the Knative Service > "Delete Service" (not Delete App)

Actual results:
Deleted resources:

  1. Knative Service $name (the deletion is attempted twice!)
  2. ImageStream $name
  3. BuildConfig $name
  4. Secret $name-generic-webhook-secret

Expected results:
The following should also happen:

  1. Delete Knative Service should be called just once
  2. The Secret $name-github-webhook-secret should also be removed

Additional info:
When deleting the whole application, all the resources are deleted correctly (and just once)!

  1. Knative Service (just once!) $name
  2. ImageStream $name
  3. BuildConfig $name
  4. Secret $name-generic-webhook-secret
  5. Secret $name-github-webhook-secret

Description of problem:

This is a clone of https://bugzilla.redhat.com/show_bug.cgi?id=2074299 for backporting purposes.

Version-Release number of selected component (if applicable):

 

How reproducible:

 

Steps to Reproduce:

1.
2.
3.

Actual results:

 

Expected results:

 

Additional info:

 

This is a clone of issue OCPBUGS-10977. The following is the description of the original issue:

This is a clone of issue OCPBUGS-10890. The following is the description of the original issue:

This is a clone of issue OCPBUGS-10649. The following is the description of the original issue:

Description of problem:

After a replace upgrade from one OCP 4.14 image to another 4.14 image, the first node is in NotReady.

jiezhao-mac:hypershift jiezhao$ oc get node --kubeconfig=hostedcluster.kubeconfig 
NAME                     STATUS   ROLES  AGE   VERSION
ip-10-0-128-175.us-east-2.compute.internal  Ready   worker  72m   v1.26.2+06e8c46
ip-10-0-134-164.us-east-2.compute.internal  Ready   worker  68m   v1.26.2+06e8c46
ip-10-0-137-194.us-east-2.compute.internal  Ready   worker  77m   v1.26.2+06e8c46
ip-10-0-141-231.us-east-2.compute.internal  NotReady  worker  9m54s  v1.26.2+06e8c46

- lastHeartbeatTime: "2023-03-21T19:48:46Z"
  lastTransitionTime: "2023-03-21T19:42:37Z"
  message: 'container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady
   message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/.
   Has your network provider started?'
  reason: KubeletNotReady
  status: "False"
  type: Ready

Events:
 Type   Reason          Age         From          Message
 ----   ------          ----        ----          -------
 Normal  Starting         11m         kubelet        Starting kubelet.
 Normal  NodeHasSufficientMemory 11m (x2 over 11m)  kubelet        Node ip-10-0-141-231.us-east-2.compute.internal status is now: NodeHasSufficientMemory
 Normal  NodeHasNoDiskPressure  11m (x2 over 11m)  kubelet        Node ip-10-0-141-231.us-east-2.compute.internal status is now: NodeHasNoDiskPressure
 Normal  NodeHasSufficientPID   11m (x2 over 11m)  kubelet        Node ip-10-0-141-231.us-east-2.compute.internal status is now: NodeHasSufficientPID
 Normal  NodeAllocatableEnforced 11m         kubelet        Updated Node Allocatable limit across pods
 Normal  Synced          11m         cloud-node-controller Node synced successfully
 Normal  RegisteredNode      11m         node-controller    Node ip-10-0-141-231.us-east-2.compute.internal event: Registered Node ip-10-0-141-231.us-east-2.compute.internal in Controller
 Warning ErrorReconcilingNode   17s (x30 over 11m) controlplane      nodeAdd: error adding node "ip-10-0-141-231.us-east-2.compute.internal": could not find "k8s.ovn.org/node-subnets" annotation

ovnkube-master log:

I0321 20:55:16.270197       1 default_network_controller.go:667] Node add failed for ip-10-0-141-231.us-east-2.compute.internal, will try again later: nodeAdd: error adding node "ip-10-0-141-231.us-east-2.compute.internal": could not find "k8s.ovn.org/node-subnets" annotation
I0321 20:55:16.270209       1 obj_retry.go:326] Retry add failed for *v1.Node ip-10-0-141-231.us-east-2.compute.internal, will try again later: nodeAdd: error adding node "ip-10-0-141-231.us-east-2.compute.internal": could not find "k8s.ovn.org/node-subnets" annotation
I0321 20:55:16.270273       1 event.go:285] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-10-0-141-231.us-east-2.compute.internal", UID:"621e6289-ca5a-4e17-afff-5b49961cfb38", APIVersion:"v1", ResourceVersion:"52970", FieldPath:""}): type: 'Warning' reason: 'ErrorReconcilingNode' nodeAdd: error adding node "ip-10-0-141-231.us-east-2.compute.internal": could not find "k8s.ovn.org/node-subnets" annotation
I0321 20:55:17.851497       1 master.go:719] Adding or Updating Node "ip-10-0-137-194.us-east-2.compute.internal"
I0321 20:55:25.965132       1 master.go:719] Adding or Updating Node "ip-10-0-128-175.us-east-2.compute.internal"
I0321 20:55:45.928694       1 client.go:783]  "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:update Table:NB_Global Row:map[options:{GoMap:map[e2e_timestamp:1679432145 mac_prefix:2e:f9:d8 max_tunid:16711680 northd_internal_version:23.03.1-20.27.0-70.6 northd_probe_interval:5000 svc_monitor_mac:fe:cb:72:cf:f8:5f use_logical_dp_groups:true]}] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {c8b24290-296e-44a2-a4d0-02db7e312614}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:}]"
I0321 20:55:46.270129       1 obj_retry.go:265] Retry object setup: *v1.Node ip-10-0-141-231.us-east-2.compute.internal
I0321 20:55:46.270154       1 obj_retry.go:319] Adding new object: *v1.Node ip-10-0-141-231.us-east-2.compute.internal
I0321 20:55:46.270164       1 master.go:719] Adding or Updating Node "ip-10-0-141-231.us-east-2.compute.internal"
I0321 20:55:46.270201       1 default_network_controller.go:667] Node add failed for ip-10-0-141-231.us-east-2.compute.internal, will try again later: nodeAdd: error adding node "ip-10-0-141-231.us-east-2.compute.internal": could not find "k8s.ovn.org/node-subnets" annotation
I0321 20:55:46.270209       1 obj_retry.go:326] Retry add failed for *v1.Node ip-10-0-141-231.us-east-2.compute.internal, will try again later: nodeAdd: error adding node "ip-10-0-141-231.us-east-2.compute.internal": could not find "k8s.ovn.org/node-subnets" annotation
I0321 20:55:46.270284       1 event.go:285] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-10-0-141-231.us-east-2.compute.internal", UID:"621e6289-ca5a-4e17-afff-5b49961cfb38", APIVersion:"v1", ResourceVersion:"52970", FieldPath:""}): type: 'Warning' reason: 'ErrorReconcilingNode' nodeAdd: error adding node "ip-10-0-141-231.us-east-2.compute.internal": could not find "k8s.ovn.org/node-subnets" annotation
I0321 20:55:52.916512       1 reflector.go:559] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Namespace total 5 items received
I0321 20:56:06.910669       1 reflector.go:559] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Pod total 12 items received
I0321 20:56:15.928505       1 client.go:783]  "msg"="transacting operations" "database"="OVN_Northbound" "operations"="[{Op:update Table:NB_Global Row:map[options:{GoMap:map[e2e_timestamp:1679432175 mac_prefix:2e:f9:d8 max_tunid:16711680 northd_internal_version:23.03.1-20.27.0-70.6 northd_probe_interval:5000 svc_monitor_mac:fe:cb:72:cf:f8:5f use_logical_dp_groups:true]}] Rows:[] Columns:[] Mutations:[] Timeout:<nil> Where:[where column _uuid == {c8b24290-296e-44a2-a4d0-02db7e312614}] Until: Durable:<nil> Comment:<nil> Lock:<nil> UUIDName:}]"
I0321 20:56:16.269611       1 obj_retry.go:265] Retry object setup: *v1.Node ip-10-0-141-231.us-east-2.compute.internal
I0321 20:56:16.269637       1 obj_retry.go:319] Adding new object: *v1.Node ip-10-0-141-231.us-east-2.compute.internal
I0321 20:56:16.269646       1 master.go:719] Adding or Updating Node "ip-10-0-141-231.us-east-2.compute.internal"
I0321 20:56:16.269688       1 default_network_controller.go:667] Node add failed for ip-10-0-141-231.us-east-2.compute.internal, will try again later: nodeAdd: error adding node "ip-10-0-141-231.us-east-2.compute.internal": could not find "k8s.ovn.org/node-subnets" annotation
I0321 20:56:16.269697       1 obj_retry.go:326] Retry add failed for *v1.Node ip-10-0-141-231.us-east-2.compute.internal, will try again later: nodeAdd: error adding node "ip-10-0-141-231.us-east-2.compute.internal": could not find "k8s.ovn.org/node-subnets" annotation
I0321 20:56:16.269724       1 event.go:285] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-10-0-141-231.us-east-2.compute.internal", UID:"621e6289-ca5a-4e17-afff-5b49961cfb38", APIVersion:"v1", ResourceVersion:"52970", FieldPath:""}): type: 'Warning' reason: 'ErrorReconcilingNode' nodeAdd: error adding node "ip-10-0-141-231.us-east-2.compute.internal": could not find "k8s.ovn.org/node-subnets" annotation

cluster-network-operator log:

I0321 21:03:38.487602       1 log.go:198] Set operator conditions:
- lastTransitionTime: "2023-03-21T17:39:21Z"
  status: "False"
  type: ManagementStateDegraded
- lastTransitionTime: "2023-03-21T19:53:10Z"
  message: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making
    progress - last change 2023-03-21T19:42:39Z
  reason: RolloutHung
  status: "True"
  type: Degraded
- lastTransitionTime: "2023-03-21T17:39:21Z"
  status: "True"
  type: Upgradeable
- lastTransitionTime: "2023-03-21T19:42:39Z"
  message: |-
    DaemonSet "/openshift-network-diagnostics/network-check-target" is not available (awaiting 1 nodes)
    DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
    DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes)
    DaemonSet "/openshift-multus/network-metrics-daemon" is not available (awaiting 1 nodes)
  reason: Deploying
  status: "True"
  type: Progressing
- lastTransitionTime: "2023-03-21T17:39:26Z"
  status: "True"
  type: Available
I0321 21:03:38.488312       1 log.go:198] Skipping reconcile of Network.operator.openshift.io: spec unchanged
I0321 21:03:38.499825       1 log.go:198] Set ClusterOperator conditions:
- lastTransitionTime: "2023-03-21T17:39:21Z"
  status: "False"
  type: ManagementStateDegraded
- lastTransitionTime: "2023-03-21T19:53:10Z"
  message: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making
    progress - last change 2023-03-21T19:42:39Z
  reason: RolloutHung
  status: "True"
  type: Degraded
- lastTransitionTime: "2023-03-21T17:39:21Z"
  status: "True"
  type: Upgradeable
- lastTransitionTime: "2023-03-21T19:42:39Z"
  message: |-
    DaemonSet "/openshift-network-diagnostics/network-check-target" is not available (awaiting 1 nodes)
    DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
    DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes)
    DaemonSet "/openshift-multus/network-metrics-daemon" is not available (awaiting 1 nodes)
  reason: Deploying
  status: "True"
  type: Progressing
- lastTransitionTime: "2023-03-21T17:39:26Z"
  status: "True"
  type: Available
I0321 21:03:38.571013       1 log.go:198] Set HostedControlPlane conditions:
- lastTransitionTime: "2023-03-21T17:38:24Z"
  message: All is well
  observedGeneration: 3
  reason: AsExpected
  status: "True"
  type: ValidAWSIdentityProvider
- lastTransitionTime: "2023-03-21T17:37:06Z"
  message: Configuration passes validation
  observedGeneration: 3
  reason: AsExpected
  status: "True"
  type: ValidHostedControlPlaneConfiguration
- lastTransitionTime: "2023-03-21T19:24:24Z"
  message: ""
  observedGeneration: 3
  reason: QuorumAvailable
  status: "True"
  type: EtcdAvailable
- lastTransitionTime: "2023-03-21T17:38:23Z"
  message: Kube APIServer deployment is available
  observedGeneration: 3
  reason: AsExpected
  status: "True"
  type: KubeAPIServerAvailable
- lastTransitionTime: "2023-03-21T20:26:29Z"
  message: ""
  observedGeneration: 3
  reason: AsExpected
  status: "False"
  type: Degraded
- lastTransitionTime: "2023-03-21T17:37:11Z"
  message: All is well
  observedGeneration: 3
  reason: AsExpected
  status: "True"
  type: InfrastructureReady
- lastTransitionTime: "2023-03-21T17:37:06Z"
  message: External DNS is not configured
  observedGeneration: 3
  reason: StatusUnknown
  status: Unknown
  type: ExternalDNSReachable
- lastTransitionTime: "2023-03-21T19:24:24Z"
  message: ""
  observedGeneration: 3
  reason: AsExpected
  status: "True"
  type: Available
- lastTransitionTime: "2023-03-21T17:37:06Z"
  message: Reconciliation active on resource
  observedGeneration: 3
  reason: AsExpected
  status: "True"
  type: ReconciliationActive
- lastTransitionTime: "2023-03-21T17:38:25Z"
  message: All is well
  reason: AsExpected
  status: "True"
  type: AWSDefaultSecurityGroupCreated
- lastTransitionTime: "2023-03-21T19:30:54Z"
  message: 'Error while reconciling 4.14.0-0.nightly-2023-03-20-201450: the cluster
    operator network is degraded'
  observedGeneration: 3
  reason: ClusterOperatorDegraded
  status: "False"
  type: ClusterVersionProgressing
- lastTransitionTime: "2023-03-21T17:39:11Z"
  message: Condition not found in the CVO.
  observedGeneration: 3
  reason: StatusUnknown
  status: Unknown
  type: ClusterVersionUpgradeable
- lastTransitionTime: "2023-03-21T17:44:05Z"
  message: Done applying 4.14.0-0.nightly-2023-03-20-201450
  observedGeneration: 3
  reason: FromClusterVersion
  status: "True"
  type: ClusterVersionAvailable
- lastTransitionTime: "2023-03-21T19:55:15Z"
  message: Cluster operator network is degraded
  observedGeneration: 3
  reason: ClusterOperatorDegraded
  status: "True"
  type: ClusterVersionFailing
- lastTransitionTime: "2023-03-21T17:39:11Z"
  message: Payload loaded version="4.14.0-0.nightly-2023-03-20-201450" image="registry.ci.openshift.org/ocp/release:4.14.0-0.nightly-2023-03-20-201450"
    architecture="amd64"
  observedGeneration: 3
  reason: PayloadLoaded
  status: "True"
  type: ClusterVersionReleaseAccepted
- lastTransitionTime: "2023-03-21T17:39:21Z"
  message: ""
  reason: AsExpected
  status: "False"
  type: network.operator.openshift.io/ManagementStateDegraded
- lastTransitionTime: "2023-03-21T19:53:10Z"
  message: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making
    progress - last change 2023-03-21T19:42:39Z
  reason: RolloutHung
  status: "True"
  type: network.operator.openshift.io/Degraded
- lastTransitionTime: "2023-03-21T17:39:21Z"
  message: ""
  reason: AsExpected
  status: "True"
  type: network.operator.openshift.io/Upgradeable
- lastTransitionTime: "2023-03-21T19:42:39Z"
  message: |-
    DaemonSet "/openshift-network-diagnostics/network-check-target" is not available (awaiting 1 nodes)
    DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
    DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes)
    DaemonSet "/openshift-multus/network-metrics-daemon" is not available (awaiting 1 nodes)
  reason: Deploying
  status: "True"
  type: network.operator.openshift.io/Progressing
- lastTransitionTime: "2023-03-21T17:39:27Z"
  message: ""
  reason: AsExpected
  status: "True"
  type: network.operator.openshift.io/Available
I0321 21:03:39.450912       1 pod_watcher.go:125] Operand /, Kind= openshift-multus/multus updated, re-generating status
I0321 21:03:39.450953       1 pod_watcher.go:125] Operand /, Kind= openshift-multus/multus updated, re-generating status
I0321 21:03:39.493206       1 log.go:198] Set operator conditions:
- lastTransitionTime: "2023-03-21T17:39:21Z"
  status: "False"
  type: ManagementStateDegraded
- lastTransitionTime: "2023-03-21T19:53:10Z"
  message: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making
    progress - last change 2023-03-21T19:42:39Z
  reason: RolloutHung
  status: "True"
  type: Degraded
- lastTransitionTime: "2023-03-21T17:39:21Z"
  status: "True"
  type: Upgradeable
- lastTransitionTime: "2023-03-21T19:42:39Z"
  message: |-
    DaemonSet "/openshift-multus/network-metrics-daemon" is not available (awaiting 1 nodes)
    DaemonSet "/openshift-network-diagnostics/network-check-target" is not available (awaiting 1 nodes)
    DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
  reason: Deploying
  status: "True"
  type: Progressing
- lastTransitionTime: "2023-03-21T17:39:26Z"
  status: "True"
  type: Available
I0321 21:03:39.494050       1 log.go:198] Skipping reconcile of Network.operator.openshift.io: spec unchanged
I0321 21:03:39.508538       1 log.go:198] Set ClusterOperator conditions:
- lastTransitionTime: "2023-03-21T17:39:21Z"
  status: "False"
  type: ManagementStateDegraded
- lastTransitionTime: "2023-03-21T19:53:10Z"
  message: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making
    progress - last change 2023-03-21T19:42:39Z
  reason: RolloutHung
  status: "True"
  type: Degraded
- lastTransitionTime: "2023-03-21T17:39:21Z"
  status: "True"
  type: Upgradeable
- lastTransitionTime: "2023-03-21T19:42:39Z"
  message: |-
    DaemonSet "/openshift-multus/network-metrics-daemon" is not available (awaiting 1 nodes)
    DaemonSet "/openshift-network-diagnostics/network-check-target" is not available (awaiting 1 nodes)
    DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
  reason: Deploying
  status: "True"
  type: Progressing
- lastTransitionTime: "2023-03-21T17:39:26Z"
  status: "True"
  type: Available
I0321 21:03:39.684429       1 log.go:198] Set HostedControlPlane conditions:
- lastTransitionTime: "2023-03-21T17:38:24Z"
  message: All is well
  observedGeneration: 3
  reason: AsExpected
  status: "True"
  type: ValidAWSIdentityProvider
- lastTransitionTime: "2023-03-21T17:37:06Z"
  message: Configuration passes validation
  observedGeneration: 3
  reason: AsExpected
  status: "True"
  type: ValidHostedControlPlaneConfiguration
- lastTransitionTime: "2023-03-21T19:24:24Z"
  message: ""
  observedGeneration: 3
  reason: QuorumAvailable
  status: "True"
  type: EtcdAvailable
- lastTransitionTime: "2023-03-21T17:38:23Z"
  message: Kube APIServer deployment is available
  observedGeneration: 3
  reason: AsExpected
  status: "True"
  type: KubeAPIServerAvailable
- lastTransitionTime: "2023-03-21T20:26:29Z"
  message: ""
  observedGeneration: 3
  reason: AsExpected
  status: "False"
  type: Degraded
- lastTransitionTime: "2023-03-21T17:37:11Z"
  message: All is well
  observedGeneration: 3
  reason: AsExpected
  status: "True"
  type: InfrastructureReady
- lastTransitionTime: "2023-03-21T17:37:06Z"
  message: External DNS is not configured
  observedGeneration: 3
  reason: StatusUnknown
  status: Unknown
  type: ExternalDNSReachable
- lastTransitionTime: "2023-03-21T19:24:24Z"
  message: ""
  observedGeneration: 3
  reason: AsExpected
  status: "True"
  type: Available
- lastTransitionTime: "2023-03-21T17:37:06Z"
  message: Reconciliation active on resource
  observedGeneration: 3
  reason: AsExpected
  status: "True"
  type: ReconciliationActive
- lastTransitionTime: "2023-03-21T17:38:25Z"
  message: All is well
  reason: AsExpected
  status: "True"
  type: AWSDefaultSecurityGroupCreated
- lastTransitionTime: "2023-03-21T19:30:54Z"
  message: 'Error while reconciling 4.14.0-0.nightly-2023-03-20-201450: the cluster
    operator network is degraded'
  observedGeneration: 3
  reason: ClusterOperatorDegraded
  status: "False"
  type: ClusterVersionProgressing
- lastTransitionTime: "2023-03-21T17:39:11Z"
  message: Condition not found in the CVO.
  observedGeneration: 3
  reason: StatusUnknown
  status: Unknown
  type: ClusterVersionUpgradeable
- lastTransitionTime: "2023-03-21T17:44:05Z"
  message: Done applying 4.14.0-0.nightly-2023-03-20-201450
  observedGeneration: 3
  reason: FromClusterVersion
  status: "True"
  type: ClusterVersionAvailable
- lastTransitionTime: "2023-03-21T19:55:15Z"
  message: Cluster operator network is degraded
  observedGeneration: 3
  reason: ClusterOperatorDegraded
  status: "True"
  type: ClusterVersionFailing
- lastTransitionTime: "2023-03-21T17:39:11Z"
  message: Payload loaded version="4.14.0-0.nightly-2023-03-20-201450" image="registry.ci.openshift.org/ocp/release:4.14.0-0.nightly-2023-03-20-201450"
    architecture="amd64"
  observedGeneration: 3
  reason: PayloadLoaded
  status: "True"
  type: ClusterVersionReleaseAccepted
- lastTransitionTime: "2023-03-21T17:39:21Z"
  message: ""
  reason: AsExpected
  status: "False"
  type: network.operator.openshift.io/ManagementStateDegraded
- lastTransitionTime: "2023-03-21T19:53:10Z"
  message: DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" rollout is not making
    progress - last change 2023-03-21T19:42:39Z
  reason: RolloutHung
  status: "True"
  type: network.operator.openshift.io/Degraded
- lastTransitionTime: "2023-03-21T17:39:21Z"
  message: ""
  reason: AsExpected
  status: "True"
  type: network.operator.openshift.io/Upgradeable
- lastTransitionTime: "2023-03-21T19:42:39Z"
  message: |-
    DaemonSet "/openshift-multus/network-metrics-daemon" is not available (awaiting 1 nodes)
    DaemonSet "/openshift-network-diagnostics/network-check-target" is not available (awaiting 1 nodes)
    DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
  reason: Deploying
  status: "True"
  type: network.operator.openshift.io/Progressing
- lastTransitionTime: "2023-03-21T17:39:27Z"
  message: ""
  reason: AsExpected
  status: "True"
  type: network.operator.openshift.io/Available

Version-Release number of selected component (if applicable):

 

How reproducible:

 

Steps to Reproduce:

1. management cluster 4.13
2. bring up the hostedcluster and nodepool in 4.14.0-0.nightly-2023-03-19-234132
3. upgrade the hostedcluster to 4.14.0-0.nightly-2023-03-20-201450 
4. replace upgrade the nodepool to 4.14.0-0.nightly-2023-03-20-201450 

Actual results:

First node is in NotReady

Expected results:

All nodes should be Ready

Additional info:

No issue with replace upgrade from 4.13 to 4.14

Description of problem:

We need to have admin-ack in 4.11 so that admins can check the deprecated APIs and approve before they move to 4.12. Refer to https://access.redhat.com/articles/6955381 for more information. As planned, we want to add the admin-ack around the 4.12 feature freeze.

Version-Release number of selected component (if applicable):

4.11

How reproducible:

Always

Steps to Reproduce:

1. Install a cluster in 4.11. 
2. Run an application which uses the deprecated API. See https://access.redhat.com/articles/6955381 for more information.
3. Upgrade to 4.12

Actual results:

The upgrade happens without asking the admin to confirm that the workloads do not use the deprecated APIs.

Expected results:

Upgrade should wait for the admin-ack.

Additional info:

We had admin-acks in the past too e.g. https://docs.openshift.com/container-platform/4.9/updating/updating-cluster-prepare.html#update-preparing-migrate_updating-cluster-prepare
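
For context, the acknowledgement is provided through the admin-acks ConfigMap in the openshift-config namespace; a minimal sketch, assuming the gate key documented in the linked article:

apiVersion: v1
kind: ConfigMap
metadata:
  name: admin-acks
  namespace: openshift-config
data:
  # assumed key name for the 4.11 -> 4.12 API-removals gate; confirm against the article above
  ack-4.11-kube-1.25-api-removals-in-4.12: "true"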

This is a clone of issue OCPBUGS-10622. The following is the description of the original issue:

Description of problem:

Unit test failing 

=== RUN   TestNewAppRunAll/app_generation_using_context_dir
    newapp_test.go:907: app generation using context dir: Error mismatch! Expected <nil>, got supplied context directory '2.0/test/rack-test-app' does not exist in 'https://github.com/openshift/sti-ruby'
    --- FAIL: TestNewAppRunAll/app_generation_using_context_dir (0.61s)


Version-Release number of selected component (if applicable):

 

How reproducible:

100%

Steps to Reproduce:

see for example https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_oc/1376/pull-ci-openshift-oc-master-images/1638172620648091648 

Actual results:

unit tests fail

Expected results:

TestNewAppRunAll unit test should pass

Additional info:

 

Description of problem:

If we use a macvlan with the configuration...
spec:
  config: '{ "cniVersion": "0.3.1", "name": "ran-bh-macvlan-test", "plugins": [ {"type": "macvlan","master": "vlan306", "mode": "bridge", "ipam": { "type": "whereabouts", "range": "2001:1b74:480:603d:0304:0403:000:0000-2001:1b74:480:603d:0304:0403:0000:0004/64","gateway": "2001:1b74:480:603d::1" } } ]}'

there is an error creating the pod:

  Warning  FailedCreatePodSandBox  17s (x3 over 55s)  kubelet            (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_test31_test-ecoloma-01_a593bd0a-83e7-4d31-857e-0c31491e849e_0(5cf36bd99ffa532fd34735e68caecfbc69d820ba6cb04e348c9f9f168498022f): error adding pod test-ecoloma-01_test31 to CNI network "multus-cni-network": [test-ecoloma-01/test31:ran-bh-macvlan-test]: error adding container to network "ran-bh-macvlan-test": Error at storage engine: OverlappingRangeIPReservation.whereabouts.cni.cncf.io "2001-1b74-480-603d-304-403--" is invalid: metadata.name: Invalid value: "2001-1b74-480-603d-304-403--": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')
  
  
If we change the start IP address to 2001:1b74:480:603d:0304:0403:000:0001, it works correctly.
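
For illustration, a working variant of the attachment (with the start of the range changed to 2001:1b74:480:603d:0304:0403:000:0001 as noted above) would be a NetworkAttachmentDefinition along these lines; the object name mirrors the CNI config name and is otherwise illustrative:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: ran-bh-macvlan-test
spec:
  config: '{ "cniVersion": "0.3.1", "name": "ran-bh-macvlan-test", "plugins": [ {"type": "macvlan", "master": "vlan306", "mode": "bridge", "ipam": { "type": "whereabouts", "range": "2001:1b74:480:603d:0304:0403:000:0001-2001:1b74:480:603d:0304:0403:0000:0004/64", "gateway": "2001:1b74:480:603d::1" } } ]}'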

Version-Release number of selected component (if applicable):

4.13

How reproducible:

Always reproducible

Steps to Reproduce:

1. See description of problem.

Actual results:

Unable to create pod

Expected results:

IP range should be valid and pod should get created

Additional info:

 

Description of problem:

[OVN][OSP] After reboot egress node, egress IP cannot be applied anymore.

Version-Release number of selected component (if applicable):

4.12.0-0.nightly-2022-11-07-181244

How reproducible:

Happens frequently in automation, but could not be reproduced manually.

Steps to Reproduce:

1. Label one node as egress node

2. Configure one EgressIP object (a minimal sketch follows these steps).
STEP: Check that one EgressIP is assigned in the object.

Nov  8 15:28:23.591: INFO: egressIPStatus: [{"egressIP":"192.168.54.72","node":"huirwang-1108c-pg2mt-worker-0-2fn6q"}]

3. Reboot the node and wait for it to become Ready.
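
A minimal sketch of the EgressIP object used in step 2; the namespaceSelector label is illustrative, and the node labeled in step 1 would carry the k8s.ovn.org/egress-assignable label:

apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: egressip-47031
spec:
  egressIPs:
  - 192.168.54.72
  namespaceSelector:
    matchLabels:
      team: qe   # illustrative label selecting the namespaces that use this egress IP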


Actual results:

EgressIP cannot be applied anymore. Waited more than 1 hour.
 oc get egressip
NAME             EGRESSIPS       ASSIGNED NODE   ASSIGNED EGRESSIPS
egressip-47031   192.168.54.72    

Expected results:

The egressIP should be applied correctly.

Additional info:


Some logs
E1108 07:29:41.849149       1 egressip.go:1635] No assignable nodes found for EgressIP: egressip-47031 and requested IPs: [192.168.54.72]
I1108 07:29:41.849288       1 event.go:285] Event(v1.ObjectReference{Kind:"EgressIP", Namespace:"", Name:"egressip-47031", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'NoMatchingNodeFound' no assignable nodes for EgressIP: egressip-47031, please tag at least one node with label: k8s.ovn.org/egress-assignable


W1108 07:33:37.401149       1 egressip_healthcheck.go:162] Could not connect to huirwang-1108c-pg2mt-worker-0-2fn6q (10.131.0.2:9107): context deadline exceeded
I1108 07:33:37.401348       1 master.go:1364] Adding or Updating Node "huirwang-1108c-pg2mt-worker-0-2fn6q"
I1108 07:33:37.437465       1 egressip_healthcheck.go:168] Connected to huirwang-1108c-pg2mt-worker-0-2fn6q (10.131.0.2:9107)

After this log, there seem to be no further log entries related to "192.168.54.72".

This is a clone of issue OCPBUGS-212. The following is the description of the original issue:

Description of problem:

oc --context build02 get clusterversion
NAME      VERSION       AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.12.0-ec.1   True        False         45h     Error while reconciling 4.12.0-ec.1: the cluster operator kube-controller-manager is degraded

oc --context build02 get co kube-controller-manager
NAME                      VERSION       AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
kube-controller-manager   4.12.0-ec.1   True        False         True       2y87d   GarbageCollectorDegraded: error fetching rules: Get "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules": dial tcp 172.30.153.28:9091: connect: cannot assign requested address

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:

build02 is a build farm cluster in CI production.
I can provide credentials to access the cluster if needed.

This is a clone of issue OCPBUGS-8016. The following is the description of the original issue:

This is a clone of issue OCPBUGS-1748. The following is the description of the original issue:

Description of problem:

PipelineRun templates are currently fetched from the `openshift-pipelines` namespace. They have to be fetched from the `openshift` namespace instead.

Version-Release number of selected component (if applicable):
4.11 and 1.8.1 OSP

To align with the operator changes in 1.8.1 (https://issues.redhat.com/browse/SRVKP-2413), the UI has to update its code to fetch PipelineRun templates from the openshift namespace.

Description of problem:

Manual backport of 
* https://github.com/openshift/cluster-dns-operator/pull/336
* https://github.com/openshift/cluster-dns-operator/pull/339

Version-Release number of selected component (if applicable):

4.11

The two modules that are auto-generated for the CLI docs need ":_content-type: REFERENCE" added to the top of the files. Update the doc generation templates to add this.

Description of problem:

[4.11.z] Fix kubevirt-console tests

Version-Release number of selected component (if applicable):

 

How reproducible:

 

Steps to Reproduce:

1.
2.
3.

Actual results:

 

Expected results:

 

Additional info:

 

This is a clone of issue OCPBUGS-858. The following is the description of the original issue:

Description of problem:

In OCP 4.9, the package-server-manager was introduced to manage the packageserver CSV. However, when OCP 4.8 is upgraded to 4.9, the packageserver stays stuck at v0.17.0 (the version in OCP 4.8) and v0.18.3 (the version in OCP 4.9) does not roll out.

Version-Release number of selected component (if applicable):

 

How reproducible:

Always

Steps to Reproduce:

1. Install OCP 4.8

2. Upgrade to OCP 4.9 

$ oc get clusterversion 
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.8.0-0.nightly-2022-08-31-160214   True        True          50m     Working towards 4.9.47: 619 of 738 done (83% complete)

$ oc get clusterversion 
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.9.47    True        False         4m26s   Cluster version is 4.9.47
 

Actual results:

Check the packageserver CSV. It is still at v0.17.0:

$ oc get csv
NAME            DISPLAY          VERSION   REPLACES   PHASE
packageserver   Package Server   0.17.0               Succeeded

Expected results:

packageserver CSV is at 0.18.3 

Additional info:

packageserver CSV version in 4.8: https://github.com/openshift/operator-framework-olm/blob/release-4.8/manifests/0000_50_olm_15-packageserver.clusterserviceversion.yaml#L12

packageserver CSV version in 4.9: https://github.com/openshift/operator-framework-olm/blob/release-4.9/pkg/manifests/csv.yaml#L8

Our Prometheus alerts are inconsistent with both upstream and sometimes our own vendor folder. Let's do a clean update run before the next release is branched off.

Description of problem:

When trying to add a Cisco UCS Rackmount server as a `baremetalhost` CR, the following error comes up in the metal3 container log in the openshift-machine-api namespace:

'TransferProtocolType' property which is mandatory to complete the action is missing in the request body

Full log entry:

{"level":"info","ts":1677155695.061805,"logger":"provisioner.ironic","msg":"current provision state","host":"ucs-rackmounts~ocp-test-1","lastError":"Deploy step deploy.deploy failed with BadRequestError: HTTP POST https://10.5.4.78/redfish/v1/Managers/CIMC/VirtualMedia/0/Actions/VirtualMedia.InsertMedia returned code 400. Base.1.4.0.GeneralError: 'TransferProtocolType' property which is mandatory to complete the action is missing in the request body. Extended information: [{'@odata.type': 'Message.v1_0_6.Message', 'MessageId': 'Base.1.4.0.GeneralError', 'Message': "'TransferProtocolType' property which is mandatory to complete the action is missing in the request body.", 'MessageArgs': [], 'Severity': 'Critical'}].","current":"deploy failed","target":"active"}

Version-Release number of selected component (if applicable):

    image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30328143480d6598d0b52d41a6b755bb0f4dfe04c4b7aa7aefd02ea793a2c52b
    imagePullPolicy: IfNotPresent
    name: metal3-ironic

How reproducible:

Adding a Cisco UCS Rackmount with Redfish enabled as a baremetalhost to metal3

Steps to Reproduce:

1. Add the BareMetalHost using the address redfish-virtualmedia://10.5.4.78/redfish/v1/Systems/WZP22100SBV (a minimal example CR is sketched below)
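
A hedged sketch of the BareMetalHost used here; the boot MAC address and credentials Secret name are illustrative:

apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: ocp-test-1
  namespace: ucs-rackmounts
spec:
  online: true
  bootMACAddress: aa:bb:cc:dd:ee:ff         # illustrative MAC
  bmc:
    address: redfish-virtualmedia://10.5.4.78/redfish/v1/Systems/WZP22100SBV
    credentialsName: ocp-test-1-bmc-secret  # illustrative Secret name
    disableCertificateVerification: true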

Actual results:

[baelen@baelen-jumphost mce]$ oc get baremetalhosts.metal3.io  -n ucs-rackmounts  ocp-test-1
NAME         STATE          CONSUMER   ONLINE   ERROR                AGE
ocp-test-1   provisioning              true     provisioning error   23h

Expected results:

For the provisioning to be successful.

Additional info:

 

This is a clone of issue OCPBUGS-7633. The following is the description of the original issue:

This is a clone of issue OCPBUGS-1125. The following is the description of the original issue:

(originally reported in BZ as https://bugzilla.redhat.com/show_bug.cgi?id=1983200)

test:
[sig-etcd][Feature:DisasterRecovery][Disruptive] [Feature:EtcdRecovery] Cluster should restore itself after quorum loss [Serial]

is failing frequently in CI, see search results:
https://search.ci.openshift.org/?maxAge=168h&context=1&type=bug%2Bjunit&name=&maxMatches=5&maxBytes=20971520&groupBy=job&search=%5C%5Bsig-etcd%5C%5D%5C%5BFeature%3ADisasterRecovery%5C%5D%5C%5BDisruptive%5C%5D+%5C%5BFeature%3AEtcdRecovery%5C%5D+Cluster+should+restore+itself+after+quorum+loss+%5C%5BSerial%5C%5D

https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/release-openshift-origin-installer-e2e-aws-disruptive-4.8/1413625606435770368
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/release-openshift-origin-installer-e2e-aws-disruptive-4.8/1415075413717159936

some brief triaging from Thomas Jungblut on:
https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/release-openshift-origin-installer-e2e-aws-disruptive-4.11/1568747321334697984

it seems the last guard pod doesn't come up; the etcd operator installs this properly, and the revision installer also does not report any errors. It just doesn't progress to the latest revision. At first glance this doesn't look like an issue with etcd itself, but it needs a closer look for sure.

Description of problem:

The test local-test is failing on openshift/thanos when upgrading the golang version to 1.18 on the release-4.11 branch.

Please refer to this test log for details:
https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_thanos/82/pull-ci-openshift-thanos-release-4.11-test-local/1541516614497734656

Version-Release number of selected component (if applicable):

4.11

How reproducible:

See the local-test job on pull requests in the Openshift/Thanos repository

Steps to Reproduce:


Actual results:

local-test fails on the following error:
level=error ts=2022-06-27T20:28:12.306Z caller=web.go:99 component=web msg="panic while serving request" client=127.0.0.1:37064 url=/api/v1/metadata err="runtime error: invalid memory address or nil pointer dereference" stack="goroutine 278 [running]:\ngithub.com/prometheus/prometheus/web.withStackTracer.func1.1()\n\t/go/pkg/mod/github.com/prometheus/prometheus@v1.8.2-0.20200724121523-657ba532e42f/web/web.go:98 +0x99\npanic({0x1c34760, 0x308ad40})\n\t/usr/lib/golang/src/runtime/panic.go:838 +0x207\nreflect.mapiternext(0xc000458540?)\n\t/usr/lib/golang/src/runtime/map.go:1378 +0x19\ngithub.com/modern-go/reflect2.(*UnsafeMapIterator).UnsafeNext(0x1bd62e0?)\n\t/go/pkg/mod/github.com/modern-go/reflect2@v1.0.1/unsafe_map.go:136 +0x32\ngithub.com/json-iterator/go.(*sortKeysMapEncoder).Encode(0xc000949d10, 0xc0002966b0, 0xc0006c7740)\n\t/go/pkg/mod/github.com/json-iterator/go@v1.1.9/reflect_map.go:297 +0x31a\ngithub.com/json-iterator/go.(*onePtrEncoder).Encode(0xc0008cb120, 0xc000948fc0, 0xc0001139c0?)\n\t/go/pkg/mod/github.com/json-iterator/go@v1.1.9/reflect.go:219 +0x82\ngithub.com/json-iterator/go.(*Stream).WriteVal(0xc0006c7740, {0x1c16da0, 0xc000948fc0})\n\t/go/pkg/mod/github.com/json-iterator/go@v1.1.9/reflect.go:98 +0x158\ngithub.com/json-iterator/go.(*dynamicEncoder).Encode(0xc00094cd58?, 0xfa9a07?, 0xc0006c7758?)\n\t/go/pkg/mod/github.com/json-iterator/go@v1.1.9/reflect_dynamic.go:15 +0x39\ngithub.com/json-iterator/go.(*structFieldEncoder).Encode(0xc000949620, 0x1a4aaba?, 0xc0006c7740)\n\t/go/pkg/mod/github.com/json-iterator/go@v1.1.9/reflect_struct_encoder.go:110 +0x56\ngithub.com/json-iterator/go.(*structEncoder).Encode(0xc000949740, 0x0?, 0xc0006c7740)\n\t/go/pkg/mod/github.com/json-iterator/go@v1.1.9/reflect_struct_encoder.go:158 +0x652\ngithub.com/json-iterator/go.(*OptionalEncoder).Encode(0xc0001afd60?, 0x0?, 0x0?)\n\t/go/pkg/mod/github.com/json-iterator/go@v1.1.9/reflect_optional.go:74 +0xa4\ngithub.com/json-iterator/go.(*onePtrEncoder).Encode(0xc0008cad40, 0xc0006c76e0, 0xc000949020?)\n\t/go/pkg/mod/github.com/json-iterator/go@v1.1.9/reflect.go:219 +0x82\ngithub.com/json-iterator/go.(*Stream).WriteVal(0xc0006c7740, {0x1ac56e0, 0xc0006c76e0})\n\t/go/pkg/mod/github.com/json-iterator/go@v1.1.9/reflect.go:98 +0x158\ngithub.com/json-iterator/go.(*frozenConfig).Marshal(0xc0001afd60, {0x1ac56e0, 0xc0006c76e0})\n\t/go/pkg/mod/github.com/json-iterator/go@v1.1.9/config.go:299 +0xc9\ngithub.com/prometheus/prometheus/web/api/v1.(*API).respond(0xc0002d7a40, {0x229a448, 0xc00022bd60}, {0x1c16da0?, 0xc000948fc0}, {0x0?, 0x7fe5a05a5b20?, 0x20?})\n\t/go/pkg/mod/github.com/prometheus/prometheus@v1.8.2-0.20200724121523-657ba532e42f/web/api/v1/api.go:1437 +0x162\ngithub.com/prometheus/prometheus/web/api/v1.(*API).Register.func1.1({0x229a448, 0xc00022bd60}, 0x7fe5982c5300?)\n\t/go/pkg/mod/github.com/prometheus/prometheus@v1.8.2-0.20200724121523-657ba532e42f/web/api/v1/api.go:273 +0x20b\nnet/http.HandlerFunc.ServeHTTP(0x7fe5982c5300?, {0x229a448?, 0xc00022bd60?}, 0xc00072b270?)\n\t/usr/lib/golang/src/net/http/server.go:2084 +0x2f\ngithub.com/prometheus/prometheus/util/httputil.CompressionHandler.ServeHTTP({{0x2290780?, 0xc000856288?}}, {0x7fe5982c5300?, 0xc00072b270?}, 0x228fb20?)\n\t/go/pkg/mod/github.com/prometheus/prometheus@v1.8.2-0.20200724121523-657ba532e42f/util/httputil/compression.go:90 +0x69\ngithub.com/prometheus/prometheus/web.(*Handler).testReady.func1({0x7fe5982c5300?, 0xc00072b270?}, 
0x7fe5982c5300?)\n\t/go/pkg/mod/github.com/prometheus/prometheus@v1.8.2-0.20200724121523-657ba532e42f/web/web.go:499 +0x39\nnet/http.HandlerFunc.ServeHTTP(0x7fe5982c5300?, {0x7fe5982c5300?, 0xc00072b270?}, 0x50?)\n\t/usr/lib/golang/src/net/http/server.go:2084 +0x2f\ngithub.com/prometheus/client_golang/prometheus/promhttp.InstrumentHandlerResponseSize.func1({0x7fe5982c5300?, 0xc00072b220?}, 0xc000250c00)\n\t/go/pkg/mod/github.com/prometheus/client_golang@v1.6.0/prometheus/promhttp/instrument_server.go:196 +0xa5\nnet/http.HandlerFunc.ServeHTTP(0x228fb80?, {0x7fe5982c5300?, 0xc00072b220?}, 0xc000948ed0?)\n\t/usr/lib/golang/src/net/http/server.go:2084 +0x2f\ngithub.com/prometheus/client_golang/prometheus/promhttp.InstrumentHandlerDuration.func2({0x7fe5982c5300, 0xc00072b220}, 0xc000250c00)\n\t/go/pkg/mod/github.com/prometheus/client_golang@v1.6.0/prometheus/promhttp/instrument_server.go:76 +0xa2\nnet/http.HandlerFunc.ServeHTTP(0x22a4a68?, {0x7fe5982c5300?, 0xc00072b220?}, 0x0?)\n\t/usr/lib/golang/src/net/http/server.go:2084 +0x2f\ngithub.com/prometheus/client_golang/prometheus/promhttp.InstrumentHandlerCounter.func1({0x22a4a68?, 0xc00072b1d0?}, 0xc000250c00)\n\t/go/pkg/mod/github.com/prometheus/client_golang@v1.6.0/prometheus/promhttp/instrument_server.go:100 +0x94\ngithub.com/prometheus/prometheus/web.setPathWithPrefix.func1.1({0x22a4a68, 0xc00072b1d0}, 0xc000250b00)\n\t/go/pkg/mod/github.com/prometheus/prometheus@v1.8.2-0.20200724121523-657ba532e42f/web/web.go:1142 +0x290\ngithub.com/prometheus/common/route.(*Router).handle.func1({0x22a4a68, 0xc00072b1d0}, 0xc000250a00, {0x0, 0x0, 0xc00022c364?})\n\t/go/pkg/mod/github.com/prometheus/common@v0.10.0/route/route.go:83 +0x2ae\ngithub.com/julienschmidt/httprouter.(*Router).ServeHTTP(0xc0001cc780, {0x22a4a68, 0xc00072b1d0}, 0xc000250a00)\n\t/go/pkg/mod/github.com/julienschmidt/httprouter@v1.3.0/router.go:387 +0x82b\ngithub.com/prometheus/common/route.(*Router).ServeHTTP(0x8?, {0x22a4a68?, 0xc00072b1d0?}, 0x203000?)\n\t/go/pkg/mod/github.com/prometheus/common@v0.10.0/route/route.go:121 +0x26\nnet/http.StripPrefix.func1({0x22a4a68, 0xc00072b1d0}, 0xc000250900)\n\t/usr/lib/golang/src/net/http/server.go:2127 +0x330\nnet/http.HandlerFunc.ServeHTTP(0x10?, {0x22a4a68?, 0xc00072b1d0?}, 0x7fe5c8423f18?)\n\t/usr/lib/golang/src/net/http/server.go:2084 +0x2f\nnet/http.(*ServeMux).ServeHTTP(0x413d87?, {0x22a4a68, 0xc00072b1d0}, 0xc000250900)\n\t/usr/lib/golang/src/net/http/server.go:2462 +0x149\ngithub.com/opentracing-contrib/go-stdlib/nethttp.MiddlewareFunc.func5({0x22a3808?, 0xc000a282a0}, 0xc000250200)\n\t/go/pkg/mod/github.com/opentracing-contrib/go-stdlib@v0.0.0-20190519235532-cf7a6c988dc9/nethttp/server.go:140 +0x662\nnet/http.HandlerFunc.ServeHTTP(0x0?, {0x22a3808?, 0xc000a282a0?}, 0xffffffffffffffff?)\n\t/usr/lib/golang/src/net/http/server.go:2084 +0x2f\ngithub.com/prometheus/prometheus/web.withStackTracer.func1({0x22a3808?, 0xc000a282a0?}, 0xc0008ca850?)\n\t/go/pkg/mod/github.com/prometheus/prometheus@v1.8.2-0.20200724121523-657ba532e42f/web/web.go:103 +0x97\nnet/http.HandlerFunc.ServeHTTP(0x0?, {0x22a3808?, 0xc000a282a0?}, 0xc000100000?)\n\t/usr/lib/golang/src/net/http/server.go:2084 +0x2f\nnet/http.serverHandler.ServeHTTP({0xc000c55380?}, {0x22a3808, 0xc000a282a0}, 0xc000250200)\n\t/usr/lib/golang/src/net/http/server.go:2916 +0x43b\nnet/http.(*conn).serve(0xc0000d1540, {0x22a4e18, 0xc00061a0c0})\n\t/usr/lib/golang/src/net/http/server.go:1966 +0x5d7\ncreated by net/http.(*Server).Serve\n\t/usr/lib/golang/src/net/http/server.go:3071 +0x4db\n"
level=error ts=2022-06-27T20:28:12.306Z caller=stdlib.go:89 component=web caller="http: panic serving 127.0.0.1:37064" msg="runtime error: invalid memory address or nil pointer dereference"

Expected results:

local-test does not fail on the error above.

Additional info:


Description of problem:

Similar to OCPBUGS-11636, ccoctl needs to be updated to account for the S3 bucket changes described in https://aws.amazon.com/blogs/aws/heads-up-amazon-s3-security-changes-are-coming-in-april-of-2023/

These changes have rolled out to us-east-2 and the China regions as of today and will roll out to additional regions in the near future.

See OCPBUGS-11636 for additional information

Version-Release number of selected component (if applicable):

 

How reproducible:

Reproducible in affected regions.

Steps to Reproduce:

1. Use "ccoctl aws create-all" flow to create STS infrastructure in an affected region like us-east-2. Notice that document upload fails because the s3 bucket is created in a state that does not allow usage of ACLs with the s3 bucket.

Actual results:

./ccoctl aws create-all --name abutchertestue2 --region us-east-2 --credentials-requests-dir ./credrequests --output-dir _output
2023/04/11 13:01:06 Using existing RSA keypair found at _output/serviceaccount-signer.private
2023/04/11 13:01:06 Copying signing key for use by installer
2023/04/11 13:01:07 Bucket abutchertestue2-oidc created
2023/04/11 13:01:07 Failed to create Identity provider: failed to upload discovery document in the S3 bucket abutchertestue2-oidc: AccessControlListNotSupported: The bucket does not allow ACLs
        status code: 400, request id: 2TJKZC6C909WVRK7, host id: zQckCPmozx+1yEhAj+lnJwvDY9rG14FwGXDnzKIs8nQd4fO4xLWJW3p9ejhFpDw3c0FE2Ggy1Yc=

Expected results:

"ccoctl aws create-all" successfully creates IAM and S3 infrastructure. OIDC discovery and JWKS documents are successfully uploaded to the S3 bucket and are publicly accessible.

Additional info:

 

This is a clone of issue OCPBUGS-13013. The following is the description of the original issue:

This is a clone of issue OCPBUGS-12854. The following is the description of the original issue:

This is a clone of issue OCPBUGS-11550. The following is the description of the original issue:

Description of problem:

`cluster-reader` ClusterRole should have ["get", "list", "watch"] permissions for a number of privileged CRs, but lacks them for the API Group "k8s.ovn.org", which includes CRs such as EgressFirewalls, EgressIPs, etc.

Version-Release number of selected component (if applicable):

OCP 4.10 - 4.12 OVN

How reproducible:

Always

Steps to Reproduce:

1. Create a cluster with OVN components, e.g. EgressFirewall
2. Check permissions of ClusterRole `cluster-reader`

Actual results:

No permissions for OVN resources 

Expected results:

Get, list, and watch verb permissions for OVN resources

Additional info:

Looks like a similar bug was opened for "network-attachment-definitions" in OCPBUGS-6959 (whose closure is being contested).
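
For reference, the missing permissions could be granted by an aggregated ClusterRole along these lines; this is a sketch that assumes cluster-reader aggregates roles carrying the aggregate-to-cluster-reader label, and the role name is illustrative:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ovn-org-reader   # illustrative name
  labels:
    # assumption: cluster-reader is an aggregated role selecting this label
    rbac.authorization.k8s.io/aggregate-to-cluster-reader: "true"
rules:
- apiGroups:
  - k8s.ovn.org
  resources:
  - egressfirewalls
  - egressips
  verbs:
  - get
  - list
  - watch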

This is a clone of issue OCPBUGS-2508. The following is the description of the original issue:

Description of problem:

Installer fails due to Neutron policy error when creating Openstack servers for OCP master nodes.

$ oc get machines -A
NAMESPACE               NAME                          PHASE          TYPE   REGION   ZONE   AGE
openshift-machine-api   ostest-kwtf8-master-0         Running                               23h
openshift-machine-api   ostest-kwtf8-master-1         Running                               23h
openshift-machine-api   ostest-kwtf8-master-2         Running                               23h
openshift-machine-api   ostest-kwtf8-worker-0-g7nrw   Provisioning                          23h
openshift-machine-api   ostest-kwtf8-worker-0-lrkvb   Provisioning                          23h
openshift-machine-api   ostest-kwtf8-worker-0-vwrsk   Provisioning                          23h

$ oc -n openshift-machine-api logs machine-api-controllers-7454f5d65b-8fqx2 -c machine-controller
[...]
E1018 10:51:49.355143       1 controller.go:317] controller/machine_controller "msg"="Reconciler error" "error"="error creating Openstack instance: Failed to create port err: Request forbidden: [POST https://overcloud.redhat.local:13696/v2.0/ports], error message: {\"NeutronError\": {\"type\": \"PolicyNotAuthorized\", \"message\": \"(rule:create_port and (rule:create_port:allowed_address_pairs and (rule:create_port:allowed_address_pairs:ip_address and rule:create_port:allowed_address_pairs:ip_address))) is disallowed by policy\", \"detail\": \"\"}}" "name"="ostest-kwtf8-worker-0-lrkvb" "namespace"="openshift-machine-api"

Version-Release number of selected component (if applicable):

4.10.0-0.nightly-2022-10-14-023020

How reproducible:

Always

Steps to Reproduce:

1. Install 4.10 within provider networks (in primary or secondary interface)

Actual results:

Installation failure:
4.10.0-0.nightly-2022-10-14-023020: some cluster operators have not yet rolled out

Expected results:

Successful installation

Additional info:

Please find must-gather for installation on primary interface link here and for installation on secondary interface link here.

 

This is a clone of issue OCPBUGS-3111. The following is the description of the original issue:

This is a clone of issue OCPBUGS-2992. The following is the description of the original issue:

Description of problem:

The metal3-ironic container image in OKD fails during the steps in configure-ironic.sh that look for additional Oslo configuration entries, passed as environment variables, to configure the Ironic instance. It fails in OKD but not in OpenShift because the OpenShift image, being based on the builder image, happens to have unrelated variables set that match the regex, while the OKD image is based only on a stream8 image without these unrelated OS_-prefixed variables.

The metal3 pod created in response to even a provisioningNetwork: Disabled Provisioning object will therefore crashloop indefinitely.
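
For context, the Provisioning object referred to above is the cluster-scoped singleton; a minimal sketch with the provisioning network disabled:

apiVersion: metal3.io/v1alpha1
kind: Provisioning
metadata:
  name: provisioning-configuration
spec:
  provisioningNetwork: Disabled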

Version-Release number of selected component (if applicable):

4.11

How reproducible:

Always

Steps to Reproduce:

1. Deploy OKD to a bare metal cluster using the assisted-service, with the OKD ConfigMap applied to podman play kube, as in :https://github.com/openshift/assisted-service/tree/master/deploy/podman#okd-configuration
2. Observe the state of the metal3 pod in the openshift-machine-api namespace.

Actual results:

The metal3-ironic container repeatedly exits with nonzero, with the logs ending here:

++ export IRONIC_URL_HOST=10.1.1.21
++ IRONIC_URL_HOST=10.1.1.21
++ export IRONIC_BASE_URL=https://10.1.1.21:6385
++ IRONIC_BASE_URL=https://10.1.1.21:6385
++ export IRONIC_INSPECTOR_BASE_URL=https://10.1.1.21:5050
++ IRONIC_INSPECTOR_BASE_URL=https://10.1.1.21:5050
++ '[' '!' -z '' ']'
++ '[' -f /etc/ironic/ironic.conf ']'
++ cp /etc/ironic/ironic.conf /etc/ironic/ironic.conf_orig
++ tee /etc/ironic/ironic.extra
# Options set from Environment variables
++ echo '# Options set from Environment variables'
++ env
++ grep '^OS_'
++ tee -a /etc/ironic/ironic.extra

Expected results:

The metal3-ironic container starts and the metal3 pod is reported as ready.

Additional info:

This is the PR that introduced pipefail to the downstream ironic-image, which is not yet accepted in the upstream:
https://github.com/openshift/ironic-image/pull/267/files#diff-ab2b20df06f98d48f232d90f0b7aa464704257224862780635ec45b0ce8a26d4R3

This is the line that's failing:
https://github.com/openshift/ironic-image/blob/4838a077d849070563b70761957178055d5d4517/scripts/configure-ironic.sh#L57

This is the image base that OpenShift uses for ironic-image (before rewriting in ci-operator):
https://github.com/openshift/ironic-image/blob/4838a077d849070563b70761957178055d5d4517/Dockerfile.ocp#L9

Here is where the relevant environment variables are set in the builder images for OCP:
https://github.com/openshift/builder/blob/973602e0e576d7eccef4fc5810ba511405cd3064/hack/lib/build/version.sh#L87

Here is the final FROM line in the OKD image build (just stream8):
https://github.com/openshift/ironic-image/blob/4838a077d849070563b70761957178055d5d4517/Dockerfile.okd#L9

This results in the following differences between the two images:
$ podman run --rm -it --entrypoint bash quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:519ac06836d972047f311de5e57914cf842716e22a1d916a771f02499e0f235c -c 'env | grep ^OS_'
OS_GIT_MINOR=11
OS_GIT_TREE_STATE=clean
OS_GIT_COMMIT=97530a7
OS_GIT_VERSION=4.11.0-202210061001.p0.g97530a7.assembly.stream-97530a7
OS_GIT_MAJOR=4
OS_GIT_PATCH=0
$ podman run --rm -it --entrypoint bash quay.io/openshift/okd-content@sha256:6b8401f8d84c4838cf0e7c598b126fdd920b6391c07c9409b1f2f17be6d6d5cb -c 'env | grep ^OS_'

Here is what the OS_ prefixed variables should be used for:
https://github.com/metal3-io/ironic-image/blob/807a120b4ce5e1675a79ebf3ee0bb817cfb1f010/README.md?plain=1#L36
https://opendev.org/openstack/oslo.config/src/commit/84478d83f87e9993625044de5cd8b4a18dfcaf5d/oslo_config/sources/_environment.py

It's worth noting that ironic.extra is not consumed anywhere, and is simply being used here to save off the variables that Oslo _might_ be consuming (it won't consume the variables that are present in the OCP builder image, though they do get caught by this regex).

With pipefail set, grep returns non-zero when it fails to find an environment variable that matches the regex, as in the case of the OKD ironic-image builds.

 

This is a clone of issue OCPBUGS-4504. The following is the description of the original issue:

This is a clone of issue OCPBUGS-1557. The following is the description of the original issue:

Seen in an instance created recently by a 4.12.0-ec.2 GCP provider:

  "scheduling": {
    "automaticRestart": false,
    "onHostMaintenance": "MIGRATE",
    "preemptible": false,
    "provisioningModel": "STANDARD"
  },

From GCP's docs, they may stop instances on hardware failures and other causes, and we'd need automaticRestart: true to auto-recover from that. Also from GCP docs, the default for automaticRestart is true. And on the Go provider side, we document:

If omitted, the platform chooses a default, which is subject to change over time, currently that default is "Always".

But the implementing code does not actually float the setting. Seems like a regression here, which is part of 4.10:

$ git clone https://github.com/openshift/machine-api-provider-gcp.git
$ cd machine-api-provider-gcp
$ git log --oneline origin/release-4.10 | grep 'migrate to openshift/api'
44f0f958 migrate to openshift/api

But that's not where the 4.9 and earlier code is located:

$ git branch -a | grep origin/release
  remotes/origin/release-4.10
  remotes/origin/release-4.11
  remotes/origin/release-4.12
  remotes/origin/release-4.13

Hunting for 4.9 code:

$ oc adm release info --commits quay.io/openshift-release-dev/ocp-release:4.9.48-x86_64 | grep gcp
  gcp-machine-controllers                        https://github.com/openshift/cluster-api-provider-gcp                       c955c03b2d05e3b8eb0d39d5b4927128e6d1c6c6
  gcp-pd-csi-driver                              https://github.com/openshift/gcp-pd-csi-driver                              48d49f7f9ef96a7a42a789e3304ead53f266f475
  gcp-pd-csi-driver-operator                     https://github.com/openshift/gcp-pd-csi-driver-operator                     d8a891de5ae9cf552d7d012ebe61c2abd395386e

So looking there:

$ git clone https://github.com/openshift/cluster-api-provider-gcp.git
$ cd cluster-api-provider-gcp
$ git log --oneline | grep 'migrate to openshift/api'
...no hits...
$ git grep -i automaticRestart origin/release-4.9  | grep -v '"description"\|compute-gen.go'
origin/release-4.9:vendor/google.golang.org/api/compute/v1/compute-api.json:        "automaticRestart": {

Not actually clear to me how that code is structured. So 4.10 and later GCP machine-API providers are impacted, and I'm unclear on 4.9 and earlier.

This is a clone of issue OCPBUGS-675. The following is the description of the original issue:

Description of problem:

A cluster hit a panic in etcd operator in bootstrap:
I0829 14:46:02.736582 1 controller_manager.go:54] StaticPodStateController controller terminated
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1e940ab]

goroutine 2701 [running]:
github.com/openshift/cluster-etcd-operator/pkg/etcdcli.checkSingleMemberHealth({0x29374c0, 0xc00217d920}, 0xc0021fb110)
github.com/openshift/cluster-etcd-operator/pkg/etcdcli/health.go:135 +0x34b
github.com/openshift/cluster-etcd-operator/pkg/etcdcli.getMemberHealth.func1()
github.com/openshift/cluster-etcd-operator/pkg/etcdcli/health.go:58 +0x7f
created by github.com/openshift/cluster-etcd-operator/pkg/etcdcli.getMemberHealth
github.com/openshift/cluster-etcd-operator/pkg/etcdcli/health.go:54 +0x2ac
Version-Release number of selected component (if applicable):

 

How reproducible:

Pulled up a 4.12 cluster and hit panic during bootstrap

Steps to Reproduce:

1.
2.
3.

Actual results:

panic as above

Expected results:

no panic

Additional info:

 

This is a clone of issue OCPBUGS-6671. The following is the description of the original issue:

This is a clone of issue OCPBUGS-3228. The following is the description of the original issue:

While starting a PipelineRun using the UI, and in the process of providing the values on "Start Pipeline", the IBM Power customer (Deepak Shetty from IBM) tried creating credentials under "Advanced options" with "Image Registry Credentials" (Authentication type). When the customer verified the credentials from the Secrets tab (in Workloads), the secret was found in a broken state. A screenshot of the broken secret is attached.

The issue has been observed on OCP4.8, OCP4.9 and OCP4.10.

Description of problem:

The alertmanager pod is stuck in the ContainerCreating state on OCP 4.11 with OVN

From oc describe alertmanager pod:
...
Events:
  Type     Reason                  Age                  From     Message
  ----     ------                  ----                 ----     -------
  Warning  FailedCreatePodSandBox  16s (x459 over 17h)  kubelet  (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_alertmanager-managed-ocs-alertmanager-0_openshift-storage_3a55ed54-4eaa-4f65-8a10-e5d21fad1ebc_0(88575547dc0b210307b89dd2bb8e379ece0962b607ac2707a1c2cf630b1aaa78): error adding pod openshift-storage_alertmanager-managed-ocs-alertmanager-0 to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): [openshift-storage/alertmanager-managed-ocs-alertmanager-0/3a55ed54-4eaa-4f65-8a10-e5d21fad1ebc:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[openshift-storage/alertmanager-managed-ocs-alertmanager-0 88575547dc0b210307b89dd2bb8e379ece0962b607ac2707a1c2cf630b1aaa78] [openshift

Version-Release number of selected component (if applicable):

OCP 4.11 with OVN

How reproducible:

100%

Steps to Reproduce:

1. Terminate the node on which the alertmanager pod is running
2. The pod will get stuck in the ContainerCreating state
3.

Actual results:

The alertmanager pod is stuck in the ContainerCreating state

Expected results:

Alertmanager pod is ready

Additional info:

The workaround would be to terminate the alertmanager pod
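A hedged sketch of that workaround, using the pod and namespace from the event output above (substitute the name of the stuck replica in your cluster):

```
oc delete pod alertmanager-managed-ocs-alertmanager-0 -n openshift-storage
```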

This is a clone of issue OCPBUGS-4072. The following is the description of the original issue:

This is a clone of issue OCPBUGS-4026. The following is the description of the original issue:

Description of problem:
There is an endless re-render loop, and the browser becomes slow and eventually gets stuck when opening the add page or the topology.

Saw also endless API calls to /api/kubernetes/apis/binding.operators.coreos.com/v1alpha1/bindablekinds/bindable-kinds

Version-Release number of selected component (if applicable):
1. Console UI 4.12-4.13 (master)
2. Service Binding Operator (tested with 1.3.1)

How reproducible:
Always with installed SBO

But the "stuck feeling" depends on the browser (Firefox feels more stuck) and your locale machine power

Steps to Reproduce:
1. Install Service Binding Operator
2. Create or update the BindableKinds resource "bindable-kinds" (an apply sketch follows these steps)

apiVersion: binding.operators.coreos.com/v1alpha1
kind: BindableKinds
metadata:
  name: bindable-kinds

3. Open the browser console log
4. Open the console UI and navigate to the add page
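For step 2, a minimal way to create the empty resource from the manifest above (note it intentionally carries no status, which is the condition that triggers the loop); this is a sketch, not the operator's own manifest:

```
oc apply -f - <<'EOF'
apiVersion: binding.operators.coreos.com/v1alpha1
kind: BindableKinds
metadata:
  name: bindable-kinds
EOF
```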

Actual results:
1. Saw endless API calls to /api/kubernetes/apis/binding.operators.coreos.com/v1alpha1/bindablekinds/bindable-kinds
2. Browser feels slow and get stuck after some time
3. The page crashes after some time

Expected results:
1. The API call should be called just once
2. The add page should just work without feeling laggy
3. No crash

Additional info:
This got introduced after we started watching the bindable-kinds resource with https://github.com/openshift/console/pull/11161

It looks like this happens only if the SBO is installed and the bindable-kinds resource exists but doesn't contain any status.

The status lists all available bindable resource types. I could not reproduce this by installing and uninstalling an operator, but you can manually create or update this resource as mentioned above.

Description of problem:

Cluster running 4.10.52 had three aws-ebs-csi-driver-node pods begin to consume multiple GB of memory, causing heavy node memory pressure as the pods have no memory limit. 

All other aws-ebs-csi-driver-node pods were still in the 50-70MB range:

NAME                                            CPU(cores)   MEMORY(bytes)   
aws-ebs-csi-driver-controller-59867579b-d6s2q   0m           397Mi           
aws-ebs-csi-driver-controller-59867579b-t4wgq   0m           276Mi           
aws-ebs-csi-driver-node-4rmvk                   0m           53Mi            
aws-ebs-csi-driver-node-5799f                   0m           50Mi            
aws-ebs-csi-driver-node-6dpvg                   0m           59Mi            
aws-ebs-csi-driver-node-6ldzk                   0m           65Mi            
aws-ebs-csi-driver-node-6mbk5                   0m           54Mi            
aws-ebs-csi-driver-node-bkvsr                   0m           50Mi            
aws-ebs-csi-driver-node-c2fb2                   0m           62Mi            
aws-ebs-csi-driver-node-f422m                   0m           61Mi            
aws-ebs-csi-driver-node-lwzbb                   6m           1940Mi          
aws-ebs-csi-driver-node-mjznt                   0m           53Mi            
aws-ebs-csi-driver-node-pczsj                   0m           62Mi            
aws-ebs-csi-driver-node-pmskn                   0m           3493Mi          
aws-ebs-csi-driver-node-qft8w                   0m           68Mi            
aws-ebs-csi-driver-node-v5bpx                   11m          2076Mi          
aws-ebs-csi-driver-node-vn8km                   0m           84Mi            
aws-ebs-csi-driver-node-ws6hx                   0m           73Mi            
aws-ebs-csi-driver-node-xsk7k                   0m           59Mi            
aws-ebs-csi-driver-node-xzwlh                   0m           55Mi            
aws-ebs-csi-driver-operator-8c5ffb6d4-fk6zk     5m           88Mi            

Deleting the pods caused them to recreate, with normal memory consumption levels.
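A hedged sketch of that remediation; the namespace is assumed to be openshift-cluster-csi-drivers, where the AWS EBS CSI driver DaemonSet runs:

```
# Delete the high-memory node pods listed above; the DaemonSet recreates them.
oc delete pod aws-ebs-csi-driver-node-lwzbb aws-ebs-csi-driver-node-pmskn aws-ebs-csi-driver-node-v5bpx \
  -n openshift-cluster-csi-drivers
```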

Version-Release number of selected component (if applicable):

4.10.52

How reproducible:

Unknown

This is a clone of issue OCPBUGS-501. The following is the description of the original issue:

Description of problem: 

Version-Release number of selected component (if applicable): 4.10.16

How reproducible: Always

Steps to Reproduce:
1. Edit the apiserver resource and add the spec.audit.customRules field (a non-interactive patch sketch follows these steps)

$ oc get apiserver cluster -o yaml
spec:
  audit:
    customRules:
    - group: system:authenticated:oauth
      profile: AllRequestBodies
    - group: system:authenticated
      profile: AllRequestBodies
    profile: Default

2. Allow the kube-apiserver pods to roll out a new revision.
3. Once the kube-apiserver pods are on the new revision, execute $ oc get dc
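A hedged, non-interactive equivalent of step 1 above, using a JSON merge patch with the same two custom rules:

```
oc patch apiserver cluster --type=merge -p '{
  "spec": {
    "audit": {
      "customRules": [
        {"group": "system:authenticated:oauth", "profile": "AllRequestBodies"},
        {"group": "system:authenticated", "profile": "AllRequestBodies"}
      ]
    }
  }
}'
```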

Actual results:

Error from server (InternalError): an error on the server ("This request caused apiserver to panic. Look in the logs for details.") has prevented the request from succeeding (get deploymentconfigs.apps.openshift.io)

Expected results: The command "oc get dc" should display the deploymentconfig without any error.

Additional info:

This is a clone of issue OCPBUGS-13183. The following is the description of the original issue:

This is a clone of issue OCPBUGS-10990. The following is the description of the original issue:

This is a clone of issue OCPBUGS-10526. The following is the description of the original issue:

Description of problem:


Version-Release number of selected component (if applicable):

 4.13.0-0.nightly-2023-03-17-161027 

How reproducible:

Always

Steps to Reproduce:

1. Create a GCP XPN cluster with the flexy job template ipi-on-gcp/versioned-installer-xpn-ci, then run 'oc describe node'

2. Check logs for cloud-network-config-controller pods

Actual results:


 % oc get nodes
NAME                                                          STATUS   ROLES                  AGE    VERSION
huirwang-0309d-r85mj-master-0.c.openshift-qe.internal         Ready    control-plane,master   173m   v1.26.2+06e8c46
huirwang-0309d-r85mj-master-1.c.openshift-qe.internal         Ready    control-plane,master   173m   v1.26.2+06e8c46
huirwang-0309d-r85mj-master-2.c.openshift-qe.internal         Ready    control-plane,master   173m   v1.26.2+06e8c46
huirwang-0309d-r85mj-worker-a-wsrls.c.openshift-qe.internal   Ready    worker                 162m   v1.26.2+06e8c46
huirwang-0309d-r85mj-worker-b-5txgq.c.openshift-qe.internal   Ready    worker                 162m   v1.26.2+06e8c46
In `oc describe node`, there are no related egressIP annotations:
% oc describe node huirwang-0309d-r85mj-worker-a-wsrls.c.openshift-qe.internal 
Name:               huirwang-0309d-r85mj-worker-a-wsrls.c.openshift-qe.internal
Roles:              worker
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/instance-type=n2-standard-4
                    beta.kubernetes.io/os=linux
                    failure-domain.beta.kubernetes.io/region=us-central1
                    failure-domain.beta.kubernetes.io/zone=us-central1-a
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=huirwang-0309d-r85mj-worker-a-wsrls.c.openshift-qe.internal
                    kubernetes.io/os=linux
                    machine.openshift.io/interruptible-instance=
                    node-role.kubernetes.io/worker=
                    node.kubernetes.io/instance-type=n2-standard-4
                    node.openshift.io/os_id=rhcos
                    topology.gke.io/zone=us-central1-a
                    topology.kubernetes.io/region=us-central1
                    topology.kubernetes.io/zone=us-central1-a
Annotations:        csi.volume.kubernetes.io/nodeid:
                      {"pd.csi.storage.gke.io":"projects/openshift-qe/zones/us-central1-a/instances/huirwang-0309d-r85mj-worker-a-wsrls"}
                    k8s.ovn.org/host-addresses: ["10.0.32.117"]
                    k8s.ovn.org/l3-gateway-config:
                      {"default":{"mode":"shared","interface-id":"br-ex_huirwang-0309d-r85mj-worker-a-wsrls.c.openshift-qe.internal","mac-address":"42:01:0a:00:...
                    k8s.ovn.org/node-chassis-id: 7fb1870c-4315-4dcb-910c-0f45c71ad6d3
                    k8s.ovn.org/node-gateway-router-lrp-ifaddr: {"ipv4":"100.64.0.5/16"}
                    k8s.ovn.org/node-mgmt-port-mac-address: 16:52:e3:8c:13:e2
                    k8s.ovn.org/node-primary-ifaddr: {"ipv4":"10.0.32.117/32"}
                    k8s.ovn.org/node-subnets: {"default":["10.131.0.0/23"]}
                    machine.openshift.io/machine: openshift-machine-api/huirwang-0309d-r85mj-worker-a-wsrls
                    machineconfiguration.openshift.io/controlPlaneTopology: HighlyAvailable
                    machineconfiguration.openshift.io/currentConfig: rendered-worker-bec5065070ded51e002c566a9c5bd16a
                    machineconfiguration.openshift.io/desiredConfig: rendered-worker-bec5065070ded51e002c566a9c5bd16a
                    machineconfiguration.openshift.io/desiredDrain: uncordon-rendered-worker-bec5065070ded51e002c566a9c5bd16a
                    machineconfiguration.openshift.io/lastAppliedDrain: uncordon-rendered-worker-bec5065070ded51e002c566a9c5bd16a
                    machineconfiguration.openshift.io/reason: 
                    machineconfiguration.openshift.io/state: Done
                    volumes.kubernetes.io/controller-managed-attach-detach: true


 % oc logs cloud-network-config-controller-5cd96d477d-2kmc9  -n openshift-cloud-network-config-controller  
W0320 03:00:08.981493       1 client_config.go:618] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0320 03:00:08.982280       1 leaderelection.go:248] attempting to acquire leader lease openshift-cloud-network-config-controller/cloud-network-config-controller-lock...
E0320 03:00:38.982868       1 leaderelection.go:330] error retrieving resource lock openshift-cloud-network-config-controller/cloud-network-config-controller-lock: Get "https://api-int.huirwang-0309d.qe.gcp.devcluster.openshift.com:6443/api/v1/namespaces/openshift-cloud-network-config-controller/configmaps/cloud-network-config-controller-lock": dial tcp: lookup api-int.huirwang-0309d.qe.gcp.devcluster.openshift.com: i/o timeout
E0320 03:01:23.863454       1 leaderelection.go:330] error retrieving resource lock openshift-cloud-network-config-controller/cloud-network-config-controller-lock: Get "https://api-int.huirwang-0309d.qe.gcp.devcluster.openshift.com:6443/api/v1/namespaces/openshift-cloud-network-config-controller/configmaps/cloud-network-config-controller-lock": dial tcp: lookup api-int.huirwang-0309d.qe.gcp.devcluster.openshift.com on 172.30.0.10:53: read udp 10.129.0.14:52109->172.30.0.10:53: read: connection refused
I0320 03:02:19.249359       1 leaderelection.go:258] successfully acquired lease openshift-cloud-network-config-controller/cloud-network-config-controller-lock
I0320 03:02:19.250662       1 controller.go:88] Starting node controller
I0320 03:02:19.250681       1 controller.go:91] Waiting for informer caches to sync for node workqueue
I0320 03:02:19.250693       1 controller.go:88] Starting secret controller
I0320 03:02:19.250703       1 controller.go:91] Waiting for informer caches to sync for secret workqueue
I0320 03:02:19.250709       1 controller.go:88] Starting cloud-private-ip-config controller
I0320 03:02:19.250715       1 controller.go:91] Waiting for informer caches to sync for cloud-private-ip-config workqueue
I0320 03:02:19.258642       1 controller.go:182] Assigning key: huirwang-0309d-r85mj-master-2.c.openshift-qe.internal to node workqueue
I0320 03:02:19.258671       1 controller.go:182] Assigning key: huirwang-0309d-r85mj-master-1.c.openshift-qe.internal to node workqueue
I0320 03:02:19.258682       1 controller.go:182] Assigning key: huirwang-0309d-r85mj-master-0.c.openshift-qe.internal to node workqueue
I0320 03:02:19.351258       1 controller.go:96] Starting node workers
I0320 03:02:19.351303       1 controller.go:102] Started node workers
I0320 03:02:19.351298       1 controller.go:96] Starting secret workers
I0320 03:02:19.351331       1 controller.go:102] Started secret workers
I0320 03:02:19.351265       1 controller.go:96] Starting cloud-private-ip-config workers
I0320 03:02:19.351508       1 controller.go:102] Started cloud-private-ip-config workers
E0320 03:02:19.589704       1 controller.go:165] error syncing 'huirwang-0309d-r85mj-master-1.c.openshift-qe.internal': error retrieving the private IP configuration for node: huirwang-0309d-r85mj-master-1.c.openshift-qe.internal, err: error retrieving the network interface subnets, err: googleapi: Error 404: The resource 'projects/openshift-qe/regions/us-central1/subnetworks/installer-shared-vpc-subnet-1' was not found, notFound, requeuing in node workqueue
E0320 03:02:19.615551       1 controller.go:165] error syncing 'huirwang-0309d-r85mj-master-0.c.openshift-qe.internal': error retrieving the private IP configuration for node: huirwang-0309d-r85mj-master-0.c.openshift-qe.internal, err: error retrieving the network interface subnets, err: googleapi: Error 404: The resource 'projects/openshift-qe/regions/us-central1/subnetworks/installer-shared-vpc-subnet-1' was not found, notFound, requeuing in node workqueue
E0320 03:02:19.644628       1 controller.go:165] error syncing 'huirwang-0309d-r85mj-master-2.c.openshift-qe.internal': error retrieving the private IP configuration for node: huirwang-0309d-r85mj-master-2.c.openshift-qe.internal, err: error retrieving the network interface subnets, err: googleapi: Error 404: The resource 'projects/openshift-qe/regions/us-central1/subnetworks/installer-shared-vpc-subnet-1' was not found, notFound, requeuing in node workqueue
E0320 03:02:19.774047       1 controller.go:165] error syncing 'huirwang-0309d-r85mj-master-0.c.openshift-qe.internal': error retrieving the private IP configuration for node: huirwang-0309d-r85mj-master-0.c.openshift-qe.internal, err: error retrieving the network interface subnets, err: googleapi: Error 404: The resource 'projects/openshift-qe/regions/us-central1/subnetworks/installer-shared-vpc-subnet-1' was not found, notFound, requeuing in node workqueue
E0320 03:02:19.783309       1 controller.go:165] error syncing 'huirwang-0309d-r85mj-master-1.c.openshift-qe.internal': error retrieving the private IP configuration for node: huirwang-0309d-r85mj-master-1.c.openshift-qe.internal, err: error retrieving the network interface subnets, err: googleapi: Error 404: The resource 'projects/openshift-qe/regions/us-central1/subnetworks/installer-shared-vpc-subnet-1' was not found, notFound, requeuing in node workqueue
E0320 03:02:19.816430       1 controller.go:165] error syncing 'huirwang-0309d-r85mj-master-2.c.openshift-qe.internal': error retrieving the private IP configuration for node: huirwang-0309d-r85mj-master-2.c.openshift-qe.internal, err: error retrieving the network interface subnets, err: googleapi: Error 404: The resource 'projects/openshift-qe/regions/us-central1/subnetworks/installer-shared-vpc-subnet-1' was not found, notFound, requeuing in node workqueue

Expected results:

EgressIP should work

Additional info:

It can be reproduced in 4.12 as well; it is not a regression issue.

This is a clone of issue OCPBUGS-6755. The following is the description of the original issue:

This is a clone of issue OCPBUGS-3316. The following is the description of the original issue:

Description of problem:

Branch name in repository pipelineruns list view should match the actual github branch name.

Version-Release number of selected component (if applicable):

4.11.z

How reproducible:

Always

Steps to Reproduce:

1. Create a repository
2. Trigger the PipelineRuns by a push or pull request event on GitHub

Actual results:

The branch name contains a "refs-heads-" prefix in front of the actual branch name, e.g. "refs-heads-cicd-demo" (cicd-demo is the branch name)

Expected results:

The branch name should be the actual GitHub branch name; just `cicd-demo` should be shown in the branch column.

 

Additional info:
Ref: https://coreos.slack.com/archives/CHG0KRB7G/p1667564311865459

This is a 4.11.z backport. By not originally backporting this change to 4.11, we introduced a mandatory flag for the `opm serve` command without keeping it optional for at least a two-OCP-version runway. This is impacting customers on 4.11 clusters as well as dependent peer projects like oc-mirror.

 

-----------------------------

Description of problem:

opm serve fails with message:

Error: compute digest: compute hash: write tar: stat .: os: DirFS with empty root

Version-Release number of selected component (if applicable):

4.12

How reproducible:

100%

Steps to Reproduce:

(The easiest reproducer involves serving an empty catalog)

1. mkdir /tmp/catalog

2. Use the Dockerfile /tmp/catalog.Dockerfile below, based on the 4.12 docs (https://access.redhat.com/documentation/en-us/openshift_container_platform/4.12/html-single/operators/index#olm-creating-fb-catalog-image_olm-managing-custom-catalogs):
# The base image is expected to contain
# /bin/opm (with a serve subcommand) and /bin/grpc_health_probe
FROM registry.redhat.io/openshift4/ose-operator-registry:v4.12

# Configure the entrypoint and command
ENTRYPOINT ["/bin/opm"]
CMD ["serve", "/configs"]

# Copy declarative config root into image at /configs
ADD catalog /configs

# Set DC-specific label for the location of the DC root directory
# in the image
LABEL operators.operatorframework.io.index.configs.v1=/configs

3. build the image `cd /tmp/ && docker build -f catalog.Dockerfile .`

4. execute an instance of the container in docker/podman `docker run --name cat-run [image-file]`

5. error

Using a dockerfile generated from opm (`opm generate dockerfile [dir]`) works, but includes precache and cachedir options to opm.
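A hedged sketch of that working path (directory and image tag are placeholders); per the note above, the generated Dockerfile includes the pre-cache and cache-dir options, so the cache is built at image build time:

```
opm generate dockerfile /tmp/catalog      # expected to write catalog.Dockerfile next to the catalog directory
cd /tmp && docker build -f catalog.Dockerfile -t example-catalog:latest .
docker run --rm example-catalog:latest
```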

 

Actual results:

Error: compute digest: compute hash: write tar: stat .: os: DirFS with empty root

Expected results:

opm generates cache in default /tmp/cache location and serves without error

Additional info:

 

 

This is a clone of issue OCPBUGS-7474. The following is the description of the original issue:

This is a clone of issue OCPBUGS-6714. The following is the description of the original issue:

Description of problem:

Traffic from egress IPs was interrupted after the cluster was patched to OpenShift 4.10.46.

A customer cluster was patched. It is an OpenShift 4.10.46 cluster with SDN.

A more detailed description of the issue is available in a private comment below, since it contains customer data.

This is a clone of issue OCPBUGS-1237. The following is the description of the original issue:

job=pull-ci-openshift-origin-master-e2e-gcp-builds=all

This test has started permafailing on e2e-gcp-builds:

[sig-builds][Feature:Builds][Slow] s2i build with environment file in sources Building from a template should create a image from "test-env-build.json" template and run it in a pod [apigroup:build.openshift.io][apigroup:image.openshift.io]

The error in the test says

Sep 13 07:03:30.345: INFO: At 2022-09-13 07:00:21 +0000 UTC - event for build-test-pod: {kubelet ci-op-kg1t2x13-4e3c6-7hrm8-worker-a-66nwd} Pulling: Pulling image "image-registry.openshift-image-registry.svc:5000/e2e-test-build-sti-env-nglnt/test@sha256:262820fd1a94d68442874346f4c4024fdf556631da51cbf37ce69de094f56fe8"
Sep 13 07:03:30.345: INFO: At 2022-09-13 07:00:23 +0000 UTC - event for build-test-pod: {kubelet ci-op-kg1t2x13-4e3c6-7hrm8-worker-a-66nwd} Pulled: Successfully pulled image "image-registry.openshift-image-registry.svc:5000/e2e-test-build-sti-env-nglnt/test@sha256:262820fd1a94d68442874346f4c4024fdf556631da51cbf37ce69de094f56fe8" in 1.763914719s
Sep 13 07:03:30.345: INFO: At 2022-09-13 07:00:23 +0000 UTC - event for build-test-pod: {kubelet ci-op-kg1t2x13-4e3c6-7hrm8-worker-a-66nwd} Created: Created container test
Sep 13 07:03:30.345: INFO: At 2022-09-13 07:00:23 +0000 UTC - event for build-test-pod: {kubelet ci-op-kg1t2x13-4e3c6-7hrm8-worker-a-66nwd} Started: Started container test
Sep 13 07:03:30.345: INFO: At 2022-09-13 07:00:24 +0000 UTC - event for build-test-pod: {kubelet ci-op-kg1t2x13-4e3c6-7hrm8-worker-a-66nwd} Pulled: Container image "image-registry.openshift-image-registry.svc:5000/e2e-test-build-sti-env-nglnt/test@sha256:262820fd1a94d68442874346f4c4024fdf556631da51cbf37ce69de094f56fe8" already present on machine
Sep 13 07:03:30.345: INFO: At 2022-09-13 07:00:25 +0000 UTC - event for build-test-pod: {kubelet ci-op-kg1t2x13-4e3c6-7hrm8-worker-a-66nwd} Unhealthy: Readiness probe failed: Get "http://10.129.2.63:8080/": dial tcp 10.129.2.63:8080: connect: connection refused
Sep 13 07:03:30.345: INFO: At 2022-09-13 07:00:26 +0000 UTC - event for build-test-pod: {kubelet ci-op-kg1t2x13-4e3c6-7hrm8-worker-a-66nwd} BackOff: Back-off restarting failed container

Tracker issue for bootimage bump in 4.11. This issue should block issues which need a bootimage bump to fix.

The previous bump was OCPBUGS-562.

While running a PerfScale test we noticed that the hosted ovnkube-master pods always initially error on deployment. They eventually succeed on retry however. 

This is running quay.io/openshift-release-dev/ocp-release:4.11.11-x86_64 for the hosted clusters and the hypershift operator is quay.io/hypershift/hypershift-operator:4.11 on a 4.11.9 management cluster.

An example of the error in the ovnkube-master container:

```

F1102 13:27:51.935600       1 ovnkube.go:133] error when trying to initialize libovsdb SB client: unable to connect to any endpoints: failed to connect to ssl:ovnkube-master-0.ovnkube-master-internal.clusters-perf-pqd-0021.svc.cluster.local:9642: failed to open connection: dial tcp 10.131.8.25:9642: connect: connection refused. failed to connect to ssl:ovnkube-master-1.ovnkube-master-internal.clusters-perf-pqd-0021.svc.cluste

```

Description of problem:

The 4.11 version of openshift-installer does not support the mon01 zone

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:

1.
2.
3.

Actual results:


Expected results:


Additional info:


This is a clone of issue OCPBUGS-2895. The following is the description of the original issue:

Description of problem:

Current validation will not accept Resource Groups or DiskEncryptionSets which have upper-case letters.

Version-Release number of selected component (if applicable):

4.11

How reproducible:

Attempt to create a cluster/machineset using a DiskEncryptionSet with an RG or Name with upper-case letters

Steps to Reproduce:

1. Create cluster with DiskEncryptionSet with upper-case letters in DES name or in Resource Group name

Actual results:

See error message:

encountered error: [controlPlane.platform.azure.defaultMachinePlatform.osDisk.diskEncryptionSet.resourceGroup: Invalid value: \"v4-e2e-V62447568-eastus\": invalid resource group format, compute[0].platform.azure.defaultMachinePlatform.osDisk.diskEncryptionSet.resourceGroup: Invalid value: \"v4-e2e-V62447568-eastus\": invalid resource group format]

Expected results:

Create a cluster/machineset using the existing and valid DiskEncryptionSet

Additional info:

I have submitted a PR for this already, but it needs to be reviewed and backported to 4.11: https://github.com/openshift/installer/pull/6513

Description of problem: defined in https://bugzilla.redhat.com/show_bug.cgi?id=2051533 

When adding a remote worker node using ZTP, the agent finishes the installation and is marked as Done.
oc get agent -o wide
NAME                                   CLUSTER   APPROVED   ROLE     STAGE   HOSTNAME                                      REQUESTED HOSTNAME
0277804e-2a7c-4d95-9d0f-e22a190d582a   spoke-0   true       worker   Done    spoke-worker-0-0.spoke-0.qe.lab.redhat.com    spoke-worker-0-0
12efa520-5b99-4474-805d-931e46ad43f7   spoke-0   true       master   Done    spoke-master-0-2.spoke-0.qe.lab.redhat.com    spoke-master-0-2
3b8eec89-f26f-4896-8f71-8a810894c560   spoke-0   true       master   Done    spoke-master-0-0.spoke-0.qe.lab.redhat.com    spoke-master-0-0
3fb3749e-c132-4258-ad1a-08a0445c9022   spoke-0   true       worker   Done    spoke-worker-0-1.spoke-0.qe.lab.redhat.com    spoke-worker-0-1
728559e9-5543-41d9-adb0-e58196f765af   spoke-0   true       master   Done    spoke-master-0-1.spoke-0.qe.lab.redhat.com    spoke-master-0-1
982e1ff6-6e83-4800-b061-8cdfd0b844fb   spoke-0   true       worker   Done    spoke-rwn-0-1.spoke-rwn-0.qe.lab.redhat.com   spoke-rwn-0-1
a76eaa6a-b351-429f-bfa1-e53a70503573   spoke-0   true       worker   Done    spoke-rwn-0-0.spoke-rwn-0.qe.lab.redhat.com   spoke-rwn-0-0



Logging into the spoke cluster, the BMH and Machine resources are created but the Node resource is not:
oc get bmh -n openshift-machine-api
NAME                STATE                    CONSUMER                       ONLINE   ERROR                            AGE
spoke-master-0-0    unmanaged                spoke-0-pxbfh-master-0         true                                      3h32m
spoke-master-0-1    unmanaged                spoke-0-pxbfh-master-1         true                                      3h32m
spoke-master-0-2    unmanaged                spoke-0-pxbfh-master-2         true                                      3h32m
spoke-rwn-0-0-bmh   externally provisioned   spoke-0-spoke-rwn-0-0-bmh      true     provisioned registration error   168m
spoke-rwn-0-1-bmh   externally provisioned   spoke-0-spoke-rwn-0-1-bmh      true     provisioned registration error   168m
spoke-worker-0-0    unmanaged                spoke-0-pxbfh-worker-0-65mrb   true                                      3h32m
spoke-worker-0-1    unmanaged                spoke-0-pxbfh-worker-0-nnmcq   true                                      3h32m     

 oc get machine -n openshift-machine-api
NAME                           PHASE         TYPE   REGION   ZONE   AGE
spoke-0-pxbfh-master-0         Running                              3h33m
spoke-0-pxbfh-master-1         Running                              3h33m
spoke-0-pxbfh-master-2         Running                              3h33m
spoke-0-pxbfh-worker-0-65mrb   Running                              3h19m
spoke-0-pxbfh-worker-0-nnmcq   Running                              3h20m
spoke-0-spoke-rwn-0-0-bmh      Provisioned                          169m
spoke-0-spoke-rwn-0-1-bmh      Provisioned                          169m

Note: bmh is in error state:
Normal  ProvisionedRegistrationError  30m   metal3-baremetal-controller  Host adoption failed: Error while attempting to adopt node 529b3e75-5d04-4486-9296-269081d0ec02: Error validating Redfish virtual media. Some parameters were missing in node's driver_info. Missing are: ['deploy_kernel', 'deploy_ramdisk'].

oc get nodes
NAME                                         STATUS   ROLES    AGE   VERSION
spoke-master-0-0.spoke-0.qe.lab.redhat.com   Ready    master   72m   v1.22.3+2cb6068
spoke-master-0-1.spoke-0.qe.lab.redhat.com   Ready    master   50m   v1.22.3+2cb6068
spoke-master-0-2.spoke-0.qe.lab.redhat.com   Ready    master   72m   v1.22.3+2cb6068
spoke-worker-0-0.spoke-0.qe.lab.redhat.com   Ready    worker   51m   v1.22.3+2cb6068
spoke-worker-0-1.spoke-0.qe.lab.redhat.com   Ready    worker   51m   v1.22.3+2cb6068


The node-bootstrapper CSR is created but not auto-approved; periodically another node-bootstrapper CSR is created until it is manually approved:

oc get csr | grep Pending
csr-5ll2g                                        9m9s    kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper         <none>              Pending
csr-f8vbl                                        8m24s   kubernetes.io/kube-apiserver-client-kubelet   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper         <none>              Pending
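A hedged sketch of the manual approval step: approve any pending node-bootstrapper CSRs so the Node resource can be created (repeat as new CSRs appear):

```
oc get csr | grep Pending | awk '{print $1}' | xargs oc adm certificate approve
```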

Version-Release number of selected component (if applicable):

assisted-service master at revision af0bafb3f7f629932f8c3dc31ccddedfe6984926
ocp version: 4.10.0-rc.1

How reproducible:

1. Install remote worker node using ztp

2. Wait for node resource to be created

Steps to Reproduce:

1. Install remote worker node using ztp

2. Wait for node resource to be created
 

Actual results:

node-bootstrapper and node CSR are not auto-approved and node resource is not created.  The bmh resource remains in registration error

Expected results:

node-bootstrapper and node CSR should be auto-approved and node resource created.  The bmh resource should not be in registration error

Additional info:

 

Description of problem:
Follow-up of: https://issues.redhat.com/browse/SDN-2988

This test is perma-failing in the e2e-metal-ipi-ovn-dualstack-local-gateway jobs.

Example: https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-nightly-4.13-e2e-metal-ipi-ovn-dualstack-local-gateway/1597574181430497280
Search CI: https://search.ci.openshift.org/?search=when+using+openshift+ovn-kubernetes+should+ensure+egressfirewall+is+created&maxAge=336h&context=1&type=junit&name=e2e-metal-ipi-ovn-dualstack-local-gateway&excludeName=&maxMatches=5&maxBytes=20971520&groupBy=job
Sippy: https://sippy.dptools.openshift.org/sippy-ng/jobs/4.13/analysis?filters=%7B%22items%22%3A%5B%7B%22columnField%22%3A%22name%22%2C%22operatorValue%22%3A%22equals%22%2C%22value%22%3A%22periodic-ci-openshift-release-master-nightly-4.13-e2e-metal-ipi-ovn-dualstack-local-gateway%22%7D%5D%7D

Version-Release number of selected component (if applicable):

4.12,4.13

How reproducible:

Every time

Steps to Reproduce:

1. Setup dualstack KinD cluster
2. Create egress fw policy with spec
Spec:
  Egress:
    To:
      Cidr Selector:  0.0.0.0/0
    Type:             Deny
3. create a pod and ping to 1.1.1.1
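A hedged sketch of steps 2-3 above, rewritten as a full manifest against the k8s.ovn.org/v1 EgressFirewall CRD (the namespace, pod name, and image are placeholders):

```
kubectl apply -f - <<'EOF'
apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default
  namespace: default
spec:
  egress:
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0
EOF

# With the policy enforced these pings should fail; the bug is that they succeed.
kubectl run ping-test -n default --image=busybox --restart=Never -- ping -c 3 1.1.1.1
kubectl logs -n default ping-test
```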

Actual results:

Egress policy does not block flows to external IP

Expected results:

Egress policy blocks flows to external IP

Additional info:

It seems that mixing IPv4 and IPv6 operands in ACL matches doesn't work.

This is a clone of issue OCPBUGS-11998. The following is the description of the original issue:

This is a clone of issue OCPBUGS-10678. The following is the description of the original issue:

This is a clone of issue OCPBUGS-10655. The following is the description of the original issue:

Description of problem:
The dev console shows a list of samples. The user can create a sample based on a git repository. But some of these samples don't include a git repository reference and cannot be created.

Version-Release number of selected component (if applicable):
Tested different frontend versions against a 4.11 cluster; all of them (the oldest tested frontend was 4.8) show the sample without a git repository.

But the result also depends on the installed samples operator and installed ImageStreams.

How reproducible:
Always

Steps to Reproduce:

  1. Switch to the Developer perspective
  2. Navigate to Add > All Samples
  3. Search for Jboss
  4. Click on "JBoss EAP XP 4.0 with OpenJDK 11" (for example)

Actual results:
The git repository field is not filled in and the Create button is disabled.

Expected results:
Samples without git repositories should not be displayed in the list.

Additional info:
The Git repository is saved as "sampleRepo" in the ImageStream tag section.
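A hedged way to list which ImageStream tags in the openshift namespace carry that annotation (the exact key name, sampleRepo, is taken from the description above):

```
oc get imagestreams -n openshift -o json \
  | jq -r '.items[].spec.tags[]?.annotations.sampleRepo // empty' \
  | sort -u
```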

This is a clone of issue OCPBUGS-4805. The following is the description of the original issue:

This is a clone of issue OCPBUGS-4101. The following is the description of the original issue:

Description of problem:

We experienced two separate upgrade failures relating to the introduction of the SYSTEM_RESERVED_ES node sizing parameter, causing kubelet to stop running.

One cluster (clusterA) upgraded from 4.11.14 to 4.11.17. It experienced an issue whereby /etc/node-sizing.env on its master nodes contained an empty SYSTEM_RESERVED_ES value:

---
cat /etc/node-sizing.env 
SYSTEM_RESERVED_MEMORY=5.36Gi
SYSTEM_RESERVED_CPU=0.11
SYSTEM_RESERVED_ES=
---

causing the kubelet to not start up. To restore service, this file was manually updated to set a value (1Gi), and kubelet was restarted.
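A hedged sketch of that manual recovery, run directly on an affected node (the 1Gi value mirrors what was applied in this case):

```
sudo sed -i 's/^SYSTEM_RESERVED_ES=$/SYSTEM_RESERVED_ES=1Gi/' /etc/node-sizing.env
sudo systemctl restart kubelet
```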

We are uncertain what conditions led to this occurring on the clusterA master nodes as part of the upgrade.

A second cluster (clusterB) upgraded from 4.11.16 to 4.11.17. It experienced an issue whereby worker nodes were impacted by a similar problem, however this was because of a custom node-sizing-enabled.env MachineConfig which did not set SYSTEM_RESERVED_ES.

This caused existing worker nodes to go into a NotReady state after the upgrade, and additionally new nodes did not join the cluster as their kubelet would become impacted.

For clusterB, the conditions that led to the empty value are better understood.

However, for both clusters, if SYSTEM_RESERVED_ES ends up as empty on a node it can cause the kubelet to not start. 

We have some asks as a result:
- Can the MCO be made to recover from this situation if it occurs, perhaps through application of a safe default if none exists, such that kubelet would start correctly?
- Can there possibly be alerting that could indicate and draw attention to the misconfiguration?

Version-Release number of selected component (if applicable):

4.11.17

How reproducible:

Have not been able to reproduce it on a fresh cluster upgrading from 4.11.16 to 4.11.17

Expected results:

If SYSTEM_RESERVED_ES is empty in /etc/node-sizing*env then a default should be applied and/or kubelet should be able to continue running.

Additional info:

 

Description of problem:

release-4.11 of openshift/cloud-provider-openstack is missing some commits that were backported in the upstream project into the release-1.24 branch.
We should import them in our downstream fork.

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:

1.
2.
3.

Actual results:


Expected results:


Additional info:


This is a clone of issue OCPBUGS-4238. The following is the description of the original issue:

This is a clone of issue OCPBUGS-3883. The following is the description of the original issue:

While doing a PerfScale test, we noticed that the ovnkube pods are not being spread out evenly among the available workers. Instead they all stack on a few workers until they fill up the available allocatable EBS volumes (25 in the case of the m5 instances we see here).

An example from partway through our 80 hosted cluster test, when there were ~30 hosted clusters created/in progress:

There are 24 workers available:

```

$ for i in `oc get nodes -l node-role.kubernetes.io/worker=,node-role.kubernetes.io/infra!=,node-role.kubernetes.io/workload!= | egrep -v "NAME" | awk '{ print $1 }'`;    do  echo $i `oc describe node $i | grep -v openshift | grep ovnkube -c`; done
ip-10-0-129-227.us-west-2.compute.internal 0
ip-10-0-136-22.us-west-2.compute.internal 25
ip-10-0-136-29.us-west-2.compute.internal 0
ip-10-0-147-248.us-west-2.compute.internal 0
ip-10-0-150-147.us-west-2.compute.internal 0
ip-10-0-154-207.us-west-2.compute.internal 0
ip-10-0-156-0.us-west-2.compute.internal 0
ip-10-0-157-1.us-west-2.compute.internal 4
ip-10-0-160-253.us-west-2.compute.internal 0
ip-10-0-161-30.us-west-2.compute.internal 0
ip-10-0-164-98.us-west-2.compute.internal 0
ip-10-0-168-245.us-west-2.compute.internal 0
ip-10-0-170-103.us-west-2.compute.internal 0
ip-10-0-188-169.us-west-2.compute.internal 25
ip-10-0-188-194.us-west-2.compute.internal 0
ip-10-0-191-51.us-west-2.compute.internal 5
ip-10-0-192-10.us-west-2.compute.internal 0
ip-10-0-193-200.us-west-2.compute.internal 0
ip-10-0-193-27.us-west-2.compute.internal 7
ip-10-0-199-1.us-west-2.compute.internal 0
ip-10-0-203-161.us-west-2.compute.internal 0
ip-10-0-204-40.us-west-2.compute.internal 23
ip-10-0-220-164.us-west-2.compute.internal 0
ip-10-0-222-59.us-west-2.compute.internal 0

```

This is running quay.io/openshift-release-dev/ocp-release:4.11.11-x86_64 for the hosted clusters and the hypershift operator is quay.io/hypershift/hypershift-operator:4.11 on a 4.11.9 management cluster

This is a clone of issue OCPBUGS-7373. The following is the description of the original issue:

Originally reported by lance5890 in issue https://github.com/openshift/cluster-etcd-operator/issues/1000

Under some circumstances the static pod machinery fails to populate the node status in time to generate the correct env variables for ETCD_URL_HOST, ETCD_NAME etc. The pods that come up will fail to accept those variables.

This is particularly pronounced in SNO topologies, leading to installation failures. 

The fix is to fail fast in the targetconfig/envvar controller to ensure the CEO goes degraded instead of silently failing on the rollout of an invalid static pod.

This is a clone of issue OCPBUGS-1629. The following is the description of the original issue:

Description of problem:

It is a disconnected cluster on AWS. There is an issue configuring Egress IP where the cluster uses STS. Looking into the cloud-network-config-controller pod, it is trying to connect to the global STS service "https://sts.amazonaws.com/" when it should connect to the regional one "https://sts.ap-southeast-1.amazonaws.com".

Version-Release number of selected component (if applicable):

 

How reproducible:

Always

Steps to Reproduce:

1. Create a disconected OCP cluster on AWS.
$ oc get netnamespace | grep egress
egress-ip-test                                     2689387    ["172.16.1.24"]
$ oc get hostsubnet
NAME                                              HOST                                              HOST IP        SUBNET          EGRESS CIDRS   EGRESS IPS
ip-172-16-1-151.ap-southeast-1.compute.internal   ip-172-16-1-151.ap-southeast-1.compute.internal   172.16.1.151   10.130.0.0/23                  
ip-172-16-1-53.ap-southeast-1.compute.internal    ip-172-16-1-53.ap-southeast-1.compute.internal    172.16.1.53    10.131.0.0/23                  ["172.16.1.24"]
ip-172-16-2-15.ap-southeast-1.compute.internal    ip-172-16-2-15.ap-southeast-1.compute.internal    172.16.2.15    10.128.0.0/23                  
ip-172-16-2-77.ap-southeast-1.compute.internal    ip-172-16-2-77.ap-southeast-1.compute.internal    172.16.2.77    10.128.2.0/23                  
ip-172-16-3-111.ap-southeast-1.compute.internal   ip-172-16-3-111.ap-southeast-1.compute.internal   172.16.3.111   10.129.0.0/23                  
ip-172-16-3-79.ap-southeast-1.compute.internal    ip-172-16-3-79.ap-southeast-1.compute.internal    172.16.3.79    10.129.2.0/23                  
$ oc logs sdn-controller-6m5kb -n openshift-sdn
I0922 04:09:53.348615       1 vnids.go:105] Allocated netid 2689387 for namespace "egress-ip-test"
E0922 04:24:00.682018       1 egressip.go:254] Ignoring invalid HostSubnet ip-172-16-1-53.ap-southeast-1.compute.internal (host: "ip-172-16-1-53.ap-southeast-1.compute.internal", ip: "172.16.1.53", subnet: "10.131.0.0/23"): related node object "ip-172-16-1-53.ap-southeast-1.compute.internal" has an incomplete annotation "cloud.network.openshift.io/egress-ipconfig", CloudEgressIPConfig: <nil>
 $ oc logs cloud-network-config-controller-5c7556db9f-x78bs -n openshift-cloud-network-config-controller

E0922 04:26:59.468726       1 controller.go:165] error syncing 'ip-172-16-2-77.ap-southeast-1.compute.internal': error retrieving the private IP configuration for node: ip-172-16-2-77.ap-southeast-1.compute.internal, err: error: cannot list ec2 instance for node: ip-172-16-2-77.ap-southeast-1.compute.internal, err: WebIdentityErr: failed to retrieve credentials
caused by: RequestError: send request failed
caused by: Post "https://sts.amazonaws.com/": dial tcp 54.239.29.25:443: i/o timeout, requeuing in node workqueue
$ oc get Infrastructure -o yaml
apiVersion: v1
items:
- apiVersion: config.openshift.io/v1
  kind: Infrastructure
  metadata:
    creationTimestamp: "2022-09-22T03:28:15Z"
    generation: 1
    name: cluster
    resourceVersion: "598"
    uid: 994da301-2a96-43b7-b43b-4b7c18d4b716
  spec:
    cloudConfig:
      name: ""
    platformSpec:
      aws:
        serviceEndpoints:
        - name: sts
          url: https://sts.ap-southeast-1.amazonaws.com
        - name: ec2
          url: https://ec2.ap-southeast-1.amazonaws.com
        - name: elasticloadbalancing
          url: https://elasticloadbalancing.ap-southeast-1.amazonaws.com
      type: AWS
  status:
    apiServerInternalURI: https://api-int.openshiftyy.ocpaws.sadiqueonline.com:6443
    apiServerURL: https://api.openshiftyy.ocpaws.sadiqueonline.com:6443
    controlPlaneTopology: HighlyAvailable
    etcdDiscoveryDomain: ""
    infrastructureName: openshiftyy-wfrpf
    infrastructureTopology: HighlyAvailable
    platform: AWS
    platformStatus:
      aws:
        region: ap-southeast-1
        serviceEndpoints:
        - name: ec2
          url: https://ec2.ap-southeast-1.amazonaws.com
        - name: elasticloadbalancing
          url: https://elasticloadbalancing.ap-southeast-1.amazonaws.com
        - name: sts
          url: https://sts.ap-southeast-1.amazonaws.com
      type: AWS
kind: List
metadata:
  resourceVersion: ""
$ oc get secret aws-cloud-credentials -n openshift-machine-api -o json |jq -r .data.credentials |base64 -d 
[default]
sts_regional_endpoints = regional
role_arn = arn:aws:iam::015719942846:role/sputhenp-sts-yy-openshift-machine-api-aws-cloud-credentials
web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token
 
[ec2-user@ip-172-17-1-229 ~]$ oc get secret cloud-credential-operator-iam-ro-creds -n openshift-cloud-credential-operator -o json |jq -r .data.credentials |base64 -d 
[default]
sts_regional_endpoints = regional
role_arn = arn:aws:iam::015719942846:role/sputhenp-sts-yy-openshift-cloud-credential-operator-cloud-creden
web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token
 
[ec2-user@ip-172-17-1-229 ~]$ oc get secret installer-cloud-credentials -n openshift-image-registry -o json |jq -r .data.credentials |base64 -d 
[default]
sts_regional_endpoints = regional
role_arn = arn:aws:iam::015719942846:role/sputhenp-sts-yy-openshift-image-registry-installer-cloud-credent
web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token
 
[ec2-user@ip-172-17-1-229 ~]$ oc get secret cloud-credentials -n openshift-ingress-operator -o json |jq -r .data.credentials |base64 -d 
[default]
sts_regional_endpoints = regional
role_arn = arn:aws:iam::015719942846:role/sputhenp-sts-yy-openshift-ingress-operator-cloud-credentials
web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token
 
[ec2-user@ip-172-17-1-229 ~]$ oc get secret cloud-credentials -n openshift-cloud-network-config-controller -o json |jq -r .data.credentials |base64 -d 
[default]
sts_regional_endpoints = regional
role_arn = arn:aws:iam::015719942846:role/sputhenp-sts-yy-openshift-cloud-network-config-controller-cloud-
web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token
 
[ec2-user@ip-172-17-1-229 ~]$ oc get secret ebs-cloud-credentials -n openshift-cluster-csi-drivers -o json |jq -r .data.credentials |base64 -d
[default]
sts_regional_endpoints = regional
role_arn = arn:aws:iam::015719942846:role/sputhenp-sts-yy-openshift-cluster-csi-drivers-ebs-cloud-credenti
web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token
 

 

Actual results:

Egress IP not configured properly and cloud-network-config-controller trying to connect to global STS service.

Expected results:

Egress IP should get configured and cloud-network-config-controller should connect to regional STS service instead of global.

Additional info:

 

Description of problem:
When creating an incomplete ClusterServiceVersion resource, the OLM details page crashes (on 4.11).

apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: minimal-csv
  namespace: christoph
spec:
  apiservicedefinitions:
    owned:
      - group: A
        kind: A
        name: A
        version: v1
  customresourcedefinitions:
    owned:
      - kind: B
        name: B
        version: v1
  displayName: My minimal CSV
  install:
    strategy: ''

Version-Release number of selected component (if applicable):
Crashes on 4.8-4.11, works fine from 4.12 onwards.

How reproducible:
Always

Steps to Reproduce:
1. Apply the ClusterServiceVersion YAML from above
2. Open the Admin perspective > Installed Operator > Operator detail page

Actual results:
The details page crashes on tabs A and B.

Expected results:
Page should not crash

Additional info:
This is a follow-up on https://bugzilla.redhat.com/show_bug.cgi?id=2084287

Description of problem:

There is a bug affecting verify steps functionality on iDRAC hardware in OpenShift 4.11 and 4.10. Original bug report has been made against 4.10:

https://issues.redhat.com/browse/OCPBUGS-1740

While I am not aware of this issue being reported against 4.11, the fix is only present in the 4.12 codebase, so 4.11 versions will also be affected by this issue.

This bug is created to meet automation requirements for backporting the fixes from 4.12 to 4.11 (and then to 4.10 in the bug quoted above).

Version-Release number of selected component (if applicable):

 

How reproducible:

 

Steps to Reproduce:

1.
2.
3.

Actual results:

 

Expected results:

 

Additional info:

 

This is a clone of issue OCPBUGS-4250. The following is the description of the original issue:

Description of problem:

Added a script to collect PodNetworkConnectivityChecks to be able to view the overall status of the pod network connectivity.

Current must-gather collects the contents of `openshift-network-diagnostics` but does not collect the PodNetworkConnectivityCheck.

Version-Release number of selected component (if applicable):

4.12, 4.11, 4.10

This is a clone of issue OCPBUGS-4311. The following is the description of the original issue:

This is a clone of issue OCPBUGS-4305. The following is the description of the original issue:

Description of problem:

Please add an option to DISABLE debug in ironic-api. Presently it is enabled by default and there is no way to disable it or reduce the log level.

https://github.com/metal3-io/ironic-image/blob/main/ironic-config/ironic.conf.j2#L3


Version-Release number of selected component (if applicable): none

How reproducible: Every time

Steps to Reproduce:

Please check source code here: https://github.com/metal3-io/ironic-image/blob/main/ironic-config/ironic.conf.j2#L3

It is enabled by default and there is no way to disable it or reduce log level

Actual results:

Please check Case: 03371411, the log file grew to 409 GB

Expected results: Need a way to disable debug

Additional info: Case 03371411. A cluster must gather and log file can be found in the case.

This is a clone of issue OCPBUGS-1329. The following is the description of the original issue:

Description of problem:

etcd and kube-apiserver pods get restarted due to failed liveness probes while deleting/re-creating pods on SNO

Version-Release number of selected component (if applicable):

4.10.32

How reproducible:

Not always, after ~10 attempts

Steps to Reproduce:

1. Deploy SNO with Telco DU profile applied
2. Create multiple pods with local storage volumes attached (attaching yaml manifest)
3. Force delete and re-create pods 10 times

Actual results:

etcd and kube-apiserver pods get restarted, making the cluster unavailable for a period of time

Expected results:

etcd and kube-apiserver do not get restarted

Additional info:

Attaching must-gather.

Please let me know if any additional info is required. Thank you!

This is a clone of issue OCPBUGS-7409. The following is the description of the original issue:

This is a clone of issue OCPBUGS-7374. The following is the description of the original issue:

Originally reported by lance5890 in issue https://github.com/openshift/cluster-etcd-operator/issues/1000

The controllers sometimes get stuck on listing members in failure scenarios; this is known and can be mitigated by simply restarting the CEO.

A similar BZ (2093819) with stuck controllers was fixed slightly differently in https://github.com/openshift/cluster-etcd-operator/commit/4816fab709e11e0681b760003be3f1de12c9c103

 

This fix was contributed by lance5890, thanks a lot!

 

This is a clone of issue OCPBUGS-7732. The following is the description of the original issue:

Description of problem:

When services are deleted, the services controller cache should also remove the service from its top level cache to avoid growing forever.

While this is not an issue in 4.13 once the lb_cache rework merges [1], the 4.12 and older branches have this problem because that rework is meant for 4.13 only.

[1]: https://github.com/ovn-org/ovn-kubernetes/pull/3387

This is the location where the removed service is not being deleted from alreadyApplied:
https://github.com/openshift/ovn-kubernetes/blob/cf9fb51510e1870961bf3a0f064b73536757a4f8/go-controller/pkg/ovn/controller/services/services_controller.go#L269

It should make changes similar to those depicted here (already merged upstream):
https://github.com/ovn-org/ovn-kubernetes/blob/cd78ae1af4657d38bdc41003a8737aa958d62b9d/go-controller/pkg/ovn/controller/services/services_controller.go#L322-L324

 

Version-Release number of selected component (if applicable):

 

How reproducible:

100%

Steps to Reproduce:

1. create service -- use unique name
2. remove service
3. notice how alreadyApplied grows and never gets smaller
4. repeat

Actual results:

^^

Expected results:

alreadyApplied should not grow forever

Additional info:

 

This is a clone of issue OCPBUGS-1717. The following is the description of the original issue:

Description of problem:

Image registry pods panic while deploying OCP in me-central-1 AWS region

Version-Release number of selected component (if applicable):

4.11.2

How reproducible:

Deploy OCP in AWS me-central-1 region

Steps to Reproduce:

Deploy OCP in AWS me-central-1 region 

Actual results:

panic: Invalid region provided: me-central-1

Expected results:

Image registry pods should come up with no errors

Additional info:

 

Description of problem:

Data race seen in unit tests:
https://gcsweb-ci.apps.ci.l2s4.p1.openshiftapps.com/gcs/origin-ci-test/pr-logs/pull/openshift_ovn-kubernetes/1448/pull-ci-openshift-ovn-kubernetes-release-4.11-unit/1604898712423763968/artifacts/test/build-log.txt
 

Description of problem:

A LoadBalancer-type service created within an OCP 4.11.x OVNKubernetes cluster to expose the API server endpoint does not respond to normal oc requests.
But some requests do work, like "oc whoami" and "oc get --raw /api".

Version-Release number of selected component (if applicable):

4.11.8 with OVNKubernetes

How reproducible:

always

Steps to Reproduce:

1. Setup openshift cluster 4.11 on AWS with OVNKubernetes as the default network
2. Create the following service under openshift-kube-apiserver namespace to expose the api
----
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "1800"
  finalizers:
  - service.kubernetes.io/load-balancer-cleanup
  name: test-api
  namespace: openshift-kube-apiserver
spec:
  allocateLoadBalancerNodePorts: true
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  loadBalancerSourceRanges:
  - <my_ip>/32
  ports:
  - nodePort: 31248
    port: 6443
    protocol: TCP
    targetPort: 6443
  selector:
    apiserver: "true"
    app: openshift-kube-apiserver
  sessionAffinity: None
  type: LoadBalancer

3. Setup the DNS resolution for the access
xxx.mydomain.com ---> <elb-auto-generated-dns>

4. Try to access the cluster api via the service above by updating the kubeconfig to use the custom dns name
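A hedged sketch of step 4, updating the kubeconfig's cluster entry to point at the custom DNS name (the cluster entry name is a placeholder; adjust it to the entry used by the current context in bmeng.kubeconfig):

```
oc config set-cluster <cluster-entry-name> \
  --server=https://xxx.mydomain.com:6443 \
  --kubeconfig=bmeng.kubeconfig
time oc get nodes --kubeconfig=bmeng.kubeconfig -v8
```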

Actual results:

No response from the server side.

$ time oc get node -v8
I1025 08:29:10.284069  103974 loader.go:375] Config loaded from file:  bmeng.kubeconfig
I1025 08:29:10.294017  103974 round_trippers.go:420] GET https://rh-api.bmeng-ccs-ovn.3o13.s1.devshift.org:6443/api/v1/nodes?limit=500
I1025 08:29:10.294035  103974 round_trippers.go:427] Request Headers:
I1025 08:29:10.294043  103974 round_trippers.go:431]     Accept: application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json
I1025 08:29:10.294052  103974 round_trippers.go:431]     User-Agent: oc/openshift (linux/amd64) kubernetes/e40bd2d
I1025 08:29:10.365119  103974 round_trippers.go:446] Response Status: 200 OK in 71 milliseconds
I1025 08:29:10.365142  103974 round_trippers.go:449] Response Headers:
I1025 08:29:10.365148  103974 round_trippers.go:452]     Audit-Id: 83b9d8ae-05a4-4036-bff6-de371d5bec12
I1025 08:29:10.365155  103974 round_trippers.go:452]     Cache-Control: no-cache, private
I1025 08:29:10.365161  103974 round_trippers.go:452]     Content-Type: application/json
I1025 08:29:10.365167  103974 round_trippers.go:452]     X-Kubernetes-Pf-Flowschema-Uid: 2abc2e2d-ada3-4cb8-a86f-235df3a4e214
I1025 08:29:10.365173  103974 round_trippers.go:452]     X-Kubernetes-Pf-Prioritylevel-Uid: 02f7a188-43c7-4827-af58-5ebe861a1891
I1025 08:29:10.365179  103974 round_trippers.go:452]     Date: Tue, 25 Oct 2022 08:29:10 GMT
^C
real    17m4.840s
user    0m0.567s
sys    0m0.163s


However, it returns the correct response when using --raw, e.g.:
$ oc get --raw /api/v1  --kubeconfig bmeng.kubeconfig 
{"kind":"APIResourceList","groupVersion":"v1","resources":[{"name":"bindings","singularName":"","namespaced":true,"kind":"Binding","verbs":["create"]},{"name":"componentstatuses","singularName":"","namespaced":false,"kind":"ComponentStatus","verbs":["get","list"],"shortNames":["cs"]},{"name":"configmaps","singularName":"","namespaced":true,"kind":"ConfigMap","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["cm"],"storageVersionHash":"qFsyl6wFWjQ="},{"name":"endpoints","singularName":"","namespaced":true,"kind":"Endpoints","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["ep"],"storageVersionHash":"fWeeMqaN/OA="},{"name":"events","singularName":"","namespaced":true,"kind":"Event","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["ev"],"storageVersionHash":"r2yiGXH7wu8="},{"name":"limitranges","singularName":"","namespaced":true,"kind":"LimitRange","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["limits"],"storageVersionHash":"EBKMFVe6cwo="},{"name":"namespaces","singularName":"","namespaced":false,"kind":"Namespace","verbs":["create","delete","get","list","patch","update","watch"],"shortNames":["ns"],"storageVersionHash":"Q3oi5N2YM8M="},{"name":"namespaces/finalize","singularName":"","namespaced":false,"kind":"Namespace","verbs":["update"]},{"name":"namespaces/status","singularName":"","namespaced":false,"kind":"Namespace","verbs":["get","patch","update"]},{"name":"nodes","singularName":"","namespaced":false,"kind":"Node","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["no"],"storageVersionHash":"XwShjMxG9Fs="},{"name":"nodes/proxy","singularName":"","namespaced":false,"kind":"NodeProxyOptions","verbs":["create","delete","get","patch","update"]},{"name":"nodes/status","singularName":"","namespaced":false,"kind":"Node","verbs":["get","patch","update"]},{"name":"persistentvolumeclaims","singularName":"","namespaced":true,"kind":"PersistentVolumeClaim","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["pvc"],"storageVersionHash":"QWTyNDq0dC4="},{"name":"persistentvolumeclaims/status","singularName":"","namespaced":true,"kind":"PersistentVolumeClaim","verbs":["get","patch","update"]},{"name":"persistentvolumes","singularName":"","namespaced":false,"kind":"PersistentVolume","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["pv"],"storageVersionHash":"HN/zwEC+JgM="},{"name":"persistentvolumes/status","singularName":"","namespaced":false,"kind":"PersistentVolume","verbs":["get","patch","update"]},{"name":"pods","singularName":"","namespaced":true,"kind":"Pod","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["po"],"categories":["all"],"storageVersionHash":"xPOwRZ+Yhw8="},{"name":"pods/attach","singularName":"","namespaced":true,"kind":"PodAttachOptions","verbs":["create","get"]},{"name":"pods/binding","singularName":"","namespaced":true,"kind":"Binding","verbs":["create"]},{"name":"pods/ephemeralcontainers","singularName":"","namespaced":true,"kind":"Pod","verbs":["get","patch","update"]},{"name":"pods/eviction","singularName":"","namespaced":true,"group":"policy","version":"v1","kind":"Eviction","verbs":["create"]},{"name":"pods/exec","singularName":"","namespaced":true,"kind":"PodExecOptions","verbs":["cre
ate","get"]},{"name":"pods/log","singularName":"","namespaced":true,"kind":"Pod","verbs":["get"]},{"name":"pods/portforward","singularName":"","namespaced":true,"kind":"PodPortForwardOptions","verbs":["create","get"]},{"name":"pods/proxy","singularName":"","namespaced":true,"kind":"PodProxyOptions","verbs":["create","delete","get","patch","update"]},{"name":"pods/status","singularName":"","namespaced":true,"kind":"Pod","verbs":["get","patch","update"]},{"name":"podtemplates","singularName":"","namespaced":true,"kind":"PodTemplate","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"storageVersionHash":"LIXB2x4IFpk="},{"name":"replicationcontrollers","singularName":"","namespaced":true,"kind":"ReplicationController","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["rc"],"categories":["all"],"storageVersionHash":"Jond2If31h0="},{"name":"replicationcontrollers/scale","singularName":"","namespaced":true,"group":"autoscaling","version":"v1","kind":"Scale","verbs":["get","patch","update"]},{"name":"replicationcontrollers/status","singularName":"","namespaced":true,"kind":"ReplicationController","verbs":["get","patch","update"]},{"name":"resourcequotas","singularName":"","namespaced":true,"kind":"ResourceQuota","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["quota"],"storageVersionHash":"8uhSgffRX6w="},{"name":"resourcequotas/status","singularName":"","namespaced":true,"kind":"ResourceQuota","verbs":["get","patch","update"]},{"name":"secrets","singularName":"","namespaced":true,"kind":"Secret","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"storageVersionHash":"S6u1pOWzb84="},{"name":"serviceaccounts","singularName":"","namespaced":true,"kind":"ServiceAccount","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["sa"],"storageVersionHash":"pbx9ZvyFpBE="},{"name":"serviceaccounts/token","singularName":"","namespaced":true,"group":"authentication.k8s.io","version":"v1","kind":"TokenRequest","verbs":["create"]},{"name":"services","singularName":"","namespaced":true,"kind":"Service","verbs":["create","delete","deletecollection","get","list","patch","update","watch"],"shortNames":["svc"],"categories":["all"],"storageVersionHash":"0/CO1lhkEBI="},{"name":"services/proxy","singularName":"","namespaced":true,"kind":"ServiceProxyOptions","verbs":["create","delete","get","patch","update"]},{"name":"services/status","singularName":"","namespaced":true,"kind":"Service","verbs":["get","patch","update"]}]}
 

Expected results:

Normal oc requests should work.

Additional info:

This issue does not occur on clusters using openshift-sdn with the same OpenShift version and the same LoadBalancer service.

We suspected it might be related to the MTU setting, but that would not explain why OpenShiftSDN works fine.

Another possibly related difference is that OpenShiftSDN uses iptables for service load balancing, while OVN handles it within the OVN services implementation.

 

Please let me know if any debug log/info is needed.

We're seeing a slight uptick in how long upgrades are taking[1][2]. We are not 100% sure of the cause, but it looks like it started with 4.11 rc.7. There are no obvious culprits in the diff[3].

Looking at some of the jobs, the gap between kube-scheduler being updated and machine-api then being updated appears to take longer. An example job run[4] shows 10+ minutes of waiting for it.

TRT had a debugging session, and we have two suggestions:

  • Adding logging around when CVO sees an operator version changed
  • Instead of a fixed polling interval of 5 minutes (which is what we think CVO is doing), would it be possible to trigger on ClusterOperator (CO) changes to know when to look again? We think there could be substantial savings in upgrade time by doing this (see the sketch after the links below).

[1] https://search.ci.openshift.org/graph/metrics?metric=job%3Aduration%3Atotal%3Aseconds&job=periodic-ci-openshift-release-master-ci-4.12-upgrade-from-stable-4.11-e2e-aws-ovn-upgrade&job=periodic-ci-openshift-release-master-ci-4.12-upgrade-from-stable-4.11-e2e-aws-sdn-upgrade&job=periodic-ci-openshift-release-master-ci-4.12-upgrade-from-stable-4.11-e2e-azure-upgrade&job=periodic-ci-openshift-release-master-ci-4.12-upgrade-from-stable-4.11-e2e-gcp-ovn-upgrade&job=periodic-ci-openshift-release-master-ci-4.12-upgrade-from-stable-4.11-e2e-gcp-sdn-upgrade
[2] https://sippy.dptools.openshift.org/sippy-ng/tests/4.12/analysis?test=Cluster%20upgrade.%5Bsig-cluster-lifecycle%5D%20cluster%20upgrade%20should%20complete%20in%2075.00%20minutes
[3] https://amd64.ocp.releases.ci.openshift.org/releasestream/4-stable/release/4.11.0-rc.7
[4] https://prow.ci.openshift.org/view/gcs/origin-ci-test/logs/periodic-ci-openshift-release-master-ci-4.12-e2e-azure-sdn-upgrade/1556865989923049472
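
A minimal sketch of the second suggestion, assuming the version check currently runs on a fixed ticker; the names below are illustrative, not CVO's actual code. The idea is to keep a slow fallback poll but also re-check immediately whenever something (for example an informer event handler) signals that a ClusterOperator changed.

~~~
// Illustrative only: reconcile loop with a fallback ticker plus an
// event-driven wake-up when a ClusterOperator change is observed.
package main

import (
	"fmt"
	"time"
)

func syncOperatorVersions(reason string) {
	// Placeholder for the real version comparison; logging the reason also
	// covers the first suggestion (log when CVO sees an operator version change).
	fmt.Println("checking operator versions, reason:", reason)
}

func runSyncLoop(coChanged <-chan struct{}, stop <-chan struct{}) {
	// Fallback poll, kept as a safety net in case a notification is missed.
	ticker := time.NewTicker(5 * time.Minute)
	defer ticker.Stop()

	for {
		select {
		case <-coChanged:
			// A ClusterOperator status update was observed; re-check now
			// instead of waiting for the next tick.
			syncOperatorVersions("clusteroperator change")
		case <-ticker.C:
			syncOperatorVersions("periodic poll")
		case <-stop:
			return
		}
	}
}

func main() {
	coChanged := make(chan struct{}, 1)
	stop := make(chan struct{})
	go runSyncLoop(coChanged, stop)
	coChanged <- struct{}{} // simulate a ClusterOperator update notification
	time.Sleep(time.Second)
	close(stop)
}
~~~

An informer on ClusterOperator objects could feed coChanged from its update handler; coalescing bursts of events (for example with a non-blocking send into the buffered channel) would keep the loop from re-running excessively.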

This is a clone of issue OCPBUGS-3889. The following is the description of the original issue:

This is a clone of issue OCPBUGS-3744. The following is the description of the original issue:

Description of problem:

Egress router pod creation on OpenShift 4.11 is failing with the error below.
~~~
Nov 15 21:51:29 pltocpwn03 hyperkube[3237]: E1115 21:51:29.467436    3237 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"stage-wfe-proxy-ext-qrhjw_stage-wfe-proxy(c965a287-28aa-47b6-9e79-0cc0e209fcf2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"stage-wfe-proxy-ext-qrhjw_stage-wfe-proxy(c965a287-28aa-47b6-9e79-0cc0e209fcf2)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_stage-wfe-proxy-ext-qrhjw_stage-wfe-proxy_c965a287-28aa-47b6-9e79-0cc0e209fcf2_0(72bcf9e52b199061d6e651e84b0892efc142601b2442c2d00b92a1ba23208344): error adding pod stage-wfe-proxy_stage-wfe-proxy-ext-qrhjw to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus\\\" name=\\\"multus-cni-network\\\" failed (add): [stage-wfe-proxy/stage-wfe-proxy-ext-qrhjw/c965a287-28aa-47b6-9e79-0cc0e209fcf2:openshift-sdn]: error adding container to network \\\"openshift-sdn\\\": CNI request failed with status 400: 'could not open netns \\\"/var/run/netns/8c5ca402-3381-4935-baed-ea454161d669\\\": unknown FS magic on \\\"/var/run/netns/8c5ca402-3381-4935-baed-ea454161d669\\\": 1021994\\n'\"" pod="stage-wfe-proxy/stage-wfe-proxy-ext-qrhjw" podUID=c965a287-28aa-47b6-9e79-0cc0e209fcf2
~~~

I checked the SDN pod log from the node where the egress router pod is failing, and I could see the error message below.

~~~
2022-11-15T21:51:29.283002590Z W1115 21:51:29.282954  181720 pod.go:296] CNI_ADD stage-wfe-proxy/stage-wfe-proxy-ext-qrhjw failed: could not open netns "/var/run/netns/8c5ca402-3381-4935-baed-ea454161d669": unknown FS magic on "/var/run/netns/8c5ca402-3381-4935-baed-ea454161d669": 1021994
~~~

CRI-O logs the event below, and looking at the log it seems the network namespace has been created on the node.

~~~
Nov 15 21:51:29 pltocpwn03 crio[3150]: time="2022-11-15 21:51:29.307184956Z" level=info msg="Got pod network &{Name:stage-wfe-proxy-ext-qrhjw Namespace:stage-wfe-proxy ID:72bcf9e52b199061d6e651e84b0892efc142601b2442c2d00b92a1ba23208344 UID:c965a287-28aa-47b6-9e79-0cc0e209fcf2 NetNS:/var/run/netns/8c5ca402-3381-4935-baed-ea454161d669 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
~~~

Version-Release number of selected component (if applicable):

4.11.12

How reproducible:

Not Sure

Steps to Reproduce:

1.
2.
3.

Actual results:

The egress router pod fails to be created. A sample application can be created without any issue.

Expected results:

The egress router pod should get created

Additional info:

The egress router pod was created following the document below, and it does contain the pod.network.openshift.io/assign-macvlan: "true" annotation.

https://docs.openshift.com/container-platform/4.11/networking/openshift_sdn/deploying-egress-router-layer3-redirection.html#nw-egress-router-pod_deploying-egress-router-layer3-redirection

This is a clone of issue OCPBUGS-10497. The following is the description of the original issue:

This is a clone of issue OCPBUGS-10213. The following is the description of the original issue:

This is a clone of issue OCPBUGS-8468. The following is the description of the original issue:

Description of problem:

RHCOS is being published to new AWS regions (https://github.com/openshift/installer/pull/6861), but aws-sdk-go needs to be bumped to recognize those regions

Version-Release number of selected component (if applicable):

master/4.14

How reproducible:

always

Steps to Reproduce:

1. openshift-install create install-config
2. Try to select ap-south-2 as a region
3.

Actual results:

The new regions are not found. The new regions are: ap-south-2, ap-southeast-4, eu-central-2, eu-south-2, me-central-1.

Expected results:

The installer supports and displays the new regions in the survey

Additional info:

See https://github.com/openshift/installer/blob/master/pkg/asset/installconfig/aws/regions.go#L13-L23
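
The description points at both bumping aws-sdk-go and the hard-coded list in regions.go. A minimal sketch of the list-side change, assuming a hard-coded slice of known region identifiers; the variable name and structure are illustrative, not the installer's actual code.

~~~
// Illustrative only: extend a hard-coded list of AWS regions offered in the
// survey with the newly published ones from this bug.
package sketch

var knownRegions = []string{
	// ...existing regions elided...
	"us-east-1",
	"us-west-2",

	// Newly published regions that the survey should also offer:
	"ap-south-2",
	"ap-southeast-4",
	"eu-central-2",
	"eu-south-2",
	"me-central-1",
}
~~~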

 

Description of problem:

When scaling down the MachineSet for worker nodes, a PV (VMDK) file got deleted.

Version-Release number of selected component (if applicable):

4.10

How reproducible:

N/A

Steps to Reproduce:

1. Scale down worker nodes
2. Check the VMware logs: the VM gets deleted with the VMDK still attached

Actual results:

After scaling down nodes, volumes still attached to the VM get deleted alongside the VM

Expected results:

Worker nodes are scaled down without any accidental volume deletion

Additional info:

 

This is a clone of issue OCPBUGS-14152. The following is the description of the original issue:

This is a clone of issue OCPBUGS-14127. The following is the description of the original issue:

This is a clone of issue OCPBUGS-14125. The following is the description of the original issue:

Description of problem:

Since registry.centos.org has been shut down, tests relying on this registry in the e2e-agnostic-ovn-cmd job are failing.

Version-Release number of selected component (if applicable):

all

How reproducible:

Trigger e2e-agnostic-ovn-cmd job

Steps to Reproduce:

1.
2.
3.

Actual results:

 

Expected results:

 

Additional info:

 

Description of problem:
In a completely disconnected cluster, the developer catalog takes too much time to load

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1. Use a completely disconnected cluster
2. On the Add page, go to the All services page
3.

Actual results:
It takes too much time to load

Expected results:
The time taken should be reduced

Additional info:
A gif is attached for reference

We have created a fix in 4.12 that fetches instance type information from the Azure API instead of updating hard-coded lists. We feel that backporting that fix is too risky, but we agreed to update the list in older versions.

Description of problem:

Add the following instance types to azure_instance_types list[1]:

  • Standard_D8s_v5
  • Standard_E8s_v5
  • Standard_E16s_v5

Version-Release number of selected component (if applicable):
OCP 4.8

Steps to Reproduce:
1. Migrate worker/infra nodes to the above-mentioned (missing) v5 instance types
2. Observe the error "Failed to set autoscaling from zero annotations, instance type unknown"

Actual results:

  • "Failed to set autoscaling from zero annotations, instance type unknown"
  • New v5 instance types not officially tested/supported

Expected results:
The new instance types are available in the azure_instance_types list[1] and no errors/warnings are observed after migrating:

  • Standard_D8s_v5
  • Standard_E8s_v5
  • Standard_E16s_v5

Additional info:

The related v4 instance types are already available[1]; I suspect adding the mentioned v5 instance types is a minor update:

  • Standard_D8s_v4
  • Standard_E8s_v4
  • Standard_E16s_v4

1) azure_instance_types.go
https://github.com/openshift/cluster-api-provider-azure/blob/release-4.8/pkg/cloud/azure/actuators/machineset/azure_instance_types.go
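
A minimal sketch of the kind of entries requested, assuming the list maps instance type names to the CPU/memory/GPU characteristics used for the scale-from-zero annotations; the struct and field names are illustrative, not the provider's actual types, and the sizes should be verified against the published Azure specifications.

~~~
// Illustrative only: the shape of the additions requested for the
// azure_instance_types list. Field names and values are hypothetical.
package sketch

type instanceType struct {
	VCPU     int64
	MemoryMB int64
	GPU      int64
}

var azureInstanceTypes = map[string]instanceType{
	// ...existing v4 entries elided, e.g.:
	"Standard_D8s_v4": {VCPU: 8, MemoryMB: 32768, GPU: 0},

	// Requested v5 additions:
	"Standard_D8s_v5":  {VCPU: 8, MemoryMB: 32768, GPU: 0},
	"Standard_E8s_v5":  {VCPU: 8, MemoryMB: 65536, GPU: 0},
	"Standard_E16s_v5": {VCPU: 16, MemoryMB: 131072, GPU: 0},
}
~~~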

Just like kube-proxy, ovnk should expose port 10256 on every node so that cloud LBs can send health checks and know which nodes are available. This is relevant for services with externalTrafficPolicy=Cluster.
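
A minimal sketch of what such a health endpoint could look like, assuming a kube-proxy-style HTTP check served on 0.0.0.0:10256 on every node; the handler and response format are illustrative, not ovn-kubernetes' actual implementation.

~~~
// Illustrative only: a kube-proxy-style node health endpoint on port 10256
// that a cloud load balancer can probe.
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		// A real implementation would return an error status when the node
		// is not ready or is being drained, so the cloud LB stops sending
		// externalTrafficPolicy=Cluster traffic to it.
		w.WriteHeader(http.StatusOK)
		fmt.Fprintf(w, "lastUpdated: %s\n", time.Now().Format(time.RFC3339))
	})
	log.Fatal(http.ListenAndServe("0.0.0.0:10256", mux))
}
~~~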

Description of problem:

prometheus-k8s-0 ends up in CrashLoopBackOff with level=error err="opening storage failed: /prometheus/chunks_head/000002: invalid magic number 0" on SNO after hard reboot tests

Version-Release number of selected component (if applicable):

4.11.6

How reproducible:

Not always, after ~10 attempts

Steps to Reproduce:

1. Deploy SNO with Telco DU profile applied
2. Hard reboot node via out of band interface
3. oc -n openshift-monitoring get pods prometheus-k8s-0 

Actual results:

NAME               READY   STATUS             RESTARTS          AGE
prometheus-k8s-0   5/6     CrashLoopBackOff   125 (4m57s ago)   5h28m

Expected results:

Running

Additional info:

Attaching must-gather.

The pod recovers successfully after being deleted and re-created.


[kni@registry.kni-qe-0 ~]$ oc -n openshift-monitoring logs prometheus-k8s-0
ts=2022-09-26T14:54:01.919Z caller=main.go:552 level=info msg="Starting Prometheus Server" mode=server version="(version=2.36.2, branch=rhaos-4.11-rhel-8, revision=0d81ba04ce410df37ca2c0b1ec619e1bc02e19ef)"
ts=2022-09-26T14:54:01.919Z caller=main.go:557 level=info build_context="(go=go1.18.4, user=root@371541f17026, date=20220916-14:15:37)"
ts=2022-09-26T14:54:01.919Z caller=main.go:558 level=info host_details="(Linux 4.18.0-372.26.1.rt7.183.el8_6.x86_64 #1 SMP PREEMPT_RT Sat Aug 27 22:04:33 EDT 2022 x86_64 prometheus-k8s-0 (none))"
ts=2022-09-26T14:54:01.919Z caller=main.go:559 level=info fd_limits="(soft=1048576, hard=1048576)"
ts=2022-09-26T14:54:01.919Z caller=main.go:560 level=info vm_limits="(soft=unlimited, hard=unlimited)"
ts=2022-09-26T14:54:01.921Z caller=web.go:553 level=info component=web msg="Start listening for connections" address=127.0.0.1:9090
ts=2022-09-26T14:54:01.922Z caller=main.go:989 level=info msg="Starting TSDB ..."
ts=2022-09-26T14:54:01.924Z caller=tls_config.go:231 level=info component=web msg="TLS is disabled." http2=false
ts=2022-09-26T14:54:01.926Z caller=main.go:848 level=info msg="Stopping scrape discovery manager..."
ts=2022-09-26T14:54:01.926Z caller=main.go:862 level=info msg="Stopping notify discovery manager..."
ts=2022-09-26T14:54:01.926Z caller=manager.go:951 level=info component="rule manager" msg="Stopping rule manager..."
ts=2022-09-26T14:54:01.926Z caller=manager.go:961 level=info component="rule manager" msg="Rule manager stopped"
ts=2022-09-26T14:54:01.926Z caller=main.go:899 level=info msg="Stopping scrape manager..."
ts=2022-09-26T14:54:01.926Z caller=main.go:858 level=info msg="Notify discovery manager stopped"
ts=2022-09-26T14:54:01.926Z caller=main.go:891 level=info msg="Scrape manager stopped"
ts=2022-09-26T14:54:01.926Z caller=notifier.go:599 level=info component=notifier msg="Stopping notification manager..."
ts=2022-09-26T14:54:01.926Z caller=main.go:844 level=info msg="Scrape discovery manager stopped"
ts=2022-09-26T14:54:01.926Z caller=manager.go:937 level=info component="rule manager" msg="Starting rule manager..."
ts=2022-09-26T14:54:01.926Z caller=main.go:1120 level=info msg="Notifier manager stopped"
ts=2022-09-26T14:54:01.926Z caller=main.go:1129 level=error err="opening storage failed: /prometheus/chunks_head/000002: invalid magic number 0"

Description of problem:

The metal3 pod in openshift-machine-api is in CrashLoopBackOff status.

Version-Release number of selected component (if applicable):

4.10.31

How reproducible:

Always reproducible with IPv6

Steps to Reproduce:

1. Prepare the provisioning node for an IPI installation
   - RHEL 8 (haproxy, named, mirror registry, rhcos_cache_server, ...)

2. Configure the install-config.yaml (attached)
   - provisioningNetwork: disabled
   - machine network: IPv6 only
   - disconnected installation

3. Deploy the cluster

Actual results:

It is not possible to add worker nodes because metal3 does not start normally.

Expected results:

metal3 starts normally in an IPv6 environment

Additional info:

1. attached must-gather
https://drive.google.com/file/d/1GKxj3syROIMnURx_PYzOYhJdEuXBNXVW/view?usp=sharing

2. pod status
[kni@prov ~]$ oc get pod
NAME                                           READY   STATUS             RESTARTS          AGE
cluster-autoscaler-operator-6656bfd7b9-bt4j8   2/2     Running            0                 35h
cluster-baremetal-operator-6bbdd6758-rmxgq     2/2     Running            0                 35h
machine-api-controllers-55fb545b56-kl5sj       7/7     Running            0                 34h
machine-api-operator-845b6cf855-q7gdd          2/2     Running            0                 35h
metal3-574876cfdb-98fmz                        6/7     CrashLoopBackOff   179 (3m17s ago)   14h
metal3-image-cache-5mq2w                       1/1     Running            0                 14h
metal3-image-cache-nftpj                       1/1     Running            0                 14h
metal3-image-cache-t7whh                       1/1     Running            0                 14h
metal3-image-customization-68d4d6d99b-dbqgn    1/1     Running            0                 15h

[kni@prov ~]$ oc logs metal3-574876cfdb-98fmz -c metal3-httpd
AH00526: Syntax error on line 8 of /etc/httpd/conf.d/vmedia.conf:
The port number "2001:feed:101::102" is outside the appropriate range (i.e., 1..65535).
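
The error above suggests the IPv6 address is being substituted into a host:port value without brackets, so httpd interprets everything after the first colon as a port number. A minimal sketch of the kind of handling that avoids this class of error, using only the standard library; the port value is illustrative and this is not the metal3/ironic code itself.

~~~
// Illustrative only: always bracket IPv6 literals when building host:port
// strings, e.g. for an httpd Listen directive or a URL.
package main

import (
	"fmt"
	"net"
)

func main() {
	port := "8080" // illustrative port
	for _, host := range []string{"192.0.2.10", "2001:feed:101::102"} {
		// net.JoinHostPort adds [] around IPv6 literals automatically,
		// producing "[2001:feed:101::102]:8080" instead of a value whose
		// first colon is misread as the port separator.
		fmt.Println(net.JoinHostPort(host, port))
	}
}
~~~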

Following the trail
https://github.com/kubernetes-sigs/aws-ebs-csi-driver/issues/1139
https://github.com/kubernetes-sigs/aws-ebs-csi-driver/pull/1175
https://github.com/openshift/aws-ebs-csi-driver/pull/206
 
It looks like the fix should be in 4.12, but we still see the limit reported as 39 vs ~24 on an m6i instance type.

It seems that the kubelet applies this capacity to the node in 4.11 and earlier, and those releases are thus unlikely to receive this fix for attachable volumes from the upstream CSI driver. The 4.12 behavior is currently unknown, but it seems the kubelet might still be setting this capacity.

The actual issue is that the kube-scheduler schedules pods that require PVs onto nodes where those PVs cannot be attached.
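
A minimal sketch of why an overstated attach capacity leads to that behavior, assuming the scheduler's volume-limit check simply compares already-attached plus requested volumes against the limit the node reports; the function is illustrative, not the scheduler plugin's actual code.

~~~
// Illustrative only: with a reported limit of 39 but a real attach capacity
// of ~24 on the instance, the check passes for pods whose PVs will later
// fail to attach.
package main

import "fmt"

func fitsVolumeLimit(attached, requested, reportedLimit int) bool {
	return attached+requested <= reportedLimit
}

func main() {
	attached, requested := 24, 1
	fmt.Println("reported limit 39:", fitsVolumeLimit(attached, requested, 39)) // true: pod gets scheduled anyway
	fmt.Println("real limit ~24:  ", fitsVolumeLimit(attached, requested, 24))  // false: would have been rejected
}
~~~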

Description of problem:

This is just a clone of https://bugzilla.redhat.com/show_bug.cgi?id=2105570 for purposes of cherry-picking.

Version-Release number of selected component (if applicable):

4.13

How reproducible:

 

Steps to Reproduce:

1.
2.
3.

Actual results:

 

Expected results:

 

Additional info:

 

This is a clone of issue OCPBUGS-15099. The following is the description of the original issue:

This is a clone of issue OCPBUGS-14943. The following is the description of the original issue:

This is a clone of issue OCPBUGS-14668. The following is the description of the original issue:

Description of problem:

Visiting the global configurations page returns an error after 'Red Hat OpenShift Serverless' is installed; the error persists even after the operator is uninstalled

Version-Release number of selected component (if applicable):

4.14.0-0.nightly-2023-06-06-212044

How reproducible:

Always

Steps to Reproduce:

1. Subscribe to 'Red Hat OpenShift Serverless' from OperatorHub and wait for the operator to be successfully installed
2. Visit Administration -> Cluster Settings -> Configurations tab

Actual results:

react_devtools_backend_compact.js:2367 unhandled promise rejection: TypeError: Cannot read properties of undefined (reading 'apiGroup') 
    at r (main-chunk-e70ea3b3d562514df486.min.js:1:1)
    at main-chunk-e70ea3b3d562514df486.min.js:1:1
    at Array.map (<anonymous>)
    at main-chunk-e70ea3b3d562514df486.min.js:1:1
overrideMethod @ react_devtools_backend_compact.js:2367
window.onunhandledrejection @ main-chunk-e70ea3b3d562514df486.min.js:1

main-chunk-e70ea3b3d562514df486.min.js:1 Uncaught (in promise) TypeError: Cannot read properties of undefined (reading 'apiGroup')
    at r (main-chunk-e70ea3b3d562514df486.min.js:1:1)
    at main-chunk-e70ea3b3d562514df486.min.js:1:1
    at Array.map (<anonymous>)
    at main-chunk-e70ea3b3d562514df486.min.js:1:1

 

Expected results:

no errors

Additional info:

 

Discovered in the must gather kubelet_service.log from https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/periodic-ci-openshift-release-master-ci-4.12-upgrade-from-stable-4.11-e2e-gcp-sdn-upgrade/1586093220087992320

It appears the guard pod hostnames are too long and are being truncated to the point where they collide with those from the other masters.

From kubelet logs in this run:

❯ grep openshift-kube-scheduler-guard-ci-op-3hj6pnwf-4f6ab-lv57z-maste kubelet_service.log
Oct 28 23:58:55.693391 ci-op-3hj6pnwf-4f6ab-lv57z-master-1 kubenswrapper[1657]: E1028 23:58:55.693346    1657 kubelet_pods.go:413] "Hostname for pod was too long, truncated it" podName="openshift-kube-scheduler-guard-ci-op-3hj6pnwf-4f6ab-lv57z-master-1" hostnameMaxLen=63 truncatedHostname="openshift-kube-scheduler-guard-ci-op-3hj6pnwf-4f6ab-lv57z-maste"
Oct 28 23:59:03.735726 ci-op-3hj6pnwf-4f6ab-lv57z-master-0 kubenswrapper[1670]: E1028 23:59:03.735671    1670 kubelet_pods.go:413] "Hostname for pod was too long, truncated it" podName="openshift-kube-scheduler-guard-ci-op-3hj6pnwf-4f6ab-lv57z-master-0" hostnameMaxLen=63 truncatedHostname="openshift-kube-scheduler-guard-ci-op-3hj6pnwf-4f6ab-lv57z-maste"
Oct 28 23:59:11.168082 ci-op-3hj6pnwf-4f6ab-lv57z-master-2 kubenswrapper[1667]: E1028 23:59:11.168041    1667 kubelet_pods.go:413] "Hostname for pod was too long, truncated it" podName="openshift-kube-scheduler-guard-ci-op-3hj6pnwf-4f6ab-lv57z-master-2" hostnameMaxLen=63 truncatedHostname="openshift-kube-scheduler-guard-ci-op-3hj6pnwf-4f6ab-lv57z-maste"

This also looks to be happening for openshift-kube-scheduler-guard, kube-controller-manager-guard, possibly others.

It looks like they should be truncated further to make room for the random suffixes in https://github.com/openshift/library-go/blame/bd9b0e19121022561dcd1d9823407cd58b2265d0/pkg/operator/staticpod/controller/guard/guard_controller.go#L97-L98

Unsure of the implications here, it looks a little scary.
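
One way to keep truncated names unique is to reserve room for, and append, a short hash of the full node name, so two long node names cannot truncate down to the same 63-character hostname. A minimal illustrative sketch, not library-go's actual implementation; the prefix, hash choice, and lengths are assumptions.

~~~
// Illustrative only: build a guard pod hostname that stays within 63
// characters while remaining unique per node.
package main

import (
	"fmt"
	"hash/fnv"
)

const maxHostnameLen = 63

func guardPodName(prefix, nodeName string) string {
	// Short, deterministic suffix derived from the full node name.
	h := fnv.New32a()
	h.Write([]byte(nodeName))
	suffix := fmt.Sprintf("-%08x", h.Sum32())

	name := prefix + nodeName
	// Reserve space for the suffix so the distinguishing part of the node
	// name is never silently truncated away.
	limit := maxHostnameLen - len(suffix)
	if len(name) > limit {
		name = name[:limit]
	}
	return name + suffix
}

func main() {
	fmt.Println(guardPodName("openshift-kube-scheduler-guard-", "ci-op-3hj6pnwf-4f6ab-lv57z-master-1"))
	fmt.Println(guardPodName("openshift-kube-scheduler-guard-", "ci-op-3hj6pnwf-4f6ab-lv57z-master-2"))
}
~~~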

4.12 will have an option in cri-o, add_inheritable_capabilities, which will allow a user to opt out of dropping inheritable capabilities (which comes as a fix for CVE-2022-27652). We should add it by default as a drop-in in 4.11 so that clusters upgrading from 4.11 inherit the old behavior.

This is a clone of issue OCPBUGS-12839. The following is the description of the original issue:

Description

As a user, I would like to see the type of technology used by the samples on the samples view, similar to the all services view.

On the samples view:

It shows different types of samples, e.g. devfile and helm, but they all display as .NET. It is difficult for a user to decide which .NET entry to select from the list. We need something like the all services view, which shows the type of technology at the top right of each card, so users can differentiate between the entries:

Acceptance Criteria

  1. Add a visible label on each card in the samples view, as in the all services view, to show the technology used by the sample.

Additional Details:

OCPBUGS-1251 landed an admin-ack gate in 4.11.z to help admins prepare for Kubernetes 1.25 API removals which are coming in OpenShift 4.12. Poking around in a 4.12.0-ec.2 cluster where APIRemovedInNextReleaseInUse is firing:

$ oc --as system:admin adm must-gather -- /usr/bin/gather_audit_logs
$ zgrep -h v1beta1/poddisruptionbudget must-gather.local.1378724704026451055/quay*/audit_logs/kube-apiserver/*.log.gz | jq -r '.verb + " " + (.user | .username + " " + (.extra["authentication.kubernetes.io/pod-name"] | tostring))' | sort | uniq -c
parse error: Invalid numeric literal at line 29, column 6
     28 watch system:serviceaccount:openshift-machine-api:cluster-autoscaler ["cluster-autoscaler-default-5cf997b8d6-ptgg7"]

Finding the source for that container:

$ oc --as system:admin -n openshift-machine-api get -o json pod cluster-autoscaler-default-5cf997b8d6-ptgg7 | jq -r '.status.containerStatuses[].image'
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81ab7ce0c851ba5e5169bba717cb54716ce5457cbe89d159c97a5c25fd820ed
$ oc image info quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f81ab7ce0c851ba5e5169bba717cb54716ce5457cbe89d159c97a5c25fd820ed | grep github
             SOURCE_GIT_URL=https://github.com/openshift/kubernetes-autoscaler
             io.openshift.build.commit.url=https://github.com/openshift/kubernetes-autoscaler/commit/1dac0311b9842958ec630273428b74703d51c1c9
             io.openshift.build.source-location=https://github.com/openshift/kubernetes-autoscaler

Poking about in the source:

$ git clone --depth 30 --branch master https://github.com/openshift/kubernetes-autoscaler.git
$ cd kubernetes-autoscaler
$ find . -name vendor
./addon-resizer/vendor
./cluster-autoscaler/vendor
./vertical-pod-autoscaler/e2e/vendor
./vertical-pod-autoscaler/vendor

Lots of vendoring. I haven't checked to see how new the client code is in the various vendor packages. But the main issue seems to be the v1beta1 in:

$ git grep policy cluster-autoscaler/core cluster-autoscaler/utils | grep policy.*v1beta1
cluster-autoscaler/core/scaledown/actuation/actuator_test.go:   policyv1beta1 "k8s.io/api/policy/v1beta1"
cluster-autoscaler/core/scaledown/actuation/actuator_test.go:                                   eviction := createAction.GetObject().(*policyv1beta1.Eviction)
cluster-autoscaler/core/scaledown/actuation/drain.go:   policyv1 "k8s.io/api/policy/v1beta1"
cluster-autoscaler/core/scaledown/actuation/drain_test.go:      policyv1 "k8s.io/api/policy/v1beta1"
cluster-autoscaler/core/scaledown/legacy/legacy.go:     policyv1 "k8s.io/api/policy/v1beta1"
cluster-autoscaler/core/scaledown/legacy/wrapper.go:    policyv1 "k8s.io/api/policy/v1beta1"
cluster-autoscaler/core/scaledown/scaledown.go: policyv1 "k8s.io/api/policy/v1beta1"
cluster-autoscaler/core/static_autoscaler_test.go:      policyv1 "k8s.io/api/policy/v1beta1"
cluster-autoscaler/utils/drain/drain.go:        policyv1 "k8s.io/api/policy/v1beta1"
cluster-autoscaler/utils/drain/drain_test.go:   policyv1 "k8s.io/api/policy/v1beta1"
cluster-autoscaler/utils/kubernetes/listers.go: policyv1 "k8s.io/api/policy/v1beta1"
cluster-autoscaler/utils/kubernetes/listers.go: v1policylister "k8s.io/client-go/listers/policy/v1beta1"

The main change from v1beta1 to v1 involves spec.selector; I don't know whether that's relevant to the autoscaler use case or not.
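
A minimal sketch of the import-level migration being discussed, assuming the drain code mainly constructs Eviction objects and lists PodDisruptionBudgets; the aliases mirror the grep output above, but this is illustrative, not the autoscaler's actual patch.

~~~
// Illustrative only: moving eviction/PDB handling from policy/v1beta1 to
// policy/v1. Note that spec.selector semantics differ between the versions,
// as mentioned above.
package sketch

import (
	policyv1 "k8s.io/api/policy/v1" // was: "k8s.io/api/policy/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
	// was: v1policylister "k8s.io/client-go/listers/policy/v1beta1"
	policylisters "k8s.io/client-go/listers/policy/v1"
)

func newEviction(namespace, name string) *policyv1.Eviction {
	return &policyv1.Eviction{
		ObjectMeta: metav1.ObjectMeta{Namespace: namespace, Name: name},
	}
}

func pdbsInNamespace(lister policylisters.PodDisruptionBudgetLister, namespace string) ([]*policyv1.PodDisruptionBudget, error) {
	return lister.PodDisruptionBudgets(namespace).List(labels.Everything())
}
~~~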

Do we run autoscaler CI? I was poking around a bit, but did not find a 4.12 periodic exercising the autoscaler that might have turned up this alert and issue.

Description of problem:

 

When creating a ProjectHelmChartRepository (with or without the form) and setting a display name (as `spec.name`), this value is not used in the developer catalog / Helm Charts catalog filter sidebar.

It shows (and watches) the display names of `HelmChartRepository` resources.

Version-Release number of selected component (if applicable):

4.11

How reproducible:

Always

Steps to Reproduce:

1. Switch to Developer Perspective
2. Navigate to Add > "Helm Chart repositories"
3. Enter "ibm-charts" as "Chart repository name"
4. Enter https://raw.githubusercontent.com/IBM/charts/master/repo/community/index.yaml as the URL
5. Press Create
6. Open the YAML editor and change the `spec.name` attribute to "IBM Charts"
7. Save the change
8. Navigate to Add > "Helm Chart" 

Actual results:

The filter navigation on the left side shows "Chart Repositories" with the entry "Ibm Chart", a camel-case version of the resource name.

Expected results:

It should show the `spec.name` value "IBM Charts" if defined, and fall back to the current implementation if the optional spec.name is not defined.

Additional info:

There is a bug discussing that the display name could not be entered directly, https://bugzilla.redhat.com/show_bug.cgi?id=2106366. This bug here is only about the catalog output.

 

Since 4.11, OCP comes with an OperatorHub definition which declares a capability
and enables all catalog sources. For OKD we want to enable just community-operators,
as users may not have a Red Hat pull secret set.
This commit ensures that the OKD version of the marketplace operator gets
its own OperatorHub manifest with a custom set of operator catalogs enabled.

This is a clone of issue OCPBUGS-1704. The following is the description of the original issue:

Description of problem:

According to the OCP 4.11 doc (https://docs.openshift.com/container-platform/4.11/installing/installing_gcp/installing-gcp-account.html#installation-gcp-enabling-api-services_installing-gcp-account), the Service Usage API (serviceusage.googleapis.com) is an optional API service to be enabled. However, the installation cannot succeed if this API is disabled.

Version-Release number of selected component (if applicable):

4.12.0-0.nightly-2022-09-25-071630

How reproducible:

Always, if the Service Usage API is disabled in the GCP project.

Steps to Reproduce:

1. Make sure the Service Usage API (serviceusage.googleapis.com) is disabled in the GCP project.
2. Try IPI installation in the GCP project. 

Actual results:

The installation eventually fails, without any worker machines launched.

Expected results:

Installation should succeed, or the OCP doc should be updated.

Additional info:

Please see the attached must-gather logs (http://virt-openshift-05.lab.eng.nay.redhat.com/jiwei/jiwei-0926-03-cnxn5/) and the sanity check results.
FYI: if the API is enabled, and nothing else is changed, the installation succeeds.

Description of problem:

The oc new-app command using a private Git repository no longer works with oc v4.10. Specifically, the private Git repository authenticates over SSH, and the issue occurs when no image stream is specified in the command, so the language detection step is required. Here is an example command:

oc new-app git@github.com:scottishkiwi/test-oc-newapp.git --source-secret github-repo-key --name test-app

Version-Release number of selected component (if applicable):
oc v4.10

How reproducible:

Easily reproducible.

Steps to Reproduce:

1. Download v4.10 of the oc tool and add it to the local executable path as 'oc':
https://access.redhat.com/downloads/content/290/ver=4.10/rhel---8/4.10.10/x86_64/product-software

➜ ~ oc version
Client Version: 4.10.9

2. Setup a private Github repository

3. Add SSH public key as a deploy key to the private Github repository

4. Push some empty test file like index.php to the private repository (used for language detection)

5. Create a new OpenShift project:
➜ ~ oc new-project test-oc-newapp

6. Create a secret to hold the private key of the SSH key pair
➜ ~ oc create secret generic github-repo-key --from-file=ssh-privatekey=/Users/daniel/test/github-repo --type=kubernetes.io/ssh-auth
secret/github-repo-key created

7. Enable access to the secret from the builder service account:
➜ ~ oc secrets link builder github-repo-key

8. Create a new application using the source secret:
oc new-app git@github.com:scottishkiwi/test-oc-newapp.git --source-secret github-repo-key --name test-app

Actual results:

➜ ~ oc new-app git@github.com:scottishkiwi/test-oc-newapp.git --source-secret github-repo-key --name test-app
warning: Cannot check if git requires authentication.
error: local file access failed with: stat git@github.com:scottishkiwi/test-oc-newapp.git: no such file or directory
error: unable to locate any images in image streams, templates loaded in accessible projects, template files, local docker images with name "git@github.com:scottishkiwi/test-oc-newapp.git"

Argument 'git@github.com:scottishkiwi/test-oc-newapp.git' was classified as an image, image~source, or loaded template reference.

The 'oc new-app' command will match arguments to the following types:

1. Images tagged into image streams in the current project or the 'openshift' project

  • if you don't specify a tag, we'll add ':latest'
    2. Images in the container storage, on remote registries, or on the local container engine
    3. Templates in the current project or the 'openshift' project
    4. Git repository URLs or local paths that point to Git repositories

--allow-missing-images can be used to point to an image that does not exist yet.

Expected results (with oc v4.8):

➜ ~ oc-4.8 version
Client Version: 4.8.37

➜ oc-4.8 new-app git@github.com:scottishkiwi/test-oc-newapp.git --source-secret github-repo-key --name test-app
warning: Cannot check if git requires authentication.
--> Found image 22f1bf3 (4 weeks old) in image stream "openshift/php" under tag "7.4-ubi8" for "php"

Apache 2.4 with PHP 7.4
-----------------------
PHP 7.4 available as container is a base platform for building and running various PHP 7.4 applications and frameworks. PHP is an HTML-embedded scripting language. PHP attempts to make it easy for developers to write dynamically generated web pages. PHP also offers built-in database integration for several commercial and non-commercial database management systems, so writing a database-enabled webpage with PHP is fairly simple. The most common use of PHP coding is probably as a replacement for CGI scripts.

Tags: builder, php, php74, php-74

  • The source repository appears to match: php
  • A source build using source code from git@github.com:scottishkiwi/test-oc-newapp.git will be created
  • The resulting image will be pushed to image stream tag "test-app:latest"
  • Use 'oc start-build' to trigger a new build
  • WARNING: this source repository may require credentials.
    Create a secret with your git credentials and use 'oc set build-secret' to assign it to the build config.

--> Creating resources ...
imagestream.image.openshift.io "test-app" created
buildconfig.build.openshift.io "test-app" created
deployment.apps "test-app" created
service "test-app" created
--> Success
Build scheduled, use 'oc logs -f buildconfig/test-app' to track its progress.
Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
'oc expose service/test-app'
Run 'oc status' to view your app.

Additional info:

Also tested with oc v4.9 and works as expected:

➜ ~ oc-4.9 version
Client Version: 4.9.29

➜ ~ oc-4.9 new-app git@github.com:scottishkiwi/test-oc-newapp.git --source-secret github-repo-key --name test-app
warning: Cannot check if git requires authentication.
--> Found image 22f1bf3 (4 weeks old) in image stream "openshift/php" under tag "7.4-ubi8" for "php"

Apache 2.4 with PHP 7.4
-----------------------
PHP 7.4 available as container is a base platform for building and running various PHP 7.4 applications and frameworks. PHP is an HTML-embedded scripting language. PHP attempts to make it easy for developers to write dynamically generated web pages. PHP also offers built-in database integration for several commercial and non-commercial database management systems, so writing a database-enabled webpage with PHP is fairly simple. The most common use of PHP coding is probably as a replacement for CGI scripts.

Tags: builder, php, php74, php-74

  • The source repository appears to match: php
  • A source build using source code from git@github.com:scottishkiwi/test-oc-newapp.git will be created
  • The resulting image will be pushed to image stream tag "test-app:latest"
  • Use 'oc start-build' to trigger a new build
  • WARNING: this source repository may require credentials.
    Create a secret with your git credentials and use 'oc set build-secret' to assign it to the build config.

--> Creating resources ...
buildconfig.build.openshift.io "test-app" created
deployment.apps "test-app" created
service "test-app" created
--> Success
Build scheduled, use 'oc logs -f buildconfig/test-app' to track its progress.
Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
'oc expose service/test-app'
Run 'oc status' to view your app.
➜ ~ oc status
In project dan-test-oc-newapp on server https://api.dsquirre.2b7w.p1.openshiftapps.com:6443

svc/test-app - 172.30.238.75 ports 8080, 8443
deployment/test-app deploys istag/test-app:latest <-
bc/test-app source builds git@github.com:scottishkiwi/test-oc-newapp.git on openshift/php:7.4-ubi8
deployment #3 running for 38 minutes - 1 pod
deployment #2 deployed 38 minutes ago
deployment #1 deployed 38 minutes ago

1 info identified, use 'oc status --suggest' to see details.